Automated Precision Measurements

Product testing requires the acquisition of a number of data points that are then compared to a product specification. The challenge for the test engineer is to design an automated test system that can make the required measurements with an acceptable level of error. Such measurements are precision measurements.

The ability to quantify the errors in the test-system measurement is the factor that determines whether the measurement is a precision measurement. The errors may even be unacceptably large, but as long as they are quantified, the measurement is still a precision measurement.

System Design Process

The design of a test system to meet a given level of performance can be reduced to a systematic process. As Figure 1 shows, the design methodology becomes an iterative process of design, evaluation of uncertainties and the removal of system measurement errors until the desired performance is attained. This process was developed during the design of the HP 3458A Test System and has been used for many other test-system designs.

This process, however, is deceptively simple. If the test system must meet its design performance goals, then the measurement uncertainty must be determined accurately. To accomplish this, it is necessary to identify all of the sources of measurement error within the system, quantify them and eliminate them if possible. This is not an easy task.

Measurement Errors

Measurement errors fall into one of several categories: standards errors, systematic errors, calibration errors, drift errors, random errors and operator errors.1 Figure 2 shows a few of the more commonly encountered sources of error.

In listing the errors for a given test system, all possibilities, no matter how unlikely, must be considered. For example, ambient-temperature changes might have been determined, but what happens if the power-line voltage changes? A 10% change in line voltage could dramatically change the internal temperature in the equipment rack.

Also, what about the presence of people near the test system? Anyone who has attempted to make high-resistance measurements knows how easy it is for moving people to induce charge in test cables and cause erroneous readings to occur.

Even nearby radio transmitters can cause measurement errors. In one instance in our facility, a periodic noise failure was traced to a radio transmitter activated by the security team. In short, all sources of error must be considered.

Once the error sources are identified, quantify them. This may be as easy as reading a manufacturer’s data sheet to determine an accuracy specification, or it may require that other measurements and characterizations be performed. Often, as is the case with very high-performance test systems, it is necessary to build all, or part, of the test system to evaluate its performance.

It also is important to determine how to combine the error terms to determine the overall measurement uncertainty. Randomly occurring errors, such as noise and repeatability, should be added as the root-sum-square:

E_random = √(E1² + E2² + … + En²)

while systematic errors must be added linearly.
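As an illustrative sketch (not from the original article), the two combination rules can be expressed in a few lines of Python. The function name and the example error terms are hypothetical:

```python
import math

def combined_uncertainty(random_errors, systematic_errors):
    """Combine error terms: random errors (noise, repeatability) add as
    root-sum-square; systematic errors add linearly (worst case)."""
    rss = math.sqrt(sum(e ** 2 for e in random_errors))
    linear = sum(abs(e) for e in systematic_errors)
    return rss + linear

# Hypothetical error terms, in ppm
random_terms = [0.5, 0.3]       # e.g., noise, repeatability
systematic_terms = [1.0, 0.2]   # e.g., gain error, cable loss
total = combined_uncertainty(random_terms, systematic_terms)
```

Adding the random terms linearly instead would overstate the uncertainty, since independent random errors are unlikely to all reach their extremes at once.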

Reducing Measurement Errors

Once the potential measurement errors are identified and quantified, the next step is to determine how to remove them. There are two ways to accomplish this. The errors may be eliminated or compensated for in hardware or software. Any remaining errors are classed as system measurement uncertainties (Figure 3).

Experience has shown that about 80% of the test-system errors can be removed. Of those, 20% are removed through system-hardware design, while software and measurement methodologies account for the remaining 80%.

Real-World Examples

This process is illustrated by some of the design and performance criteria for an automated test system designed for the Hewlett-Packard 3458A Multimeter. One of the system requirements was to verify basic resistance accuracies of ± 2 ppm. No commercially available programmable resistance standard met the design requirements, so the test-system design process was used to find a workable solution.

As shown in Figure 4, the largest contributor to the uncertainty was temperature variations from the environment and power-line variations. With the sources of error identified, the solution was obvious—eliminate the temperature variations by controlling the equipment temperature and installing a power-line conditioner. This approach coupled with on-site calibration and software compensation for the remaining errors provided more than adequate measurement accuracy.
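A back-of-the-envelope sketch, using the numbers listed in Figure 4, shows why temperature dominated this budget. The worst-case linear addition of the two temperature swings is an assumption for illustration:

```python
# Error-budget sketch using the Figure 4 numbers (illustrative only).
tc_ppm_per_degC = 2.0   # source temperature coefficient, ppm/degC

# Uncontrolled: +/-3 degC floor swing plus +/-1.5 degC internal change
# from power-line variations, added linearly as a worst case.
uncontrolled_error_ppm = tc_ppm_per_degC * (3.0 + 1.5)   # far beyond the 2 ppm spec

# Controlled: rack temperature regulation and a power-line conditioner
# hold internal temperature changes to under 0.1 degC.
controlled_error_ppm = tc_ppm_per_degC * 0.1
```

With the environment uncontrolled, temperature alone could contribute several times the ± 2 ppm specification; controlling it brings this term well inside the budget before calibration and software compensation are even applied.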

DCV Linearity Example

Another system requirement was verifying basic DCV linearity of 0.1 ppm. Normally, this level of performance can only be verified in a standards lab with sources such as a Josephson Junction Array. However, by using the design process and rigorously removing as many sources of error as possible, an uncertainty of ± 0.03 ppm was achieved.

This level of performance was obtained only after a great deal of effort and many passes through the design-process loop. The greatest challenge was identifying all the sources of error. Figure 5 outlines the final implementation.

The largest reduction in measurement error was achieved by using a bootstrap technique in combination with a Golden 3458A, as illustrated in Figure 6. The bootstrap technique only requires the system source to be short-term stable. The actual reference for accuracy and linearity is a fully characterized Golden 3458A at the data points in question.

The Golden 3458A’s characterized linearity and gain errors become correction factors applied by the system software, which compensates for the source’s gain and linearity errors. The Golden 3458A was chosen as the reference standard because an evaluation of the source showed it to be less repeatable and less stable over time than the Golden 3458A.
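The software-compensation step can be sketched as follows. This is a minimal illustration of the idea, assuming the characterized errors are expressed in ppm; the function names are hypothetical, not from the actual system software:

```python
# Illustrative sketch of bootstrap correction (names are hypothetical).
# The Golden 3458A's gain and linearity errors at each data point were
# characterized in advance; software removes them so the corrected
# reading serves as the reference value.

def corrected_reference(golden_reading, gain_error_ppm, linearity_error_ppm):
    """Remove the Golden 3458A's characterized errors from its reading."""
    total_error_ppm = gain_error_ppm + linearity_error_ppm
    return golden_reading * (1.0 - total_error_ppm * 1e-6)

def dut_error_ppm(dut_reading, reference_value):
    """Device-under-test error relative to the corrected reference, in ppm."""
    return (dut_reading - reference_value) / reference_value * 1e6
```

Because both meters read the same source at nearly the same time, only the source’s short-term stability matters; its absolute accuracy and linearity drop out of the comparison.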

Summary

By treating test-system design as a process with clearly defined steps to identify and quantify measurement errors, you can achieve high degrees of accuracy and keep development costs low.

Reference

1. Coombs, C. F., Electronic Instrument Handbook, McGraw-Hill, 1995, Chapters 1 and 2.

About the Author

Bertram S. Kolts is a Manufacturing Test Engineer at Hewlett-Packard, a position he has held for 18 years. Before that, he spent eight years doing integrated circuit testing for HP. Mr. Kolts received a B.S.E.E. degree from Colorado State University. Hewlett-Packard, Loveland Manufacturing Center, 815 SW 14th St., Loveland, CO 80537, (970) 679-2988.

Figure 2.

Standards Errors

Uncertainties in primary and secondary standards

Systematic Errors

Gain and offset

Input/output impedances

Cable and connector losses

Calibration Errors

Nonlinearities

Environmental changes of temperature, humidity and pressure

Drift Errors

Changes of component values or instrument performance over time

Random Errors

Noise

Repeatability

Thermal Offsets

Power Line Variations

EMI and RFI

Operator Errors

Insufficient warm-up time

Use of wrong cables or connectors

Proximity of people to the test system, causing temperature changes or inducing charge in cables (for example, when making very high-resistance measurements)

Figure 3.

Elimination

Utilize more accurate test equipment

Controlled environment—temperature and humidity

Better grounding and shielding

Power-line conditioning

Measurement techniques

Compensation

Characterize the parameters in question and use software compensation to remove errors

Bootstrapping

Measurement Techniques

Uncertainty

Remaining errors that are not eliminated or compensated for

Uncertainty in determining all sources of error

Figure 4.

Problems

Product spec is ± 2 ppm

Source temperature coefficient is 2 ppm/°C

Manufacturing floor temperature range is ± 3°C

Power-line variations cause a ± 1.5°C change in source internal temperature

Measurements ≥ 10 MΩ exhibit >200 ppm of noise due to the movement of people near the test system

Solutions

Control the internal temperature of the equipment rack

Install a power-line conditioner

Calibrate the resistors in the equipment rack and compensate for the resulting errors in software

Double-shield resistance cables to reduce noise caused by movement of people

Results

Equipment rack temperature is held to ± 1°C

Source internal temperature changes with power-line variations and rack temperature changes reduced to <0.1°C

Noise in measurements ≥ 10 MΩ due to the environment <5 ppm

Uncertainty of resistance measurements reduced to <0.5 ppm for a 60-day period

Figure 5.

Problems

Need to verify DCV linearity of 0.1 ppm

Source has 0.2 ppm of noise

Source linearity is >0.1 ppm

Power-line variations cause a ± 1.5°C change in source internal temperature

Test system uses relay multiplexers that cause thermal offsets due to temperature gradients and coil heating

System test cables have thermal offsets due to temperature gradients

Solutions

Average readings to eliminate reading noise

Bootstrap the source with a Golden 3458A, whose linearity has been characterized on a Josephson Junction Array, and use software compensation to remove the linearity errors of the Golden 3458A

Use latching relays with pulsed coils to remove coil self-heating errors

Control the test-equipment rack temperature to further reduce temperature gradients

Install a power-line conditioner

Use beryllium-copper connectors for all test cables to reduce thermal offsets

Results

Equipment-rack temperature is held to ± 1°C

Source internal temperature changes with power-line variations and rack-temperature changes reduced to <0.1°C

Uncertainty of linearity measurements reduced to <0.03 ppm

Copyright 1997 Nelson Publishing Inc.

May 1997
