Understand The Tradeoffs Of Increasing Resolution By Averaging

Jan. 11, 2012
How to use Allan Variance to optimize sample size when averaging analog-to-digital converter readings.

Many sensor applications measure a dc signal that occasionally changes, such as weight and temperature. The repeatability or stability of the measurement results is of the utmost importance in many of these applications.

Ideally, a fixed dc input to an analog-to-digital converter (ADC) should result in the same output code for every conversion. But especially with a very precise ADC, you should expect to see a range of output codes for a given input voltage. This is a result of circuit noise within the ADC, as well as whatever noise might be present in the input signal.

If you apply a dc signal to a high-resolution ADC and record several thousand readings, the result can be a distribution of codes (Fig. 1). This noise leads to uncertainty in the measurement results.

Fig 1. Ideally, a data converter with a dc input would output only one code. This graph shows the frequency of occurrence (a histogram) of codes output by a typical 16-bit ADC with a dc input. Several codes appear due to internal noise in the ADC, as well as any noise present in the input signal.

You would think that high resolution would give you more certainty. But the higher resolution also shows you the effects of whatever noise is in the system, potentially reducing the certainty of the measurement.

Averaging the signal can help reduce the effects of noise on the dc measurement results. The process of averaging is well known: the numbers (successive samples) to be averaged are summed and divided by the number of them (N):

average = (x1 + x2 + … + xN)/N
This will decrease the output data rate by a factor of N and increase the settling time of the measurement system. These tradeoffs are often an acceptable price to pay for the increased resolution and stability of the results.
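As an illustration of the arithmetic (not code from the article), here is a minimal Python sketch of block averaging; the function name, the simulated readings, and the noise level are assumptions chosen only for demonstration.

import numpy as np

def block_average(adc_codes, n):
    # Average non-overlapping groups of n samples. The output data rate
    # drops by a factor of n; leftover samples that don't fill a full
    # group are discarded.
    usable = (len(adc_codes) // n) * n
    return adc_codes[:usable].reshape(-1, n).mean(axis=1)

# Example (simulated data): 30,000 noisy readings averaged 100 at a time
# yields 300 output values with visibly less scatter.
readings = 32768 + np.random.default_rng(0).normal(0.0, 3.0, 30_000)
averaged = block_average(readings, 100)
print(len(averaged), readings.std(), averaged.std())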

The more successive samples of a constant signal (plus its uncorrelated noise) you include in the average, the more the noise component is reduced. If the signal is dc and the noise component is random, the signal-to-noise ratio (SNR) improves with each successive sample averaged.

In fact, it can be shown that the improvement is proportional to the square root of the number of samples in the average. The standard deviation of the average of N noisy samples of the same signal is the standard deviation of the original signal divided by the square root of N:

σ(average) = σ/√N
Because the noise reduces by a factor of the square root of the number of averages, there is a diminishing return associated with more averaging. Furthermore, this assumes that each value of the actual signal used in the average is the same.
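A quick numerical check (again a sketch using simulated data, not the article's data set) shows the square-root-of-N rule: the standard deviation of N-sample averages of white noise tracks the single-sample standard deviation divided by √N.

import numpy as np

rng = np.random.default_rng(0)
sigma = 3.0                                  # single-sample noise, in codes
samples = rng.normal(0.0, sigma, 1_000_000)  # pure white noise, no drift

for n in (4, 16, 64, 256):
    usable = (len(samples) // n) * n
    avgs = samples[:usable].reshape(-1, n).mean(axis=1)
    print(n, round(avgs.std(), 4), round(sigma / np.sqrt(n), 4))  # measured vs. predicted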

In reality, this is not the case, as no real signal is exactly a perfect dc signal. Any quantity we measure will exhibit some level of slow drift, and this will appear in averages over time.

The Allan Variance

If it takes a long time to acquire the necessary number of samples, the data might drift during that time. For a given combination of noise and drift, there is a maximum number of samples beyond which averaging is no longer beneficial. A statistical analysis tool called the Allan Variance lets you determine that number.

Consider the data obtained from a data converter shown in Figure 2. This is 30,000 data points representing nine minutes of captured data. Simply by looking at the data set, you can probably discern a slight drift over that time period.

To evaluate the effectiveness of averaging, the Allan Variance is used. It takes sets of longer and longer averages of the input data and measures the resultant noise (the variance) of each set, as sketched in the code that follows the table:

The Allan Variance In Averaging
Input data averages   Allan Variance array
1+2, 3+4, 5+6, 7+8, . . .     Variance (two-point averages)
1+2+3, 4+5+6, 7+8+9, . . .     Variance (three-point averages)
1+2+3+4, 5+6+7+8, . . .     Variance (four-point averages)
1+2+3+4+5, 6+7+8+9+10, . . .      Variance (five-point averages)
…and so on.
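The sketch below implements the procedure in the table, with two assumptions: the readings are already available as a one-dimensional NumPy array (the file name adc_readings.txt is hypothetical), and the variance measure is the standard non-overlapping Allan variance, i.e., half the mean squared difference between successive block averages.

import numpy as np

def allan_variance(data, m):
    # Allan variance for an averaging length of m samples: form non-overlapping
    # m-point averages, then take half the mean squared difference between
    # successive averages.
    usable = (len(data) // m) * m
    block_means = data[:usable].reshape(-1, m).mean(axis=1)
    return 0.5 * np.mean(np.diff(block_means) ** 2)

data = np.loadtxt("adc_readings.txt")   # hypothetical capture of ADC readings
# Evaluate longer and longer averages, roughly log-spaced for a log-log plot
lengths = np.unique(np.logspace(0, np.log10(len(data) // 10), 50).astype(int))
avar = [allan_variance(data, m) for m in lengths]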

The Allan Variance shows the change in noise as the number of samples used in the average is increased. If the data is not stable, the results for larger numbers of samples used in the averages will not show the desired improvement in noise.

The Allan Variance plot indicates the optimum number of samples to average. Increasing the number of samples above a certain point doesn’t lead to a further reduction of noise. This effect is caused by the fact that using more points in the average starts to include the data drift, and this drift becomes more of a factor than the higher-frequency noise.

The results are plotted on a log-log plot where the optimum number of samples to average for that set of data becomes immediately obvious (Fig. 3). There is no benefit to averaging more than about 500 samples of the data from Figure 2. Beyond roughly 500 points per average, the data drift starts to dominate and degrades the noise performance.
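Continuing the sketch above (same assumed arrays), the optimum averaging length can also be read off programmatically as the minimum of the Allan variance curve, which is the point the log-log plot makes obvious:

best_m = lengths[int(np.argmin(avar))]   # averaging length with the lowest Allan variance
print("optimum number of samples per average:", best_m)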

Figure 4 shows a data set similar to that in Figure 2. Although the random noise is similar in amplitude, there isn't any observable drift. Applying the same Allan Variance technique, we would expect this data to benefit from a larger number of samples per average (Fig. 5). This data shows that averaging up to about 2000 samples is beneficial.

While the best way to combat noise is to take care that it doesn't get into your signal in the first place, averaging can reduce the noise by a factor of the square root of the number of samples averaged. Be aware, though, that this works only as long as the signal isn't drifting.

From the data sets shown here, the low-drift data set benefits from longer averaging, reducing the variance by almost an order of magnitude compared with the set that includes some drift. The Allan Variance can help you see that, but fixing the drift is a problem you'll have to address.

References

1. Allan, D.W., "Allan Variance," www.allanstime.com/AllanVariance/

2. Downs, R., “Signal Chain Basics (Part 48): Implementing Averaging Filters,” PlanetAnalog, December 5, 2010

3. Learn more about ADCs at www.ti.com/adc-ca
