Many people use the words “accuracy,” “precision,” and “resolution” as if they were synonyms, even though they mean very different things. Datasheets for products as diverse as test & measurement systems and semiconductors use these terms interchangeably. This confusion is becoming a more serious problem in power electronics systems.
Newer VLSI devices have core and I/O voltages that must be changed dynamically to save power in various modes. Some of these microcontrollers, as well as other VLSI devices, can have five or six different modes, each requiring a different voltage. Moreover, the voltages often are specified to tight tolerances, such as 1% accuracy over temperature at, say, 1.05 V in one mode, switching to 0.95 V in another.
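To see how little room a ±1% spec leaves, here is a quick sketch that computes the allowed voltage windows for two hypothetical modes (the mode names and nominal voltages are illustrative, matching the figures above):

```python
# Illustrative sketch: the tolerance windows implied by a +/-1% spec
# for two dynamic voltage modes. Mode names are invented for this example.
modes = {"run": 1.05, "low_power": 0.95}  # nominal volts
tolerance = 0.01                          # +/-1% over temperature

for name, nominal in modes.items():
    lo = nominal * (1 - tolerance)
    hi = nominal * (1 + tolerance)
    print(f"{name}: {lo:.4f} V to {hi:.4f} V (+/-{nominal * tolerance * 1000:.1f} mV)")
```

At 1.05 V, the entire error budget, including regulator setpoint error, load regulation, ripple, and drift over temperature, is about ±10.5 mV.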
When system specifications require dynamic voltage modes at high accuracy, what do those datasheets mean? The VLSI device manufacturer really means that under any circumstances you could encounter, the voltage must be “x” volts within a percent over temperature. So before we consider delivering the power to the processor, we need to consider the difference between accuracy, precision, and resolution.
Accuracy is the closeness of a measurement to a known and accepted standard of reference, such as a NIST-traceable standard; it is often expressed as the variation or uncertainty of the measured value relative to that standard. Precision is the fineness to which an instrument can be read repeatedly and reliably; simply stated, it equals repeatability or reproducibility. Finally, resolution is the least discernable or smallest detectable change in the quantity, such as the least significant digit or least significant bit.
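To put numbers on these three definitions, consider a short sketch that evaluates a batch of repeated readings of a known reference (the readings are invented for illustration, simulating a meter with a small offset error):

```python
import statistics

# Invented example: five repeated readings of a known 1.000-V reference
# taken with a meter that has a +4-mV offset error and 1-mV resolution.
reference = 1.000   # known standard, e.g. traceable to NIST
resolution = 0.001  # smallest displayable step: 1 mV
readings = [1.004, 1.003, 1.004, 1.005, 1.004]

mean = statistics.mean(readings)
accuracy_error = abs(mean - reference)  # closeness to the standard
precision = statistics.stdev(readings)  # spread, i.e. repeatability

print(f"accuracy error: {accuracy_error * 1000:.1f} mV")  # offset from truth
print(f"precision (std dev): {precision * 1000:.2f} mV")  # reading-to-reading spread
print(f"resolution: {resolution * 1000:.1f} mV")          # smallest step
```

Here the meter is quite precise (readings cluster within about a millivolt of each other) but not very accurate (they all sit roughly 4 mV from the true value), and the resolution is what limits how finely either property can be observed.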
OUT ON THE RANGE
Let’s say you have a rifle with a telescopic sight. When you shoot with it, you get a scattered pattern (Fig. 1). That’s bad accuracy, bad precision, and bad resolution. So, you decide there’s something wrong with your optics and sighting. You get better optics, with a sharper image and greater magnification, and go shooting again.
This time, you have a much tighter distribution, or better precision. But on average, you’re just as far from the bull’s eye, so you still have poor accuracy (Fig. 2). The real problem wasn’t that the scope did a poor job of showing the target, which would imply poor resolution. Rather, the scope was sighted incorrectly.
The resolution is fine, as you can ascertain that you are off the mark by just a couple of rings. You have greatly improved the precision, but the accuracy didn’t get any better. In other words, the repeatability from shot to shot (precision) is much better. But the “correctness” (accuracy) of the shots, or their distance from the bull’s eye, did not improve at all. Resolution can be improved, then, by adding more granularity or more rings to see how far we are off the mark.
We can improve precision without improving accuracy. Does it work the other way, too? Can we improve accuracy (Fig. 3) without improving precision? Yes, we can. If we had just aligned the scope properly instead of working on its optics, we could have a pattern that was scattered yet centered on the bull’s eye. There’s no improvement in precision, but plenty of improved accuracy. Finally, you would get high accuracy (Fig. 4) with high precision by working on both the alignment and the optics.
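The independence of these two properties is easy to show numerically. In this sketch (the shot offsets are invented, reduced to one dimension for simplicity), accuracy corresponds to the average offset from the bull’s eye, while precision corresponds to the spread from shot to shot:

```python
import statistics

# Invented one-dimensional shot offsets from the bull's eye, in centimeters.
tight_but_off_center = [5.1, 4.9, 5.0, 5.2, 4.8]    # good optics, bad alignment
scattered_but_centered = [-2.0, 1.5, 0.5, -1.0, 1.0]  # bad optics, good alignment

for name, shots in [("precise but inaccurate", tight_but_off_center),
                    ("imprecise but accurate", scattered_but_centered)]:
    bias = statistics.mean(shots)     # accuracy: average offset from center
    spread = statistics.stdev(shots)  # precision: shot-to-shot repeatability
    print(f"{name}: bias {bias:+.2f} cm, spread {spread:.2f} cm")
```

The first group has a small spread but a large bias; the second has zero bias but a large spread. Fixing one number does nothing for the other, which is exactly the point of Figs. 2 and 3.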
Resolution would mean we could expand or magnify the center of the bull’s eye with even more rings, to the point where we could see whether we were simply putting rounds through the same hole over and over again or determine just how large the shot-to-shot variations are. If we expand the resolution, we might be able to increase the precision and accuracy. But without resolution, we’re flying blind. High accuracy and high precision without resolution aren’t useful, and high resolution without good accuracy and precision is wasted. See how these three factors work together to bring a clear, differentiated meaning to measurement effectiveness?
BACK TO ELECTRONICS
Now consider the electronics equivalent: measuring a 1-V dc signal with a meter. The accuracy would indicate how close the displayed value is to a traceable reference. Would 1 V actually be displayed as 1 V, whether the display is digital or analog?
Meanwhile, the resolution would indicate the least discernable increment of a volt we can detect and display if we apply a 1-V source. A reading of 1.000000 V, for example, implies 1-µV resolution. As for precision, if we apply exactly 1 V over and over to a meter that can resolve 1 µV, a precise instrument would return the same reading every time rather than wandering between 0.999999 V and 1.000001 V. Assuming we have sufficient accuracy and good resolution, the precision would be good as well.
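Resolution is, in effect, a quantization of what the instrument can report. A minimal sketch of that idea (the `display` function is hypothetical, not any particular meter’s behavior) rounds a reading to the nearest displayable step:

```python
# Hypothetical model of a display with finite resolution: any reading is
# rounded to the nearest multiple of the smallest step (1 uV, as in the text).
def display(voltage, resolution=1e-6):
    """Round a voltage to the nearest displayable increment."""
    steps = round(voltage / resolution)
    return steps * resolution

print(f"{display(1.0000004):.6f} V")  # below half a step: rounds down
print(f"{display(1.0000006):.6f} V")  # above half a step: rounds up
```

Two inputs that differ by less than the resolution can produce the same displayed value, which is why no amount of accuracy or precision is observable below the resolution floor.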
So, accuracy, precision, and resolution are three very different factors. You need to consider them separately when it’s time to review components and instruments as you’re making decisions about what you’re specifying and measuring. More importantly, you need to ask questions about accuracy, precision, and resolution and make sure your colleagues are clear about using these terms properly.