An expert viewpoint brought to Electronic Design by Agilent Technologies, Inc.
Today's data-transfer schemes are reaching amazing speeds in the gigabit-per-second range, but nothing in life comes for free. In this case, although we're accustomed to treating such schemes as digital circuits, at these rates we must also look at them as analog circuits. We must start dealing with a whole new set of parameters to specify, test and verify a system's proper operation.
One of the most important parameters has become jitter, which is the misalignment of the significant edges in a sequence of data bits from their ideal time positions. Large values can cause transmission errors and have a major impact on a system's bit error ratio (BER), so engineers must be able to measure and quantify it.
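The deviation of edges from their ideal positions is commonly captured as time-interval error (TIE). As a minimal sketch (the function name and the assumption that edge timestamps are already available from an instrument are ours), TIE can be computed by snapping each measured edge to the nearest point on an ideal unit-interval grid:

```python
import numpy as np

def tie_jitter(edge_times, unit_interval):
    """Time-interval error: deviation of each measured edge from its
    ideal position on a perfect clock grid (a common jitter measure)."""
    edge_times = np.asarray(edge_times, dtype=float)
    # Ideal grid assumed here: start at the first edge, step by one UI.
    n = np.round((edge_times - edge_times[0]) / unit_interval)
    ideal = edge_times[0] + n * unit_interval
    tie = edge_times - ideal
    return tie, tie.max() - tie.min()  # per-edge TIE and peak-to-peak jitter

# Example: a 1-Gb/s stream (UI = 1 ns) whose third edge arrives 50 ps late.
tie, pk_pk = tie_jitter([0.0e-9, 1.0e-9, 2.05e-9, 3.0e-9], 1.0e-9)
```

Here the peak-to-peak jitter works out to 50 ps; in practice the ideal grid comes from a recovered or reference clock rather than the first edge.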
However, you can take the same system, measure jitter with different setups from a handful of test suppliers, and potentially get many different results. Which one should you believe? Multiple techniques exist depending on which aspect of jitter you want to measure, but there's no "silver bullet" that covers every measurement case. And because there are no well-established reference standards for jitter measurement, do you really know how accurate your measurements are?
There are several aspects to consider. First, to compare measurement accuracy, every instrumentation supplier should work from the same source datastream, but nobody has yet defined what this "standard" jitter reference signal should look like.
Next you have to select which type of equipment to use and what kind of display gives the information you want: an eye diagram with a DSO (digital storage oscilloscope) or a DCA (digital communications analyzer); a bathtub curve with a BER tester; a histogram to show variances in cycle-to-cycle jitter, time-interval error (TIE), rise time or duty cycle with a DCA or DSO; a simple view of the signal over time with a fast DSO; or perhaps an FFT of that time-domain trace.
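As one illustration of the last two displays, an FFT of a TIE record separates periodic jitter (a spectral line) from random jitter (a broadband floor). This is a hedged sketch on synthetic data, not any instrument's algorithm; the signal parameters are invented for the example:

```python
import numpy as np

rng = np.random.default_rng(0)
n_edges = 4096
k = np.arange(n_edges)

# Synthetic TIE record: 2-ps-RMS Gaussian random jitter plus 10 ps of
# sinusoidal periodic jitter at 1/50th of the edge rate.
tie = rng.normal(0, 2e-12, n_edges) + 10e-12 * np.sin(2 * np.pi * k / 50)

spectrum = np.abs(np.fft.rfft(tie)) / n_edges
# The periodic component stands out as a line near bin 4096/50 ~ 82,
# well above the random-jitter noise floor.
peak_bin = int(np.argmax(spectrum[1:])) + 1
```

A histogram of the same TIE record would show the Gaussian spread directly, which is why the two views are complementary.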
When you need to decompose total jitter into its subcomponents, each of these techniques has strengths and weaknesses. You must also recognize that the design of the measurement algorithms in your instruments has an impact on results, and each can differ.
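One widely used decomposition is the dual-Dirac model, which extrapolates total jitter at a target BER from a random (Gaussian, RMS) component and a deterministic (peak-to-peak) component. A minimal sketch, assuming the Rj and Dj values have already been extracted and ignoring refinements such as transition density:

```python
from statistics import NormalDist

def total_jitter(rj_rms, dj_pp, ber=1e-12):
    """Dual-Dirac estimate of total jitter at a target BER:
    Tj(BER) = Dj + 2 * Q(BER) * Rj, where Q is the Gaussian tail quantile."""
    q = -NormalDist().inv_cdf(ber)  # Q is roughly 7.03 at BER = 1e-12
    return dj_pp + 2 * q * rj_rms

# Example (invented values): 1.5 ps RMS random jitter, 20 ps deterministic
# jitter -> roughly 41 ps of total jitter at a 1e-12 BER.
tj = total_jitter(rj_rms=1.5e-12, dj_pp=20e-12)
```

How an instrument splits measured jitter into Rj and Dj in the first place is exactly where the algorithm differences mentioned above creep in, so two instruments can report different Tj for the same signal.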
Before throwing your hands up in frustration, consider that the telecom and enterprise arenas have taken the trouble to document jitter specifications and measurements through standards bodies using well-known instrumentation. It is possible.
Unfortunately, the problem isn't nearly as well defined for high-speed computer buses and interconnects where the many new schemes have little in common when it comes to specifying and measuring jitter. Amazingly, even the semantics are different from those in the telecom space. No universally recognized jitter standard exists against which we can compare results, so improvements in measurement techniques are difficult to compare with what's already available.
Recognizing this problem, a year and a half ago Agilent held an internal meeting known as the JitterFest, where engineers and researchers from various divisions along with application specialists met for a week to analyze the problem and suggest some solutions. It took the group six months to digest their findings and issue a report that basically concluded, "We know lots more about jitter, but we recognize that we need a 'golden' jitter source to accurately compare measurement techniques."
After spending more time analyzing the problem, the engineers met again earlier this year at JitterFest II and ended up with what they consider a rock-solid jitter reference source and a better understanding of the different measurement techniques. Comments Karl Kachigan, Signal Integrity Marketing Engineer at Agilent, "We assembled a jitter reference that we believe is traceable, but it isn't something that would be viable as a commercial product because it costs half a million dollars."
Based on the knowledge gained at the JitterFests, Agilent has reviewed its work with some people at NIST and is encouraging that agency to create a jitter reference source that the entire industry can use to compare and improve measurement techniques.
The bottom line is that while Agilent has been motivated to refine its techniques, other test companies might find it extremely difficult to do likewise until a jitter standard exists. It would be wonderful if an agency such as NIST were able and willing to set this industry standard. Your only solution in the short term is to thoroughly understand the various jitter-measurement viewpoints and determine which measurement techniques are best for your application.
To learn more about jitter and how to measure it with today's instruments, check out the many resources at: www.agilent.com/find/curveOl-sept04