Electronic Design
Characterize Jitter Measurements On 10G Signals

With the continued quest for ever-higher performance, the unit interval (UI) for a data valid window continues to shrink. At a 1-Gbit/s rate, the UI is 1000 ps, shrinking to 200 ps at 5 Gbits/s and a minute 100 ps at 10 Gbits/s. With a 100-ps data valid window, very small amounts of Tj (total jitter) can be tolerated before a system won’t consistently transmit and receive data reliably.
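The UI arithmetic above follows directly from the data rate; a quick sketch (the helper name and printed rates are just for illustration):

```python
def unit_interval_ps(rate_gbps):
    """Unit interval (the data valid window) in picoseconds for a given data rate."""
    return 1000.0 / rate_gbps

for rate in (1, 5, 10):
    print(f"{rate} Gbits/s -> UI = {unit_interval_ps(rate):.0f} ps")
```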

At these speeds, Tj results will need to be substantially less than 100 ps, with Rj (random jitter) in the sub-1-ps range. What techniques and tools can be used to characterize timing at these femtosecond levels?

Fundamentally, high-speed I/O design is more challenging than ever before as speeds increase. Many of the latest standards require a bit error rate (BER) of 10⁻¹² at the physical layer. As the UI gets smaller, maintaining that error rate becomes increasingly difficult.

This ultimately means that device-level jitter has to continue shrinking. For example, SuperSpeed USB 3.0 at 5 Gbits/s specifies Rj at 2.42 ps RMS, while the Small Form-Factor Pluggable (SFP) standard at 10 Gbits/s specifies Tj at 28 ps with an Rj requirement of approximately 1 ps.


Timing jitter is the unwelcome companion of all electrical systems that use voltage transitions to represent timing information. As signaling rates continue to climb and voltage swings shrink to conserve power, the jitter in a system becomes a significant percentage of the signaling interval.

Under these circumstances, jitter becomes a fundamental performance limit. The ability to characterize jitter is critical to successfully deploying high-speed Gen 3 systems that dependably meet their performance requirements.

On every clock, the data levels and rising and falling edges (Fig. 1) are presented at the D input as shown. This latching of data is the critical aspect of data communication. Tools, whether they are oscilloscopes or software simulations, show this as the eye diagram.

Also on every clock, the timing position of the edge (if any) contributes to the clock versus data delay statistical distribution. This displacement is known as jitter or time-interval error (TIE).

TIE is the measurement of a signal’s timing error relative to a known or recovered clock. In serial data applications, TIE is typically called jitter. TIE is important because it shows the cumulative effect that even a small amount of jitter can have over time. In an illustrative example, the TIE standard deviation (Fig. 2) is 9.6 ps with a clock edge every 1 ns.
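As a minimal sketch of the TIE calculation (the timestamps below are hypothetical, and a real tool would recover the reference clock rather than assume a fixed 1-ns period):

```python
import statistics

def tie_ps(edge_times_ps, clock_period_ps):
    """Time-interval error: each edge's deviation from its nearest ideal clock position."""
    return [t - round(t / clock_period_ps) * clock_period_ps for t in edge_times_ps]

# Hypothetical edge timestamps (ps) against a 1-ns reference clock
edges = [2.0, 998.5, 2003.1, 2996.4, 4001.2]
errors = tie_ps(edges, 1000.0)
tie_rms = statistics.pstdev(errors)  # standard deviation of the TIE distribution
```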

There are, however, other ways of measuring jitter on a single waveform including period jitter and cycle-to-cycle jitter. Period jitter is the measurement of a signal, generally a repeating signal, from one edge to another similar repeating edge. Common period measurement tools measure from the rising edge of a signal to the next rising edge.

With some data transmission methods, such as double-data-rate (DDR) memory, both the rising and falling edges are used to clock data bits. In this case, the measurement becomes a half-period measurement. After taking a significant sample of period measurements, the standard deviation and peak values can be resolved. This statistical information is the period jitter in the signal.

Cycle-to-cycle jitter is the application of simple arithmetic to the period measurements just taken. If timing information for two adjacent periods is known, the difference between them is the cycle-to-cycle change: period one minus period two. Again, after taking a significant sample of periods and measuring the difference between them, the standard deviation and peak values can be resolved. This statistical information is the cycle-to-cycle jitter.
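Both single-waveform measurements reduce to simple arithmetic on edge timestamps; a sketch with hypothetical values:

```python
import statistics

def periods(edge_times):
    """Edge-to-edge intervals, e.g. rising edge to the next rising edge."""
    return [b - a for a, b in zip(edge_times, edge_times[1:])]

def cycle_to_cycle(period_list):
    """Difference between adjacent periods: period one minus period two."""
    return [p1 - p2 for p1, p2 in zip(period_list, period_list[1:])]

edges_ps = [0.0, 1001.2, 1999.8, 3000.5, 3999.1]  # hypothetical timestamps (ps)
per = periods(edges_ps)
period_jitter_rms = statistics.pstdev(per)        # period jitter
c2c_peak = max(abs(d) for d in cycle_to_cycle(per))  # peak cycle-to-cycle jitter
```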


Separating jitter into its constituent components provides increased precision and insight into the root cause of BER performance. The jitter model (Fig. 3) most commonly used is based on a hierarchy. While there are other ways to analyze jitter, the Fibre Channel-Methodologies for Jitter and Signal Quality Specification (FC-MJSQ) T11 working group has endorsed this methodology, which is the most popular because it directly shows the components relevant to the BER performance.

In this hierarchy, the total jitter (Tj) is first separated into two categories, random jitter (Rj) and deterministic jitter (Dj). The deterministic jitter is further subdivided into several categories: periodic jitter (Pj, also sometimes called sinusoidal jitter or Sj), duty-cycle-dependent jitter (DCD), and data-dependent jitter (DDj, also known as inter-symbol interference, ISI). An additional category (bounded and uncorrelated jitter, or BUj) is sometimes used.
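This hierarchy underlies the widely used dual-Dirac estimate Tj(BER) = Dj + 2·Q(BER)·Rj, which ties the components to a target BER. A sketch assuming purely Gaussian Rj (the Dj and Rj inputs below are illustrative, chosen near the SFP numbers cited earlier):

```python
from statistics import NormalDist

def q_scale(ber):
    """One-sided Gaussian Q factor for a target bit error rate."""
    return NormalDist().inv_cdf(1.0 - ber)

def total_jitter_ps(dj_ps, rj_rms_ps, ber=1e-12):
    """Dual-Dirac estimate: Tj(BER) = Dj + 2 * Q(BER) * Rj."""
    return dj_ps + 2.0 * q_scale(ber) * rj_rms_ps

# At BER = 1e-12, Q is about 7.03, so 1 ps RMS of Rj alone
# consumes roughly 14 ps of a 100-ps UI at 10 Gbits/s.
tj = total_jitter_ps(dj_ps=14.0, rj_rms_ps=1.0)
```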

Measuring each of the components that make up Tj at high signaling rates requires instruments with a low noise floor, flat frequency response, a low jitter-measurement floor, and low trigger jitter. For instance, ON Semiconductor found that it required instruments with less than 200-fs RMS system jitter along with high bandwidth to characterize its high-speed ECL devices. The chip designer found that signal shifts of a few picoseconds, and even some in the femtosecond range, could disrupt transmit (TX) and receive (RX) performance.1

While most serial communication standards specify tolerances or limits for jitter, standards can be vague in their specifications or use different philosophies toward analyzing jitter. Standards documents tend to outline quantifiable jitter limits but may not offer much guidance toward determining which type is most critical in a given application. All forms of jitter have the potential to disrupt system BER, and different tools have different strengths in detecting them.


The most common instrument for acquiring and analyzing jitter is the real-time oscilloscope. Modern digitizing instruments have kept pace with increasing data rates and can be equipped with integrated software applications that perform detailed analysis of jitter and its components.

But the range of choices is not limited to real-time digitizing signal analyzers (DSAs) and digital phosphor oscilloscopes (DPOs). Other entirely different tools have their strengths, and some of their measurement capabilities overlap. These include bit-error-rate testers (BERTs), jitter analyzers, counter timers, and spectrum analyzers.

Because it is one of the most common measurement tools used in electronics research, development, and engineering, the real-time oscilloscope is likely to be the first line of defense when jitter issues need to be investigated. The DSA/DPO can address almost every kind of jitter measurement within reach of its bandwidth and resolution.

The DSA/DPO approach owes its jitter-measurement versatility to the fact that it can capture a very long time window for many operational cycles of the device under test (DUT). Because the oscilloscope’s sample memory preserves a long history of waveform activity, we can study attributes as varied as rise time, pulse width, and jitter of all kinds. Applicable specifications from high-end oscilloscopes capable of handling 10-Gbit/s data rates include:

  • A 20-GHz bandwidth
  • A low jitter noise floor of nominally 300 fs (300 × 10⁻¹⁵ s), which minimizes the oscilloscope’s contribution to the DUT jitter measurement
  • 8-bit acquisition, which provides dynamic range sufficient for most current serial standards and suits 16-level modulation schemes

An important part of the equation is a toolset that automates jitter measurements and analysis. Jitter measurement is a rarefied discipline, but one that lends itself to application-specific software solutions (assuming the oscilloscope platform supports such functions).

There are some applications whose demands exceed the capabilities of the real-time DSA/DPO. The instrument’s real-time bandwidth and resolution must be weighed against the DUT’s data rate and its harmonics. And, some forms of multi-level modulation may tax the instrument’s ability to distinguish between levels. In such instances, one of the other jitter measurement tools will be more appropriate.


The sampling oscilloscope brings wide bandwidth to the task of jitter measurement. It may be the only effective option when observing signals that have data rates up to 60 Gbits/s. Moreover, the sampling oscilloscope is appropriate when it is necessary to capture the harmonics of relatively “slow” signals.

Sampling oscilloscopes rely on repetitive input patterns to build up a waveform acquisition made up of samples taken over numerous cycles. Many types of serial devices offer diagnostic loops that can produce these repeating waveform streams, or an external data generator can be used as a driving source.

The sampling oscilloscope can be equipped with application-specific software packages for jitter/noise analysis that provide such jitter analysis capabilities as jitter separation, noise separation, and BER eye estimation.


Timing accuracy is the most important specification for single-shot timing measurements because it determines how close these measurements will be to the real values. It accounts for both the repeatability and resolution specifications.

Also, timing accuracy is based upon a number of factors, including sample interval, time-base accuracy, quantization error, interpolation error, amplifier vertical noise, and sample clock jitter. Each of these factors contributes to the timing error, and their combination yields the delta-time accuracy (DTA) specification. For high-end oscilloscopes, the DTA specification is similar to Equation 1, where:

A = input signal amplitude (V)
trm = 10% to 90% measured rise time (s)
N = input-referred noise (VRMS)
tj = short/medium term aperture uncertainty (sRMS)
TBA = timebase accuracy (2 ppm)
duration = delta-time measurement (s)

All of the above assumes an edge shape that results from a Gaussian filter response.

The details of a particular instrument’s specified DTA can be found in its manual. Generally, the specification means that for any edge-to-edge timing measurement, you can determine the accuracy of the result, guaranteed and traceable to NIST.

The equation above includes scale and signal amplitude, input noise, and other influencing factors. The full topic of DTA is too complex to cover in this discussion, though it should be considered when attempting to characterize timing systems at femtosecond levels.
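Equation 1 itself is instrument-specific and lives in the vendor's datasheet, but the structure of such an error budget can be illustrated: vertical noise refers to the time axis through the edge's slew rate, combines in quadrature with aperture jitter, and the timebase's ppm error scales with measurement duration. The sketch below is illustrative only, not any instrument's actual specification, and it omits the quantization and interpolation terms:

```python
import math

def delta_time_error_ps(a_v, trm_s, n_vrms, tj_srms, tba_ppm, duration_s):
    """Illustrative delta-time error budget, not a vendor's actual Equation 1.
    Quantization and interpolation error terms are omitted for brevity."""
    slew = 0.8 * a_v / trm_s        # approximate slew rate over the 10%-90% swing
    noise_as_time = n_vrms / slew   # vertical noise referred to the time axis
    random_part = math.sqrt(noise_as_time**2 + tj_srms**2)
    timebase_part = tba_ppm * 1e-6 * duration_s
    return (random_part + timebase_part) * 1e12

# With modest noise, 100-fs aperture jitter, and a 1-ns measurement,
# the budget here is dominated by the aperture-jitter term (about 0.1 ps).
err = delta_time_error_ps(a_v=1.0, trm_s=20e-12, n_vrms=0.001,
                          tj_srms=100e-15, tba_ppm=2, duration_s=1e-9)
```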


Measurement resolution defines the ability to reliably detect a measurement change. It should not be confused with measurement accuracy or even measurement repeatability.

With timing measurements, resolution is the ability to discern small changes in a signal’s timing, whether the change is intentional or a result of noise. Factors as fundamental as hardware-counter bit width, or even the counter’s electrical bandwidth, can limit timing resolution; so can something as obscure as the software that performs the mathematical averaging.

In hardware timers, like the typical time-interval analyzer (TIA, SIA), timing resolution is limited in hardware to hundreds of femtoseconds. If a hardware counter or its equivalent circuit is clocked at 5 GHz, it can’t detect a change any smaller than 0.2 ps. This is a physical limitation of the device.

Sample rate, interpolation accuracy, and software-based math libraries limit timing resolution in real-time oscilloscopes. Using sample rates of 50 Gsamples/s and sin(x)/x interpolation, resolution of tens of femtoseconds is possible. Because the resolution in this case is based on math libraries, the real resolving power extends into the sub-femtosecond range (0.0001 ps).

Resolution implies the ability to measure a very small change in timing. But this may not always be true. What happens when the change is smaller than the intrinsic measurement noise within the instrument? Thus, the overall system noise floor must be considered when measuring small amplitudes of noise or jitter. Simply knowing the system resolution is not sufficient to understand the true limit of resolution, accuracy, or overall capability.


Jitter noise floor (JNF) is the intrinsic instrument-noise portion of a jitter measurement. It sets the lower limit on detectable jitter; jitter amplitudes near the JNF become effectively unobservable. One method of verifying JNF is to measure a noise-free, perfectly timed signal. While perfect signals are rare, suitably good sources can be used to demonstrate jitter noise floors.

High-precision RF generators with low phase noise are commonly recommended for this testing. Another method uses a shorted transmission line so that the reflected pulse is unchanging, and the reflected pulse width is measured.

The JNF equation for a high-end oscilloscope is Equation 2, where:

A = input signal amplitude (V)
trm = 10% to 90% measured risetime (s)
N = input-referred noise (VRMS)
FSj = full-scale range of input
tj = short/medium term aperture uncertainty (sRMS)

All of the above again assumes an edge shape that results from a Gaussian filter response.

TIE is used to measure JNF because it includes any phase error in the signal, whether the error is high-frequency or low-frequency in nature, a single event or accumulated error. Further, with real-time instruments, the reference for the TIE method can be a calculated perfect clock. An example shows a very low TIE (Fig. 4) of 328 fs RMS on an oscillator using a DPO/DSA real-time oscilloscope.
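A "calculated perfect clock" can be as simple as a least-squares fit of edge times against edge index; the RMS residual is then the measured TIE. A minimal sketch of that idea (a production tool would instead use a specified clock-recovery method, such as a PLL model):

```python
import statistics

def rms_tie_vs_fitted_clock_ps(edge_times_ps):
    """Fit an ideal clock (least-squares line of edge time vs. edge index),
    then return the RMS deviation of each edge from that clock."""
    idx = range(len(edge_times_ps))
    mean_i = statistics.fmean(idx)
    mean_t = statistics.fmean(edge_times_ps)
    # slope = recovered clock period, intercept = recovered phase
    slope = sum((i - mean_i) * (t - mean_t) for i, t in zip(idx, edge_times_ps)) \
          / sum((i - mean_i) ** 2 for i in idx)
    intercept = mean_t - slope * mean_i
    residuals = [t - (intercept + slope * i) for i, t in zip(idx, edge_times_ps)]
    return statistics.pstdev(residuals)

# A perfectly periodic source reads essentially zero, which is why a clean
# reference signal can be used to expose an instrument's own noise floor.
```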

Another factor affecting JNF measurement is determining the frequency band of jitter noise to include in the results. All noise, including jitter, can have frequency components with wavelengths from kilometers to angstroms. When measuring JNF, limits on the included frequency range should also be stated. Typically, these numbers are representative of JNF for the longest record length and maximum sample rate. 

One of the top-performing FPGAs on the market today is the Altera Stratix IV, with a data rate of 11.3 Gbits/s. As shown in a test report (Fig. 5) generated from data collected using a high-performance sampling oscilloscope, Tj tested out at 22.18 ps with Rj at just 395 fs.


Jitter analysis is a challenge that grows in importance with the steady advance of system performance and data rates. As the UI for a data valid window gets smaller, Tj needs to fall below 100 ps with Rj under 1 ps. There are several approaches to making ultra-precise jitter measurements, each with unique strengths and limitations.

Real-time oscilloscopes can handle most jitter tests in conjunction with analysis applications that deliver both statistical and waveform information. When the signals in question are running at speeds beyond 20 Gbits/s, the sampling oscilloscope takes over for jitter measurements. For extremely precise jitter measurements, it is becoming increasingly necessary to pay close attention to a given instrument’s timing accuracy, resolution, and jitter noise floor.

1. Tektronix (n.d.) Case Study: ON Semiconductor; retrieved from www2.tek.com/cmsreplive/pmrep/4130/2006.
