Although early USB 3.0 products have appeared in the marketplace, the mainstream switchover to SuperSpeed USB has yet to happen. Part of the problem is that USB 2.0 is truly ubiquitous and inexpensive to produce. High-bandwidth devices, such as video cameras and storage devices, have been the first to migrate to SuperSpeed USB. However, for now at least, cost considerations are limiting USB 3.0 implementation to higher-end products.
Besides the inherent challenges of broadly deploying any new industry standard, USB 3.0 is no trivial leap from USB 2.0: it offers a massive tenfold performance increase. And while performance marches on, consumer expectations for low-cost interconnects haven't changed. This puts significant pressure on engineers to work around a channel intended for much slower speeds while still ensuring reliability, interoperability, and high performance under a variety of conditions. Never before has testing to ensure physical-layer (PHY) compliance and certification been so critical.
USB 3.0 shares characteristics found in a number of other high-speed serial technologies such as PCI Express and Serial ATA: 8b/10b encoding, significant channel attenuation, and spread-spectrum clocking. In this article, we’ll review methods for compliance and how to get the most accurate, repeatable measurements possible for transmitters, receivers, and cables and interconnects. With these tips in hand, your trip to the SuperSpeed Platform Integration Lab (PIL) is likely to be even more productive.
High Speed Vs. SuperSpeed
USB 3.0 addresses the need for increased bandwidth to support applications that provide a more real-time experience. With billions of USB devices in use, backwards compatibility to support legacy USB 2.0 devices was a given in USB 3.0. Still, there are several key PHY differences between USB 2.0 and 3.0 (Table 1).
SuperSpeed USB compliance testing has changed significantly to accommodate the new challenges associated with a higher-speed interface. Validation of a USB 2.0 receiver involves performing a receiver sensitivity test. A USB 2.0 device must respond to a test packet at or above 150 mV and ignore (squelch) signals below 100 mV.
SuperSpeed USB receivers, on the other hand, must function with many more signal impairments and therefore the test requirements are more demanding than USB 2.0. Designers must also consider transmission-line effects and the use of equalization including de-emphasis at the transmitter and continuous-time linear equalization (CTLE) at the receiver. Jitter-tolerance testing at the receiver is now a requirement, and the use of spread-spectrum clocking (SSC) and asynchronous reference clocks can lead to interoperability issues.
Another important part of evaluating USB 3.0 serial-data links is the complicated interaction between the measured waveform and the behavior of the interconnect channel. It is no longer possible to assume that because the transmitter output meets the eye-diagram mask, a design will work against all channels up to a given amount of loss. To understand transmitter margin given worst-case channels, you will need to model channel and cable combinations beyond compliance requirements and use channel modeling software to analyze channel effects (Fig. 1).
Transmitter Testing For Compliance
Transmitter testing is facilitated through the use of various test patterns (Table 2). Each pattern was selected for characteristics related to the test under which the pattern is evaluated. CP0, a D0.0 scrambled sequence, is used to measure deterministic jitter (Dj) such as data-dependent jitter (DDJ) while CP1, an unscrambled D10.2 full-rate clock pattern, does not produce DDJ and is therefore better suited for evaluating random jitter (RJ).
Jitter and eye height are measured over 1 million consecutive unit intervals after applying an equalizer function and appropriate clock-recovery settings (second-order phase-locked loop, or PLL, with a closed-loop bandwidth of 10 MHz and a damping factor of 0.707). Jitter results are calculated by projecting beyond the measured data population to extract jitter performance at a 1 × 10^-12 bit error rate (BER). For example, with jitter extrapolation the target RJ is calculated by multiplying the measured RJ (rms) by 14.069.
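The extrapolation arithmetic above can be sketched in a few lines of Python. The 14.069 multiplier corresponds to 2 × Q at a BER of 1e-12 in the common dual-Dirac jitter model; the RJ and Dj values in the example are invented for illustration.

```python
RJ_MULTIPLIER_1E12 = 14.069  # 2 * Q for BER = 1e-12 (dual-Dirac model)

def extrapolate_rj(rj_rms_ps: float) -> float:
    """Peak-to-peak random jitter at a 1e-12 BER from measured RJ (rms)."""
    return rj_rms_ps * RJ_MULTIPLIER_1E12

def total_jitter(rj_rms_ps: float, dj_pp_ps: float) -> float:
    """Dual-Dirac total jitter: deterministic jitter (pk-pk) plus extrapolated RJ."""
    return dj_pp_ps + extrapolate_rj(rj_rms_ps)

# Example (made-up numbers): 1 ps rms RJ and 20 ps of Dj
# against a 200-ps unit interval (5 Gbits/s)
tj = total_jitter(1.0, 20.0)
print(f"TJ @ 1e-12 BER: {tj:.3f} ps")  # 34.069 ps
```

In practice the analyzer performs this projection internally; the sketch only shows why a small rms RJ number still consumes a meaningful slice of the 200-ps unit interval once extrapolated to a 1e-12 BER.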
Figure 2 illustrates the normative transmitter compliance setup, including the reference test channel and cable. Test point 2 (TP2) is located closest to the device under test (DUT), while test point 1 (TP1) is the far-end measurement point. All transmitter normative measurements are performed on signals at TP1.
After the signal is acquired at TP1, the data is processed using a software tool called SigTest, similar to official PCI Express compliance testing. For applications that require pre-compliance, characterization, or debug, other available tools provide additional insight into design behavior over varied conditions or parameters. A high-speed oscilloscope with USB 3.0-specific software provides automated normative and informative PHY transmitter tests. These tools save time by ensuring that the test equipment is configured properly.
After the tests are completed, a detailed pass/fail test report highlights where design issues may occur. If results differ between test locations (e.g., the company lab and a test house), the analysis should be rerun on waveform data saved from previous test runs to isolate the discrepancy.
In cases where more analysis is required, jitter-analysis and eye-analysis software can be helpful for troubleshooting and design characterization. For example, multiple eye diagrams can be displayed at one time, allowing the engineer to analyze the effects of different clock-recovery techniques or software channel models. Also, different filters can be applied to analyze the effects of SSC for resolving system-interoperability issues.
With significant channel attenuation, SuperSpeed USB requires some form of compensation to open the eye at the receiver. Equalization, in the form of de-emphasis, is used at the transmitter. The nominal de-emphasis ratio specified is 3.5 dB or 1.5x in linear scale. As an example, with a transition bit level of 150 mVp-p, the non-transition bit level would be 100 mVp-p.
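The dB-to-linear arithmetic behind that example can be checked numerically. Voltage ratios use 20·log10, which is why 3.5 dB works out to roughly 1.5x; the sketch below simply encodes that conversion.

```python
import math

def db_to_linear(db: float) -> float:
    """Convert a voltage ratio in dB to a linear ratio (20*log10 convention)."""
    return 10 ** (db / 20.0)

def non_transition_level(transition_mv: float, deemph_db: float = 3.5) -> float:
    """De-emphasized (non-transition) bit amplitude for a given transition level."""
    return transition_mv / db_to_linear(deemph_db)

print(round(db_to_linear(3.5), 3))            # 1.496 -- i.e., ~1.5x
print(round(non_transition_level(150.0), 1))  # 100.3 mV for a 150-mV transition bit
```

The exact linear ratio is 1.496 rather than 1.5, which is why the spec's 150-mV/100-mV example is nominal rather than exact.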
Real-world equalization implementations include on-die active receiver equalization and passive high-frequency filters such as those found in cable equalizers. For compliance testing, the CTLE model is well suited because of its simplicity in describing the transfer function: a CTLE is implemented with a set of poles and zeroes in the frequency domain, producing gain peaks at the desired frequencies.
CTLE implementations are simpler to design and consume less power than alternative techniques. However, in some instances they may not be adequate due to limitations in adaptation, precision, and noise amplification. Alternative techniques include feed-forward equalization (FFE) and decision-feedback equalization (DFE), which use data samples weighted with scale factors to compensate for channel loss.
CTLE and FFE are linear equalizers. As such, both suffer from signal-to-noise degradation through boosting of high-frequency noise. DFE, however, uses a nonlinear component in a feedback loop, minimizing noise amplification and compensating for inter-symbol interference (ISI). Figure 3 illustrates an example of a 5-Gbit/s signal after significant channel attenuation along with the equalized signals using de-emphasis, CTLE, and DFE techniques.
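To make the nonlinear feedback idea concrete, here is a minimal one-tap DFE sketch in Python. The tap weight and sample values are invented for the example; real receivers adapt multiple tap weights continuously.

```python
def dfe_one_tap(samples, tap=0.3):
    """Recover NRZ bits from voltage samples using one feedback tap.

    samples: received voltages (positive = logical 1, nominal levels +/-1)
    tap: estimated ISI contribution of the previous symbol (assumed known here)
    """
    bits = []
    prev = 0.0  # previous decision as +/-1 (0 before the first bit)
    for v in samples:
        # Subtract the trailing ISI of the *decided* previous bit -- this use
        # of the decision (not the raw sample) is what makes the DFE nonlinear
        # and is why it does not amplify noise the way a linear EQ does.
        corrected = v - tap * prev
        decision = 1 if corrected >= 0 else 0
        bits.append(decision)
        prev = 1.0 if decision else -1.0
    return bits

# A channel that leaves 0.3 of the previous symbol on each sample turns the
# ideal +/-1 data pattern [1, 1, 0, 1] into the smeared samples below.
rx = [1.0, 1.3, -0.7, 0.7]
print(dfe_one_tap(rx))  # [1, 1, 0, 1]
```

Note how the third sample (-0.7) sits closer to the threshold than an ideal -1.0; the feedback restores the full margin before slicing.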
USB 3.0 Receiver Testing
Receiver testing for USB 3.0 is similar to other high-speed serial bus receiver compliance testing. It’s generally split up into three phases starting with stressed-eye calibration, then jitter-tolerance testing, and finally analysis. Let’s take a look at a flowchart of this process (Fig. 4).
Stressed-eye calibration involves a worst-case signal that is usually impaired both horizontally (by added jitter) and vertically (by setting the amplitude to the lowest a receiver would see when deployed). Stressed-eye calibration must be performed whenever any of the test fixtures, cabling, or instrumentation has changed.
Jitter-tolerance testing uses the calibrated stressed eye as input, and it then applies additional sinusoidal jitter (SJ) of increasing frequency. This applied SJ exercises the clock-recovery circuitry inside the receiver, so not only is the receiver being tested using worst-case signal conditions, but its clock recovery is also explicitly tested. Lastly, analysis evaluates whether additional design tasks need to be performed after testing to achieve compliance.
Stressed-eye calibration involves first setting up the test equipment with compliant fixtures, cables, and channels (Fig. 5). The next step is to iteratively measure and adjust various types of applied stresses such as jitter. The calibration step is performed without the DUT, with compliant test fixtures and channels and with specific data patterns generated by the test equipment. The test instrumentation should be able to perform two functions—pattern generation with the ability to add various types of stresses and signal analysis such as jitter and eye measurements.
Three impairment calibrations must be made to calibrate the stressed eye: RJ, SJ, and eye height. Each of these requires particular settings on the pattern generator and analyzer. Stressed-eye calibration must be performed once for each set of cables, adapters, and instrumentation.
Because they use different sets of adapters and reference channels, hosts and devices will have different stressed-eye calibrations. Once complete, the settings for the calibrated eye can be reused and must be recalibrated only if something in the equipment setup changes.
Additional Pattern-Generator Requirements
Now that we have covered the items requiring calibration, let’s look at the additional requirements of the pattern generator for each step of the calibration, including the data pattern to be used, the amount of de-emphasis, and whether or not SSC should be enabled. Within the stressed-eye calibration recipe, the two patterns listed are CP0 and CP1. For reference, Table 3 lists all USB 3.0 compliance patterns.
CP0 is an 8b/10b-encoded, PRBS-16 data pattern (the result of subjecting the D0.0 character to scrambling and encoding in a USB 3.0 transmitter). After 8b/10b encoding, the longest run length of ones or zeros is 5 bits, reduced from the longest run length of 16 bits in a PRBS-16 pattern. CP3 is a pattern similar to the 8b/10b-encoded PRBS-16 in that it contains both the shortest (lone bit) and longest sequences of identical bits.
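The run-length claim can be illustrated with a raw maximal-length PRBS-16 generator. The sketch below uses the common PRBS-16 polynomial x^16 + x^14 + x^13 + x^11 + 1; note this is a generic PRBS-16 for illustration, not necessarily the scrambler USB 3.0 itself implements, and the 8b/10b encoding step that produces CP0 is omitted.

```python
def prbs16():
    """Yield one full period (65,535 bits) of a maximal-length PRBS-16 sequence."""
    state = 0xFFFF  # any nonzero seed
    for _ in range((1 << 16) - 1):
        # Fibonacci LFSR feedback from stages 16, 14, 13, and 11
        fb = ((state >> 15) ^ (state >> 13) ^ (state >> 12) ^ (state >> 10)) & 1
        yield state >> 15  # output the oldest bit
        state = ((state << 1) | fb) & 0xFFFF

def max_run(bits):
    """Longest run of identical bits in a sequence."""
    longest = run = 1
    prev = None
    for b in bits:
        run = run + 1 if b == prev else 1
        longest = max(longest, run)
        prev = b
    return longest

print(max_run(prbs16()))  # 16 -- the run that 8b/10b encoding shortens to 5
```

Because a maximal-length sequence visits every nonzero 16-bit state exactly once, the unencoded pattern necessarily contains one run of 16 identical bits, which is the low-frequency content that 8b/10b encoding removes.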
CP1 is a clock pattern used for the RJ calibration. Many instruments implement a dual-Dirac method of random and deterministic jitter separation for the RJ measurement. Using a clock pattern circumvents one of the drawbacks of the dual-Dirac method, which is the tendency to report DDJ as RJ, especially on long patterns. By using a clock pattern, DDJ as a result of ISI is eliminated from the jitter measurement, resulting in a more accurate RJ measurement.
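On a clock pattern there is no DDJ, so the standard deviation of the time-interval-error (TIE) samples is a clean RJ (rms) estimate. The simulation below illustrates that idea with made-up numbers (a 1.5-ps rms Gaussian jitter on a 200-ps unit interval); it is not a model of any particular instrument's measurement.

```python
import random
import statistics

random.seed(42)            # fixed seed so the example is reproducible
ui_ps = 200.0              # one unit interval at 5 Gbits/s
rj_rms_true = 1.5          # ps, assumed for the simulation

# Simulate TIE samples for a jittered clock edge: pure Gaussian RJ,
# no data-dependent component (as on the CP1 clock pattern).
tie = [random.gauss(0.0, rj_rms_true) for _ in range(100_000)]

rj_rms_measured = statistics.pstdev(tie)
print(f"RJ(rms) ~ {rj_rms_measured:.2f} ps of a {ui_ps:.0f}-ps UI")
```

Were the same statistic taken on a long data pattern, uncompensated ISI would inflate the spread and the dual-Dirac fit could misreport DDJ as RJ, which is exactly the drawback the clock pattern avoids.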
The lossy channel (i.e., a USB 3.0 reference channel and cable) between the pattern generator and the analyzer causes frequency-dependent loss in the form of eye closure, both vertically and horizontally (Fig. 6). To combat this loss, transmitter de-emphasis is used to boost the high-frequency components of the signal so the received eye is good enough for an operational link at a BER of 10^-12 or better.
The eye diagrams illustrate that without de-emphasis, all amplitudes are nominally the same. With de-emphasis, transition bits have higher amplitude relative to non-transition bits, effectively boosting the high-frequency components of the signal.
After passing through lossy channels and cables, the signal without de-emphasis suffers from ISI and has more eye closure than the signal with de-emphasis, which remains fully open. From this, we see that the amount of de-emphasis affects the amount of ISI and DDJ and therefore impacts the eye opening at the receiver.
SSC is commonly used in synchronous digital systems (USB 3.0 included) to reduce electromagnetic interference (EMI). Without SSC, the frequency spectrum of the digital stream would have a high-magnitude sharp peak at the carrier frequency (5 GHz for the 5-Gbit/s stream) and its harmonics, possibly exceeding regulatory limits (Fig. 7).
To prevent this problem, SSC is used to spread out the energy of the frequency spectrum. The carrier frequency is modulated, in this case by a triangle wave. The amount of frequency “spreading” for receiver testing is 5000 ppm, or 25 MHz, with the frequency modulation cycling at 33 kHz, or every 30 µs, shown as one period of the triangle wave. After SSC, the energy in the frequency spectrum is spread out, and no single frequency violates government limits.
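The numbers above fit together as follows, sketched here under the assumption of a down-spread triangular profile (the carrier is modulated downward from nominal, never above it):

```python
F_NOMINAL_HZ = 5.0e9   # nominal bit clock for the 5-Gbit/s stream
SPREAD_PPM = 5000      # down-spread: 0 to -5000 ppm, i.e., 25 MHz at 5 GHz
F_MOD_HZ = 33e3        # modulation rate: one triangle period every ~30 us

def ssc_frequency(t_s: float) -> float:
    """Instantaneous bit-clock frequency at time t under triangular SSC."""
    phase = (t_s * F_MOD_HZ) % 1.0                       # position within one cycle
    tri = 2 * phase if phase < 0.5 else 2 * (1 - phase)  # 0 -> 1 -> 0 triangle
    return F_NOMINAL_HZ * (1 - tri * SPREAD_PPM * 1e-6)

print(f"{ssc_frequency(0.0) / 1e9:.3f} GHz")                 # top of the spread
peak_dev = F_NOMINAL_HZ - ssc_frequency(0.5 / F_MOD_HZ)      # mid-cycle = full spread
print(f"peak deviation: {peak_dev / 1e6:.0f} MHz")
```

The 5000-ppm figure and the 33-kHz rate are the values quoted in the text; 5 GHz × 5000 × 10^-6 gives the 25-MHz peak deviation directly.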
As noted earlier, receiver-side equalization in USB 3.0 improves signals that have been impaired by ISI from frequency-dependent loss stemming from the reference channel and cabling. The concept is the same as for de-emphasis—the high-frequency components of the signal are boosted via signal-processing methods.
Although receiver equalization circuitry in a device or host is implementation-specific, the USB 3.0 standard specifies CTLE for compliance testing (Fig. 8). This CTLE must be implemented by the reference receiver such as a bit-error-rate tester (BERT) or oscilloscope before making compliance test measurements (both for transmitter testing, and in this case, receiver stressed-eye calibration), often in the form of software emulation.
The use of CTLE emulation for jitter measurements mainly impacts jitter that is affected by signal-processing methods, namely ISI. CTLE emulation does not affect jitter components that are not correlated to the data pattern such as RJ and SJ, although the use of the CTLE is required for both of these measurements according to the compliance test specification (CTS). On the other hand, eye height is directly impacted because ISI contributes to its measurement.
Jitter measurements must be made using a clock recovery “golden PLL” with a compliant jitter-transfer function (JTF), as shown by the blue trace in Figure 9. The JTF dictates how much jitter is transferred from the incoming signal to the downstream analyzer. In this case, the –3-dB cutoff is 4.9 MHz.
At lower SJ frequencies (along the sloped part of the JTF, and where the PLL loop response is flat), the recovered clock tracks the jitter on the data signal. Thus, the jitter in the data relative to the clock is attenuated according to the JTF. At higher SJ frequencies where the JTF flattens out and the PLL response slopes downward, the SJ present in the signal is transferred to the downstream analyzer. The use of a compliant JTF is specified for all measurements except for SJ during stressed-eye calibration.
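This behavior falls out of a standard PLL model. Assuming a type-2 second-order loop H(s) with 0.707 damping (a common golden-PLL model, not necessarily the exact filter any given instrument implements), the jitter reaching the analyzer follows JTF(s) = 1 - H(s), a high-pass response that crosses -3 dB near the loop's natural frequency:

```python
import math

ZETA = 0.707   # damping factor from the compliance clock-recovery settings
FN_HZ = 4.9e6  # natural frequency ~ the quoted 4.9-MHz JTF -3-dB point

def jtf_mag_db(f_hz: float) -> float:
    """|1 - H(j*2*pi*f)| in dB for a type-2 second-order PLL model."""
    s = 1j * 2 * math.pi * f_hz
    wn = 2 * math.pi * FN_HZ
    h = (2 * ZETA * wn * s + wn**2) / (s**2 + 2 * ZETA * wn * s + wn**2)
    return 20 * math.log10(abs(1 - h))

# Low SJ frequencies are tracked out; high ones pass through to the analyzer.
for f in (0.1e6, 4.9e6, 50e6):
    print(f"{f / 1e6:5.1f} MHz: {jtf_mag_db(f):7.2f} dB")
```

Sweeping the model shows jitter well below 4.9 MHz attenuated by tens of dB (the PLL tracks it), roughly -3 dB at the cutoff, and essentially unity transfer at high frequencies, matching the blue trace described for Figure 9.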
Once the stressed eye has been calibrated, testing of the receiver can commence. USB 3.0 requires BER testing, unlike its predecessor, USB 2.0. BER testing in the form of a jitter-tolerance test is the only test required for receiver testing. The jitter-tolerance test exercises the receiver using worst-case input signal conditions (the stressed eye calibrated in the previous section). On top of the stressed eye, a series of SJ frequencies and amplitudes covering the frequency range surrounding the –3-dB cutoff frequency of the JTF is injected into the test signal while the error detector monitors the receiver output for bit errors and calculates the BER.
As USB 3.0 starts to move into the mainstream, successful compliance and certification testing for transmitters and receivers is vital to bringing new products to market. These products must not only work well with other USB 3.0 devices, but also meet consumers’ expectations for performance and reliability under a wide range of conditions.
With the dramatic increase in performance comes a range of new test requirements that make design and certification more challenging compared to previous-generation standards. Fortunately, a complete set of testing tools and resources is available to help you achieve SuperSpeed USB logo certification.