Much of today’s dialogue about mixed-signal testing concerns the integration of analog functionality into digital ICs. Primarily, the focus is on digital automatic test equipment (ATE) that incorporates some of the functionality traditionally found in mixed-signal test equipment.
In the meantime, some of the most interesting developments in mixed-signal ATE have been quietly taking place around high-growth markets, specifically RF, smart power, hard disk drive (HDD) and high-speed datacom. In fact, these are the market segments that currently drive many new tester decisions in the traditional mixed-signal ATE market.
The HDD and datacom markets are especially interesting. They approach the problem of digital test capability not from the usual standpoints of complexity and pin count, but from those of raw speed, accuracy and waveform fidelity. Instead of Big Digital (D)-Big Analog (A), the most pressing problem in the mixed-signal world today could be labeled Fast D-Big A.
Digital data-rate requirements for HDD and datacom devices currently exceed 300 Mb/s and are on their way to 500 and 600 Mb/s. Coupled to these data rates are unprecedented analog requirements, including timing measurements made to picosecond accuracy and waveform synthesis at rates greater than 1 GS/s.
Even digitally intensive or next-generation mixed-signal ATE falls short of the mark for these applications. It cannot achieve the required data rates and does not offer features needed to adequately test these high-performance circuits.
To appreciate the difference in digital test solutions, it is helpful to have some idea of how Fast D differs from Big D. While it is easy to grasp the concept of a microcontroller with some integrated A/Ds and D/As, this is really a very simplistic form of mixed-signal testing, since the digital and analog portions of the IC are distinct and generally tested as separate entities.
Far more interesting are the HDD and high-speed datacom cases. These differ from the digital-and-analog-sharing-the-same-die problem in many ways:
Data rates are much higher, often by as much as an order of magnitude. Correspondingly, tester specs such as minimum pulse width and rise/fall times must track increases in maximum data rate.
Fast-D devices incorporate timing recovery functions. This means that the data and clock signals are one and the same, and data output timing cannot be predicted.
The format of the digital signals is not a traditional TTL or ECL waveform. Nontraditional encoding schemes such as MLT3 and GCR are used to condition signals for media such as twisted-pair cable or disk drives (Figure 1); a brief encoding sketch follows this list.
Timing and waveshape characteristics of the device under test (DUT) must be measured to levels that are far more stringent than those required for Big-D devices, with characteristic jitter and delay measurements as low as a few picoseconds.
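To make the encoding point concrete, here is a minimal sketch of the MLT-3 rule used over twisted pair (for example, in 100BASE-TX): a 0 bit holds the current output level, and a 1 bit steps through the repeating level cycle 0, +1, 0, -1. The Python function and test vector are illustrative only, not part of any tester's programming environment.

    def mlt3_encode(bits):
        """MLT-3 rule: a 0 bit holds the current level; a 1 bit steps
        through the repeating level cycle 0, +1, 0, -1."""
        cycle = [0, 1, 0, -1]
        state = 0
        out = []
        for b in bits:
            if b:                        # a 1 bit advances the state
                state = (state + 1) % 4
            out.append(cycle[state])     # a 0 bit simply holds
        return out

    # Seven input bits map onto three signal levels:
    print(mlt3_encode([1, 1, 1, 1, 0, 1, 1]))  # [1, 0, -1, 0, 0, 1, 0]

The three-level result has no resemblance to a TTL or ECL waveform, which is why traditional drive and compare resources handle it so poorly.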
Real-Time Speed
Data rates greater than 300 Mb/s are serious business and are usually beyond the scope of MUXing traditional digital resources together. Even if enough tester channels and timing markers can be combined to effect the desired data rate, the test-head drivers and comparators of Big-D testers are generally not up to the task of creating the resultant waveforms.
Using a MUX configuration of any sort to create high-speed clock or data signals invariably leads to problems such as data stagger, where the mismatch in timing markers causes multimodal jitter on the input data. Bandwidth limitations in the test-head electronics or device interconnects also will add data-dependent jitter.
The combination of these two effects will wreak havoc on timing recovery circuits that attempt to recover a clock from this data stream. The resultant clock jitter will make the DUT appear to function far worse than it will in its end application and may even prevent the part from functioning at all while being tested.
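To see why stagger is so damaging, consider a rough simulation, with all numbers hypothetical: two interleaved timing markers with a 20-ps placement mismatch, where each edge also carries 5-ps rms random jitter of its own. The time-interval-error distribution comes out bimodal rather than Gaussian, which is exactly the multimodal jitter a timing recovery loop cannot track.

    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical numbers: two 150-Mb/s channels interleaved to fake a
    # 300-Mb/s stream, but their timing markers disagree by 20 ps.
    ui = 1 / 300e6                                   # 300-Mb/s unit interval
    n = 20_000
    ideal = np.arange(n) * ui                        # ideal edge positions
    stagger = np.where(np.arange(n) % 2, 20e-12, 0)  # alternating-marker skew
    edges = ideal + stagger + rng.normal(0, 5e-12, n)

    tie = (edges - ideal) * 1e12                     # time-interval error, ps
    # A histogram of tie shows two 5-ps-wide peaks spaced 20 ps apart,
    # not the single Gaussian a timing recovery loop expects to track.
    print(f"overall rms jitter = {tie.std():.1f} ps")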
The only reliable solution is having unmultiplexed tester resources that run at the actual data rate of the DUT. This eliminates the problem of data stagger since the same timing marker is used for each occurrence of a rising or falling edge. High-bandwidth pin electronics and high-performance mainframe-to-test-head interconnects (coax cables, for example) also are required to minimize effects such as data-dependent jitter.
Where Are the Bits?
Fast-D devices are not exactly causal. In other words, stimulating the device with digital data does not necessarily mean that the data will appear at a given output when you expect it to do so.
Figure 2 shows a timing recovery circuit that takes in a serial data signal and attempts to extract the data stream and provide an accompanying clock. When these devices are used in their final application, there is no need for the data and clock outputs to have any known relationship to the input data. Downstream ICs simply clock data in based on the recovered clock.
If you must verify the operation of the data receiver, this is no trivial matter. Since digital testers like to have device responses defined ahead of time, the only solutions on these platforms are to run the pattern many times in an attempt to hit a timing combination that passes (problematic since the timing can change from run to run) or to build a data cache on the device interface board that operates at several hundred megahertz.
Fast-D solutions should have the capability to change timing to any desired value, not just predetermined time sets, while the pattern is still running. This would avoid the problem of stopping and restarting the DUT.
Fast-D testers also should provide real-time data recording capability, along with functional logic testing, at rates up to four times the DUT data rate. This allows you to characterize the DUT and perform a robust test of receiver functionality.
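As a sketch of what a 4x recording rate buys you, the following hypothetical routine recovers NRZ bits from a 4x-oversampled capture by locating the dominant transition phase and slicing half a bit away from it. The names and the simple slicing rule are illustrative; a real tester's recording hardware is far more sophisticated.

    import numpy as np

    def recover_bits(samples):
        """Recover NRZ bits from a 4x-oversampled capture: find the
        dominant transition phase, then slice half a bit away from it."""
        samples = np.asarray(samples)
        edges = np.flatnonzero(np.diff(samples) != 0)  # transition indices
        phase = int(np.median(edges % 4))              # where edges fall
        center = (phase + 2) % 4                       # near bit centers
        return samples[center::4]

    capture = [0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 1, 1, 1, 1]
    print(recover_bits(capture))                       # -> [0 1 1 0 1]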
Generating Worst-Case Waveforms
Devices designed to talk to imperfect transmission media use many different encoding schemes to pack a high concentration of data into a band-limited environment. These include multilevel waveforms or signals that have undergone very specific analog conditioning to minimize interactions between adjacent symbols. Along with verifying the digital functionality of various DUT functions, it is critical to measure the analog aspects of pins, such as transmitter outputs, or to stress receiver inputs with worst-case waveforms.
DUT transmitter outputs are measured to verify both amplitude and timing integrity. High-bandwidth waveform digitizers with rise times below 100 ps are usually required for high-speed datacom devices to implement tests such as pulse masking or eye diagrams. Also critical is the amount of jitter produced by HDD and datacom devices.
For this reason, instrumentation that can make statistically based measurements of random jitter using tens of thousands of samples is required. Current high-speed datacom devices are pushing the noise floor of such measurements to below 10-ps rms.
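A simple model of such a statistically based measurement, with illustrative numbers throughout: compute the time-interval error of each captured edge against an ideal time grid, then take the rms value over the full population.

    import numpy as np

    def rms_jitter(edge_times, period):
        """Statistical jitter estimate over a large edge population:
        time-interval error of each edge against an ideal time grid."""
        edge_times = np.asarray(edge_times)
        k = np.round(edge_times / period)       # nearest ideal edge index
        tie = edge_times - k * period           # time-interval error
        return tie.std()                        # rms (random) jitter

    # Hypothetical: 50,000 edges of a 300-MHz clock with 8-ps rms jitter
    rng = np.random.default_rng(1)
    period = 1 / 300e6
    edges = np.arange(50_000) * period + rng.normal(0, 8e-12, 50_000)
    print(f"measured rms jitter ~ {rms_jitter(edges, period)*1e12:.1f} ps")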
Along with characterizing DUT outputs, the Fast-D tester must also provide high-speed digital inputs that replicate the worst conditions under which the DUT is expected to operate. While these signals convey digital information, they are combined with analog waveform corruption as shown in Figure 3. In this case, the best instrument to use is not a digital resource, but rather an arbitrary waveform generator (AWG) that can operate at rates up to eight times the symbol rate of the DUT.
You can use any mathematical description desired to corrupt the base data repeatably from tester to tester. Since the output of the AWG contains digital information that must be decoded by the DUT, it is critical that the AWG be controlled from the tester's digital pattern to ensure that it operates in lock step with the tester's digital resources.
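A minimal sketch of this idea, with all parameters hypothetical: render the base data at eight samples per symbol, then add a mathematically defined corruption, here a slow baseline-wander sinusoid plus seeded Gaussian noise, so the identical waveform replays on any tester.

    import numpy as np

    def worst_case_record(bits, osr=8, noise_rms=0.05, wander=0.1, seed=7):
        """Hypothetical AWG record: NRZ data rendered at 8 samples per
        symbol, then corrupted with a baseline-wander sinusoid and
        seeded Gaussian noise. Fixing the seed keeps the corruption
        mathematically defined, so it replays identically anywhere."""
        rng = np.random.default_rng(seed)
        wave = np.repeat(2.0 * np.asarray(bits) - 1.0, osr)  # +/-1 at 8x
        t = np.arange(wave.size)
        wave += wander * np.sin(2 * np.pi * t / (50 * osr)) # slow wander
        wave += rng.normal(0, noise_rms, wave.size)         # additive noise
        return wave

    record = worst_case_record(np.random.default_rng(2).integers(0, 2, 512))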
Other Fields of Interest
While disk-drive and datacom devices demonstrate the most complex aspects of Fast-D devices, there are many simpler and far more common examples of where the analog aspects of digital test resources are critical. Take, for example, a high-speed analog-to-digital converter (ADC). This type of device is becoming far more prevalent with sample rates spanning the range from tens of megasamples to nearly 1 GS/s.
To a first order, you must face the problem of sourcing or capturing digital data in real time at these extremely high rates. Digital resources in the tester not only must functionally test data at these rates, but also must record it for later digital signal processing (DSP)-based mathematical analysis. Very few pieces of equipment have this capability at rates exceeding 100 MS/s.
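As a sketch of the DSP-based analysis such a capture feeds, assuming a coherently sampled sine-wave test: the SNR falls out of a single FFT of the recorded codes. The setup numbers below are illustrative; an ideal 10-bit quantizer should come out near the textbook figure of roughly 62 dB.

    import numpy as np

    def snr_db(codes, signal_bin):
        """Coherent sine-wave SNR from a captured record: power in the
        signal bin versus power in every other non-DC bin."""
        spec = np.abs(np.fft.rfft(np.asarray(codes, float))) ** 2
        noise = spec[1:].sum() - spec[signal_bin]
        return 10 * np.log10(spec[signal_bin] / noise)

    # Hypothetical coherent setup: 4,096 points, exactly 103 input cycles,
    # ideal 10-bit quantization.
    n, cycles = 4096, 103
    sine = np.sin(2 * np.pi * cycles * np.arange(n) / n)
    codes = np.round(511 * sine)
    print(f"SNR = {snr_db(codes, cycles):.1f} dB")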
Another more subtle problem lies just beneath the surface of high-speed converter testing: it is impossible to measure good signal-to-noise ratios (SNR) if there is any significant amount of jitter between the DUT's sample clock and analog input signals.
For example, a sine wave test that uses a 10-MHz input sine wave and hopes to achieve an SNR of at least 65 dB would have to source less than 10-ps rms jitter to the DUT. This means that clock and waveform jitter will often be much more of a problem for these devices than the resolution of the tester’s waveform source. Effective DSP resolution of the tester, for example, can be improved by oversampling techniques; jitter can never be improved.
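The 10-ps figure follows directly from the standard jitter-limited SNR of a sine input, SNR = -20 log10(2 pi f_in t_j), as this quick check shows:

    import math

    # Jitter budget for a target SNR on a sine input:
    # SNR = -20*log10(2*pi*f_in*t_j), solved for t_j.
    f_in, snr_target = 10e6, 65.0                       # 10-MHz sine, 65 dB
    t_j = 10 ** (-snr_target / 20) / (2 * math.pi * f_in)
    print(f"allowed rms jitter = {t_j * 1e12:.1f} ps")  # ~8.9 ps, <10 ps rms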
Conclusions
Virtually every digital tester today bills itself as mixed-signal-capable, and even traditional mixed-signal platforms are rushing to increase pin count and marginally increase data rates. It is important to remember that bigger does not mean better.
Many of the most challenging mixed-signal devices require real-time digital performance that is well beyond the data rates of these Big-D machines. In addition, the analog characteristics of digital test resources become critical to perform a reasonable test of even the most basic types of mixed-signal devices.
Very often, the requirements of VLSI digital and ultra-high-speed digital cannot be simultaneously satisfied. Also, it is unreasonable to expect you to constantly choose between one set of ATE requirements and the other.
The solution is a flexible ATE architecture based on VLSI digital, but one that allows the addition of Fast-D resources on selected pins under program control, as shown in Figure 4. This lets you achieve both maximum performance and configuration flexibility. Such an architecture is cost-effective initially and allows you to upgrade digital capability as new applications emerge.
With such an architecture, you get the best of both the Big-D and Fast-D worlds. The requirements of the two are very different. When evaluating ATE capabilities, it is important to be sure that both test problems are adequately addressed.
About the Author
Ken Lanier is the market manager at LTX. He received a B.S.E.E. degree from Worcester Polytechnic Institute in 1984 and previously held positions including applications engineer, group leader and product specialist. LTX, LTX Park at University Ave., Westwood, MA 02090-2306, (617) 329-7550.
Copyright 1996 Nelson Publishing Inc.
December 1996