As any mixed-signal test engineer will tell you, synchronization is key to a successful digital signal processor (DSP)-based test. More specifically, three things count when providing clocks to a device under test or the DSP-based instruments inside a tester:
- Frequency coherence.
- Frequency resolution.
- Clock jitter.
Frequency coherence means that all the frequencies used in a test are periodic with each other. In other words, if an analog instrument is collecting data for a certain period of time, the data it is measuring has to repeat over exactly that same period of time. Figures 1a and 1b (see the February 2001 issue of Evaluation Engineering) show the effect of having coherent and slightly noncoherent data in both the time and the frequency domain.
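The coherence requirement can be stated as Ft/Fs = M/N: the test tone completes a whole number M of cycles in the N-sample record. The sketch below (Python with NumPy; the 100 MHz sample rate, 4096-sample record, and 1 MHz target are illustrative values, not from any particular tester) snaps a requested tone to the nearest coherent frequency and shows how even a half-bin error smears energy across the spectrum:

```python
import numpy as np

def coherent_frequency(f_target, f_sample, n_samples):
    """Snap f_target to the nearest frequency that completes a whole
    number of cycles in an n_samples record (coherence: Ft/Fs = M/N)."""
    m_cycles = round(f_target * n_samples / f_sample)
    return m_cycles * f_sample / n_samples

fs, n = 100e6, 4096
t = np.arange(n) / fs
f_coh = coherent_frequency(1.0e6, fs, n)   # 41 cycles in 4096 samples

# Coherent tone: all energy lands in a single FFT bin.
spec_coh = np.abs(np.fft.rfft(np.sin(2 * np.pi * f_coh * t)))
# Half a bin off coherence: energy leaks across the whole spectrum.
spec_leak = np.abs(np.fft.rfft(np.sin(2 * np.pi * (f_coh + fs / (2 * n)) * t)))

peak = int(np.argmax(spec_coh))
# Energy outside the peak bin, relative to the peak bin:
leak_coh = (spec_coh.sum() - spec_coh[peak]) / spec_coh[peak]
leak_non = (spec_leak.sum() - spec_leak[peak]) / spec_leak[peak]
```

For the coherent tone the out-of-bin energy is at the arithmetic noise floor; for the slightly noncoherent tone it is a substantial fraction of the signal, which is exactly the leakage the figures illustrate.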
Noncoherence is a big problem when the sampling rate of an analog signal is slow relative to the frequency of the sine wave you are trying to measure, such as when undersampling a fast analog waveform. In this case, the errors accumulate over a very long period of time, making the effects of the noncoherence even worse.
Frequency resolution indicates how precisely you can set a frequency in the tester. This ties back to coherence: a user may ask for a specific frequency, but if the tester's resolution is too coarse to produce exactly that frequency, the analog and digital instruments will run at slightly different periods.
Clock jitter indicates how stable a sampling clock is. If too much jitter is present, noise will be added to all the analog measurements. There are two types of jitter: random jitter, which is typically Gaussian in nature, and multimodal jitter, which occurs when clock edges cluster at a few distinct points in time.
Figures 2a and 2b (see the February 2001 issue of Evaluation Engineering) show the effects of added jitter when digitizing an analog signal in both the time and frequency domains. In Figure 2a, an unjittered signal has a very good signal-to-noise-and-distortion ratio (SINAD). In Figure 2b, random and multimodal jitter are added, causing the clock edge to appear at distinct points. When this clock is used to measure an analog signal, the error appears at specific frequencies determined by the test frequency (or frequencies) and the locations of the clock edges. In this example, the edges occur at eight randomly spaced locations.
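The random-jitter effect is easy to reproduce numerically. The sketch below (Python with NumPy; the 20 ps RMS Gaussian jitter and the roughly 500 kHz coherent tone are illustrative assumptions) digitizes the same tone with ideal and with jittered sampling instants and compares SINAD:

```python
import numpy as np

rng = np.random.default_rng(0)
fs, n, tone_bin = 100e6, 8192, 41
f_in = tone_bin * fs / n                      # coherent tone, ~500 kHz

def sinad_db(x):
    """Tone power over all remaining power (noise plus distortion)."""
    p = np.abs(np.fft.rfft(x))**2
    p[0] = 0.0                                # ignore DC
    return 10 * np.log10(p[tone_bin] / (p.sum() - p[tone_bin]))

t = np.arange(n) / fs
s_clean = sinad_db(np.sin(2 * np.pi * f_in * t))

# Add 20 ps RMS Gaussian jitter to every sampling instant:
t_jit = t + rng.normal(0.0, 20e-12, n)
s_jit = sinad_db(np.sin(2 * np.pi * f_in * t_jit))
```

Even 20 ps of random jitter caps the measurement in the mid-80 dB range here, while the unjittered record is limited only by arithmetic noise.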
When it comes to generating clocks for large-scale ATE, two techniques can be used.
The first technique, used primarily in digital testers, combines a constant master-clock frequency with analog verniers to fabricate frequencies that are not direct submultiples of that clock. It is used in some system-on-a-chip (SOC) testers.1 This technique is timing based since it uses verniers to create the correct period in the time domain.
The second technique uses a continuously variable master clock to directly make the desired frequency. It traditionally has been used in mixed-signal testers to provide optimum coherence, resolution, and jitter. This technique is frequency based since it directly divides down a master-clock frequency to make the desired clocks. The master clock is generated by a laboratory-quality clock synthesizer, typically based on direct digital synthesis (DDS), that operates from a standard reference frequency.
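The fine resolution of such a synthesizer follows from the DDS principle: an N-bit phase accumulator advances by a tuning word M each reference cycle, so the output frequency is M x f_ref / 2^N. A minimal sketch (Python; the 32-bit accumulator and 100 MHz reference are illustrative assumptions, not any specific tester's values):

```python
# DDS frequency setting: f_out = M * F_REF / 2**N_BITS.
# N_BITS and F_REF below are illustrative, not a real instrument's specs.
N_BITS = 32
F_REF = 100e6

def dds_tuning_word(f_target):
    """Tuning word M that lands closest to f_target."""
    return round(f_target * 2**N_BITS / F_REF)

def dds_frequency(m):
    return m * F_REF / 2**N_BITS

m = dds_tuning_word(95.2e6)            # master clock for the 47.6 MHz example
f_actual = dds_frequency(m)
resolution_hz = F_REF / 2**N_BITS      # ~0.023 Hz steps at a 100 MHz reference
error_ppm = abs(f_actual - 95.2e6) / 95.2e6 * 1e6
```

With these values the master clock lands within a few millihertz of any target, comfortably finer than the 0.01-ppm resolution figure quoted later in the article.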
The difference between these two techniques is illustrated in Figures 3a and 3b (see the February 2001 issue of Evaluation Engineering). In the timing-based approach, a fixed-frequency master clock (100 MHz, or 10 ns/cycle, for this example) is divided to make a desired clock frequency (47.6 MHz, or 21 ns/cycle, in this case). Since a digital divider cannot directly produce 47.6 MHz, the divider has to toggle between 2 and 3 to provide an average division ratio of 2.1.
Since the clock must appear as though it really is running at exactly 47.6 MHz, the edges of the clock are continuously moved by reprogramming analog timing verniers on the fly so the result appears correct. This requires that a very fast math engine be used to determine when to switch the divider values and then calculate the remainder of the period that must be corrected by the timing verniers.
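The bookkeeping that math engine performs can be sketched as follows (Python; a simplified model of the divider-plus-vernier scheme described above, not any real tester's firmware):

```python
# Timing-based clock generation: a 100 MHz master (10 ns grid) must fake
# a 21 ns output period. The integer divider toggles between 2 and 3 so
# its average division ratio is 2.1, and an analog vernier absorbs the
# per-cycle remainder. All values are from the article's example.
T_MASTER = 10.0            # ns per master-clock cycle
T_TARGET = 21.0            # ns desired output period

coarse_edge = 0.0          # edge position from integer divides alone
ideal_edge = 0.0           # where the edge really belongs
verniers = []              # per-cycle analog correction, in ns
for _ in range(20):
    ideal_edge += T_TARGET
    # Choose divide-by-3 only when the coarse edge can land at or
    # below the ideal edge; otherwise divide by 2:
    div = 3 if coarse_edge + 3 * T_MASTER <= ideal_edge else 2
    coarse_edge += div * T_MASTER
    verniers.append(ideal_edge - coarse_edge)

distinct_offsets = sorted(set(verniers))   # ten values: 0.0 .. 9.0 ns
```

Note that the vernier setting cycles through ten distinct offsets, 0 through 9 ns. Any nonlinearity or calibration error in the vernier therefore repeats with that pattern, which is the mechanism behind the multimodal jitter discussed later.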
The frequency-based method is comparatively simple. In this case, the master clock can be varied over a wide range so a fixed divider can be used to make the desired clock. In this example, the master clock is set to 95.2 MHz and divided by 2.
Pros and Cons
So, what are the advantages of one technique over the other? The timing-based method historically has been used in digital testers because it is a mostly digital implementation and allows greater resolution when switching periods on the fly. As it turns out, mixed-signal components seldom, if ever, need period-switching capability.
The implementation of the frequency-based method can be more difficult, especially if the clock must be multiplied to a higher frequency. Doing this requires a phase-locked loop (PLL)-based circuit that would have to accept a variable reference frequency. This is worth the effort, however, since the analog performance advantages lie with the frequency-based method, specifically:
- Frequency Resolution: The period resolution of the timing-based system is limited to the resolution of the timing vernier, which can be no finer than tens of picoseconds. As data rates climb past a gigahertz for common data-communications applications, this becomes a serious limitation. The master clock for the frequency-based technique, by comparison, generally has a resolution on the order of 0.01 ppm.
- Coherence: Since analog resources tend to be clocked off a very high-resolution master clock, it is possible, indeed likely, that the analog and digital resources will not have exactly the same periodicity unless the user specifically compensates for the difference. This can lead to spectral leakage, especially in higher-frequency applications.
- Jitter: Since the timing-based technique most likely will change the analog vernier value each cycle, the digital clock will suffer the effects of linearity and other errors. High-speed verniers are notorious for being nonlinear, and even complex calibration schemes are subject to errors that will limit how much correction can be achieved.
Since the edges of the clock will be at a different location each cycle, jitter will be introduced. By comparison, the frequency-based technique will use the same vernier setting each time, eliminating these errors and providing a more stable signal.
The nature of the jitter introduced by the timing method is especially problematic since it is multimodal. Whereas random jitter makes the noise floor worse, multimodal jitter introduces frequency components in the measurement that aren’t really there. Worse yet, the number and locations of frequencies introduced depend on the particular implementation of the verniers and never will be repeatable from tester to tester. For multitone testing of asymmetric digital subscriber lines (ADSL) and other devices, the multimodal jitter can be especially troublesome.
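This distinction is easy to reproduce. In the sketch below (Python with NumPy; the eight edge positions, the tone placement, and the 50 ps span are illustrative assumptions), the same RMS of timing error is applied once as an eight-level repeating pattern, echoing the article's eight-edge example, and once as Gaussian noise. The multimodal version concentrates its error power into discrete spurs offset from the tone by multiples of fs/8, while the random version spreads it as a noise floor:

```python
import numpy as np

rng = np.random.default_rng(1)
fs, n, tone_bin = 100e6, 8192, 211
f_in = tone_bin * fs / n                   # coherent test tone
t = np.arange(n) / fs

# Multimodal: every sampling edge falls on one of 8 fixed, randomly
# spaced offsets, repeating with the vernier pattern every 8 samples.
offsets = rng.uniform(-50e-12, 50e-12, 8)
jit_multi = offsets[np.arange(n) % 8]
# Random Gaussian jitter with the same RMS, for comparison:
jit_rand = rng.normal(0.0, jit_multi.std(), n)

pow_multi = np.abs(np.fft.rfft(np.sin(2 * np.pi * f_in * (t + jit_multi))))**2
pow_rand = np.abs(np.fft.rfft(np.sin(2 * np.pi * f_in * (t + jit_rand))))**2

# An 8-sample-periodic error modulates the tone, so its power lands
# only at tone_bin +/- k*(n//8), folded into the real spectrum:
spurs = {abs(tone_bin + k * (n // 8)) % n for k in range(-4, 5)} - {tone_bin}
spurs = {b if b <= n // 2 else n - b for b in spurs}

def spur_fraction(p):
    """Share of the non-carrier power sitting in the predicted spur bins."""
    other = p.sum() - p[tone_bin] - p[0]
    return sum(p[b] for b in spurs) / other

frac_multi = spur_fraction(pow_multi)   # ~1.0: error piled into spurs
frac_rand = spur_fraction(pow_rand)     # small: error spread as noise
```

The spur frequencies depend on the offset pattern, which is why, as noted above, the tones a timing-based clock injects vary from implementation to implementation.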
Exactly quantifying the typical performance of each technique is difficult, but other tester specifications, which stem from the same error sources, can be used as guidelines.
Timing-based systems rely on the accuracy of a timing vernier. Testers with digital pins have timing-accuracy specifications based on the same verniers used to generate clocks. Timing errors for digital pins can exceed 100 ps for a typical tester, which speaks to the error that the clock generator is likely to have.
Frequency-based systems use the same mechanism as a high-speed clocking option in a mixed-signal tester. This consists of a precision clock source with a high-speed (low-jitter) clock divider. Jitter specifications for these options typically are in the range of 1 to 5 ps.
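These timing-error figures translate into a noise limit through the standard aperture-jitter rule of thumb, SNR = -20 log10(2 pi f t_rms). This is a textbook bound, not a figure from the article; the sketch below applies it to the 100 ps and 2 ps numbers above for a 10 MHz signal:

```python
import math

def jitter_limited_snr_db(f_in_hz, t_rms_s):
    """Aperture-jitter SNR bound: -20*log10(2*pi*f*t_rms)."""
    return -20 * math.log10(2 * math.pi * f_in_hz * t_rms_s)

# Sampling a 10 MHz signal:
snr_vernier = jitter_limited_snr_db(10e6, 100e-12)   # ~44 dB at 100 ps
snr_divider = jitter_limited_snr_db(10e6, 2e-12)     # ~78 dB at 2 ps
```

Moving from vernier-class timing error (100 ps) to a low-jitter divider (2 ps) buys roughly 34 dB of measurement headroom on a 10 MHz signal, which is why the distinction matters for mixed-signal work.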
Just adding a high-purity clock option to the tester does not solve the problem unless it can be used to clock both the DUT and the analog instruments in the tester. Remember that if the clock on the tester's DSP instruments jitters, so do their analog outputs. It does not matter whether the clock to the DUT or the clock to the analog input is jittering; both produce the same errors shown in Figure 2.
So while SOC testers have to deal with digitally complex circuits, it is important to maintain mixed-signal tester features to adequately test the analog elements of these devices. A stable, high-resolution clocking system in the digital subsystem is key to low-noise, accurate mixed-signal testing.
The author thanks Pinakin Modi of Analog Devices for his contributions to this article.
Reference
1. Larson, E., “SOC Designs Challenge ATE Timing Architecture,” EE-Evaluation Engineering, June 2000, pp. 20-27.
About the Author
Ken Lanier is a market manager at LTX. Previously, he held positions in test applications engineering, engineering management, and product management. Mr. Lanier received a B.S.E.E. from Worcester Polytechnic Institute in 1984. LTX, Fusion Division, LTX Park at University Ave., Westwood, MA 02090, 781-461-1000.
Published by EE-Evaluation Engineering
All contents © 2001 Nelson Publishing Inc.
No reprint, distribution, or reuse in any medium is permitted
without the express written consent of the publisher.