IDDQ Test

Many leading-edge electronics companies implement IDDQ (quiescent supply current) testing as a standard operation in production test to detect malfunctioning VLSI devices. But why not combine IDDQ with accepted failure analysis techniques to speed up the entire defect analysis process?

This integration makes it possible to analyze VLSI functional failures efficiently within the constraints of today’s design and manufacturing environment. For our discussion, VLSI functional failures are defined as failures that pass DC parametric tests but fail to perform some digital function.

If the design, layout, fabrication and test-program development of a device are done by a single team in one facility, then analysis of a functional failure might be straightforward. Usually, the team can determine a minimum set of states which will reproduce the failing condition and focus easily on a relatively small physical area on the device responsible for the fault.

But this ideal situation occurs less frequently today. The design, layout, fabrication and test-program development often are subcontracted. Then if problems occur, calling the respective teams together for a solution may be very difficult, or only possible with an intolerable delay.

Fortunately, failure-analysis techniques have been developed to identify a physical location responsible for improper circuit function. For example, emission microscopy detects light from the recombination of electron/hole pairs; this light often is emitted strongly from defect sites or from circuitry in an abnormal state.

If you are going to apply emission microscopy or any other technique to a functional failure, the device must be stimulated into the failing condition, usually by external instrumentation or ATE. Most new VLSI devices exceed 200 I/O pins, and clock frequencies greater than 50 MHz are not uncommon. It is not feasible to dedicate ATE to each failure-analysis workstation, so a powerful and portable tester or logic verifier must be used.

If all test patterns of the device are available, the failure analyst must choose the particular stimulus, or test pattern, which will allow a fault to be isolated most efficiently. Since most tests consist of more than 100,000 patterns, this is a formidable problem.

But if this challenge is met, then the application of most failure-analysis techniques becomes much more efficient. We have found that the use of IDDQ can assist in solving this problem.

IDDQ is used in most production test programs to detect low-level leakages internal to VLSI devices. These leakages can indicate defects which will cause reliability failures. To use this measurement in a test program, the device must be static or remain in a particular logic state long enough for a precision current measurement to be made.

In production tests, typically only a few logic states are chosen to apply the IDDQ test. This choice is a trade-off between test coverage and test time. Halting a functional test and making a precise current measurement at thousands of states will result in test times which are unacceptable in a manufacturing environment.

But in the failure analysis environment, halting a device at many test patterns is feasible. This procedure provides a signature of the device’s static current versus applied test stimuli, which may be different from that of a good device.
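As a minimal sketch of how such a signature might be acquired, the routine below steps the device to each test pattern, lets it sit, and records the quiescent supply current. The tester methods shown (run_to_vector, wait, measure_idd) are hypothetical placeholders for whatever ATE or bench-tester interface is actually available.

```python
# Minimal sketch: build an IDDQ signature by halting at every test pattern.
# All tester methods below are hypothetical stand-ins for a real ATE or
# bench-tester API.

def acquire_iddq_signature(tester, num_vectors, settle_ms=10):
    """Halt the device at each test pattern and record its quiescent current."""
    signature = []
    for vector in range(num_vectors):
        tester.run_to_vector(vector)       # apply the pattern set up to this vector, then halt
        tester.wait(settle_ms)             # allow the supply current to settle to its static value
        signature.append(tester.measure_idd())  # precision current reading, in amps
    return signature
```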

This signature shows where in the test-pattern set a physical difference between a good and a failing device appears. A typical test may contain 100,000 patterns, and the device may go through many of them before the actual failing event propagates to an output pin.

Figure 1 shows signatures obtained from failing and known-good, high-pin-count ASICs. The horizontal axis is the test pattern, or vector, number. The vertical axis is static current.

These signatures were taken with all outputs tristated, so they represent only internal currents. The test program detected a failure at an output pin near test pattern 400 in Figure 1, but there are significant differences in static current as early as 40 patterns into the test.
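One simple way to locate the earliest meaningful difference is to compare the failing and known-good signatures pattern by pattern against a current threshold. The sketch below assumes both signatures are equal-length lists of current readings in amps; the 1-µA threshold is only illustrative.

```python
def first_divergence(failing, good, threshold=1e-6):
    """Return the first test-pattern index at which the failing device draws
    at least `threshold` amps more static current than the good device."""
    for vector, (i_fail, i_good) in enumerate(zip(failing, good)):
        if i_fail - i_good >= threshold:
            return vector
    return None  # no significant difference found
```

Applied to signatures like those in Figure 1, such a comparison would flag a pattern roughly 40 vectors into the test, well before the failure reaches an output pin near pattern 400.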

This signature provides insight into which test pattern should be applied when trying to localize the problem with emission microscopy and liquid crystal hot-spot analysis. The latter is useful in isolating anomalous heat dissipation on a VLSI device. Many other techniques are available, but they can damage the device or alter the failing condition.

The difference in current at test pattern 62 would be a good first choice to halt a static device and perform emission microscopy and liquid crystal hot-spot analysis.

Some VLSI device designs are not completely static and may not maintain a stable logic state when halted. Some designs will remain static long enough for an IDDQ measurement to be made, but not for more than several hundred milliseconds.

By definition, the devices under analysis have known faults. Even if the design is fully static, the failing device may not maintain a stable state. Stability is especially important in emission microscopy, where image acquisition times may be very long.

We do know that the behavior producing the static-current signature is stable and repeatable, so it is usually best to choose a test-pattern loop that contains the portion of the static-current signature of interest. The size of the loop is chosen to produce a reasonable duty cycle of increased current.

Inspection of Figure 1 reveals that the failing device exhibits higher current almost 50% of the time between test patterns 1 and 250. This would be a good choice for a test-pattern loop while applying emission microscopy or liquid crystal hot-spot analysis.
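One way to pick such a loop is to score candidate windows by the fraction of patterns in which the failing device draws elevated current. The sketch below is one possible heuristic, not the authors’ procedure; the window length and threshold are assumptions.

```python
def best_loop(failing, good, window=250, threshold=1e-6):
    """Return (start, duty_cycle) of the length-`window` test-pattern loop with
    the highest fraction of patterns showing elevated static current."""
    elevated = [f - g >= threshold for f, g in zip(failing, good)]
    best_start, best_duty = 0, 0.0
    for start in range(len(elevated) - window + 1):
        duty = sum(elevated[start:start + window]) / window
        if duty > best_duty:
            best_start, best_duty = start, duty
    return best_start, best_duty
```

For signatures resembling Figure 1, a 250-pattern window scored this way would favor a loop over roughly patterns 1 to 250, where about half the patterns show increased current.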

Usually, failures are not submitted for analysis one at a time. Typically, many failures from immature designs are submitted, and binning these failures into groups is a critical first step in most analyses.

For example, it is not always certain if a failure is the result of a defect, a design problem or a manufacturing problem. Many times, failures falling into all three of these categories are submitted from early fabrication lots. Inferences made from the static currents can be very helpful in binning failures and forming an overall analysis strategy.
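As a rough illustration of binning by static-current behavior, the sketch below greedily groups devices whose IDDQ signatures are close to one another. The RMS-difference criterion and its threshold are assumptions chosen for illustration only.

```python
def bin_by_signature(signatures, max_rms_diff=1e-6):
    """Greedily group devices whose IDDQ signatures differ by less than
    `max_rms_diff` amps (RMS) from the first member of an existing bin.
    `signatures` maps device IDs to equal-length lists of current readings."""
    bins = []  # each bin is a list of (device_id, signature) tuples
    for device_id, sig in signatures.items():
        for group in bins:
            ref = group[0][1]
            rms = (sum((a - b) ** 2 for a, b in zip(sig, ref)) / len(sig)) ** 0.5
            if rms < max_rms_diff:
                group.append((device_id, sig))
                break
        else:
            bins.append([(device_id, sig)])
    return bins
```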

Although examples of emission microscopy and liquid crystal hot-spot detection are used here, most other failure-analysis techniques also are applied more efficiently if a small set of test patterns produces the failing condition. Electron-beam probing, for example, requires signal averaging at a sample point, which can be quite time-consuming if a long test-pattern loop is required.

These failure-analysis challenges will be present for some time to come and probably will increase with future design innovations, changes in the economics of manufacturing, and technology improvements. And IDDQ will continue to be a useful tool in the failure analysis of VLSI designs.

About the Authors

John Sylvestri is an Advisory Engineer for IBM Analytical Services. He graduated from Union College with a B.S.E.E. degree and has been with IBM since 1980.

Peter Ouimet is a Staff Engineer at IBM Analytical Services. He is a graduate of Ohio State University with B.S.E.E. and M.S. degrees.

IBM Analytical Services, 1580 Rt. 52, Hopewell Junction, NY 12533-6531, (800) 228-5227.

Copyright 1995 Nelson Publishing Inc.

May 1995

