Test Vector Compression Makes More From Less

Design automation is the key to the development of very large ICs. Optimizing the connection and layout of millions of gates to efficiently perform complex functions is not a job to which humans are well suited. There simply are too many variables and constraints that simultaneously must be addressed.

Similarly, the design of the necessary test circuitry and test vectors for the production of large ICs has become automated. Before design for testability (DFT) was comprehensively addressed by modern synthesis tools, IC designers gave most of their attention to accomplishing a circuit’s functional performance. Ensuring testability of the manufactured IC was of secondary importance.

Unfortunately, after an IC has been designed, the faults that can be identified by externally applying test vectors to the device pins may be very limited. The automatic insertion of test circuitry during the design process improves fault visibility. Compared to handcrafted and possibly lengthy ad hoc approaches that may use less silicon area, automated DFT tools guarantee a timely and scalable solution.

Scan

Scan technology is a popular example of an automated test methodology. All scan implementations operate in a similar manner.1

  • A design incorporates test-mode circuitry to link the design’s latches into a long shift register (a scan chain).
  • Test data is shifted into the register in the test mode.
  • The new states of the latches are applied to the device circuitry in the normal operating mode.
  • The resulting latch states are shifted out of the register in the test mode.
  • Output data is compared to expected values to determine pass/fail.

Insertion of the required scan circuitry is handled automatically by the design software.
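
To make the sequence concrete, here is a minimal Python sketch of the flow just described. The three-bit chain, the full-adder logic it drives, and the chosen test vector are invented purely for illustration; in practice, the design software inserts the scan cells and the ATPG supplies the patterns.

def shift(chain, bits_in):
    """Shift bits_in into the scan chain; return the bits shifted out."""
    out = []
    for b in bits_in:
        out.append(chain[-1])           # the last flop drives scan-out
        chain[:] = [b] + chain[:-1]     # a new bit enters at scan-in
    return out

def capture(chain):
    """One functional clock: the flops capture the logic they feed (a full adder here)."""
    a, b, cin = chain                   # current flop states drive the logic
    s = a ^ b ^ cin                     # sum
    cout = (a & b) | (cin & (a ^ b))    # carry
    chain[:] = [s, cout, cin]           # results captured back into the flops

chain = [0, 0, 0]
shift(chain, [1, 0, 1])                 # test mode: load the test vector
capture(chain)                          # normal mode: apply it to the circuit
response = shift(chain, [0, 0, 0])      # test mode: unload (while loading the next vector)
expected = [1, 1, 0]                    # fault-free response in unload order: cin, cout, sum
print("PASS" if response == expected else "FAIL")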

Working from the synthesized circuit model, automatic test pattern generation (ATPG) algorithms determine suitable test vectors and expected responses. In traditional uncompressed stored pattern testing, complete bit-level compare information for all tests is stored in the automatic test equipment (ATE). All test responses are unloaded from scan chains to the ATE and compared bit for bit. The location of a fault can be found by knowing which test bit failed.

Analysis of the scan process has shown, however, that no more than 1% to 5% of the data bits applied in test patterns actually are necessary. Nevertheless, because of the shift-register nature of the technique, all test-vector locations must be filled whether or not they actively contribute to fault finding. As designs have become larger and scan chains much longer, increasing test times and ATE test-vector memory requirements have prompted some rethinking.

Scan Compression

To reduce the test application time (TAT), a single, long scan chain can be broken into N smaller chains, assuming there are sufficient I/O pins available. Using ATE that can deal with N serial bit streams in parallel, the time taken to load and unload the scan chains can be made significantly smaller. While a big improvement, this technique by itself does not address the very large amount of ATE memory required to store uncompressed test patterns.
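
A back-of-the-envelope calculation in Python, using invented numbers for chain length, pattern count, and shift-clock period, shows the scale of the improvement:

flops, patterns, t_shift = 1_000_000, 10_000, 25e-9    # 1M scan cells, 10k patterns, 40-MHz shift clock (assumed)
for n_chains in (1, 32):
    cycles = (flops // n_chains) * patterns             # shift cycles needed to load/unload all patterns
    print(f"{n_chains:>2} chain(s): about {cycles * t_shift:,.1f} s of shifting")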

Several approaches have been developed to reduce test vector and output compare data volume. One method “…relies on preselecting a small subset of test responses captured in the scan chains and performing bit-level comparison on only those, using normal ATE datalogging features,” said Lee Todd, manufacturing test marketing group director at Cadence Design Systems. “Issues include how well subsets derived from artificial fault models align with real defect behavior and to what extent ignoring the majority of captured responses hides side effects that could be useful for diagnostics.”

Also, newer ATPG programs may be better at vector compaction. For example, the 1999 version of TetraMAX™ from Synopsys is said to generate 40% to 50% fewer vectors than previous products to reach identical fault-coverage levels.2

The latest TetraMAX TenX version provides distributed processing for compute-intensive ATPG tasks across multiple CPUs, workstations, and servers. Accelerating ATPG reduces the cost of test by minimizing manufacturing test-development time.

Actual compression schemes involve on-chip generation of some portion of the test-vector data. Several companies claim volume compression by a factor of at least 10 while maintaining 99%+ test coverage. A factor of 100 has been reported for certain designs.

Figure 1. Linear Feedback Shift Register Implementing the Polynomial X^16 + X^5 + X^3 + X^2 + 1

Different kinds of on-chip pseudorandom pattern generators (PRPGs) provide most of the test-vector bits. Figure 1 shows one type, the linear feedback shift register (LFSR), which can be started from a known seed value. An LFSR typically is as long as the longest scan chain. But if only the seed value must be stored by the ATE, then data compression by a factor of about N is possible. As the contents of the register change because of feedback and the shifting action, different patterns are loaded into the various scan chains.
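
A minimal Python sketch of such a generator, built around the Figure 1 polynomial, illustrates the idea. The seed value and the fan-out to four short chains are assumptions made purely for illustration:

def lfsr_stream(seed, taps=(16, 5, 3, 2), width=16):
    """Fibonacci-style LFSR: yields one pseudorandom bit per shift clock."""
    state = [(seed >> i) & 1 for i in range(width)]    # stage i+1 holds seed bit i
    while True:
        yield state[-1]                                # serial output bit
        fb = 0
        for t in taps:
            fb ^= state[t - 1]                         # XOR of the tapped stages
        state = [fb] + state[:-1]                      # shift toward the output

gen = lfsr_stream(seed=0xACE1)                         # only this seed need be stored by the ATE
chains = [[next(gen) for _ in range(8)] for _ in range(4)]   # fill 4 chains of 8 cells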

As part of the company’s TestKompress™ technology, Mentor Graphics has developed a ring generator, a form of linear finite state machine (LFSM), that fills multiple scan chains (Figure 2). With feedback and a continuous input of a few bits of data from the ATE, the on-chip generator provides the required patterns from a relatively small circuit. A network of exclusive OR gates distributes the ring outputs to the scan chains.3

Figure 2. A 32-Stage Ring Generator Implementing the Polynomial X^32 + X^18 + X^14 + X^9 + 1
Source: Mentor Graphics
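
The following rough Python sketch conveys the general continuous-flow idea rather than Mentor's actual circuit: a small linear machine built around the Figure 2 polynomial is perturbed every shift cycle by a couple of bits from the ATE, and an XOR network derives one bit per scan chain from its stages. The injection points and XOR connections are arbitrary choices for illustration.

STAGES = 32
TAPS = (32, 18, 14, 9)                       # Figure 2 polynomial: x^32 + x^18 + x^14 + x^9 + 1
INJECT = (0, 16)                             # stages that receive ATE bits each cycle (assumed)
XOR_NET = [(0, 5, 11), (2, 9, 21), (4, 13, 27), (7, 19, 30)]   # one XOR tap set per chain (assumed)

def step(state, ate_bits):
    """Advance the generator one shift cycle; return (new_state, one bit per scan chain)."""
    chain_bits = [state[a] ^ state[b] ^ state[c] for a, b, c in XOR_NET]
    fb = 0
    for t in TAPS:
        fb ^= state[t - 1]
    new_state = [fb] + state[:-1]            # linear shift with feedback
    for pos, bit in zip(INJECT, ate_bits):
        new_state[pos] ^= bit                # continuous injection of compressed ATE data
    return new_state, chain_bits

state = [0] * STAGES
state, bits = step(state, ate_bits=(1, 0))   # a few ATE bits feed many chains each cycle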

For completely on-chip built-in self-test (BIST) solutions, ideally all the device circuitry would be tested without involving conventional ATE. However, relying only on pseudorandom vector testing may not achieve a high test-coverage percentage for certain types of circuitry.

In these cases, ATE performs functional test of part of the IC. Also, scan-accessible test points often are inserted into the design to improve testability. It is common to use ATE to control the BIST circuitry, although this activity may involve only a few pins.

Having no requirement for additional test points is one of the major advantages of automated scan design processes such as TestKompress or Cadence’s Testbench tools. It is not so much a matter of the test points themselves as it is one of not disrupting the design flow to insert them.

In test-vector compression schemes, the input mechanism is only part of the story. The output data from all of the scan chains must be compressed before it can be compared with the expected response. A wide variety of mechanisms has been developed to complement input decompression schemes. For example, the multiple input signature register (MISR) often is used in combination with an LFSR (Figure 3). Mentor’s selective-response compactor (SRC) works in combination with the ring-generator LFSM.

Figure 3. Typical LFSR/MISR Scan Test Approach

These complementary pairs of circuits determine the effectiveness of the scan test process and place constraints on the operation of the ATPG. For example, MISR compression is lossy, so it cannot preserve all of the data that might aid fault diagnosis. Combinations of bits may mask faults, and aliasing can occur in which multiple faults are detected only if the number of faults is odd; an even number of faults leaves the output state unchanged.
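
A small Python sketch, assuming a 16-bit MISR with the same feedback polynomial as Figure 1 and four scan chains, shows both the compaction and the aliasing risk: a single flipped response bit changes the signature, while a deliberately chosen pair of errors cancels out and goes unnoticed.

WIDTH, TAPS = 16, (16, 5, 3, 2)

def misr_signature(response_slices):
    """response_slices: one list per shift cycle, one response bit per scan chain."""
    state = [0] * WIDTH
    for slice_bits in response_slices:
        fb = 0
        for t in TAPS:
            fb ^= state[t - 1]
        state = [fb] + state[:-1]                      # shift with feedback
        for i, b in enumerate(slice_bits):
            state[i % WIDTH] ^= b                      # fold every chain's bit into the register
    return state

good  = [[0, 1, 1, 0], [1, 0, 0, 1], [1, 1, 0, 0]]     # fault-free response, 3 cycles x 4 chains
bad   = [[1, 1, 1, 0], [1, 0, 0, 1], [1, 1, 0, 0]]     # one flipped bit
alias = [[1, 1, 1, 0], [1, 1, 0, 1], [1, 1, 0, 0]]     # two errors chosen so they cancel
print(misr_signature(bad)   != misr_signature(good))   # True: the error is caught
print(misr_signature(alias) == misr_signature(good))   # True: aliasing, the errors are missed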

Also, because the outputs of the scan chains must correspond exactly to the expected simulated states, any don’t-care bits will corrupt the eventual signature. Unfortunately, some uncontrolled circuit elements such as memories may affect the contents of scan chains. Don’t-care states must be eliminated if MISR compression is used.

Mentor’s SRC is more complex than a MISR, but it avoids some of these problems. Additional control registers driven from the ATE are used to program the output compactor to ignore don’t-care bits from certain scan chains during a particular test cycle. Data from the ATE is required to fill these registers, but it is a small overhead and has little effect on the overall compression factor.
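
A generic illustration in Python, and not Mentor's actual compactor, conveys the idea: a per-chain mask register, loaded from the ATE ahead of the test cycle, forces any chain that could carry an unknown value to a known constant before its output reaches the compactor.

def masked_slice(slice_bits, mask):
    """mask[i] == 0 means ignore chain i this cycle: force its bit to a known 0."""
    return [b if m else 0 for b, m in zip(slice_bits, mask)]

mask = [1, 0, 1, 1]                     # chain 1 may capture an unknown (X) value this cycle
raw  = [1, "X", 0, 1]                   # raw chain outputs; "X" marks the unknown value
safe = masked_slice(raw, mask)          # [1, 0, 0, 1]: the X never reaches the signature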

These considerations highlight the ATPG’s job of determining precisely how the states of all the scan-chain latches will propagate to the output, including the effect of the output compression circuit. Compressed data is compared with a stored expected response just as in uncompressed systems. In this case, however, only the fact that a fault has occurred can be determined, not what it was.

When a fault is detected, a bypass facility allows the output from the relevant scan chain to be shifted out in its entirety. To do this, the test vector must be reloaded and the test cycle run again, but with the bypass mode selected. The ATE will correspondingly use a full-length expected response for comparison with the scan chain output, which allows the source of the problem to be pinpointed.

Scan-In Context

Scan test-vector compression is an elegant technical achievement, but it has been undertaken to address the cost of test, one of semiconductor manufacturing’s biggest problems. Cadence’s Mr. Todd explained, “Controlling and reducing the cost of test involve scan test-data compaction as a key element. However, compaction is only part of a comprehensive analysis and solution set that must be applied to create the most cost-effective solution.

“The cost of test basically is a function of the cost of the tester and the time spent on the tester,” he continued. “Cost is most effectively controlled by applying a set of methods that depends on the design, the cost of test, and tester environmental constraints rather than imposing a fixed compression scheme. Here are some examples of possible methods:

  • “Techniques for reduced pin-count testing extend the life of depreciated testers to next-generation packaging I/O profiles.
  • “Tester repeat op codes dramatically reduce test data volume in some applications.
  • “On-chip MISRs effectively reduce the test data volume and the test time simply by freeing up more pins for scan-in.
  • “I/O test structures facilitate testing of I/O logic without requiring tester pin contact.”

High-speed device testing and new types of faults are two other current-generation manufacturing issues. They both relate to smaller device geometry and affect the use of scan testing. In a paper discussing at-speed scan testing, two methods of operating scan chains to detect delay faults are presented.

“In the skew-load method, the last shift clock is applied at-speed to launch the transition, followed by an at-speed functional clock to capture [the effects of] the transition into a scan element…. In contrast, the broadside method does not rely on an at-speed shift clock. Instead, it uses an at-speed functional clock to launch the transition, followed by one or more at-speed functional clocks to capture [the effects of] the transition.”4

The broadside method is the more practical approach to implement at high speeds because the scan-chain shift clock does not have to operate at high speed. However, the use of two or more functional clocks implies the need for sequential rather than only combinational test vectors. Sequential patterns are more difficult to generate. And it’s not just this requirement that will challenge ATPG performance.
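
Written out as simple event sequences, with names that are placeholders rather than any tool's terminology, the two clocking styles differ only in where the at-speed launch comes from:

skew_load = [
    ("scan_enable=1", "shift clocks at slow rate"),    # load all but the final bit
    ("scan_enable=1", "last shift clock at speed"),    # the final shift launches the transition
    ("scan_enable=0", "capture clock at speed"),       # functional clock captures the result
]

broadside = [
    ("scan_enable=1", "shift clocks at slow rate"),    # load the complete pattern slowly
    ("scan_enable=0", "launch clock at speed"),        # first functional clock launches
    ("scan_enable=0", "capture clock at speed"),       # second functional clock captures
]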

The authors of the paper stated that at-speed testing is no longer considered an experimental addition to a test suite. ATPG tools now are expected not only to accurately report at-speed coverage, but also to provide the complete framework for at-speed testability analysis. ATPG tools will be required to provide the same level of diagnostic support for at-speed faults as for stuck-at test patterns today. Finally, ATPG tools must work within greater constraints because all at-speed testing should be restricted to latch-to-latch paths only.

High-speed testing has long been the forte of BIST schemes. Dwayne Burek, the chief technical director-marketing at LogicVision, said, “BIST represents the ultimate in test-vector compaction because the stimulus and response are fully contained on the device under test. Logic BIST can run concurrently on multiple clock domains operating at different frequencies. LogicVision’s logic and memory BIST have been used successfully to test at frequencies up to 500 MHz.”

Traditionally, the term BIST has referred to approaches in which all test vectors were generated by on-chip circuitry. Typically, a PRPG loaded associated scan chains with test data. The scan chain outputs were compressed via a MISR, and at the end of the test, the output signature was compared to an expected result. To improve fault coverage and reduce the number of test cycles required, many schemes now seed the PRPG with specific starting values to better control the sequence of vectors that follows.

The distinction between BIST and ATPG technologies is blurring, and so is the terminology. For example, SoCBIST from Synopsys, a product that works with the company’s TetraMAX ATPG, “uses test patterns that are generated by a deterministic ATPG, compressed into LFSR seeds and signatures, and applied to a modified logic BIST architecture….”5

Definitions seem to be matters of degree. In a logic BIST scheme, applying a starting value, or seed, to a PRPG may produce many test cycles worth of pseudorandom scan chain patterns before another seed is applied. On the other hand, if a seed is applied to on-chip test hardware at the start of each test cycle and represents a deterministic test vector, this appears to be a case of compacted test pattern decompression and not BIST at all.
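
As a rough Python sketch of what compressing patterns into seeds can mean, and not any vendor's actual algorithm, consider a single scan chain filled from the Figure 1 LFSR. Every cell it loads is a GF(2) linear function of the 16 seed bits, so a deterministic test cube with only a few care bits usually can be encoded by solving a small set of linear equations; the don't-care positions simply take whatever pseudorandom values fall out.

def lfsr_seed_for_cube(cube, taps=(16, 5, 3, 2), n=16):
    """cube: a string of '0', '1', or 'X' per scan cell in load order.
    Returns an n-bit seed (as an int) whose LFSR output matches every care bit,
    or None if the care bits cannot be encoded with this LFSR."""
    state = [1 << j for j in range(n)]          # track each stage as a mask over seed bits
    rows, rhs = [], []
    for cell in cube:
        if cell in "01":
            rows.append(state[-1])              # the serial output is the last stage
            rhs.append(int(cell))
        fb = 0
        for t in taps:
            fb ^= state[t - 1]                  # XOR feedback is a GF(2) sum of masks
        state = [fb] + state[:-1]
    return solve_gf2(rows, rhs, n)

def solve_gf2(rows, rhs, n):
    """Gauss-Jordan elimination over GF(2); each row is an n-bit mask."""
    rows, rhs, r, pivots = rows[:], rhs[:], 0, {}
    for col in range(n):
        p = next((i for i in range(r, len(rows)) if (rows[i] >> col) & 1), None)
        if p is None:
            continue
        rows[r], rows[p], rhs[r], rhs[p] = rows[p], rows[r], rhs[p], rhs[r]
        for i in range(len(rows)):
            if i != r and (rows[i] >> col) & 1:
                rows[i] ^= rows[r]
                rhs[i] ^= rhs[r]
        pivots[col] = r
        r += 1
    if any(row == 0 and b for row, b in zip(rows, rhs)):
        return None                             # the care bits over-constrain this LFSR
    seed = 0
    for col, row in pivots.items():
        if rhs[row]:
            seed |= 1 << col                    # free seed bits default to 0
    return seed

cube = list("X" * 64)                           # 64 scan cells, almost all don't-cares
for pos, val in [(3, "1"), (7, "0"), (10, "1"), (13, "0"), (19, "1")]:
    cube[pos] = val                             # only five care bits
seed = lfsr_seed_for_cube("".join(cube))
print("not encodable" if seed is None else f"16-bit seed {seed:#06x} replaces 64 pattern bits")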

The Relationship Between Test Data Compression and Failure Analysis

by Bernd Koenemann, Cadence Design Systems

To achieve data compression, response data streams are reduced into error-detecting codes such as the signatures made popular in classical logic BIST. However, compression into codes like signatures is inherently lossy. While a signature retains enough information to facilitate some trend analysis, it is impossible to reconstruct from it exactly where in the scan chains failures were captured, which is crucial information for failure analysis.


Figure 4. Failure Analysis in IC Production

Signature-based techniques rely on repeating the failing tests indicated by failing signatures and then unloading uncompressed scan-chain contents for diagnostic analysis. This causes issues on ATE with limited fail-buffer capabilities, which may make it impossible simply to dump the scan-chain contents in full. Instead, complete bit-level compare data for the failing test is needed to filter out the failing scan cells for more limited datalogging.

An interesting logistical exercise is balancing how much compare data must be online during production test against how much fail data can be logged. ATE that can efficiently dump several tests’ worth of complete bit-level scan-chain contents without needing compare data greatly simplifies the process.

The success of the new breed of low-cost DFT testers depends on the success of data-compression techniques. However, the cost of test spans more than the cost of the tester; it also includes the cost of diagnosing failures during initial prototypes, early production, and high-volume production runs. ATE manufacturers can help users realize the value of logic diagnostics by including adequate data-dump capabilities to facilitate cost-management initiatives throughout the entire product life cycle.

For example, rapid defect learning from production testing is increasingly important for yield improvement. Automating the process involves electrical fail data from the production test equipment during wafer sort, in-line inspection images collected during wafer processing, and design data and test patterns to drive logic diagnostic tools and facilitate logical-to-physical mapping to guide physical failure analysis. Figure 4 illustrates this process.

Summary

As many test professionals have long been aware, and as more became aware after Pat Gelsinger’s often-quoted ITC 1999 keynote address, cost of test is the issue. Test-vector compression is only one means of reducing test cost, and a number of schemes are vying for market share now that cost of test has become the industry’s dominant concern. You need to understand the differences among the approaches to make an informed choice.

Whether a product is positioned as a means of compressing ATPG vectors or as a way to give BIST structures better fault coverage, vendors of most compression approaches promise similar benefits. The initial choice is between a true BIST solution and one of the test-vector compaction methods, followed by selection of the product that best matches your needs. Realistically, however, few designers or test engineers have had a truly free choice in this matter.

Because test-circuit insertion and test-vector generation are intimate parts of the design flow, the only practical solution for most companies has been to use the test tools that complemented the electronic design automation (EDA) tools they already had in place. As more EDA products join the trend toward standards-based open architectures using the standard test interface language (STIL) and its extensions, mixing tools from different vendors should become a viable option.

References

  1. Kapur, R. and Chandramouli, M., “Where Is DFT Heading?,” EE-Evaluation Engineering, February 2003, p. 49.
  2. TetraMAX™ ATPG, High-Performance Automatic Test Pattern Generator Methodology Backgrounder, Synopsys, May 1999.
  3. Embedded Deterministic Test (EDT)—DFT Technology for High-Quality Low-Cost Manufacturing Test, Mentor Graphics, 2003.
  4. Tendolkar et al., Scan-Based At-Speed Testing for the Fastest Chips, Mentor Graphics, 2001.
  5. DFT Compiler SoCBIST, Deterministic Logic BIST, Synopsys data sheet, www.synopsys.com/products/test/dft_socbist_ds.html


Published by EE-Evaluation Engineering
All contents © 2003 Nelson Publishing Inc.
No reprint, distribution, or reuse in any medium is permitted
without the express written consent of the publisher.

June 2003
