The gate count and complexity of today's chip designs are growing at a rapid pace. The IBM merchant ASIC business now averages 1.3 million gates per design, with designs crossing the 5 million-gate (random logic) mark for the first time last year. Within these larger designs are structures that are more difficult to test: RAMs, ROMs, cores, data pipelines, three-state buses, and multiple clock domains all hamper the ability to create high-quality tests.
At the same time, design cycles are not getting any longer. A design and test approach is needed to help cut through the added difficulties of size, complexity, and time. Using a proven design-for-test (DFT) methodology is a good start.
A DFT approach at the chip level alone is not sufficient. A DFT methodology must reach into all levels of the design. Today, functional units within a single design are as large as entire chips were just a few years ago. When all these units come together in a single design, a testability problem in any one of them can affect all the others, lowering the overall chip testability.
Why DFT?
There are two major reasons for having DFT:
Higher quality. This means better fault coverage in the design, so that fewer defective parts make it out of manufacturing (escapes). There is a constraint, however: manufacturing test is limited in the number of patterns that can be applied, since time on the ATE adds to the cost.
Test production also must not take long. If the tests take too long to produce, the product cycle is impacted every time a design change requires new test patterns. Designs implemented with DFT can have tests produced for them both faster and with higher quality, reducing the time spent in manufacturing and improving shipped-product quality levels.
Easier and faster debug and diagnostics when there are problems. Attend any conference where test is an issue and you’ll notice that more people are talking about diagnostics now than just a few years ago. As designs become larger and more complex, diagnostics become more of a challenge.
Just as Automatic Test Pattern Generation (ATPG) is used as a testability analysis tool (an expensive exercise that late in the design cycle), diagnostics now often are used the same way. Diagnosis of functional failures or field returns can be very difficult, and an initial zero-yield condition can cause weeks of delay without an automated diagnostic approach. Diagnosing failures against ATPG patterns from a design with good DFT, however, can be relatively quick and accurate.
Uses of DFT
In the life cycle of the chip, DFT can be used during:
Design.
Test generation.
First silicon.
Chip test.
Board-level test.
System-level test.
DFT is implemented during the design phase (Figure 1). Test structures are inserted, usually through some combination of automatic and manual interaction. It also is at the design phase that the greatest savings can be made in ensuring the testability of the design: since the costs of test and diagnostics increase at every later stage, this phase is where the major investment should be made.
Test analysis tools can measure the testability of each component of the design from individual functional units through all design hierarchies to the full chip. Problems should be addressed at each level, and strict targets must be set and met.
To increase testability, maximize random-pattern testability by using tools that measure random-pattern resistance and then altering the design or adding test points. This is easiest to do on the smallest units in the design rather than on the entire chip, which means it is easiest when done as part of the design-team methodology, not exclusively as a function of a separate test group.
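To see what random-pattern resistance looks like, consider a minimal sketch (ours, not from the article) of the classic example: a wide AND gate. A stuck-at-0 fault on the output of a 16-input AND is detected only by the all-ones input pattern, so random patterns almost never catch it; a test point that observes the gate's halves directly restores coverage.

```python
import random

def detects_output_sa0(pattern):
    """The output stuck-at-0 of a 16-input AND is observable only when
    the fault-free output is 1, i.e. when every input is 1."""
    return all(pattern)

random.seed(1)
TRIALS = 100_000
hits = sum(detects_output_sa0([random.randint(0, 1) for _ in range(16)])
           for _ in range(TRIALS))
# Expected detections: TRIALS / 2**16, i.e. about 1 or 2 in 100,000
print(f"{hits} of {TRIALS} random patterns detect the fault")

# An inserted observation point on each 8-input half raises the
# per-pattern detection probability from 2**-16 to 2**-8 -- the kind
# of change a random-pattern resistance tool would flag.
```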
The next stage of the design is the test-generation phase, where DFT helps speed the ATPG process. By making the problem a combinational one, ensuring non-RAM memory elements are scannable, and sectioning off areas of the logic that may require special testing, pattern generation for the chip logic often can be both quick and of high quality.
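As an illustration (our sketch, not IBM's tooling), scan is what reduces sequential ATPG to a combinational problem: because every flip-flop can be loaded and observed through the scan chain, the pattern generator only has to justify values across the combinational logic between flops, never search for a state sequence.

```python
def scan_cycle(scan_in_bits, comb_logic, primary_inputs):
    """One scan test: shift a chosen state into the scan chain, pulse the
    functional clock once to capture the combinational response, then
    shift the captured state back out for observation."""
    state = list(scan_in_bits)                    # shift phase: reach any state directly
    captured = comb_logic(state, primary_inputs)  # capture phase: one functional clock
    return captured                               # shift-out phase: every flop is observable

# Hypothetical combinational block between two flops:
# next s0 = s0 XOR (s1 AND pi0); next s1 = s0
def comb(state, pi):
    return [state[0] ^ (state[1] & pi[0]), state[0]]

# Without scan, reaching state [1, 1] might take many functional cycles;
# with scan it is a single shift.
print(scan_cycle([1, 1], comb, [1]))  # -> [0, 1]
```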
Another consideration during the ATPG phase is the use of additional fault models. The traditional stuck-at fault model is the workhorse of the industry. However, many defects exist that are not easily modeled by stuck-at faults.

DFT can help detect these other fault types. When a design is easy to control with ATPG patterns, it often is easier to control with patterns that target nontraditional faults as well; with poor DFT, additional fault modeling and successful detection of those faults can be a laborious effort.
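For readers new to the stuck-at model, here is a minimal sketch on a hypothetical two-gate circuit (not from the article): every line is assumed stuck at 0 or 1, and a pattern detects a fault when the faulty circuit's output differs from the good one. Note how a single pattern can detect several faults at once, which is also what keeps pattern sets small.

```python
from itertools import product

# Hypothetical circuit: n = a AND b; y = n OR c.  Faultable lines:
LINES = ["a", "b", "c", "n", "y"]

def simulate(a, b, c, fault=None):
    """Evaluate the circuit, forcing one line to its stuck value if a
    (line, value) fault is given; fault=None is the good machine."""
    def stick(name, val):
        return fault[1] if fault and fault[0] == name else val
    a, b, c = stick("a", a), stick("b", b), stick("c", c)
    n = stick("n", a & b)
    return stick("y", n | c)

faults = [(line, v) for line in LINES for v in (0, 1)]
for pattern in product((0, 1), repeat=3):
    detected = [f for f in faults
                if simulate(*pattern, fault=f) != simulate(*pattern)]
    print(pattern, "detects", detected)
# e.g. pattern (1, 1, 0) alone detects a/0, b/0, n/0, and y/0
```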
First silicon is where everything comes together and where the investment in DFT starts to pay dividends. Here the focus becomes defect-detection diagnostics and characterization.
Diagnostics can resolve chip failures both more quickly and more accurately. Whether the cause is model errors, pattern errors, process problems, or any number of other explanations, diagnostics are aided by DFT. Patterns can be quickly applied, additional patterns generated, and even additional fault models created. This is critical for timely yield improvement before product ramp-up.
The next area is chip production. Here DFT helps ensure overall shipped-product quality. High test coverage and small pattern counts act as the filter that separates working from nonworking wafers. For the working wafers, the diced chips are tested again to ensure working product.
DFT helps keep pattern sets small, which is important for reducing ATE time and costs, by enabling a single pattern to test for multiple faults. The higher the test coverage for a given pattern set, the better the quality of the production chips; more to the point, the fewer failing chips that make it into products, the lower the replacement costs.
Burn-in often is the next stop for chips. Here they are stressed in ovens to simulate temperature extremes and operational life. A variety of test patterns can be used at burn-in, but the two most common choices are the same deterministic patterns used for production test and built-in self-test (BIST) patterns, which limit the pin connections required. The better the DFT in the chip, the more likely the burn-in stage will expose meaningful chip-life-cycle failures.
A multichip module (MCM) example can make the importance of production-chip quality clearer. The probability of a faulty MCM equals 1 - (Fault Coverage)^(number of chips in the MCM). So a 10-chip MCM would have the following failure rates:
65% at 90% fault coverage.
9.6% at 99% fault coverage.
2% at 99.8% fault coverage.
It is easy to see how test coverage directly affects the number of MCMs that would have to be scrapped or reworked—either way, a very expensive proposition. Things get better as fault coverage increases. Of course, higher-density MCMs require higher fault coverage.
As MCMs embed 20, 30, or more chips, the fault coverage must be even better to ensure a working MCM. If each chip in your MCM has a failure rate of 1% and there are 50 chips in your MCM, then nearly 40% of the MCMs will fail. Depending on the product, one failure in 100 may be acceptable—or devastating.
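The figures above follow from treating fault coverage as the per-chip probability of shipping a good chip (the article's simplification; real escape rates also depend on yield). A quick check of the arithmetic:

```python
def mcm_fail_rate(per_chip_good, num_chips):
    """Probability that at least one of N independent chips is bad:
    1 - (per-chip probability of being good) ** N."""
    return 1 - per_chip_good ** num_chips

for fc in (0.90, 0.99, 0.998):
    print(f"10 chips at {fc:.1%} -> {mcm_fail_rate(fc, 10):.1%} faulty MCMs")
# 65.1%, 9.6%, 2.0% -- the article's three figures
print(f"50 chips at 99.0% -> {mcm_fail_rate(0.99, 50):.1%} faulty MCMs")
# about 39.5%, the 'nearly 40%' case
```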
At MCM and board level, DFT helps by testing the package using the boundary scan interface. This allows the chip interconnects to be tested to ensure that the chip (its internals already tested at chip manufacturing) can interact with other components on the board.
Some applications also allow the chip internals to be retested by scanning internal chip tests via an IEEE 1149.1 interface. Optionally, chips with BIST can be retested on the package using the BIST interface.
At the system level, DFT ensures that the replaceable units are working properly. Often using a BIST interface and frequently accessed via boundary scan, components can test themselves. If failures are discovered, then the failing components can be isolated and bypassed or replaced without needing to replace the entire system. This can be a tremendous savings in system replacement costs and customer downtime.
Summary
DFT can yield many benefits: a faster development cycle, better quality of results, easier diagnostics, and improved product quality.
DFT does not come free. It has an impact on the design cycle, at least initially, while the DFT techniques are being learned. It also can add to the design's circuit timing and chip area if implemented without a well-designed plan.
The overall benefits are a reduction in the design cycle, especially at the back end, and an improvement in quality, both of which extend with experience. Once designers are familiar with the tools and techniques of good DFT, the methodology becomes second nature to them. Ensuring good DFT becomes as natural as ensuring timing closure.
DFT requires strong tools and strong support. Test synthesis, test analysis, test generation, and diagnostic tools must handle a variety of structures within a single design, work with alternate fault models, and quickly produce results even on today's multimillion-gate designs. Support must be responsive to training and education needs, provide rapid changes and enhancements to meet technology and customer challenges, and offer credible experience to consult with customers on advanced techniques and alternative implementations without a loss in quality.
Without DFT in today's multimillion-gate integrated designs, quality and time-to-market can suffer. DFT must penetrate all levels of the design implementation; a top-down test methodology and a bottom-up implementation are needed. Changes made late in the process take longer to implement and cost more. DFT is a commitment, backed by methodologies, tools, and support, to making test considerations up front and all through the design. It is a commitment to producing a successful, quality, cost-contained design.
About the Authors
Randy Kerr, who joined the company in 1984, is a senior engineering manager in the IBM Test Design Automation group. Mr. Kerr is the development manager for the Test Synthesis software, part of TestBench. He graduated from Potsdam State University with a B.S. in computer science and a B.S. in political science. (607) 755-5860.
IBM Test Design Automation Group, 1701 North St., Endicott, NY 13760.
Ron Walther, a senior technical staff member in the Test Design Automation group, has been employed at IBM for almost 30 years. During that time, he has worked in Test DA tool development and in teaching and consulting on DFT techniques. Mr. Walther holds a B.S. in physics from the University of Oklahoma. IBM Test Design Automation Group, 11400 Burnet Rd., Austin, TX 78758, (512) 838-1077.
Brion Keller is a senior technical staff member in IBM's Test Design Automation group. He joined IBM in 1979 and now is the lead architect for the TestBench ATPG system. Mr. Keller received a B.S. in computer science and chemistry from Pennsylvania State University. (607) 755-8231.
Copyright 1999 Nelson Publishing Inc.
March 1999