Managing IC and Tester Complexity

For 30 years, ICs have been doubling in density every 18 months. Clock rates are increasing even more rapidly. Many people predict that technology cannot keep this pace, but recent data suggests that IC technology is, in fact, progressing faster than Moore's Law, possibly because the International Technology Roadmap for Semiconductors (ITRS) has become a competitive target for many companies.

As IC technology approaches fundamental physical and electrical limits, circuit complexity is growing ever faster. The situation is exacerbated by the pressure to reduce time to market.

The many stages of verification for complex ICs, such as a system on a chip (SOC), now consume more effort than the design itself. If verifying an SOC were as straightforward as verifying a system on a board, design and test productivity could increase dramatically.

Even when predesigned modules are used, the SOC design process is slowed because the blocks are not pretested and do not have standard, robust interfaces. The SOC can be designed at a high level, just like a system on a board, but it also must include design for test (DFT) and transistor-level verification.

To test a board, a test engineer can use direct-access, high-frequency probes, which are impractical to use inside an SOC. In an SOC, all test access must use the same technology as the circuit under test. For multimillion-gate SOCs or boards, gate count is not an issue. Adding logic gates to manage complexity and simplify verification is a well-accepted trade-off.

Tester Complexity

The complexity of today’s high-volume IC testers also is increasing rapidly. They can be very fast, operating at over 1 GHz with better than 200-ps accuracy across 2,000 channels. They can be large, about the size of a small car. They are, however, expensive, with two channels costing the same as a small car, resulting in depreciation and operating costs of $0.05 to $0.10 per second of test time.

At the 1999 International Test Conference, Pat Gelsinger, vice president and chief technical officer of Intel’s architecture group, stated that improvements in tester accuracy are not keeping up with decreases in chip circuit-path delays (Figure 1). As a result, an increasing percentage of devices fail due to tester timing inaccuracy. In a few years, this will dramatically reduce yield unless testing is performed in a very different way.
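The squeeze described above can be illustrated with simple arithmetic. The tester accuracy and device cycle times in this sketch are illustrative assumptions, not figures from Intel or the article:

```python
# Sketch: a tester's fixed timing inaccuracy consumes a growing share of
# the device's timing budget as cycle times shrink. Numbers are illustrative.
def guardband_pct(tester_accuracy_ps, cycle_time_ps):
    """Percentage of the device's cycle time consumed by tester timing
    inaccuracy. Good devices whose true margin falls inside this band
    are rejected even though they meet spec."""
    return 100.0 * tester_accuracy_ps / cycle_time_ps

# The same 200-ps tester eats a growing share of each faster generation:
for cycle_ps in (5000, 2000, 1000, 500):   # 200-MHz to 2-GHz parts
    print(f"{cycle_ps}-ps cycle: {guardband_pct(200, cycle_ps):.0f}% guardband")
```

At a 500-ps cycle time, 40% of the timing budget goes to tester uncertainty, which is why the failure rate attributable to the tester, rather than the silicon, keeps climbing.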

Design for Hierarchical Verification

Hierarchy, simply stated, hides lower-level details from higher-level views. If a circuit block on a chip has a robust interface, it is easier to verify the block’s function independently of the other blocks, permitting parallel testing of blocks to save test time. Each block and its interface must be documented at each level in the hierarchy.

Obviously, you must use a standard, simulatable hardware description language such as VHDL or Verilog. To progress further, the Virtual Socket Interface Alliance (VSIA), a group of more than 200 companies worldwide, has developed standards for supplying, documenting, and interfacing virtual components. VSIA specifications rely on existing and emerging standards whenever possible.

The inputs and outputs of an IC can be described by using the I/O buffer information specification (IBIS) standard, the IEEE 1149.1 standard that specifies boundary-scan test access circuitry, and its boundary scan description language (BSDL). Correspondingly, within the IC, P1500 is an emerging IEEE standard that defines the construction of the boundary of on-chip circuit blocks in an SOC, together with a core test language (CTL).

Performance should be verified at every level of a circuit hierarchy. The simplest way to test complex digital circuits at the lowest level is to use scan design techniques, which add a shift-register mode of operation to existing flip-flops in a chip. Test time is proportional to the number of flip-flops in the longest scan shift register, so for multimillion-gate ICs, practical test times can be achieved only by using hundreds of parallel scan registers and hundreds of scan-in and scan-out pins. Basic scan testing uses only one level of hierarchy: the gate level.
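The scan test-time relationship can be sketched numerically. The flip-flop count, pattern count, and shift-clock rate below are illustrative assumptions:

```python
import math

# Sketch: scan test time scales with the longest scan chain, so splitting
# the flip-flops into parallel chains divides test time accordingly.
def scan_test_time_s(flip_flops, chains, patterns, shift_clock_hz):
    """Seconds to shift 'patterns' scan patterns through 'chains'
    equal-length scan shift registers at 'shift_clock_hz'."""
    longest_chain = math.ceil(flip_flops / chains)
    return longest_chain * patterns / shift_clock_hz

# 2 million flip-flops, 10,000 patterns, 50-MHz shift clock:
print(scan_test_time_s(2_000_000, 1, 10_000, 50e6))    # 400.0 s: impractical
print(scan_test_time_s(2_000_000, 400, 10_000, 50e6))  # 1.0 s with 400 chains
```

Splitting one 400-second chain into 400 parallel chains brings test time down to one second, which is why hundreds of scan-in and scan-out pins become necessary.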

Typically, embedded test is applied to the entire IC, but it also is possible to make a scannable circuit block suitable for embedded test, facilitating a more hierarchical testing approach.

A new way to test very large ICs is hierarchical embedded test, in which each large circuit block has its own embedded test controller. This enables the SOC designer to consider each block as a pretested unit, finally allowing SOC design to be more like conventional board-level design.

This approach permits each block to be independently designed for embedded test, before other blocks of the SOC are complete. It also allows blocks and, more importantly, their tests to be reused and each block to be tested and diagnosed independently at speed.

The IEEE 1149.1 test access port (TAP) provides access to ICs for test and in-system programming. Recently, the 1149.1 standard has been extended for analog testing as IEEE 1149.4, adding an analog input and an analog output pin. The 1149.4 standard is the result of eight years of work by engineers from many companies worldwide and was published by the IEEE in March 2000. Although new standards take years to become widely adopted, National Semiconductor partnered with LogicVision to announce its intention to be the first company to provide a general-purpose 1149.4 IC for board test access.

Should small ICs be designed for hierarchical verification? Small circuits can be more useful if they are designed to fit into the hierarchy of larger circuits to allow an SOC designer to connect, verify, validate, and test.

Distributed Test Resources

Distributed processing is another proven way to manage tester complexity. Simply stated, it means putting resources close to where they are needed. In multinational corporations, for example, computers have evolved from a single mainframe into many PCs and a server. We now see the same evolution in test as test resources become embedded in chips.

To distribute resources, robust interfaces between the resources are essential, just as they are in a hierarchy. These interfaces must support high-speed information transfer, yet distributed test resources can become impractical if too many kinds of information must be conveyed. Fortunately, structural testing reduces the diversity of information.

Structural testing means checking enough circuit functions to ensure that all structural defects are detected. Scan-path testing is the best-known example. 
In 1993, Motorola PowerPC designers reported that the key benefit of scan design was allowing quick debugging of the hardware and software. In 1999, Sun Microsystems disclosed its efforts to minimize functional testing and maximize structural testing of its UltraSPARC to improve diagnostic capability.

Scan also allows the use of easily generated pseudorandom test patterns. Random patterns provide the best coverage of unmodeled faults because they are, statistically speaking, unbiased. However, more random patterns usually are needed to detect a given type of fault, such as stuck-at faults, so at-speed stimulus generation and results compaction are necessary. At-speed testing not only is faster, it also can detect delay faults.
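On chip, such pseudorandom patterns typically come from a linear-feedback shift register (LFSR). The 16-bit width and tap positions in this sketch are illustrative assumptions (one well-known maximal-length choice), not details from the article:

```python
# Sketch: pseudorandom scan stimulus from a Fibonacci LFSR, the usual
# hardware-friendly generator. Width and taps are illustrative.
def lfsr_patterns(seed, taps, width, count):
    """Yield 'count' pseudorandom test patterns from a Fibonacci LFSR."""
    state = seed
    mask = (1 << width) - 1
    for _ in range(count):
        yield state
        # XOR the tap bits to form the feedback bit, then shift it in.
        feedback = 0
        for t in taps:
            feedback ^= (state >> t) & 1
        state = ((state << 1) & mask) | feedback

# Taps (15, 13, 12, 10) give a maximal-length 16-bit sequence that
# cycles through all 65,535 nonzero states before repeating:
patterns = list(lfsr_patterns(seed=0xACE1, taps=(15, 13, 12, 10),
                              width=16, count=4))
```

A few XOR gates and a shift register generate a new pattern every clock cycle, which is what makes at-speed stimulus generation cheap in silicon.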

Embedded test is an example of placing structural test resources as close as possible to where they are needed; that is, entirely on the chip. Built-in self-test (BIST) is a well-known component of embedded test. AMD and Motorola are among the companies that use embedded test for memory in microprocessors, which must be extremely competitive in silicon area and performance.

Embedded test uses on-chip stimulus generation and produces a compressed summary of scan test results. This provides low-bandwidth access, which permits lower-cost test equipment or existing large testers to check more devices in parallel. An essential aspect of distributing test resources is low-bandwidth test access, which is difficult for at-speed scan testing to achieve. Nevertheless, embedded test should be viewed as only one technique within a continuum of distributed test-resource possibilities.
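One common way to produce the compressed summary described above is a multiple-input signature register (MISR), which folds every scan-out cycle into a single signature. The register width and feedback polynomial here are illustrative assumptions:

```python
# Sketch: compacting scan-out responses into one signature so only a few
# bits must leave the chip. Width and polynomial are illustrative.
def misr_signature(scan_out_words, width=16, poly=0x1021):
    """Compact a stream of scan-out words into one 'width'-bit signature."""
    sig = 0
    mask = (1 << width) - 1
    for word in scan_out_words:
        sig ^= word & mask          # fold in this cycle's scan-out bits
        msb = (sig >> (width - 1)) & 1
        sig = ((sig << 1) & mask) ^ (poly if msb else 0)
    return sig

good = misr_signature([0x1234, 0xABCD, 0x0F0F])
bad  = misr_signature([0x1234, 0xABCD, 0x0F0E])  # one flipped response bit
print(good != bad)  # True: a defect perturbs the signature
```

Only the final signature, not the full response stream, leaves the chip, which is what makes low-bandwidth test access possible even for at-speed scan.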

Recognition of the need for distributed test resources led to the creation of a workshop on the topic. The first Workshop on Test Resource Partitioning was held in October 2000 as part of the International Test Conference.

One hotly debated topic was the meaning of test resource partitioning (TRP). In general, TRP refers to the distribution of test capabilities and resources among the circuit block under test, the rest of the IC, the device interface board, the test head, the tester mainframe, the controlling workstation, and software.

The 1149.4 analog bus is a low-frequency test access bus that facilitates distributing mixed-signal test resources. The standard effectively requires a -3-dB bandwidth of only 100 kHz, although it can be as high as a designer chooses. This frequency is low enough that it won’t interfere with most circuit functions on a chip.

To access high-frequency signals, it is necessary to perform some type of compaction; under-sampling is a well-known example. High-speed sampling circuitry can be placed on-chip so that only a low-frequency analog signal is conveyed off-chip for analysis. Even on-chip radio-frequency circuit element values can be measured via a low-frequency bus. One tester company was able to directly measure capacitances as small as 5 pF with better than 1% accuracy using an 8-kHz signal on the 1149.4 bus.
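The under-sampling arithmetic can be sketched briefly: a repetitive high-frequency signal sampled at a low rate reappears at a predictable alias frequency. The frequencies below are illustrative assumptions, not the measurements cited above:

```python
# Sketch: under-sampling maps a repetitive high-frequency signal to a
# low-frequency alias that can be conveyed off-chip for analysis.
def alias_frequency_hz(f_signal_hz, f_sample_hz):
    """Frequency at which a sampled sinusoid appears after under-sampling."""
    f = f_signal_hz % f_sample_hz
    return min(f, f_sample_hz - f)

# A 100.01-MHz signal under-sampled at 1 MHz aliases down to 10 kHz,
# slow enough for a low-frequency test bus such as 1149.4:
print(alias_frequency_hz(100.01e6, 1e6))  # 10000.0
```

The on-chip sampler must still have high bandwidth, but the data leaving the chip does not, which is exactly the kind of compaction the paragraph above describes.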

Conclusion

The looming verification crisis is a result of circuit complexity, exacerbated by time-to-market demands. Hierarchy and distributed processing are proven ways to address the complexity inherent in large systems and can have similar implementation requirements (Figure 2).

The use of hierarchical embedded test can allow SOC design to become more similar to board-level design, which already handles billions of transistors. Putting test resources on-chip can greatly reduce complexity in external test equipment and the interfaces. These techniques already are in use for large SOCs, and recent industry activity suggests their use is spreading.

About the Author

Stephen Sunter has worked with mixed-signal ICs for more than 20 years, including 15 in IC design, three in test engineering, and the last five years as director of mixed-signal test at LogicVision. He was program chair for the International Mixed-Signal Testing Workshop for two years and vice chair of the P1149.4 Working Group for the last six years. LogicVision, 101 Metro Dr., 3rd Floor, San Jose, CA 95110, 408-453-0146, e-mail: [email protected].

Published by EE-Evaluation Engineering
All contents © 2001 Nelson Publishing Inc.
No reprint, distribution, or reuse in any medium is permitted
without the express written consent of the publisher.

March 2001
