Meeting time-to-market windows and improving the profitability of new products are not just important goals but matters of survival, especially given current economic conditions and market instability. With cost-cutting measures in place at most high-technology electronics companies, every aspect of the design, verification, and manufacturing process is under scrutiny for ways to improve productivity and reduce cost.
Not least among them is the manufacturing test and design-for-test (DFT) process. New test solutions, such as embedded deterministic test (EDT), reduce test data volumes and test times by up to a factor of 10 and can dramatically contribute to reducing overall test costs. But what about time-to-market? How can system-on-a-chip (SoC) engineers make the best use of DFT methodologies to ensure that they meet design schedules while improving test quality?
The foundation for any structured DFT methodology is scan design. Scan lets the large sequential functions implemented in a design be partitioned into small combinational blocks during test. Scan is the standard DFT infrastructure for delivering test data to internal nodes of the circuit and for observing their responses. With today's multimillion-gate SoC designs, scan is required to ensure efficient generation of high-quality manufacturing tests.
Indeed, scan is considered the basic building block for automating the entire test-generation process. Assuming that scan is a given, let's look at what other test-related structures are needed in SoC designs, and where automation can improve the process.
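The scan mechanism described above can be sketched in a few lines. This is a hypothetical software model, not a vendor flow: in scan mode the design's flip-flops form one long shift register, so the tester can load any internal state and shift out any captured response.

```python
class ScanChain:
    """Toy model of a scan chain: the design's flops act as a shift register."""

    def __init__(self, length):
        self.flops = [0] * length            # state of each scan flip-flop

    def shift_in(self, pattern):
        """Shift a test pattern into the chain, one bit per scan clock."""
        for bit in pattern:
            self.flops = [bit] + self.flops[:-1]

    def capture(self, combinational_logic):
        """One functional clock: capture the logic's response into the flops."""
        self.flops = combinational_logic(self.flops)

    def shift_out(self):
        """Shift the captured response out for comparison on the tester."""
        return list(self.flops)


# Example: a 4-flop chain driving a (made-up) block that inverts every bit.
chain = ScanChain(4)
chain.shift_in([1, 0, 1, 1])                     # load stimulus
chain.capture(lambda s: [b ^ 1 for b in s])      # capture the response
print(chain.shift_out())                         # observe it externally
```

The point of the model is the access pattern itself: stimulus reaches every internal node through the chain, and responses come back the same way, with no dependence on the device's functional operation.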
Structurally, today's SoCs are not much different from those developed several years ago. The big difference is that they run faster and pack in more, smaller transistors. As far as testing the internal structures of the device is concerned, the process can be separated into two basic areas: logic and memory.
Testing Logic: Scan and automatic test-pattern generation (ATPG) are the solutions of choice for ensuring the highest-quality test during manufacturing. Functional test strategies are losing popularity across the industry due to their high development cost. It has also become difficult, if not impossible, to grade the effectiveness of functional tests for large multimillion-gate designs. The simplicity and effectiveness of ATPG and scan-based test patterns directly address the problems of functional test patterns and offer several advantages.
Functional testing implies that tests are only delivered through functional operation of the device (Fig. 1a). Thus, it doesn't make use of scan to deliver tests to internal nodes of the device. Instead, functional testing relies on testing internal nodes by delivering stimulus through the external pins of the device and clocking the device as it would be under normal operation.
Generating this type of test for a large design, and ensuring that the entire design is adequately tested, requires extensive manual effort as well as in-depth knowledge of the design and its operation. One can envision the problems and effort required to test some small block of logic within a 5-million-gate design. In many cases, this takes thousands of clock cycles and primary input sequences just to get the right test data to that embedded block. The designer must then figure out how to get the data back out to primary output pins to observe a potential failure.
On the other hand, using scan-based test patterns provides access to internal nodes of the design and simplifies the problem into much smaller blocks of logic (Fig. 1b). Additionally, scan and ATPG enable the entire process of generating the test patterns to be fully automated. This ensures very high coverage in test patterns, as well as a predictable and repeatable process. Today, ATPG tools can generate very high-coverage test patterns for multimillion-gate designs in a matter of hours.
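The core ATPG idea can be illustrated on a deliberately tiny combinational block: find an input pattern for which the faulty circuit's output differs from the good circuit's. The circuit and fault below are invented for illustration, and real ATPG tools use structural algorithms rather than the exhaustive search shown here, which only works because the block has three inputs.

```python
from itertools import product

def good(a, b, c):
    """A small combinational block: (a AND b) OR c."""
    return (a & b) | c

def faulty(a, b, c):
    """The same block with input 'a' modeled as stuck-at-0."""
    return (0 & b) | c

def find_test():
    """Search for a pattern that makes the fault visible at the output."""
    for a, b, c in product([0, 1], repeat=3):
        if good(a, b, c) != faulty(a, b, c):
            return (a, b, c)    # this pattern detects the stuck-at fault
    return None                 # fault is undetectable

print(find_test())
```

Setting a=1 and b=1 activates and propagates the fault, while c=0 keeps the OR gate from masking it; that is exactly the activate-and-propagate reasoning an ATPG engine automates across millions of gates.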
Basically, ATPG has become the standard for generating tests to detect static defects (or stuck-at failures). But as device sizes shrink, testing for static failures alone may not be sufficient. The higher performance and increased levels of integration found in SoC designs are now leading to new types of failure mechanisms—and the need for new types of test. The microprocessor industry, where performance is critical, has led the way in "at-speed" testing. But now, even standard IC processes moving to 0.13 µm and below are supplementing standard "stuck-at" patterns with "at-speed" tests.
As the industry continues to move to these smaller geometries, the increased occurrence of timing-related defects will require that all manufacturing test adopt a strategy that involves "at-speed" testing. Again, the choice for generating these "at-speed" tests comes down to scan-based test patterns or functional patterns. For the reasons listed earlier, scan-based testing is quickly becoming the standard for "at-speed" testing. Scan-based techniques, using both transition testing and critical-path analysis, are maturing to the point where they can offer a completely automated solution in most cases.
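The transition-test idea behind at-speed scan patterns can be sketched as a two-vector sequence: the first vector sets up an initial value, the second launches a transition, and the response is captured one at-speed clock later. The timing numbers below are invented for illustration; the point is that a path slower than the clock period still presents its old value at capture time, so the pattern fails on a timing defect that a static stuck-at test would miss.

```python
def capture(old_value, new_value, path_delay_ns, clock_period_ns):
    """Value seen at the capture flop one at-speed clock after launch."""
    return new_value if path_delay_ns <= clock_period_ns else old_value

CLOCK_NS = 10                               # at-speed capture clock period
good_path_delay, slow_path_delay = 8, 14    # the slow path has a timing defect

launch_old, launch_new = 0, 1               # V1 then V2: a rising transition
print(capture(launch_old, launch_new, good_path_delay, CLOCK_NS))  # 1: passes
print(capture(launch_old, launch_new, slow_path_delay, CLOCK_NS))  # 0: fails
```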
Testing Memory: It's no secret that the percentage of memory on today's SoC designs is increasing. As a result, a critical piece of any overall DFT strategy is a comprehensive plan for testing memory. Based on predictions from the International Technology Roadmap for Semiconductors (ITRS 2001), the percentage of die area that memory occupies will increase from about 50% today to over 70% in the next five years. Memory built-in self-test (BIST) has emerged as the standard for testing large embedded blocks of memory (Fig. 2). It delivers proven algorithms for testing embedded memory.
Also, creating the memory BIST controller and inserting it into the design can be fully automated. Although most good register-transfer-level (RTL) designers can probably design their own memory BIST, why bother? Using an automated memory BIST tool saves weeks of implementation effort in the initial design, not to mention the amount of time saved in subsequent revisions of the design.
Automated memory BIST tools have many built-in algorithms to choose from and, in some cases, provide users with the flexibility to define their own custom memory-test algorithm. As with logic testing, "at-speed" testing is also important for memory. Some commercially available memory BIST tools offer unique solutions to "at-speed" memory test that improve the quality of test and reduce the test time needed.
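To make the notion of a built-in memory-test algorithm concrete, here is a software sketch of a march test of the kind a memory BIST controller runs in hardware. The specific algorithm (March C-) and the fault model are assumptions for illustration; each "march element" sweeps the address space in a fixed direction, reading the expected value and writing its complement, which exposes stuck-at and coupling faults.

```python
def march_c_minus(mem, size):
    """Run March C- over a memory; return True if the memory passes."""
    up, down = range(size), range(size - 1, -1, -1)
    for addr in up:                          # element 1: up (w0)
        mem[addr] = 0
    # elements 2-5: up (r0,w1), up (r1,w0), down (r0,w1), down (r1,w0)
    for order, expect in ((up, 0), (up, 1), (down, 0), (down, 1)):
        for addr in order:
            if mem[addr] != expect:
                return False                 # read back the wrong value
            mem[addr] = expect ^ 1           # write the complement
    for addr in up:                          # element 6: up (r0)
        if mem[addr] != 0:
            return False
    return True


class StuckCell(list):
    """A memory model (invented for the demo) whose word 3 is stuck-at-1."""
    def __setitem__(self, i, v):
        super().__setitem__(i, 1 if i == 3 else v)


print(march_c_minus([0] * 8, 8))             # healthy memory passes
print(march_c_minus(StuckCell([0] * 8), 8))  # the stuck cell is caught
```

In silicon, the same loops become a small address counter and comparator next to the array, which is why BIST scales to large memories but not to tiny ones.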
As the increase in memory content continues, one slight caveat is that not all of the additional memory inserted into SoC designs consists of large embedded arrays. There's a growing trend of adding many small and distributed arrays, register files, and FIFOs scattered throughout the design, which poses an interesting test challenge. In some cases, designs may contain hundreds of small embedded arrays. These embedded arrays might be so small that the overhead of the BIST circuitry alone makes BIST impractical. The BIST circuitry could contain more logic than the memory it's intended to test.
The arrays may also be in performance-critical areas of the design where designers can't afford the impact of the BIST circuitry. One option is to leave small arrays untested. Just one or two of them on a design may have little impact on the overall test quality. But in cases where there are hundreds of arrays, leaving them untested can have a negative effect on test quality and the number of defective parts that escape the final manufacturing test.
A relatively new solution called "macro testing" has emerged in the last couple of years to specifically deal with this trend of small distributed arrays. Macro testing is an automated technique that allows users to define the memory test patterns that they want to deliver to an embedded array (Fig. 3). It then uses ATPG and the design's existing scan cells to figure out how to deliver the test patterns to the embedded array. Because it uses existing scan cells to deliver the test pattern, no additional test logic or BIST circuitry is needed to test the small arrays. Furthermore, hundreds of small arrays can be tested in parallel, reducing the amount of test time and data necessary.
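The macro-testing flow can be pictured with a small model. The interface names below are hypothetical: the scan cells that already sit on the array's address, data, and control inputs deliver each user-defined vector, and the cell on the output captures the read value, so no dedicated BIST logic is added around the array.

```python
class ScanWrappedArray:
    """Toy model of a small array reached only through existing scan cells."""

    def __init__(self, words):
        self.mem = [0] * words
        self.addr = self.din = self.we = 0   # scan cells feeding the array
        self.dout = 0                        # scan cell capturing the output

    def load_scan(self, addr, din, we):
        """Shift address/data/control values into the existing scan cells."""
        self.addr, self.din, self.we = addr, din, we

    def pulse_clock(self):
        """One system clock: write or read the array through its ports."""
        if self.we:
            self.mem[self.addr] = self.din
        else:
            self.dout = self.mem[self.addr]

    def unload_scan(self):
        """Shift the captured read value back out to the tester."""
        return self.dout


# A user-defined pattern: write 0xA5 to word 2, then read it back.
array = ScanWrappedArray(words=4)
array.load_scan(addr=2, din=0xA5, we=1); array.pulse_clock()
array.load_scan(addr=2, din=0, we=0);    array.pulse_clock()
print(hex(array.unload_scan()))
```

Because each array only borrows scan cells it already shares with the surrounding logic, many such arrays can run their patterns in parallel, which is where the test-time savings come from.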
Boundary Scan: Most designs today implement some sort of boundary scan so that chip I/O can be accessed and interconnect tested at the board level (Fig. 4). The boundary scan controller has also emerged as the standard mechanism on SoC designs for initiating and controlling the multiple internal memory BIST controllers. Boundary scan is now a well-known and documented IEEE standard, and some test software vendors offer automated solutions.
Much like the case of memory BIST, an RTL designer can design his or her own IEEE-compliant boundary scan chain and associated controller. But to improve efficiency and time-to-market, an automated boundary scan tool can be used to let the RTL designers focus on critical areas of the functional design. Automating boundary scan can save weeks in initial implementation, and even more in all subsequent revisions that affect I/O and device-pinout assignments. If the memory BIST and boundary scan solutions communicate, the entire process of connecting the memory BIST controllers into the boundary-scan controller can be automated.
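A board-level interconnect test of the kind boundary scan enables can be sketched as follows. The net names, fault types, and the assumption that a floating net reads as 1 are all invented for the demo: one chip's boundary cells drive a pattern (in the style of the EXTEST instruction) onto the board nets, a neighboring chip's cells capture it, and any open or shorted net corrupts the captured value.

```python
def interconnect_test(nets, patterns):
    """nets maps each net name to a fault: 'ok', 'open', or 'short_to_gnd'."""
    failures = []
    for pattern in patterns:                  # e.g. walking-ones vectors
        driven = dict(zip(nets, pattern))     # chip A's boundary cells drive
        for net, fault in nets.items():       # chip B's boundary cells capture
            captured = {"ok": driven[net],
                        "open": 1,            # assume a floating net reads 1
                        "short_to_gnd": 0}[fault]
            if captured != driven[net]:
                failures.append((net, fault))
    return sorted(set(failures))


nets = {"n0": "ok", "n1": "open", "n2": "short_to_gnd"}
walking_ones = [(1, 0, 0), (0, 1, 0), (0, 0, 1)]
print(interconnect_test(nets, walking_ones))
```

Walking a single 1 across the nets ensures that every net is driven both high and low while its neighbors hold the opposite value, which is why this classic pattern set distinguishes opens from shorts.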
The ultimate goal of manufacturing test in SoC designs is to separate bad devices from good ones. Eventually, it comes down to quality. The better the manufacturing tests, the less likely it is that a defective part will escape the test process and make it to the end customer. As explained, the most common test methodologies—including scan and ATPG, memory BIST, and boundary scan—are available today to fully automate the creation and insertion of test logic, and the creation of the final manufacturing test patterns. High-quality manufacturing tests and improved time-to-market don't have to be at odds. Automating the DFT process can provide both.