Testing Embedded Memories

To meet the growing demands for speed in their final applications, standard microprocessors, specialty microcontrollers, and custom ASICs incorporate a variety of specialty cells on the die with the processor core (Figure 1). As much as 70% of this specialized cell area may be dedicated to memory.

Early embedded memories usually were six-transistor static RAMs that did not require refresh; consequently, test requirements were modest. However, the newest processor designs often incorporate smaller, less stable, four-transistor SRAMs or even single-transistor dynamic RAMs.

Wherein Lies the Fault?

This growth of embedded memory translates into new failure types and new test requirements. Fab-line technologies for memory and VLSI devices are not the same. When DRAMs are embedded in a logic chip, unavoidable compromises are made to merge these technologies.

Similarly, logic testing and memory testing are different. Logic testing verifies that each gate can go high or low and that no gate is stuck in one state. Memory testing is more concerned with disturb, pre-charge, and decoder testing. When testing a device containing both cell types, the requirements of each must be accommodated.

Frequently, the memory is one of the highest-performance areas on a chip. In addition, because of their regular structures, memories are a good way to monitor the integrity of the overall manufacturing process. A fault on an embedded memory can be a symptom of a more pervasive problem. Memory-cell failures can account for a large proportion of overall device failures, creating a major impact on device yields.

Test-Strategy Overview

Every memory has three main functional areas: the address decoding logic, level sensing circuits, and the array (Figure 2). Each of these components presents specific test requirements.

Address decoders must be stressed at the highest operating speed to confirm that they can switch from any address to any other address at maximum frequency.

Sense amplifiers may be presented with a signal that resembles noise. The test requirement is to distinguish a high from a low. A worst-case test might place a low or high on the most distant bit line, with a minimum time to pre-charge from the opposite state. Specific algorithmic patterns can be applied to test for sense-amplifier/bit-line charging problems.

The test strategy for the array depends on its structure. Six-transistor static RAMs require minimal disturb testing—writing a cell to zero, then surrounding it with ones to see if it can hold its zero state. Four-transistor SRAMs may oscillate when disturbed, while DRAMs might lose their charge.
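The disturb test described above can be sketched against a simulated array. The `MemorySim` model and function names here are illustrative, not any real tester API; a physical device would be exercised through the tester, not a Python object.

```python
# Sketch of a disturb test on a simulated cell array (illustrative names only).

class MemorySim:
    """Toy memory model: a rows x cols array of bits, all initially 0."""
    def __init__(self, rows, cols):
        self.rows, self.cols = rows, cols
        self.cells = [[0] * cols for _ in range(rows)]

    def write(self, r, c, v):
        self.cells[r][c] = v

    def read(self, r, c):
        return self.cells[r][c]

def disturb_test(mem):
    """Write each cell to 0, surround it with 1s, and check it holds its 0."""
    failures = []
    for r in range(mem.rows):
        for c in range(mem.cols):
            mem.write(r, c, 0)
            # Write 1s to the eight physical neighbors of the cell under test.
            for dr in (-1, 0, 1):
                for dc in (-1, 0, 1):
                    nr, nc = r + dr, c + dc
                    if (dr, dc) != (0, 0) and 0 <= nr < mem.rows and 0 <= nc < mem.cols:
                        mem.write(nr, nc, 1)
            if mem.read(r, c) != 0:  # a healthy cell still reads back 0
                failures.append((r, c))
    return failures
```

On a fault-free model the failure list comes back empty; a real disturb-sensitive cell would appear in the list with its row and column.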

In testing, a battery of patterns is presented to the device to determine which patterns create a disturbance. Frequently, a memory fails one pattern while passing others. Pattern characterization identifies the most efficient test patterns for finding the fault.

To achieve the necessary fault coverage, a test system should start with a hardware algorithmic pattern generator similar to that used for stand-alone memories and some basic test-pattern templates. Programs can be created quickly by customizing the templates for the size and shape of each memory and the faults most likely to occur in a particular design. Using algorithmically created patterns, a large memory can be tested in a few program steps.
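A march-style pattern is a classic example of such an algorithmic template. This sketch generates the well-known March C- sequence as (address, operation, value) steps; a hardware pattern generator produces the equivalent stream at speed, so the Python form is only a model of the algorithm.

```python
# Illustrative software model of an algorithmic march pattern (March C-).

def march_c_minus(n):
    """Yield (address, operation, value) steps for March C- over n addresses."""
    for a in range(n):            # ascending: write 0
        yield (a, "w", 0)
    for a in range(n):            # ascending: read 0, write 1
        yield (a, "r", 0)
        yield (a, "w", 1)
    for a in range(n):            # ascending: read 1, write 0
        yield (a, "r", 1)
        yield (a, "w", 0)
    for a in reversed(range(n)):  # descending: read 0, write 1
        yield (a, "r", 0)
        yield (a, "w", 1)
    for a in reversed(range(n)):  # descending: read 1, write 0
        yield (a, "r", 1)
        yield (a, "w", 0)
    for a in range(n):            # ascending: read 0
        yield (a, "r", 0)
```

Because the pattern is generated algorithmically, covering a memory of any size takes only the loop bounds—not one stored vector per operation, which is the cost of the logic-vector approach.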

If a new fault is discovered, a new pattern can easily be added to the test program. It also is possible to create memory patterns from logic test vectors, but that approach is more costly in pattern memory space, pattern creation time, and execution time.

Accessing the Memory

Today’s embedded memories have one of three access modes that can be used for testing: interleaved access through a logic address/data bus, direct access to the RAM through a test mode, or serial access through a test port.

The serial-access mode may have the least negative impact on device performance, but it is the most challenging for test. Several bits of memory address come out of the pattern generator in parallel, interspersed with data bits. They must be presented to the memory through one pin.
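A minimal sketch of the serialization problem, using an invented framing (address bits shifted MSB-first, then the data bit); the real framing is device-specific:

```python
# Hypothetical single-pin framing: parallel address + data bit -> bit stream.

def serialize(address, addr_bits, data_bit):
    """Shift the address out MSB-first, then append the data bit."""
    stream = [(address >> i) & 1 for i in reversed(range(addr_bits))]
    stream.append(data_bit)
    return stream

def deserialize(stream, addr_bits):
    """Recover (address, data_bit) from the serial stream."""
    address = 0
    for bit in stream[:addr_bits]:
        address = (address << 1) | bit
    return address, stream[addr_bits]
```

The test system must keep this interleaving straight on every cycle while still running a full algorithmic pattern behind the single pin.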

A test system must present full algorithmic patterns to the device, without restriction of the address and data mix. Failure data must be stored and presented in a way that preserves the address and type of each failure and the test conditions under which it occurred, even if data is pipelined.

Location, Location, Location

Logic testing focuses on device functions—what outputs are required for the specified stimuli. In contrast, memory failures often are related to the physical location of a cell or the state of adjacent cells.

Design trade-offs might result in a layout that leaves the device vulnerable to certain problems which must be found during testing. The test strategy should include selecting and optimizing test patterns for particular fault types.

Developing a test strategy based on the physical location of cells is complicated by address scrambling and topological inversion, two techniques used to optimize the size and performance of the memory. When these techniques are used, two adjacent logical addresses may not be near each other on the chip.
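As a toy illustration, assume a hypothetical scramble that reverses the address bits and a layout that stores data inverted on odd physical rows. Neither rule comes from a real device; they simply show why logical adjacency says nothing about physical adjacency.

```python
# Invented descrambling rules for illustration only.

def logical_to_physical(addr, bits=4):
    """Hypothetical address scramble: reverse the address bits."""
    phys = 0
    for _ in range(bits):
        phys = (phys << 1) | (addr & 1)
        addr >>= 1
    return phys

def true_data(logical_addr, stored_bit):
    """Undo topological inversion: odd physical rows store data inverted."""
    return stored_bit ^ (logical_to_physical(logical_addr) & 1)
```

Under this scramble, adjacent logical addresses 4 and 5 land on physical rows 2 and 10—far apart on the die—which is exactly why a physically-targeted pattern must descramble first.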

A test system should have built-in hardware for descrambling addresses and decoding topological inversion during testing. Address descrambling and inversion decoding can be done in software; however, a software approach may limit the capability of the test system to track failures and create bitmaps. These shortcomings may be felt during silicon debug and device characterization.

Failure Analysis

During silicon debug and early production, failing devices are analyzed to improve yields. But where are the failures on the device? Under what conditions do they occur? Do devices fail high or low? Information about high or low failures provides clues about problems with device connections such as VCC and ground as well as process and design-layout issues.

A test system should have both the hardware capability and the software tools to support yield improvement. In one benchmark, a tester with fail-capture hardware processed failure data in one second, compared to 10 seconds for a software-only approach. On a device with a total test time of six or eight seconds, that difference is enormous.

Fail processing data can be used to create a visual bitmap of the memory showing the location of failing bits. Bitmaps are valuable for silicon debug and process engineering. When yields shift, bitmaps can be used to analyze failures.

Bitmap software also can work with shmoo plots. For example, at each point on a fail-count shmoo plot, a memory pattern can be run, maintaining a count of the failing bits. Then a bitmap can be generated at each point, showing exactly which cells failed.
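The loop described above might look like this sketch, where `device_fails()` is a fabricated stand-in for running a pattern at a given supply voltage and access time on real silicon:

```python
# Fail-count shmoo with a per-point bitmap; the device model is invented.
import random

ROWS, COLS = 8, 8

def device_fails(vcc, t_access):
    """Fake device: cells start failing at low voltage or tight timing."""
    rng = random.Random(hash((round(vcc, 2), t_access)))
    margin = (vcc - 2.7) + (t_access - 10) * 0.05
    fails = set()
    if margin < 0:
        for _ in range(int(-margin * 100)):
            fails.add((rng.randrange(ROWS), rng.randrange(COLS)))
    return fails

def shmoo(vccs, timings):
    """Return {(vcc, t): (fail_count, bitmap)} over the shmoo grid."""
    result = {}
    for v in vccs:
        for t in timings:
            fails = device_fails(v, t)
            bitmap = [[1 if (r, c) in fails else 0 for c in range(COLS)]
                      for r in range(ROWS)]
            result[(v, t)] = (len(fails), bitmap)
    return result
```

Scanning the bitmaps along a shmoo axis shows not just how many bits fail as margin shrinks, but which physical region of the array gives out first.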

For characterization or failure analysis, information is sent from the tester mainframe to the computer. To reduce the data transfer time, test systems may use hardware compression, saving only fail information.

But compression can cause data loss. Four, 16, or 64 failures might be ORed together into one bit, making it impossible to determine which bit failed (lossy compression). A loss-less compression scheme is more desirable. Information is compressed and transferred to the computer, with the full profile of each failure maintained.
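The difference can be shown on a toy fail map. OR-compression collapses each group of bits into one, losing the individual addresses; a loss-less scheme—run-length encoding is used here as a simple stand-in for whatever a real tester implements—shrinks the data while preserving every failure.

```python
# Lossy OR-compression vs. a loss-less run-length scheme (illustrative).

def lossy_or(fail_bits, group=4):
    """OR each group into one bit: cheap, but which bit failed is lost."""
    return [int(any(fail_bits[i:i + group]))
            for i in range(0, len(fail_bits), group)]

def lossless_runlength(fail_bits):
    """Run-length encode the fail map: compact, and nothing is discarded."""
    runs, i = [], 0
    while i < len(fail_bits):
        j = i
        while j < len(fail_bits) and fail_bits[j] == fail_bits[i]:
            j += 1
        runs.append((fail_bits[i], j - i))
        i = j
    return runs

def runlength_decode(runs):
    out = []
    for bit, count in runs:
        out.extend([bit] * count)
    return out
```

Decoding the run-length form recovers the exact fail map; the OR-compressed form cannot be decoded at all, which is precisely the bitmapping problem described above.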

Redundancy Analysis and Device Repair

To maximize yields, most stand-alone memory manufacturers repair failing memories using redundant cells on the device. During testing, failure data determines whether bad cells can be replaced with redundant cells. An off-line laser system reassigns the addresses of the bad cells to the redundant cells. Some VLSI logic manufacturers also use this technique for embedded DRAMs.

Test equipment should support this strategy with fast error capture and efficient redundancy analysis. A good redundancy algorithm quickly determines all possible repair solutions and chooses the best one to maximize yields without test-time penalties.
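A greedy sketch conveys the idea. Real redundancy analyzers search the full space of repair solutions; this simplified version just spends whichever spare covers the most remaining failures, which is enough to show the shape of the problem.

```python
# Simplified redundancy analysis: can spare rows/columns cover all fails?
from collections import Counter

def repairable(fails, spare_rows, spare_cols):
    """Greedy cover: return (ok, rows_used, cols_used)."""
    fails = set(fails)
    used_rows, used_cols = [], []
    while fails:
        row_counts = Counter(r for r, _ in fails)
        col_counts = Counter(c for _, c in fails)
        best_row = row_counts.most_common(1)[0]
        best_col = col_counts.most_common(1)[0]
        # Spend whichever spare covers more remaining failures.
        if (best_row[1] >= best_col[1] and spare_rows > len(used_rows)) \
                or spare_cols <= len(used_cols):
            if spare_rows <= len(used_rows):
                return False, used_rows, used_cols  # out of spares
            used_rows.append(best_row[0])
            fails = {(r, c) for r, c in fails if r != best_row[0]}
        else:
            used_cols.append(best_col[0])
            fails = {(r, c) for r, c in fails if c != best_col[0]}
    return True, used_rows, used_cols
```

A greedy pass like this can miss repairs that an exhaustive search would find, which is why the quality of the redundancy algorithm directly affects yield.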

Integrated Patterns

You may need to interleave memory test patterns with logic test vectors at full operating speed to observe interactions. A test-system architecture must support this integrated approach for all the pattern types required by the device. For devices with embedded video or audio cells, analog waveforms also must be synchronized with digital patterns (Figure 3).

To BIST or Not To BIST?

Someday, built-in self-test (BIST) may provide a total solution for testing embedded cells; however, it has limitations in pattern flexibility, waveform manipulation for testing and characterization, bitmapping, and redundancy analysis. BIST also costs chip real estate and does not provide any safety valve for testing if a new fault type is discovered.

With these trade-offs in mind, one major device manufacturer is working on a version of BIST that includes redundancy analysis and repair. The approach eliminates the need for external access to the memory cell, potentially improving performance. It also removes test and repair from the manufacturing process. A second major chip maker, considering the same trade-offs, maintains a classical testing strategy without BIST, reasoning that any real-estate cost is too high in large-volume production.

The lesson for equipment selection? For a flexible test strategy, a test system should accommodate BIST but also support the full range of test requirements that BIST cannot yet meet.

The Final Analysis

The 30-year evolution of test technology has followed a different path for memory than for VLSI because memories pose different problems. When choosing test equipment for new VLSI devices with embedded memory, consider the supplier’s experience in testing both memory and logic. Learn all you can about memory test, and choose flexible equipment that will allow you to hedge your bets.


Thanks to John Donaldson of Teradyne’s VLSI Test Division and Kurt Gusinow of Teradyne’s Memory Test Division for their contributions to this article.

About the Author

Chuck Plagmann is a senior applications engineer at Teradyne. Before joining the company 13 years ago, he was affiliated with Burroughs and Northrop. Mr. Plagmann earned a bachelor’s degree in electrical engineering technology from DeVry Institute of Technology and an M.B.A. degree from Pepperdine University. Teradyne, VLSI Test Division, 30801 Agoura Rd., Agoura Hills, CA 91301, (818) 874-7528, email: [email protected].

Copyright 1997 Nelson Publishing Inc.

November 1997
