Speedy, Wide-Bit and Deep DRAMs Challenge ATE

Just when you’re convinced that your new computer has more than enough memory to satisfy all foreseeable requirements, a new operating system, an application program update or a multimedia add-on kit comes along that needs even more memory. But memory designers and PC suppliers have always had–and it seems will continue to have–solutions.

Semiconductor processing advancements, resulting in decreased IC geometries and greater density, have enabled memory designers to progress from 64-kb to 16-Mb DRAM chips during the last decade. And today’s PC construction, using banks of memory in the form of single in-line memory modules (SIMMs), facilitates memory expansion by merely plugging higher-density modules into SIMM sockets. These two factors simplify upgrades.

But along with higher density come other changes. Decreased geometries provide speed benefits–far more so for microprocessors (µP) than for DRAMs. To keep up, new memory architectures are being devised. In some applications, lower supply voltages are used to achieve faster state transitions, simultaneously reducing power consumption.

As PCs progress from 16- to 32- to 64-bit-wide I/O buses, SIMM structures must also become wider. In addition to the technological benefits, it is less expensive to assemble SIMMs with wide-I/O memories because fewer devices are necessary.

More memory per chip, changing voltage levels, and wider and speed-enhancing architectures all impact memory ATE and SIMM test-equipment requirements. But the two types of testers are affected in different ways.

Memory ATE, used primarily by semiconductor manufacturers for characterization, wafer and device testing, already provides extensive facilities and is upgradable to varying degrees. Present SIMM testers, used by memory module assemblers, computer manufacturers, integrators and service organizations, will continue to test the present generation of memory subassemblies for some time. But an entirely new SIMM tester design may be required to cope with the next generation of modules.

Keeping Up With µP Speeds

As µP clock speeds have increased from 25 to 33 to 66 and now to more than 100 MHz, conventional DRAM architectures can no longer keep up. While SRAM now approaches 200 MHz, its price and power consumption make it an unattractive alternative for almost all applications. The solution lies in new DRAM architectures, such as those being employed in extended-data-out (EDO) DRAMs, synchronous DRAMs (SDRAMs) and cache DRAMs (CDRAMs).

The EDO DRAM internal structure is equivalent to a conventional page-mode DRAM with a latch added to the sense amplifier output, which reduces wait states. Some EDOs have page cycle times as low as 20 ns (50 MHz). Modified DRAM controller circuitry may be required for optimized performance.
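The benefit of that output latch can be shown with a toy burst-timing model. The 20-ns EDO page cycle comes from the figure above; the 60-ns row access and 35-ns fast-page-mode column cycle are illustrative assumptions, not datasheet values:

```python
# Toy comparison of burst-read time for fast-page-mode (FPM) vs. EDO DRAM.
# RAS_ACCESS_NS and FPM_PAGE_CYCLE_NS are assumed for illustration; the
# 20-ns EDO page cycle is the figure cited in the text (50 MHz).

RAS_ACCESS_NS = 60      # assumed initial row access, same for both parts
FPM_PAGE_CYCLE_NS = 35  # assumed fast-page-mode column cycle
EDO_PAGE_CYCLE_NS = 20  # EDO column cycle cited above

def burst_read_ns(page_cycle_ns, words):
    """Time to read `words` consecutive words from one open row."""
    return RAS_ACCESS_NS + words * page_cycle_ns

words = 8
fpm = burst_read_ns(FPM_PAGE_CYCLE_NS, words)
edo = burst_read_ns(EDO_PAGE_CYCLE_NS, words)
print(f"FPM: {fpm} ns, EDO: {edo} ns for an {words}-word burst")
```

Under these assumed numbers, the latch shortens an eight-word burst from 340 ns to 220 ns; only the column-cycle portion improves, which is why the initial access time is unchanged.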

SDRAMs derive their speed advantage through burst data transfers and interleaving of operations. In a dual-banked SDRAM, two equal DRAM arrays are accessed alternately. In a pipelined SDRAM design, one data word is delivered to the bus while the next word is simultaneously retrieved from the array.
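The overlap a dual-bank design buys can be sketched with two made-up timing constants (both are assumptions for illustration):

```python
# Minimal sketch of dual-bank SDRAM interleaving: successive words come
# from alternating banks, so one bank's internal array access overlaps
# the other bank's bus transfer. Both timing constants are assumptions.

ARRAY_ACCESS_NS = 30   # assumed time for a bank to fetch a word internally
BUS_TRANSFER_NS = 15   # assumed time to drive one word onto the bus

def interleaved_burst_ns(words):
    # After the first fetch, each bank's next array access is hidden
    # behind the other bank's bus transfer.
    return ARRAY_ACCESS_NS + words * BUS_TRANSFER_NS

def single_bank_burst_ns(words):
    # Without interleaving, fetch and transfer serialize for every word.
    return words * (ARRAY_ACCESS_NS + BUS_TRANSFER_NS)

print(interleaved_burst_ns(8), single_bank_burst_ns(8))
```

With these numbers an eight-word burst drops from 360 ns to 150 ns; the sustained rate approaches the bus-transfer rate rather than the array-access rate.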

CDRAMs use a combination of SRAM and DRAM to achieve read and write speeds as high as 200 MHz (for SRAM-cache hits). They are presently produced by only a few manufacturers and cost about 15% more than conventional DRAM. They also require external cache control circuitry.

“A variety of high-speed DRAMs being sampled and shipped today achieve speeds of 50 MHz to 150 MHz,” said Harold LaBonte, Memory Product Manager at Teradyne. “Burst EDO, graphic RAMs and SDRAMs are but a few of the many potential volume winners. As these devices reach volume production quantities, the challenge is to bring to market memory ATE that provides signal fidelity at speeds of 100 MHz and high-speed modes approaching 200 MHz.”

Speed-Related SIMM Implications

There are two ways to achieve the needed high speed. Some SIMMs will use EDO DRAMs, SDRAMs and CDRAMs. Another option combines conventional DRAMs, SRAMs and control components into high-speed SIMMs and memory subsystems.

The first method is simple to accomplish, but requires some costly devices. The second method calls for more ingenious implementations, such as those outlined by Cecil Ho, President of Computer Service Technology.

“We have found that the best route to high-speed operation is to build innovative memory subsystems using an ASIC or a PAL controller to provide the special memory control circuitry,” said Mr. Ho. “One example would put cache RAM onto the same module with a cache hit/miss controller. If a cache hit is achieved, the SRAM would execute data reads and writes at a 15-ns rate. In the case of a miss, the access would be passed back to the DRAM, where it would make a page-mode access.”
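The hit/miss flow Mr. Ho describes can be sketched as a simple simulation. The 15-ns hit time is from the quote; the direct-mapped cache organization and the 60-ns page-mode miss penalty are assumptions for illustration:

```python
# Hedged sketch of an on-module SRAM cache fronting page-mode DRAM.
# The cache organization (direct-mapped, one tag per line) and the
# 60-ns miss penalty are assumed; the 15-ns hit time is quoted above.

SRAM_HIT_NS = 15    # hit time quoted in the article
DRAM_PAGE_NS = 60   # assumed page-mode access time on a miss

class CachedSimm:
    def __init__(self, lines=256):
        self.lines = lines
        self.tags = [None] * lines   # direct-mapped tag store

    def read(self, addr):
        """Return (data_source, latency_ns) for a read at `addr`."""
        index, tag = addr % self.lines, addr // self.lines
        if self.tags[index] == tag:
            return "sram", SRAM_HIT_NS
        self.tags[index] = tag       # fill the line from DRAM
        return "dram", DRAM_PAGE_NS

simm = CachedSimm()
print(simm.read(0x1234))   # first touch misses to DRAM
print(simm.read(0x1234))   # repeat access hits in SRAM
```

The controller logic on a real module would be in an ASIC or PAL rather than software, but the decision tree is the same.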

This arrangement functions as an expanded CDRAM system. A modular SDRAM system can be built in a similar manner. “A memory subsystem can be created to extend the interleaving mechanism,” explained Mr. Ho. “An ASIC on the subsystem would control interleaving of up to eight banks of memory. As a result, the access-speed enhancement would be eight times the individual RAM speed, allowing the system clock to be run at 100 to 200 MHz.”
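A back-of-envelope check of the eight-bank claim: with N banks accessed round-robin, the effective word rate scales by N until the bus itself becomes the limit. The 40-ns per-bank cycle time below is an assumption:

```python
# Rough arithmetic behind eight-way interleaving: N banks cycled
# round-robin divide the effective cycle time by N. The 40-ns
# individual RAM cycle (25 MHz) is an assumed figure.

RAM_CYCLE_NS = 40                          # assumed per-bank cycle time
banks = 8
effective_cycle_ns = RAM_CYCLE_NS / banks  # 5 ns between words
effective_mhz = 1000 / effective_cycle_ns
print(f"{effective_mhz:.0f} MHz effective rate from {banks} banks")
```

Eight 25-MHz banks thus sustain a 200-MHz word rate, consistent with the 100-to-200-MHz system clocks mentioned in the quote.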

Regardless of how higher speeds are achieved, future SIMM testers must provide precise timing and run at higher speeds since they must emulate the computer environment. To accommodate lower operating voltages required by newer memory ICs, analog pin drivers may have to be used to interface with the DUTs.

These and other SIMM test-related requirements are outlined in greater detail in Reference 1.

Accommodating Wider Buses

One-bit-wide memories have long been supplemented by multibit-wide devices. Wider I/O can provide not only performance improvements, but also economic and structural advantages (Figure 1). However, testing these wider I/O devices, with their higher pin-outs, requires that memory ATE be outfitted with an ever-increasing number of test-pin resources.

In addition, I/O width is expanding concurrently with increased density, providing higher memory capacity per device in each new memory generation. “For example, DRAM density and data width are increasing from 16 Mb by 16 bits to 64 Mb by 32 bits,” said Keith Lee, Marketing Manager at Advantest. “SRAM density has gone from 32k by 16 to very fast 32k by 32 and 32k by 72, and is going higher.”

The combination of greater speed, increased memory capacity and greater width is impacting memory ATE architecture and required capacity in several ways. “One technological implication is the need to increase the ratio of comparators to drivers,” Mr. Lee explained.

“Generally, memory testers have had ratios of 1 to 3. For the wider data devices, the ratio now is an unprecedented 1 to 1,” he said. “This ratio requires a greater number of I/O channels. When you look at the resultant architectures, they are more like logic testers than traditional memory testers.

“As memories become more dense, they require higher-throughput test systems,” Mr. Lee continued. “Parallel testing is the only practical solution. Advantest has progressed from systems that test 32 devices in parallel to 64 devices in parallel and will soon introduce a system that handles 128 devices in parallel. These systems also will contain one comparator for every driver.”

Performing tests on more devices in parallel, some being single-bit-wide and others featuring multibit I/O, requires judicious allocation of tester resources and intellectually challenging programming. “Megatest’s newest memory test system offers a hardware feature that simplifies the task of programming multiple devices in parallel,” said Nick Callaway, Senior Product Marketing Manager, Memory Products, at Megatest.

“The new system software analyzes a memory device’s configurations, such as the number of addresses, clocks and I/O, and automatically maps tester resources, including clocks, addresses, data, DPSs and even PMUs, into virtual sockets based on the number of devices to be tested in parallel,” Mr. Callaway explained. “You view the test algorithms and the test program through a single DUT, while the system software iterates programming and resource allocations across all enable-designated sockets. This concept eliminates the need to comprehend the memory test-system’s architecture.”
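The virtual-socket idea can be sketched as a resource-mapping routine: the programmer writes one DUT's test, and software replicates the channel assignments across all enabled sockets. The resource names and counts below are invented for illustration and do not reflect Megatest's actual software:

```python
# Hedged sketch of "virtual socket" mapping for parallel test: each
# DUT's logical resources (clocks, addresses, data, DPS) are assigned
# distinct physical tester channels per site. Names are hypothetical.

def map_virtual_sockets(per_dut_resources, sites):
    """Assign a distinct physical channel to each resource of each DUT."""
    channel = 0
    sockets = []
    for _site in range(sites):
        assignment = {}
        for name in per_dut_resources:
            assignment[name] = channel
            channel += 1
        sockets.append(assignment)
    return sockets

# The test program references only this single-DUT view; the mapping
# layer iterates it across all enabled sockets.
per_dut = ["clk", "a0", "a1", "dq0", "dq1", "dps"]
sockets = map_virtual_sockets(per_dut, sites=4)
print(sockets[0]["dq0"], sockets[3]["dq0"])
```

The point of the abstraction is visible in the last line: the program asks for “dq0” everywhere, and only the mapping layer knows which physical channel each socket actually uses.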

Outlook

The need for more memory is not short term. It is estimated that memory-chip consumption will continue to increase at a 16% CAGR until the year 2000 (Reference 2).
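For scale, the cited growth rate compounds quickly over the forecast period:

```python
# Compounding the cited 16% CAGR over the five years from 1995 to 2000:
# consumption roughly doubles.

growth = 1.16 ** 5
print(f"{growth:.2f}x growth over 1995-2000")
```

That works out to about a 2.1x increase in memory-chip consumption by the end of the decade.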

The memory cell contents of each device will also continue to increase, requiring longer test times. More devices and longer test times translate into a burgeoning demand for more memory ATE with higher throughput at the wafer, device and memory-assembly levels. These future test systems must provide not only the throughput, but also the high-precision timing and other facilities demanded to test future fast memories.

The memory devices expected to be in volume production at the turn of the century are already being characterized on today’s high-performance ATE. However, providing these new capabilities is not enough. Ideally, the next generation of ATE should still be compatible with present systems. The option to upgrade equipment is also important.

“On most test floors, you find multiple generations of memory ATE, with each generation providing higher performance and greater economies than the last,” said Mr. LaBonte. “It is essential that you obtain the maximum leverage from your investments in programming, existing support infrastructure and accumulated employee knowledge base–across all generations of equipment.

“It’s not news that memory ATE must be equipped to accommodate more pins,” Mr. LaBonte continued. “But present equipment may become obsolete if its architecture does not allow it to make the jump from 60 MHz to 200 MHz, while the pin count per test head multiplies.

“A tester-per-site architecture, as found in our J990 Series, provides both increased speed and pin counts. Like the transition away from shared resources to a tester-per-pin architecture that took place for VLSI ATE in the 1980s, we anticipate a similar departure from the shared resource architecture in memory,” Mr. LaBonte concluded.

However, most of today’s–and many of tomorrow’s–memory-device and SIMM test requirements can still be handled by currently available test systems. For a comparison of memory ATE and SIMM testers, see Tables 1 and 2.

References

1. Ho, Cecil, “High Volume Testing for Memory Sub-Systems,” NEPCON West ’94 Proceedings, pp. 1293-1298.

2. Frost & Sullivan, “World Memory IC Market: Processors and Portability Drive Innovation and Growth,” January 1995.

Copyright 1995 Nelson Publishing Inc.

October 1995


