In response to the never-ending demands for increased system bandwidth and faster memories, several new memory technologies have surfaced in recent years. Today, there is a new breed of high-speed memory on the horizon—the packet-protocol DRAM.
One notable player is the Direct RDRAM. Direct RDRAM technology must be licensed from Rambus and is supported and promoted by Intel. In fact, Intel PC chip sets now are being developed to support Direct RDRAM.
Direct RDRAM devices use the same memory core technology that traditional DRAM memories do. The interface to this core is where the higher-speed operation is gained. A test mode, known as DAMODE, allows access to this core in the conventional SDRAM manner.
In this way, the device core can be tested at slower speeds, using existing, less expensive testers. Then at-speed testing of the interface and the core through the interface can be accomplished on a more expensive, high-speed tester. This approach reduces the cost of testing by performing only the high-speed tests that typically have shorter execution times on the more expensive tester.
Testing in this manner initially was performed with a logic tester using vectors and, to some extent, on a high-speed memory tester using an APG that supports packet protocol. The best test sequence—dual-insertion or single-insertion—is debatable.
Some percentage of Direct RDRAM testing must be performed on a high-speed tester. The preferred vehicle for at-speed testing of the core is an APG with packet-protocol capability. The final solution probably will be a single-insertion test using a packet-protocol-capable APG on an affordable high-speed memory tester.
Testing Packet-Protocol Memory Devices
As packet-protocol memory devices lead the way into the next millennium, and memory manufacturers gear up to produce them, ATE companies are faced with the challenge of testing these new devices. Traditionally, memory testing challenges have come in the form of devices operating at higher speeds with some new functionality. This has meant faster testers with higher accuracy and modifications for the new functionality. Not only are we presented with similar requirements for Direct RDRAMs, but there also is something completely new to memory testing: packet protocol.
Packet-protocol devices represent a departure from what memory ATE vendors have supported in the past. Information is presented to the device in packet format. A packet can be thought of as an array of data that is applied across specific pins of the device.
Inside these packets is information such as Device ID, Bank Select, Commands, Row Address, Column Address, and Data. This presents a new set of pattern-generation requirements for memory testers that center on an APG with packet-generation capability. The required packets must be created and presented to the DUT with their respective data in the correct format.
Six packet types defined for the Direct RDRAM can be grouped into three categories: row, column, and data. Row packets are applied to three RQ device pins, column packets are applied to five RQ device pins, and data packets are applied to 18 DQA/DQB device pins (Figure 1). The six packet types are ROWA, ROWR, COLC, COLM, COLX, and D (data); a sketch of the packet-to-pin mapping follows the definitions below.
ROWA—activates a row and bank by providing the row and bank addresses and selects a device by providing the device ID; also moves the device to the attention power state.
ROWR—specifies precharge, refresh, and temperature calibration and moves the selected device into power state management conditions. Device and bank addresses are provided.
COLC—specifies write and read operations and provides column and bank addresses as well as the device ID; also frames the COLX and COLM packets.
COLX—specifies precharge and current calibration operations and provides the bank address.
COLM—specifies byte mask conditions when the device is receiving data packets.
D—sends and receives data to and from the device. A total of 144 bits of data per address is arranged in eight sequences of 18 bits applied across the DQA0 to DQA8 and DQB0 to DQB8 device pins (Figure 1).
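To make the packet-to-pin relationships concrete, the sketch below models each packet type as the group of pins it occupies for eight clock events (four DDR clock cycles). The three-pin row, five-pin column, and 18-pin data groupings come from the text; the specific RQ5 and RQ3 to RQ0 pin names are assumptions, and no attempt is made to reproduce the actual Rambus field layout inside each packet.

```python
# Illustrative model of Direct RDRAM packet shapes: each packet type occupies
# a fixed pin group for eight clock events (four DDR clock cycles).  Pin
# groupings follow the article; exact Rambus bit assignments are omitted.

CLOCK_EVENTS = 8  # four clock cycles x two edges (DDR)

PACKET_PINS = {
    "ROWA": ["RQ7", "RQ6", "RQ5"],                # row packets: three RQ pins
    "ROWR": ["RQ7", "RQ6", "RQ5"],
    "COLC": ["RQ4", "RQ3", "RQ2", "RQ1", "RQ0"],  # column packets: five RQ pins
    "COLM": ["RQ4", "RQ3", "RQ2", "RQ1", "RQ0"],
    "COLX": ["RQ4", "RQ3", "RQ2", "RQ1", "RQ0"],
    # data packets: 18 DQ pins x 8 events = 144 bits per address
    "D": [f"DQA{i}" for i in range(9)] + [f"DQB{i}" for i in range(9)],
}

def packet_bits(kind: str) -> int:
    """Total bit positions spanned by one packet of the given type."""
    return len(PACKET_PINS[kind]) * CLOCK_EVENTS

for kind in ("ROWA", "COLC", "D"):
    print(kind, packet_bits(kind), "bits")  # D prints 144, matching the text
```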
Packet-Protocol APG
Traditional tester APGs provide row, column, and bank addresses, data, and commands. Any of these, and often multiplexed combinations of these, can be output at a specific point in time to a DUT. However, most traditional APGs fall short of accomplishing this for packet-protocol devices for several reasons:
No single APG operates in the 400-MHz to 500-MHz range needed to generate information at the speed and accuracy the Direct RDRAM requires.
The high-speed multiplexing required to route the different types of packet data does not exist. Data generation becomes more complex now because 144 bits of data must be generated and fully supported by failure processing for each address supplied to the DUT.
Packet-related device latencies compound the timing generation and pipeline requirements of the APG. Interleaved APG schemes attempt to address these issues, but they become unwieldy to program and can introduce timing and accuracy issues.
An APG specifically designed to generate the required packet information in the correct format is necessary. The packets must contain the correct bit information in the correct packet locations and must be clocked to the device at the exact time they are required. This often requires device, bank, address, and command bit information to be applied to the same pin during the packet clocking sequences (Figure 1).
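A minimal sketch of that packet-building step follows. It scatters device, bank, and row address bits into a pins-by-clock-events grid; the field widths and bit placement are invented for illustration, since a real packet-protocol APG must follow the exact Rambus bit map.

```python
# Sketch of packet building: serialize device, bank, and row address fields
# and scatter the bits into a pins x clock-events grid for a row packet.
# Field widths and placement are hypothetical, not the Rambus bit map.

ROW_PINS = 3      # row packets occupy three RQ pins
CLOCK_EVENTS = 8  # four clock cycles, both edges (DDR)

def build_row_packet(device_id: int, bank: int, row: int) -> list:
    """Return a ROW_PINS x CLOCK_EVENTS bit grid for a hypothetical ROWA packet."""
    fields = [(device_id, 5), (bank, 5), (row, 9)]  # (value, assumed bit width)
    bits = []
    for value, width in fields:
        bits.extend((value >> i) & 1 for i in range(width))
    bits += [0] * (ROW_PINS * CLOCK_EVENTS - len(bits))  # pad unused positions
    return [bits[p * CLOCK_EVENTS:(p + 1) * CLOCK_EVENTS] for p in range(ROW_PINS)]

grid = build_row_packet(device_id=0x03, bank=2, row=0x1A5)
for pin, pin_bits in zip(("RQ7", "RQ6", "RQ5"), grid):
    print(pin, pin_bits)  # one row of eight bit values per pin
```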
The packet-protocol APG provides a method of programming similar to traditional APG programming. This greatly eases the task of pattern generation and speeds pattern development and debug time for the test engineer.
When testing Direct RDRAMs, it is important to control the state of the device pins between packets, particularly the row and column packet pins monitored by the device for packet recognition or framing. These pins are RQ7 (ROW2) and RQ6 (ROW1) for row packets and RQ4 (COL4) for column packets (Figure 1).
Left in the incorrect state, these pins will cause spurious, unwanted packets to be framed by the Direct RDRAM. In turn, the test engineer may spend significant time finding and correcting the problem. In general, it is best to bring these device pins to a low state between packets to avoid inadvertent framing of packets.
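One way to enforce this in a pattern tool is to fill every cycle that carries no packet with the framing pins driven low. A minimal sketch, assuming cycles are modeled as simple pin-to-level tables:

```python
# Sketch: drive the packet-framing pins low on idle cycles so the DUT never
# frames a spurious packet.  Cycles are modeled as pin -> level dictionaries,
# with None marking a cycle that carries no packet.

FRAMING_PINS = ("RQ7", "RQ6", "RQ4")  # ROW2, ROW1, and COL4 per the article

def fill_idle_cycles(timeline):
    """Replace empty cycles with all framing pins driven to a low state."""
    idle = {pin: 0 for pin in FRAMING_PINS}
    return [cycle if cycle is not None else dict(idle) for cycle in timeline]

# Example: a row packet start, two idle cycles, then a column packet start.
timeline = [{"RQ7": 1, "RQ6": 0, "RQ4": 0}, None, None,
            {"RQ7": 0, "RQ6": 0, "RQ4": 1}]
print(fill_idle_cycles(timeline))
```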
The packet-protocol APG must be coupled with a timing system to generate timing that clocks correctly built packets to the device at the speed and accuracy it demands, which is up to 1 Gb/s for characterization today. Packets are clocked in DDR fashion, using both edges of the respective high-speed differential clock pairs: CTM/CTMN for reading device data and CFM/CFMN for writing device data.
All packets are four clock cycles, or eight clock events, in length. The device packet timing essentially is fixed for the duration of the test based on the programmed clock speed and read latency (tCAC) during the device initialization sequence.
Initialization sequences are required to put the device in the proper mode of operation. A different initialization sequence is necessary for each tCAC parameter. They currently are executed in logic vector format. This dictates that the tester must execute both logic vectors and packet-protocol APG patterns with a seamless handoff from the vectors to the pattern. The high-speed clocks cannot be interrupted, and the proper phase relationship (tTR) must be maintained.
From a memory-tester perspective, a packet clocked to the DUT at 400 MHz/800 Mb/s must be built every 10 ns, which equates to a tester packet-protocol APG rate of 100 MHz. This means that the device, bank, row, column, command, and data for any given packet must be generated by the packet-protocol APG at a 10-ns rate. This information must be properly positioned in the desired packet format and clocked to the DUT at a rate of 2.5 ns. At a 2.5-ns clock rate, data is clocked every 1.25 ns, which equates to 800-Mb/s device operation.
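The rate relationships in this paragraph reduce to a few lines of arithmetic, worked out below with the numbers from the text.

```python
# Worked timing arithmetic for 400-MHz/800-Mb/s Direct RDRAM operation.

clock_mhz = 400.0
clock_period_ns = 1_000.0 / clock_mhz        # 2.5 ns per clock cycle
data_interval_ns = clock_period_ns / 2       # DDR: a data bit every 1.25 ns
data_rate_mbps = 1_000.0 / data_interval_ns  # 800 Mb/s per data pin
packet_period_ns = 4 * clock_period_ns       # 4 clock cycles = 10 ns per packet
apg_rate_mhz = 1_000.0 / packet_period_ns    # one packet per 10 ns = 100-MHz APG

print(clock_period_ns, data_interval_ns, data_rate_mbps,
      packet_period_ns, apg_rate_mhz)        # 2.5 1.25 800.0 10.0 100.0
```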
There also are packet latencies to be dealt with in the tester timing system. There are column packet latency (tRCD) and data packet latency (tCWD and tCAC) specifications that must be met. We must be able to delay column and data packets several additional packet-protocol APG packet cycles from the row packets. This can be thought of as gross latency.
Also, packets are not always initiated on even packet-protocol APG cycle boundaries. An additional finer grade of latency is required to start packets on the correct clock edge within the packet-protocol APG packet cycle (Figure 2).
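Viewed this way, a latency expressed in clock edges splits into whole packet-protocol APG packet cycles (gross) plus a remaining edge offset within a cycle (fine). A minimal sketch under that assumption:

```python
# Split a device latency, given in clock edges, into gross latency (whole
# APG packet cycles) and fine latency (the starting edge within a cycle).

EDGES_PER_PACKET = 8  # four DDR clock cycles = eight edges per packet

def split_latency(latency_edges: int):
    """Return (gross packet cycles, fine edge offset) for an edge count."""
    return divmod(latency_edges, EDGES_PER_PACKET)

# Hypothetical example: a latency of 11 edges delays the packet one full
# APG packet cycle and starts it on edge 3 of the next cycle.
gross, fine = split_latency(11)
print(f"gross = {gross} packet cycle(s), fine = edge {fine}")
```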
Further complicating the latency issues are timing bubbles that must be accounted for during interleaved RRWW device transactions. A bubble is a time delay of n clocks inserted between packets.
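A bubble can be modeled simply as n idle clock slots spliced into the packet stream at the turnaround point; a small sketch, with hypothetical packet labels:

```python
# Sketch: insert a timing bubble of n idle clock slots between packets in an
# interleaved transaction, e.g., at a read-to-write turnaround in an RRWW run.

def insert_bubble(stream, index, n_clocks):
    """Splice n idle clock slots into a packet stream before position index."""
    return stream[:index] + ["idle"] * n_clocks + stream[index:]

# Hypothetical RRWW sequence with a 2-clock bubble before the first write.
print(insert_bubble(["R1", "R2", "W1", "W2"], index=2, n_clocks=2))
```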
Another issue is how best to handle the inverted logic format that the Direct RDRAM uses. A logic level one from the device data sheet is registered by the device as a low value, and a logic level zero is registered as a high.
One way to accommodate this is to invert the levels presented to the DUT in the timing system. Once this is in place, the engineer no longer has to worry about it and can program the device with the logic levels shown on the data sheet. This removes a level of complexity when creating test patterns.
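A sketch of that single translation step: patterns carry data-sheet logic levels, and one inversion at the timing layer produces the physical levels driven to the DUT.

```python
# Sketch: absorb the Direct RDRAM inverted logic in one translation step so
# patterns can be written with data-sheet logic levels.  A data-sheet '1'
# is driven low at the device pins; a data-sheet '0' is driven high.

def to_pin_levels(datasheet_bits):
    """Invert data-sheet logic levels into physical levels driven to the DUT."""
    return [bit ^ 1 for bit in datasheet_bits]

pattern = [1, 0, 1, 1, 0]      # programmed as the data sheet shows
print(to_pin_levels(pattern))  # driven as [0, 1, 0, 0, 1]
```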
The speed of these devices, initially 400 MHz/800 Mb/s, is unprecedented for DRAM memories. At this speed, accuracy is crucial to test parameters such as tQ and tSH, where setup and hold time specifications can be as small as 200 ps. EPA must be in the neighborhood of ±50 ps or better (Figure 3).
Studies have shown that a relatively small increase in EPA can increase yield significantly. At this level of speed and accuracy, jitter also can be a significant source of error. Obviously the less jitter present, the less its role as a source of error.
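As a rough illustration of why EPA at this level matters, the simplified budget below assumes each tester edge can err by the full EPA, eroding a 200-ps setup-and-hold window from both sides; the model is an assumption for illustration, not a published error budget.

```python
# Simplified edge-placement budget: how much of a 200-ps setup-and-hold
# window +/-50-ps tester EPA can consume if both bounding edges err fully.

spec_window_ps = 200.0  # tSH window from the text
epa_ps = 50.0           # +/-50-ps edge placement accuracy

usable_window_ps = spec_window_ps - 2 * epa_ps  # both edges can encroach
print(f"usable window: {usable_window_ps} ps of {spec_window_ps} ps")
```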
Tester Calibration
Sophisticated calibration techniques are required to guarantee the signal accuracy, integrity, and repeatability at the device pins. DTL from the tester pin electronics to the device pins must be implemented for the RSL pins to eliminate dead cycles when turning the bus around between read and write data packets. For DTL interfacing, a shorted TDR calibration technique is commonplace. However, the traditional TDR load board calibration approach no longer is adequate in this environment, and additional calibration methods must be used.
A packet-protocol device uses timing in very repeatable formats. Each edge transition should be calibrated to the specific DUT operating conditions. Device sockets play a significant role and, as a rule, the lower the inductance rating of the socket, the better signal fidelity is maintained.
Calibration execution time and frequency are important. Lengthy calibration times that must be executed frequently decrease tester usage for device testing and impact throughput and efficiency. A stable short-duration calibration routine with a relatively long interval between executions is desired. An automated calibration routine is even better because it eliminates the human-error factor and speeds execution time.
Conclusion
To test packet-protocol memory devices like the Direct RDRAM, ATE companies and test engineers will have to deal with all of the issues discussed here and others yet to be encountered. Doing so requires a tester that can provide the following capabilities:
An easy-to-use packet-protocol APG solution to provide proper pattern/data generation.
Vector and pattern execution in tandem.
A flexible timing system with the power to handle all forms of packet latencies and the RDRAM inverted logic format while providing the speed and accuracy needed.
Automated calibration that is accurate, quickly executed, and long lasting.
About the Author
Mark Hosman is a technical specialist at Schlumberger Advanced Business Engineering Resources. During the past 18 years, he has held a variety of manufacturing, engineering, and marketing positions at Schlumberger ATE and other semiconductor test equipment manufacturers. Mr. Hosman received an electronics technology degree from Chabot College and a certificate in computer technology from Merritt College. Schlumberger Test and Transactions, 1601 Technology Dr., San Jose, CA 95110, (408) 501-7120, e-mail: [email protected].
Glossary of Terms
APG—algorithmic pattern generator
CAS—column address strobe
CFM/CFMN—clock-from-master positive polarity/clock-from-master negative polarity
COLC—column packet that frames COLM or COLX packets and contains column, bank, and device addressing information and opcode information
COLM—column packet that contains byte masking information
COLX—column packet that contains opcode information in addition to that specified in a COLC packet
CTM/CTMN—clock-to-master positive polarity/clock-to-master negative polarity
D—data packet containing 144 bits of data
DAMODE—direct access mode
DDR—double data rate
DRAM—dynamic random access memory
DTL—dual transmission line
DUT—device under test
EPA—edge placement accuracy
RAS—row address strobe
RDRAM—Rambus DRAM
ROWA—row packet containing row, bank, and device addressing information
ROWR—row packet containing opcode information and bank and device addressing information
RRWW—indicates a read, read, write, write sequence of device operations
RSL—Rambus signaling logic
SDRAM—synchronous dynamic random access memory
tCAC—CAS access delay for read transactions
tCWD—CAS write delay for write transactions
TDR—time domain reflectometry
tQ—CTM to DQA/DQB output time
tRCD—RAS-to-CAS delay
tSH—setup and hold time
tTR—CTM-CFM differential
Copyright 1999 Nelson Publishing Inc.
October 1999