During its 30-plus years, the venerable dynamic RAM has undergone a series of facelifts to boost density and performance. And the nipping and tucking isn't over, with system designers demanding more performance and bits per chip.
The DRAM market also has fragmented into a plethora of options at both the chip and the memory-module level. No longer does a single memory configuration or interface meet all system demands (see "Memory Granularity Issues" at www.elecdesign.com, Drill Deeper 10093).
Today, about a half-dozen basic DRAM types are available to meet various performance needs. The largest category, the mainstream DRAM, includes the older synchronous DRAM (SDRAM), double-data-rate SDRAM (DDR1 SDRAM), and the second-generation DDR (DDR2 SDRAM). These devices are used by PC, workstation, and server manufacturers, as well as in many consumer, industrial, and networking applications.
For a short time, the Rambus RDRAM also contended as a basic memory type for the desktop, or at least for the graphics card, due to the high data-transfer rate made possible by its unique byte-serial interface. But DDR performance caught up, and delays in RDRAM production caused the memory to lose some of its appeal. Yet it did enjoy a strong showing in several game consoles, starting with the Nintendo 64 and then the Sony PlayStation 2. It also is used in some high-performance networking applications.
Additional DRAM choices include versions optimized for graphics (GDDR DRAM), networking (NetDRAM or reduced-latency DRAM, RLDRAM), and low-power/portable applications (CellularRAM, Mobile DRAM). How does each memory type position itself in terms of data-transfer rate? Figure 1 sums it up in a graph that shows some of the memory families--three generations of DDR, more than three generations of graphics memory, and the upcoming extreme-data-rate (XDR) DRAMs.
The first-generation DDR DRAM turns four years old this year. New designs, though, are moving rapidly to the next-generation DDR2 memories, thanks to their higher performance and lower power. Lower operating voltage and shorter access times, among other factors, make DDR2 memories more attractive for high-performance systems (see the table, which illustrates the many advantages of migrating from DDR to DDR2).
Even though DDR2 DRAMs will come on strong as more suppliers start delivering 533-MHz speed grades, DDR1 DRAMs remain well entrenched, with 333-MHz and some 400-MHz speed grades.
Most vendors offer chip densities of 128, 256, and 512 Mbits, including top-tier suppliers such as Elpida, Hynix, Infineon, Micron, and Samsung, as well as second-tier vendors such as Elite, Etron, Nanya, Powerchip, ProMOS, and Winbond. Also, 1-Gbit devices are available from Micron and Samsung, while Hynix packs two 512-Mbit chips in a package for its 1-Gbit offering.
The 256- and 512-Mbit densities, offered in x4, x8, and x16 organizations, represent the mainstream devices. A few companies also offer a 32-bit version in 128- and 256-Mbit density levels. Such a device would be useful for budget-priced graphics cards.
MARKET SHARES CHANGING
This year, the DDR2 DRAMs have started to gain market share as PC motherboard manufacturers build boards with DDR2 controllers in the CPU chip set. Laptop computer makers are also looking to push the performance and run time of their systems by switching to DDR2 memories, since they consume less power than DDR1 devices.
By late 2005, DDR2 memories may account for just under 50% of all system memory sold into the PC space. DDR2 speed grades of 400 and 533 MHz are already in production. A considerable number of vendors are sampling 667-MHz chips. In fact, production quantities of the 667-MHz speed grade should be available in the second half of this year.
Expect companies to try for one more turn of the screw, with 800-MHz DDR2 devices now on drawing boards. Several companies expect to sample such high-speed chips in 2006. Memory-manufacturer designers are meeting in JEDEC committees to define the next-generation DRAM interface, DDR3. Few details are available since samples probably won't be ready until late 2006. But look for speed grades to start at 800 MHz and extend to 1600 MHz.
DIMMING THE PICTURE
A considerable percentage of DRAMs in the computer market aren't sold as standalone devices. Instead, they're typically sold preassembled on dual-in-line memory modules (DIMMs) to computer OEMs as well as value-added resellers (VARs) and retail stores.
The DRAM manufacturers supply a significant portion of the DIMMs. But many third-party DIMM suppliers purchase DRAMs and either assemble the DIMMs or use a subcontractor to manufacture them. While commercial module granularity options range from 256 Mbytes to 2 Gbytes, designers also must determine which of the three basic DIMM types best matches their system performance requirements--unbuffered, registered, or the new fully buffered (FB).
Basically, unbuffered DIMMs are used in commodity PCs and other systems with a maximum of three DIMM sockets. Registered DIMMs provide better system-memory expansion capability. They're used most often in high-end workstations and servers. The new FB DIMM aims to replace the registered DIMMs in servers, allowing the implementation of very large memory subsystems--up to 192 Gbytes with a single controller (see "Buffering Extends Capacity" at www.elecdesign.com, Drill Deeper 10094).
MEMORY OPTIONS FOR FAST ACCESS
Graphics and high-speed data networks require higher data-transfer speed. Special SDRAM implementations can deliver this higher bandwidth by reducing access time and system delays. These implementations include the graphics DRAM (GDRAM) and the reduced-latency DRAM (RLDRAM), also known as the network DRAM.
The GDDR memories provide higher data bandwidths for graphics engines by employing 16- or 32-bit wide data buses and tight timing margins. Over the last few years, though, the graphics bus width jumped from 64 bits to 128 bits, and now it is at 256 bits.
These wide buses require wide datapaths on the graphics DRAMs to minimize the number of memory chips used on the graphics card. If 8-bit wide memories are used, a 256-bit wide bus would need 32 chips. Obviously, half that number of chips would be required for 16-bit wide memories, and half again for 32-bit wide datapaths.
Because the high-end graphics card's memory typically maxes out at 256 or 512 Mbytes, multiple 4-Mword by 32-bit or 8-Mword by 32-bit GDDR2 or GDDR3 memory devices more than meet the density requirements. Such wide buses deliver the aggregate bandwidth to the graphics engine.
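The chip-count and capacity arithmetic above can be sketched as a quick calculation. This is a minimal illustration using the figures cited in the text (256-bit bus, x8/x16/x32 devices, 8-Mword by 32-bit chips); the function name is ours, not an industry term:

```python
def chips_needed(bus_width_bits, chip_width_bits):
    """Number of memory chips needed to fill a data bus of the given width."""
    return bus_width_bits // chip_width_bits

# A 256-bit graphics bus filled with x8, x16, or x32 chips:
print(chips_needed(256, 8))   # 32 chips
print(chips_needed(256, 16))  # 16 chips
print(chips_needed(256, 32))  # 8 chips

# Card capacity with eight 8-Mword by 32-bit GDDR devices:
words_per_chip = 8 * 2**20    # 8 Mwords
bits_per_word = 32
chips = chips_needed(256, 32)
capacity_mbytes = chips * words_per_chip * bits_per_word // 8 // 2**20
print(capacity_mbytes)        # 256 Mbytes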
GDDR memory is similar to DDR memory in operation. But GDDR chips usually can offer tighter timing margins, because bus lengths and loading are more tightly controlled than those of the main computer memory. Also, memory vendors modified the internal DRAM architecture to better optimize the chips for graphics applications.
Some DRAM vendors offer various generations of GDDR memories (GDDR, GDDR2, and GDDR3). Going in another direction, however, Toshiba and Samsung are sampling a high-speed memory based on Rambus' extreme-data-rate (XDR) interface. Rather than use the DRAM's traditional bus architecture, the XDR interface is more of a point-to-point approach. Each XDR memory chip delivers data to the controller over its own datapath (Fig. 2).
The separate datapaths eliminate loading issues. Differential signaling permits the XDR interface to use very small signal swings (200 mV), and each data pin transfers eight bits per clock cycle. As a result, these memories offer at least double the data-transfer bandwidth per pin versus the fastest GDDR3 graphics RAMs--3.2 Gbits/s versus 1.6 Gbits/s for the 800-MHz GDDR3 speed grade. XDR memory suppliers expect to increase the speed to 4.8 Gbits/s by late this year and eventually up to 9.6 Gbits/s per data pin by 2008.
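The per-pin figures work out from bits per clock times clock rate. A minimal sketch (the GDDR3 numbers come from the text; the 400-MHz XDR clock is an assumption inferred from the 3.2-Gbit/s figure and eight bits per clock):

```python
def per_pin_gbps(clock_mhz, bits_per_clock):
    """Per-pin data rate in Gbits/s: clock rate times transfers per cycle."""
    return clock_mhz * bits_per_clock / 1000

# GDDR3 at the 800-MHz speed grade transfers two bits per clock (DDR):
print(per_pin_gbps(800, 2))   # 1.6 Gbits/s
# XDR transfers eight bits per clock; an assumed 400-MHz clock gives:
print(per_pin_gbps(400, 8))   # 3.2 Gbits/s
```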
NETWORKS DEMAND LOW LATENCY
In graphics systems, data typically flows in one direction--from the graphics memory to the graphics engine. But networking applications often involve bidirectional communications. And in those communications, the less latency, the better. Latency issues can cause missed data packets or affect how quickly data can be forwarded. That's why DRAM vendors came up with several approaches that tackle the latency caused when data transfer on a bus must change direction.
One of these solutions, the RLDRAM (or the network DRAM), permits faster bus turnarounds. This reduces the latency when a bus' data direction changes--for example, from input to output or vice versa. In turn, overall system performance improves because fewer gaps exist in the data transfers and the bus operates more efficiently.
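The efficiency gain can be illustrated with a back-of-the-envelope model. The burst length and turnaround cycle counts below are hypothetical, chosen only to show the effect, not figures from any datasheet:

```python
def bus_efficiency(burst_cycles, turnaround_cycles):
    """Fraction of bus cycles carrying data when each burst is followed by
    a direction change that costs idle turnaround cycles."""
    return burst_cycles / (burst_cycles + turnaround_cycles)

# Hypothetical: 4-cycle bursts with a direction change after every burst.
print(round(bus_efficiency(4, 4), 2))  # 0.5 -- half the cycles sit idle
print(round(bus_efficiency(4, 1), 2))  # 0.8 -- faster turnaround, fewer gaps
```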
Now in its second generation, the RLDRAM II uses a multiplexed address scheme that differs from the DDR2 approach: DDR2 uses 13 address lines plus two bank selects for a total of 15 signals, while the RLDRAM II uses 11 address lines and three bank selects for a total of 14 signals. Because the two pinouts are so similar, it's possible to design a single controller that handles either DDR2 or RLDRAM II memory devices.
Basically, the RLDRAM II offers the fastest random read/write cycle time (tRC)--between 15 and 20 ns--and the shortest bus turnaround time of any memory device. Today, 288-Mbit, 400-MHz devices are available from Micron. Also, 576-Mbit devices operating at 533 MHz are almost ready for sampling. A 144-contact fine-pitch ball-grid-array package houses all versions of the RLDRAM, including 9-, 18-, or 36-bit data-bus width options.
Faster and higher-density RLDRAMs are now in development. Micron expects to have engineering samples of a 1.125-Gbit device by late 2006 and a 2.5-Gbit chip in late 2007. Another high-speed contender is Fujitsu's fast-cycle DRAM, or FCRAM, with Toshiba providing an alternate source. Both companies offer devices with capacities of 256, 288, or 512 Mbits, available with 8-, 9-, 16-, or 36-bit data buses. These memories use burst-mode operations to stream data between the memory and the host system and can offer random read/write cycle times as short as 20 ns.
POWER: A GROWING CONCERN
Greater DRAM use in portable instruments has designers on a never-ending quest for lower-power operation. Currently, designers have two choices: the slow-refresh, low-voltage DRAM, and a new approach called CellularRAM--a memory with an SRAM interface and a pseudostatic DRAM core.
The latter method targets cell phones and other low-power handhelds. The CellularRAM specifications were co-developed by a team consisting of representatives from Cypress Semiconductor Corp., Infineon Technologies, Micron, and Renesas. (For details, go to www.cellularram.com.)
The CellularRAM pseudostatic memory includes an interface compatible with SRAM and burst NOR flash. It also provides high-bandwidth burst read and write capability. Operating from a 1.8-V supply, the memories can perform partial-array refresh to minimize active current (40 mA).
The on-chip self-refresh controller includes temperature compensation; thus, the refresh rate will change as the DRAM temperature changes. This ensures the memory cells stay at optimal charge. A deep power-down mode keeps power to a minimum when the CellularRAM isn't being accessed (160 to 250 µA).
Available in densities of 16, 32, 64, and 128 Mbits, the CellularRAM chip layout was designed for multichip packaging and system-in-package applications. Data-transfer rates run at 66, 80, or 104 MHz, depending on the speed grade.
The alternate approach to the CellularRAM leverages standard DRAM technology optimized for low-voltage operation. For instance, Elpida recently released a 256-Mbit super self-refresh (SSR) DDR DRAM that can reduce the refresh current by 95% versus standard DDR memories. Several vendors, including Elpida, Micron, and Samsung, offer Mobile RAM devices in capacities ranging from 64 to 512 Mbits. These memories are based on standard DDR or SDRAM technologies and come in either 16- or 32-bit data widths. Mobile DRAMs based on SDRAM cores are also available from Infineon.
The SSR technology adds on-chip error-correction circuitry to check and correct the data when the DRAM exits the self-refresh cycle. On-chip temperature sensors, commonly known as auto-temperature-compensated self-refresh (ATCSR), are used with the SSR technology to let the circuitry automatically adjust the self-refresh timing to compensate for internal temperature variations. Thus, self-refresh currents can be trimmed to as little as 40 µA at 25°C and only 150 µA at 70°C.
NEED MORE INFORMATION?
Cypress Semiconductor Corp.
Elite Semiconductor Memory Technology
Elpida Memory Inc.
Etron Technology Inc.
Fujitsu Microelectronics America Inc.
Infineon Technologies AG
Integrated Device Technology Inc.
Micron Technology Inc.
Nanya Technology Corp.
NEC Electronics America Inc.
Powerchip Semiconductor Company
Samsung Semiconductor Inc.
Texas Instruments Inc.
Toshiba America Electronic Components
Winbond Electronics Corp.