Memory: From DRAMs To Ferroelectrics
THE ABILITY TO STORE DATA IN EVERYTHING from silicon to crystal lattices has opened the door to a wide range of commercial storage products. Future storage structures may even permit holographic storage of terabits in small, half-inch cubes of crystalline material. Designers today have a wide choice of storage options, ranging from DRAMs and SRAMs on the volatile side to flash, ferroelectric, and other forthcoming technologies for nonvolatile storage.
Memory densities have advanced at a relentless pace, with the storage capacity of DRAMs quadrupling roughly every three years from their introduction by Intel Corp. (www.intel.com) in the early '70s through the late '90s. SRAMs have followed a similar cadence, also quadrupling about every three years, but staying one generation behind DRAMs in density. Memories started to hit a performance wall in the late '90s as DRAM densities went beyond 64 Mbits/chip. Internal loading of the memory bit lines on the chip began to overshadow the performance gains from scaling; first-generation 256-Mbit chips actually ran slower than their 64-Mbit cousins. So the mid-'90s saw the first half-step density increase, with DRAM capacity doubling, rather than quadrupling, from one generation to the next.
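The quadrupling cadence is easy to sanity-check with a little arithmetic. Here is a minimal sketch, assuming a hypothetical 1-Kbit starting point in 1971 and a perfectly clean three-year quadrupling, which real product schedules only approximate:

```python
# Sketch of DRAM density growth: quadrupling every three years.
# The 1-Kbit/1971 starting point and the clean cadence are illustrative
# assumptions; actual product generations only roughly track this curve.

def dram_capacity_bits(year, start_year=1971, start_bits=1024):
    """Projected capacity if density quadruples every three years."""
    generations = (year - start_year) // 3
    return start_bits * 4 ** generations

for year in (1971, 1980, 1989, 1998):
    mbits = dram_capacity_bits(year) / (1024 * 1024)
    print(f"{year}: {mbits:g} Mbit")
```

Running the projection forward lands on 4 Mbits around 1989 and 256 Mbits around 1998, in line with the generational milestones the industry actually shipped.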
SRAMs also hit a wall, leveling off at about 4 Mbits for designs based on the four-transistor/two-resistor (4T) or the six-transistor (6T) full-CMOS memory cell. Denser SRAMs required such large chip areas that they became much too expensive for many applications. Additionally, the high leakage currents of the 4T cell at 4-Mbit and higher densities ruled such memories out of many portable applications. The 6T cell let designers craft 1- and 4-Mbit SRAMs with much lower leakage currents than their 4T counterparts, but the larger chip area made those devices cost-prohibitive for most commercial applications. New deep-submicron processes with features below 0.13 µm have come to the rescue.
In addition to density improvements, memory access time has greatly improved over the decades. Access time gains come from two sources. First, improvements in processes let designers fabricate faster transistors. Second, major architectural enhancements and radical design approaches reduce internal overhead in DRAMs. The latter has led to the progression from standard page-mode devices to units with extended data-out capabilities, to the synchronous DRAM, and now to the double-data-rate (DDR) SDRAM.
Additionally, several offshoots targeted at graphics have added special registers or ports to accelerate video-data transfers. One novel architecture, the RDRAM from Rambus Inc. (www.rambus.com), employs a byte-serial burst-type transfer. Intel has adopted its high-speed interface for several families of high-performance PC motherboards. RDRAMs are now in widespread use in video-game applications as well.
The doubling rather than quadrupling of DRAM density led to the introduction of the 128-Mbit DRAM, followed by a better-performing 256-Mbit device in the late '90s. In 2000, 512-Mbit DRAMs entered sampling. Though a number of companies have presented papers detailing functional 1- and even 4-Gbit DRAM designs, most are still laboratory curiosities. Just one or two companies are sampling 1-Gbit devices to customers.
These higher-capacity memories will eventually make it into production as process features (gate lengths) go well below today's production 0.13-µm processes. As the memory densities increased, so did performance, with access times for DRAMs going from many hundreds of nanoseconds in the '70s to less than 2 ns/access when performing burst transfers on the latest DDR synchronous DRAMs or the Rambus RDRAMs. Similarly, SRAM access times have plummeted from microseconds to below 10 ns for standalone chips, and to just 1 or 2 ns for special on-chip cache memories.
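The sub-2-ns figure describes the time per data beat within a burst, not the random-access latency. A rough sketch of the arithmetic (the 800-MT/s channel rate and 400-MHz clock below are illustrative values in the range of Rambus- and DDR-class parts of the period, not vendor specifications):

```python
# Effective time per data beat in a burst transfer.
# The rates used here are illustrative assumptions, not datasheet values.

def time_per_beat_ns(transfers_per_second):
    """Nanoseconds per data beat at a given channel transfer rate."""
    return 1e9 / transfers_per_second

def ddr_transfer_rate(clock_hz):
    """DDR moves data on both clock edges: two transfers per cycle."""
    return 2 * clock_hz

print(time_per_beat_ns(800e6))                      # 800-MT/s serial channel
print(time_per_beat_ns(ddr_transfer_rate(400e6)))   # DDR with a 400-MHz clock
```

Either path works out to 1.25 ns per beat, which is how burst-mode devices post per-access numbers well under 2 ns while their initial-access latency remains tens of nanoseconds.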
Just making smaller DRAM cells, however, may not be enough. Yield losses and capacitive loading will limit density or performance. Novel process technologies are being used and explored for future memory generations. Some 3D approaches stack layers of memory cells one above the other. Others use multiple voltage levels in a memory cell so that each cell holds two or more bits' worth of data.
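The multiple-voltage-level idea can be sketched in a few lines: the cell's stored analog level is sensed and quantized into one of four bands, each band encoding two bits. The specific band boundaries and target voltages below are invented purely for illustration:

```python
# Sketch of 2-bit/cell multi-level storage: a cell's sensed voltage is
# quantized into one of four bands, each carrying a 2-bit value.
# The band boundaries and write targets (in volts) are invented for
# illustration; real devices use device-specific reference levels.

THRESHOLDS = [0.8, 1.6, 2.4]   # boundaries between the four bands

def read_two_bits(cell_voltage):
    """Map a sensed cell voltage to a 2-bit value (0..3)."""
    return sum(1 for t in THRESHOLDS if cell_voltage > t)

def write_voltage(two_bits):
    """Target voltage at the center of the band for a 2-bit value."""
    centers = [0.4, 1.2, 2.0, 2.8]
    return centers[two_bits]

assert read_two_bits(write_voltage(0b10)) == 0b10
print([read_two_bits(v) for v in (0.2, 1.0, 2.0, 3.0)])  # [0, 1, 2, 3]
```

The design tradeoff is visible even in this toy model: packing four levels into the same voltage window halves the noise margin between adjacent levels, which is why multi-level parts lagged single-bit parts in speed and endurance.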
Although stacking technology has been around for years, it has been way too expensive for commercial use. Mainstream chemical-mechanical polishing production technologies are now being used to flatten the wafer's surface. There also is a wafer-bonding/fracturing approach where several layers of memory cells can be formed on top of a control circuit. This allows the creation of a very dense and high-performance DRAM. Tachyon Semiconductor Corp. (www.tachyonsemi.com) has employed this approach in a 1-Gbit high-performance memory unveiled last year ("Re-Architected DRAMs Deliver Denser, Lower-Cost Storage," Electronic Design, Nov. 19, 2001, p. 29).
Relentless advances in process technology have also been leveraged by designers of nonvolatile storage devices. Bipolar ROMs and PROMs of the late '60s, with their kilobit storage capacities, gave way to the UV EPROM in the '70s, leading to multimegabit storage. For a short time in the late '70s and early '80s, designers shifted their interest to the magnetic bubble memory as an alternative to the UV EPROM or EEPROM. Data could be written and read, and it was retained when power was removed. But memory access time was too slow, the magnetic materials were difficult to manufacture, power consumption was too high, and the overhead support circuits were too expensive.
The UV EPROM, therefore, endured for a long time. But it finally gave way to EEPROM and now to flash memory. Today's flash memories have capacities of 512 Mbits using 1 bit/cell. The first devices to employ a two-bit/cell storage scheme for storage capacities of 1 Gbit/chip will be detailed by Toshiba Corp. (www.toshiba.com) next month at the International Solid State Circuits Conference (www.isscc.org) in San Francisco.
Intel pioneered the use of a 2-bit/cell scheme in its StrataFlash family with a 32-Mbit device introduced in 1996. In addition to their use in computing systems, high-capacity flash memories have found a home in many consumer applications such as digital film for electronic cameras and digital tape for MP3 music players.
Although standard flash memories include both write and erase capabilities, a number of cost-sensitive applications need only a write-once capability. A novel scheme developed by Matrix Semiconductor Inc. (www.matrixsemi.com) promises a low-cost write-once option. It employs a memory-cell layering scheme. But rather than a wafer bonding and fracturing approach, it uses the deposition of oxide and polysilicon layers to form multiple storage layers, each composed of polysilicon diodes and polysilicon interconnects.
Flash memories have one fundamental limitation: a wearout mechanism that limits the number of write cycles to about 1 million for the best devices available to date. Ferroelectric-based memory cells eliminate this limitation. But for now, memory capacities of FRAM-based chips are limited to a few megabits. Processing improvements over the next few years promise to boost capacities to hundreds of megabits and perhaps let them compete with DRAMs.
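Whether a 1-million-cycle limit matters depends entirely on how often a given location is rewritten. A back-of-the-envelope sketch (the workload numbers are hypothetical, chosen only to bracket the range):

```python
# Back-of-the-envelope flash endurance: years until a cell reaches its
# write-cycle limit at a given rewrite rate. The workload figures are
# hypothetical illustrations, not measurements of any real system.

ENDURANCE_CYCLES = 1_000_000   # best flash devices cited in the text

def years_to_wearout(writes_per_day):
    """Years before one cell exhausts its endurance at a steady rate."""
    return ENDURANCE_CYCLES / (writes_per_day * 365)

print(f"{years_to_wearout(10):.0f} years at 10 rewrites/day")
print(f"{years_to_wearout(100_000):.3f} years at 100,000 rewrites/day")
```

A camera's digital film is untroubled by the limit, while a location rewritten 100,000 times a day wears out in days. Ferroelectric cells, with no comparable wearout mechanism, sidestep the calculation entirely.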