Electronic Design

So, What Was That Memory Technology Again?

Fast-forward to the year 2017. You turn on your tablet PC and, like a Palm Pilot, it's instantly up and running right where you called it quits on your last search. There are no buzzing hard drives or fans, and all of the data is written to and read directly from the memory inside the device.

You're probably thinking that this isn't any big deal. It may seem like I've simply described a laptop with RAM used for temporary storage and some kind of flash memory replacing the magnetic hard drive. Not even close. That technology is expected this year.

Instead, the future PC will contain a single "universal memory" that will be fast enough to eliminate the need for temporary storage. It will also be large enough to handle all your multimedia needs and still run cooler and require less energy than any of today's memories.

Many scientists believe it will be possible to use carbon nanotubes (CNTs) for system memory in the near future, with complex logic farther out (see "Back To Nature For Next-Gen Semis"). For example, Professor Qing Jiang of the University of California, Riverside, and his research partner Jeong Won Kang discovered a multiwalled CNT structure in which an inner tube can oscillate within an outer tube at high frequency under a voltage stimulus. The inner tube's position indicates the nonvolatile logic state (Fig. 1 and 2).1

One company looking to provide an intermediate step to Dr. Jiang's future vision is a startup named Nantero, which employs CNT technology to build nonvolatile, nanotube-based RAM (NRAM). The company hopes NRAM will replace DRAM, SRAM, flash, and hard disks. With NRAM, memory cells are constructed using several CNTs suspended above a metal electrode. When a small voltage bias is applied to the tubes, they "sag" toward the electrode until making contact. At that point, the tubes are in the logic 1 state, and van der Waals forces hold them against the electrode even after the bias is removed, which is what makes the cell nonvolatile. Applying a release voltage pulls the tubes back away from the electrode, returning the cell to logic 0.
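As a rough illustration of that set/release behavior, here is a toy Python model of a single cell. The class and method names are invented for this sketch; a real NRAM cell is an analog electromechanical structure, not a software object.

```python
class NramCell:
    """Toy model of a suspended-nanotube memory cell (illustrative only).

    A set voltage bends the tubes into contact with the electrode
    (logic 1). The contact persists with no power applied, so the bit
    is nonvolatile. A release voltage springs the tubes back (logic 0).
    """

    def __init__(self):
        self.in_contact = False  # tubes suspended above the electrode -> 0

    def apply_set_voltage(self):
        self.in_contact = True   # tubes sag onto the electrode -> 1

    def apply_release_voltage(self):
        self.in_contact = False  # tubes pull back away -> 0

    def power_off(self):
        pass  # nothing changes: the contact state is retained

    def read(self):
        # Tube-electrode contact forms a low-resistance path = logic 1
        return 1 if self.in_contact else 0


cell = NramCell()
cell.apply_set_voltage()
cell.power_off()
print(cell.read())  # -> 1: the bit survives power loss
```

The `power_off()` no-op is the point of the sketch: unlike DRAM, nothing needs refreshing for the state to survive.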

Because the technology is built on top of standard semiconductor technology, NRAM's many benefits will include speeds approaching SRAM, densities that far exceed DRAM, and lower power consumption than DRAM and flash. NRAM also stands up well to harsh environments and scales well. In fact, Nantero created a working 22-nm memory switch and expects production, albeit using a larger process, to ramp up later this year.

Fujitsu, Intel, Samsung, Sharp, Spansion, and several other companies are working on resistance (or resistive) RAM. Also known as ReRAM or RRAM, this nonvolatile memory is built using metal oxides like titanium dioxide (TiO2). Current paths (in the form of filaments) appear in the TiO2 film when a sufficient voltage bias is applied. The filaments may then be broken (reset), resulting in higher resistance, and reformed (set) with the appropriate voltages.
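The set/reset cycle can be sketched as a toy two-state model. The resistance values below are purely illustrative placeholders, not measured device parameters:

```python
HIGH_RES = 1e6  # ohms: filament broken (illustrative value)
LOW_RES = 1e3   # ohms: filament formed (illustrative value)


class ReramCell:
    """Toy model of a metal-oxide ReRAM cell.

    A set pulse forms a conductive filament through the oxide film
    (low resistance); a reset pulse ruptures it (high resistance).
    The stored bit is read back by measuring the cell's resistance.
    """

    def __init__(self):
        self.resistance = HIGH_RES  # as-fabricated: no filament

    def set_pulse(self):
        self.resistance = LOW_RES   # filament forms

    def reset_pulse(self):
        self.resistance = HIGH_RES  # filament ruptures

    def read(self):
        return 1 if self.resistance == LOW_RES else 0


cell = ReramCell()
cell.set_pulse()
print(cell.read())  # -> 1
cell.reset_pulse()
print(cell.read())  # -> 0
```

Because the bit is encoded as a resistance rather than a stored charge, the state persists with no power applied, just as with NRAM.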

These companies feel ReRAM will be 100 times faster than flash, yet scale much better than other advanced memories like phase-change RAM (PCM or PRAM) and magnetoresistive RAM (MRAM) (see "New And Emerging Memory Technologies"). Intel has since announced at its Intel Developer Forum in April that it plans to go into production with PCM later this year.

If your system requires external memory (as most systems still do), your choices are virtually limitless (Table 1). So, how do you decide which memory technology is best for your design? The best place to start is likely the protocol or protocols you've selected for your design. Whether it's a standard or proprietary protocol, several factors, including speed and bus configuration (parallel or serial), dictate a starting point (see "High-Speed Serial Technology Drives Board Interconnects").

Next, you need to consider a slew of parameters, such as volatility, the number of times the memory can be written, type of application, read and write speeds, cost per byte, and amount of memory required (Table 2). Other major considerations include form factor, package pinout, and scalability. For example, if you select a parallel architecture, you should consider how the address and data lines are designed, how many other manufacturers have drop-in replacements, and how your printed-circuit board (PCB) will be updated if more address and data lines are needed in the future.
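A requirements checklist like this can be mechanized. The sketch below filters a hypothetical candidate list by a few of the parameters above; every name and figure in the table is illustrative, not vendor data:

```python
# Hypothetical candidate table -- the endurance and cost figures are
# made-up placeholders for illustration, not real datasheet numbers.
CANDIDATES = [
    {"name": "SRAM", "volatile": True,
     "write_cycles": float("inf"), "cost_per_mb": 10.0},
    {"name": "DRAM", "volatile": True,
     "write_cycles": float("inf"), "cost_per_mb": 0.5},
    {"name": "NOR flash", "volatile": False,
     "write_cycles": 1e5, "cost_per_mb": 1.0},
    {"name": "NAND flash", "volatile": False,
     "write_cycles": 1e5, "cost_per_mb": 0.1},
]


def shortlist(need_nonvolatile, min_write_cycles, max_cost_per_mb):
    """Return candidate names meeting volatility, endurance, and cost needs."""
    return [c["name"] for c in CANDIDATES
            if (not need_nonvolatile or not c["volatile"])
            and c["write_cycles"] >= min_write_cycles
            and c["cost_per_mb"] <= max_cost_per_mb]


# A design needing nonvolatile storage, modest endurance, and low cost:
print(shortlist(need_nonvolatile=True,
                min_write_cycles=1e4,
                max_cost_per_mb=0.5))  # -> ['NAND flash']
```

In practice the tradeoff space includes many more axes (speed, form factor, second sources), but the shape of the decision is the same: hard requirements prune the table first, then cost and density break the ties.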

The tradeoffs don't end there, though. Many advances in memory technology over the past few years present several more considerations. For instance, Samsung and SST offer hybrid memories that incorporate a mix of volatile and nonvolatile technologies, such as flash and RAM. SST's new hybrid All-in-One Memory even combines the benefits of RAM, NAND flash, NOR flash, and a memory controller in one device (see "I Wish My Memory Were As Dynamic As the All-in-One Memory").

Many industries, including military, aerospace, automotive, and medical, impose special requirements on your system. If your design has such requirements, like operating in extreme temperatures or radiation hardening, consider companies that specialize in memories for those environments. These include Austin Semiconductor, Aeroflex, Celis Semiconductor, Honeywell, Maxwell Technologies, Pyramid Semiconductor, and QP Semiconductor.

If you need a memory stackup solution, companies like Irvine Sensors, which handles stacked ball-grid arrays (BGAs), thin small-outline packages (TSOPs), bare dies, and custom stacking, might offer an answer. You should also try visiting helpful Web sites like Denali's ememory.com, which includes a fully searchable database of datasheets, specifications, and simulation models for thousands of memory components.

Another useful site, RAMpedia.com, is a DRAM memory encyclopedia and reference tool to help hardware designers with DRAM memory subsystems and address several memory-related challenges. "Designing a memory board means a multitude of technical and industry issues must be researched and their impact on the design interpreted," says Phan Hoang, director of research and development at Virtium Technology.

"Thermal challenges, the need for extended functionalities, component availability, and end-of-life issues: these demand considerable time and attention from the engineer building a competitive design," adds Hoang. "RAMpedia.com [provides] specs and simulation data in one convenient base of information" (see "Table 3: Socket (Module) Versus Chip On-Board Tradeoffs").

If you have the "luxury" of using an application-specific standard product (ASSP) or are designing an ASIC as part of your system design, you're looking at some tough decisions. Foremost among these is whether you'll use embedded memory and, if so, how much should be included as part of a system-on-a-chip (SoC) design (Table 1, again).

Guidelines are available for designers facing these choices (see "NVM Integration Ensures A Successful Experience"). For example, if your SoC design requires more advanced embedded-memory IP solutions to handle multiple memories and include integrated testing, consider companies like Aldec, ARM, Denali, Faraday Technology, Inapac Technology, Sonics, and Virage Logic.

You could even opt for a system-in-package (SiP), package-in-package (PiP), or a stacked-die approach (see "Die Stacking Solves The Mobile Device Memory Crunch"). Companies selling multichip-package (MCP) memory and die-stacking technology include Hynix, Micron, NEC, SacTec, Samsung, Toshiba, and Vertical Circuits.

IBM's new PCB-like method of chip design, 3D chip stacking, stacks dies vertically and connects them using through-silicon vias. This considerably cuts down on wiring when compared to SiP, PiP, or MCP technologies. Using the new 3D technology, a memory die (or dies) could sit directly on top of a controller, which in turn could sit on top of a processor and so on until you have an entire system stacked up vertically.

"[This new technology] allows [for] more interconnects between chips. If chips sit next to each other [in] a package, the wires that connect them together have to be very long (at least as wide as the chip, maybe 1 to 2 cm)," says Steven J. Koester, manager of IBM's Exploratory CMOS Integration. "Therefore, in order to keep the resistance low, the wires have to be 'fat,' and so you cannot have too many wires connecting the chips to each other," he adds. "In 3D, since the chips are stacked, they are very close together, so that the wires between them are very short ([around] 1/1000 times shorter). Therefore, the 3D interconnects can be very narrow, allowing me to have many more of them connecting the chips. As long as we place the chip components that need to 'talk' to each other directly on top of each other, we can eliminate the need for a lot of rerouting wiring on the chip."
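Koester's resistance argument checks out with a back-of-the-envelope calculation using R = ρL/A: if a 3D via is 1000 times shorter than a package-level wire, its cross-sectional area can shrink by the same factor of 1000 without raising the resistance. The dimensions below are illustrative, not IBM process data:

```python
def wire_resistance(length_m, width_m, thickness_m, rho=1.7e-8):
    """Resistance of a rectangular wire, R = rho * L / A.

    rho defaults to the bulk resistivity of copper (~1.7e-8 ohm*m).
    """
    return rho * length_m / (width_m * thickness_m)


# Package-level wire: 1 cm long, 10 um x 10 um cross section (illustrative)
r_package = wire_resistance(1e-2, 10e-6, 10e-6)

# 3D via: 1000x shorter, with 1000x smaller cross-sectional area
# (each side shrunk by sqrt(1000))
scale = 1000 ** 0.5
r_via = wire_resistance(1e-5, 10e-6 / scale, 10e-6 / scale)

print(r_package, r_via)  # both ~1.7 ohms: same resistance, far less metal
```

The equal resistances are the whole point: because each 3D interconnect occupies 1000 times less cross-sectional area for the same resistance, vastly more of them fit between two stacked dies than between two side-by-side ones.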

So there you have it, your memory roadmap from now until 2017 and... wait... what was my conclusion again?

1. "Electrostatically Telescoping Nanotube Nonvolatile Memory Device," Nanotechnology, Vol. 18, IOP Publishing.
