Every processor needs storage. A few systems may get by with a single type, but more often a hierarchy of technologies is employed, such as a server with a redundant array of disks (RAID) storage system (Fig. 1). Each component brings something to the system, be it high capacity, fast access, or nonvolatility.
The mix is changing, too, as new technologies come online and existing ones improve. What was once unheard of is now commonplace. For example, small netbooks are available with only solid-state memory, leading to better battery life due to lower power requirements.
The drive toward higher capacities requires tradeoffs and different approaches to implementation. For instance, multilevel-cell (MLC) flash memory is delivering higher capacities than single-level-cell (SLC) flash memory, but at a price in terms of performance and hardware lifetime.
Likewise, hard drives are shrinking. The 2.5-in. drive is a far cry from the full-height, 5.25-in. hard drive of old, but the smaller drives have a higher capacity and faster response time. There is also wider use of RAID to provide more reliable storage as well as smaller and more scalable solutions.
FORGET ME, NOT
Random access memory (RAM) is the centerpiece of commercial computing devices. These days, it’s typically volatile storage, with implementations such as static RAM (SRAM) and dynamic RAM (DRAM) having replaced nonvolatile magnetic core memory. New technologies like ferroelectric RAM (FRAM) and magnetoresistive RAM (MRAM) look to bring nonvolatility back into the fold.
Standalone SRAM chips are still used, but most SRAM is found on-chip in microcontrollers, providing functions ranging from register files to multilevel cache. Its primary feature is high performance. The downsides tend to be chip real estate and higher power requirements.
DRAM is where things get more interesting and varied. On-chip DRAM is becoming more common, although the differing semiconductor technologies for DRAM and logic have tended to keep the two on separate chips. Furthermore, DRAM’s higher capacity tends to move it outside the processor chip. As a result, designers can choose how much capacity to provide, or end users can even add their own memory.
Embedded designers have a number of other challenges when choosing DRAM. That’s because the microprocessors in use have a wide range of performance characteristics, as does DRAM. Embedded designers also need to consider product lifetime, whereas PC users tend to chase the latest and greatest and lowest cost per bit when it comes to memory. The move to virtualization is pushing for ever-higher densities, fulfilling the adage that there is no such thing as enough memory.
At the low end resides the venerable but still widely used synchronous DRAM (SDRAM), at least in embedded applications. SDRAM has long been available and inexpensive, and a major benefit these days is its easy interface requirements. Its slower speed compared to DDR2 and DDR3, which serve the bulk of PC-based systems, is an advantage to designers trying to mate it with relatively slow processors. The downside is capacity and efficiency compared to DDR2 and DDR3.
Another problem microprocessor designers are running into is speed. Pushing the upper speed bound usually means dragging the lower bound up the scale. This isn’t a problem when dealing with the latest x86 gigahertz multicore processors from the likes of AMD, Intel, and VIA, but becomes so when trying to support 200-MHz processors.
Of course, the processor clock could be sped up with a corresponding increase in cost and power requirements. These two factors are definitely not on the list of preferable features. Almost any microcontroller can handle SDRAM. Some can handle DDR2, and few can handle DDR3’s higher speeds.
DDR2 is the commodity king. It handles the bulk of server, PC, and laptop systems, but those are quickly moving toward DDR3. Still, DDR2 will be the darling of embedded systems for some time to come even as its availability begins to fall and prices begin to climb. This won’t happen overnight, but it’s trending in that direction. The challenge in the embedded market is meeting DDR2’s performance requirements for lower-end micros.
Samsung’s new 16-Gbyte DDR3 memory targets server motherboards designed to handle only DDR3 memory (Fig. 2). When using these new modules, a server motherboard can host up to 192 Gbytes of DDR3 memory at transfer rates up to 1333 Mbits/s with a 60% power consumption improvement over DDR2. Most higher-end motherboards have chip sets that handle DDR2 or DDR3. DDR3-only chip sets are typically smaller and more efficient.
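For a sense of scale, the 1333-Mbit/s figure is a per-pin transfer rate. Assuming a standard 64-bit-wide module (our assumption for illustration, not a Samsung spec), the aggregate module bandwidth works out as follows:

```python
# Converting the per-pin rate into module bandwidth. The 64-bit bus
# width is an assumption (standard for DDR3 DIMMs), not a quoted spec.
transfers_per_s = 1333e6   # 1333 MT/s per data pin
bus_width_bytes = 64 // 8  # 64-bit data bus = 8 bytes per transfer

bandwidth = transfers_per_s * bus_width_bytes  # bytes/s
print(f"~{bandwidth / 1e9:.1f} Gbytes/s per module")
```

That’s roughly 10.7 Gbytes/s per module, which is why DDR3-only server chip sets can afford to be simpler.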
On the horizon, Innovative Silicon’s Z-RAM single-transistor memory technology is supposed to be more scalable and area-efficient than existing DRAM technologies. Hynix and AMD have licensed Z-RAM technology, but for different purposes. Hynix may incorporate it into mainline memory, while AMD is looking at large on-chip L3 caches. Z-RAM likely won’t show up for another year or so, but it could significantly impact the market when it arrives.
An interface on the horizon is serial port memory. Designed to bring high-speed serial interfaces to memory, it’s sponsored by the Serial Port Memory Technology Working Group. In theory, it will cut the number of pins needed for memory by 40% and deliver 3.2- to 12.6-Gbyte/s throughput. It initially addresses multimedia mobile devices where real estate is at a premium and power must be minimized.
NONVOLATILE SOLID-STATE STORAGE
DRAM by its nature is volatile, but nonvolatile storage is always part of the system solution. Nonvolatile solid-state storage has seen dramatic change over the years with rising capacities and falling costs. A range of technologies is now in general use, from flash to MRAM and FRAM.
Read-only memory (ROM) is a well-known nonvolatile storage technology that’s showing more traction in standard microcontrollers. It has always been a factor in custom chips because it’s the most efficient nonvolatile storage technology. Unfortunately, ROM can’t be changed like the other nonvolatile storage technologies covered here.
One example of ROM use involves Luminary Micro’s LM3S9000 microcontroller, which has runtime libraries that provide StellarisWare Library services. This is in contrast to typical custom ROM-based microcontrollers that contain the entire application. In Luminary Micro’s case, the main application that uses the ROM code is stored in a device using another nonvolatile memory. The ROM may have boot code allowing the main application to come from a range of sources, including via a network connection.
Flash memory covers a wide range of solutions. FRAM and MRAM, which hold lots of promise and are currently used for important yet niche applications, have similar characteristics.
These nonvolatile memories effectively replace SRAM, operating at SRAM speeds. However, they don’t have the write lifetime issues of flash memory. This allows them to be used for primary and secondary storage. Capacities are growing and costs are dropping, but they still trail both SRAM and flash. This leads to some interesting combinations, like the RAID controller that was mentioned earlier.
The 8051-based VRS51L3xxx microcontroller family from FRAM vendor Ramtron combines 64 kbytes of flash memory, 4 kbytes of SRAM, and up to 8 kbytes of FRAM (Fig. 3). The flash memory is used for program storage and long-term, slow-changing data, while the SRAM and FRAM are used for read/write data, with FRAM handling nonvolatile chores.
FRAM and MRAM also show up in plug-compatible versions that can replace SRAM and flash parts. Everspin’s MR2Axx MRAM line is pin-compatible with standard 8- and 16-bit SRAM parts. These parts are also available in ball-grid array (BGA) packages with 35-ns read/write times and extended industrial temp versions. Up to 512 kbytes of Everspin’s MRAM parts are used in Emerson Network Power’s Freescale MPC864xD-based MVME7100 single-board computer (Fig. 4). Look for 16-Mbit parts later this year as well as automotive-compatible parts.
Coming soon is phase-change memory (PCM) from Numonyx. As with Z-RAM, it will have to challenge entrenched technologies, but its performance and scalability promise to push it past the competition once it becomes established. It’s still a couple of years away, but keep an eye on this technology.
The established technology of the day is flash memory, encompassing a range of implementations. Flash memory found in most standalone flash products exhibits a higher density than the flash incorporated in microcontrollers. That’s because on-chip flash must be implemented using the same process as the logic circuits.
Standalone flash memory comes in a range of formats, too, from chips to removable device formats such as Compact Flash, SD/XD, MiniSD, MicroSD, Memory Stick, and, of course, USB flash drives. Many of these are employed in embedded applications as well, leading to more rugged, industrial versions like WinSystems’ 16-Gbyte industrial-grade Compact Flash (Fig. 5). Its dual-channel operation supports sustained read transfers up to 40 Mbytes/s and writes using interleaving up to 30 Mbytes/s.
For embedded applications, even more options are available. Modules that plug into integrated drive electronics (IDE) headers are common replacements for hard drives. Initially, the capacity of these flash drives was low. However, it has grown significantly, allowing these devices to move from boot chores to a complete replacement of hard drives in many applications.
Western Digital Solid State Storage, formerly Silicon Systems, is one source of flash drives that utilize the Small Form Factor (SFF) SIG Silicon Blade form factor. The Silicon Drive Blade is a latching, rugged alternative to the 10-pin module also available from Western Digital (Fig. 6). Available from a number of sources, it plugs into the 10-pin header found on most PC motherboards.
Form-factor decisions tend to pale against other technology choices when it comes to flash memory. NAND versus NOR and SLC versus MLC technologies introduce a host of tradeoffs that designers must consider. No one approach satisfies all application requirements. In fact, a mix of technologies is appearing in some more demanding applications.
Some general specs from Toshiba provide insight into these tradeoffs. NAND erase times run about 2 ms versus 900 ms for NOR, and NAND capacities are roughly four times those of NOR, which reaches 256 Mbits and growing. On the other hand, NOR’s read speeds, which clock at 103 Mbytes/s, are at least four times faster than NAND’s. NOR’s write speed, though, is on the order of 0.5 Mbytes/s versus 8 Mbytes/s for SLC NAND.
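As a rough illustration of what those write rates mean in practice, here is the time to move one megabyte at the quoted NOR and SLC NAND write speeds (simple arithmetic on the figures above, not vendor benchmarks):

```python
# Time to write 1 Mbyte at the quoted rates (illustrative arithmetic only).
NOR_WRITE_MBPS = 0.5   # Mbytes/s
NAND_WRITE_MBPS = 8.0  # Mbytes/s, SLC

size_mb = 1.0
nor_write_s = size_mb / NOR_WRITE_MBPS    # 2.0 s
nand_write_s = size_mb / NAND_WRITE_MBPS  # 0.125 s

print(f"NOR write:  {nor_write_s:.3f} s")
print(f"NAND write: {nand_write_s:.3f} s")
print(f"NAND writes {nor_write_s / nand_write_s:.0f}x faster")  # 16x
```

The 16x write-speed gap is a big part of why NAND dominates bulk data storage while NOR remains attractive for execute-in-place code.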
The SLC versus MLC tradeoff is similar. MLC offers higher density, but at a significant loss of write lifetime. All flash technologies share this limit, which makes alternatives like MRAM and FRAM desirable. If those technologies could approach or exceed flash capacity for a similar price, then there would be a major change in the memory landscape. Unfortunately, that’s unlikely in the near term.
This means that wear-leveling techniques are becoming more important, especially given MLC’s limitations in this area and its significantly higher capacity. The target for hard-drive replacement is a five-year lifetime. Though this is sufficient for enterprise solutions, it may not suit embedded applications with longer lifetimes. As a result, designers must pay closer attention to a wider range of specifications than in the past.
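A back-of-the-envelope endurance estimate shows why MLC makes wear leveling critical. All of the numbers below, including the program/erase cycle counts, the daily write volume, and the wear factor, are illustrative assumptions typical of the era, not vendor specs:

```python
# Rough flash endurance estimate. Cycle counts and write volume are
# assumptions for illustration, not any vendor's datasheet figures.
def lifetime_years(capacity_gb, pe_cycles, daily_writes_gb, wear_factor=1.0):
    """Years until the rated program/erase cycles are exhausted.

    wear_factor < 1.0 models write amplification and uneven wear.
    """
    total_writes_gb = capacity_gb * pe_cycles * wear_factor
    return total_writes_gb / daily_writes_gb / 365

# 32-Gbyte drive, 20 Gbytes written per day:
slc = lifetime_years(32, 100_000, 20)  # ~100k cycles assumed for SLC
mlc = lifetime_years(32, 10_000, 20)   # ~10k cycles assumed for MLC

# With a 10x write-amplification/uneven-wear penalty, MLC falls below
# the five-year hard-drive-replacement target:
mlc_worst = lifetime_years(32, 10_000, 20, wear_factor=0.1)

print(f"SLC ideal: {slc:.0f} yr, MLC ideal: {mlc:.0f} yr, "
      f"MLC with poor leveling: {mlc_worst:.1f} yr")
```

With perfect wear leveling even MLC looks comfortable; the moment writes concentrate on a few blocks, the five-year target is in jeopardy.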
Wear leveling can be performed in hardware or software. Many microcontroller designs attach directly to “raw” flash and typically incorporate wear leveling in the device drivers. Products like Datalight’s FlashFX Pro family handle a range of NAND and NOR flash devices in addition to providing the same interface for NAND flash controllers.
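A minimal sketch of the dynamic wear-leveling idea: each write is steered to the least-erased free block so no single physical block wears out early. Real flash translation layers also handle static wear leveling, bad blocks, and ECC; this toy model shows only the allocation policy and is not based on any particular product:

```python
# Toy dynamic wear leveler: logical blocks are remapped so each write
# lands on the free physical block with the fewest erases.
import heapq

class WearLeveler:
    def __init__(self, num_blocks):
        self.erase_counts = [0] * num_blocks
        # Min-heap of (erase_count, physical_block) for free blocks.
        self.free = [(0, b) for b in range(num_blocks)]
        heapq.heapify(self.free)
        self.mapping = {}  # logical block -> physical block

    def write(self, logical_block):
        # Retire the old physical block: erase it and return it to the pool.
        old = self.mapping.get(logical_block)
        if old is not None:
            self.erase_counts[old] += 1
            heapq.heappush(self.free, (self.erase_counts[old], old))
        # Allocate the least-worn free block for the new data.
        _, phys = heapq.heappop(self.free)
        self.mapping[logical_block] = phys
        return phys

wl = WearLeveler(num_blocks=4)
for _ in range(100):
    wl.write(0)  # hammer a single logical block
print(wl.erase_counts)  # 99 erases spread evenly across all four blocks
```

Even though one logical block absorbs every write, the erase counts stay within one of each other, which is exactly the behavior that extends MLC lifetimes.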
Hardware-based interfaces offer a number of important advantages, including a consistent microprocessor interface. This can have more of an impact than most designers appreciate, given how frequently “raw” flash changes. With a hardware controller, moving to newer flash-memory chips typically doesn’t require major accommodations. Without one, each change is one more issue that requires at least device-driver updates.
SandForce’s SF-1500 SSD controller highlights this approach (Fig. 7). Specifically targeted at MLC flash, it delivers a minimum five-year lifetime and throughput on the order of 30k IOPS (I/Os per second) for random read/write and 250 Mbytes/s for sequential read/write operations. This translates to 5k IOPS/W versus 20 IOPS/W for a hard disk.
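Working backward from the quoted efficiency figures gives a feel for the power budgets involved. The wattages below are inferred from the article’s numbers, and the hard-disk IOPS figure is a typical assumption, not a SandForce spec:

```python
# Back-calculating power from the quoted IOPS and IOPS/W figures.
# hdd_iops is an assumed value for a typical drive on random I/O.
ssd_iops, ssd_iops_per_watt = 30_000, 5_000
hdd_iops, hdd_iops_per_watt = 150, 20

ssd_watts = ssd_iops / ssd_iops_per_watt  # implied SSD power draw
hdd_watts = hdd_iops / hdd_iops_per_watt  # implied HDD power draw

print(f"SSD: ~{ssd_watts:.1f} W, HDD: ~{hdd_watts:.1f} W")
print(f"Efficiency ratio: {ssd_iops_per_watt / hdd_iops_per_watt:.0f}x")
```

The implied power draws are comparable; the 250x efficiency gap comes almost entirely from the SSD’s enormous random-I/O advantage, not from lower wattage.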
The DuraClass technology employed by SandForce also implements Redundant Array of Independent Silicon Elements (RAISE), essentially RAID with chips. This, combined with advanced dynamic wear leveling and advanced error correction coding (ECC) support, allows a SandForce-supported solid-state drive (SSD) to reach the lifetimes and performance requirements for enterprise storage.
Alternatives will be hard-pressed to match this unless similar approaches are taken to mask the limitations of MLC flash. For example, many alternatives force daily write restrictions to attain a guaranteed five-year lifetime. SandForce can support a single-chip controller solution in a 512-Gbyte, 1.8-in. SSD.
SSDs are part of the mass-storage solution set. They have killed the 1-in. hard-drive market and are increasing their share in the 1.8-in., 2.5-in., and even the 3.5-in. market. They’re also making a big difference in form-factor solutions that don’t follow the normal hard-drive configurations. That’s because SSDs can easily be placed on a circuit board, an option difficult to attain with a hard drive.
Nonetheless, hard disks still beat SSDs when it comes to the upper limit on capacity. They also win from a price/gigabyte standpoint. The boundary where an SSD will be used instead of a hard disk continues to move, but this simply means more options for designers and users alike.
The 1.8-in. drive is the favorite for mobile devices. This is where the choice between flash and hard drives is more difficult for consumers. It’s easier for designers, though, since SSDs and hard drives are both readily available in this form factor. (Price and capacity tradeoffs still exist.)
Most of the action is in the 2.5-in. space. It includes external drives like Fujitsu’s 500-Gbyte Handy Drive (Fig. 8). This capacity was the top end for 3.5-in. drives not too many months ago.
The form factor has also significantly influenced server design, since a large number of drives can easily fit into a 1U rack enclosure. Even more important, that number greatly exceeds the minimum for RAID configurations, leading to growth in this controller market. An eight-drive RAID system is no longer a novelty. It’s a standard option, with even larger drive counts showing up in high-end storage systems.
The capacity of a 2.5-in. drive still pales compared to its 3.5-in. sibling. Size isn’t everything when it comes to RAID systems, though, where rebuild times are shorter for smaller drives.
Don’t count out the 3.5-in. market. Drives like Seagate’s Barracuda LP are coming in with 2 Tbytes of storage looking to fill the capacity cravings for video storage in digital video recorders (Fig. 9). If the movie studios ever recognize the opportunity they have with this growing amount of storage, the 3.5-in. drive market will go through the roof. As is, it might be tough to keep up with demand.
RAID continues to play a part with 3.5-in. drives, especially for consumer applications. However, keeping it hidden from users is crucial. It’s easy to sell a consumer on adding more storage, or even on giving up some capacity to improve reliability via RAID. Understanding the difference between RAID 1 and RAID 5 is a whole other matter.
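The capacity side of that tradeoff is easy to show. For a hypothetical eight-drive array of 500-Gbyte disks, RAID 1 mirroring and RAID 5 distributed parity yield quite different usable capacities, even though both survive a single drive failure:

```python
# Usable capacity for a hypothetical eight-drive array of 500-Gbyte disks.
n, size_gb = 8, 500

raid1_usable = n * size_gb // 2   # mirrored pairs: every byte stored twice
raid5_usable = (n - 1) * size_gb  # one drive's worth of space goes to parity

print(f"Raw:    {n * size_gb} Gbytes")
print(f"RAID 1: {raid1_usable} Gbytes")
print(f"RAID 5: {raid5_usable} Gbytes")
```

RAID 5 recovers most of the raw capacity at the cost of parity computation and longer rebuilds, which is precisely the nuance a consumer product needs to hide.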
No coverage of storage would be complete without mentioning the increasing importance of interconnects. For consumer-oriented products and a wide range of embedded applications, this means USB and SATA. USB is an indirect interface for hard drives and potentially a direct interface for flash drives.
External SATA, or eSATA, is cropping up in a number of products, including external drives, but it will complement rather than displace USB. USB 3.0 will arrive in time to address the higher-throughput drives. For now, though, High Speed USB 2.0 will suffice with its 480-Mbit/s transfer rate.
SAS and Fibre Channel will be found at the enterprise level. Fibre Channel systems will often comprise SATA or SAS hard drives and potentially, or rather eventually, SSDs.
There are more options than ever when it comes to storage, but with so many alternatives, the choices won’t be easy.