The Storage Hierarchy Gets More Complex

March 10, 2011
New technologies and continued demand for more storage at all levels push the envelope for faster memories with more capacity that use less power in smaller footprints.

Figures: 1. Memory hierarchy; 2. Micron's DDR3 BGA chips; 3. SFF-SIG's RS-DIMM memory module; 4. Rambus FlexMode architecture; 5. Seagate 3-Tbyte Barracuda XT; 6. Viking SATADIMM and SATACube3; 7. HighPoint RocketHybrid; 8. Fusion-io ioDrive Octal

Storage architectures may be simple for microcontrollers (Fig. 1). However, they get rather complex as applications become mobile, virtualized, and personalized. Multicore, many-core, and cluster architectures similarly blend a wide range of storage technologies into a single box. High-end microprocessors incorporate multiple cache levels with a surprising variety of interconnect and cache-coherence schemes.

Not too long ago, a cache miss would invoke a chain of events that extended only to a nearby hard drive. Now the effects may ripple through solid-state drives (SSDs) and hard drives or move out through the cloud or local-area network (LAN), possibly using iSCSI to bring a page into a virtual memory system. And it all would be handled transparently with respect to the application.

Still, designers, developers, managers, and users need to consider the type and quantity of storage a system will employ and how it will be configured. The challenge is harder than in the past because of the variety of options available.

DRAM Dynamics

DRAM continues to get bigger, faster, and cheaper. Double-data-rate 3 (DDR3) dual-inline memory modules (DIMMs) top out at about 16 Gbytes right now. They run at 533 to 800 MHz, supporting 1066 to 1600 Mtransfers/s. Standard DDR3 runs at 1.5 V, but the latest low-power DDR3L uses 1.35 V for significant power and cooling savings.
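
A quick sanity check on those figures: peak bandwidth for a standard 64-bit DIMM follows directly from the transfer rate, as the short calculation below shows. These are theoretical peaks, not sustained throughput.

# Rough peak-bandwidth arithmetic for a standard 64-bit DDR3 DIMM.
# DDR moves data on both clock edges, so transfers/s = 2 x the I/O clock.
BUS_WIDTH_BYTES = 8  # 64-bit data bus

for clock_mhz in (533, 667, 800):
    transfers_per_s = 2 * clock_mhz * 1_000_000            # e.g., 1066 MT/s
    peak_bytes_per_s = transfers_per_s * BUS_WIDTH_BYTES
    print(f"{clock_mhz} MHz I/O clock: {transfers_per_s / 1e6:.0f} MT/s, "
          f"{peak_bytes_per_s / 1e9:.1f} Gbytes/s peak")
# 800 MHz works out to 1600 MT/s and 12.8 Gbytes/s, the familiar PC3-12800 rating.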

DIMMs and small-outline DIMMs (SODIMMs) are the norm for desktops, servers, and laptops, but embedded storage requirements are equally insatiable. BGA (ball-grid array) parts like the DDR3 chips from Micron are the form factor of choice for mobile, industrial, and rugged applications (Fig. 2). Package-on-package stacking of DDR3 memory on processors is common in high-end mobile devices like Apple’s iPad (see “Inside the Apple iPad” at electronicdesign.com).

BGA packaging provides memory for rugged applications, but there’s demand for rugged storage that isn’t soldered down. The Small Form Factor Special Interest Group’s (SFF-SIG) RS-DIMM platform addresses this space (Fig. 3). The 67.5- by 38-mm module stands 7.36 mm high and supports nine- and 18-chip designs. The pinout on the module’s Samtec connector resembles the standard DDR3 DIMM layout. The standard also specifies an optional Serial Advanced Technology Attachment (SATA) interface.

DDR3 has taken over all but the replacement market for other memory in the desktop, laptop, and server arenas. It definitely hasn’t displaced DDR2 in embedded designs, though, where compatibility and slower speeds are common. The challenge for chip and system designers is that DDR3’s low-power, high-capacity, and cost advantages are significant. Still, many microcontrollers simply do not have the speed or capacity requirements to justify DDR3, yet their on-chip storage is insufficient.

Graphics DDR version 5 (GDDR5) memory is based on DDR3. This helps keep the cost down and simplifies system design since the design rules are on par with DDR3. GDDR5 increases the number of data lines compared to its predecessor. Its home right now is in high-performance graphics and supercomputing environments.

The various DDR implementations to date employ single-ended signaling. So far the designs have kept up with the signaling limitations, but this is likely to change as faster speeds are attained. The high-speed serial interfaces like PCI Express, USB 3.0, SATA, and Serial-Attached SCSI (SAS) are all differential in nature. Differential signaling is the likely path for DDR as well.

The Rambus Terabit Initiative is the company’s proposal for a differential-based signaling system for the next generation of memory (see “Possible Differential Path To 20-Gbit/s Memory” at electronicdesign.com). The company is demonstrating 20-Gbit/s serializer-deserializers (SERDES) to handle the transfers.

The FlexMode design defines an interface that could handle DDR3, GDDR5, and the new differential-based signaling using the same set of pins, though for different purposes, because the differential pairs require twice as many lines (Fig. 4).

The approach trades off control/address (C/A) pins for additional differential data pins. The C/A signals are also differential and run at a higher data rate, so fewer of them are needed.

The Serial Port Memory Technology (SPMT) Consortium is taking another differential approach. Its solution targets mobile devices and uses a low-voltage differential signaling (LVDS) system. Like PCI Express, it is self-clocking and scales by adding lanes. A 20-pin implementation has a 6-Gbyte/s bandwidth.

Nonvolatile Storage

NAND and NOR flash technologies remain the centerpieces for nonvolatile storage, though other technologies like magnetoresistive RAM (MRAM), ferroelectric RAM (FRAM), and phase-change memory (PCM) are gaining ground. Single systems often use a mix of technologies. A microcontroller-based redundant array of independent disks (RAID) system might utilize NAND or NOR flash for program storage and MRAM, FRAM, or PCM for RAID data tables, replacing battery-backed dynamic RAM (DRAM).

Storage capacity for all of these technologies continues to grow, with NAND remaining in the lead. NAND’s lead is due to the increased use of multi-level-cell (MLC) parts, although single-level-cell (SLC) NAND flash still provides better throughput, lifetime, and reliability at a higher cost per bit. MLC is also used with NOR technology.

MLC NAND flash is found in most USB flash drives and other removable storage cards. It is even finding its way into high-capacity enterprise drives with the help of sophisticated flash memory controllers. The expected service life in the enterprise is five years, so guaranteeing flash drive operation for at least this period has become a requirement for system designers.
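
A back-of-the-envelope endurance model shows why the controller matters so much here. The program/erase (P/E) ratings, write rate, and write-amplification factor below are representative assumptions for illustration, not figures from any particular vendor.

# Rough drive-lifetime estimate: years until the rated program/erase (P/E)
# cycles are exhausted at a given write rate. All inputs are illustrative.
def lifetime_years(capacity_gb, pe_cycles, host_writes_gb_per_day, write_amp):
    endurance_budget_gb = capacity_gb * pe_cycles        # total NAND writes allowed
    nand_writes_per_day = host_writes_gb_per_day * write_amp
    return endurance_budget_gb / nand_writes_per_day / 365.0

# 200-Gbyte drive written heavily (500 Gbytes/day from the host, 2x amplification):
print(f"SLC (100,000 P/E): {lifetime_years(200, 100_000, 500, 2.0):.0f} years")
print(f"MLC (5,000 P/E):   {lifetime_years(200, 5_000, 500, 2.0):.1f} years")
# MLC only clears a five-year bar if the controller keeps write amplification
# low and spreads wear evenly, which is exactly what the newer controllers do.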

Flash is fast, but interfaces like 6-Gbit/s SATA and multilane PCI Express are pushing SSD controller technology. MLC flash controllers face a number of challenges in addition to performance and reliability (see “Key Challenges In SSD Controller Development” at electronicdesign.com).

Block recycling and wear leveling are keys to long drive life. Even temperature management comes into play. SandForce is one vendor delivering flash memory controllers. Its DuraClass RAISE (redundant array of independent silicon elements) technology employs a RAID-style architecture to recover from flash block failures.
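
The allocation side of that recycling logic is conceptually simple, even though production controllers layer on bad-block maps, parity, and thermal throttling. Here is a minimal sketch of the wear-leveling decision; the data structures are invented for illustration.

# Minimal wear-leveling sketch: always hand out the erased block with the
# fewest erase cycles so wear spreads across the whole flash array.
class Block:
    def __init__(self, block_id):
        self.block_id = block_id
        self.erase_count = 0
        self.erased = True

def allocate_block(blocks):
    candidates = [b for b in blocks if b.erased]
    if not candidates:
        raise RuntimeError("no erased blocks; run garbage collection first")
    victim = min(candidates, key=lambda b: b.erase_count)   # least-worn block
    victim.erased = False
    return victim

def recycle_block(block):
    # A real flash translation layer copies any still-valid pages elsewhere
    # before erasing; here we just count the erase and mark the block free.
    block.erase_count += 1
    block.erased = True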

NOR flash is finding its way into more rugged environments. Spansion’s 65-nm MirrorBit GL-s 2-Gbit technology can handle in-cabin automotive (–40°C to 105°C) applications (see “2-Gbit NOR Flash Supports Automotive Apps” at electronicdesign.com). It is now available in a 9- by 9-mm BGA package.

Also, NOR flash has the advantage of allowing code to be executed directly from flash. Companies like Samsung are challenging NOR using a combination of SRAM and NAND flash. Samsung’s OneNAND integrates a 3-kbyte SRAM buffer into its NAND controller. Developers can use the controller’s interface to external NOR flash if necessary.

Dual and quad serial peripheral interfaces (SPIs) also are affecting where nonvolatile storage is being used, often replacing parallel memory chips. Most nonvolatile memories are available with these interfaces.

NXP’s Cortex-M3-based LPC1800 microcontroller can even run, not just boot, from quad SPI memory (see “Cortex-M3 Can Run From Quad SPI Flash” at electronicdesign.com). The LPC1800 also highlights the mix of memory in microcontrollers these days. It has on-chip ROM, one-time programmable (OTP) memory, flash, and SRAM.

OTP storage is another nonvolatile storage technology that often goes unnoticed. Companies like Kilopass and Sidense deliver antifuse-based OTP for a wide range of applications. OTP provides security and low-power operation. It is also easy to incorporate into existing CMOS manufacturing flows supported by major foundries. The technology is often used for key or configuration storage, but it also can be used in place of ROMs.

Disk-Drive Capacity Continues To Grow

Seagate’s 6-Gbit/s, 3-Tbyte Barracuda XT hard-disk drive (HDD) pushes the envelope in capacity past Windows XP’s 2.1-Tbyte limit (Fig. 5). Luckily, most 64-bit operating systems like Windows 7 and Linux do not have an issue with a large 3-Tbyte partition.
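
The limit itself is simple arithmetic: the legacy partitioning scheme and older driver stacks address sectors with a 32-bit number, and sectors have traditionally been 512 bytes.

# Why older platforms stall short of 3 Tbytes: a 32-bit sector address times
# 512-byte sectors caps the addressable capacity.
max_sectors = 2 ** 32              # 32-bit logical block address
sector_size = 512                  # bytes, the traditional sector size
limit_bytes = max_sectors * sector_size
print(limit_bytes)                           # 2,199,023,255,552
print(f"{limit_bytes / 1e12:.2f} Tbytes")    # ~2.2 Tbytes, the oft-quoted "2.1-Tbyte" barrier
# GPT uses 64-bit sector addresses, which is why a 3-Tbyte drive wants UEFI/GPT
# (or translation software such as DiscWizard) on older platforms.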

The 3-Tbyte drives do raise the issue of the Unified Extensible Firmware Interface (UEFI), though. UEFI was designed to address PC BIOS limitations. It can handle the GUID Partition Table (GPT) as well as provide faster boot times and support for independent drivers.

It is possible to handle these large drives on older operating systems. The Seagate DiscWizard software that comes with the drive does this through partitioning and device-driver software. Even so, this transition is likely to push many designers to the newer platforms.

Another issue highlighted by Seagate’s announcement is the move toward 4-kbyte sectors from the venerable 512-byte sectors. The 4-kbyte sectors match operating-system requirements better in addition to providing more efficient throughput.

Most motherboards already support 4-kbyte sectors. Windows XP can work with drives that emulate 512-byte sectors, although misaligned partitions take a performance hit. All current desktop and server operating systems support 4-kbyte sectors natively. An operating system’s virtual memory support is often configured with a 4-kbyte page size or a multiple thereof.

Some drives provide support for both sizes. Typically, they implement 4-kbyte sectors and map the smaller sector size onto them if requested. Drives will operate in one mode or the other.
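
For drives that emulate 512-byte sectors on top of 4-kbyte physical sectors, the mapping is straightforward address math, and it also shows where the performance trap lies: a write that doesn't cover whole physical sectors forces a read-modify-write. The sketch below uses invented names to illustrate the idea.

# Sketch of 512-byte-sector emulation on a 4-kbyte-sector drive.
LOGICAL = 512                      # bytes, what the host asks for
PHYSICAL = 4096                    # bytes, what the media actually stores
PER_PHYS = PHYSICAL // LOGICAL     # 8 logical sectors per physical sector

def map_logical(lba512):
    """Return (physical sector, byte offset) for a 512-byte logical LBA."""
    return lba512 // PER_PHYS, (lba512 % PER_PHYS) * LOGICAL

def needs_read_modify_write(start_lba512, count512):
    """Writes that don't cover whole physical sectors must read them first."""
    return start_lba512 % PER_PHYS != 0 or count512 % PER_PHYS != 0

print(map_logical(63))                    # (7, 3584): the classic XP partition start
print(needs_read_modify_write(63, 8))     # True: misaligned, slow path
print(needs_read_modify_write(64, 8))     # False: an aligned 4-kbyte write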

Few flash drives have reached the 3-Tbyte mark yet, primarily due to cost, but flash drive adoption is up in general. Lower-priced chips are a big factor, but so are the improved SSD controller chips. Another aspect that is pushing storage into new areas is flash storage’s ability to fit into new locations.

The Viking Module Solutions SATADIMM and SATACube3 allow for even more compact embedded solutions (Fig. 6). The SATADIMM plugs into a DDR3 socket for power. It includes a SATA cable connection. Suitably designed systems can run the SATA interface on unused pins of the DIMM socket. The SATACube3 stack provides rugged storage for custom system designs.

Hybrid drives like Seagate’s Momentus XT combine flash storage and hard disks into a single package, but that isn’t the only way to get a mix of drive technologies (see “Seagate Delivers 2nd Generation Hybrid Hard Drive” at electronicdesign.com). Marvell’s SATA controller HyperDuo technology is another (see “SATA Controller Pins Files On SSD” at electronicdesign.com). HyperDuo can be found on new motherboards and on PCI Express adapters like HighPoint’s RocketHybrid (Fig. 7).

Marvell’s dual-port SATA controller can handle any type of SATA drive as if it were a conventional controller. Its HyperDuo mode comes into play with one flash drive and one hard drive, and it requires one of Microsoft’s recent operating systems with New Technology File System (NTFS) support.

HyperDuo can operate in "safe" or "capacity" mode. Safe mode works like a cache where commonly used files are stored on both the hard drive and the flash drive. The advantage is that the hard drive always contains a valid file system. Capacity mode is similar to RAID 0, where data is striped across both drives, so the system requires both drives to operate.

The big difference between HyperDuo and most other hybrid solutions is that HyperDuo operates at the file level, not at the sector level. Another difference is that any transfers to flash occur after a file is accessed, not during the access.

The process can operate transparently, or power users can explicitly pin a file in flash. The approach is less expensive than the SAS controllers that often provide flash-based caching because HyperDuo uses the Arm processor already on the SATA controller and needs neither off-chip memory nor battery-backed cache memory.

SAS controllers like LSI’s MegaRAID controllers usually employ a more conventional caching approach (see “SAS RAID Controller Handles Hierarchical Storage” at electronicdesign.com). LSI’s CacheCade specifically uses flash drives as a secondary cache tier in front of a set of hard drives. It can handle cache arrays as large as 512 Gbytes. Adaptec’s maxCache similarly covers both the hardware and the software.

CacheCade operates like a typical cache controller, loading frequently used sectors of data into flash storage. Performance tends to be significantly better than a hybrid drive, and the amount of flash storage is under the control of the owner. The system can handle up to 32 SSDs.
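
Conceptually, the controller watches which regions of the hard-drive address space are hot and promotes them to the flash tier. The sketch below models that decision with a simple access counter and LRU eviction; the threshold and region size are invented stand-ins for the vendors' actual heuristics.

# Toy model of a flash cache tier in front of hard drives: regions that are
# read often get copied into flash, and the least recently used region is
# evicted when the flash tier fills up.
from collections import OrderedDict

PROMOTE_AFTER = 4          # promote a region after this many hard-drive hits
CACHE_REGIONS = 1024       # capacity of the flash tier, in regions

hits = {}                  # region -> access count on the hard-drive side
flash_tier = OrderedDict() # region -> cached data, in least-recently-used order

def read_region(region, read_from_hdd):
    if region in flash_tier:
        flash_tier.move_to_end(region)        # refresh its LRU position
        return flash_tier[region]             # fast path: served from flash
    data = read_from_hdd(region)              # slow path: spinning disk
    hits[region] = hits.get(region, 0) + 1
    if hits[region] >= PROMOTE_AFTER:         # hot enough to promote
        if len(flash_tier) >= CACHE_REGIONS:
            flash_tier.popitem(last=False)    # evict the least recently used
        flash_tier[region] = data
    return data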

The other difference, which most SAS controllers support, is the ability to expose storage as virtual drives. Likewise, the storage can be based on a RAID configuration. For example, a single controller can handle a mix like RAID 5, RAID 0, and RAID 60 arrays with multiple virtual drives spanning each. Any of these can, in turn, be matched with flash storage.

These controllers are typically found on enterprise servers, and configuration tends to be simpler with a single RAID array. ISPs and enterprise environments take advantage of virtualization, however, so there may be a need for many virtual drives with different characteristics. Again, these more expensive SAS controllers can handle these chores. Finally, SAS hard drives aren’t a requirement, although their higher performance and reliability are often beneficial. SATA drives are often used where cost and capacity are more important.

SATA and SAS flash drives have many advantages, but the interfaces are bandwidth-limited. Flash storage can run faster than the interfaces designed for hard drives allow, so vendors are delivering solutions that connect to the host using PCI Express, which scales by adding lanes.

Fusion-io’s ioDrive Octal board delivers flash storage via a x16 PCI Express connection (Fig. 8). It supports a 6-Gbyte/s bandwidth and can deliver up to 1 million I/O operations per second (IOPS). The board is built in a modular fashion and can handle up to 5.12 Tbytes of flash memory. The ioDrive Octal looks like a conventional block device.

PCI Express-based flash solutions are popping up everywhere. Rugged applications can take advantage of products like the Extreme Engineering Solutions XPort6103 XMC module (Fig. 9). The XPort6103 offers up to half a terabyte of flash storage. It uses a PCI Express x1 interface, with 3-Gbit/s SATA and encryption support being optional. Also, it uses SLC NAND flash since it is likely to wind up in an embedded application where long life has a higher priority than capacity. Read performance and write performance are 200 Mbytes/s and 120 Mbytes/s, respectively.

Networking and the Internet

These storage technologies address embedded applications and PC and server environments, but another major growth area continues to be network-based storage. “The cloud” and “cloud storage” have been the latest buzzwords, though there is real technology to back them up.

These days, file servers are more likely to be network attached storage (NAS) boxes with one or more hard drives (see “Nice NAS” at electronicdesign.com). A host of specialized systems-on-a-chip (SoCs) target this space like Applied Micro’s multicore Mamba (see “Multicore Server Processor Slims Down Secure Networking” at electronicdesign.com) and PLX’s NAS7825 (see “Smart Storage” at electronicdesign.com).

These chips typically include RAID acceleration as well as multiple Gigabit Ethernet ports. RAID 1 and RAID 5 support are common, but so are RAID 6 and combinations like RAID 50 (RAID 5+0) and RAID 60. Encryption support is also a common part of this mix, allowing secure storage even without the use of hardware-encrypted drives. This class of chips enables low-cost wired and wireless NAS servers.
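
Those RAID levels trade capacity for redundancy in predictable ways. The quick arithmetic below gives usable capacity for equal-sized drives, ignoring hot spares and metadata.

# Usable capacity for common RAID levels with n equal drives of size d.
# RAID 50 and 60 stripe data across several RAID 5 or RAID 6 groups.
def usable(level, n, d, groups=1):
    per_group = n // groups
    if level == 0:  return n * d                        # striping, no redundancy
    if level == 1:  return d                            # mirroring
    if level == 5:  return (n - 1) * d                  # one drive's worth of parity
    if level == 6:  return (n - 2) * d                  # two drives' worth of parity
    if level == 50: return groups * (per_group - 1) * d
    if level == 60: return groups * (per_group - 2) * d
    raise ValueError(level)

print(usable(5, 8, 2))               # eight 2-Tbyte drives in RAID 5: 14 Tbytes usable
print(usable(60, 12, 2, groups=2))   # two six-drive RAID 6 groups: 16 Tbytes usable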

One variation of the NAS box, often dubbed the “plug computer,” is based on Marvell’s Armada chip and exemplified by PogoPlug’s offerings (see “PogoPlug And DockStar Are Internet NAS Boxes” at electronicdesign.com). The PogoPlug Pro supports up to four USB external drives that are typically hard drives (Fig. 10). A front-panel USB connection is specifically designed for USB flash drives as well.

The plug approach is flexible, but that’s only the starting point for these NAS boxes. Their Internet connectivity and related applications make them stand out. The PogoPlug Web site acts as a gateway to the Internet-connected device, allowing other devices like a laptop or smartphone to access data on the NAS box, an arrangement sometimes called cloud storage.

The main trick is to overcome the firewalls and gateways usually found between a NAS box on a LAN and an Internet-connected device like a laptop or smartphone. PogoPlug’s free service does this by running communication through its Internet server, with the NAS box connecting out to that server since outbound connections are allowed through the LAN firewall/gateway.
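
In outline, the NAS box keeps an outbound connection open to the service's rendezvous server, and remote clients reach the box through that server rather than through an inbound hole in the firewall. The sketch below is a generic, heavily simplified version of that phone-home pattern; the server name and message format are invented placeholders, not PogoPlug's actual protocol.

# Generic "phone home" pattern used by consumer NAS gateways: the box behind
# the firewall opens the connection, so no inbound port forwarding is needed.
import json
import socket
import time

RENDEZVOUS = ("relay.example.com", 443)      # hypothetical relay service

def serve_forever(handle_request):
    while True:
        try:
            with socket.create_connection(RENDEZVOUS, timeout=30) as s:
                s.sendall(json.dumps({"register": "nas-1234"}).encode() + b"\n")
                for line in s.makefile():     # requests relayed from laptops/phones
                    reply = handle_request(json.loads(line))   # e.g., list or fetch a file
                    s.sendall(json.dumps(reply).encode() + b"\n")
        except OSError:
            time.sleep(10)                    # relay unreachable; retry later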

The initial services included basic file sharing, but they have been augmented to include support for features like multimedia streaming, printer hotspots, and e-mail printing. There is even Dropbox-style sharing among devices. The PogoPlug Biz version adds multiple user support along with usage and auditing features.

Unlike Dropbox, PogoPlug’s storage capacity is only limited by what’s attached to the NAS box. The downside is typically the connection bandwidth on the upload side since most consumer connections are asymmetrical. This usually isn’t an issue with the faster cable and fiber home connections.

The more technical “cloud storage” platform tends to be based on a storage area network (SAN). SANs have been the storage backbone for corporate server farms with petabyte or even exabyte storage capacities. This is where Fibre Channel plays, although iSCSI is usually at the top of most lists these days.

Fibre Channel defines both a hardware interface, which now runs at up to 10 Gbits/s, and a storage communication protocol. It was specifically designed for large, high-performance, high-reliability storage clusters. The Fibre Channel over Ethernet (FCoE) standard moved the protocol onto Ethernet networks, which has become important as cluster computing has moved to the forefront.

The iSCSI standard was started before SCSI drives were pushed out by SAS drives, but iSCSI has nothing to do with the underlying hardware. Its command set mirrors that used by SCSI and subsequently SAS drives, but iSCSI is a network-based, block storage protocol designed for SANs.

NAS boxes often support iSCSI. Open-source platforms like the BSD-based (Berkeley Software Distribution) FreeNAS include iSCSI support. Likewise, many motherboards and Ethernet adapters provide network boot capability via iSCSI, but the big push is from virtualized servers.

Cloud computing services like those from Amazon and Google are typically built around virtualized servers using SAN storage linked via iSCSI connections. This approach makes it possible to spread storage and compute components throughout the network. It also means the service provider can handle the configuration while giving free rein to the consumer of the service, who has direct access to the virtual environment.

The problem with the description thus far is that the term “network” covers a lot of ground. In this case, the SAN used for cloud computing tends to be an isolated network, possibly within a virtualized network. In fact, there are typically multiple virtual SANs within a service provider’s environment to isolate the customer storage and compute environments. The virtual machines for the compute environments have a network interface to the customer and another to the iSCSI SAN.

The number of iSCSI connections from a virtual machine usually isn’t limited by its operating system, so a virtual machine may be accessing logical iSCSI drives hosted by different SAN servers. Likewise, a virtual machine isn’t restricted to iSCSI connections. It may also have connections to file servers and other storage solutions such as database servers.

Flexibility is a key advantage of iSCSI, but security is another. Network isolation, discussed above, is one aspect; end-to-end encryption, which iSCSI and SAS support, is another. It is becoming more readily available in enterprise hard drives and SSDs as well.

Securing Storage

SSDs and HDDs tend to support the same types of security measures. Full-disk encryption is one of these features. Access to data on a self-encrypting drive requires the proper key when the drive is first used.

This is more than just a gating mechanism, though, because the key is used to encrypt and decrypt the information on the drive. Simply bypassing the security control won’t provide access to the data because of the encryption. A trusted platform module (TPM) is often part of the mix, providing secure boot support (see “Trust Your PC” at electronicdesign.com).

An access key typically provides access to yet another key that is used for the encryption process. This allows multiple access keys to be utilized, so a corporate key could provide access to multiple drives while individuals would have access to the drive that matched their key. One interesting feature of this approach is that a drive can essentially be wiped by destroying the encryption key. This can be done by a single command versus the overwrite method normally employed with a non-encrypted drive.
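
The key hierarchy is easy to model in a few lines: a media key actually encrypts the data, access keys merely wrap that media key, and destroying the media key and its wrapped copies makes the data unrecoverable. The sketch below uses the widely available Python cryptography package; every name and value in it is illustrative, not drawn from any drive's firmware.

# Sketch of a self-encrypting-drive-style key hierarchy (illustrative only).
import base64
import hashlib
import os
from cryptography.fernet import Fernet       # third-party "cryptography" package

media_key = Fernet.generate_key()             # the key the data is really encrypted under
media = Fernet(media_key)

def wrap_media_key(passphrase, salt):
    # Derive a key-encrypting key from an access passphrase, then wrap the media key.
    kek = hashlib.pbkdf2_hmac("sha256", passphrase.encode(), salt, 200_000)
    return Fernet(base64.urlsafe_b64encode(kek)).encrypt(media_key)

salt = os.urandom(16)
wrapped_for_user = wrap_media_key("user passphrase", salt)
wrapped_for_corp = wrap_media_key("corporate escrow passphrase", salt)

ciphertext = media.encrypt(b"sector contents")

# "Instant secure erase": discard the media key and every wrapped copy of it.
media_key = wrapped_for_user = wrapped_for_corp = None
# The ciphertext still sits on the platters or flash, but nothing can decrypt it.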

The advantage of full, hardware-based disk encryption is that the drive controller handles the details. The encryption engine is typically matched to the hardware, so the encryption process doesn’t slow down data transfers.

Newer SAS controllers are designed to take advantage of hardware-based encryption. Some can provide encryption support without requiring hardware-encrypted drives. SAS controllers typically support one or more disk arrays where hot-swapping is a common way to replace a bad drive. Key management now becomes an issue for the controller as drives are replaced or moved to new locations.

Unfortunately, full disk encryption does not provide the finer-grained control that a software-based encryption strategy can. Some systems like secure USB flash drives can split the drive into two logical sections, one encrypted and one unencrypted. These sections appear as two drives, so no operating-system changes are required.

A new class of drives supports T10 Protection Information (PI), an end-to-end data-protection scheme, including Seagate’s Constellation 2 (see "2.5-in. Terabyte Drive Secures Secrets" at electronicdesign.com). This methodology requires operating-system and application support as well because the drive’s sectors are actually larger, carrying an extra protection field.

In this case, the application or operating system attaches the protection information when data is written and checks it when data is read. Corruption introduced anywhere along the path, including across an iSCSI link, is detected before bad data reaches the application.

T10 PI also requires matching controller support, which is found on the latest SAS controllers. Unlike full-disk encryption, PI does not hide the data, so drives can still be backed up by simply copying their contents.
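
Each protected sector carries an extra 8-byte protection field, which is why the sectors grow from 512 to 520 bytes. Packing one is trivial, as shown below with placeholder values; the real guard field is a CRC-16 computed over the sector's data.

# Layout of the 8-byte T10 protection-information field appended to each
# 512-byte sector. The values here are placeholders.
import struct

def pack_pi(guard_crc16, app_tag, ref_tag):
    # 2-byte guard tag (CRC of the sector data), 2-byte application tag,
    # 4-byte reference tag (typically the low 32 bits of the LBA).
    return struct.pack(">HHI", guard_crc16, app_tag, ref_tag)

sector = b"\x00" * 512
pi = pack_pi(guard_crc16=0xBEEF, app_tag=0x0000, ref_tag=12345)
print(len(sector) + len(pi))      # 520 bytes on the wire per protected sector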

Storage issues now span a wide range of technologies, and even embedded designers need to take many of them into account. With networking, storage is no longer restricted to the device in hand. For more, check out our coverage of the 2011 Storage Visions show at engineeringtv.com.
