Electronic Design

Storage Must Prepare For The Zettabyte Universe

All guts and no glory—storage solutions in the Information Age aren’t the engineering “no-brainers” they once were.

Remember when bubble memory was the top storage technology? Then along came the faster, cheaper, and higher-density hard-disk drive (HDD).

Of course, bubble memory replaced core memory. One example of the latter was the Apollo Guidance Computer, which incorporated the read-only core rope memory (it resembled a rope of woven copper wire). The Apollo 11 lunar mission in July 1969 used 36 kwords of core rope memory ROM with a cycle time of 11.7 µs to store a program that, when printed, required six inches of 11- by 15-in. fan-fold paper.1 But enough about the past. What about the future of storage?

During January’s Storage Visions Conference in Las Vegas, Tom Coughlin of Coughlin Associates said that we can expect increases in HD/SD television streams and downloads, plus a continued increase in music downloads. He expects the average household in the U.S. to require more than a terabyte of storage space for home entertainment by next year, approaching 5 Tbytes by 2013. Add in personal data and home backup requirements, and these figures jump to more than 2 Tbytes by next year and nearly 9 Tbytes by 2013.

These trends will drive the sales of HDDs in consumer electronics from just under 100 million in 2008 to 250 million in 2013, mostly in set-top boxes, external storage, auto entertainment, and personal media players (PMPs). Flash memory will reap similar rewards, with most flash for consumer devices going into cell phones, and the rest divided among MP3 players, PMPs, and digital cameras. Flash will appear in more than 1.5 billion devices this year and approach 2.5 billion in 2013.

Optical drives buck the trend, though. Their use in auto navigation and entertainment, camcorders, and DVD players will peak in 2009 at around 300 million units and then decline to less than 250 million units by 2013.

“The digital universe will grow six-fold, from 161 exabytes in 2006 to 988 exabytes in 2010,” says John Rydning of IDC, describing the total amount of data in the world. An exabyte is 2^60 bytes, or roughly 1 quintillion bytes or 1 billion gigabytes. After exabytes come zettabytes (2^70 bytes), yottabytes (2^80 bytes), xonabytes (2^90 bytes), wekabytes (2^100 bytes), and vundabytes (2^110 bytes). This continues through lumabytes, or 2^210 bytes. No names have been locked down beyond lumabytes.
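The ladder of binary prefixes above can be computed directly, since each step is a factor of 2^10 (1024). A minimal sketch, noting that the names past yottabyte (xona, weka, vunda) are informal proposals rather than standardized SI prefixes:

```python
# Binary storage-unit ladder: each prefix is 2^10 (1024x) larger than the
# previous one. Names beyond "yotta" are informal proposals, not SI standards.
PREFIXES = ["kilo", "mega", "giga", "tera", "peta", "exa",
            "zetta", "yotta", "xona", "weka", "vunda"]

def unit_bytes(prefix):
    """Return the size in bytes of one <prefix>byte (binary interpretation)."""
    exponent = 10 * (PREFIXES.index(prefix) + 1)  # kilo = 2^10, mega = 2^20, ...
    return 2 ** exponent

for p in ("exa", "zetta", "yotta"):
    print(f"1 {p}byte = 2^{10 * (PREFIXES.index(p) + 1)} = {unit_bytes(p):,} bytes")
```

Running this shows why a zettabyte universe is so daunting: one zettabyte is more than a thousand times the entire 2006 digital universe cited above.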

So where is all of this data going, and what’s driving it? According to Jim Handy of Objective Analysis, “Mobile applications will migrate to flash memory [while] static applications will favor [magnetic] HDDs.” In the 1980s, the driving factor was text files, followed by photos in the 1990s, and music in the 2000s. Trends are moving to video now. Going forward, Handy’s company believes library replica and Internet replica will drive data storage. After that is anyone’s guess.

If we look at enterprise storage in particular, IDC noted a few trends in the January 2008 issue of InfoStor Magazine. First, the use of parallel SCSI is rapidly declining from almost 50% in 2006 to under 10% in 2009, while the use of Fibre Channel will stay about even over the same time at around 20%. Serial Attached SCSI will grow from under 10% usage in 2006 to just over 25% in 2009, while Enterprise SATA will grow from just over 20% to just under 50% in the same period.

Speaking of enterprise storage, Hubbert Smith of Samsung noted some data-center application requirements during a recent Storage Power Lunch event. The parameters used included power, capacity, reliability, performance, and vibration tolerance across a multitude of applications, including surveillance, embedded, and scientific (see “Application Requirements,” www.electronicdesign.com, ED Online 18634).

With all of the fascination surrounding more glamorous technologies, storage is sometimes seen as rather dull. Yet since virtually every electronic design has some form of storage, its importance can’t be overlooked. And despite what some may think, storage isn’t always boring (see “DDR3's Impact On Signal Integrity,” ED Online 18633).

MetaRAM’s recent MetaSDRAM chip set increases the capacity of a DRAM-based, dual-inline memory module (DIMM) by a factor of four. It’s also significantly cheaper and requires less power than technologies that use other methods to attain the same capacity on a DIMM (see “SDRAM Chip Set Boldly Goes Where No Man Has Gone Before,” p. 23).

The Violin Switched Memory (VXM) from Violin Memory provides a huge amount of DRAM- or flash-based storage. It employs a unique patent-pending switched-memory architecture (versus traditional bus topology) that delivers an impressive 1.7-Gbyte/s bandwidth.

This new architecture allows for incredible scale, since a single memory controller supports more than 4000 memory devices with a latency less than or equal to that of a repeated bus network, saving more than 75% in power (Fig. 1). It also provides fault tolerance, whereby a module can experience failure without data loss or application interruption.

The DRAM and/or flash memory are configured into Violin Intelligent Memory Modules (VIMMs). Each VIMM can be a 6-Gbyte DRAM or 64-Gbyte NAND flash. The company packs up to 84 of the VIMM modules in a standard 2U-height (88.9 mm) form factor in a product called the Violin 1010 (Fig. 2). An entire rack full of Violin 1010 products would provide a 100-Tbyte RAM disk.


This combination of capacity, bandwidth, and latency suits the Violin 1010 for high-performance imaging, large-scale caches, real-time data acquisition, databases, design automation, and scientific computing. The 1010 connects to a host computer via a 10- or 20-Gbit/s PCI Express interface, providing latencies as low as 3 µs. And at more than 1 Gbyte of DRAM or 10 Gbytes of flash per watt, Violin presents an ideal solution for data centers vying to be more green.

Small-form-factor solid-state drives (SSDs) are making more and more inroads into the memory market. SMART Modular Technologies’ recent XceedLite, a PATA 1.8-in. SSD, features what the company claims is the industry’s lowest power consumption (Fig. 3). The device draws 80 mA while reading, 70 mA while writing, and 9 mA in passive mode, operating in the 3.3- to 5-V range.

In addition, the XceedLite supports 8/16-bit data register transfers, PIO Mode 6, multiword DMA Mode 4, and Ultra DMA Mode 5. The SLC NAND flash device, which has an optional ruggedized enclosure, was designed for industrial-grade specifications as well as tablet PCs, ruggedized notebooks, and industrial and embedded designs. As with most SSD devices, it includes on-board error correction and dynamic wear-leveling.

Yet these examples illustrate one of the problems with today’s storage market. The storage medium and bus technology options are endless, especially compared to yesterday’s choice of single-sided versus double-sided floppy disks. Thus, further analysis may be needed to discover which technology best suits your next design.

When contemplating a storage solution for a particular application, consider the end users and how they will be using your product, its lifecycle, and the conditions under which it must operate during its lifespan. And remember, the best solution may not always have the cheapest up-front cost.

Take magnetic hard-disk drives versus flash-based solid-state drives as an example (Fig. 4). Under most circumstances where durability and reliability are critical, flash wins every time. But after you get past a few gigabytes, cost becomes a huge issue. This is where cost of ownership comes into play. Solid-state drives may carry a heavier price tag up front versus their magnetic counterparts, but what about their cost to the user over the product’s lifespan?

Cost-of-ownership studies should be done with every new design as part of the overall risk assessment. If the new design were a laptop computer, cost of ownership should include the purchase price and the cost to the IT department to load the laptop with the proper software, deploy the laptop, and provide initial training.

The cost of using the laptop then must be considered over time, which gets more difficult when adding in boot time, application launch time, downtime due to software/hardware failures, etc. Next, consider the support and maintenance cost over time. Lastly, when it’s time for a platform upgrade, costs must be considered.

With respect to cost of ownership, we can compare the cost of owning a solid-state solution to the cost of a magnetic hard disk over time to determine if a solid-state drive would be worth the extra bucks. Solid-state drives offer improved reliability, improved read performance, and lower power consumption, but at a greater unit cost and a generally more limited capacity.2
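The cost-of-ownership components described above (purchase price, deployment, recurring support, and time lost to downtime) can be reduced to simple arithmetic. The sketch below uses entirely hypothetical placeholder figures, not vendor data, to show how an SSD's higher up-front price can still win over a product's lifespan:

```python
# Minimal total-cost-of-ownership sketch for the SSD-vs.-HDD comparison.
# All dollar figures and hours are hypothetical placeholders.
def total_cost_of_ownership(purchase, deploy, annual_support,
                            annual_downtime_hours, hourly_cost, years):
    """Purchase + deployment + recurring support + productivity lost to downtime."""
    recurring = years * (annual_support + annual_downtime_hours * hourly_cost)
    return purchase + deploy + recurring

# Hypothetical laptop numbers: the SSD machine costs more up front but
# fails less often, so it loses fewer hours to downtime each year.
hdd = total_cost_of_ownership(purchase=900, deploy=200, annual_support=150,
                              annual_downtime_hours=8, hourly_cost=60, years=3)
ssd = total_cost_of_ownership(purchase=1200, deploy=200, annual_support=100,
                              annual_downtime_hours=2, hourly_cost=60, years=3)
print(f"HDD laptop 3-year TCO: ${hdd:,.0f}")  # lower purchase, higher recurring cost
print(f"SSD laptop 3-year TCO: ${ssd:,.0f}")  # higher purchase, lower recurring cost
```

With these assumed inputs the SSD comes out cheaper over three years despite the $300 price premium; in practice, the crossover point depends heavily on capacity requirements and actual failure rates.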

"The product qualification process can be a challenging one, especially for applications requiring long product lifecycles," says Gary Drossel, SiliconSystems' VP of product planning. "Since product life can be tied closely to the storage solution used, establishing the projected life of a product’s storage system solution under various application-specific usage models provides a critical—though previously unavailable—indicator of storage system performance over time. Advanced storage technology that constantly monitors and reports the exact amount of a storage system’s remaining usable life has become crucial to detect issues early enough to actually do something about them." (See “Improve Product Qualification Accuracy With Advanced Solid-State Storage Usage SMART Monitoring Technology,” ED Online 18632.)

After assessing the risks and considering tradeoffs with the cost of ownership, the next question to ask is: Will a customer pay more for a more robust solution? The answer depends on how customers plan to use the device. When all else fails, it might come down to what we’ll call the “coolness factor.”

The “coolness factor” often gets overlooked when we discuss end products. With a cell phone, coolness comes from the look and feel of both the phone and GUI, as well as from slick features that would make other users jealous, like navigation and television.

But how do you measure the coolness of a storage device? A few factors come to mind: boot/recovery time (as compared to other solutions), noise factor, vibration, and heat. Flash hard drives beat magnetic hard drives outright in each of these categories. After all, flash can provide “instant on” capabilities with less complex operating systems and will “boot” noticeably faster for bulkier ones.

Then there’s the noise. Magnetic drives are noisy little buggers, which can be quite annoying. This annoyance gets compounded when you’re using an operating system that likes to pound on the drive every now and then for no apparent reason.

Magnetic drives also tend to give off a lot of heat. If you use a laptop, this too can be annoying, especially when the weather (or office) is warm. Speaking of laptops, don’t forget those precious few seconds lost at various moments throughout the day waiting for the hard drive to perform some little task like, say, opening a folder. Tick, tick, tick... there goes your life.

So is a device that costs more but is “cooler” than other products worth more money to the end user? Just ask Apple about its iPod, iPhone, and MacBook Air sales, and you ought to get an idea.


1. "Tales from the Lunar Module Guidance Computer," NASA Office of Logic Design, April 20, 2005.

2. "Evaluating the SSD Total Cost of Ownership," white paper, Jeff Janukowicz and Dave Reinsel, IDC, 2007, www.idc.com.
