Reassess The Reliability Of Enterprise SSDs

July 10, 2012
Every IT discussion on high-performance storage acquisitions should begin with two questions. First, does the technology meet our performance objectives? Second, will the solution safeguard our data?

Too often, these questions are perceived as separate issues. The pursuit of top-end input/output operations per second (IOPS) sits at one end of the storage drive spectrum while data integrity sits at the other. Performance and safety may seem unrelated, but any well-planned storage deployment must view these pursuits as inseparable, especially for line of business applications.

IOPS And Workloads

What solid-state disk (SSD) review doesn’t rely on Iometer benchmark results? While specific test scripts can vary in their mix of elements, Iometer essentially examines IOPS through random reads and writes and sequential reads and writes. An SSD reviewer might set up a script for 30% random writes and 70% random reads, run the script for three minutes, and whatever number Iometer reports at the end becomes the drive’s performance ranking against its peers.
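
For illustration only, a short Python sketch of such a timed random-mix run might look like the following. The file path, file size, and block size are arbitrary assumptions, and a real tool such as Iometer bypasses the operating system's page cache with direct I/O, which this sketch does not:

    import os, random, time

    # Illustrative parameters, not Iometer's: 256-MB file, 4-KB blocks,
    # three-minute run.
    PATH, FILE_SIZE, BLOCK, DURATION = "testfile.bin", 1 << 28, 4096, 180

    with open(PATH, "wb") as f:          # preallocate the test file
        f.write(os.urandom(FILE_SIZE))

    ops, buf = 0, os.urandom(BLOCK)
    deadline = time.time() + DURATION
    with open(PATH, "r+b") as f:
        while time.time() < deadline:
            f.seek(random.randrange(0, FILE_SIZE - BLOCK, BLOCK))
            if random.random() < 0.70:   # 70% random reads
                f.read(BLOCK)
            else:                        # 30% random writes
                f.write(buf)
            ops += 1

    print("Reported IOPS:", ops / DURATION)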

This scenario is akin to filling half of a truck's flatbed with gravel, driving it around a racetrack for a couple of hours at a steady 55 mph, and declaring that the result represents its fuel efficiency under an everyday load. But what if the track is actually a dirt road? What if the truck starts off with a full bed of gravel, runs nearly empty for a while, and then gets filled halfway? Perhaps that's how such trucks get used in the real world.

Many enterprise workloads are dynamic. Drawing performance conclusions from one scenario that doesn't even resemble real-world application conditions is inherently misguided. No one questions that demanding real-world use will wear down a truck more quickly than laps around a smooth track. Why isn't the same assumption applied to SSDs? After all, that's exactly what happens to them, too.

Heavy, enterprise-type data loads will wear down SSDs faster than conventional benchmarking loads. One shouldn’t simply ask, “How fast will it go?” A better question would be, “How fast will it go while handling my specific workloads while reliably meeting my service-level expectations?” Traditional storage benchmarks ignore this approach.

A Reliable Benchmark

The Storage Performance Council (SPC) is a non-profit corporation funded by dozens of storage industry vendors with the express purpose of providing benchmark results able to address these more complex performance and reliability questions. Over nearly 15 years, the SPC has devised and refined five major benchmark suites.

The SPC-1/1C/1C-E benchmark family targets online transaction-type workloads (OLTP) while the SPC-2/2C tests focus on video and large file workloads (Table 1). Additionally, while SPC-1 and SPC-2 examine storage within the context of a complete system, SPC-1C/1C-E and SPC-2C specifically spotlight storage drives and controllers. Compared to Iometer, the SPC tests are quite complex, but they’re designed specifically to emulate real datacenter load conditions.

For instance, consider SPC-1C, an OLTP-focused benchmark for storage components. A server generates synthetic workloads from virtualized workers (called business scaling units, or BSUs) and feeds data into abstracted zones called application storage units (ASUs). SPC-1C uses three ASUs: incoming data (45%), application-generated data (45%), and log files (10%). Specific yet customizable algorithms define how test data is created and then sent out in four concurrent streams mixing random and sequential operations.
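
A loose sketch of that load shaping, using the 45/45/10 ASU split from above with an illustrative (not official) stream mix, might look like this:

    import random

    # Hypothetical sketch of SPC-1C-style load shaping. Each simulated I/O
    # is routed to one of the three ASUs using the 45/45/10 split described
    # above; the four-stream mix here is illustrative, while the real
    # benchmark defines its streams and access algorithms precisely.
    ASUS = [("asu1_incoming", 0.45), ("asu2_app_data", 0.45), ("asu3_logs", 0.10)]
    STREAMS = ("random_read", "random_write", "sequential_read", "sequential_write")

    def next_io():
        r, acc = random.random(), 0.0
        for asu, weight in ASUS:              # weighted ASU selection
            acc += weight
            if r < acc:
                return asu, random.choice(STREAMS)
        return ASUS[-1][0], random.choice(STREAMS)

    for _ in range(5):                        # a few sample dispatches
        print(next_io())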

Because SPC-1C loads are more demanding than conventional Iometer runs, IOPS results often appear substantially lower than those seen in alternative benchmarks. But in addition to indicating IOPS performance, SPC-1C also offers pass/fail qualifications on several fronts. For example, to meet service-level expectations, a drive must perform consistently throughout a prolonged work cycle.

Recall those three-minute Iometer runs when examining the following four-hour SPC-1C response time results from three different drives purporting to be enterprise-class SSDs (Fig. 1). Note how SSD Device B in particular performs at acceptable SSD levels for over an hour and then collapses to levels that would make most notebook hard drives look snappy. Clearly, such performance characteristics cannot be tolerated in business applications.

1. The four-hour SPC-1C response time results from three different drives that purport to be enterprise-class SSDs highlight their differences. (Courtesy of Seagate, Enterprise Storage, SPC-1C Case Study Consistent Performance Presentation, Flash Memory Summit, 2011)

Under SPC-1C, a drive fails testing if it exceeds a 30-ms average response time or if its I/O throughput falls by more than 5% from its reported SPC-1C IOPS average. Additional checks ensure that performance stays consistent at lower load levels and that no data is lost or changed after power cycling.
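
Expressed as code, one plausible reading of those two criteria looks like the following; the per-interval sample format is an assumption made for illustration:

    # Sketch of the two SPC-1C failure criteria described above, applied
    # to assumed per-interval (avg_response_ms, iops) measurements.
    def passes_spc1c(samples, reported_iops):
        avg_response = sum(ms for ms, _ in samples) / len(samples)
        if avg_response > 30.0:               # 30-ms average response cap
            return False
        for _, iops in samples:
            if iops < reported_iops * 0.95:   # >5% throughput drop fails
                return False
        return True

    # Example: a drive that droops badly mid-run fails the check.
    print(passes_spc1c([(4.0, 10000), (5.0, 2000)], reported_iops=10000))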

Figure 2 shows a snapshot from the January 2012 SPC-1C results for Seagate’s Pulsar.2 SSD. The nearly flat response lines not only overlap, indicating very similar performance across the drive, they also stay almost completely below the 5-ms threshold for eight hours straight. This is the type of performance that enterprises should be demanding for their high-priority storage solutions.

2. The nearly flat response lines in the January 2012 SPC-1C test of the Seagate Pulsar.2 SSD not only overlap, indicating very similar performance across the drive, they also stay almost completely below the 5-ms threshold for eight hours straight.

Throughout SPC-1C analysis, emphasis remains on drive reliability. Under demanding, real-world load conditions, will the drive perform in a consistent, reliable manner in line with enterprise-level expectations? There are many reasons why a drive vendor might choose to avoid publishing SPC-1C results, but there’s only one reason why it would promote them: because it passed, and passing says a lot.

The Reliability Relationship

There is another side to measuring whether a drive will reliably deliver on its performance promises: endurance. Will a drive and its NAND media operate at consistent, optimal levels throughout their expected life span?

Put simply, the millions of cells that make up flash media wear out with use. The insulating oxide layer between each cell's control gate and floating gate erodes as the electrons that make up the cell's data pass through the oxide with each write operation. When the oxide insulator becomes too thin to retain the cell's charge, the cell is flagged as bad and removed from the available storage pool.

Early SSD generations were known for using multilevel cells (MLCs) that could tolerate roughly 10,000 write cycles and single-level cells (SLCs) able to tolerate 100,000 writes. However, wear-leveling algorithms have evolved, spreading writes more evenly across the drive's media and reducing the number of writes any given cell endures over time. Wear leveling prolongs the drive's useful life and reduces the chances of data being lost to failing cells. The drive's ability to retain data reliably over time is known as its endurance.
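
A minimal sketch of the idea, assuming a toy flash translation layer that always remaps a write onto the least-worn physical block, could look like this. Production wear leveling (dynamic versus static leveling, garbage collection, over-provisioning) is far more involved:

    # Toy flash translation layer: every write lands on the physical block
    # with the fewest program/erase (P/E) cycles, so wear spreads evenly.
    NUM_BLOCKS = 8
    erase_counts = [0] * NUM_BLOCKS           # P/E cycles per physical block
    mapping = {}                              # logical block -> physical block

    def write(logical_block):
        least_worn = min(range(NUM_BLOCKS), key=lambda b: erase_counts[b])
        erase_counts[least_worn] += 1         # each write costs one cycle here
        mapping[logical_block] = least_worn

    for i in range(100):
        write(i % 3)                          # a hot, highly skewed workload
    print(erase_counts)                       # counts stay roughly balanced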

Naturally, several factors affect endurance. Temperature can play a huge role. A drive that runs at 35°C on a test bench for two years will likely exhibit much higher endurance than the same drive running at 55°C next to a datacenter hot aisle (Table 2). Running under load 24x7 all year, as enterprise drives are expected to do, will have a much greater impact on endurance than the lighter 8x5 loads typical of client systems. A drive's value should be evaluated based on the total work done along with the speed at which that work is accomplished. One enterprise SSD may do the total work of three or four client SSDs over its life span.

The JEDEC Solid State Technology Association has two publications, JESD218A and JESD219, which establish the industry’s first and so far only standards for defining and testing SSD endurance workloads. The end result is a total bytes written (TBW) rating that can be used to compare drives according to their preservation of media capacity as well as preservation of the drive’s uncorrectable bit error rate (UBER), functional failure requirement (FFR), and non-powered data retention within its application class.
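
While the JEDEC procedure itself is empirical, a common back-of-envelope estimate relates endurance to capacity, rated program/erase cycles, and write amplification. The figures below are illustrative assumptions, not ratings for any particular drive:

    # Back-of-envelope endurance estimate, not the JEDEC test procedure:
    # total bytes written is roughly capacity times rated P/E cycles,
    # divided by write amplification. All numbers are illustrative.
    capacity_gb = 200            # drive capacity
    pe_cycles = 10_000           # rated program/erase cycles (MLC-class)
    write_amplification = 3.0    # internal writes per host write

    tbw_tb = capacity_gb * pe_cycles / write_amplification / 1000
    print(f"Estimated endurance: about {tbw_tb:,.0f} TB written")
    # 200 GB x 10,000 cycles / 3.0 = roughly 667 TB of host writes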

The JEDEC tests outline workloads similar to those used in SPC-1C and stress the entire addressable space of the drive, although writes are not spread evenly across the logical block addressing (LBA) range. Rather, most accesses are loaded toward the front end of the LBA range, since JEDEC studies show that roughly 20% of the data within a workload is responsible for more than 80% of media accesses. JEDEC tests examine drives at various temperatures for time intervals ranging from 500 to 3000 hours.
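
That skew is easy to visualize with a sketch such as the following, where roughly 80% of accesses land in the first 20% of the LBA range; the actual JESD219 distribution is defined in much finer detail:

    import random

    # Front-loaded access pattern in the spirit of JESD219: about 80% of
    # accesses fall in the first 20% of the LBA range.
    TOTAL_LBAS = 1_000_000
    HOT_SPAN = int(TOTAL_LBAS * 0.20)   # "hot" front 20% of the media

    def next_lba():
        if random.random() < 0.80:      # 80% of accesses hit the hot span
            return random.randrange(HOT_SPAN)
        return random.randrange(HOT_SPAN, TOTAL_LBAS)

    sample = [next_lba() for _ in range(100000)]
    print(sum(lba < HOT_SPAN for lba in sample) / len(sample))  # ~0.80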

Because JEDEC requires a significant number of drives to undergo these tests for such long durations, few TBW results have surfaced so far, but they're coming. Vendors with drives that demonstrate high endurance are eager to trumpet proof of their reliability to enterprises with rapidly escalating needs for high-performance storage.

However, it remains for those enterprises to become aware of the benchmarks and metrics that best determine actual reliability and performance. So far, awareness throughout the industry has remained relatively low. But as this improves, organizations will find their storage deployments yielding higher return on investment (ROI) and IT doing a superior job at meeting service-level agreements.
