Fig. 1. LSI WarpDrive SLP-300
Solid-state drives (SSDs) are a central part of enterprise storage. Solid-state storage significantly accelerates application performance by reducing read and write latency, especially for workloads dominated by random accesses, and SSDs are increasingly being incorporated into the storage hierarchy to provide that higher performance.
A major benefit of solid-state storage is that it reduces overall system total cost of ownership (TCO): capital expenditure falls because higher per-server performance allows server consolidation, and operating expenditure falls through reduced power, space, environmental cooling and support costs.
Technology Editor Bill Wong talks with Tony Afshary, LSI Corporation's Director of Marketing, DAS and Server Storage, about SSD integration within enterprise storage hierarchies.
Wong: What markets and/or applications are using Solid-State Storage today?
Afshary: Today, solid-state storage resides in servers handling the most mission-critical and latency-sensitive application environments, commonly referred to as Tier 0 in the server market space using direct-attached storage (DAS). A typical example is the ultra-high-performance applications used by Wall Street financial firms for high-volume, real-time market trading. Today, the Tier 0 market represents less than five percent of the overall DAS server market space.
As the cost of the flash memory technology used in solid-state storage devices continues to come down, more solid-state devices are being deployed in what can be considered the Tier 1 server market space, where applications require high random input/output operations per second (IOPS) performance to handle intense, revenue-generating transactional processing. Typically, the active and frequently accessed data stored in this tier is less than a day old.
Wong: What are the different Solid State Storage form factors available today and which one is best for your storage requirements?
Afshary: The most widely available solid-state storage form factors today are solid-state drives (SSDs) and PCI Express-based storage adapters with fully integrated solid-state storage modules. SSDs provide a compatible solid-state alternative to hard disk drives (HDDs) for easy integration into existing storage environments built around HDDs. Like spinning-media HDDs, SSDs utilize standard storage interfaces such as SAS, SATA and Fibre Channel. The PCI Express form factor provides the fastest performance boost in the smallest storage footprint because it is housed completely inside the server, and it is also the easiest to install and configure. Either solid-state storage form factor can be utilized as a storage cache or as a dedicated storage volume.
Wong: What are some of the most common approaches to integrating Solid State Storage into server environments today?
Afshary: The most common approach to implementing solid-state storage into today's enterprise environments is to treat it as a dedicated storage volume, much the way one would configure traditional rotating media or HDDs. While it is more expensive to implement a one-to-one replacement of rotating media, the performance gains and latency reductions are dramatic, particularly in application environments where data needs to be processed and analyzed in real-time, such as Ultra Low Latency Direct Market Access (ULLDMA) systems.
A primary objective of solid-state storage solutions in enterprise environments is application acceleration. Product offerings such as LSI MegaRAID controller cards, with MegaRAID FastPath SSD optimization software, or the LSI WarpDrive SLP-300 (Fig. 1), a PCIe-based card with onboard solid-state storage capacity, are a few leading examples for the DAS server storage market space.
Another emerging opportunity for flash-based storage in the enterprise is to deploy solid-state storage as cache memory. An example would be LSI MegaRAID CacheCade software, which enables SSDs to be configured as a secondary tier of cache to maximize transactional I/O performance. This approach has the advantage of letting the caching system observe the data access patterns and determine what data to place on solid-state storage in order to realize the maximum performance benefits. In most cases, the user or administrator does little or nothing to achieve the performance benefits for the amount of SSD capacity allocated. The caching system puts as much frequently accessed data into its SSD cache as possible and leaves the remaining infrequently accessed data safely stored on one or more HDD volumes. Data stored in the SSD cache and on the HDD volumes is protected by standard RAID data redundancy schemes. The only real task for the administrator or end user is to decide how much SSD technology to deploy, and then to configure the caching system to use only the specified amount.
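To make the caching behavior concrete, here is a minimal sketch of that idea (not CacheCade's actual algorithm): the cache counts block accesses, copies frequently read blocks onto a fixed amount of SSD capacity, and falls back to the HDD volume on a miss. The block granularity, capacity and promotion threshold are illustrative assumptions.

```python
from collections import Counter, OrderedDict

class SsdReadCache:
    """Minimal frequency-based read-cache sketch: hot HDD blocks are
    copied onto a fixed-size SSD cache; everything else stays on HDD."""

    def __init__(self, capacity_blocks, promote_after=2):
        self.capacity = capacity_blocks        # admin-configured SSD cache size
        self.promote_after = promote_after     # accesses before a block is cached
        self.access_counts = Counter()         # observed access pattern
        self.cache = OrderedDict()             # block_id -> data, in LRU order

    def read(self, block_id, read_from_hdd):
        self.access_counts[block_id] += 1
        if block_id in self.cache:             # cache hit: serve from SSD
            self.cache.move_to_end(block_id)
            return self.cache[block_id]
        data = read_from_hdd(block_id)         # cache miss: go to the HDD volume
        if self.access_counts[block_id] >= self.promote_after:
            if len(self.cache) >= self.capacity:
                self.cache.popitem(last=False) # evict the least recently used block
            self.cache[block_id] = data        # promote the hot block to SSD
        return data

# Usage with a stand-in HDD read routine:
cache = SsdReadCache(capacity_blocks=4)
cache.read(17, lambda b: f"block-{b} from HDD")
```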
Another approach that is commonly deployed in large, traditional shared storage data center environments, and is now gaining momentum in the server storage market, is storage tiering. This type of solution leverages multiple types of storage media with different capacities and performance capabilities. Intelligent tiering software dynamically moves data between the various storage media volumes making up the total pool of storage. This allows the most actively accessed data to be stored on the highest-performing solid-state media, while less frequently accessed data is allocated to the most cost-effective disk volumes.
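Tiering differs from caching in that each extent lives in exactly one tier and is periodically relocated rather than copied. A hedged sketch of such a rebalancing pass might look like the following; the extent granularity, heat counters and tier names are assumptions, not any particular vendor's tiering engine.

```python
def rebalance(extent_heat, current_tier, ssd_slots):
    """Assign the hottest extents to the SSD tier and the rest to HDD.

    extent_heat  : dict of extent id -> access count in the last interval
    current_tier : dict of extent id -> "ssd" or "hdd" (where it lives now)
    ssd_slots    : how many extents the SSD tier can hold
    Returns a list of (extent_id, from_tier, to_tier) migrations to perform.
    """
    # Rank extents by how often they were touched in the last interval.
    hottest = sorted(extent_heat, key=extent_heat.get, reverse=True)
    target = {ext: ("ssd" if i < ssd_slots else "hdd")
              for i, ext in enumerate(hottest)}

    # Only move extents whose target tier differs from their current location.
    return [(ext, current_tier.get(ext, "hdd"), tier)
            for ext, tier in target.items()
            if current_tier.get(ext, "hdd") != tier]

# Example: extent 7 got hot and extent 3 went cold, so they swap tiers.
moves = rebalance({3: 1, 7: 40, 9: 12}, {3: "ssd", 7: "hdd", 9: "ssd"}, ssd_slots=2)
print(moves)   # [(7, 'hdd', 'ssd'), (3, 'ssd', 'hdd')]
```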
Both caching and tiering solutions can be deployed in a number of ways and at various levels of the host server and storage subsystem hierarchy. An added benefit of placing solid-state storage in or directly attached to the server, whether in a PCIe card or SSD form factor, is that it further reduces storage latency. For the fastest possible performance, you want to keep your storage resources as close to the server processors as possible. The further they are from the server, the more time it takes to move I/O requests back and forth.
No matter which approach is deployed, data stored on solid-state storage still needs to be protected and actively monitored. User data is protected against SSD drive failure by utilizing high-availability RAID algorithms and software features such as LSI MegaRAID SSD Guard. SSD Guard preserves data availability by automatically copying data from an SSD with a detected performance or reliability issue to a designated spare or newly inserted drive.
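The general idea behind that kind of protection can be sketched as follows. This is a hypothetical illustration, not how SSD Guard is implemented: watch a per-drive health indicator (for example, a SMART-style wear or error counter) and start copying data to a designated spare once a threshold is crossed.

```python
# Hypothetical health-triggered evacuation sketch: the thresholds, drive
# fields and copy routine are illustrative assumptions, not the SSD Guard API.
from dataclasses import dataclass

@dataclass
class Drive:
    name: str
    wear_pct: int          # SMART-style media-wear indicator, 0-100
    media_errors: int      # accumulated correctable-error count

def needs_evacuation(drive, wear_limit=90, error_limit=50):
    """Flag an SSD whose monitored health counters suggest impending trouble."""
    return drive.wear_pct >= wear_limit or drive.media_errors >= error_limit

def monitor(array, spare, copy_volume):
    """Copy data off any suspect SSD to the designated spare, preserving availability."""
    for drive in array:
        if needs_evacuation(drive):
            print(f"{drive.name}: health threshold crossed, copying to {spare.name}")
            copy_volume(source=drive, target=spare)   # data stays online during the copy
            return spare                              # spare takes the suspect drive's place
    return None

# Usage with a stand-in copy routine:
array = [Drive("ssd0", wear_pct=35, media_errors=2),
         Drive("ssd1", wear_pct=93, media_errors=8)]
monitor(array, Drive("spare0", 0, 0), lambda source, target: None)
```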
Wong: What are the benefits of using Solid State Storage as a cache for traditional spinning disk storage rather than a complete replacement?
Afshary: Most storage system deployments generally over-allocate storage to anticipate future capacity requirements. With this in mind, the cost and limited capacity of solid-state storage (when compared to traditional rotating media) make complete replacement of spinning media cost-prohibitive for all but the least cost-sensitive application deployments. Using solid-state storage as a cache or as a tiered level of storage is a cost-effective solution because it requires only enough solid-state capacity to store and accelerate the most frequently accessed application data. However, when access to all application data is considered mission-critical, or when the highest possible performance enhancement is required, storing all the application data on a dedicated solid-state storage device is the right approach.
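A back-of-the-envelope comparison shows why caching only the hot data is attractive. The capacities and per-gigabyte prices below are assumptions chosen for the example, not quoted figures.

```python
# Illustrative cost comparison: all-SSD volume vs. SSD cache/tier over HDD.
data_set_gb      = 10_000          # total provisioned capacity
hot_data_gb      = 800             # frequently accessed working set
hdd_cost_per_gb  = 0.10            # assumed $/GB for enterprise HDD
ssd_cost_per_gb  = 2.00            # assumed $/GB for enterprise SSD

full_replacement = data_set_gb * ssd_cost_per_gb
cache_or_tier    = hot_data_gb * ssd_cost_per_gb + data_set_gb * hdd_cost_per_gb

print(f"All-SSD volume:       ${full_replacement:,.0f}")   # $20,000
print(f"SSD cache/tier + HDD: ${cache_or_tier:,.0f}")      # $2,600
```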
Wong: Are there any other benefits that Solid State Storage Caching offers?
Afshary: A surprising side benefit of these solid-state storage approaches is that the performance of the associated spinning hard disk drive (HDD) volumes in the system is also enhanced, because a majority of the workload normally serviced by the HDDs is now offloaded to the solid-state storage. In some environments, enough performance-critical data access shifts to solid-state storage that the more expensive high-performance HDDs can be replaced with more cost-effective nearline HDDs.
Wong: Are there instances where Solid State Storage is not worth the investment?
Afshary: Solid-state storage delivers cost-effective application performance enhancement when applied to transactional, random-access workloads, especially when data is accessed frequently or repetitively. As an example, a database application that randomly and frequently accesses active portions of a larger database is an excellent candidate for solid-state storage (SSS) acceleration. Application data that is accessed infrequently or isn't performance critical is still best stored on traditional higher-capacity, lower-cost spinning HDDs. For example, a large amount of sequentially read data may not benefit from being cached in SSS unless portions of the data are reread multiple times and the majority of those reread portions fit within the available SSS capacity.
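A rough way to judge whether a given workload will benefit is to estimate the cache hit rate and compute the resulting average access time. The latency figures below are assumed round numbers for illustration, not measured values.

```python
def effective_latency_us(hit_rate, ssd_us=100.0, hdd_us=8000.0):
    """Average access latency given the fraction of I/Os served from the SSD cache."""
    return hit_rate * ssd_us + (1.0 - hit_rate) * hdd_us

# Frequently reread working set that fits in the cache: high hit rate, large win.
print(effective_latency_us(0.95))   # ~495 us vs. 8000 us from HDD alone

# Large one-pass sequential read: almost no rereads, little benefit from caching.
print(effective_latency_us(0.05))   # ~7605 us, barely better than HDD alone
```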
Wong: Solid State Storage provides enhanced reliability by eliminating the mechanical issues associated with spinning disks, but does this mean data protection is now less important?
Afshary: While it is true that solid-state storage removes most of the mechanical elements that make HDDs more susceptible to failures over time, it is important to remember that solid-state storage can fail and will eventually wear out due to write endurance limitations. Data stored on solid-state storage still needs to be protected and actively monitored.
Wong: Looking forward, what are some other trends that will accelerate Solid State Storage adoption in the enterprise?
Afshary: Solid-state storage performance and cost per gigabyte (or capacity) will continue to improve, and this trend will accelerate adoption, expand usage models and result in the development of new, innovative solutions.