RAID storage, with the exception of RAID 0, uses redundancy to provide improved reliability. RAID 1 uses mirroring so that two drives contain the same information; the system continues to operate if one of the drives fails or is removed. RAID 0 provides no redundancy, but stripes data across multiple drives to present a larger, logical drive.
RAID 10 combines RAID 1 and RAID 0, striping data across mirrored pairs to provide redundancy across multiple drives. RAID 1 ADM (advanced data mirroring) uses three drives instead of RAID 1’s two, allowing a RAID 1 ADM array to continue operating even if two drives fail.
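The capacity trade-offs among these levels reduce to simple arithmetic. The sketch below illustrates the usable-capacity math for the common levels; the function name and level labels are mine for illustration, not part of any controller API:

```python
def usable_capacity(drives: int, drive_tb: float, level: str) -> float:
    """Usable capacity in TB for a few common RAID levels (illustrative only)."""
    if level == "0":            # striping: all raw capacity, no redundancy
        return drives * drive_tb
    if level == "1":            # two-way mirror: half the raw capacity
        assert drives == 2
        return drive_tb
    if level == "1adm":         # three-way mirror: one third of raw capacity
        assert drives == 3
        return drive_tb
    if level == "10":           # striped two-way mirrors
        assert drives >= 4 and drives % 2 == 0
        return (drives // 2) * drive_tb
    if level == "5":            # one drive's worth of capacity goes to parity
        assert drives >= 3
        return (drives - 1) * drive_tb
    if level == "6":            # two drives' worth of capacity go to parity
        assert drives >= 4
        return (drives - 2) * drive_tb
    raise ValueError(f"unknown level: {level}")
```

For example, four 4-TB drives in RAID 10 yield 8 TB usable, while the same drives in RAID 6 also yield 8 TB but tolerate any two failures rather than one per mirrored pair.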
Cutting the amount of storage in half for RAID 1/10 and by two-thirds for RAID 1/10 ADM may seem inefficient, but often an application’s requirements offset the cost. Likewise, there are parity-based RAID solutions like RAID 5 and RAID 6 that reduce the overhead, but trade off write performance for improved capacity. RAID 5 can handle a single drive failure within an array, which must contain at least three drives. RAID 6 uses dual parity and can handle two drive failures; it requires at least four drives.
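The parity that RAID 5 relies on is a byte-wise XOR across the data blocks in each stripe, which is why any single lost block can be rebuilt from the survivors. A minimal illustration (not the controller’s actual implementation):

```python
import functools

def xor_blocks(blocks):
    """XOR equal-length byte blocks together -- the parity operation RAID 5
    applies per stripe."""
    return bytes(functools.reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

# Data blocks in one stripe spread across three data drives.
data = [b"\x01\x02", b"\x10\x20", b"\x0f\x0f"]
parity = xor_blocks(data)

# Simulate losing drive 1: XORing the surviving blocks with the parity
# block recovers the lost data.
recovered = xor_blocks([data[0], data[2], parity])
assert recovered == data[1]
```

RAID 6 adds a second, independently computed parity block per stripe, which is what lets it survive two simultaneous failures.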
Some RAID systems can also improve read performance because data is available from multiple sources. For example, a RAID 1 system could handle two reads simultaneously since the same data is available on both drives, assuming one has not failed.
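The mirrored-read speedup can be pictured as round-robin dispatch across identical copies. This toy sketch is hypothetical (the `Mirror` class is mine, not an API from the controller’s software):

```python
import itertools

class Mirror:
    """Toy RAID 1 reader: alternate reads across mirrored copies so two
    requests can be serviced by different drives."""
    def __init__(self, copies):
        self.copies = copies                      # identical data on each "drive"
        self._next = itertools.cycle(range(len(copies)))
    def read(self, offset, length):
        drive = next(self._next)                  # round-robin across drives
        return self.copies[drive][offset:offset + length]
```

A real controller would also skip failed drives and weigh queue depth, but the principle is the same: every mirror is a full copy, so any of them can serve any read.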
The downside to RAID configurations is maintaining system integrity. Data needs to be written to multiple devices, so a loss of power could jeopardize integrity if it causes only part of a write transaction to complete. Hardware-based RAID controllers can eliminate this problem by using an on-board, non-volatile cache.
Microsemi’s SmartRAID 3154-24i (Fig. 1) provides twenty-four 12-Gbit/s SAS/SATA ports with support for RAID levels 0, 1, 5, 6, 10, 50, 60, 1 ADM, and 10 ADM. ADM is also handy for migrating from existing RAID 1/10 systems to new hardware.
1. Microsemi’s SmartRAID 3154-24i (bottom), which provides twenty-four 12-Gbit/s SAS/SATA ports, uses the ASCM-35F capacitor module (top) to protect its on-board cache.
It also comes with the ASCM-35F capacitor module to save the on-board DRAM cache should power be lost. The supercap on this module allows the system to operate at full speed with the DRAM cache while maintaining system integrity through power cycles. The supercap does not need to be periodically replaced like a battery.
I had a chance to check out the SmartRAID 3154-24i with the supercap module that provides zero-maintenance cache protection (ZMCP). This included trying out a very useful feature: simultaneous support for RAID and HBA (host bus adapter) modes. The latter allows a processor to work with the drives in “raw” mode without any overhead. This is common in large systems where the operating system manages the disks, as well as in applications where redundancy is not required. This configuration allows data that requires high system integrity, like the operating system and applications, to reside on a RAID partition.
The 24-port controller, built around a 28-nm SmartROC SoC, handles two dozen drives directly, or up to 256 devices using the SAS expanders found in very large systems. It is also a good size for handling the mixed configuration I wind up using. The SmartRAID 3154-24i is capable of delivering 1.7 million random-read 4-Kbyte IOPS.
Systems that require just the raw HBA mode can take advantage of the less-expensive SmartHBA 1100 series. Like the SmartRAID 3100 series, the HBA adapters come in various configurations with fewer ports. There is also the SmartHBA 2100 series with raw and basic RAID support. The controllers support Shingled Magnetic Recording (SMR) host-aware drives in HBA mode.
The controller and supercap are designed to fit into a pair of PCI Express sockets. Alternatively, the supercap can be mounted on the case, with extension cables used if the distance exceeds the length of the card. The mounting bracket for the PCI Express socket can handle a pair of supercaps if two controllers are needed.
Installation is easy using the mini-SAS connectors, each of which handles four ports. Cabling is available separately. The maxCache 4.0 software is built into the controller.
The more involved set-up was on the software side. The maxView Storage Manager (Fig. 2) is a web-based GUI. The system supports Windows, Linux, Solaris, and VMware. I used the Red Hat Linux support since I run CentOS. MaxView also supports the SES and SGPIO enclosure management common in larger drive installations. SAS tape devices and autoloaders are supported as well.
2. The web-based maxView Storage Manager allows remote management of the controller and attached drives.
In actuality, software setup is relatively easy. SNMP support is slightly more involved. This includes VMware CIMPAT and vSphere SNMP support if needed. It is possible to set up virtual arrays at boot time using the BIOS/UEFI interface, but it is much easier with maxView, which will be the normal management interface anyway. The web interface works with most browsers. I used Chrome and Firefox.
Using maxCache was easy. Essentially, my virtual array for the operating system and main applications is on a RAID 6 hard disk array with a pair of solid-state drives (SSDs). The software manages the caching between its own DRAM, the SSDs, and the hard disk drives (HDDs). This can provide significantly better performance than HDDs alone or HDDs with caching SSDs. Of course, many installations are simply using arrays of SSDs.
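Conceptually, this kind of tiering behaves like a read cache that promotes hot blocks to faster media. The sketch below is a generic LRU read cache for illustration, not maxCache’s actual algorithm:

```python
from collections import OrderedDict

class TieredReadCache:
    """Toy sketch of SSD-style read caching in front of slower storage.
    Illustrative only -- not how maxCache actually works internally."""
    def __init__(self, backing: dict, capacity: int):
        self.backing = backing              # stands in for the HDD array
        self.cache = OrderedDict()          # stands in for the SSD tier
        self.capacity = capacity
    def read(self, block: int):
        if block in self.cache:             # fast path: SSD-tier hit
            self.cache.move_to_end(block)
            return self.cache[block]
        value = self.backing[block]         # slow path: HDD read
        self.cache[block] = value           # promote the hot block
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)  # evict the least recently used
        return value
```

A real caching layer also handles writes, persistence, and failure of the cache devices, which is where the controller’s DRAM and supercap come in.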
The software handles most errors transparently, sounding an audible alarm if a drive fails. Hot-swap drives allow easy replacement, and the system will automatically rebuild, depending upon how it is configured. Likewise, extra drives can be included as hot spares, allowing more leisurely replacement of failed drives.
I have used older versions of maxView and tried the migration features. For example, it is possible to migrate from a RAID 5 to RAID 6 configuration. Other migration combinations are possible. This is handy for extending the redundancy of a system or changing the type of redundancy such as migrating from RAID 5 to RAID 10. The system handles all the details and makes sure the new configuration will be supported by the number of drives used in the array.
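The drive-count check amounts to comparing the array size against the target level’s minimum. A hypothetical sketch (the minimums are standard for these levels, but the function and table are mine, not maxView’s code):

```python
# Minimum drive counts commonly required per RAID level (illustrative).
MIN_DRIVES = {"0": 2, "1": 2, "5": 3, "6": 4, "10": 4}

def migration_ok(current_drives: int, target_level: str) -> bool:
    """Check whether an array's drive count can support the target level."""
    if target_level == "10" and current_drives % 2:
        return False                 # RAID 10 needs an even number of drives
    return current_drives >= MIN_DRIVES[target_level]
```

For instance, a five-drive RAID 5 array can migrate to RAID 6, but not to RAID 10 without adding a sixth drive.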
I wasn’t able to measure it, but Microsemi has improved the power requirements for the new controllers. Also, maxCache’s dynamic power management can yield power savings of up to 30%. The system also supports features like staggered drive spin up that can be very useful in large HDD arrays.
Software-based RAID systems using HBA controllers may be less expensive, but SmartRAID solutions are better when system integrity is paramount. The flash-based backup of DRAM with supercap support works regardless of whether the system has a UPS to handle power loss. Having lost power twice this year due to weather, I can appreciate hardware like the SmartRAID 3154-24i. I have not lost any data in quite some time.