Getting The Most Out Of SSD Arrays

April 26, 2012
Technology Editor Bill Wong takes a hands on look at LSI's CacheCade and FastPath with the LSI SAS 9260-8i controller, SuperMicro A+ server, Micron SSDs and Seagate 15K SAS hard drives.

It has been a while since LSI's MegaRAID SAS 9260-8i became available (see SAS RAID Controller Handles Hierarchical Storage), but I have finally gotten my hands on one (Fig. 1). The eight-channel, 6-Gbit/s RAID controller is based on LSI's LSISAS2108 RAID-on-Chip technology and carries 512 Mbytes of DDR2 RAM. UEFI support is standard; battery backup is optional.

The controller was delivered with LSI's CacheCade Pro 2.0 and FastPath software on-board, along with support for RAID 5 and 6. These are extra-cost options, so you can install just what your system requires. For example, if RAID 1 is sufficient but you want SSD caching, then CacheCade is all that is required. FastPath is designed to optimize SSD arrays.

As usual, your mileage may vary, and most installations will concentrate on one type of configuration based on the application mix. For example, a high-performance transaction system might employ SLC (single-level cell) SSDs, which have higher write endurance than MLC (multi-level cell) SSDs, making FastPath probably the best choice. Large hard-drive arrays with SSD caching would employ CacheCade.

Because of this variance in architectures, I am going to address installation, configuration, and features rather than raw performance; reproducing the results of any tests I ran would require identical hardware. In general, CacheCade delivered near-SSD performance when mixed with a hard-drive array, and FastPath-enabled SSDs were almost twice as fast as the same array without FastPath enabled.
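If you want to run the same kind of comparison on your own hardware, a single-threaded random-read probe is enough to see the caching effect. Here is a minimal sketch, assuming Linux, root access, and a hypothetical /dev/sdb test device; O_DIRECT bypasses the page cache so the array, not system RAM, is what gets measured:

import mmap
import os
import random
import time

DEV = "/dev/sdb"        # hypothetical block device under test
BLOCK = 4096            # 4-Kbyte random reads, the usual IOPS test size
SECONDS = 10

fd = os.open(DEV, os.O_RDONLY | os.O_DIRECT)
size = os.lseek(fd, 0, os.SEEK_END)
buf = mmap.mmap(-1, BLOCK)    # anonymous mmap is page-aligned, as O_DIRECT requires

ops = 0
deadline = time.monotonic() + SECONDS
while time.monotonic() < deadline:
    offset = random.randrange(size // BLOCK) * BLOCK
    os.preadv(fd, [buf], offset)
    ops += 1
os.close(fd)
print(f"{ops / SECONDS:.0f} IOPS, single-threaded 4K random read")

Running it once against the bare hard-drive array and again after the cache has warmed gives a rough before-and-after picture.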

Figure 1. The LSI MegaRAID SAS 9260-8i handles eight 6-Gbit/s SAS or SATA drives.

I popped it into SuperMicro's 4U AS-4022G-6F A+ Server (Fig. 2), which had a pair of 16-core Opteron 6000 Series processors (see Hands-on SuperMicro's 32-core A+ Server). The original testing for that project was done using the built-in LSI SAS controller, but for this test I disconnected the on-board controller and used only the SAS 9260-8i.

Figure 2. SuperMicro's 4U AS-4022G-6F A+ Server has three x16 and three x8 PCI Express slots. The MegaRAID SAS 9260-8i fits in any of these slots.

The 2.5-in. drives used in the tests included five Seagate Savvio 15K enterprise SAS drives (see Family Of Drives Span Enterprise Storage Needs) plus SSDs from Micron: the RealSSD P300 (see Building A Hybrid RAID NAS Server) and the RealSSD P400e. The P300 is an SLC enterprise drive while the P400e is an MLC enterprise drive. All the drives run at 6 Gbits/s; the SSDs have SATA interfaces.

The P400e's read performance is 50,000 random IOPS and 350 Mbytes/s sequential. The P300's read performance is a little better at 60,000 random IOPS and 360 Mbytes/s sequential, and its sustained random-write performance exceeds 16,000 IOPS.
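Random IOPS and sequential Mbytes/s describe different limits, and converting one to the other puts the numbers side by side. A quick sketch of that arithmetic, assuming the 4-Kbyte transfer size vendors typically quote for random I/O:

def iops_to_mbytes_per_s(iops, block_bytes=4096):
    # Bandwidth implied by a random-I/O rate at a given transfer size
    return iops * block_bytes / 1e6

print(iops_to_mbytes_per_s(50000))  # P400e random read: ~205 Mbytes/s
print(iops_to_mbytes_per_s(60000))  # P300 random read:  ~246 Mbytes/s
print(iops_to_mbytes_per_s(16000))  # P300 random write:  ~66 Mbytes/s

In other words, even 60,000 random IOPS is well below the drive's sequential ceiling, which is why spec sheets quote both figures.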

I swapped the P300 and P400e drives when running performance tests. They were on par for read support and caching, and the P300 performed better for writes, as expected. The choice is not necessarily cut and dried, since users will need to weigh cost and drive life against their application requirements.

Installation and Configuration

Hardware installation was more of a task because the SuperMicro server came with the on-board controller already cabled. Pulling out the drive cables was necessary because they were designed for the motherboard controller and were too short to reach the LSI controller (Fig. 3). The controller came with longer cables, and routing them to the drive backplane was the hardest part of the job. The drives were mounted in hot-swap drive bays.

Figure 3. The LSI MegaRAID SAS 9260-8i cabling replaced the cabling for the on-board SAS controller.

LSI's WebBIOS configuration utility is the first interface to deal with. It is a mousable interface, making it handy when using SuperMicro's remote management interface. Most of the configuration is done using WebBIOS, although additional configuration is possible using the MegaRAID Storage Manager (MSM) discussed later.

The first step is to create a drive group using the WebBIOS wizard. This is simply a matter of selecting the drives to be included in the group. The next step is to create one or more virtual drives (VDs). I usually create at least two VDs on a system so the operating system and applications reside on one and data resides on the other. The same split can be accomplished with logical drives and tools like LVM (logical volume manager) on Linux, but I prefer to reserve those for other management chores, such as delivering logical volumes to virtual machines, as sketched below. In any case, the WebBIOS interface is well adapted to this chore.
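For the LVM route, the commands are standard Linux fare. A minimal sketch, where the device name, volume group, and sizes are all placeholders for whatever your data VD shows up as:

import subprocess

def run(*cmd):
    # Thin wrapper that stops on the first failed command
    subprocess.run(cmd, check=True)

# /dev/sdb stands in for the data VD as Linux sees it.
run("pvcreate", "/dev/sdb")
run("vgcreate", "vg_data", "/dev/sdb")
run("lvcreate", "-L", "200G", "-n", "lv_vm01", "vg_data")  # volume handed to a VM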

The controller can handle Online Capacity Expansion (OCE) and Online RAID Level Migration (RLM). These are critical when expanding existing arrays, although it is often easier to manage this, as well as array rebuilds, via MSM.
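On a running system the same reconstruction can also be kicked off with MegaCli's LDRecon verb. A hedged sketch, where the RAID level, VD number, and slot 252:5 are placeholders; check the syntax against your MegaCli release:

import subprocess

# Add the drive in placeholder slot 252:5 to virtual drive 0 while it
# remains online; -r6 keeps the RAID 6 level through the reconstruction.
subprocess.run(["MegaCli64", "-LDRecon", "-Start", "-r6",
                "-Add", "-PhysDrv[252:5]", "-L0", "-a0"], check=True)
# Poll progress with: MegaCli64 -LDRecon -ShowProg -L0 -a0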

My usual configuration runs RAID 6, but RAID 10 is the other configuration I tried. RAID 10 essentially spans mirrored drive pairs. In this case, four drives are actively used and one is a hot spare. Configuring an array for either layout takes only a few minutes using manual configuration, although these changes essentially wipe out the disk contents.
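The capacity tradeoff between the two layouts is easy to work out. A sketch assuming 146-Gbyte drives, which is a plausible Savvio capacity but not something the tests here depended on:

DRIVE_GB = 146   # assumed per-drive capacity; actual Savvio models vary
N = 5

raid6 = (N - 2) * DRIVE_GB           # two drives' worth of parity -> 438 Gbytes
raid10 = ((N - 1) // 2) * DRIVE_GB   # four active drives mirrored, one hot spare -> 292 Gbytes
print(raid6, raid10)

RAID 6 yields more usable space from the same five drives; RAID 10 trades capacity for simpler rebuilds and better small-write behavior.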

I installed CentOS 6 on the test system since it was going to be overwritten each time. CentOS is a free rebuild of Red Hat Enterprise Linux (RHEL). Using another operating system would usually require an activation process. LSI supports RHEL 6, and there are even drivers for CentOS 5.5 in case you don't want to play games with the installation.

The system can be used without the drivers, but you will need them installed to use MSM. It is well worth the effort. MSM and the matching drivers run on all the major platforms, including the latest Microsoft Windows variations.
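Before installing MSM it is worth confirming that the mainline megaraid_sas kernel driver is actually loaded. A quick check along these lines:

# Look for the mainline megaraid_sas module in /proc/modules.
with open("/proc/modules") as modules:
    loaded = any(line.split()[0] == "megaraid_sas" for line in modules)
print("megaraid_sas loaded" if loaded else "megaraid_sas missing")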

The exercise of putting CentOS on an LSI RAID system is straightforward and no different than in the past. This basic RAID 6 configuration did provide a performance baseline for testing, though.

Adding Flash Drives

The next step was to add the CacheCade SSD support. This would usually be done from WebBIOS during the initial setup, so a quick reboot got me back to that point. The SSDs were already installed, so it was simply a matter of selecting the unused flash drives and creating a new array from them (it could be just one drive). This array is then mated to the hard-drive array. That's it.
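The pairing can also be scripted from the OS. A heavily hedged sketch: the -CfgCacheCadeAdd verb and its arguments are from LSI's CacheCade documentation as I recall it, and the SSD slot IDs are placeholders, so confirm the exact syntax against your MegaCli release before relying on it:

import subprocess

# Build a RAID 0 cache pool from the two SSDs (placeholder slots 252:6
# and 252:7) and assign it to virtual drive 0.
subprocess.run(["MegaCli64", "-CfgCacheCadeAdd", "-r0",
                "-Physdrv[252:6,252:7]", "-assign", "-LD", "0",
                "-a0"], check=True)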

Drives can be added to and removed from either the SSD or the hard-disk array. Manipulating or removing the SSD array will not affect the system other than changing its performance. A little more care is needed with the hard-disk array, but this is typical: extending storage will require adjustments within the operating system, and drives cannot be removed to the point that the array will no longer operate.

I did notice a major difference once CacheCade was up and running. The configurations included two flash drives plus the five SAS hard drives. I did not notice major differences between the P300 and P400e, although I did not run a lot of tests with heavy write loads. There is not much to see using MSM other than the status of the drives, although this is on par with the RAID support. Then again, that is what you really want from the system anyway: transparency.

CacheCade supports up to 512 Gbytes of SSD capacity, the SSD array can have up to 32 drives, and the controller supports CacheCade with up to 64 virtual drives. I could have come closer to these limits with more flash drives, but I didn't have an expander and the five SAS hard drives used most of the controller's connections. Still, 400 Gbytes with the two P400e drives, or even 200 Gbytes with the two P300 drives, gave impressive results. Even a single 100-Gbyte P300 will give a performance boost to any size SAS array.
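Those ceilings are easy to sanity-check when sketching a configuration on paper. The snippet below just encodes the published limits:

MAX_CACHE_GB = 512     # total CacheCade SSD capacity
MAX_CACHE_DRIVES = 32  # drives in the SSD cache array
MAX_VDS = 64           # virtual drives CacheCade can front

def fits(ssd_gb_each, ssd_count, vd_count):
    # True when a proposed layout stays inside the published ceilings
    return (ssd_gb_each * ssd_count <= MAX_CACHE_GB
            and ssd_count <= MAX_CACHE_DRIVES
            and vd_count <= MAX_VDS)

print(fits(200, 2, 2))  # two P400e drives caching two VDs -> True
print(fits(100, 2, 2))  # two P300 drives -> True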

As expected, the larger SSDs did better once large datasets were used for the tests. Using expanders will allow more drives and options, but even limiting the controller to its eight native ports allows a nice 6/2 or 5/3 split between hard drives and SSDs.

CacheCade performance improvements vary depending on a number of factors, including the application. At the low end the average is about 40%, but some applications with lots of writes could actually make the SSDs next to useless. On the other hand, that tends to be the exception, and improvements of as much as a factor of 20 are possible.
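That spread is what a simple cache model predicts: effective access time is a hit-rate-weighted blend of SSD and hard-drive service times, so the speedup climbs steeply only as the hit rate approaches 100%. A sketch of the arithmetic, with illustrative rather than measured service times:

T_SSD = 0.1   # ms per 4K random read from SSD, illustrative
T_HDD = 7.0   # ms per 4K random read from a 15K hard drive, illustrative

def speedup(hit_rate):
    # Average access time with a cache, divided into the uncached time
    t_eff = hit_rate * T_SSD + (1 - hit_rate) * T_HDD
    return T_HDD / t_eff

for h in (0.3, 0.7, 0.95, 0.99):
    print(f"hit rate {h:.0%}: {speedup(h):.1f}x")  # ~1.4x at 30%, ~41x at 99%

A 30% hit rate lands right around the 40% low-end figure, while hit rates in the high 90s account for the factor-of-20 cases.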

The FastPath test was a separate exercise because it required logically pulling the flash drives out of the CacheCade-enhanced SAS RAID setup. Again, it is a matter of putting all the SSDs into their own array. The difference from the CacheCade configuration is that the SSDs are now a primary array. Configuration was on par with creating a RAID 10 or RAID 0/JBOD (just a bunch of disks) configuration. The controller will support these configurations without FastPath, providing the usual advantages such as RAID 1 redundancy and normal SSD performance.
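LSI's documentation ties FastPath to virtual drives configured with write-through, no-read-ahead, direct-I/O cache policies. A hedged MegaCli sketch of creating such an SSD-only RAID 0 VD, with placeholder slot IDs for the two Micron drives:

import subprocess

# WT = write-through, NORA = no read-ahead, Direct = direct I/O; these
# are the cache policies documented as FastPath-friendly.
subprocess.run(["MegaCli64", "-CfgLdAdd",
                "-r0[252:6,252:7]", "WT", "NORA", "Direct",
                "-a0"], check=True)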

The FastPath configurations did optimize performance compared to an unoptimized SSD array. It gave a 40% to 200% speed boost, and that is easy to like. LSI indicates that improvements up to 300% are possible. The effect is less dramatic than the read-performance difference CacheCade delivers, but FastPath provides a consistent improvement. It is definitely better to dedicate drives to FastPath for some applications, because even an SSD array alone is faster than hard drives fronted by CacheCade.

The issues will be cost, the types of drives that can be used, and the kind of application the system will run. A system with lots of web servers delivering mostly static pages will benefit from FastPath. Transactional systems that do a good bit of reading will gain significant benefits from CacheCade.

The dual Opteron 6000 had more than enough bandwidth to push the on-board controller past its limits. If the application will pound on the disk array, then the LSI MegaRAID SAS 9260-8i will definitely make a difference, and SuperMicro's motherboard has six PCI Express slots to put it in. I suspect that most applications would be hard pressed to exercise more than three MegaRAID SAS 9260-8i controllers.

The SuperMicro system makes a great combination with the Seagate 15K Savvio drives and Micron SSDs. Overall, CacheCade and FastPath worked as expected. Either is worth the investment, although it might be a challenge to use both effectively in the same system. I will probably use CacheCade in the lab since more storage is available when hard drives are in the mix. The cost of these software features is usually less than the cost of a drive, making it relatively easy to amortize as part of the disk subsystem.
