Electronic Design
Building A Hybrid RAID NAS Server

This Hybrid RAID NAS Server was an interesting project. It ties together some pieces of hardware that might be useful in other projects. The central piece is the CFI A7879 case (Fig. 1) from E-ITX. It is designed for a Mini-ITX motherboard with room for a full-size expansion board. There are four hot-swap SATA drive bays (Fig. 2).

The system has actually wound up being a general server that is now central to the lab. It is acting as an Internet gateway, file server and it is running a number of virtual machines. But more about that later.

The CFI A7879 case is stylish, and its large system cooling fan moves air across the four removable drive bays. A two-drive version of the case is also available.

The motherboard's expansion slot allowed me to take advantage of Adaptec's new 6805 SAS controller (Fig. 3). The 6805 handles up to eight SAS or SATA drives directly and up to 256 drives using SAS expanders. It has an x8 PCI Express Gen 2 interface. There is a four-port version, the 6405E, but it has an x1 PCI Express interface and therefore lower throughput. Both provide hybrid RAID 1 support, where a solid-state drive (SSD) and a hard drive are combined into a single RAID configuration. The Adaptec controllers also support RAID 5, 5EE, 50, 6, 60 and JBOD.

The motherboard for this project is Super Micro Computer's (Supermicro) X9SCV-QV4 Mini-ITX motherboard (Fig. 4). It targets a range of embedded applications from digital signage to 12-V automotive. The motherboard can handle the range of Intel Core processors as well as the low-cost Celeron B800 series, but we packed it with a top-end Core i7. This allows the NAS box to do a lot more.

The other main components in the project include Corsair's DDR3 SODIMMs, Micron's P300 RealSSD enterprise drives, and Seagate's Barracuda XT SATA hard disk drives. The Corsair SODIMMs are available in sizes up to the 8 Gbytes we used, giving the system 16 Gbytes of memory. The 2.5-in Micron P300 RealSSD drives use SLC flash with a SATA interface. The 3-Tbyte Seagate Barracuda XT drives are 3.5-in SATA hard disks. The SSDs and hard drives were paired in a hybrid RAID 1 configuration.


System Configuration

The Supermicro motherboard uses the Intel QM67 chipset and arrived with an Intel Core i7 processor. The compact Mini-ITX motherboard has a pair of DDR3 SODIMM sockets that I filled with Corsair memory. It is easier to install the memory before installing the board in the case. Likewise, the cabling to the motherboard for the status LEDs, switches, etc. is easier to do before installing the expansion card. The motherboard has two SATA 3 and four SATA 2 interfaces that were not used. If hybrid RAID support is not needed, these interfaces could be used instead, freeing the PCI Express slot for another card such as a video tuner, video capture card, or data acquisition card.

The Adaptec controller was installed after the motherboard. A single hydra cable connects the controller to the four SATA connections for the removable drive bays (Fig. 5). The controller came with the Zero-Maintenance Cache Protection feature, which consists of an external supercap with a cable and card that plug into the controller. I have already had plenty of chances to check out this feature, since we have lost power a number of times due to the weather in the Philadelphia area.

Two of the removable drive slots were used for the 2.5-in Micron P300 RealSSD drives. The other pair housed the 3.5-in Seagate Barracuda XT drives. These days the drives simply bolt into the trays, since the hot-swap connector position is now standard, and the trays have mounting holes for both 2.5- and 3.5-in drives. There is no lock on the drives or main door, but this type of system is not designed for physical security.

Actual construction time for the system was very short. Configuration of the hard drives using the Adaptec controller BIOS actually takes longer. The BIOS is relatively easy to use and provides defaults for faster configuration.

Two basic configurations are reasonable with the drives used in the project. It is possible to pair the SSDs with each other and the HDDs with each other (Fig. 6), although this does not take advantage of the hybrid RAID support. For the hybrid RAID configuration, each of the smaller SSDs is paired with one of the larger hard drives. The hybrid RAID configuration provides two RAID 1 partitions that are the same size as the SSDs, 100 Gbytes in this case. These can be designated as two separate virtual drives or combined into one 200-Gbyte virtual drive using a RAID 10 configuration. The remaining hard-drive space can then be used as a RAID 1 mirror or as RAID 0, which provides no redundancy.
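The same layout can also be built from a booted operating system with Adaptec's arcconf command-line utility instead of the controller BIOS. The following is only a sketch under stated assumptions: the controller number, the Channel#/Device# pairs, and the sizes are placeholders for this particular system, and the exact arguments should be checked against the arcconf documentation.

```shell
# List the physical drives first to find their Channel#/Device# addresses
arcconf getconfig 1 pd

# First 100-Gbyte hybrid mirror: SSD (0 0) paired with HDD (0 2);
# size is given in Mbytes
arcconf create 1 logicaldrive 102400 1 0 0 0 2

# Second hybrid mirror: SSD (0 1) paired with HDD (0 3)
arcconf create 1 logicaldrive 102400 1 0 1 0 3

# Remaining hard-drive space as a plain RAID 1 mirror (max = all
# remaining capacity on the two HDDs)
arcconf create 1 logicaldrive max 1 0 2 0 3
```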

The desired software can be installed once the virtual disk drives are defined. I tested the system using Microsoft's Windows Server and then replaced it with CentOS 6, a free version of Red Hat Enterprise Linux.

System Software

Adaptec's controllers are so new that their drivers are not included on the installation disks for either of the selected operating systems. I had to add the latest Adaptec drivers during the installation process. There was no issue with Microsoft Windows, but I did run into a configuration issue with CentOS because the drivers for Red Hat Enterprise Linux (RHEL) were not automatically recognized by the CentOS installation process. A little manual work fixed the issue, and I have been running it ever since.

I installed the Adaptec Storage Manager (ASM) after the operating systems were installed. ASM provides real-time control of the local controller and any Adaptec devices available via the network. I now use ASM from a workstation that has no Adaptec controller to manage the two servers with Adaptec RAID controllers. The most useful aspect of the tool is the status display when a RAID partition is being rebuilt because a drive was replaced. This was easily tested by pulling one of the drives and popping it back in. The system continued to run as long as the other drives continued to work properly. Removing two drives at one time requires RAID 6 with a sufficient number of drives.

The final configuration used CentOS 6 with its KVM virtualization support. I took advantage of the dual Gigabit Ethernet interfaces to run a NAT gateway on a virtual machine (VM). The VM had access to both Ethernet bridge interfaces. This approach had the advantage of letting me check out half a dozen gateway packages. It was simply a matter of downloading an ISO image or a KVM image to test.
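The bridged setup described above can be sketched with CentOS 6's network scripts. This shows one bridge; the second NIC gets a matching eth1/br1 pair for the other side of the gateway. The IP address and netmask here are placeholders, not the ones used on this system.

```shell
# Bind the first NIC to bridge br0 so KVM guests can attach to it
cat > /etc/sysconfig/network-scripts/ifcfg-eth0 <<'EOF'
DEVICE=eth0
ONBOOT=yes
BRIDGE=br0
EOF

# Define the bridge itself; the host's address moves to the bridge
cat > /etc/sysconfig/network-scripts/ifcfg-br0 <<'EOF'
DEVICE=br0
TYPE=Bridge
ONBOOT=yes
BOOTPROTO=static
IPADDR=192.168.1.2
NETMASK=255.255.255.0
EOF

# Apply the new configuration
service network restart
```

A gateway VM then simply gets one virtual NIC on br0 and one on br1.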

Of course, only one gateway VM should be running at a time, since one side of the gateway has a fixed IP address. I finally wound up using Untangle for the gateway. I may get around to writing about the other gateway software I tried, which included the likes of IPCop and IPFire.

I am using a Samba server, also running in a VM, to provide access to the rest of the disk space. This let me minimize the host configuration. I used Linux's LVM (Logical Volume Manager) to manage the disk storage. The Samba server's main disk partition uses the hybrid RAID storage, while the network shares are on the hard-drive RAID 1 partition.
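A minimal sketch of the LVM side of this arrangement follows. The device name and sizes are placeholders: /dev/sdb stands in for whatever device node the controller's hard-drive RAID 1 logical drive appears as on a given system.

```shell
# Turn the hard-drive RAID 1 logical drive into an LVM physical volume
pvcreate /dev/sdb

# Group it into a volume group for share storage
vgcreate vg_shares /dev/sdb

# Carve out a logical volume for the Samba network shares
lvcreate -L 500G -n lv_samba vg_shares

# Put a filesystem on it and mount it where the Samba VM can reach it
mkfs.ext4 /dev/vg_shares/lv_samba
mount /dev/vg_shares/lv_samba /srv/shares
```

Keeping the shares in LVM makes it easy to grow a volume later with lvextend as the drives fill up.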

I do have a couple of other virtual machines running some LAMP servers. The nice thing about the configuration is that it is easy to put them on either side of the gateway since it is simply a matter of having them use the appropriate Ethernet bridge device.

The final system has been a joy to use, and it has been very reliable. I run the system as a headless server running multiple VMs, as already noted. Performance is excellent, and the hybrid RAID system provides the desired redundancy.

The next sections provide a more detailed view of the major components used in the project.
