Fig. 1: SuperServer 6046T-TUF system
Fig. 2: Dual 6-core Westmere Xeon processors
Fig. 3: Micron DDR3 UDIMMs
Fig. 4: 3.5-in removable SAS drives
Fig. 5: Five-drive SAS backplane
Fig. 6: LSI MegaRAID SAS 9260-8i
Fig. 7: Removable cooling fans
This is a hands-on project I look forward to doing each year: I get to take the best hardware available and see what it can do. This year we start with Intel's Westmere Xeon processor and Super Micro Computer's (SuperMicro) SuperServer 6046T-TUF (Fig. 1). The system pairs a motherboard based on Intel's 5520 Tylersburg chipset with a 4U case that works on a desktop or mounted in a rack.
It is possible to build up a system as I did, but most customers will likely have Super Micro build one for them; the company can provide all the components presented here. The major components used in the system include:
- Intel Westmere hex core Xeon (see Multicore, Flash, And Secure Drives Deliver Performance)
- Super Micro SuperServer 6046T-TUF (see 4U Super Server For Embedded Systems)
- LSI MegaRaid SAS 9260-8i (see SAS HBA Handles Hierarchical Storage)
- Seagate Constellation (see 2.5-in Terabyte Hard Drive Secures Secrets)
- Micron Technology DDR3 UDIMM (see Micron Delivers Solid DDR3 Server Memory)
Most of these components were selected for Electronic Design's Best of 2010 Award. The rest are top-notch components that fill out the system. For example, LSI's SAS host bus adapter (HBA) provides hardware-based RAID 5 and RAID 6 support, whereas the Super Micro motherboard supports RAID 5 only in software. Also, the chip used on the PCI Express HBA is the same LSI 2008 SAS controller available on one incarnation of the motherboard. It is the usual price/performance tradeoff; we chose the option of using one of the x8 PCI Express slots.
The first step in the process was to mount the hex-core Xeon processors (Fig. 2) on the motherboard along with the heat-sink and fan assemblies. This is done before the motherboard is placed into the system because the heat sink has a matching plate underneath the motherboard, and the two items are bolted together. Be sure to apply the thermal compound that usually accompanies the heat sink to the top of the processor to make a good thermal connection. The hex-core Xeons support Hyper-Threading, so this dual-chip system can handle up to 24 threads (two processors × six cores × two threads per core).
The next step is to add the Micron Technology DDR3 UDIMM memory. Each Intel Xeon processor has a three-channel DDR3 memory controller, and each processor can access the memory of the other, so all cores operate in a single SMP environment. I filled three of the six slots for each processor (Fig. 3) with 4-Gbyte DDR3 UDIMMs with ECC support from Micron Technology. This provided 24 Gbytes of memory; the motherboard can handle significantly more. If you are buying your own memory, you would probably pick up some from Crucial Technology, Micron's consumer and VAR brand, since Micron's parts are normally available to OEMs.
Adding Storage
The SuperServer has five hot-swap 3.5-in drive bays and three peripheral 5.25-in drive bays. It can also handle one slim DVD drive. I decided to skip the DVD drive since installing Linux, and most other operating systems, can be done easily from a USB DVD drive, a flash drive, or the network. I tend to use all three depending upon the OS. I keep a couple of 1-Gbyte flash drives with various versions of Linux handy in the lab; a sketch of preparing one follows.
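If you want to build one of these install drives yourself, here is a minimal sketch for a CentOS/RHEL 5-era image, done from another Linux box. The device name /dev/sdX is a placeholder; double-check it before running, because dd overwrites the target completely.

```bash
# CentOS/RHEL 5 shipped a dedicated USB boot image on the install media.
# Write it to the flash drive, then point the installer at a DVD, an ISO
# on disk, or a network mirror for the packages.
dd if=images/diskboot.img of=/dev/sdX bs=1M
sync   # flush buffers before unplugging the drive
```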
I didn't mention it earlier, but this server was destined to be a nearline storage server; my main compute server has faster SAS drives. This system is populated with five Seagate Constellation drives designed for nearline use. They are rugged and fast but, at 7200 rpm, they are not as fast as Seagate's 15,000-rpm SAS drives, and flash-based drives are faster still. The Constellation drives are also available as self-encrypting drives, which work with the LSI controller.
Installing the five 3.5-in Seagate Constellation drives (Fig. 4) was easy. The carriers pop out, and mounting a drive is simply a matter of swapping it for the removable plastic frame. The system ships with the frames installed because the metal carrier is U-shaped and needs the drive, or the frame, to maintain its strength. The drive carriers are hot-swappable. Super Micro provides labels for the carriers, which are handy because the drives need to be installed in the proper order for things to work correctly.
As with most servers these days, the drives are designed to plug directly into backplane-based connectors. In this case, the SuperServer's drive bay has matching SATA/SAS connectors on the other side of the backplane (Fig. 5). As delivered, the backplane cables are connected to the motherboard's SATA controller.
At this point there was sufficient hardware installed to check out the system, so we installed an OS for the first time. I used a USB keyboard and an LCD monitor for the install, but the system typically runs headless.
First OS Install
I booted from a USB DVD drive with the standard CentOS 5.5 DVD. CentOS is a free version of Red Hat Enterprise Linux (RHEL), the gold standard for Linux servers. The standard installation sets up multiple partitions, including a boot partition and an LVM (logical volume manager) partition. I decided to keep things simple to start with, since it would be an easy task to bring the other drives into the mix later using LVM (a sketch of that step follows).
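As a rough sketch of what that later step looks like, the commands below fold an additional drive into the installer's default layout. The device and volume names are assumptions; CentOS 5's installer typically created a volume group named VolGroup00 with a root volume LogVol00, so check yours with vgs and lvs first.

```bash
pvcreate /dev/sdb1                            # label the new partition for LVM use
vgextend VolGroup00 /dev/sdb1                 # grow the volume group onto it
lvextend -L +500G /dev/VolGroup00/LogVol00    # enlarge the root logical volume
resize2fs /dev/VolGroup00/LogVol00            # grow the ext3 filesystem to match
```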
This exercise was done primarily to test all the installed components, which meant exercising all the drives and memory. Everything worked nicely, including the two Ethernet ports, so it was time to move on.
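For anyone repeating this checkout, a few stock Linux commands are enough to confirm the OS sees everything; nothing vendor-specific is required.

```bash
grep -c ^processor /proc/cpuinfo   # expect 24 logical CPUs from two 6-core HT Xeons
free -m                            # expect roughly 24 Gbytes of RAM
fdisk -l                           # the five Constellation drives should be listed
```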
Adding Hardware RAID
The next step was to replace the on-board SATA interface with the LSI MegaRaid SAS 9260-8i adapter, which plugs into one of the x8 PCI Express slots (Fig. 6). It comes with hydra cables, each running from a single adapter connector to four SAS/SATA drive connectors. Two cables are required for this project since we have five hard drives.
It was a simple matter of unplugging the disk-drive cables from the motherboard and replacing them with LSI's cables. The adapter presents virtual drives to the system, so the first thing to do is boot into the WebBIOS, which is essentially the typical BIOS-level configuration interface for this type of adapter.
The default setup creates a virtual drive consisting of all the available drives, which is often what you want. Then it is simply a matter of choosing which RAID configuration to use: RAID 0, 1, 5, or 6. RAID 10, 50, and 60 are a bit more complex to set up. As it turns out, I had to create two virtual drives to handle the CentOS installation.
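The same configuration can also be scripted from a running OS with LSI's MegaCli command-line utility instead of WebBIOS. A hedged sketch follows; the enclosure:slot IDs and adapter number are placeholders, so enumerate the real ones first.

```bash
MegaCli64 -PDList -aALL                                        # list physical drives and their enclosure:slot IDs
MegaCli64 -CfgLdAdd -r6 [252:0,252:1,252:2,252:3,252:4] -a0    # build a RAID 6 virtual drive on adapter 0
MegaCli64 -LDInfo -Lall -a0                                    # confirm the new virtual drive
```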
Second OS Install
I went again with a CentOS 5.5 installation but found that it did not like boot drives over 2 Tbytes. CentOS and Red Hat support the UEFI (Unified Extensible Firmware Interface) found on the Super Micro motherboard, but for some reason the installation process could not handle the GUID Partition Table (GPT). I suspect it has something to do with the boot loader. The next version, 6.0, addresses the issue, and the latest version of Microsoft Windows Server can also handle one large boot drive.
This issue prompted me to redo the default virtual-disk configuration. I created two virtual disks: the first was 100 Gbytes, and the second used the remaining space. Both were set up using RAID 6. RAID 6 sets aside two drives' worth of capacity for parity versus one for RAID 5, leaving (n-2)/n of the raw space usable instead of (n-1)/n, but in exchange RAID 6 can handle the loss of two drives.
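For this five-drive array the trade-off works out as follows, where C is the capacity of a single drive (a back-of-the-envelope sketch, not a vendor figure, since the controller reserves a little space for metadata):

```latex
% Usable capacity of an n-drive array with per-drive capacity C:
\begin{align*}
\text{RAID 5:}\quad (n-1)\,C &= 4C \quad (80\%\ \text{of raw},\ n=5)\\
\text{RAID 6:}\quad (n-2)\,C &= 3C \quad (60\%\ \text{of raw},\ n=5)
\end{align*}
```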
Splitting the disk array into multiple virtual disks takes a bit more effort: the virtual disks need to be created first, and then the logical disks added on top of them. Still, the whole process took less than fifteen minutes, and it only needs to be done once.
The nice feature of this approach is that the OS does not have to worry about the RAID support. Hot-swapping disks and rebuilding the array are handled by the hardware. Device drivers are provided for all major operating systems, including RHEL and CentOS, along with management support through LSI's MegaRAID Storage Manager (MSM) application.
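The same health information is reachable from the command line as well, which is handy on a headless box when the MSM GUI isn't running. A small sketch using MegaCli's stock status queries; adapter 0 and the enclosure:slot ID are assumptions:

```bash
MegaCli64 -LDInfo -Lall -a0                        # state of each virtual drive (Optimal/Degraded)
MegaCli64 -PDRbld -ShowProg -PhysDrv [252:2] -a0   # rebuild progress on a replaced drive
```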
I could go on about MSM, but if you have used it or this type of management program, you know it is a must-have tool. It can operate across the network to manage large server farms, and it is especially useful for tracking problems and alerting administrators via email.
The system has proven to be very robust as expected. It easily handles a dozen virtual machines.
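For reference, guests on a CentOS host of this vintage typically run on KVM through libvirt. A minimal sketch, assuming the virtualization packages are installed; the guest name, sizes, and mirror URL are placeholders:

```bash
virt-install --name guest01 --ram 2048 --vcpus 2 \
  --disk path=/var/lib/libvirt/images/guest01.img,size=20 \
  --location http://mirror.example.com/centos/5/os/x86_64/ \
  --nographics    # text-mode install; attach later with virsh console
```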
Closing Comments
This system was a dream to work with. The processor placement on the motherboard lines up with the rear cooling fans, so everything works together to provide plenty of airflow. The fans are easy to remove and hot-swappable (Fig. 7). They start up at full speed but step down once the system is running, and they barely sped up even when running heavy compute and storage applications.
Adding memory after the fact is easy as well. The Micron memory configuration left half the slots empty for expansion, and the memory slots are well clear of the PCI Express slots, so they do not interfere even if all the PCI Express slots are full.
I did not have the optional IPMI feature with remote KVM (keyboard, video, mouse) support installed, but it is a handy tool in large installations. I also took advantage of only one of the PCI Express slots, and this is where Super Micro's system would excel. Installing some 10-Gbit Ethernet or dual-port 4x InfiniBand adapters with I/O virtualization support would make things interesting. Custom, full-height PCI Express adapters could prove useful for some applications, and GPUs might as well, although many would rather be plugged into an x16 PCI Express slot.
Check out the Related Articles below for links to individual reviews of the components used in this project.