There's a lot of talk about power consumption in data centers, yet the efforts to reduce storage power consumption are rarely discussed. Consider high-performance disk drives, commonly driven to 10,000 or 15,000 rpm by latency demands, with their numbers multiplying to supply the terabytes of storage we take for granted.
With demands for megabytes per second and I/Os per second high on their feature lists, the RAID controllers driving those arrays have embedded CPUs and memory rivaling what would have been considered a high-end PC only a few years ago! Storage consumes a significant amount of power, and designers can reduce it in three areas: on the disk drives; on the interfaces connecting the disk drives, controllers, and CPUs; and on the RAID controllers themselves.
The shrinking physical size of today's disk drives is helping to reduce power consumption. Smaller units have less massive platters, requiring smaller motors and less power to spin them. In addition, the resulting higher-bit densities help push data rates up without having to increase rotational speed.
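The relationship is easy to see with a rough model: the internal media data rate is the linear bit density times the speed of the media under the head. The numbers below (bit density, rpm, track radius) are illustrative assumptions, not figures from any particular drive.

```python
import math

def media_rate_mbps(bits_per_inch: float, rpm: float, track_radius_in: float) -> float:
    """Rough internal media data rate: linear bit density x linear track speed."""
    inches_per_s = 2 * math.pi * track_radius_in * rpm / 60   # media speed under the head
    return bits_per_inch * inches_per_s / 8 / 1e6             # bits/s -> MB/s

# Doubling linear bit density at a fixed 7,200 rpm doubles the media rate --
# no increase in rotational speed (and spindle-motor power) required.
base = media_rate_mbps(900_000, 7200, 1.0)       # assumed density, ~85 MB/s
dense = media_rate_mbps(1_800_000, 7200, 1.0)    # double density, ~170 MB/s
```

This is why areal-density gains let drive makers hold rotational speed flat while data rates climb.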
The new breed of "hybrid" disk drives should provide some very interesting opportunities. With large flash memories helping to buffer performance, there will be some fairly dramatic power reductions as drives begin to rely less on heavy power-hungry mechanicals to meet their specifications.
Storage and Host Interfaces
Just as mass is the enemy of power savings in mechanical disk drives, switching signals are the enemy of the interfaces between those drives and their controllers, as well as between the controllers and their servers. The industry move from parallel interfaces such as SCSI and PCI to serial interfaces like SAS and PCI Express has a downside in power.
When a parallel interface is idle, few or no signals are in the switching state, so its power consumption is naturally low. Serial interfaces generally rely on embedded clocking and must send special "idle" data patterns even when no real data is being transferred. Groups like T10 and the PCI-SIG are addressing innovative ways to improve the power efficiency of these interfaces without losing the other advantages of serial busses.
For instance, PCI Express allows each end of the link to stop transmitting completely when it knows there will be no data transfer for a while. The PCI Express 2.0 specification adds the ability to switch dynamically between 2.5- and 5-GT/s (gigatransfers per second) speeds. That same specification lets components negotiate down to fewer of their transmit/receive lane pairs when bandwidth needs are lower.
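The bandwidth arithmetic shows why both knobs matter. A minimal sketch, using the 8b/10b line encoding that PCI Express 1.x/2.0 specify (10 bits on the wire carry 8 bits of data); the specific link configurations below are illustrative:

```python
def pcie_throughput_mbps(gt_per_s: float, lanes: int) -> float:
    """Usable payload bandwidth of a PCIe 1.x/2.0 link, in MB/s."""
    raw_bits_per_s = gt_per_s * 1e9 * lanes   # raw transfers/s across all lanes
    data_bits_per_s = raw_bits_per_s * 8 / 10 # strip 8b/10b encoding overhead
    return data_bits_per_s / 8 / 1e6          # bits -> bytes -> MB/s

# A full x8 link at 5 GT/s vs. the same link throttled to x2 at 2.5 GT/s:
print(pcie_throughput_mbps(5.0, 8))   # 4000.0 (MB/s)
print(pcie_throughput_mbps(2.5, 2))   # 500.0 (MB/s)
```

Dropping both speed and width cuts the number of actively switching serializer/deserializer pairs, which is where much of the interface power goes.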
While these improvements in interface power reduce RAID-controller power consumption, evidence is mounting that they aren't enough. It's okay for a RAID controller to idle or throttle down its PCI Express bus, but what about all its onboard CPU(s), memories, and so on? There is growing industry collaboration between the makers of operating systems and chips to address ideas such as load-based power management.
Today's RAID controllers may know that they aren't doing anything right now, but they have little or no visibility into the state of the rest of the system. Knowing that the server is running at low utilization and without storage-intensive applications would let the RAID controller make more intelligent power-management decisions, such as slowing down or shutting off one or more of its CPU cores and shifting RAID parity calculations from hardware to firmware.
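The parity work in question is simple enough to move into firmware: RAID 5 parity is a byte-wise XOR across the data strips of a stripe, and the same XOR recovers a lost strip. A minimal sketch (the strip contents are made-up test data):

```python
from functools import reduce

def xor_parity(strips: list) -> bytes:
    """Byte-wise XOR of equal-length data strips, as used for RAID 5 parity."""
    assert len({len(s) for s in strips}) == 1, "strips must be equal length"
    return bytes(reduce(lambda a, b: a ^ b, column) for column in zip(*strips))

def rebuild_strip(parity: bytes, surviving: list) -> bytes:
    """Recover a lost strip by XORing the parity with every surviving strip."""
    return xor_parity([parity] + surviving)

d0, d1, d2 = b"\x0f\xf0", b"\x33\x33", b"\x55\xaa"   # three data strips
p = xor_parity([d0, d1, d2])                          # parity strip
assert rebuild_strip(p, [d0, d2]) == d1               # d1 survives a "drive failure"
```

A dedicated XOR engine does this faster per watt at full load, but at low utilization a firmware loop on an already-running core can be the cheaper choice.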
I/O virtualization is another interesting area for power-consumption savings tied to the RAID controller. Consider a blade server where every blade has a pair of mirrored drives on board to provide boot services and non-SAN (storage-area network) storage. Replacing those 20 or so drives with five on a shared RAID controller blade leads to nice power savings – and reduced system cost!
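The back-of-envelope math is straightforward; the per-drive wattage below is an assumed round number, not a figure from the article:

```python
WATTS_PER_DRIVE = 8.0   # assumed average draw of a small server drive (illustrative)

local_drives = 20       # a pair of mirrored drives on each of ten blades
shared_drives = 5       # consolidated behind a shared RAID controller blade

saved_watts = (local_drives - shared_drives) * WATTS_PER_DRIVE
print(f"~{saved_watts:.0f} W saved")   # ~120 W saved
```

Per chassis that is modest, but across a data center of such enclosures it adds up quickly, on top of the savings from buying and cooling fewer drives.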