It's raining board interconnect switch fabrics. Or so it seems, with last year's experimentation and development leading to a flood of deployments and new products. We're seeing clusters switch gears, going from dozens and hundreds of nodes to thousands and tens of thousands of nodes.
Fabrics overcome many of the limitations of buses in scalability, reliability, and performance. The technology is still relatively new, but the need for speed is pushing its adoption.
Switch fabrics have finally settled into coexistence with each other. Ethernet remains king of the Internet and the enterprise. Serial RapidIO is the platform of choice for communications and for many military applications, such as large radar systems. InfiniBand is a major player in clustering, and PCI Express often links these fabrics to the processors in the network.
These fabrics all have sufficient features and performance to meet computing needs for many years to come. Still, incremental speed jumps, as well as new features like virtual channels and remote direct memory access (RDMA) support, keep designers on their toes. Many of these developments result from cross-pollination with other switch fabrics.
Gigabit Ethernet
Ethernet continues to dominate the Internet. Prices and power consumption continue to fall, while performance keeps rising. The platform of choice at the high end will be 10-Gbit Ethernet on copper, while 1-Gbit Ethernet will be the bottom end for fabrics. For industrial use, 100-Mbit Ethernet will remain the mainstay, though 1-Gbit Ethernet is pushing its way in. In the meantime, 100-Gbit Ethernet remains on the drawing board, so don't expect more than spin this year.
New technology adoption and innovation are more likely to deliver gains in price and efficiency with products like Silverback Systems' iSCSI initiator host bus adapters (Fig. 1). These devices handle higher-level protocols such as iSCSI and RDMA, in addition to TCP/IP. Handing off fabric management to adapters is critical for minimizing host overhead.
Serial RapidIO
Last year saw a flood of Serial RapidIO chips and the launch of the Serial RapidIO Interoperability Lab. Excellent interoperability has put Serial RapidIO in Ethernet's league.
One key factor behind the push is the Serial RapidIO interface's incorporation onto the processor chip. For example, Freescale's MPC8641D can be found on Embedded Planet's EP8641A (Fig. 2). The AMC-based board has a 4x Serial RapidIO interface that plugs into a MicroTCA or AdvancedTCA fabric.
Meanwhile, Serial RapidIO is now being integrated into high-end DSPs, such as Texas Instruments' C6000 line. It provides a way to cluster DSPs, as well as integrate them into a larger Serial RapidIO-based fabric. Overall, Serial RapidIO is likely to dominate the communications and military arenas for data-plane work.
InfiniBand At The Head Of The Pack
When it comes to performance and power, InfiniBand leads the way. It's the glue that holds together the largest supercomputers and commercial clusters. This year, it will probably strengthen its hold, which isn't bad for a technology that was written off as dead just a couple of years ago.
Its 4x 20-Gbit/s host interfaces provide more bandwidth than even a quad-core processor can use, so there's little chance this year of movement to the 12x links that InfiniBand also defines. Still, speed increases are expected soon, simply to stay ahead of system requirements.
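A quick back-of-the-envelope calculation shows where that 20-Gbit/s figure comes from and why 12x links offer little practical benefit for a single host today. The sketch below assumes double-data-rate signaling of 5 Gbits/s per lane and 8b/10b line coding, the standard InfiniBand DDR parameters:

/* Rough InfiniBand link-bandwidth arithmetic, assuming DDR signaling of
 * 5 Gbits/s per lane and 8b/10b encoding (80% payload efficiency). */
#include <stdio.h>

int main(void) {
    const double lane_gbps = 5.0;      /* DDR signaling rate per lane  */
    const double efficiency = 0.8;     /* 8b/10b line-coding overhead  */
    const int widths[] = { 1, 4, 12 }; /* link widths the spec defines */

    for (int i = 0; i < 3; i++) {
        double raw = widths[i] * lane_gbps;
        printf("%2dx link: %5.1f Gbits/s signaling, %5.1f Gbits/s data\n",
               widths[i], raw, raw * efficiency);
    }
    return 0;
}

The 4x case works out to 20 Gbits/s of signaling (16 Gbits/s of payload), and a 12x link simply triples that, which is far beyond what today's hosts can consume.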
It's doubtful that InfiniBand will move onto the processor chip this year; the conventional bridge-chip approach should remain. PCI Express-to-InfiniBand chips and adapters are now available from several sources. HyperTransport is also becoming more important to InfiniBand because of the Opteron's growing use in large systems. QLogic offers an HTX version of its QLE7140 PCI Express adapter that connects directly to the Opteron (Fig. 3), and the same protocol stack serves both platforms.
PCI Express Is Ubiquitous
No surprise here: PCI Express is a rousing success. It's the interface of choice on microcontrollers and processor support chip sets. PCI Express is being used for board and chassis interconnects, but it continues as a host-based solution. Or is it?
Advanced Switching, the PCI Express fabric, seems to have lost out to the other fabrics and to PCI Express virtualization. Virtualization will be the hot ticket this year, though it will mostly be a year of building and experimentation on that front.
PCI Express adoption will continue to be driven by the performance requirements of new systems. It will be found on all board and mezzanine form factors, from the compact EPIC Express to large AdvancedTCA racks.
HyperTransport Moving Off-Board
HyperTransport is more of an on-board, chip-to-chip interconnect, made famous by companies like AMD and Broadcom with their high-performance, multichip systems. HTX (HyperTransport Expansion) is the standard for moving HyperTransport off-board.
The reasoning behind HTX is the same as for PCI Express: getting high-speed access to peripherals without going through an additional level of translation. Look for HTX boards like QLogic's InfiniPath to become more common as the HTX connector shows up on more boards and high-performance interfaces such as InfiniBand become add-ons.
Changing The Software
The biggest change this year will come on the software side as more fabric hardware is installed. Hardware adoption will continue apace, with existing applications running unchanged at first. But rewriting software to exploit the fabric will deliver even better throughput and more functionality. Protocols like iSCSI and features like RDMA require application and operating-system modification, and the software to handle this is now available and well understood. Conventional buses aren't going away, but fabrics will ultimately dominate.
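To make that last point concrete, here is a minimal sketch of the kind of explicit setup RDMA asks of an application, using the open-source libibverbs API. The device selection, error handling, and 4-kbyte buffer size are illustrative choices, and a real transfer would also need queue pairs and connection setup. Unlike a plain socket write, the application must register its buffers so the adapter can move data directly to and from user memory without involving the host CPU:

/* Minimal RDMA setup sketch: open an adapter and register a buffer.
 * Build with: gcc rdma_reg.c -libverbs */
#include <stdio.h>
#include <stdlib.h>
#include <infiniband/verbs.h>

int main(void) {
    int num = 0;
    struct ibv_device **devs = ibv_get_device_list(&num);
    if (!devs || num == 0) {
        fprintf(stderr, "no RDMA-capable devices found\n");
        return 1;
    }

    /* Open the first adapter and allocate a protection domain. */
    struct ibv_context *ctx = ibv_open_device(devs[0]);
    if (!ctx) { fprintf(stderr, "failed to open device\n"); return 1; }
    struct ibv_pd *pd = ibv_alloc_pd(ctx);
    if (!pd) { fprintf(stderr, "failed to allocate protection domain\n"); return 1; }

    /* RDMA requires the application to register its buffers up front so
     * the adapter can DMA directly into and out of user memory. */
    size_t len = 4096;
    void *buf = malloc(len);
    struct ibv_mr *mr = ibv_reg_mr(pd, buf, len,
                                   IBV_ACCESS_LOCAL_WRITE |
                                   IBV_ACCESS_REMOTE_WRITE);
    if (!mr) { fprintf(stderr, "memory registration failed\n"); return 1; }

    printf("registered %zu bytes: lkey=0x%x rkey=0x%x\n",
           len, mr->lkey, mr->rkey);

    /* Tear everything down in reverse order. */
    ibv_dereg_mr(mr);
    ibv_dealloc_pd(pd);
    ibv_close_device(ctx);
    ibv_free_device_list(devs);
    free(buf);
    return 0;
}

That registration step, and the verbs stack behind it, is exactly the application and operating-system modification the fabrics demand before their full throughput shows up.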