Interconnects Make The Cut With The Right Fabrics

Jan. 12, 2006
Standard-driven fabric backplanes move from evaluation to production.

Fabric backplanes are rolling off production lines, and maturing technologies continue to move systems from evaluation to deployment, creating very large blade clusters. This one-two punch sums up today's ever-evolving field of interconnects.

However, the software still needs to catch up to the hardware. Once that occurs, processor networks will be able to tackle problems that were impossible in the past, as well as offer higher reliability plus higher performance.

Newer, faster technologies such as Advanced Switching Interconnect (ASI), 10-Gigabit Ethernet, InfiniBand, and RapidIO are displacing established fabric technologies like StarFabric and Mercury's Raceway++ (see "Fabric Acceptance," p. 69). These new technologies provide the bandwidth and expandability that's unattainable with bus technologies such as PCI, PCI-X, and VME64. They also offer more expansion and flexibility than PCI Express, the direct replacement for those buses.

Full mesh systems yield excellent performance, but they become increasingly difficult to support as the number of nodes grows. Cabling across multiple racks can be a nightmare as well. A common compromise is the dual-star configuration usually employed in fabric backplanes for standard form factors such as VME, AdvancedTCA, MicroTCA, and CompactPCI (see the figure).
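To see why, compare link counts: a full mesh needs a dedicated link between every pair of nodes, while a dual star needs only two links per node, one to each switch slot. The quick Python sketch below illustrates the scaling; the node counts are arbitrary examples, not tied to any particular backplane specification.

```python
# Back-of-envelope comparison of backplane link counts.
# Full mesh: every node has a dedicated link to every other node.
# Dual star: every node has one link to each of two central switches.

def full_mesh_links(nodes: int) -> int:
    return nodes * (nodes - 1) // 2

def dual_star_links(nodes: int) -> int:
    return 2 * nodes

for n in (6, 14, 21):
    print(f"{n:2d} nodes: full mesh = {full_mesh_links(n):3d} links, "
          f"dual star = {dual_star_links(n):2d} links")

# Example output:
#  6 nodes: full mesh =  15 links, dual star = 12 links
# 14 nodes: full mesh =  91 links, dual star = 28 links
# 21 nodes: full mesh = 210 links, dual star = 42 links
```

The dual star gives up some of the mesh's aggregate bandwidth, but it keeps backplane routing and cabling manageable as slot counts rise.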

No single fabric architecture has won the market yet. Some fill certain niches. InfiniBand dominates supercomputer clustering. PCI Express, which has been pushing fabrics, can deliver the bandwidth necessary to feed fabrics without overloading the host. Its cousin ASI uses the same underlying hardware, but ASI has its own protocol.

ASI Based on PCI Express, ASI benefits from PCI Express' success. The protocols for ASI and PCI Express differ, but the hardware interface remains consistent. ASI's ability to tunnel PCI Express now comes in handy as ASI emerges from the lab into the real world.

ASI trails Ethernet, InfiniBand, and RapidIO in deployment. That could change this year, though, as ASI chips start to ship en masse. It appeals to users who are now familiar with PCI Express. The latter's success is building more confidence in the ASI camp.

10G ETHERNET Ethernet runs the world, and 10-Gigabit Ethernet (10G) excels at pumping data into and out of clusters attached to the Internet. Still, it has a tough road to travel against the competition because its compatibility is both a blessing and a curse.

First out of the chute for Ethernet fabrics was 1-Gigabit Ethernet, and it's been the mainstay for the past few years. But it's only a matter of time before 10G Ethernet supersedes it. Both provide direct connections to corporate and Internet Ethernet backbones. Bandwidth is the primary reason for fabric-based solutions, and 10G offers more than its older sibling. IT managers are comfortable with Ethernet, and 10G is just faster, right?

Ethernet's TCP/IP overhead is becoming its biggest liability. To combat that problem, designers developed TCP/IP offload engines (TOEs) and turned to faster hosts. They're sure to make a big difference at 10G, which is saddled with much more overhead. Consequently, high-end network attached storage (NAS) and storage-area networks (SANs) can take advantage of 10G's capacity. On top of that, interest in iSCSI continues to rise, which will require TOEs to keep up at 10G speeds.
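A back-of-envelope calculation shows why. One often-quoted rule of thumb holds that software TCP/IP processing costs roughly 1 Hz of CPU clock per bit per second of throughput. Taking that figure at face value (actual load varies widely with the stack, NIC, and traffic mix), the Python sketch below suggests the scale of the problem at 10 Gbits/s.

```python
# Very rough estimate of host CPU load for software TCP/IP processing,
# using the old "1 Hz of CPU per 1 bit/s of throughput" rule of thumb.
# Illustrative only -- real load depends on the stack, NIC, and workload.

RULE_OF_THUMB_HZ_PER_BPS = 1.0  # assumed rule of thumb, not a measurement

def cores_consumed(link_gbps: float, core_ghz: float = 3.0) -> float:
    cycles_per_second = link_gbps * 1e9 * RULE_OF_THUMB_HZ_PER_BPS
    return cycles_per_second / (core_ghz * 1e9)

for link in (1, 10):
    print(f"{link:2d} Gbit/s Ethernet needs roughly "
          f"{cores_consumed(link):.1f} cores of a 3-GHz CPU for TCP/IP alone")

# Example output:
#  1 Gbit/s Ethernet needs roughly 0.3 cores of a 3-GHz CPU for TCP/IP alone
# 10 Gbit/s Ethernet needs roughly 3.3 cores of a 3-GHz CPU for TCP/IP alone
```

Gigabit Ethernet fits comfortably on a modern host; 10G without offload does not.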

InfiniBand will compete with 10G, since both target compute clusters. InfiniBand's trump card, remote direct memory access (RDMA), is being put into the Ethernet deck. But it remains to be seen whether Ethernet's added overhead will give InfiniBand the edge it needs to stay ahead.

INFINIBAND It's back. InfiniBand got a bad rap between the initial hype and product shipment. Yet many designers who jumped ship are back in force now that InfiniBand has proven itself. All major switch vendors feature InfiniBand products in their catalogs. It's a definite change from two years ago.

Mellanox's delivery of double-data-rate (DDR) InfiniBand, which pumps out 60 Gbits/s per connection, puts InfiniBand on top of the performance heap. Add RDMA support and extremely low latency, and it's the clear winner for super clusters. InfiniBand chips also are inexpensive and use little power.

Still, the next two years are likely to be a period of consolidation and growth for InfiniBand products rather than new performance increases. That's because InfiniBand already taxes even the fastest PCI Express-equipped host, and the next generation of PCI Express isn't expected until 2007. We won't see a faster InfiniBand until then. In the interim, look for even lower-cost solutions and a movement into the lower end of the server market.
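The arithmetic behind that bottleneck is simple. Both DDR InfiniBand and first-generation PCI Express use 8b/10b encoding, so only 80% of the signaling rate carries data. The Python sketch below compares nominal link rates only; protocol overhead reduces real-world throughput further.

```python
# Nominal link-rate comparison: DDR InfiniBand vs. first-generation PCI Express.
# Both use 8b/10b encoding, so payload capacity = 0.8 * signaling rate.
# Nominal rates only; protocol overhead cuts real throughput further.

ENCODING_EFFICIENCY = 8 / 10  # 8b/10b

def payload_gbps(lanes: int, lane_signal_gbps: float) -> float:
    return lanes * lane_signal_gbps * ENCODING_EFFICIENCY

ib_12x_ddr = payload_gbps(lanes=12, lane_signal_gbps=5.0)  # 60 Gbits/s signaling
ib_4x_ddr  = payload_gbps(lanes=4,  lane_signal_gbps=5.0)  # 20 Gbits/s signaling
pcie1_x8   = payload_gbps(lanes=8,  lane_signal_gbps=2.5)  # 20 Gbits/s signaling

print(f"12x DDR InfiniBand: {ib_12x_ddr:.0f} Gbits/s of payload capacity")
print(f" 4x DDR InfiniBand: {ib_4x_ddr:.0f} Gbits/s of payload capacity")
print(f" PCIe 1.x x8 slot : {pcie1_x8:.0f} Gbits/s per direction")

# Example output:
# 12x DDR InfiniBand: 48 Gbits/s of payload capacity
#  4x DDR InfiniBand: 16 Gbits/s of payload capacity
#  PCIe 1.x x8 slot : 16 Gbits/s per direction
```

Even a 4x DDR adapter fills a first-generation x8 slot, so a faster InfiniBand link has little to gain until a faster host interface arrives.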

RAPIDIO FABRIC It's simple. Simply fast. Simple to program. And simply the best solution for a range of applications, such as processing radar data.

Several chips with Serial RapidIO support debuted last year, including the all-important switch chips. This year, those chips will be available in quantity. So will boards based on this technology. Likewise, host processors announced with built-in Serial RapidIO interfaces will arrive this year, significantly changing the RapidIO landscape. It's time to reap the benefits of those promised DSP farms.

RapidIO has proven interest and support in communications, military, medical, and other arenas. This year will establish where it can beat out other fabrics. Its peer-to-peer communication system doesn't require the common memory map used in other systems, making RapidIO more suitable for some applications.

Don't count out StarFabric or PCI Express when it comes to fabrics, though. StarFabric remains a mature and inexpensive technology that's particularly useful if it can meet the bandwidth requirements of an application. StarFabric offers a migration path to ASI, which will eventually replace it completely, but ASI hardware will remain more expensive for at least a couple of years.

Multiprocessor PCI Express solutions are now possible using many of the current PCI Express bridge chips. It's not as sophisticated a solution as ASI, but simpler is often better if the architecture can handle the desired application.

Finally, PCI Express remains the bridge and bottleneck for fabrics. Because it's the host connection of choice, fabrics will depend on suitably equipped hosts. Only Serial RapidIO shows the promise of host-based fabric interfaces this year.

Parallel bus architectures aren't going away. But as performance and reliability requirements rise, fabrics become the answer.
