Maturing InfiniBand Simply On Fire

Nov. 15, 2004

InfiniBand was never really burned, although some companies got singed along the way. The initial hype has given way to a rock-solid technology that blows away the competition, primarily 10G Ethernet. InfiniBand's low latency and high throughput combined with low overhead and features like remote direct memory access (RDMA) make it ideal for clusters and blade servers. It has essentially been rediscovered by system designers trying to put an ever-increasing number of processors into a box.

The InfiniBand naysayers remind me of the debate in embedded development decades ago (okay, I'm dating myself) over virtual memory. It was a neat technology for mainframes, but who needed it on a microprocessor? Besides, virtual memory introduces all sorts of problems with interrupt latency and determinism.

Most developers now take virtual-memory microprocessors for granted. Some systems simply exploit the memory protection provided by the hardware. Microprocessor software has changed, too, thanks to falling memory prices and rising processor performance. Operating systems like Linux and Windows won't run without virtual memory.

There's effectively one virtual-memory concept, but dozens of implementations exist with varying details. The operating system takes care of most of the implementation details. Performance varies from one architecture to the next, but on a given architecture the results are much the same regardless of how the virtual-memory subsystem is implemented. That's because the process and overhead are essentially the same: translate the address from logical to physical, then access the data.
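To make the "translate, then access" step concrete, here's a minimal sketch in C of a single-level page-table lookup. It's purely illustrative: the page size, table size, and physical base address are made up, and real processors use multilevel tables, TLBs, and hardware page walks. The point is simply that every access pays the same small translation cost, whatever the implementation.

/* Minimal sketch of the "translate, then access" step described above.
 * A single-level page table is assumed for illustration; real MMUs use
 * multilevel tables, TLBs, and hardware page walks, but each access has
 * the same shape: translate the address, then touch memory. */
#include <stdint.h>
#include <stdio.h>

#define PAGE_SIZE   4096u
#define NUM_PAGES   16u                 /* tiny address space for the example */

static uintptr_t page_table[NUM_PAGES]; /* logical page -> physical frame base */

/* Translate a logical address to a physical one. */
static uintptr_t translate(uintptr_t logical)
{
    uintptr_t page   = logical / PAGE_SIZE;
    uintptr_t offset = logical % PAGE_SIZE;
    return page_table[page] + offset;   /* no fault handling in this sketch */
}

int main(void)
{
    /* Simple mapping to a made-up physical base address. */
    for (uintptr_t p = 0; p < NUM_PAGES; p++)
        page_table[p] = 0x100000u + p * PAGE_SIZE;

    uintptr_t logical = 3 * PAGE_SIZE + 42;
    printf("logical 0x%lx -> physical 0x%lx\n",
           (unsigned long)logical, (unsigned long)translate(logical));
    return 0;
}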

Moving data between servers is a little more complex, incorporating a higher-level protocol on top of the data being transferred. This is very much the case with Ethernet and less so with InfiniBand and RapidIO. Protocol overhead was less of an issue when Ethernet ran at slower speeds, but at 10 Gbits/s it can be significant. That's why the TOE (TCP/IP offload engine) is so important to high-speed Ethernet. It's also why InfiniBand and RapidIO have an edge and in many ways complement Ethernet. Chips supporting these two technologies tend to be simpler and more efficient than TOEs. InfiniBand and RapidIO rarely meet, with InfiniBand entrenched in the data center while RapidIO fills a niche in communications and military applications.
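For a rough sense of why that overhead matters, the back-of-the-envelope sketch below (in C) assumes a standard 1500-byte Ethernet MTU and minimum TCP/IPv4/Ethernet headers; the numbers are illustrative, not measurements. Even with full-size frames, a saturated 10-Gbit/s link carries roughly 800,000 frames per second, each of which a host without offload must process in software, and smaller frames push that rate into the millions.

/* Back-of-the-envelope look at per-packet protocol processing at 10 Gbits/s.
 * Header sizes are the standard minimums (Ethernet 14 + FCS 4, IPv4 20,
 * TCP 20); the preamble/inter-frame gap and the 1500-byte MTU are
 * assumptions for illustration, not measurements. */
#include <stdio.h>

int main(void)
{
    const double link_bps  = 10e9;          /* 10 Gbits/s */
    const double mtu       = 1500.0;        /* typical Ethernet payload */
    const double l2_hdr    = 14.0 + 4.0;    /* Ethernet header + FCS */
    const double l3_l4_hdr = 20.0 + 20.0;   /* IPv4 + TCP headers */
    const double framing   = 8.0 + 12.0;    /* preamble + inter-frame gap */

    double wire_bytes   = mtu + l2_hdr + framing;   /* bytes on the wire per frame */
    double payload      = mtu - l3_l4_hdr;          /* application data per frame */
    double frames_per_s = link_bps / (wire_bytes * 8.0);

    printf("Overhead per frame : %.1f%%\n",
           100.0 * (1.0 - payload / wire_bytes));
    printf("Frames per second  : %.2f million\n", frames_per_s / 1e6);
    return 0;
}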

A combination of elements makes InfiniBand's flight possible: low overhead, low cost, high speed, and low power requirements. This is especially true for blade servers, where a small footprint and low heat generation are requirements. InfiniBand is building a track record that's harder and harder to ignore.
