Maturing InfiniBand Simply On Fire

Nov. 15, 2004

InfiniBand was never really burned, although some companies got singed along the way. The initial hype has given way to a rock-solid technology that blows away the competition, primarily 10G Ethernet. InfiniBand's low latency and high throughput combined with low overhead and features like remote direct memory access (RDMA) make it ideal for clusters and blade servers. It has essentially been rediscovered by system designers trying to put an ever-increasing number of processors into a box.

The InfiniBand naysayers remind me of embedded development decades ago (okay, I'm dating myself) and its attitude toward virtual memory. It was a neat technology for mainframes, but who needed it on a microprocessor? Besides, virtual memory introduces all sorts of problems with interrupt latency and determinism.

Most developers now take virtual-memory microprocessors for granted. Some systems simply exploit the memory protection the hardware provides. Microprocessor software has changed, too, helped along by falling memory prices and rising processor performance. Operating systems like Linux and Windows won't run without virtual memory.

There's effectively one virtual-memory concept, but dozens of implementations exist with varying details. The operating system takes care of most of the implementation. Performance varies from one architecture to another, but on a given architecture the results are much the same regardless of how the virtual-memory subsystem is implemented, because the process and overhead are essentially identical: translate the address from logical to physical, then access the data.
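
To make that translate-then-access step concrete, here is a minimal C sketch of a single-level page-table lookup. The 4-KB page size, tiny table, and mappings are assumptions for illustration only; real MMUs differ in table depth and field widths, but the basic step is the same.

/* Minimal single-level page-table walk, illustrating the common
 * translate-then-access step shared by virtual-memory designs.
 * Page size, table size, and mappings are illustrative only. */
#include <stdint.h>
#include <stdio.h>

#define PAGE_BITS   12u                     /* 4-KB pages (assumed) */
#define PAGE_SIZE   (1u << PAGE_BITS)
#define NUM_PAGES   16                      /* tiny toy address space */

/* Each entry maps a virtual page number to a physical frame number. */
static uint32_t page_table[NUM_PAGES] = {
    [0] = 7, [1] = 3, [2] = 12, [3] = 5     /* remaining pages unmapped */
};

static uint32_t translate(uint32_t vaddr)
{
    uint32_t vpn    = vaddr >> PAGE_BITS;          /* virtual page number  */
    uint32_t offset = vaddr & (PAGE_SIZE - 1);     /* byte within the page */
    uint32_t frame  = page_table[vpn];             /* table lookup         */
    return (frame << PAGE_BITS) | offset;          /* physical address     */
}

int main(void)
{
    uint32_t vaddr = 0x2ABC;  /* page 2, offset 0xABC */
    printf("virtual 0x%X -> physical 0x%X\n",
           (unsigned)vaddr, (unsigned)translate(vaddr));
    return 0;
}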

Moving data between servers is a little more complex, layering a higher-level protocol on top of the data being transferred. The protocol burden is heaviest with Ethernet and lighter with InfiniBand and RapidIO. That overhead mattered less when Ethernet ran at slower speeds, but at 10 Gbits/s it can be significant, which is why the TCP/IP offload engine (TOE) is so important to high-speed Ethernet. It's also why InfiniBand and RapidIO have an edge and in many ways complement Ethernet: chips supporting these two technologies tend to be simpler and more efficient than TOEs. InfiniBand and RapidIO rarely meet, with InfiniBand entrenched in the data center while RapidIO holds a niche in communications and military applications.
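
As a rough picture of where that overhead lives, the sketch below contrasts a socket-style path (an extra host copy plus per-segment protocol work, the kind of work a TOE offloads) with an RDMA-style write that places data directly into pre-registered memory. The buffer sizes, the 1,460-byte segment payload, and the function names are assumptions for illustration; this is a toy model, not the Sockets or Verbs API.

/* Toy illustration of per-segment protocol overhead vs. RDMA-style
 * direct placement. Sizes and names are illustrative only. */
#include <stddef.h>
#include <stdio.h>
#include <string.h>

#define MSG_BYTES   (1 << 20)      /* 1-MB application message       */
#define SEG_BYTES   1460           /* typical TCP segment payload    */

static char app_buf[MSG_BYTES];    /* application's source buffer            */
static char kernel_buf[MSG_BYTES]; /* stand-in for a kernel socket buffer    */
static char remote_buf[MSG_BYTES]; /* stand-in for remote registered memory  */

/* Socket-style path: copy into the kernel, then count one unit of
 * protocol work per segment (the work a TOE moves off the host CPU). */
static unsigned socket_style_send(void)
{
    unsigned segments = 0;
    memcpy(kernel_buf, app_buf, MSG_BYTES);            /* extra host copy */
    for (size_t off = 0; off < MSG_BYTES; off += SEG_BYTES)
        segments++;                                    /* per-segment protocol work */
    return segments;
}

/* RDMA-style path: the adapter moves the registered buffer in one
 * operation, with no intermediate host copy or per-segment host work. */
static unsigned rdma_style_write(void)
{
    memcpy(remote_buf, app_buf, MSG_BYTES);            /* direct placement */
    return 1;                                          /* one work request */
}

int main(void)
{
    printf("socket-style: %u segments of host protocol work plus a copy\n",
           socket_style_send());
    printf("rdma-style:   %u work request, no intermediate copy\n",
           rdma_style_write());
    return 0;
}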

A combination of elements makes InfiniBand's flight possible: low overhead, low cost, high speed, and low power requirements. This is especially true for blade servers, where a small footprint and low heat generation are requirements. InfiniBand is building a track record that's harder and harder to ignore.
