Add 10G Ethernet And InfiniBand, Then Mix Thoroughly

Dec. 15, 2006
Mellanox’s ConnectX host architecture blends 10G Ethernet and 20-Gbit/s InfiniBand.

Cluster building with InfiniBand is becoming ever more common, but these clusters never operate in isolation. They need a connection to the outside world, and that connection runs Ethernet. With the ConnectX hardware architecture from Mellanox, the two networking fabrics come together (Fig. 1).

The ConnectX hardware interface will find a home in Mellanox's next iteration of host adapter chips, and the same interface will be used for both the InfiniBand and the new Ethernet parts. The first chip, which pairs Ethernet and InfiniBand ports, will target the cluster nodes that sit between an Ethernet front end and an InfiniBand back end (Fig. 2).

This approach works well because 10-Gbit (10G) Ethernet uses the same serializer/deserializer (SERDES) as InfiniBand. Mellanox implements stateless Ethernet hardware acceleration that delivers a significant performance boost with low host overhead, though less than a TCP/IP offload engine (TOE) would. Most TOE implementations running at just 1 Gbit/s already consume more than twice the power of InfiniBand, which runs significantly faster (40 Gbits/s per port).
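Stateless acceleration of this sort (checksum and segmentation offloads, for example) is typically visible to software through standard operating-system interfaces rather than a driver-specific API. As a rough, non-ConnectX-specific illustration, the sketch below uses the generic Linux ethtool ioctl to ask a NIC whether transmit checksum offload is currently enabled; the interface name "eth0" is a placeholder assumption.

    /* Query a NIC for its stateless TX checksum offload setting via the
     * legacy Linux ethtool ioctl. Generic example, not ConnectX-specific. */
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/ioctl.h>
    #include <sys/socket.h>
    #include <net/if.h>
    #include <linux/ethtool.h>
    #include <linux/sockios.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_DGRAM, 0);   /* any socket will carry the ioctl */
        if (fd < 0) { perror("socket"); return 1; }

        struct ethtool_value eval = { .cmd = ETHTOOL_GTXCSUM };
        struct ifreq ifr;
        memset(&ifr, 0, sizeof(ifr));
        strncpy(ifr.ifr_name, "eth0", IFNAMSIZ - 1); /* placeholder interface name */
        ifr.ifr_data = (char *)&eval;

        if (ioctl(fd, SIOCETHTOOL, &ifr) == 0)
            printf("TX checksum offload: %s\n", eval.data ? "on" : "off");
        else
            perror("SIOCETHTOOL");

        close(fd);
        return 0;
    }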

The InfiniHost III Ex Dual-Port InfiniBand adapter consumes only 6 W. The stateless approach will use more host resources, but the host will already have extra cycles available because the InfiniBand interface imposes significantly less overhead.

COMPATIBILITY IS KEY

ConnectX is compatible with the standard IP-based protocols used with Ethernet, including IP, TCP, UDP, ICMP, FTP, ARP, and SNMP, making it interoperable with third-party 1-Gbit/s and 10-Gbit/s Ethernet products. These protocols work over InfiniBand as well, though it's more efficient to use the OpenFabrics interface.
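Because the same IP protocol suite runs over both fabrics (over InfiniBand via IP-over-InfiniBand), ordinary sockets code doesn't need to know which one carries its traffic. The minimal TCP client sketched below is generic; the host name "storage-node" and port "9000" are hypothetical, and the kernel's routing decides whether the connection rides an Ethernet or an InfiniBand interface.

    /* Minimal TCP client: identical code whether the route goes out an
     * Ethernet NIC or an IPoIB interface. Host and port are placeholders. */
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <netdb.h>
    #include <sys/socket.h>

    int main(void)
    {
        struct addrinfo hints, *res;
        memset(&hints, 0, sizeof(hints));
        hints.ai_family = AF_UNSPEC;      /* IPv4 or IPv6 */
        hints.ai_socktype = SOCK_STREAM;  /* TCP */

        if (getaddrinfo("storage-node", "9000", &hints, &res) != 0) {
            fprintf(stderr, "getaddrinfo failed\n");
            return 1;
        }

        int fd = socket(res->ai_family, res->ai_socktype, res->ai_protocol);
        if (fd < 0 || connect(fd, res->ai_addr, res->ai_addrlen) < 0) {
            perror("connect");
            freeaddrinfo(res);
            return 1;
        }

        const char msg[] = "hello over whichever fabric routes here\n";
        write(fd, msg, sizeof(msg) - 1);

        close(fd);
        freeaddrinfo(res);
        return 0;
    }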

The InfiniBand interface will include all of the InfiniHost III features, including OpenFabrics RDMA (remote direct memory access) support. The Ethernet interface doesn't provide RDMA support.
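Applications that do want RDMA reach it through the OpenFabrics verbs library (libibverbs). The sketch below, which assumes a host with the OpenFabrics stack installed, merely enumerates the RDMA-capable devices and prints a couple of attributes; it shows the shape of the API rather than any ConnectX-specific feature. Compile with -libverbs.

    /* Enumerate RDMA-capable devices through the OpenFabrics verbs library. */
    #include <stdio.h>
    #include <infiniband/verbs.h>

    int main(void)
    {
        int num;
        struct ibv_device **list = ibv_get_device_list(&num);
        if (!list || num == 0) {
            fprintf(stderr, "no RDMA devices found\n");
            return 1;
        }

        for (int i = 0; i < num; i++) {
            struct ibv_context *ctx = ibv_open_device(list[i]);
            if (!ctx)
                continue;

            struct ibv_device_attr attr;
            if (ibv_query_device(ctx, &attr) == 0)
                printf("%s: %d port(s), max %d queue pairs\n",
                       ibv_get_device_name(list[i]),
                       attr.phys_port_cnt, attr.max_qp);

            ibv_close_device(ctx);
        }

        ibv_free_device_list(list);
        return 0;
    }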

Some vendors of TOE Ethernet adapters have promised or are delivering RDMA support (see "iSCSI Does 10G Ethernet" at www.electronicdesign.com, ED Online ID 13285). InfiniBand offers other features, such as quality-of-service support and end-node application congestion management.

PRICE AND AVAILABILITY

Single- and dual-port InfiniBand-only adapters are available from Mellanox right now. The mixed Ethernet/InfiniBand adapters will arrive in the first quarter of 2007. Both 1-Gbit/s and 10-Gbit/s Ethernet interfaces will be available. Pricing is expected to be comparable to the InfiniBand adapters.

Mellanox
www.mellanox.com
