Add 10G Ethernet And InfiniBand, Then Mix Thoroughly

Dec. 15, 2006
Mellanox’s ConnectX host architecture blends 10G Ethernet and 20-Gbit/s InfiniBand.

Cluster building with InfiniBand is becoming ever more common, but these clusters never operate in isolation. They need a connection to the outside world, and that connection runs Ethernet. With the ConnectX hardware architecture from Mellanox, the two networking fabrics come together (Fig. 1).

The ConnectX hardware interface will find a home in Mellanox's next iteration of host adapter chips, and the same interface will be used for both InfiniBand and the new Ethernet chips. The first chip, which carries both Ethernet and InfiniBand ports, will target cluster nodes that sit between an Ethernet front end and an InfiniBand back end (Fig. 2).

This approach works well because 10-Gbit (10G) Ethernet uses the same serializer-deserializer (SERDES) as InfiniBand. Mellanox implements stateless Ethernet hardware acceleration that delivers significant performance gains with low host overhead, though it does less than a full TCP/IP offload engine (TOE). Most TOE implementations running at 1 Gbit/s already consume more than twice the power of InfiniBand, which runs significantly faster (40 Gbits/s per port).
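
To make the "stateless" distinction concrete, the sketch below queries two typical stateless offloads, transmit checksumming and TCP segmentation offload, through Linux's standard ethtool ioctl. The host kernel keeps all TCP connection state; the adapter only handles per-packet work. This is a generic Linux illustration, not Mellanox-specific code, and the interface name eth0 is a placeholder.

/* Query stateless-offload flags (TX checksum, TSO) on a Linux
 * network interface via the ethtool ioctl. With stateless offload,
 * TCP state stays in the kernel; the NIC handles per-packet work. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <net/if.h>
#include <linux/ethtool.h>
#include <linux/sockios.h>

static int get_offload(int fd, struct ifreq *ifr, __u32 cmd)
{
    struct ethtool_value ev = { .cmd = cmd };

    ifr->ifr_data = (char *)&ev;
    if (ioctl(fd, SIOCETHTOOL, ifr) < 0)
        return -1;                      /* query failed */
    return (int)ev.data;                /* 1 = enabled, 0 = disabled */
}

int main(void)
{
    const char *ifname = "eth0";        /* placeholder interface name */
    struct ifreq ifr;
    int fd = socket(AF_INET, SOCK_DGRAM, 0);

    memset(&ifr, 0, sizeof(ifr));
    strncpy(ifr.ifr_name, ifname, IFNAMSIZ - 1);

    printf("tx-checksum offload: %d\n", get_offload(fd, &ifr, ETHTOOL_GTXCSUM));
    printf("tcp-segmentation offload: %d\n", get_offload(fd, &ifr, ETHTOOL_GTSO));

    close(fd);
    return 0;
}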

The InfiniHost III Ex Dual-Port InfiniBand adapter consumes only 6 W. The stateless approach will use more host resources, but the host will already have extra cycles available because the InfiniBand interface imposes significantly less overhead.

COMPATIBILITY IS KEY
ConnectX is compatible with the standard IP-based protocols used with Ethernet, including IP, TCP, UDP, ICMP, FTP, ARP, and SNMP, making it interoperable with third-party 1-Gbit/s and 10-Gbit/s Ethernet products. These protocols work over InfiniBand as well, though it's more efficient to use the OpenFabrics interface.
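
As a rough sketch of what that compatibility means for software, the standard BSD-sockets client below runs unchanged whether its packets leave through the adapter's Ethernet port or travel as IP over InfiniBand; the IP stack hides which fabric carries them. The host name cluster-head and port 7 (echo) are placeholders.

/* Minimal TCP client using standard BSD sockets. Identical code runs
 * over the Ethernet port or over IP-over-InfiniBand (IPoIB). */
#include <stdio.h>
#include <unistd.h>
#include <netdb.h>
#include <sys/socket.h>

int main(void)
{
    /* "cluster-head" and port "7" are placeholder values. */
    struct addrinfo hints = { .ai_family = AF_UNSPEC,
                              .ai_socktype = SOCK_STREAM };
    struct addrinfo *res;

    if (getaddrinfo("cluster-head", "7", &hints, &res) != 0) {
        fprintf(stderr, "lookup failed\n");
        return 1;
    }

    int fd = socket(res->ai_family, res->ai_socktype, res->ai_protocol);
    if (fd < 0 || connect(fd, res->ai_addr, res->ai_addrlen) < 0) {
        perror("connect");
        return 1;
    }

    const char msg[] = "ping";
    write(fd, msg, sizeof(msg));   /* checksums may be offloaded to the
                                    * NIC, but TCP state stays in the host */
    close(fd);
    freeaddrinfo(res);
    return 0;
}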

The InfiniBand interface will include all of the InfiniHost III features, including OpenFabrics RDMA (remote direct memory access) support. The Ethernet interface doesn't provide RDMA support.
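
For code that wants RDMA, the OpenFabrics verbs library (libibverbs) is the usual entry point. The sketch below stops at device discovery, enumerating adapters and printing basic attributes; real RDMA transfers build queue pairs and register memory on top of the same ibv_* calls. It assumes a Linux host with libibverbs installed.

/* Enumerate InfiniBand adapters through the OpenFabrics verbs API
 * (libibverbs) and print basic attributes. Link with -libverbs. */
#include <stdio.h>
#include <infiniband/verbs.h>

int main(void)
{
    int num;
    struct ibv_device **list = ibv_get_device_list(&num);
    if (!list) {
        perror("ibv_get_device_list");
        return 1;
    }

    for (int i = 0; i < num; i++) {
        struct ibv_context *ctx = ibv_open_device(list[i]);
        struct ibv_device_attr attr;

        if (ctx && ibv_query_device(ctx, &attr) == 0)
            printf("%s: %d port(s), max %d queue pairs\n",
                   ibv_get_device_name(list[i]),
                   attr.phys_port_cnt, attr.max_qp);
        if (ctx)
            ibv_close_device(ctx);
    }

    ibv_free_device_list(list);
    return 0;
}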

Some vendors of TOE Ethernet adapters have promised or are delivering RDMA support (see "iSCSI Does 10G Ethernet" at www.electronicdesign.com, ED Online ID 13285). InfiniBand offers other features, such as quality-of-service support and end-node application congestion management.

PRICE AND AVAILABILITY
Single- and dual-port InfiniBand-only adapters are available from Mellanox right now. The mixed Ethernet/InfiniBand adapters will arrive in the first quarter of 2007. Both 1-Gbit/s and 10-Gbit/s Ethernet interfaces will be available. Pricing is expected to be comparable to the InfiniBand adapters.

Mellanox
www.mellanox.com
