Electronic Design

Add 10G Ethernet And InfiniBand, Then Mix Thoroughly

Mellanox’s ConnectX host architecture blends 10G Ethernet and 20-Gbit/s InfiniBand.

Cluster building with InfiniBand is becoming ever more common, but these clusters never operate in isolation: they need a connection to the outside world, and that connection runs Ethernet. With the ConnectX hardware architecture from Mellanox, the two networking fabrics come together (Fig. 1).

The ConnectX hardware interface will find a home in Mellanox's next iteration of host adapter chips, and the same interface will be used for both InfiniBand and the new Ethernet chips. The first chip, which combines Ethernet and InfiniBand interfaces, will target cluster nodes that sit between an Ethernet front end and an InfiniBand back end (Fig. 2).

This approach works well because 10-Gbit/s (10G) Ethernet uses the same serializer-deserializer (SERDES) as InfiniBand. Mellanox implements stateless Ethernet hardware acceleration that delivers significant performance gains with low host overhead, though less than a full TCP/IP offload engine (TOE) provides. Most TOE implementations running at 1 Gbit/s already consume more than twice the power of InfiniBand, which runs significantly faster (40 Gbits/s per port).
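Stateless acceleration of this kind (checksum and segmentation offloads, rather than a full TCP state machine in silicon) is what Linux exposes through `ethtool`. A minimal sketch, assuming a hypothetical interface name `eth0` (substitute your adapter's name):

```shell
# Inspect which stateless offloads the NIC advertises.
ethtool -k eth0

# Enable TCP/UDP checksum offload and TCP segmentation offload (TSO),
# the classic stateless accelerations. The TCP connection state itself
# stays in the host stack, unlike with a full TOE.
ethtool -K eth0 tx on rx on tso on
```

Because the host stack keeps ownership of connection state, stateless offload avoids the power and complexity cost of a TOE while still removing per-packet work from the CPU.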

The InfiniHost III Ex dual-port InfiniBand adapter consumes only 6 W. The stateless approach will use more host resources, but hosts will have cycles to spare because the InfiniBand interface imposes significantly less overhead.

ConnectX is compatible with standard IP-based protocols used with Ethernet, including IP, TCP, UDP, ICMP, FTP, ARP, and SNMP, so it interoperates with third-party 1-Gbit/s and 10-Gbit/s Ethernet products. These protocols work over InfiniBand as well, though it's more efficient to use the OpenFabrics interface.
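This protocol compatibility means applications written to the standard sockets API run unchanged whichever fabric sits underneath. A minimal sketch of that fabric-agnostic code path, using loopback so it is self-contained (over ConnectX, only the interface's IP address would differ):

```python
# A TCP echo exchange over the standard sockets API. Nothing here is
# fabric-specific -- the same code runs over a 10-Gbit/s Ethernet NIC
# or over IP-over-InfiniBand; only the address you bind to changes.
import socket
import threading

def echo_once(server_sock):
    conn, _ = server_sock.accept()
    with conn:
        conn.sendall(conn.recv(1024))  # echo back whatever arrives

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))          # kernel picks a free port
server.listen(1)
port = server.getsockname()[1]
threading.Thread(target=echo_once, args=(server,), daemon=True).start()

client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(("127.0.0.1", port))
client.sendall(b"ping")
reply = client.recv(1024)
client.close()
server.close()
print(reply.decode())  # -> ping
```

The point is that no InfiniBand-specific code is needed for IP traffic; the OpenFabrics interface is an optimization, not a requirement.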

The InfiniBand interface will include all of the InfiniHost III features, including OpenFabrics RDMA (remote direct memory access) support. The Ethernet interface doesn't provide RDMA support.

Some vendors of TOE Ethernet adapters have promised or are delivering RDMA support (see "iSCSI Does 10G Ethernet" at www.electronicdesign.com, ED Online ID 13285). InfiniBand offers other features, such as quality-of-service support and end-node application congestion management.

Single- and dual-port InfiniBand-only adapters are available from Mellanox right now. The mixed Ethernet/InfiniBand adapters will arrive in the first quarter of 2007. Both 1-Gbit/s and 10-Gbit/s Ethernet interfaces will be available. Pricing is expected to be comparable to the InfiniBand adapters.

