Electronic Design

HCA Packs A Price/Performance Punch

This single-chip, 10-Gbit/s InfiniBand host bus adapter costs under $70 and consumes less than 2 W.

Most switch fabrics face an ongoing struggle to reach 10 Gbits/s while keeping power consumption, cost, and overhead in check. On the other hand, there's Mellanox and its third-generation InfiniHost III Lx host channel adapter (HCA). This compact, single-chip solution is smaller than a postage stamp, consumes under 2 W, and costs $69.

The InfiniHost III Lx is a single-port adapter designed to fit on motherboards like those from IWill and Arima. Its 4× InfiniBand link operates at 10 Gbits/s, an ideal bandwidth match for the 4× PCI Express interface on the host side.

Mellanox's MemFree technology exploits the PCI Express interface for direct access to the host's memory. This eliminates the need for dedicated off-chip memory and reduces host overhead. The built-in InfiniBand transceivers mean no additional interface support is needed, further simplifying system design where only one channel is required.

Mellanox also announced a PCI Express adapter card with the InfiniHost III Lx. Dubbed the MHES14-XT, the 4× PCI Express card comes with fiber support of up to 300 m.

InfiniBand had a bad rap in the past, mostly because of hyped expectations. But it now leads the pack. The InfiniHost III Lx is a third-generation device, and its features underscore that point.

Billed as the lowest-cost 10-Gbit/s solution, the InfiniHost III Lx HCA is tough to match on performance or power consumption. Slower Gigabit Ethernet runs around $40/port. The complementary InfiniScale III 24-port 4× switch chip from Mellanox is priced at $30/port, putting InfiniBand's total cost per port under $100.
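The per-port arithmetic behind that under-$100 figure can be sketched as follows; the prices come from the article, but the function name is illustrative:

```python
# Per-port cost of a 4x InfiniBand fabric, using the article's figures.
HCA_PRICE = 69.00          # InfiniHost III Lx adapter, one per node port
SWITCH_PORT_PRICE = 30.00  # InfiniScale III 24-port switch, per port

def infiniband_cost_per_port(hca=HCA_PRICE, switch_port=SWITCH_PORT_PRICE):
    """Total fabric cost attributable to a single end-node port."""
    return hca + switch_port

print(infiniband_cost_per_port())  # 99.0 -- just under the $100/port cited
```

At $99/port, the 10-Gbit/s fabric lands at roughly 2.5× the per-port price of 1-Gbit/s Ethernet for ten times the bandwidth.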

The HCA chip imposes minimal overhead, with latency as low as about 4 µs, beating most Ethernet TCP Offload Engines (TOEs). For instance, at around 4%, the HCA's host utilization is typically half that of the best 1-Gbit/s TOE.

Software support includes all InfiniBand interfaces, including Internet Protocol over InfiniBand (IPoIB), sockets direct protocol (SDP), User Direct Access Programming Library (UDAPL), SCSI RDMA Protocol (SRP), message passing interface (MPI), and of course, remote direct memory access (RDMA). By using IP or sockets, applications can employ the same kind of interface as Ethernet, minimizing programming changes when migrating to InfiniBand.
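The claim that sockets applications carry over unchanged can be illustrated with a minimal sketch: standard sockets code like the following runs identically over Ethernet, IPoIB, or SDP, with only the address (that of the InfiniBand interface) changing. The helper names here are illustrative, not part of any Mellanox API:

```python
import socket
import threading

def start_echo_server(host="127.0.0.1"):
    """Start a one-shot TCP echo server; returns the port it listens on.

    Over IPoIB, the only change would be binding to the InfiniBand
    interface's IP address instead of a loopback or Ethernet one.
    """
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind((host, 0))  # port 0: let the OS pick a free port
    srv.listen(1)

    def serve_once():
        conn, _ = srv.accept()
        with conn:
            conn.sendall(conn.recv(1024))  # echo one message back
        srv.close()

    threading.Thread(target=serve_once, daemon=True).start()
    return srv.getsockname()[1]

def send_message(host, port, payload):
    """Plain sockets client -- identical code over Ethernet or IPoIB."""
    with socket.create_connection((host, port)) as client:
        client.sendall(payload)
        return client.recv(1024)
```

Calling `send_message("127.0.0.1", port, b"hello")` against the server returns the echoed `b"hello"`; pointing it at an IPoIB address instead requires no code changes, which is the migration story the protocol stack is built around.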

Software is key to InfiniBand support, but the overhead is minimal. There's also an open-source Linux InfiniBand stack and management interface, which further reduces the cost of an InfiniBand solution using the InfiniHost III Lx.

The InfiniHost III Lx is a significant step in InfiniBand's growth. It uses well under half the space and power of its dual-port sibling, the InfiniHost III Ex, suiting the Lx for applications that don't require redundant links. Both support all major platforms, including the Xeon, Opteron, PowerPC, and Sparc, along with major operating systems such as Linux, Windows, HPUX, Solaris, OS-X, and VxWorks.

InfiniBand has made a major splash in supercomputing and is working its way into enterprise blade server solutions. The InfiniHost III Lx should help significantly, lowering the bar for entry-level InfiniBand systems and making the technology a good fit for small clusters.


Performance: 10-Gbit/s 4× InfiniBand link

Host interface: 10-Gbit/s 4× PCI Express

Power requirements: under 2 W

Footprint: 16 by 16 mm

Memory requirements: MemFree technology uses system memory through a PCI Express connection

Price: $69
