Electronic Design

PCI Express Links InfiniBand

Combine two serial bus architectures into a host bridge chip and you get Mellanox's third-generation InfiniHost III EX. This dual-port, 10-Gbit/s InfiniBand host controller links the host processor to an x8 PCI Express interface. It also provides a high-bandwidth link between a host and an InfiniBand switch fabric. The chip will find homes in storage-area networks (SANs) and server clusters. Mellanox is betting that PCI Express, rather than PCI-X 2.0, will become the norm for blade servers.

The InfiniHost III EX nearly doubles the throughput of its PCI-X-based predecessor, the InfiniHost. It delivers 1.6 Gbytes/s per InfiniBand port with very low latency and low protocol overhead. Mellanox streamlined system throughput with its own PCI Express design, making it easier to optimize performance via caching and pipelining. The company claims the InfiniHost III EX should significantly outperform 10-Gbit/s Ethernet, even when the Ethernet adapter includes a TCP/IP Offload Engine (TOE).
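Some back-of-the-envelope arithmetic puts these figures in context. A minimal sketch, assuming the standard 4x InfiniBand link (four lanes at 2.5 Gbits/s for 10 Gbits/s of signaling) and first-generation PCI Express lanes, both using 8b/10b encoding (8 data bits per 10 line bits); the per-direction and full-duplex numbers below are derived from those assumptions, not from vendor data:

```python
def effective_gbytes_per_s(lanes, gbits_per_lane, encoding=8 / 10):
    """Usable payload bandwidth of one link direction, in Gbytes/s.

    Assumes 8b/10b encoding, as used by 4x InfiniBand and
    first-generation PCI Express.
    """
    raw_gbits = lanes * gbits_per_lane      # raw signaling rate
    return raw_gbits * encoding / 8         # strip coding overhead, bits -> bytes

# 4x InfiniBand: 4 lanes x 2.5 Gbits/s = 10 Gbits/s signaling
ib_4x = effective_gbytes_per_s(lanes=4, gbits_per_lane=2.5)

# x8 PCI Express (first generation): 8 lanes x 2.5 Gbits/s
pcie_x8 = effective_gbytes_per_s(lanes=8, gbits_per_lane=2.5)

print(f"4x InfiniBand: {ib_4x} Gbytes/s per direction, "
      f"{2 * ib_4x} Gbytes/s full duplex")
print(f"x8 PCI Express: {pcie_x8} Gbytes/s per direction")
```

On these assumptions a 4x port carries 1.0 Gbyte/s per direction, or 2.0 Gbytes/s full duplex, so the quoted 1.6 Gbytes/s per port sits plausibly below the full-duplex ceiling once protocol overhead is accounted for, and an x8 PCI Express interface (2.0 Gbytes/s per direction) has the headroom to feed both ports.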

The controller offers an optional 167-MHz DDR memory interface with ECC, supporting up to 4 Gbytes of local memory. It also can operate in a memory-free mode, though this requires new device drivers. Local memory can be used for caching, pre-fetching, and enhanced I/O operations.

Housed in a 27- by 27-mm package, the InfiniHost III EX draws less than 5 W. The chip costs $231, or under $155/port. A low-profile PCI Express card is sampling with up to 512 Mbytes of on-board RAM.

Mellanox Technologies Inc.
