Electronic Design

InfiniBand Hits 10M Messages/s

Message bandwidth requirements grow faster than the number of processors, so large clusters of multicore processors need fast mesh interconnects like InfiniBand. PathScale's 10X-MR meets that challenge by linking InfiniBand with PCI Express. The key to its success is a high message rate with low host overhead, achieved without the sophisticated protocol engines found in high-end Ethernet solutions.

The $795 10X-MR PCI Express adapter implements a cut-through architecture. It supports PathScale's high-performance MPI stack, which can sustain a rate of 10 million messages/s. The adapter also supports the industry-standard OpenIB stack. Both work with off-the-shelf InfiniBand switches. The MPI stack reduces host overhead via a connectionless architecture that avoids the queue pairs used with OpenIB.
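To put the headline rate in perspective, sustaining 10 million messages/s leaves only about 100 ns of host and wire time per message, which is why low per-message host overhead matters more than raw bandwidth for small messages. A minimal sanity check (the 10M messages/s rate is from the article; the 8-byte payload size is an illustrative assumption):

```python
# Per-message time budget implied by the quoted message rate.
MESSAGE_RATE = 10_000_000          # messages per second (from the article)
budget_ns = 1e9 / MESSAGE_RATE     # nanoseconds available per message

# For tiny messages (8 bytes assumed here for illustration), the payload
# bandwidth is modest -- the bottleneck is per-message overhead, not wire speed.
PAYLOAD_BYTES = 8                  # hypothetical small-message payload
payload_mb_s = MESSAGE_RATE * PAYLOAD_BYTES / 1e6

print(f"Per-message budget: {budget_ns:.0f} ns")      # 100 ns
print(f"Payload bandwidth:  {payload_mb_s:.0f} MB/s") # 80 MB/s
```

The arithmetic makes the design point concrete: at small message sizes, an adapter that shaves host overhead per message wins over one that merely offers a faster link.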

See Associated Figure

