InfiniBand Hits 10M Messages/s

April 27, 2006

Message bandwidth requirements grow more quickly than the number of processors, so large clusters of multicore processors need fast mesh interconnects like InfiniBand. PathScale's 10X-MR meets that challenge, linking InfiniBand with PCI Express. The key to its success is a high message rate with low host overhead, without the sophisticated protocol engines found in high-end Ethernet solutions.

The $795 10X-MR PCI Express adapter implements a cut-through architecture. It supports PathScale's high-performance MPI stack, which sustains a rate of 10 million messages/s, as well as the industry-standard OpenIB stack. Both work with off-the-shelf InfiniBand switches. The MPI stack reduces host overhead through a connectionless architecture that avoids the queue pairs used with OpenIB.
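To put the 10-million-messages/s figure in context, message rate is typically measured with a windowed non-blocking send/receive microbenchmark over standard MPI. The sketch below is illustrative only, not PathScale's benchmark or MPI stack; the payload size, window depth, and iteration count are arbitrary assumptions.

```c
/*
 * Minimal MPI message-rate microbenchmark (illustrative sketch only).
 * Rank 0 streams a window of small non-blocking sends to rank 1, which
 * posts matching receives; the rate is total messages / elapsed time.
 *
 * Compile: mpicc -O2 msgrate.c -o msgrate
 * Run:     mpirun -np 2 ./msgrate
 */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define MSG_SIZE   8        /* tiny payload: measures message rate, not bandwidth */
#define WINDOW     64       /* messages kept in flight per iteration              */
#define ITERATIONS 10000

int main(int argc, char **argv)
{
    int rank, size;
    char sbuf[MSG_SIZE], rbuf[MSG_SIZE];
    MPI_Request reqs[WINDOW];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    if (size != 2) {
        if (rank == 0) fprintf(stderr, "run with exactly 2 ranks\n");
        MPI_Finalize();
        return EXIT_FAILURE;
    }
    memset(sbuf, 0, MSG_SIZE);

    MPI_Barrier(MPI_COMM_WORLD);
    double t0 = MPI_Wtime();

    for (int i = 0; i < ITERATIONS; i++) {
        if (rank == 0) {
            for (int w = 0; w < WINDOW; w++)      /* sender: post a window of sends */
                MPI_Isend(sbuf, MSG_SIZE, MPI_CHAR, 1, 0,
                          MPI_COMM_WORLD, &reqs[w]);
        } else {
            for (int w = 0; w < WINDOW; w++)      /* receiver: post matching receives */
                MPI_Irecv(rbuf, MSG_SIZE, MPI_CHAR, 0, 0,
                          MPI_COMM_WORLD, &reqs[w]);
        }
        MPI_Waitall(WINDOW, reqs, MPI_STATUSES_IGNORE);
    }

    MPI_Barrier(MPI_COMM_WORLD);
    double elapsed = MPI_Wtime() - t0;

    if (rank == 0) {
        double msgs = (double)ITERATIONS * WINDOW;
        printf("%.0f messages in %.3f s -> %.2f million messages/s\n",
               msgs, elapsed, msgs / elapsed / 1e6);
    }

    MPI_Finalize();
    return 0;
}
```

Keeping many small messages in flight per iteration is what exposes per-message host overhead, which is where a connectionless MPI stack would be expected to show its advantage over a queue-pair-based path.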


PathScale
www.pathscale.com
