Mellanox is using its BridgeX InfiniBand chip as the centerpiece for a thrust into the enterprise computing space where large, networked storage is the norm. Technologies such as Fibre Channel and Ethernet-based iSCSI typically sit between the storage devices and the rest of the network.
Ethernet is often used as the lone interconnect from a blade server. But even 10-Gigabit (10G) Ethernet sites usually have a mix of storage devices or prefer technologies like Fibre Channel, which requires Ethernet-to-Fibre Channel tunneling support. This can lead to a complicated mix of networking switches and bridges (Fig. 1). InfiniBand can provide similar capabilities, including tunneling Ethernet.
InfiniBand Can Do It All
InfiniBand can support all of these protocols while providing a 40-Gbit/s backbone that’s faster than 10G Ethernet. InfiniBand and its tunneling are also more efficient. The result is a network that employs InfiniBand everywhere except the storage endpoints (Fig. 2).
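A quick back-of-the-envelope comparison helps put the "faster than 10G Ethernet" claim in context. The sketch below uses commonly published encoding figures (8b/10b for 40-Gbit/s QDR InfiniBand, 64b/66b for 10G Ethernet) rather than anything from the article, so the exact data rates should be treated as illustrative assumptions:

```python
# Back-of-the-envelope comparison of raw signaling rate versus usable data
# rate after line encoding. The figures are commonly published values for
# QDR InfiniBand (4x, 8b/10b encoding) and 10G Ethernet (64b/66b encoding);
# they are illustrative assumptions, not numbers from the BridgeX data sheet.

LINKS = {
    # name: (signaling rate in Gbit/s, encoding efficiency)
    "InfiniBand QDR 4x": (40.0, 8 / 10),
    "10G Ethernet": (10.3125, 64 / 66),
}

for name, (raw_gbps, efficiency) in LINKS.items():
    data_gbps = raw_gbps * efficiency
    print(f"{name:>18}: {raw_gbps:7.3f} Gbit/s raw -> {data_gbps:5.2f} Gbit/s data")
```

Even after encoding overhead, the 40-Gbit/s InfiniBand link delivers roughly 32 Gbit/s of data versus about 10 Gbit/s for 10G Ethernet.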
The BridgeX chip makes the difference. The same chip can handle two channels of 40-Gbit/s InfiniBand or three channels of 10G Ethernet on one side, and four channels of 2/4/8-Gbit/s Fibre Channel plus three channels of 10G Ethernet on the other.
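To make that port mix easier to picture, the following sketch simply tallies the raw aggregate bandwidth implied by those channel counts; the totals are plain arithmetic, not figures from Mellanox:

```python
# Sketch tallying the raw aggregate bandwidth implied by the port counts
# described above. Totals are simple count x rate and ignore encoding and
# protocol overhead; they are illustrative, not BridgeX data-sheet figures.

def aggregate_gbps(port_groups):
    """Sum raw bandwidth (Gbit/s) over (count, rate_gbps) port groups."""
    return sum(count * rate for count, rate in port_groups)

# Network-facing side: either two 40-Gbit/s InfiniBand channels
# or three 10G Ethernet channels.
ib_option = [(2, 40.0)]
eth_option = [(3, 10.0)]

# Gateway side: four 2/4/8-Gbit/s Fibre Channel channels (8G shown)
# plus three 10G Ethernet channels.
gateway_side = [(4, 8.0), (3, 10.0)]

print("Network side, InfiniBand option:", aggregate_gbps(ib_option), "Gbit/s")    # 80.0
print("Network side, Ethernet option:  ", aggregate_gbps(eth_option), "Gbit/s")   # 30.0
print("Gateway side, FC + Ethernet:    ", aggregate_gbps(gateway_side), "Gbit/s") # 62.0
```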
The host processor requires only a ConnectX host bus adapter (HBA) to handle any protocol, from InfiniBand to Ethernet to Fibre Channel. Using InfiniBand as the backbone can simplify management in addition to providing a very high-speed system. It is also more power-efficient than Ethernet.
The BridgeX incorporates the physical layers (PHYs), reducing the bill of materials. It supports XAUI, XFI/SFP+, and 10GBase-KR. Of course, this approach makes migration to native InfiniBand storage easier. Mellanox will sell the BridgeX chip to third parties in addition to offering its own InfiniBand-to-Ethernet and InfiniBand-to-Fibre Channel bridge boxes. Integrated management is also part of the package.