InfiniBand Promises Greater Speed, Scalability For Servers And Clusters

Oct. 2, 2000

If bus architectures such as PCI and PCI-X could deliver the higher I/O clock rates and scalability essential for 21st century networks, an I/O architecture known as InfiniBand never would have been hatched. In the early 1990s, 66-MHz processors and 10-Mbit/s networks were considered state-of-the-art. Servers back then cranked out 54 transactions/min., a fairly paltry figure by today's standards. PCI and PCI-X were modeled to meet the needs of the last decade. With their initial 133-Mbyte/s bandwidth and subsequent increases to 1.066 Gbytes/s, they did.

Look at where we are now. This March, Intel announced a Pentium III Xeon processor that runs at 1 GHz. As a result, servers have sped past 135,000 transactions/min. It's no wonder that an up-to-date I/O architecture has become necessary. That need has given rise to a new I/O architecture, InfiniBand, whose rollout is under way. Though the specification is still a fledgling one, version 1.0 should launch this month.

InfiniBand is a network approach, rather than a bus approach, to I/O architecture (see the figure). The key components in an InfiniBand network are the switches, the host channel adapters (HCAs), and the target channel adapters (TCAs).

The switch is a relatively simple device. It forwards dual-simplex, 8b/10b-encoded packets based on two fields they carry: a destination local ID and a service level. Messages of up to 2 Gbytes are segmented into packets of 256 to 4096 bytes, depending upon the application.

If a reliable protocol is selected, every packet is guaranteed to be delivered once and only once, in order, and uncorrupted; users are notified if that's not possible. Packets are then reassembled at the far end to complete the transaction.
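
As a rough illustration of this segment-and-reassemble flow, the Python sketch below splits a message into MTU-sized packets tagged with sequence numbers and rebuilds it in order at the receiver. The function names and the use of a simple sequence number are illustrative assumptions, not details drawn from the InfiniBand specification.

MTU = 4096                                # bytes; InfiniBand MTUs run 256 to 4096

def segment(message, mtu=MTU):
    """Split a message into (sequence number, payload) packets."""
    return [(seq, message[off:off + mtu])
            for seq, off in enumerate(range(0, len(message), mtu))]

def reassemble(packets):
    """Sort packets back into order and rebuild the original message."""
    return b"".join(payload for _, payload in sorted(packets))

message = bytes(10_000)                   # stand-in for a much larger transfer
packets = segment(message)
assert reassemble(reversed(packets)) == message   # order restored at the far end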

Aggregate bandwidths are 500 Mbytes/s, 2 Gbytes/s, and 6 Gbytes/s with a 2.5-Gbit/s signaling rate. The InfiniBand Trade Association (IBTA) hopes to eventually boost performance with a higher signaling rate. Bandwidths scale with how many links are aggregated: by 1, by 4, or by 12. Serial links are traditionally described in bits/s, whereas parallel links are described in bytes/s.

A 1-wide InfiniBand link drives 2.5 Gbits/s (250 Mbytes/s of data) in each direction. A 4-wide link delivers 10 Gbits/s (1 Gbyte/s per direction), and a 12-wide link delivers 30 Gbits/s (3 Gbytes/s per direction). From a physical perspective, links may be copper or optical. They will be able to drive 20 in. of printed wiring, or 17 m of copper cable, while maintaining a bit error rate of 1 × 10^-12 or better.
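
These figures follow directly from the signaling rate, the 8b/10b encoding (10 line bits per data byte), and the link width. The short calculation below is a sketch, not anything from the specification, and it reproduces the per-direction and aggregate numbers above.

SIGNALING_RATE = 2.5e9                    # bits/s per lane, each direction

for width in (1, 4, 12):
    line_rate = width * SIGNALING_RATE    # raw line rate, one direction
    data_rate = line_rate / 10            # usable bytes/s after 8b/10b coding
    print("%2d-wide: %4.1f Gbits/s per direction, "
          "%.2f Gbytes/s of data per direction, "
          "%.1f Gbytes/s aggregate"
          % (width, line_rate / 1e9, data_rate / 1e9, 2 * data_rate / 1e9))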

Room For Innovation

On the software side, the IBTA wanted to leave plenty of room for applications vendors to innovate. Instead of defining an absolute application programming interface (API), the IBTA created an abstraction called "Verbs," which defines the functionality that an HCA must provide. Application vendors then know what services will be supported, yet they remain free to develop individual interfaces optimized for a particular operating system.
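
Because Verbs specifies required functionality rather than a concrete API, an operating-system or middleware vendor might surface that functionality with an interface along the lines of the hypothetical Python sketch below. Every class and method name here is invented for illustration; only the underlying concepts (queue pairs, registered memory, posted work requests, completions) come from the InfiniBand model.

from abc import ABC, abstractmethod

class HostChannelAdapter(ABC):
    """Functionality an HCA must provide; method names are invented."""

    @abstractmethod
    def create_queue_pair(self, send_depth, recv_depth):
        """Allocate a send/receive queue pair for one connection."""

    @abstractmethod
    def register_memory(self, buffer):
        """Pin a buffer so the adapter can move data in and out of it."""

    @abstractmethod
    def post_send(self, queue_pair, region, length):
        """Queue a work request to transmit data on a queue pair."""

    @abstractmethod
    def poll_completions(self, queue_pair):
        """Report which previously posted work requests have finished."""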

From a management standpoint, one device must emerge as the subnet manager. It can reside in a node or in an HCA, or it may be integrated as part of a switch. The subnet manager is responsible for assuring connectivity throughout the fabric, which it does by sending management datagrams. Every InfiniBand device that participates on the fabric has a subnet management agent.

Also, InfiniBand supports unannounced hot-swapping. Designers can just walk up to a module and pull it out. The subnet manager will automatically detect this event.
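
One way to picture the subnet manager's job is as a periodic sweep of the fabric: query each port's subnet management agent with a management datagram, assign local IDs to newly discovered devices, and retire devices that stop responding, which is how an unannounced hot-swap would be noticed. The Python sketch below is an illustrative assumption of that flow; the message names and bookkeeping are invented, not taken from the specification.

known_devices = {}      # port address -> assigned local ID (LID)
next_lid = 1

def sweep(fabric_ports, send_mad):
    """One management sweep over every port reachable through the switches."""
    global next_lid
    alive = set()
    for port in fabric_ports:
        reply = send_mad(port, "Get(NodeInfo)")        # poll the port's agent
        if reply is None:
            continue                                   # nothing answered here
        alive.add(port)
        if port not in known_devices:                  # newly inserted module
            known_devices[port] = next_lid
            send_mad(port, "Set(LID=%d)" % next_lid)   # hand out a local ID
            next_lid += 1
    for port in set(known_devices) - alive:            # module was pulled out
        del known_devices[port]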

The IBTA, comprising over 180 companies, came into being in August 1999 as a confluence of two earlier groups: the Next Generation I/O (NGIO) effort led by Intel, and the Future I/O effort led by IBM, Compaq, and Hewlett-Packard.

The problems that brought InfiniBand into being stem largely from the requirements of servers and clusters of servers, sometimes dubbed "farms." Bus architectures lack sufficient headroom: their capabilities are strained by the voracious demand for data transfers, particularly from the Internet, and by the higher I/O data rates that demand requires.

"There is a real crunch at the data centers," says Jean S. Bozman, research director, Commercial Systems and Servers, International Data Corp., Framingham, Mass. Bozman spoke at August's Intel Development Forum in San Jose, Calif.

"A high-speed interconnect such as Infiniband is going to promote flexibility in computer system design. When we have these new, faster links we will be free to move the server pieces farther apart—or arrange them in a little different way. Whereas before it has all been in the confines of a single cabinet or box," according to Bozman. "It will also put an end to fork-lift upgrades," she adds, referring to the practice of removing and replacing servers, en masse, rather than upgrading existing servers. "In fact, expandable servers will enable capacity upgrades on-the-fly."

OEMs looking to participate in the development of InfiniBand-based products have a number of opportunities. Bozman advises vendors to identify early on specific market segments that they believe will adopt InfiniBand. Then, they need to develop plans for phased InfiniBand rollouts by working with software vendors to make sure key applications use InfiniBand APIs.

"Building the InfiniBand infrastructure is going to be kind of a layered approach with, at first, a lot of the technology coming in at the edges," Bozman predicts. She sees InfiniBand arriving in concert with the move in servers from 32-bit computing to 64-bit computing, pointing out that the 64-bit versions will support both 64-bit as well as the large inventory of 32-bit applications now in place.

Early InfiniBand components will most likely be bridge chips and add-on cards for connecting with existing products via existing I/O. Expanded support for clustering and server farms will arrive in 2001, tying together existing systems with InfiniBand-based clustering. Full-blown symmetric multiprocessing (SMP) servers embodying InfiniBand will probably emerge further down the line.

Some questions remain to be answered, though. For example, it's unclear whether InfiniBand will complement or compete with bus architectures such as the well-entrenched PCI and its successor, PCI-X, which is less than a year old.

For more details, go to the group's web site at www.infinibandta.org.
