Board and system interconnects have migrated from parallel buses to high-speed serial technologies. Mainstays like Peripheral Component Interconnect (PCI), Integrated Drive Electronics (IDE), and Small Computer System Interface (SCSI) remain. Yet new systems are more likely to use PCI Express (PCIe), Universal Serial Bus (USB), Serial ATA (SATA), or Serial Attached SCSI (SAS).
Many standards are available, but they tend to be complementary (see the table). A typical system can use two or more standards (see the figure). For example, HyperTransport may be used to connect CPUs together while PCI Express is used with peripheral interfaces. Storage is typically linked via SATA or SAS.
The interconnect taxonomy can get rather complicated because of the different approaches designers use. Most standards support a range of speeds, and some are scalable in width as well. Packet size, protocols, and latency also come into play. Some, like PCI Express and SAS, are based on the older PCI and SCSI standards.
While that compatibility provided a clear upgrade path, the main driving factor for the new interconnects was higher performance. One way to get that is to move to high-speed serial transfers. The other is to scale the width of the interconnects.
HyperTransport, PCI Express, Serial RapidIO (SRIO), and InfiniBand can use different widths that normally range from 1 to 32 lanes, based on the technology. The source and destination handle conversion to and from a parallel data stream, such as 32-bit words into a stream of packets. They all can use the same type of serializer/deserializer (SERDES), but the handshaking, timing, and protocols differ between standards.
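The parallel-to-serial conversion a SERDES performs can be sketched conceptually. This is only an illustration of the idea, not any real SERDES implementation: actual devices also handle line encoding, clock recovery, and lane alignment, all omitted here.

```python
# Conceptual serializer/deserializer: 32-bit words become a bit
# stream and are reassembled at the far end. Bit order (LSB first)
# is an arbitrary choice for this sketch.

def serialize(words, width=32):
    """Flatten words into a flat list of bits, LSB first."""
    return [(w >> i) & 1 for w in words for i in range(width)]

def deserialize(bits, width=32):
    """Reassemble the bit stream into words of the given width."""
    return [sum(b << i for i, b in enumerate(bits[n:n + width]))
            for n in range(0, len(bits), width)]

words = [0xDEADBEEF, 0x12345678]
assert deserialize(serialize(words)) == words  # round trip is lossless
```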
All of the high-speed serial interfaces transmit data using differential pairs, doubling the number of wires per signal. However, differential signaling also significantly increases noise immunity. This is critical at the gigahertz rates where these interfaces run. It's a quantum leap from the slower, parallel bus speeds of yore, but standard design rules and enhanced EDA tools are addressing the design issues.
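Why differential pairs buy noise immunity can be shown with a toy numeric model: noise that couples equally onto both wires of the pair cancels when the receiver subtracts one wire from the other. All voltages here are invented for illustration, not electrical specifications.

```python
# Toy model of differential signaling: common-mode noise hits both
# wires equally, so the receiver's subtraction cancels it out.

def transmit(bit, swing=0.5):
    """Drive the pair with complementary voltages (D+, D-)."""
    v = swing if bit else -swing
    return (+v, -v)

def receive(v_plus, v_minus):
    """Recover the bit from the voltage difference alone."""
    return 1 if (v_plus - v_minus) > 0 else 0

noise = 0.3  # common-mode noise coupled onto both wires
for bit in (0, 1):
    dp, dn = transmit(bit)
    # The same noise on both wires leaves the difference unchanged.
    assert receive(dp + noise, dn + noise) == bit
```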
The major increase in speed allows a single serial lane to exceed the throughput of a much wider parallel bus. For instance, a x1 PCI Express link offers more throughput than a 32-bit parallel PCI bus. New standards with higher-speed signaling provide corresponding throughput increases without a change in the number of connections.
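The arithmetic behind that comparison is straightforward. Using nominal figures for 32-bit/33-MHz PCI and a first-generation 2.5-Gtransfer/s PCIe lane with 8b/10b encoding, and ignoring protocol and packet overhead:

```python
# Back-of-the-envelope throughput comparison (nominal figures only).

def pci_throughput_mbps(bus_width_bits=32, clock_mhz=33):
    """Peak parallel PCI throughput in MB/s, shared by all devices."""
    return bus_width_bits / 8 * clock_mhz  # 4 bytes x 33 MHz = 132 MB/s

def pcie_lane_throughput_mbps(line_rate_gtps=2.5, encoding=8 / 10):
    """Per-direction x1 PCIe Gen1 throughput in MB/s after 8b/10b."""
    return line_rate_gtps * 1000 * encoding / 8  # 2000 Mb/s / 8 = 250 MB/s
```

Because each PCIe lane is full duplex, the 250 MB/s applies in each direction simultaneously, while PCI's 132 MB/s is shared across the whole bus and both directions.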
HyperTransport (HT) is the odd man out when it comes to clocking. It uses an explicit, source-synchronous clock (or clocks), which simplifies implementation and increases overall throughput, but at the expense of more lines. HyperTransport is normally implemented in eight- or 16-lane widths.
PCI Express, Serial RapidIO, and InfiniBand use independent serial lanes with embedded 8b/10b encoding (8 data bits carried in 10 signal bits). This 20% throughput penalty is made up by running the serial links at very high speeds. Making the lanes independent means that connections can be more robust than in a parallel scheme, where all signals must stay in sync with each other. The difference becomes more significant as speed and distance increase.
The SERDES can easily handle synchronization with respect to their own lane of traffic, and data streams from multiple lanes are synchronized and merged within the receiver. This makes the design of the SERDES, transmitters, and receivers a critical and complex task. Yet users of the technology simply deal with the data streams at the endpoints.
One commonality among these interfaces is their full-duplex operation. At a low level, the systems are implemented as independent transmitter/receiver pairs. Higher-level protocols built on this hardware perform handshaking.
Ethernet fits in a category by itself for a variety of reasons, but the SERDES used by Gigabit Ethernet and other standards are the same as those found in the prior standards. The big difference is that Ethernet comes in fixed widths and cannot be scaled incrementally the way PCI Express or Serial RapidIO can.
Still, Ethernet will be found inside the box connecting boards together, just as InfiniBand and Serial RapidIO are. These two standards have lower overhead and latency than Ethernet, but Ethernet has familiarity on its side. Backplane standards like AdvancedTCA include all of the high-speed serial standards, though Gigabit Ethernet was the first to be implemented.
Ethernet often is integrated directly inside microcontrollers, with PCI Express being a common interface to standalone chips and controllers. Transmission Control Protocol/Internet Protocol (TCP/IP) is often the high-level protocol riding atop Ethernet. This enables Ethernet to coexist nicely with InfiniBand and RapidIO, both of which support TCP/IP as well.
The Ethernet family has a wide range of implementations, possibly more than any other interconnect standard. Media and speed tend to separate these standards (see "Speed Shifting").
TO THE POINT OF PROTOCOLS
One common aspect for all of these serial interfaces is that, for the most part, they're all point-to-point. Point-to-point connections and protocols are easier to route than parallel bus architectures, and all of these standards can place a hub, switch, or switch fabric between two nodes.
In many instances, no switch will be used. SATA drives are normally connected directly to their controller. It's more common for SAS and Fibre Channel drives to run through switching networks. At the other extreme is Ethernet, which spans the Internet with millions of switches. In between is PCI Express, where at least one switch is very common.
On the plus side, switching and switch fabrics often provide redundancy, higher throughput, and easy expansion. On the minus side, there is added latency and complexity. Ethernet, PCI Express, InfiniBand, Serial RapidIO, and Fibre Channel all offer cut-through, where a packet moving into a switch can be sent out before the entire packet is received, assuming the path isn't already busy.
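The latency advantage of cut-through over store-and-forward comes down to serialization delay: the store-and-forward switch must clock in the whole packet before forwarding, while cut-through forwards as soon as the header has arrived. The figures below are hypothetical, chosen only to show the scale of the difference at one hop.

```python
# One-hop serialization delay, cut-through vs. store-and-forward.

def store_and_forward_latency_us(packet_bytes, link_gbps):
    """Delay until forwarding can start: the full packet, in microseconds."""
    return packet_bytes * 8 / (link_gbps * 1000)

def cut_through_latency_us(header_bytes, link_gbps):
    """Delay until forwarding can start: just the header, in microseconds."""
    return header_bytes * 8 / (link_gbps * 1000)

# Example: 1500-byte packet, 14-byte header, 1-Gb/s link.
# Store-and-forward waits 12 us per hop; cut-through only 0.112 us.
```

Per-hop savings like this compound across every switch in the path, which is why low-latency fabrics lean on cut-through.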
With switches comes the issue of routing. Serial RapidIO uses source-based routing at the low level. Address-based routing is also common in addition to identifier-based routing. TCP/IP routing typically occurs at a higher level in all but Ethernet switches, where TCP/IP is essentially a native protocol.
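The mechanics of source-based routing can be sketched in a few lines: the sender embeds the entire path in the packet, and each switch simply consumes the next output-port entry rather than performing a lookup. The port numbers and packet layout here are invented purely for illustration and do not reflect any standard's actual packet format.

```python
# Sketch of source-based routing: the route travels with the packet.

def switch_forward(packet):
    """One hop: pop the next output port from the embedded route."""
    port = packet["route"].pop(0)
    return port, packet

packet = {"route": [3, 1, 7], "payload": b"hello"}
hops = []
while packet["route"]:
    port, packet = switch_forward(packet)
    hops.append(port)

print(hops)  # [3, 1, 7] -- exactly the path the source chose
```

The trade-off versus address- or identifier-based routing is that switches stay simple (no routing tables), but the source must know the topology up front.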
Just to make the traffic more interesting, many of these interfaces support features like virtual lanes and remote DMA. Virtual lanes differ from I/O virtualization, as they permit the segregation and possibly prioritization of traffic. Of course, switching is more common in peer-to-peer environments like Ethernet, but host-based architectures like USB and PCI Express are restricted only by imagination. In the case of USB, bridges like Standard Microsystems' (SMSC) SB2524 link multiple USB hosts to multiple USB devices (see "USB Branches Out").
PCI Express defaults to a single root tree with a host that controls devices at the leaves. A technique called reverse bridging allows multiple hosts to exist within a tree. The hosts must operate cooperatively, though, since any host can stomp all over the other devices. More advanced standards like PCI Express I/O Virtualization (IOV) tackle these issues.
For more, see "I/O Virtualization".