Since its launch in the early 1980s, the VME architecture has perpetually evolved to keep pace with improvements in microprocessor and communications technology. In the early 1980s the world was learning about PCs that used 5MHz CPUs, 640KB RAM, and the ISA bus. Now we’re seeing dual-core, 2GHz CPUs with 4GB RAM and PCI Express.
As CPU capabilities grow, so must the communications between them, particularly in backplane systems that use several intelligent boards. Parallel-bus technologies have been stretched to the maximum. What, then, is the choice for the future? Serial interconnects with ever-increasing bandwidth. In line with this change, VME has introduced three major standards: VITA 31, VITA 41, and now VITA 46.
THE CHANGE TO SERIAL SWITCHED FABRICS
VME has moved from the 20Mbyte/s backplane speed when it began, through 40 and 80, to the 320Mbytes/s offered by 2eSST technology. Each step pushes the limits a little further and tightens backplane signal timings, so skew and signal-quality effects increasingly constrain further enhancements. Serial interconnects reduce the scale of these problems, allowing improved speeds (now in excess of 1Gbyte/s) with the promise of significant further increases. This is countered somewhat by the greater latencies and overheads associated with serial protocols.
THE GOOD AND THE BAD OF SERIAL INTERCONNECTS
The downside of changing to serial interconnects concerns some loss of determinism when hardware- and software-protocol overheads come into play. Although arbitration overheads occur in a parallel-bus system such as VME, there’s a greater knowledge of timings. Techniques exist to minimise the effects of these overheads with serial protocols, but they’re less than perfect solutions. In many cases, the interconnect’s sheer speed may overcome some of the limitations.
Until now, use of serial interconnects posed another problem: devices at the endpoints were still using inherently parallel connections. This is changing rapidly, though, with the serial interconnect being integrated into many new devices. The transition from PCI bus to PCI Express is one example of this progression.
Accepting the need for serial interconnects is only the start. With Ethernet, PCI Express, and RapidIO well established, and others like Infiniband and StarFabric waiting in the wings, the choice of protocol is a significant one for the board vendors and their customers.
Although Ethernet is relatively slow (up to 2Gbits/s, full-duplex) and has relatively high overhead (making it less than ideal for low-volume data transfers), it probably remains the most cost-effective, flexible, and best-understood protocol for general use.
PCI Express offers much-improved bandwidth compared to Ethernet by supporting multiple transfer lanes (theoretically up to 64Gbits/s for a x16 interface), but needs more complex switching protocols. Serial RapidIO is similar to PCI Express, though its overheads are slightly reduced and its peak bandwidth is lower at around 20Gbits/s for a x4 configuration. Infiniband, despite being the driving force for some of the technology advances, is still somewhat specialised and limited to a small number of suppliers.
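To put these figures on a common footing, the raw lane rates can be converted into usable payload bandwidth. The sketch below is illustrative only: it assumes first-generation 2.5-Gbaud lanes with 8b/10b line coding (used by both PCI Express and Serial RapidIO) and ignores packet and protocol overheads.

```python
# Illustrative per-direction payload bandwidth for serial fabrics.
# Assumes 2.5 Gbaud signalling per lane with 8b/10b line coding
# (8 data bits per 10 line bits, i.e. 80% efficiency).

LINE_RATE_GBPS = 2.5          # raw signalling rate per lane
ENCODING_EFFICIENCY = 0.8     # 8b/10b coding efficiency

def payload_gbps(lanes: int) -> float:
    """Usable data bandwidth, per direction, in Gbits/s."""
    return lanes * LINE_RATE_GBPS * ENCODING_EFFICIENCY

for name, lanes in [("PCI Express x16", 16), ("Serial RapidIO x4", 4)]:
    print(f"{name:18s} {payload_gbps(lanes):5.1f} Gbits/s per direction")
```

Doubling the x16 figure for full-duplex operation gives a peak in the region of the 64Gbits/s quoted for a x16 PCI Express interface.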
All of this explains why many vendors adopted Gigabit Ethernet as their first “fabric of choice.”
The boards in VME systems, as in other types of system, have become increasingly intelligent. That intelligence has changed the nature of inter-board communications. As an example, consider an advanced radar system that uses several VME CPU boards to collect and process imaging satellite data via multiple DSP-based acquisition boards.
Figure 1 shows the way in which these boards used to be connected for this real-world application. The DSP boards acquire data from the satellite receivers, then process the data to filter noise and to create digitised information. The information is then passed via the VME bus to one or more CPUs, normally located in the same VME chassis.
The user interface is provided by an external PC connected via a LAN. Here, the VME bus provides both the control and data planes for the acquisition boards. As a result, in higher-end applications, the number of VME slots used by the DSP boards—together with the total VME bus bandwidth requirements—limits the system’s total processing capacity.
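That capacity limit can be illustrated with a back-of-envelope calculation comparing the aggregate output of the acquisition boards against the shared backplane bandwidth. The figures below (a 320Mbyte/s 2eSST backplane, 20Mbytes/s reserved for control traffic, and 60Mbytes/s of output per DSP board) are assumptions chosen for the sketch, not measurements from the radar system described.

```python
# Back-of-envelope check: how many acquisition boards can a shared
# parallel backplane feed? All figures are illustrative assumptions.

VME_BANDWIDTH_MBPS = 320      # 2eSST peak, Mbytes/s, shared by all boards
CONTROL_OVERHEAD_MBPS = 20    # assumed bandwidth reserved for control traffic
BOARD_OUTPUT_MBPS = 60        # assumed data rate per DSP board

available = VME_BANDWIDTH_MBPS - CONTROL_OVERHEAD_MBPS
max_boards = available // BOARD_OUTPUT_MBPS

print(f"Data bandwidth available: {available} Mbytes/s")
print(f"Maximum DSP boards on the shared bus: {max_boards}")
```

Under these assumptions the shared bus saturates at five acquisition boards, however many slots the chassis physically provides; moving the data plane to a switched serial fabric removes that shared-medium ceiling.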
Now consider Figure 2, where the DSP boards have been replaced by FPGA PMC boards on VITA 41 baseboards, and the main CPU board is replaced by a much more powerful unit using a very fast dual-core processor (Concurrent Technologies’ VX 405/04x). The data processed by the FPGA boards is now passed via backplane Gigabit Ethernet connections to a CPU board in the same chassis.
The VME bus remains in use as the control plane, while the VITA 41 serial interconnect is used as the data plane. The system connects via the chassis switch boards to the LAN, which hosts the PCs providing the user interface. In this system, flexibility is greatly improved by utilising the in-system LAN to provide both the internal and external connectivity. Thus, CPUs or more FPGA engines can be added either inside the chassis or in an entirely separate chassis. Splitting the control and data planes for the FPGA boards also boosts responsiveness to control functions by minimising data-transfer interference.
In adopting a serial interconnect, the traditional backplane “bus” is no longer present. Several interconnection topologies are possible, but the most common are the star and dual-star configurations (Fig. 3). The obvious complication with these topologies is the need to include one or two additional switch boards, which adds cost and, of course, lowers system MTBF. However, a dual-star configuration also provides multiple interconnection paths between boards, potentially improving overall system reliability. Another topology, also in Figure 3, is the mesh. It allows boards to connect directly to each other without using switch boards. This reduces cost, but makes implementing multiple interconnection paths more complex.
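The trade-off between these topologies can be made concrete by counting backplane links. The sketch below is a simple counting model, not a VITA 41 sizing tool: star and dual-star need one or two switch boards but only one or two links per payload board, while a full mesh needs a direct link between every pair of boards.

```python
# Link counts for common backplane topologies with n payload boards.

def star_links(n: int) -> int:
    return n                      # one link per board to the single switch

def dual_star_links(n: int) -> int:
    return 2 * n                  # one link per board to each of two switches

def mesh_links(n: int) -> int:
    return n * (n - 1) // 2       # every pair of boards directly connected

for n in (4, 8, 16):
    print(f"{n:2d} boards: star {star_links(n):3d}, "
          f"dual-star {dual_star_links(n):3d}, mesh {mesh_links(n):3d}")
```

For small systems the mesh is attractive, since it avoids the switch boards entirely; but its link count grows quadratically with board count, which is one reason providing redundant paths becomes harder to manage as a meshed system grows.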