High-performance microcontrollers (MCUs) crave bandwidth, which calls for moving new interconnect technologies like PCI Express, HyperTransport, Serial RapidIO, and Gigabit Ethernet on-chip. What will that mean in the long run? Enhanced performance. Reduced latency. Fewer support chips.
The problem of large pin counts has been countered with the switch to scalable, packet-oriented interfaces. These interfaces employ high-speed serializer/deserializers (SERDES), which turn out to be both a benefit and a technical challenge when adding interfaces on-chip.
The challenge isn't as great for designers of 64-bit MCUs, who are well-versed in high-speed interfaces, as it is for those dealing with mid-range 32-bit MCUs. Still, technologies like PCI Express and Serial RapidIO will likely find their way into this computing space next year.
Part of the problem is that these new serial technologies gain their throughput numbers by pumping out data faster than the processor can clock. For example, each PCI Express lane runs at 2.5 Gbits/s, while most embedded processors top out at 1 GHz.
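A quick back-of-the-envelope calculation shows how that raw lane rate translates into usable throughput. The 8b/10b line coding overhead is a general property of first-generation PCI Express rather than a figure from this article; the short C program below is just a worked example.

```c
/* Worked example: usable bandwidth of one PCI Express 1.x lane.
 * The 2.5-Gbit/s raw rate is from the article; the 8b/10b coding
 * overhead is a known property of first-generation PCI Express. */
#include <stdio.h>

int main(void)
{
    double raw_gbps  = 2.5;                       /* raw line rate per lane */
    double data_gbps = raw_gbps * 8.0 / 10.0;     /* after 8b/10b coding    */
    double mbytes_s  = data_gbps * 1000.0 / 8.0;  /* per direction          */

    printf("%.1f Gbits/s of data per lane (%.0f Mbytes/s each way)\n",
           data_gbps, mbytes_s);
    return 0;
}
```

Even after coding overhead, a single lane delivers roughly 250 Mbytes/s in each direction, which underscores the article's point: the link can outrun the core that feeds it.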
So, it's not surprising that only the very high-end solutions like Intel's Pentium and Xeon and AMD's Athlon and Opteron employ wide serial interfaces. These solutions often are dedicated to applications such as graphics. But any MCU that incorporates the new interfaces will need SERDES technology that's a bit different from the core processing and other interface logic.
On the other hand, chip designers have to get these SERDES-based interfaces to work so developers have a well-tested chip at the ready. Of course, basic chip implementation isn't enough—board layout and power-supply design are also critical to a successful system design (see "PCI Express Design: A Lesson In Techno-Shock," ED Online 10174).
Interfaces available for today's MCUs are the aforementioned HyperTransport, Serial RapidIO, PCI Express, and Gigabit Ethernet. Each technology addresses different application areas, but logically, they have a lot in common.
With the exception of Ethernet, the interfaces use small packets and support multiple full-duplex links. All are point-to-point interfaces that employ switches for expansion. And they can support much longer connection distances than parallel bus technology.
Putting these interfaces on an MCU started with FPGAs (see "FPGAs: Hard And Soft Processors," p. 62). FPGAs will continue to provide a more flexible, although more expensive, solution compared to commercial-off-the-shelf (COTS) MCUs. Likewise, FPGAs are on the forefront of delivering high-speed, serial-interface support for the next generation while COTS chips push current standards.
Turning from parallel to high-speed serial interfaces does bring significant benefits. Chip pinouts have steadily increased as functionality and performance grow, and the lower pin count for the serial interfaces offers a respite. At the low end, a single lane lets devices fit into smaller packages. Moving up, the number of pins required for the serial interface still remains below parallel interfaces like PCI-X.
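To make the pin-count argument concrete, consider a rough comparison. The figures below are illustrative assumptions rather than numbers from the article: four signal pins per PCI Express lane (two differential pairs) plus a shared reference clock pair, against roughly 90 signal pins assumed for a 64-bit PCI-X bus.

```c
/* Illustrative pin-count comparison; the per-lane figure (two
 * differential pairs plus a shared reference clock pair) and the
 * ~90-pin PCI-X estimate are assumptions for this sketch, not
 * numbers from the article. */
#include <stdio.h>

static int pcie_signal_pins(int lanes)
{
    return lanes * 4 + 2;   /* TX+/-, RX+/- per lane, plus REFCLK pair */
}

int main(void)
{
    for (int lanes = 1; lanes <= 16; lanes *= 2)
        printf("x%-2d PCI Express link: %3d signal pins\n",
               lanes, pcie_signal_pins(lanes));
    printf("64-bit PCI-X bus:     ~90 signal pins (assumed)\n");
    return 0;
}
```

Even a x16 link comes in well under the parallel bus on this rough accounting, and a x1 link needs only a handful of pins.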
Serial interfaces also bring lower latency and a different kind of timing issue. Parallel interfaces must address clock skew across their multiple synchronized signals, which tends to be a system designer's problem. Bit jitter on serial interfaces, by contrast, tends to be the chip designer's responsibility, or more specifically the SERDES designer's. Typically, a developer using the chip has little concern for this problem when staying within the recommended design constraints.
Chip designers have to determine the number of serial links to incorporate into a product. Obviously, adding links raises pin count and power requirements while adding connectivity and bandwidth. And because the new interfaces are point-to-point, connecting to several devices requires either multiple interfaces on-chip or off-chip switches.
Switch chips are mandatory in large fabric-based designs (see "MCUs And Fabrics," p. 63). In smaller embedded applications, however, an MCU may be able to connect directly to a handful of devices, which simplifies system design and reduces the overall footprint.
There are benefits to restricting an MCU's serial interface to one connection. It simplifies the chip design. And when the chip connects to only a single device, with expansion through a switch chip remaining straightforward, system design becomes simple. As it turns out, MCUs with high-speed serial interfaces tend to run the gamut.
HYPERTRANSPORT
According to the HyperTransport Consortium, HyperTransport can crank out 22.4 Gbytes/s in full duplex mode when using the full 32-bit implementation. At this point, the 8- and 16-bit implementations are most common. Yet cutting the upper limit in half still leaves an impressive bandwidth for a chip-to-chip interconnect.

AMD chose HyperTransport as the system interconnect for its 64-bit Athlon and Opteron processors. The Opteron targets multiple processor environments. With its triple 16-bit HyperTransport interfaces, designers can easily create systems by connecting processors together.
The dual-core Opteron simply ties two processor cores instead of one to the HyperTransport interfaces (Fig. 1). A significant advantage of HyperTransport is its ability to share memory between processors. This can take place in a cache-coherent fashion, which happens to be AMD's method of choice.
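The consortium's 22.4-Gbyte/s figure checks out with simple arithmetic: 2.8 Gbits/s per bit lane (a 1.4-GHz clock, double data rate) across a 32-bit link, counting both directions. A minimal sketch:

```c
/* Reproducing the HyperTransport Consortium's 22.4-Gbyte/s figure. */
#include <stdio.h>

int main(void)
{
    double per_bit_gbps = 2.8;                       /* 1.4 GHz x 2 (DDR) */
    int    width_bits   = 32;                        /* widest link       */
    double one_way      = per_bit_gbps * width_bits / 8.0;  /* Gbytes/s  */
    double full_duplex  = one_way * 2.0;             /* both directions   */

    printf("%.1f Gbytes/s each way, %.1f Gbytes/s full duplex\n",
           one_way, full_duplex);                    /* 11.2 and 22.4     */
    return 0;
}
```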
When examining HyperTransport, you'll see that it's a relatively simple interface. It was one of the first of these interfaces to become available, and a number of vendors now use it. For example, Broadcom's BCM1480 incorporates four 64-bit MIPS processors in addition to three HyperTransport interfaces (Fig. 2).
Designers can also configure the interfaces for SPI-4, a high-speed interface used in communication applications. Broadcom lets SPI-4 data tunnel through a HyperTransport fabric, which allows SPI-4 to be used on the periphery of a system. Not surprisingly, the chip also has quad Gigabit Ethernet connections.
PMC-Sierra's RM9xxx has only a single HyperTransport channel (Fig. 3). This can link the chip to a coprocessor or to a HyperTransport switch for wider connectivity. These MCUs include dual PCI Express interfaces for high-speed peripheral access. It's not unusual to see this type of mix, with vendors trying to support a wide range of peripheral devices.
Designers could use Cavium Networks' MIPS-based Nitrox security processor with the PMC-Sierra MCU. The Nitrox has an 8-bit HyperTransport channel and its own SPI-4.2 interface. At this point, HyperTransport devices are generally integrated MCUs and North Bridge interfaces for high-performance processors and graphics devices.
SERIAL RAPIDIO
Tundra Semiconductor's Tsi586A Serial RapidIO switch has added the last piece to the puzzle (see "First Serial RapidIO Switch Arrives," ED Online 10075). While switchless applications are possible, the Serial RapidIO landscape is a switched environment, especially in larger systems such as AdvancedTCA. Thus, most devices with Serial RapidIO support incorporate one or two interfaces.

Freescale's MPC8641D dual-core PowerPC is one example of an MCU with multiple interfaces (Fig. 4). As a standalone device, it supports a pair of x8 PCI Express interfaces that provide a wide, high-throughput link to peripherals.
As part of a Serial RapidIO fabric, one of these interfaces turns into a Serial RapidIO interface. As a result, local peripherals can be supported through the other PCI Express channel while maintaining connectivity with the rest of the nodes on the fabric.
Channel configuration is done when the chip starts up, and this autoconfiguration shows how readily the high-speed SERDES can handle different interfaces. It's similar to the prior examples, whose interfaces can be configured for SPI-4 or HyperTransport. The MPC8641D, which targets networking applications, also has quad Gigabit Ethernet support.
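As a rough illustration of what such boot-time configuration involves, consider the sketch below. The register name, address, and bit layout are hypothetical, invented for this example rather than taken from Freescale's documentation.

```c
/* Hypothetical boot-time SERDES protocol selection. The register
 * address and 2-bit-per-port field layout are invented for this
 * sketch; they do not come from any vendor's manual. */
#include <stdint.h>

#define SERDES_CFG ((volatile uint32_t *)0xE0000F00u) /* hypothetical */

enum serdes_proto { PROTO_PCIE = 0, PROTO_SRIO = 1 };

static void serdes_select(int port, enum serdes_proto proto)
{
    uint32_t v = *SERDES_CFG;
    v &= ~(0x3u << (port * 2));            /* clear the port's field        */
    v |= (uint32_t)proto << (port * 2);    /* pick PCIe or Serial RapidIO   */
    *SERDES_CFG = v;                       /* must happen before link training */
}
```

The point is that the SERDES lanes themselves are protocol-agnostic; firmware selects the personality of each port before link training begins.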
Texas Instruments' TMS320C6455 DSP contains a dedicated x4 Serial RapidIO channel (Fig. 5). It can be used to access Serial RapidIO peripherals, but it will more likely be used to link the DSP to a larger network.
The x4 links provide the significant bandwidth necessary for high-throughput signal-processing applications. TI's DSP chip is definitely aimed at the high end. It also includes a pair of Gigabit Ethernet connections. In the future, look for x1 Serial RapidIO DSPs. These will target DSP farms that can easily increase the number of DSPs as an application requires.
In many ways, Serial RapidIO can make DSPs look like intelligent peripherals. A single link provides bidirectional connectivity so that data can be streamed in from the source, processed, and then streamed to a different destination. The source and destination are simply nodes on a Serial RapidIO fabric.
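In code, this intelligent-peripheral pattern boils down to a simple loop: stream a block in, process it, stream it out. The rio_recv() and rio_send() calls below are hypothetical stand-ins for a Serial RapidIO driver API, stubbed out so the sketch builds.

```c
/* The intelligent-peripheral pattern as a loop. rio_recv() and
 * rio_send() are hypothetical stand-ins for a Serial RapidIO
 * driver API, stubbed here so the sketch compiles. */
#include <stddef.h>
#include <stdint.h>
#include <string.h>

static size_t rio_recv(int src, void *buf, size_t len)      /* stub */
{
    (void)src;
    memset(buf, 0, len);            /* pretend data arrived */
    return len;
}

static void rio_send(int dst, const void *buf, size_t len)  /* stub */
{
    (void)dst; (void)buf; (void)len;
}

static void dsp_filter(int16_t *s, size_t n)   /* the DSP's real work */
{
    for (size_t i = 0; i < n; i++)
        s[i] /= 2;                  /* toy example: 6-dB attenuation */
}

void stream_worker(int src_node, int dst_node)
{
    static int16_t block[4096];
    for (;;) {                      /* source -> DSP -> destination */
        size_t n = rio_recv(src_node, block, sizeof block);
        dsp_filter(block, n / sizeof(int16_t));
        rio_send(dst_node, block, n);
    }
}
```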
This approach is possible with other technologies, but Serial RapidIO seems to be the best alternative at this time. It will be interesting to see if Serial RapidIO appears in other coprocessors, such as security-oriented network processors.
PCI EXPRESS
The peripheral connectivity of the future, PCI Express is already replacing PCI and PCI-X interfaces in MCUs. PCI Express is showing up in high-end MCUs like those mentioned earlier. PCI Express channels often ratchet down to x1 links, though x4 and x8 links are typical. Wider channels are more often found on North Bridge chips for high-end processors than on MCUs.

The PCI Express architecture differs from HyperTransport and Serial RapidIO. PCI Express is a tree architecture rooted in a host interface, whereas HyperTransport and Serial RapidIO are fabrics of peers. As such, most MCUs will have a single host or client PCI Express interface.
Some MCUs will possess multiple host interfaces to communicate directly with a small collection of low-speed (relatively speaking) peripherals. Otherwise, a single host interface can easily handle multiple devices via PCI Express switches. A common configuration would be an MCU with a x4 PCI Express host interface that can be split into four x1 interfaces.
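A minimal sketch of that bifurcation choice, with hypothetical types invented for this example, shows the trade-off: one wide link for peak bandwidth, or several narrow links for device count.

```c
/* Hypothetical sketch of PCI Express lane bifurcation: run a x4
 * host port as one x4 link or as four x1 links. Types and names
 * are invented for illustration; real parts set this via straps
 * or configuration registers. */
#include <stdio.h>

enum pcie_bifurcation { ONE_X4, FOUR_X1 };

struct pcie_port_cfg {
    int links;      /* number of independent links */
    int lanes_each; /* lanes per link              */
};

static struct pcie_port_cfg configure(enum pcie_bifurcation m)
{
    struct pcie_port_cfg c;
    c.links      = (m == ONE_X4) ? 1 : 4;
    c.lanes_each = (m == ONE_X4) ? 4 : 1;
    return c;
}

int main(void)
{
    struct pcie_port_cfg c = configure(FOUR_X1);
    printf("%d link(s) of x%d each\n", c.links, c.lanes_each);
    return 0;
}
```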
In a different approach, Applied Micro Circuits' 440SPe incorporates multiple PCI Express interfaces (Fig. 6). It features an x8 PCI Express host interface and dual x4 client interfaces. The 440SPe is designed to fit between one or more PCI Express hosts and PCI Express devices, an environment that's quite common in its target market of intelligent storage.
The 440SPe is finding homes in RAID and network-attached storage (NAS) systems, with a set of disks on one side and a processing system on the other. The latter may include many processors, but it's connected to the 440SPe using one PCI Express connection. Multiple connections provide redundancy and are often part of a dual cluster system.
Moving PCI Express into a fabric environment is possible with Advanced Switching Interconnect (ASI). ASI uses the same hardware interface as PCI Express. Intelligent ASI switches can detect and tunnel PCI Express host and client connections.
This won't change the way PCI Express is deployed on MCUs, but it will allow them to be used in a wider range of applications. Native ASI support within MCUs isn't expected for some time.
GIGABIT ETHERNET
Though Gigabit Ethernet has been popular, it doesn't scale like the other technologies. It's not possible to gang together multiple Ethernet links to provide higher bandwidth. Its latency and overhead are higher, too. Still, it provides MCUs with a link to a fabric that can span the world when using the Internet.

Ethernet offers the advantage of being ubiquitous. It's also available in a range of speeds, from 10BaseT up to 10-Gbit Ethernet. Among its interesting options is Power over Ethernet (PoE). This is less interesting within a confined fabric, but it proves useful in a local- or wide-area network.
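Standard Ethernet framing numbers (preamble, header, frame-check sequence, and interframe gap, which are general Ethernet facts rather than figures from this article) show why the overhead claim holds, especially for small packets:

```c
/* Payload efficiency of Gigabit Ethernet framing: preamble (8),
 * header (14), FCS (4), and interframe gap (12) cost 38 bytes per
 * frame, and payloads under 46 bytes are padded. These are general
 * Ethernet framing figures, not numbers from the article. */
#include <stdio.h>

static double efficiency(double payload)
{
    double padded = (payload < 46) ? 46 : payload;   /* min-frame padding */
    double wire   = 8 + 14 + padded + 4 + 12;        /* bytes on the wire */
    return payload / wire;
}

int main(void)
{
    printf("1500-byte payload: %.1f%% of line rate\n", 100 * efficiency(1500));
    printf("  46-byte payload: %.1f%% of line rate\n", 100 * efficiency(46));
    return 0;
}
```

A full-size frame uses about 97.5% of the line rate for payload, but a minimum-size frame manages only about 55%, which matters for the small packets the other interconnects are built around.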
Ethernet interfaces are available on-chip for MCUs as small as 16 bits. High-end MCUs often sport quad Gigabit Ethernet interfaces. On-chip Ethernet interfaces are common, though they tend to be used for networking applications instead of chip interconnects like HyperTransport, Serial RapidIO, and PCI Express/ASI. In fact, many chips that include Gigabit Ethernet often contain these interfaces as well.
Moving high-speed interfaces onto the MCU continues to make sense. These interfaces will begin to move down the food chain as more interface chips appear and as fabrics become more common. No designer would be surprised at an MCU with a PCI interface today. The same will be true for PCI Express and possibly RapidIO next year.
HyperTransport
Link speed: 2.8 Gbits/s, full duplex, 4-wire
Links: x2, x4, x8, x16, x32
Signaling: low-voltage differential signaling (LVDS)
Fabric: native

Serial RapidIO
Link speed: 2.5 Gbits/s, full duplex, 4-wire
Links: x1, x4
Signaling: low-voltage differential signaling (LVDS)
Fabric: native

PCI Express
Link speed: 2.5 Gbits/s, full duplex, 4-wire
Links: x1, x2, x4, x8, x16, x32
Signaling: low-voltage differential signaling (LVDS)
Fabric: Advanced Switching Interconnect (ASI)
Compatibility: software compatible with PCI/PCI-X

Gigabit Ethernet
Link speed: 1 Gbit/s, full duplex, 4-wire
Links: x1
Signaling: Ethernet
Fabric: native
Applied Micro Circuits
www.amcc.com
Advanced Micro Devices
www.amd.com
Advanced Switching Interconnect SIG
www.asi-sig.com
Altera
www.altera.com
Broadcom
www.broadcom.com
Cavium Networks
www.cavium.com
Freescale
www.freescale.com
HyperTransport Consortium
www.hypertransport.org
Intel
www.intel.com
Lattice Semiconductor
www.latticesemi.com
PCI-SIG
www.pci-sig.com
PCI Industrial Computer Manufacturers Group
www.picmg.com
PMC-Sierra
www.pmc-sierra.com
RapidIO Trade Assoc.
www.rapidio.org
Stargen
www.stargen.com
Texas Instruments
www.ti.com
Tundra Semiconductor
www.tundra.com
Xilinx
www.xilinx.com