PCI Express (PCIe) Gen 3 is the mainstay peripheral interconnect for microprocessors. It scales by adding lanes, typically in an x1, x2, x4, x8, x16 progression. Processor chips may provide anywhere from one lane to more than a couple dozen, depending on the bandwidth needed for a particular application.
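The way lane count scales bandwidth is easy to work out: Gen 3 signals at 8 GT/s per lane with 128b/130b line coding, so each lane carries roughly 1 Gbyte/s of payload in each direction. The sketch below computes approximate one-direction rates per lane count; it ignores protocol overhead beyond the line code, so real-world throughput is somewhat lower.

```python
# Approximate one-direction payload bandwidth of a PCIe Gen 3 link.
# Gen 3 runs at 8 GT/s per lane with 128b/130b encoding, so each lane
# carries about 8e9 * (128/130) bits of payload per second.

GT_PER_LANE = 8e9      # Gen 3 raw signaling rate, transfers per second
ENCODING = 128 / 130   # 128b/130b line-code efficiency

def gen3_bandwidth_gbps(lanes: int) -> float:
    """Return approximate one-direction bandwidth in gigabytes per second."""
    bits_per_second = lanes * GT_PER_LANE * ENCODING
    return bits_per_second / 8 / 1e9

for lanes in (1, 2, 4, 8, 16):
    print(f"x{lanes:<2} = {gen3_bandwidth_gbps(lanes):.2f} GB/s")
```

An x4 link, the width used by most NVMe drives, comes out to just under 4 Gbytes/s before transaction-layer overhead.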
The high-speed serial PCIe interface superseded the parallel PCI bus as the foremost peripheral interface, although even PCI's predecessor, ISA, remains in use. Access to peripherals such as Ethernet adapters remains a focus for PCIe, but it can also serve as a multiple-node interconnect fabric and as the access mechanism for solid-state storage via Non-Volatile Memory Express (NVMe).
NVMe is a storage protocol designed from the ground up to run over PCI Express, whereas SAS (Serial Attached SCSI) is based on the SCSI command set and shares its electrical interface with SATA (Serial Advanced Technology Attachment). The protocols are similar in that commands and operations are queued to provide more efficient throughput between the storage device and the host. NVMe can handle other storage technologies, but for now it is used primarily with NAND flash memory, including 3D NAND flash.
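The queueing model is the heart of NVMe's efficiency: the host posts many commands to a submission queue without waiting for each to finish, and the device posts results to a paired completion queue as it works through them. The toy model below illustrates that flow in simplified form; it is an illustrative sketch only, not the real NVMe register and doorbell interface.

```python
from collections import deque

# Toy model of an NVMe submission/completion queue pair. The host
# submits several commands back to back, the "device" drains the
# submission queue, and the host later reaps the completions.
# (Illustrative sketch only; real NVMe uses memory-mapped queues
# and doorbell registers.)

class QueuePair:
    def __init__(self):
        self.sq = deque()  # submission queue (host -> device)
        self.cq = deque()  # completion queue (device -> host)

    def submit(self, command: str):
        # In real NVMe the host would also ring a doorbell register here.
        self.sq.append(command)

    def device_process(self):
        # Device drains the submission queue and posts completions.
        while self.sq:
            cmd = self.sq.popleft()
            self.cq.append(f"done:{cmd}")

    def reap(self):
        # Host collects all pending completions.
        completions = list(self.cq)
        self.cq.clear()
        return completions

qp = QueuePair()
for lba in range(4):
    qp.submit(f"read lba={lba}")  # post several reads without waiting
qp.device_process()
print(qp.reap())
```

Because the host never blocks between submissions, a real controller can keep many flash dies busy in parallel, which is where NVMe's throughput advantage over older one-command-at-a-time models comes from.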
NVMe storage devices can be placed on the motherboard or attached in a variety of ways. One option is an NVMe PCI Express add-in card. Another is the M.2 NVMe module form factor, such as Micron’s 512-Gbyte unit (Fig. 1), which uses 3D NAND and an x4 PCI Express interface. M.2 sockets are becoming more common on motherboards and are well suited to embedded applications: they are more rugged, and they let developers select the amount of storage needed for an application. The M.2 form factor also supports USB and SATA interfaces, with keyed sockets so only matching modules can be plugged into a board.
On the enterprise side, the U.2 drive module (Fig. 2) is becoming more popular. Its connector supports a range of interfaces, including an x4 PCI Express interface for NVMe as well as multichannel SAS and SATA. These modules are designed for hot-swap operation and are found in systems with anywhere from a half dozen to hundreds of slots. Such systems take advantage of commercially available PCI Express switches so that one or more hosts can access the drives.
PCI Express fabrics have also been used to link multiple hosts together, as in Dolphin’s PCI Express solutions, which combine a PCI Express switch with host adapters cabled to it. Such a system can run a version of Linbit’s DRBD, which replicates disk storage across nodes. Of course, the hosts can use the same PCI Express fabric to reach NVMe storage as well.
PCI Express has also been used to link other devices together. For example, some GPGPUs can use their PCI Express interface, together with Ethernet adapters that support remote DMA (RDMA), to communicate with other systems using a technology called GPUDirect. This configuration is useful in supercomputing clusters where GPGPUs are located on different nodes within the system, and the approach works with other interconnects such as InfiniBand as well.
PCI Express began as a way for a single root-complex host to interface with peripherals, but these days it can do much more.