Electronic Design
Will PCI Express Become The Standard For Enterprise Interconnects?

PCI Express (PCIe) is a very common bus standard found in most computers and peripheral equipment. With a history reaching back to 2003, it has evolved to a signaling rate of 8 gigatransfers/s per lane in its revision 3 specification. In fact, an interesting trend is emerging in servers: removing intermediate standards such as SAS in enterprise storage and going directly to PCIe. But will this trend continue to other peripherals, making PCIe the primary connection to all internal and external equipment?


Since the birth of the personal computer, bus standards such as S-100, ISA, EISA, and PCI have emerged to allow multiple vendors to build compatible expansion cards for these systems. The original IBM PC of the 1980s used an extension of the 8088 processor bus to form the peripheral connector bus, and major manufacturers subsequently adopted it.

Later, the peripheral connector bus developed into the Industry Standard Architecture (ISA) bus, which grew to a 16-bit version with the IBM PC/AT and was later extended to the 32-bit Extended ISA (EISA). However, ISA suffered from many issues that prevented scaling as computer processor speeds increased. One notable challenge was the manual configuration of each expansion card.

Intel and Microsoft offered up ISA Plug and Play (ISA PnP) as a vehicle to ease the configuration nightmare. But ultimately, the Peripheral Component Interconnect (now known as conventional PCI) bus had these features built into the standard and superseded EISA and the VESA Local Bus (VLB).

Over time, though, conventional PCI began to suffer from similar issues when scaling bandwidth. The parallel bus structure was susceptible to skew, limiting the error-free data throughput. Thus, it became apparent that a new standard would be required.

In 2003, the PCI Special Interest Group (PCI-SIG) released the PCIe 1.0 specification. This radical departure from previous bus structures was serialized, point-to-point, and full-duplex. It retains software compatibility with the original PCI model: configuration, I/O, and memory operations are packetized into the serial stream and can flow in both directions simultaneously.

Evolving Standards

Any standard must evolve to meet the needs of its implementers or be retired. PCIe has seen three major revisions since its initial release and now runs at 8 gigatransfers/s per lane (PCIe revision 3). A link is built from one or more lanes, so if more bandwidth is required, lanes can be grouped (in powers of two) to multiply the available bandwidth.

For instance, an x8 or “by 8” (PCIe revision 3) link has a transfer rate of roughly 8 Gbytes/s in each direction. The standard allows x16 and x32 as well. The point-to-point architecture also enables each link to run as fast as the attached peripheral. The system no longer needs to adapt to the slowest device on a shared bus.
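The per-lane and per-link figures above can be sketched with a little arithmetic. This is a rough model, not from the article: the signaling rates and line-code efficiencies (8b/10b for generations 1 and 2, 128b/130b for generation 3) are from the published PCIe specifications, and protocol overhead beyond line coding is ignored.

```python
# Rough per-direction payload rate for a PCIe link (a sketch; real links
# lose additional bandwidth to packet headers and flow control).
GT_PER_LANE = {1: 2.5, 2: 5.0, 3: 8.0}        # gigatransfers/s per lane
ENCODING    = {1: 8/10, 2: 8/10, 3: 128/130}  # line-code efficiency

def link_gbytes_per_s(gen: int, lanes: int) -> float:
    """Usable data rate in Gbytes/s for one direction of the link."""
    gbits = GT_PER_LANE[gen] * ENCODING[gen] * lanes
    return gbits / 8  # 8 bits per byte

# A PCIe 3.0 x8 link moves about 7.9 Gbytes/s each way -- the
# "roughly 8 Gbytes/s" quoted in the text.
print(round(link_gbytes_per_s(3, 8), 2))
```

Note how the move from 8b/10b to 128b/130b encoding in revision 3 is why doubling the raw rate from 5 to 8 GT/s still nearly doubled usable bandwidth.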

As PCIe emerged as the peripheral interconnect of choice in servers and personal computers, other standards were evolving for communications and storage. Of course, the ubiquitous Ethernet emerged as the pipe of choice for data transfer. It has found homes in every conceivable application from automotive backbones and industrial controls to enterprise server farms and home computers.

There are standards for storage as well, originating from the Seagate (then Shugart Technology) ST-506 interface and evolving into the AT Attachment/Integrated Drive Electronics (ATA/IDE) standards. Those standards have since been serialized as Serial ATA (SATA). The enterprise world also saw progress as the Small Computer System Interface (SCSI) evolved into several standards such as Serial Attached SCSI (SAS) and iSCSI (SCSI embedded in IP packets).

The interesting thing about iSCSI is its ability to move high-performance storage traffic over a conventional network without special physical-layer (PHY) requirements. In effect, this decoupled storage from the physical infrastructure and allowed the actual storage medium to reside anywhere within the network, including distant (offsite) locations. Software can treat the hard-drive storage as local to a machine when it is actually wherever the system designers want it.

The Enterprise Evolution

At the heart of the World Wide Web, corporate infrastructures and distributed “cloud” computing platforms are data centers. These centers mirror the early days of computing when “main frames” stood behind glass windows and centralized control maintained the information and system integrity. Really, not much has changed.

Today’s data centers are a centralized collection of hardware and software that provide the services we use every day. Servers and blade centers are connected in banks to switches and storage systems through various interconnection schemes. Ultimately, the actual machine where a particular action takes place may not be immediately known since much of the infrastructure is virtualized to manage loading and reduce power consumption.

So, where does PCIe play into the enterprise? If we examine the classic server platform we’ll find native PCIe as the primary means for the CPUs to access storage and peripherals.

Intel’s Romley server platform provides 40 PCIe 3.0 lanes to access the outside world. Some of these lanes connect to planar (on PCB) peripherals, but many of them traverse connectors, mid-planes, and riser cards that hold PCIe x4 or x8 connectors. This is the I/O connection for the server, and dual 10G Ethernet NIC cards often are inserted here as the mechanism to connect to the infrastructure.

Theoretically, a 64-bit processor could directly address more than 16 exabytes of storage (physical RAM or otherwise). In practice, the usable address space of today's x86-64 processors is smaller: the architecture uses 48-bit virtual addresses and caps physical addresses at 52 bits, and shipping CPUs typically implement fewer physical address bits than that.

As CPUs evolve, that number will increase well into the exabyte range. When this happens, large storage arrays may simply be part of the CPU’s memory map. Storage also could be shared between CPUs. In this scenario, the interface best suited for large disk drives could be PCIe.
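The address-space sizes behind these claims are simple powers of two. A quick sketch (using binary units, so "16 exabytes" here means 16 EiB, or 2^64 bytes):

```python
# Address-space sizes implied by the bit widths discussed above.
def addressable_bytes(bits: int) -> int:
    """Bytes reachable with the given number of address bits."""
    return 2 ** bits

print(addressable_bytes(48) // 2**40)  # 48-bit addressing: 256 TiB
print(addressable_bytes(52) // 2**50)  # 52-bit addressing: 4 PiB
print(addressable_bytes(64) // 2**60)  # full 64-bit space: 16 EiB
```

Even the 52-bit physical ceiling (4 PiB) dwarfs today's installed RAM, which is why mapping entire storage arrays into the CPU's memory map is plausible.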

This has two advantages. First, the drive appears to the CPU as real (un-virtualized) memory independent of how it is physically stored. Second, the performance hit from mapping virtual memory to disk storage is eliminated, improving storage throughput. This is happening today.
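The access model described here, where software touches storage with plain memory operations, can be illustrated with a memory-mapped file. This is only a sketch of the programming model: it uses an ordinary temporary file and Python's `mmap` module, whereas a real PCIe-attached drive would be exposed by the operating system rather than created this way.

```python
# A sketch of memory-mapped storage: writes through the mapping are
# ordinary memory stores, yet they land in the backing store.
import mmap
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "store.bin")
with open(path, "wb") as f:
    f.write(b"\x00" * 4096)            # a tiny stand-in "drive"

with open(path, "r+b") as f:
    with mmap.mmap(f.fileno(), 4096) as mem:
        mem[0:5] = b"hello"            # a plain memory write...

with open(path, "rb") as f:
    print(f.read(5))                   # ...appears in storage: b'hello'
```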

The Serial ATA International Organization (SATA-IO) has announced SATA Express, which uses PCIe as the physical transport but keeps the SATA software infrastructure. The Small Form Factor committee has released SFF-8639, a connector standard designed to allow the transition from SAS/SATA to SATA Express and PCIe (see the figure). Additionally, enterprise solid-state drives (SSDs) based on flash technology are already commercially available with PCIe interfaces.

The SFF-8639 multi-protocol peripheral connector can bridge the gap between current hard-drive standards and PCIe.


It may be some time before storage and peripheral standards merge, but a trend toward native-mode PCIe is becoming apparent. As standards converge, they can take advantage of the many benefits PCIe provides, including hot-swapping, high availability, and extremely high bandwidth.

As PCIe 4.0 is released sometime in 2014 or 2015, the need for storage will certainly have grown. At that time, we may see a merger of the storage and peripheral bus standards into a grand unified PCIe bus standard, along with storage drives with only a PCIe interface.


For more information about PCIe and other interface standards, visit www.ti.com/interface-ca.
