
The Enterprise Prepares For Life Beyond 100 Gbits/s (Part 2)

Aug. 13, 2012
This article explores the need for 10-Gbit/s Ethernet connections and beyond to keep pace with growing mobile traffic and data speeds.
Mobile users are one of the fastest-growing sources of new connections to the Internet.1 This growth can be attributed partly to smartphones that provide an excellent user experience for accessing the Internet and partly to late-adopter countries using cellular technology to bring Internet access to their remote populations. But with growth predicted through 2016 and beyond, the problems of aggregation and service support continue to plague designers.

Going The Distance

Once standards are established, they can remain for an extremely long time. The modern English railroad gauge may directly descend from the width of an Imperial Roman chariot’s wheel spacing, but that’s another story. The standard that affects data centers is the size and spacing of racks within the infrastructure.

Typical racks are 19 inches wide, 42 inches deep, and 7 feet high, dimensions that have been standard for many years. Some industries use racks as wide as 23 inches for applications such as telecom switching gear. Even with higher-density integration from shrinking CMOS geometries, these racks aren't shrinking as might be expected.

Actually, they're growing, due to higher power dissipation and cabling requirements. The Open Compute Project has proposed a new standard called Open Rack that's 21 inches wide.2 While growing wider, racks are also growing taller, with some reaching 9 feet high to maximize server density.

As rack dimensions grow, so do the distances between the external connectors and the equipment's internal electronics. At the same time, rising CMOS density has allowed silicon vendors to integrate physical-layer (PHY) components that originally sat outside the core switching ASIC, close to the connectors.

Two things happen in this scenario. First, the distance between the external connectors and the electronics increases. Second, the connection data rate increases. Both have a severe effect on the design's signal integrity. In many cases, active components must be added to recover and re-clock the data to meet the connector's specifications (see the figure). The 10-Gbit/s transmission lines now suffer from far more jitter and signal degradation than the quad (x4) 2.5-Gbit/s links they replaced.

High-speed interconnects have evolved from 2.5-Gbit/s lanes to 10-Gbit/s lanes.

This becomes a problem for 10-Gbit/s Ethernet as well as for storage and computing. PCI Express 3.0 increases per-lane bandwidth from 500 Mbytes/s to nearly 1 Gbyte/s by adopting more efficient coding (128b/130b instead of 8b/10b) and by raising the transfer rate from 5 to 8 Gbits/s.
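
A quick back-of-the-envelope calculation shows where that near-doubling comes from. The short Python sketch below is purely illustrative (the helper function is our own, not part of any spec), using the published line rates and encoding overheads:

```python
# Rough per-lane, per-direction throughput for PCI Express generations.
# Line rates and encodings come from the published specs; the helper itself
# is only an illustration of the arithmetic.

def pcie_lane_throughput_gbytes(transfer_rate_gtps, payload_bits, coded_bits):
    """Return usable Gbytes/s per lane given the line rate and encoding."""
    usable_gbits = transfer_rate_gtps * payload_bits / coded_bits
    return usable_gbits / 8.0  # 8 bits per byte

# PCIe 2.0: 5 GT/s with 8b/10b encoding -> 0.5 Gbyte/s per lane
print(pcie_lane_throughput_gbytes(5.0, 8, 10))     # 0.5
# PCIe 3.0: 8 GT/s with 128b/130b encoding -> ~0.985 Gbyte/s per lane
print(pcie_lane_throughput_gbytes(8.0, 128, 130))  # ~0.985
```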

The higher rate makes it difficult to meet the connector specifications over the same distances where PCI Express 2.0 works fine with passive transmission lines on FR4. In many cases the fix is either more exotic printed-circuit-board (PCB) material or active equalizers/re-drivers that electrically shorten the transmission line. This is not as simple as it appears, since the PCI Express standard uses out-of-band (OOB) signaling to establish a working link between the root complex and a PCI Express node, and any active device placed in the path must pass that signaling through transparently.
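
To see why "electrically shortening" the channel matters, consider a first-order loss budget. The dB-per-inch figures, the connector loss, and the allowed channel loss below are all assumed values chosen for illustration, not measured FR4 data; the point is simply that raising the lane rate increases the loss of the same trace until a passive channel no longer closes:

```python
# Hypothetical first-order loss budget: decide whether a passive FR4 trace
# can carry a link, or whether a re-driver/equalizer is needed.
# ALL numbers here are ASSUMED for illustration -- real designs use
# measured or simulated channel data.

ASSUMED_LOSS_DB_PER_INCH = {
    2.5:  0.25,   # lane rate (Gbits/s) -> assumed trace loss at Nyquist (dB/in)
    5.0:  0.45,
    8.0:  0.80,
    10.0: 0.95,
}
ASSUMED_CHANNEL_BUDGET_DB = 10.0  # assumed end-to-end loss the receiver tolerates

def needs_active_device(lane_rate_gbps, trace_inches, connector_loss_db=1.0):
    """Return True if the estimated channel loss exceeds the assumed budget."""
    loss = ASSUMED_LOSS_DB_PER_INCH[lane_rate_gbps] * trace_inches + connector_loss_db
    return loss > ASSUMED_CHANNEL_BUDGET_DB

# The same 14-inch trace that passes at 5 Gbits/s fails at 8 Gbits/s:
print(needs_active_device(5.0, 14))  # False -- passive FR4 is enough
print(needs_active_device(8.0, 14))  # True  -- consider a re-driver or better material
```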

Semiconductor vendors routinely supply silicon that allows the standard to work over these increased distances. The upcoming PCI Express 4.0 specification targets 16-Gbit/s channels, and given the growing size of racks and equipment, it will become increasingly difficult to meet the standard's interoperability requirements.

Storage interface standards have undergone the same type of revision. In the enterprise storage world, the standard is serial-attached SCSI (SAS), a serialized version of the Small Computer System Interface (SCSI). The SAS-2.0 specification uses 6-Gbit/s channels, which again work with careful FR4 PCB layout techniques and high-performance connectors.

SAS-3.0 uses 12-Gbit/s channels and suffers from the same signal-integrity issues seen with the other higher-speed standards. The connectors and cables used to attach drives to a system compound the problem.
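
Both SAS generations retain 8b/10b encoding, so doubling the line rate doubles the usable payload rate. A minimal sketch of that arithmetic (the helper function is ours, for illustration only):

```python
# Usable throughput per SAS lane, assuming 8b/10b encoding
# (both SAS-2 at 6 Gbits/s and SAS-3 at 12 Gbits/s use 8b/10b).

def sas_lane_mbytes(line_rate_gbps):
    """Return usable Mbytes/s per lane with 8b/10b encoding."""
    return line_rate_gbps * 1000 * (8 / 10) / 8  # Gbits/s -> Mbits/s -> Mbytes/s

print(sas_lane_mbytes(6.0))   # 600.0  Mbytes/s (SAS-2)
print(sas_lane_mbytes(12.0))  # 1200.0 Mbytes/s (SAS-3)
```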

Cable vendors are introducing high-performance connectors (Mini-SAS HD) and dielectric materials to combat the loss of signal integrity. It won't get any easier, since drives may be moving farther away within the system, requiring longer cable runs.

Power Rules

Power is a major concern for both operators and equipment vendors. Even though most ASICs used in high-performance communications equipment are CMOS, newer (faster) equipment uses more power than earlier models.

Also, even though shrinking transistor geometries make each transistor more power efficient, circuit designers use the smaller geometries to pack more transistors into these devices. Dynamic power in CMOS circuitry is proportional to the clock rate, so even though each transistor is more efficient, today's devices contain far more transistors than earlier generations and clock them many times faster.
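
The underlying relationship is the familiar dynamic-power equation, P ≈ α·C·V²·f: energy per switching node scales with its capacitance and the square of the supply voltage, multiplied by how often it toggles. The sketch below uses entirely made-up device numbers to show how higher transistor counts and clock rates can outrun per-transistor efficiency gains:

```python
# Dynamic CMOS power: P = alpha * C * V^2 * f, summed over switching nodes.
# All device parameters below are ILLUSTRATIVE assumptions, not data for
# any real ASIC.

def dynamic_power_watts(transistors, cap_per_node_f, vdd, freq_hz, activity=0.1):
    """First-order dynamic power for a sea of identical switching nodes."""
    return activity * transistors * cap_per_node_f * vdd**2 * freq_hz

# Older-generation device: fewer transistors, lower clock, larger per-node
# capacitance, higher supply voltage.
old = dynamic_power_watts(transistors=50e6, cap_per_node_f=2e-15, vdd=1.8, freq_hz=500e6)
# Newer device: each node is more efficient (smaller C, lower Vdd), but there
# are far more of them and they are clocked much faster.
new = dynamic_power_watts(transistors=1e9, cap_per_node_f=0.5e-15, vdd=1.0, freq_hz=2e9)

print(round(old, 1), "W vs", round(new, 1), "W")  # ~16.2 W vs ~100.0 W
```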

This trend will continue. In part, it's what is driving the larger rack spaces, which provide more airflow and better cable management. Better cable management allows connectors to be spaced on equipment so the interconnections themselves don't block the airflow from the fans. However, it's only a matter of time until connector density expands to fill the available space.

Beyond 10 Gbits/s

In modern data centers, most interconnections are 10-Gbit/s Ethernet over either copper or fiber cables. The channels are either single (SFP+) or grouped into quads (QSFP), and in both cases the links are full duplex.

Several standards are faster than 10-Gbit/s Ethernet. Fibre Channel (FC) reaches speeds of 16 Gbits/s, and 20 Gbits/s is defined as well. Protocol standards such as Internet Protocol (IP) can be mapped onto FC packets, allowing higher-speed interconnections over both fiber and copper cable.

For Ethernet, using multiple lanes of 10-Gbit/s traffic provides higher aggregate bandwidth, but 10 channels are required to reach 100 Gbits/s. A 100-Gbit/s Ethernet fiber module uses a very high-speed 10:4/4:10 serializer/deserializer (SERDES) that converts 10 channels of 10-Gbit/s data to four 25-Gbit/s streams.

These four streams feed four lasers of different wavelengths in a wavelength-division multiplexing (WDM) scheme. The SERDES is expensive and power hungry, and the 10 electrical channels take up connector space. The next generation will feed the fiber module with 25-Gbit/s lanes natively, eliminating the gearbox SERDES and allowing a much smaller connector footprint.
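
The lane arithmetic explains the gearbox: ten electrical lanes at 10 Gbits/s and four optical lanes at 25 Gbits/s both total 100 Gbits/s, so the SERDES is purely a rate conversion. A minimal sketch of that bookkeeping (the function name is ours, for illustration):

```python
# 100-Gbit/s Ethernet lane bookkeeping: a 10:4 gearbox converts ten 10-Gbit/s
# electrical lanes into four 25-Gbit/s streams, one per WDM wavelength.

def aggregate_gbps(lanes, rate_gbps):
    """Aggregate bandwidth of a group of identical lanes."""
    return lanes * rate_gbps

electrical = aggregate_gbps(10, 10)   # 10 x 10-Gbit/s host-side lanes
optical    = aggregate_gbps(4, 25)    # 4 x 25-Gbit/s wavelengths
assert electrical == optical == 100   # the gearbox changes rate, not bandwidth

# A native 4 x 25-Gbit/s host interface carries the same 100 Gbits/s without
# the 10:4 SERDES and with fewer connector pins.
print(aggregate_gbps(4, 25), "Gbits/s")
```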

In Part 3 we will examine in depth the signal integrity issues with moving beyond 10 Gbits/s and the solutions that silicon providers and designers both are employing to solve these difficult problems.

References

  1. The Enterprise Prepares For Life Beyond 100 Gbits/s (Part 1)
  2. Open Rack Standard
