One standout among 10 Gigabit Ethernet’s alternatives is 10GBase-T due to its backward-compatibility, flexibility and low cost.
Ethernet reigns as the networking protocol of choice in LANs, MANs, and data centers. While Fibre Channel and InfiniBand have their niches, Ethernet still dominates among data centers that interconnect hundreds and even thousands of servers, routers, and switches. However, Ethernet itself offers a rich set of connectivity alternatives. This article reviews the various 10 Gigabit Ethernet (10GE) options and their relative plusses and minuses.
Table of Contents
- The 10GE Standards
- Optical Modules
- Copper-Based Technologies
- Comparing 10GE Options
- Power-Saving Modes
Not long ago, links operating at 10 Gbits/s were considered exotic, relegated to high-capacity backhaul in the core sections of wide-area networks (WANs) and undersea cables. However, the rise of cloud computing, along with the increased use of unified data/storage connectivity and server virtualization by enterprise data centers, has created an unquenchable thirst for ever-higher data-rate links.
As was the case with three prior generations of Ethernet, ubiquity, ready and familiar management tools, and a compelling cost structure propelled 10GE's rapid dominance of the computer networking scene. Crehan Research estimates that in 2011, more than 8 million 10GE ports were in place among data-center switches. The Linley Group's January 2011 report predicted robust 10GE growth, estimating that 10GE network interface card (NIC)/LAN-on-motherboard (LOM) shipments alone would surpass 16 million ports in 2014.
Implementing a 10G link can take on many forms, ranging from optical modules and direct-attach twin-ax cable to silicon transceivers connected to Cat6A unshielded twisted-pair cable. What are the differences?
Starting in 2002, the Institute of Electrical and Electronics Engineers (IEEE) created several standards for 10G Ethernet connectivity, beginning with 802.3ae. The more popular options include:
- 10GBase-SR: operates over multimode fiber using optical modules with 850-nm lasers
- 10GBase-LR: operates over single-mode fiber using optical modules with 1310-nm lasers
- 10GBase-LRM: operates over multimode fiber using optical modules with 1310-nm lasers
- 10GBase-ER: operates over single-mode fiber with reach up to 40 km using optical modules with 1550-nm lasers
- 10GBase-KX4: operates over four copper backplane lanes with distance up to 1 meter
- 10GBase-KR: operates over a single backplane lane with distance up to 1 meter
- 10GBase-T: operates over Cat6 and Cat6A twisted-pair copper cabling with distance up to 100 meters
In addition, a non-IEEE standard approach called SFP+ Direct Attach has gained in popularity. It uses a passive twin-ax cable assembly that connects directly into an SFP+ module housing.
This article focuses on 10GE links inside the data center. Therefore, the variants intended for very long haul WAN connectivity are outside its scope. Such variants include the 10GBase-LR, which has a specified reach of 10 km, and 10GBase-ER, which has 40-km reach.
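For quick reference, the variants above can be captured in a small lookup table. The sketch below is illustrative only: the reach figures are the ones quoted in this article (OM3 reach for 10GBase-SR, Cat6A reach for 10GBase-T), and the helper name `candidates` is hypothetical, not from any standard.

```python
# Illustrative lookup of the 10GE variants discussed above.
# Reach figures are the nominal ones quoted in this article.
VARIANTS = {
    "10GBase-SR":  {"medium": "multimode fiber",   "max_reach_m": 300},     # OM3; 400 m on OM4
    "10GBase-LR":  {"medium": "single-mode fiber", "max_reach_m": 10_000},
    "10GBase-LRM": {"medium": "multimode fiber",   "max_reach_m": 220},
    "10GBase-ER":  {"medium": "single-mode fiber", "max_reach_m": 40_000},
    "10GBase-KX4": {"medium": "backplane",         "max_reach_m": 1},
    "10GBase-KR":  {"medium": "backplane",         "max_reach_m": 1},
    "10GBase-T":   {"medium": "twisted pair",      "max_reach_m": 100},     # Cat6A
    "SFP+ DA":     {"medium": "twin-ax",           "max_reach_m": 10},
}

def candidates(distance_m: float) -> list[str]:
    """Return the variants whose specified reach covers the given distance."""
    return [name for name, v in VARIANTS.items() if v["max_reach_m"] >= distance_m]
```

For example, only the single-mode optical variants remain once the required distance passes 400 meters, which matches the article's point that long links are optical territory.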
As in prior transitions to higher transmission speeds, optical technology pioneered 10-Gbit/s data rates. From early on, optical transceivers were packaged in modules specified by multi-source agreements (MSAs) created by module manufacturers and equipment OEMs. Successive MSAs specified ever-smaller form factors, evolving from XENPAK to XPAK to X2 to XFP to SFP and finally to today's popular SFP+ modules.
The SFP+ form factor is quite small at 5.57 by 1.38 by 1.19 cm (Fig. 1). That size is achieved by moving the clock- and data-recovery (CDR) functions, which were internal to all of its module predecessors, to external devices on the printed wiring board. Most of the popular IEEE 10GE transmission standards are implemented in the SFP+ form factor.
For data-center applications, 10GBase-SR-compliant (“short range”) modules have emerged as the most popular variant of the optical options. Of all the optical variants standardized by IEEE, 10GBase-SR delivers the lowest-cost and -power optical modules. The modules, used in multimode fiber, incorporate 850-nm lasers. Over older FDDI-grade 62.5-µm multimode fiber cabling, 10GBase-SR’s maximum range spans 26 meters; over 62.5-µm OM1 fiber, 33 meters; over 50-µm OM2 fiber, 82 meters; over OM3 fiber, 300 meters; and over OM4 fiber, 400 meters.
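The fiber-grade reach figures above reduce to a simple lookup. The table below uses exactly the numbers quoted in this article; `sr_link_ok` is a hypothetical helper name, not an API from any library.

```python
# 10GBase-SR maximum reach (meters) by multimode fiber grade,
# per the figures quoted in the text above.
SR_REACH_M = {
    "FDDI-grade 62.5 um": 26,
    "OM1 62.5 um": 33,
    "OM2 50 um": 82,
    "OM3": 300,
    "OM4": 400,
}

def sr_link_ok(fiber: str, distance_m: float) -> bool:
    """True if a 10GBase-SR link of this length is within spec for the fiber grade."""
    return distance_m <= SR_REACH_M[fiber]
```

The practical upshot: a 10GBase-SR module that works fine over 300 meters of OM3 may fail over a few tens of meters of legacy FDDI-grade fiber, so the installed cable plant matters as much as the module.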
The 10GBase-SR transmitter incorporates a vertical-cavity surface-emitting laser (VCSEL), which is lower in both cost and power than the edge-emitting distributed-feedback (DFB) lasers needed for single-mode fiber. OM3 and OM4 optical cabling is sometimes described as laser-optimized because it's designed to work with VCSELs.
While optical module prices have dropped dramatically over the last decade, continued price erosion is hampered by the modules' very construction. An optical module is essentially a hybrid microcircuit containing a variety of components, each manufactured from a different material (such as silicon CMOS, silicon germanium, and gallium arsenide). Perhaps one day we will see true monolithic integration of both optical and electrical components. However, the conflicting characteristics and requirements of lasers, PIN diodes, laser drivers, and transimpedance amplifiers (all necessary functions in an optical module) currently seem inimical to monolithic implementation.
On the other hand, even the so-called “short range” optical module variants stand alone in their ability to service distances longer than 100 meters. It’s true that various copper-based technologies are gaining popularity for the short-link distances characteristic of modern data-center configurations. Still, optical connections most likely will dominate connections between data-center rooms, long spans between server and storage clusters, and connections between end-of-row (EoR) switches and core switches.
However, one important weakness inherent in optical modules is that they’re not backward-compatible with lower-speed link partners. Since they can only handle the one data rate they’re designed for, both ends of the link must be upgraded to the higher-speed 10GE modules at the time of deployment. As will be explained later in this article, other 10GE technologies permit the incremental upgrading of data-center infrastructure with backward-compatible technologies.
Depending on the optical-fiber type and the distance traversed, an optical module may need an external companion PHY device that incorporates electronic dispersion compensation (EDC). Certain types of optical cable (e.g., FDDI-grade multimode cable) subject high-speed 10-Gbit/s signals to modal dispersion, leading to interference between successive bits of data. The EDC function compensates for these distortions and recovers the data with high fidelity. It's typically integrated together with the CDR function needed with SFP+ modules, and is available in single- or multi-port ICs from vendors such as Broadcom, Cortina, and Vitesse.
If distance played an important role in classifying optical 10GE technologies, that metric is even more pronounced with copper-based connectivity options. Basically, copper-based solutions fall into two categories: distances appropriate to backplanes within a box, and distances associated with connections between boxes.
Generally, 10GBase-KX4 and 10GBase-KR are intended for intra-box backplane connections with distances up to 1 meter. The major difference between the two is that KX4 operates over four copper lanes, while KR is a serial 10-Gbit/s link operating over a single lane.
KR has become the predominant standard, both because it uses fewer lanes and because equipment manufacturers have learned how to cope with 10-Gbit/s serial data. KR-compliant transceivers contain DSP-based adaptive equalizers, which mitigate the inter-symbol interference created by the frequency-dependent attenuation of copper traces. As such, they can "open the eye" of 10-Gbit/s data even after it travels one meter on a copper backplane.
For distances “outside of the box” (i.e., typical of distances between boxes), IEEE-compliant 10GBase-T and SFF-specified SFP+ Direct Attach twin-ax links are the more popular choices.
The Small Form Factor (SFF) Committee, an ad hoc industry trade group, created the specification SFF-8431; its Appendix E specifies SFP+ Direct Attach (DA). These cables consist of two coaxial cables, each providing simplex transmission in either receive or transmit direction. The cable assemblies are ordered in pre-specified lengths and come with attached SFP+ module form-factor connectors. Distances between 3 and 10 meters are supported, depending on the coaxial cable gauge.
Like their optical counterparts, DA cables frequently require EDC circuits to mitigate inter-symbol interference caused by the channel’s frequency-dependent attenuation. It’s common practice to attach such an EDC physical layer (PHY) to an empty SFP+ cage on a printed wiring board and have the network designer decide whether to populate that cage with an optical or DA cable connected SFP+ module. The distance required for the link often determines that choice. Optical modules are typically used for distances greater than seven meters.
The other copper-based 10GE connectivity option is 10GBase-T, also known as IEEE 802.3an. With 10GBase-T, 10-Gbit/s communications occurs over unshielded twisted-pair cabling. It’s the fourth generation of so-called Base-T technologies, which all use RJ45 connectors and unshielded twisted-pair cabling to provide 10- and 100-Mbit/s, and 1- and 10-Gbit/s data transmission. A 10GBase-T transceiver (Fig. 2) uses full-duplex transmission with echo cancellation on each of the four twisted pairs available in standard Ethernet cables, transmitting an effective 2.5 Gbits/s on each pair.
Category 6 or category 6A cabling is typically used with 10GBase-T. Cat6 is specified for distances up to 55 meters, whereas Cat6A is specified for up to 100 meters.
As evidenced by the previous discussion, link distance plays a very important role in determining the best option. Backplane traces on printed wiring boards are clearly the domain of 10GBase-KR. For distances longer than 100 meters, optical module technology is the overwhelming choice: 10GBase-SR covers distances up to 300 meters (400 meters over OM4 fiber), and other optical standards serve the longer distances outside the data center. Despite its relatively higher price, optical technology is effectively mandatory for long-distance links.
For distances between 10 and 100 meters, the data center's only real options are optical connections and 10GBase-T. Optical modules hold the advantage of dissipating lower power and being inherently immune to electromagnetic interference (EMI). However, they cost more, require matched-speed link partners, and use relatively expensive optical cable.
On the other hand, 10GBase-T can communicate and interoperate with legacy, slower Base-T systems. Commercially available 10GBase-T transceivers can fall back to both 1000Base-T (1 Gbit/s) and 100Base-TX (100 Mbits/s) operation. Data centers, therefore, can "future-proof" their switching architectures: a 10GBase-T switch purchased today can communicate effectively with all legacy 1-Gbit and 100-Mbit servers while providing the infrastructure to upgrade to 10-Gbit switching as servers of commensurate speed arrive.
Moreover, data-center expenditures can grow incrementally, rather than having to convert all servers and switches to 10-Gbit speeds, which would be required with a non-compatible technology like SFP+ Direct Attach. A 10GBase-T switching system can convert only those links that truly need upgrades to 10-Gbit speeds while maintaining 1-Gbit speed on legacy servers that don’t require such data rates.
Another advantage of 10GBase-T technology is its ability to use ubiquitous and inexpensive cabling. In many cases, the data center’s installed base of cabling supports 1000Base-T systems. Optical cable usually needs to be purchased new, and it is more expensive than Cat6 or Cat6A copper cabling.
While optical technology is indeed EMI-immune (RF energy doesn’t interfere with photons), enhancements to 10GBase-T transceivers have substantially improved their EMI immunity. To contend with EMI events, 10GBase-T transceivers support adaptive interference cancellation. Equipment using these transceivers will be validated through rigorous EMI tests, such as those mandated by the Telcordia GR1089 standard, which calls for testing with field strengths of 8.5 V/m.
Another portion of the distance spectrum to consider lies between 1 and 10 meters. At these distances, copper-based 10GBase-T and DA twin-ax cables both become potential choices.
One key advantage of DA twin-ax cables is that standalone EDC-enabled PHY devices may not be needed for very short distances (1 to 3 meters) if the CDR circuits are integrated directly into the switch or media access controller (MAC) IC that communicates with the cable. This cuts down on support circuits, reducing cost and power dissipation. However, distances above three meters typically require EDC PHYs, making 10GBase-T an attractive alternative.
Regardless of the distance, SFP+ connectorized DA cable is more expensive than UTP Cat6 or Cat6A cable. Furthermore, since twisted-pair cabling and RJ45 connectors have been a part of the data-center infrastructure for many years, widely used techniques and tools exist for terminating (attaching connectors to) Cat6 and Cat6A cables on the data-center floor. Such tools give managers the flexibility of cutting spooled cable to needed lengths, rather than ordering and stocking pre-defined lengths of terminated cable, as is required for optical and twin-ax.
However, by far the most compelling argument in favor of 10GBase-T is the ongoing development of LOM chips. These will allow server manufacturers to offer 10GBase-T as the default connectivity option. The implications of this development are quite profound—servers will come preconfigured with Ethernet connections and be able to negotiate 100 Mbits, 1 Gbit, or 10 Gbits, depending on the capabilities of the link partner on the other end of the line. Data-center managers will want to be ready for such a development by deploying a 10GBase-T-capable switch that can extract the full capability of the server.
One of the arguments against 10GBase-T surrounds power dissipation, though it's mostly based on early implementations of the technology. Thanks to recent advances in semiconductor lithography, 10GBase-T transceivers have realized dramatic reductions in power dissipation during normal operation. From a per-port power of over 6 W just a few years ago (interestingly, the same per-port power as initial shipments of 1000Base-T), today's 40-nm devices are capable of sub-4-W performance.
In addition, the continuing “Moore’s Law” shrinkage of chip feature sizes will usher in 28-nm devices this year that promise to further reduce power dissipation to about 2.5 W per port when operating over a 100-meter line. Figure 3 shows how enhanced semiconductor lithography has improved the power dissipation of 10GBase-T transceivers.
Advances in semiconductor technology aren't the only means of reducing power dissipation, though. Base-T systems in general, and 10GBase-T systems in particular, can take advantage of some unique and standards-based algorithms that exploit the nature of computer traffic. They include "Short Reach mode" operation, Wake-on-LAN (WoL), and Energy Efficient Ethernet (EEE) operation.
Today’s 10GBase-T PHYs can help substantially reduce overall power dissipation by automatically detecting channel length between compliant transceivers. When channel length is less than 100 meters, 10GBase-T transceivers are able to reduce their power dissipation while still maintaining fully compliant bit-error-rate (BER) performance. This so-called Short Reach mode takes advantage of the larger signal-to-noise ratios present due to lower signal attenuation in short channels, resulting in dramatic power-dissipation reductions.
For example, since the signal strength at the receiver is significantly larger when it's attenuated by only 10 meters of cabling (versus 100 meters), transmit power can drop substantially without adversely affecting BER. A common misperception regarding Short Reach mode is that it's an on-off condition directly tied to a specific link length (e.g., 30 meters). In fact, the Short Reach mode power-dissipation profile is continuous and scales with link length.
Short Reach mode not only reduces transmit power, it also curtails and internally powers down the number of filter taps used for echo cancellation and line equalization. For instance, a transceiver that typically exhibits 3.5 W of power dissipation when connected to a 100-meter channel may drop to only 2.5 W when connected to a 30-meter channel, or less than 2 W when connected to a 10-meter channel. Because many of the latest data-center configurations rely on the short-length characteristic of server to top-of-rack (ToR) switch connections, exploiting this feature has become more important.
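The continuous, length-dependent behavior described above can be sketched with a toy model. The sample points below are the illustrative figures from the text (3.5 W at 100 m, 2.5 W at 30 m, about 2 W at 10 m); the piecewise-linear interpolation is purely an assumption for illustration, since a real PHY adapts its transmit power and filter taps continuously.

```python
# Hypothetical Short Reach mode power model, built from the example
# figures in the text. Not vendor data.
POWER_POINTS = [(10, 2.0), (30, 2.5), (100, 3.5)]  # (channel length m, watts)

def estimated_power_w(length_m: float) -> float:
    """Linearly interpolate transceiver power between the sample points."""
    pts = POWER_POINTS
    if length_m <= pts[0][0]:
        return pts[0][1]
    for (x0, y0), (x1, y1) in zip(pts, pts[1:]):
        if length_m <= x1:
            return y0 + (y1 - y0) * (length_m - x0) / (x1 - x0)
    return pts[-1][1]
```

The point of the model is the shape, not the exact numbers: power tracks channel length smoothly rather than snapping between a "short" and a "long" setting.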
WoL is a networking standard uniquely implemented on Base-T systems. In this case, a network element, such as a server, is put to sleep until awakened by a special network signal called a “magic packet.” The server’s network interface card (NIC) reverts to a very low power-dissipation mode during the sleep period, but remains alert and waiting for the magic packet. Once it arrives, the server is awakened and normal operation is resumed. Since WoL wakeup time is typically tens of seconds, it’s designed for long periods of time when servers are idle, such as at night or during other lengthy periods of inactivity.
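The magic-packet format itself is simple: six 0xFF bytes followed by the target's MAC address repeated 16 times, conventionally sent as a UDP broadcast (port 9 is a common choice, though WoL is link-layer and the UDP wrapper is just a convenient carrier). The function names below are mine; this is a minimal sketch, not a production tool.

```python
import socket

def magic_packet(mac: str) -> bytes:
    """Build a Wake-on-LAN magic packet: 6 x 0xFF, then the MAC 16 times."""
    mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
    if len(mac_bytes) != 6:
        raise ValueError("MAC address must be 6 bytes")
    return b"\xff" * 6 + mac_bytes * 16

def send_wol(mac: str, broadcast: str = "255.255.255.255", port: int = 9) -> None:
    """Broadcast the magic packet over UDP on the local network."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        s.sendto(magic_packet(mac), (broadcast, port))
```

The sleeping NIC scans incoming frames for this 102-byte pattern and, on a match, signals the host to power up.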
Even the most active of data centers experiences periods of time in which it needs only a portion of its capacity. This is a natural consequence of overbuilding resources to accommodate peak compute demands, and of the temporal and seasonal fluctuation in those demands due to non-uniform user locations and time schedules. WoL can exploit these demand fluctuations with startling results: putting a single typical 500-W server to sleep saves far more power than the combined savings of hundreds of transceiver devices.
It should be emphasized again that optical and DA links aren't designed to support the WoL protocol and, consequently, force the servers and switches they connect to remain on and dissipate full power around the clock. 10GBase-T, on the other hand, takes advantage of WoL and benefits the data center with reduced overall power needs.
While WoL is designed for lengthy idle periods, another technology called Energy Efficient Ethernet (EEE) specifically targets the bursty nature of computer traffic. (EEE was developed by the IEEE 802.3az task force and issued as a completed standard in 2010.) Typical Ethernet traffic contains many gaps, ranging in duration from microseconds to milliseconds. Heretofore, these gaps were filled with "idle patterns," which carry no real computer information but whose waveform transitions maintain clock synchronization between transceivers. The EEE algorithm replaces those idle patterns with a Low Power Idle (LPI) mode, which dissipates very little power.
The LPI mode used during idle periods requires a new signaling scheme composed of alerts over the line, and to and from station management. During the LPI mode, a Refresh signal keeps receiver parameters, such as timing lock, equalizer coefficients, and canceller coefficients, in their most current state. These are also critical to enable fast transitions from LPI to Active modes. Typical transition times from Active to LPI mode and back are in the 3-µs range.
The bottom line is that when using the EEE algorithm, transceiver power savings can range between 50% and 90%, depending on actual data patterns. In quantitative terms, a 28-nm 10GBase-T transceiver with a typical Active power dissipation of 1.5 W at 30-meter reach will dissipate only 750 mW when utilizing the EEE algorithm with normal computer data patterns.
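The 1.5 W to 750 mW example reduces to duty-cycle arithmetic, sketched below. Assuming LPI power is small enough to neglect (an assumption for illustration, not a measured figure), the 50% savings corresponds to a link that is active about half the time.

```python
# Back-of-the-envelope EEE model: average power is a duty-cycle-weighted
# mix of Active and LPI power. The neglect of LPI power is an assumption.
def average_power_w(active_w: float, lpi_w: float, active_fraction: float) -> float:
    """Average transceiver power for a given fraction of time spent Active."""
    return active_w * active_fraction + lpi_w * (1.0 - active_fraction)
```

With `active_w = 1.5`, negligible LPI power, and `active_fraction = 0.5`, the model reproduces the article's 750-mW figure; burstier traffic (a lower active fraction) pushes savings toward the 90% end of the quoted range.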
System-level optimizations in switches and Ethernet controller silicon are expected to take advantage of EEE’s low-power idle signaling and save far more power than would the transceiver. That’s because they can leverage the consumption of the entire switch or server, which is more than double the power per port of even the previous generation of transceivers.
Overall, 10GBase-T connectivity is the most flexible, economical, backward-compatible, and user-friendly 10G Ethernet connectivity option available. Its benefits include the ability to interoperate with legacy slower technologies, the use of ubiquitous and inexpensive cabling and connectors, the flexibility of full structured-wiring reach, the ease of Cat6A cabling deployment, and power-saving features. As a result, 10GBase-T is ideally suited for the rapidly expanding needs of today's data center. It also offers a glimpse into the near future, in which 10G Ethernet over inexpensive, widely deployed copper wiring will become ever more prevalent.
- Jones, William, 10GBase-T Tutorial Overview, Solarflare Communications.
- 10GBase-T: 10 Gigabit Ethernet Over Twisted Pair Copper, Ethernet Alliance, 2007.
- IEEE 802.3ae 10Gb/s Ethernet Task Force, Blue Book, 2000.
- 10 Gigabit Ethernet Cabling, ProCurve Networking, 2006.
- INF-8074i Specification for SFP (Small Formfactor Pluggable) Transceiver, SFF Committee, 2001.
- SFF-8431 Specifications for Enhanced Small Form Factor Pluggable Module SFP+, SFF Committee, 2009.