Energy Metrics Of Data Transmission
In the past it was enough to say a system moves bits from point “A” to point “B” at “X” bits per second. That was before server farms housed 10,000 servers and petabytes of storage. Today, equipment is graded on its ability to move information error-free (bit error rate (BER) better than 10⁻¹²) as efficiently as possible.
This has started a war of transmission efficiency in which the lowest power wins. Enterprise operators are concerned with server capacity and data transmission speed, as well as the amount of energy the system consumes. All the energy needed to run the equipment shows up as waste heat, so the installation must remove it, which uses even more energy.
It may be simplistic to view data transmission this way, but evaluating the performance of any solution should include power consumption, whether it’s an IC used to drive a data channel or a high-performance multi-gigabit per second switch. I often laugh when I start thinking of metrics like this, much like silicon production rates stated in units of “nano-hectares per fortnight.”
In the bigger picture, it boils down to the essence of the goal: move data around the system as quickly as possible with the least amount of energy. The efficiency metric can be expressed as:
Efficiency = P / (REF • D)
where P is power (watts), REF is the error-free channel rate (bits/s), and D is distance (meters). The metric reduces to units of joules per bit-meter (J/b-m): how much energy it takes to move 1 bit 1 meter error-free, or equivalently how much power it takes to move 1 bit 1 meter in 1 second error-free. It normalizes across media and coding schemes, allowing side-by-side comparisons of technologies.
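The metric reduces to a one-line calculation. A minimal sketch in Python (the function name is mine, and the example reuses the article’s 100-Gbit/s optical CFP figures of 4 W per module at each end):

```python
def transport_efficiency(power_w, rate_bps, distance_m):
    """Energy to move one bit one meter error-free, in J/b-m."""
    return power_w / (rate_bps * distance_m)

# 100-Gbit/s CFP optical link: 4 W at each end = 8 W total,
# normalized to a 1-meter reach.
cfp = transport_efficiency(power_w=8.0, rate_bps=100e9, distance_m=1.0)
print(f"{cfp * 1e12:.0f} pJ per bit-meter")  # → 80 pJ per bit-meter
```

The same function applied to any copper or optical link, with its total end-to-end power and reach, gives the side-by-side comparison the metric is meant to enable.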
The 100-Gbit/s+ Challenge
There are two media of choice for 100-Gbit/s data: optical and copper transmission lines. Copper is limited by skin effect and dielectric losses, near and far end cross talk, and other phenomena that erode the BER numbers. Optical transmission suffers from the complexity and power required to convert multiple lanes of electrical signals to one or more modulated laser beams carried over a fiber and back again (see the figure).
Silicon processing and, to some extent, architecture have provided major improvements. Suppliers such as Texas Instruments are introducing new extremely high-performance silicon-germanium (SiGe) processes that are very low power as well as cost effective.
SiGe improves signal integrity thanks to higher launch amplitudes in drivers compared with CMOS implementations. These processes, combined with new clock-recovery and feed-forward equalization techniques, can easily reach the 25-Gbit/s+ per-lane rates now being engineered for the next generation of equipment due to hit the market in late 2013.
When comparing the transport efficiency numbers of existing 100-Gbit/s channels, it quickly becomes apparent that optical fiber modules have an advantage in driving long distances. But when it comes to the lengths under 10 meters within the enterprise, copper cables once again shine.
Normalized to 1 meter, optical (100-Gbit/s C form-factor pluggable (CFP) modules consuming 4 W at each end) takes roughly 80 picojoules per error-free bit transported, versus 24 picojoules per bit for the copper cable solution (100-Gbit/s quad small form-factor pluggable (QSFP) modules consuming 600 mW at each end of the cable). The native four lanes running at 25 Gbits/s use much less energy in box-to-box configurations. Surprisingly, most interconnects in the enterprise environment are less than 1 meter long.
Beyond 100 Gbits/s
The traditional method of moving serialized bits across backplanes and cables has mostly incorporated non-return-to-zero (NRZ) binary signaling along with line coding such as 8b/10b. Exceptions include Ethernet standards that run over variable-length CAT5/6 unshielded twisted pair (UTP) cabling, such as 1000BASE-T.
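Line coding such as 8b/10b trades line rate for DC balance and clock content: every 8 payload bits are sent as a 10-bit symbol, so 20% of the line rate is overhead. A small illustrative sketch (the function and the 25-Gbit/s lane example are mine):

```python
def payload_rate(line_rate_bps, data_bits=8, code_bits=10):
    """Effective payload throughput under a block code like 8b/10b."""
    return line_rate_bps * data_bits / code_bits

# A 25-Gbit/s lane running 8b/10b carries only 20 Gbits/s of payload.
print(payload_rate(25e9) / 1e9)  # → 20.0
```

This overhead is part of why an "error-free channel rate" rather than a raw line rate belongs in any efficiency comparison.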
These standards require far more complexity due to the limited bandwidth (<350 MHz) of the wire used and the uncertainty of the loss in the channel. To compensate for these limitations, these standards employ multi-level symbol coding, bit interleaving, forward error correction, echo cancellation, and a host of other techniques combined with dynamic link training to establish the fastest error-free connection. All of this consumes more power than its NRZ cousin, but in these cases there is no simple alternative.
As the industry pushes data connections beyond 25 Gbits/s on copper channels, the question is whether two-level (binary) NRZ coding will survive. There is already talk of splitting the camp at the 25-Gbit/s data rate. Intel has proposed using multi-level coding, even within the enterprise and across backplanes. Higher data rates would emerge from the higher bit density of symbol coding, but the energy consumed would certainly increase.
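The bandwidth argument for multi-level coding can be made concrete. A hedged sketch, using four-level pulse-amplitude modulation (PAM-4) as one example of a multi-level scheme (the rates are illustrative, not from the article):

```python
import math

def symbol_rate(bit_rate_bps, levels):
    """Baud rate needed for a given bit rate and number of signal levels."""
    bits_per_symbol = math.log2(levels)  # NRZ: 1 bit/symbol, PAM-4: 2 bits/symbol
    return bit_rate_bps / bits_per_symbol

nrz  = symbol_rate(50e9, levels=2)   # 50 GBd for 50 Gbits/s binary NRZ
pam4 = symbol_rate(50e9, levels=4)   # 25 GBd — same baud rate as 25-Gbit/s NRZ
print(nrz / 1e9, pam4 / 1e9)  # → 50.0 25.0
```

The halved symbol rate is what relaxes the channel-bandwidth requirement; the cost, as noted above, is the power and complexity of resolving smaller amplitude steps at the receiver.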
This trend can be seen in the complexity of 10GBASE-T solutions on the market today from Broadcom and other companies. As their predecessors did, these devices rely on multi-level signaling to increase the bit density through symbol encoding. Using multiple lanes (the four pairs in a CAT7 cable), the standard supports 10-Gbit/s traffic with limited channel bandwidths of around 500 MHz.
So will multi-level schemes replace binary coding as the industry moves to 40-Gbit/s channels? It’s possible, though binary NRZ is well understood, with a large array of bit-error-rate testers (BERTs) and an established test infrastructure available. There is also the issue of transmitter and receiver complexity: even with CMOS geometries falling below the 40-nm mark, power will be an issue, as will cost.
The Cisco Visual Networking Index (VNI) predictions can be pretty exciting, considering the sheer number of devices that will be interoperating within the next four years. But they also pose a problem for both the wired and wireless infrastructure providers and the data sources that supply media and information. All of this growth is driving the enterprise and infrastructure toward faster interconnection speeds.
It appears that 25-Gbit/s lanes will be here very soon, and they may quickly move to 40 Gbits/s, utilizing advanced NRZ or multi-level techniques to ensure error-free communication. But the question remains as to which standards will emerge as the winners as the industry pushes connection speeds past 100 Gbits/s to 250 Gbits/s and beyond. What do you think will happen? We’d love to hear your thoughts.