Our world has seen amazing changes with the advent of the information age. The latest Cisco Visual Networking Index (VNI) shows rapid growth in demand from many sources, notably streaming media.1 Cisco estimates a compound annual growth rate for Internet Protocol (IP) traffic of 29% through 2016.
All of this demand aggregates into server farms strategically located near inexpensive (and plentiful) sources of power. This growth rate is problematic for service providers, both those that supply information (such as Google) and those that bring it to your desktop or mobile phone, and challenges abound beyond the need for bigger pipes.
The need for speed has plagued the communications industry since long before the World Wide Web was a household phrase (see the figure). Computers began to be networked around 1983 with the introduction of 10-Mbit/s Ethernet (half-duplex CSMA/CD over coaxial cable). As the number of nodes in local-area networks (LANs) and wide-area networks (WANs) increased, backbone aggregation became a problem.
Fiber-optic breakthroughs, along with wavelength-division multiplexing, brought much of the high speed to our modern Internet. However, the legacy twisted pair that connected most homes to the Internet quickly reached its Shannon limit. Dial-up modems and DSL ran out of capacity to move more information, effectively capping capacity growth even as the number of nodes (users) steadily increased.
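The Shannon limit mentioned above can be made concrete with the Shannon-Hartley formula, C = B log2(1 + SNR). The sketch below uses assumed phone-line figures (about 3.1 kHz of usable bandwidth and 40 dB of signal-to-noise ratio), chosen purely for illustration rather than taken from any particular line:

```python
import math

def shannon_capacity_bps(bandwidth_hz: float, snr_db: float) -> float:
    """Shannon-Hartley channel capacity: C = B * log2(1 + SNR)."""
    snr_linear = 10 ** (snr_db / 10)  # convert dB to a linear power ratio
    return bandwidth_hz * math.log2(1 + snr_linear)

# Assumed analog phone-line figures: ~3.1 kHz usable band, ~40 dB SNR.
c = shannon_capacity_bps(3100, 40)
print(f"{c / 1000:.1f} kbit/s")  # on the order of tens of kbit/s
```

However the exact figures are chosen, the result lands in the tens of kilobits per second, which is why dial-up modems stalled near 56 kbit/s no matter how clever their modulation became.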
When hybrid fiber/coax systems began to be deployed to deliver HDTV via digital set-top boxes, high bandwidth finally reached the “last mile” to consumers. The development of Data Over Cable Service Interface Specification (DOCSIS) cable modems enabled speeds of many megabits per second, and consumers began to demand the technology.
DOCSIS modems, along with fiber to the home (FTTH), provided the catalyst that propelled the Internet to what it is today and what it will be tomorrow. But it doesn’t end there. The recent arrival of smart phones and Long Term Evolution (LTE) wireless networks has once again opened up bandwidth for consumers, now untethered by cables. However, cellular providers missed something fundamental: how consumers would use their Internet browser-enabled phones.
Providers simply thought people would periodically need to find a location, check a phone number, or perform some other short task that required services provided by the Internet. They were completely caught off guard when people started “surfing” the Web on their phones. Their backhaul networks were completely overloaded, so they began offering “plans” designed to limit the consumption of data services—at least for now.
Where It All Aggregates
The more nodes you connect and the faster they go, the more capacity you need, both in the network and from the information service providers. This trend continuously increases the demand on interconnection speeds. Notably, most IP traffic in data centers such as those at Amazon and Google flows between machines inside the centers, not between the servers and the clients connected via the Web.
During an online purchase, clicking the “Place Order” button starts a large number of transactions between servers: verifying the user and payment selection, determining the nearest fulfillment warehouse for the order, logging the financial transactions for record keeping, gathering statistics on the buyer, and more. The longer these transactions take to complete, the longer the consumer must wait for confirmation that the order has been placed.
The same thing happens during online stock transactions, gaming, banking, and other e-commerce. This “multi-transaction” traffic, along with the massive number of transaction requests, is driving local network interconnection speeds beyond 10 Gbits/s per lane to 25 Gbits/s.2
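A back-of-the-envelope calculation shows one reason per-lane speed matters for this multi-transaction traffic: even the time to clock a single full-size frame onto the wire shrinks with lane rate, and that saving repeats across every server-to-server message behind one click. The frame size and hop count below are illustrative assumptions, not figures from this article:

```python
FRAME_BYTES = 1500  # assumed full-size Ethernet payload frame
HOPS = 40           # assumed server-to-server messages behind one click

def serialization_us(rate_gbps: float) -> float:
    """Microseconds needed to clock one frame onto the wire at a lane rate."""
    return FRAME_BYTES * 8 / (rate_gbps * 1e9) * 1e6

for rate in (10, 25):
    per_hop = serialization_us(rate)
    print(f"{rate} Gbit/s: {per_hop:.2f} us/frame, "
          f"{per_hop * HOPS:.1f} us across {HOPS} messages")
```

Serialization is only one slice of end-to-end latency, of course, but it is the slice that a faster lane removes directly, and it compounds across every hop in the transaction chain.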
Most modern data centers use 10-Gbit/s Ethernet via small form-factor pluggable (SFP) or quad SFP (QSFP) connections, which can carry either fiber or special high-performance copper cables depending on the length of the run. But as equipment vendors increase port density to provide more capacity in the same space, several problems have emerged.
One issue is placing a large number of fiber modules, each typically dissipating 1 W, next to each other at the exhaust side of the equipment’s cooling system. Packing the modules tightly, as on a 48-port, 10-Gbit/s switch, in the already-elevated temperature of the air exiting the equipment can degrade the life of the lasers and pose a safety hazard.
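The heat adds up quickly, as a quick calculation shows. The faceplate dimensions here are a rough assumption for a 1U panel, used only to put the power density in perspective:

```python
PORTS = 48
MODULE_W = 1.0            # typical optical-module dissipation cited above
FACEPLATE_CM2 = 4.4 * 44  # assumed 1U faceplate area, cm^2

# Total optics power concentrated on one faceplate, in the exhaust airflow.
total_w = PORTS * MODULE_W
print(f"{total_w:.0f} W of optics, ~{total_w / FACEPLATE_CM2:.2f} W/cm^2")
```

Nearly 50 W of optics crowded onto a single faceplate, sitting in air that has already been heated by the rest of the chassis, is what pushes laser junction temperatures toward their limits.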
Data center managers are now turning to passive cable where possible for relief from both the power consumption and the cost of fiber modules. Recently, connections shorter than 15 meters have been using “active” cables that incorporate semiconductor linear equalizers and re-drivers to condition the signals and improve signal integrity. An active cable emulates a shorter interconnect, allowing smaller-gauge wire close to the diameter of fiber, but with much lower power dissipation and cost.
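To illustrate what “conditioning the signal” means, here is a toy two-tap feed-forward equalizer. The tap weights are arbitrary assumptions, and a real active-cable equalizer is an analog circuit rather than software; the sketch only shows the idea of boosting transitions to offset the cable’s high-frequency loss:

```python
# Toy 2-tap feed-forward equalizer (illustrative coefficients, not a design).
MAIN_TAP, POST_TAP = 1.0, -0.25

def equalize(symbols):
    """Emphasize symbol transitions: out[n] = main*x[n] + post*x[n-1]."""
    prev = 0.0
    out = []
    for s in symbols:
        out.append(MAIN_TAP * s + POST_TAP * prev)
        prev = s
    return out

bits = [0, 0, 1, 1, 1, 0, 0]
print(equalize(bits))  # transitions come out larger than steady runs
```

The first symbol after each transition is emitted at full amplitude while repeated symbols are attenuated, which pre-compensates for the cable attenuating fast edges more than slow ones.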
These managers can also introduce “smart cables” with features such as time-domain reflectometry (TDR) and eye monitors to continuously manage the integrity of the connection. The SFP and QSFP specifications allow for this kind of communication via a two-wire serial management interface on the connector. The equipment can identify the cable and its capabilities, letting system designers integrate these features into the management system.
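As a sketch of what that identification exchange yields, the following parses a few fields of the identification page that SFP modules and cables expose over the management interface. The field offsets follow the SFF-8472 convention, and the byte values are fabricated for illustration, not a real module dump:

```python
# Parse a few identification fields from an SFP A0h EEPROM page
# (offsets per the SFF-8472 convention; data below is fabricated).
def parse_sfp_id(page_a0: bytes) -> dict:
    return {
        "identifier": page_a0[0],                 # 0x03 = SFP/SFP+
        "passive_cable": bool(page_a0[8] & 0x04), # cable-technology bits
        "active_cable": bool(page_a0[8] & 0x08),
        "vendor": page_a0[20:36].decode("ascii").strip(),
        "part_number": page_a0[40:56].decode("ascii").strip(),
    }

page = bytearray(64)                 # fabricated example page
page[0] = 0x03                       # SFP/SFP+ module
page[8] = 0x04                       # passive copper cable
page[20:36] = b"ACME CABLES     "    # 16-byte vendor-name field
page[40:56] = b"DAC-3M          "    # 16-byte part-number field
info = parse_sfp_id(bytes(page))
print(info)
```

A management agent that reads these fields at link bring-up can log exactly which cable is plugged into each port and decide whether features such as TDR or eye monitoring are available on it.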
In Part 2, we will examine data center interconnects in more detail, along with the next generation of specifications driving data rates beyond 25 Gbits/s per lane.