In his 1995 book Being Digital, Nicholas Negroponte, founder and chairman emeritus of the Massachusetts Institute of Technology’s Media Lab, predicted the great convergence of media where books, music, and video would all be represented by bits stored in computer networks.
Today, with the help of new legislation such as the Digital Millennium Copyright Act (DMCA) and organizations such as the Electronic Frontier Foundation (EFF), this is almost a reality. But is there any connection between digitizing our world and the drive to 16-nm CMOS and state-of-the-art high-speed analog processes? The answer is yes, and that connection is driving up the speed of all things electronic as well as the power they consume.
When the Internet was in its infancy, many of the applications foreseen for the network were peer-to-peer (P2P), such as e-mail, and machine-to-machine (M2M) for defense and other mission-critical uses. Much of the redundancy found in the Internet protocols (IP) is thanks to the requirements from the Advanced Research Projects Agency (ARPA) when the project was envisioned in the late 1960s.
It is doubtful that anyone during that time (other than a few brilliant academics) would have predicted the Internet of today. A recent study by Cisco predicts that by 2014, network traffic will exceed 63.9 exabytes (1 EB = 10^18 bytes) per month, which is equivalent to the contents of 16 billion DVDs or 21 trillion MP3 songs (see newsroom.cisco.com/dlls/2010/prod_060210.html).
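Cisco's equivalences check out as back-of-the-envelope arithmetic. The media sizes below are assumptions (roughly 4 GB for a single-layer DVD and 3 MB for an average MP3), not figures from the report:

```python
# Sanity check of Cisco's traffic equivalences, using assumed media sizes.
EB = 10**18                # bytes per exabyte (decimal definition, as above)

monthly_traffic = 63.9 * EB
dvd_bytes = 4 * 10**9      # assumed single-layer DVD capacity
mp3_bytes = 3 * 10**6      # assumed average MP3 file size

print(f"DVDs: {monthly_traffic / dvd_bytes / 1e9:.1f} billion")    # → 16.0 billion
print(f"MP3s: {monthly_traffic / mp3_bytes / 1e12:.1f} trillion")  # → 21.3 trillion
```

Both results land on the report's quoted figures, which suggests these are the media sizes Cisco assumed as well.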
What is truly fascinating is that this kind of growth is highly nonlinear. In fact, it’s growing exponentially. If you look closely at Cisco’s projection as well as the history from previous studies, it quickly becomes clear that the Internet is taking on a completely new role. Negroponte’s vision where all media is converging is finally taking root.
Negroponte stressed the migration from atoms to bits—that is, instead of buying a CD with 12 songs on it, people would download the bits representing the music on the CD, bypassing the physical media. With the arrival of iTunes, Amazon, and other services, it’s not only possible to download music (now compressed into much smaller files), but also to stream video—Netflix, Hulu, iTunes, and other services all support this capability.
With the explosion of DOCSIS (Data Over Cable Service Interface Specification) broadband high-speed Internet access, along with legislation such as the DMCA, the stage has been set for an amazing explosion of Internet traffic driven mostly by Internet video and other media. The first hurdle, though, is improving personal bandwidth.
In 2000, downloading an MP3 file would take approximately 3 minutes. Today, that same download takes roughly 5 seconds. According to the Cisco report, personal download speed has increased 35 times over the last 10 years. It could be argued that download speed gave us the capabilities we have today. Alternatively, did the inevitable conversion to bits and instant availability drive the need for speed?
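The two download times quoted above are consistent with Cisco's 35x figure, as a quick calculation shows:

```python
# The quoted improvement in personal download speed, 2000 vs. 2010.
t_2000 = 3 * 60   # seconds to download an MP3 in 2000
t_2010 = 5        # seconds for the same download today

speedup = t_2000 / t_2010
print(f"{speedup:.0f}x faster")   # → 36x faster, in line with Cisco's ~35x
```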
Either way, the network we have today is the product of engineers who defined and utilized the semiconductors used both in personal computers and network equipment. This is somewhat like a snowball rolling down a steep hill in that once the momentum started picking up and more snow was added, the growth was unstoppable.
As more services were made available (e.g., the original Napster), the bandwidth was requested (by consumers) and required for usability. This not only included the personal bandwidth at the consumer’s home, but all of the aggregated bandwidth at the head-end and Internet backbone.
As more bandwidth came online, more services were made available such as YouTube and Hulu, which provide streaming video on demand. The cycle continues today as more bandwidth is coming online with the recent release of DOCSIS 3.0 modems with downstream speeds of up to 152 Mbits/s and upstream throughput up to 108 Mbits/s.
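The DOCSIS 3.0 figures follow from channel bonding. The per-channel rates below are assumptions based on typical usable throughput (roughly 38 Mbits/s downstream for 256-QAM in a 6-MHz channel, and roughly 27 Mbits/s upstream), not numbers from the specification itself:

```python
# A sketch of where the DOCSIS 3.0 headline rates come from: bonding
# four channels. Per-channel rates are assumed typical usable figures.
down_per_channel = 38   # Mbits/s, assumed usable downstream per channel
up_per_channel = 27     # Mbits/s, assumed usable upstream per channel
bonded_channels = 4     # channels bonded in early DOCSIS 3.0 modems

print(f"downstream: {bonded_channels * down_per_channel} Mbits/s")  # → 152 Mbits/s
print(f"upstream:   {bonded_channels * up_per_channel} Mbits/s")    # → 108 Mbits/s
```

Bonding more channels scales the aggregate rate further, which is why later DOCSIS 3.0 modems advertise still higher speeds.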
The explosive growth of the Internet is not only closely tied to the technology, but also to the legality of the content providers’ copyrights and license agreements for digital copies. The passing of the Digital Millennium Copyright Act in the U.S. and similar legislation in the European Union paved the way by extending copyright protection while limiting the liability of service providers for infringement by their subscribers.
This legislation, as well as work contributed by the EFF, has enabled the legal distribution of digital books, music, and video via the Internet. Without assurance that the owners of the material would be compensated, content providers weren’t going to let anything stream (legally) anywhere. This has been a painful road for both sides, but again there is continued learning and growth.
The Cisco report shows that the systems that have been put in place to digitize and distribute what was once tied to physical media have had a profound effect on the future of the Internet as well as the mobile world. Bandwidth is now a requirement. Imagine trying to download a video over a 56-kbit/s modem connection, taking hours—not exactly instant gratification. So content providers and consumers both are looking to technology to improve how they deliver and receive these services, respectively.
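"Taking hours" is, if anything, an understatement. Assuming a roughly 700-MB video file (a typical CD-length encode; the size is an assumption for illustration):

```python
# How long a ~700-MB video would take over a 56-kbit/s dial-up modem,
# assuming the link runs flat out with no protocol overhead.
video_bytes = 700 * 10**6   # assumed video file size
link_bps = 56_000           # 56-kbit/s modem, ideal throughput

seconds = video_bytes * 8 / link_bps
print(f"{seconds / 3600:.1f} hours")   # → 27.8 hours
```

In practice, overhead and line conditions would push this well past a full day.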
From the meager beginnings of packet-switched networks to the largest machine on earth, the contemporary Internet, the heart and soul of these creations has been semiconductors. Along with the microprocessor revolution of the 1990s, the network revolution has spanned the past decade, with bandwidth seemingly growing by whole decades (factors of ten) itself.
Most computers today come standard with a Gigabit Ethernet connection. Low-cost home routers and switches support 10-Gbit switch fabrics, and low-end small business switches approach 50-Gbit/s capacity.
All of this bandwidth aggregates back to data centers, and designers are continuously looking for solutions for higher speeds, but with the caveat that power is critical. This aggregation is driving semiconductor processes and technology to provide faster speeds at lower power, both for switching and for driving the physical interconnect media.
In modern enterprise networks, switches use fiber-optic versions of Ethernet (i.e., 802.3ae) along with standardized connections for modules such as Small Form-Factor Pluggable (SFP, SFP+), XENPAK, X2, or XFP. This is very common today, though optical modules can be costly and most interconnections within these networks are under 30 meters.
The standard allows for passive copper solutions, but twinax cables are large with a limited bend radius versus the tiny fiber-optic cables. Additionally, they can block airflow crucial to equipment cooling.
Again, the semiconductor industry responded with devices such as the National Semiconductor DS100BR410 receiver, which provides low-power drivers and signal conditioning that can be placed within the shell of the connector and allow smaller gauge wire to be used.
Called “Active Cable,” this technique is challenging fiber-optic solutions at reaches under 30 meters. These cables have diameters similar to fiber cables, providing a competitive bend radius and much lower power compared to the optical modules, with the added benefit of lower cost.
Along with driving the physical media, digital processes are being pushed to ever higher speeds, requiring ever-shrinking geometries. The rush to 40-nm geometries would not have been so substantial if the market were fine with 180-nm CMOS.
Between the ASIC vendors and microprocessor suppliers, smaller-geometry processes were both a requirement to reduce cost and a means to go faster with lower power per transistor. The caveat here is that these devices pack far more transistors, continuing along the path of Moore’s Law, which surprisingly has continued to hold true.
Interestingly, today the emphasis in the personal computer market is to put more horsepower into the graphics processing unit (GPU) instead of the core processor since the visual information (video and gaming) is most important to most users, driving the need for higher-performance video subsystems as well.
It is now routine to place 2 billion transistors on a single die—something considered science fiction even 10 years ago. This trend will inevitably continue as the forces of consumers, especially in the mobile space, continue to apply pressure to service providers for more bandwidth.
In the mobile space, most Internet connections are made via 3G (third-generation) systems with limited bandwidth (somewhere around 200 kbits/s downstream). As service providers quickly found out as smartphone adoption increased, users weren’t casually looking up a restaurant address or sending a quick e-mail via their mobile device, but rather choosing to stream movies, download songs, and browse the Internet with their phone—something no one (except possibly Steve Jobs) expected users to do.
The next step might be Long Term Evolution (LTE), which should address some of these issues by providing (again) expanded bandwidth. LTE promises download speeds of 100 Mbits/s and upstream speeds of 50 Mbits/s (assuming a 20-MHz channel). Improvements to LTE are reaching 300-Mbit/s download speeds.
But for 4G (fourth generation) systems, LTE Advanced is a candidate with peak download speeds of 1 Gbit/s and peak upload speeds of 500 Mbits/s. All of this bandwidth requires improvements in system architecture as well as the semiconductor technology that lies at the heart of these designs.
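One way to put the LTE figures in perspective is the spectral efficiency they imply. This is a simple peak-rate-over-bandwidth ratio for illustration; real deployments vary with MIMO configuration and protocol overhead:

```python
# Implied peak downlink spectral efficiency for the quoted LTE numbers
# (crude ratio; actual efficiency depends on MIMO and overhead).
lte_peak_bps = 100e6     # 100 Mbits/s peak downlink
channel_hz = 20e6        # assuming the 20-MHz channel cited above

efficiency = lte_peak_bps / channel_hz
print(f"{efficiency:.0f} bit/s/Hz")   # → 5 bit/s/Hz
```

Pushing toward the 300-Mbit/s and 1-Gbit/s figures means tripling and then decupling that efficiency through wider aggregated channels and more antennas, which is exactly where the pressure on semiconductor technology comes from.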
There is a very close relationship between the consumer’s desire for media on demand and the technology to provide it. Without the content providers’ support as well, there wouldn’t be anything to consume. So, assurances that everyone will make money will drive the economy of media delivery over the Internet and in that process drive the technology to ever higher data rates.
The growth of data centers to store the endless amounts of “bits” now representing our entertainment media, and of the fixed and wireless networks to deliver them, is a testament to the vision of Nicholas Negroponte.