If you think the Internet and World Wide Web are big now, just wait. With the forthcoming Internet Protocol (IP) services and the phenomenon of machine-to-machine communications (M2M), a thundering quake looms just around the corner. Can the Internet handle it? Can we? Thanks to electronics technology, the answer is yes in both cases.
How is the Internet expanding? First, take Internet audio. Some have said that music downloads consume roughly half of the Internet's capacity. That may be an exaggeration, and there's really no way to know for sure. But you can be certain that lots of audiophiles are forever online downloading their favorite tunes, and that isn't going to stop.
Next, consider IP video. Video puts the ultimate strain on a network, with its need for huge bandwidth even when using compression. On the way are Video on Demand (VoD) and IP video services, which will compete with cable TV. HDTV is here and growing. Some of this video will be delivered partially if not fully over the Internet. Some will be sent to cell phones, too.
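Some rough numbers show why video is the ultimate strain, even compressed. The figures below are illustrative assumptions, not measurements: a 1080-line HD picture sampled at 4:2:2 with 8-bit samples, and a broadcast-style MPEG-2 stream at roughly 19 Mbits/s.

```python
# Back-of-the-envelope HD video bandwidth (illustrative figures).
# Assumes 1920x1080 frames, 30 frames/s, 4:2:2 sampling at
# 8 bits per sample (16 bits/pixel average), uncompressed.
pixels = 1920 * 1080
bits_per_pixel = 16              # 4:2:2, 8-bit luma + chroma
fps = 30
uncompressed = pixels * bits_per_pixel * fps   # bits per second

# A broadcast MPEG-2 HD stream runs at roughly 19 Mbits/s.
compressed = 19e6

print(f"uncompressed: {uncompressed/1e9:.2f} Gbits/s")
print(f"compressed:   {compressed/1e6:.0f} Mbits/s")
print(f"compression ratio: ~{uncompressed/compressed:.0f}:1")
```

Even after a roughly 50:1 squeeze, every HD stream still eats tens of megabits per second, which is why large-scale IP video reshapes network planning.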
Voice over IP (VoIP) is another reason behind the increased Internet traffic. It won't generate anywhere near the volume of digital music or video, but we're gradually headed toward an all-VoIP phone system. The current circuit-switched network will be with us for a long time yet. Still, VoIP will steadily nab a greater share of that market, ultimately ramping up Internet traffic.
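A quick calculation makes the point about VoIP's modest volume. The sketch assumes a common G.711 call (64 kbits/s of voice payload) packetized every 20 ms, with the usual 40 bytes of RTP, UDP, and IP headers per packet.

```python
# Why a VoIP call is cheap compared with audio or video traffic.
# Assumes G.711 (64 kbits/s payload), 20-ms packets, and
# RTP(12) + UDP(8) + IP(20) = 40 bytes of header per packet.
payload_rate = 64_000                  # bits/s of voice payload
packet_interval = 0.020                # seconds per packet
payload_bits = payload_rate * packet_interval        # 1280 bits
header_bits = 40 * 8                                 # 320 bits
total_rate = (payload_bits + header_bits) / packet_interval

print(f"per-call rate: {total_rate/1000:.0f} kbits/s")
```

At roughly 80 kbits/s per call, even millions of simultaneous conversations add up to a fraction of what streaming video demands.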
A longer-term trend is increased Internet use among younger people for news. TV is still popular, but the Internet is turning into the news source of choice for many. The continuing decline in newspaper sales, running 1% to 2% per year in most cities, has many media companies concerned. Look for newspapers to expand their already substantial news Web sites at the expense of their print operations.
Perhaps the most unexpected source of increased data and Internet traffic will come from M2M communications (see the figure). Any device containing an embedded controller—and that's pretty much everything these days—is a candidate for connection to the Internet. Sensors, controllers, appliances, machine tools, and so on will soon begin to talk not only to the humans who use them, but also to one another.
What exactly is M2M? It's defined as an emerging technology that combines communications, computing, software, sensors, and power technologies to enable remote human and machine interaction with physical, chemical, and biological systems and processes. Examples include sensing of temperature, light, pressure, gases, and other physical characteristics to monitor life, health, property, security, and employee efficiency. Location services also can be implemented for humans, animals, vehicles, and capital equipment via GPS, RFID, and other technologies. Mesh sensor networks will connect to the Internet to gather, analyze, and interpret data for various telemetry applications on a scale never predicted.
As for control, M2M can actuate motors, solenoids, lights, mirrors, and other surfaces, as well as heating and cooling systems, video cameras, robots, and MEMS devices. Much of that control will come via the Internet, but the cellular telephone system will also chip in. Nokia already has implemented this so homeowners can monitor the temperature in their homes, the video cam by the pool, and other conditions while they are away. Businesses can monitor and control any number of crucial systems. And, think of the applications in manufacturing and process control. Machine tools, robots, and critical processes can literally be monitored and controlled from anywhere.
Some projections say that by 2010, there will be 10,000 devices connected to the Internet for every person now online. A Wind River spokesperson at the recent Embedded Systems Conference expects 14 billion intelligent connected devices within five years. Most of these devices will generate lots of low-speed data from multiple sources. M2M is about networking every hidden computer-based device to make the monitoring and control transparent to humans for automation on the grandest of scales.
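It's worth sanity-checking the scale those projections imply. The online-population figure below is an assumed round number of roughly 1 billion, used only to size the claim; the spread between this result and the 14-billion estimate shows how uncertain such forecasts are.

```python
# Sanity check on the projected device counts.
# Assumed figure: roughly 1 billion people online today.
people_online = 1_000_000_000
devices_per_person = 10_000            # projection cited above
projected_devices = people_online * devices_per_person

print(f"{projected_devices:.0e} connected devices")
# versus the 14-billion five-year estimate -- wildly different,
# but both point to device counts dwarfing the human population
```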
Can the Internet handle this deluge? The answer is yes, but it must scale to accommodate the massive new load. The core of the Internet is primarily 2.5-Gbit/s fiber right now, and that will gradually upgrade to 10 Gbits/s. The hardware is here now, but it's expensive. The ability to go to 40 Gbits/s already exists, but it's even more costly. A rate of 100 Gbits/s is the ultimate target, which will be achievable in time. The 100x100 Project is working to connect 100 million homes with 100-Mbit/s services. We'll eventually need the 100-Gbit/s fiber to grapple with it.
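The 100x100 goal gives a feel for why 100-Gbit/s core links matter. The sketch below takes the deliberately worst-case assumption that every home transmits at its full access rate simultaneously; real traffic is heavily statistically multiplexed, so actual core demand would be far lower.

```python
# Aggregate demand implied by the 100x100 Project's goal,
# under the worst-case assumption of all homes at full rate.
homes = 100_000_000                    # 100 million homes
access_rate = 100_000_000              # 100 Mbits/s each
peak_aggregate = homes * access_rate   # bits/s if all ran flat out

core_link = 100_000_000_000            # one 100-Gbit/s core link
links_needed = peak_aggregate / core_link

print(f"peak aggregate: {peak_aggregate/1e15:.0f} Pbits/s")
print(f"equivalent 100-Gbit/s links: {links_needed:,.0f}")
```

Even discounted by realistic utilization, the arithmetic shows why 2.5-Gbit/s cores can't be the endpoint.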
Such enormous traffic will require new fiber to come online. But thanks to the huge buildout of fiber systems in the late 1990s and early 2000s, lots of dark fiber is waiting to be activated. We probably won't have to bury more new fiber in most areas. But that fiber will need conditioning. Some of it won't handle 10 Gbits/s—much less 40 Gbits/s—without dispersion compensation and repeaters. Finally, the art of wavelength-division multiplexing (WDM) with multiple "colors" of light on a fiber at the same time has been perfected. This will push overall data rates to the terabit (10^12) and eventually petabit (10^15) levels.
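The WDM multiplier is simple arithmetic: per-fiber capacity is just the channel count times the per-wavelength rate. The channel count and rate below are illustrative assumptions for a dense-WDM system; real deployments vary widely.

```python
# How WDM multiplies a single fiber's capacity.
# Illustrative figures for a dense-WDM system; real systems vary.
wavelengths = 160                      # assumed DWDM channel count
per_channel = 40e9                     # 40 Gbits/s per wavelength
fiber_capacity = wavelengths * per_channel

print(f"one fiber: {fiber_capacity/1e12:.1f} Tbits/s")
```

Multiply that single-fiber figure across a cable of dozens of strands and the terabit-to-petabit projection stops looking fanciful.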
On top of that, we're witnessing the further expansion of fiber to the home (FTTH) with passive optical networks (PONs). These shorter-range networks bring broadband triple-play services to the home and small businesses. Both SBC and Verizon are rolling out PONs in selected areas as the technology gears up to better compete with cable for a slice of the broadband business. State and local regulations are keeping the major telecom carriers from offering TV now, thereby allowing cable to continue its monopoly. Asia and some parts of Europe already have PONs, so we're finally catching up.
Faster wireless services will affect the Internet as well. The forthcoming WiMAX broadband wireless systems will help connect even more users in rural areas and small towns that are without broadband. The faster versions of Wi-Fi, like the emerging 802.11n standard, will offer speeds from 100 to 250 Mbits/s in local-area networks (LANs), hot spots, and in-home networks. Ultra-Wideband (UWB) may play a small role in that, too.
Also coming is the more rapid adoption of Internet Protocol version 6 (IPv6), which is an expanded and improved version of the currently used IPv4. Most Internet systems and applications still employ IPv4, which is now more than 20 years old. The Internet Engineering Task Force (IETF) has had IPv6 ready for a while, but it will take time to upgrade all of the equipment and software.
IPv6's main improvement is expanded addressing. IPv4 allows a maximum of only about 4 billion Internet devices; IPv6 takes that to 3.4 × 10^38 addresses. That ought to be enough for M2M as well as anything else we can think up. IPv6 also fixes existing problems and improves routing and autoconfiguration. It's slowly coming online, but it looks like we'll be in a period of coexistence for some time.
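The address-space jump follows directly from the field widths: IPv4 addresses are 32 bits, IPv6 addresses are 128 bits.

```python
# Address-space math behind the IPv4 -> IPv6 jump.
ipv4_addresses = 2 ** 32               # 32-bit address field
ipv6_addresses = 2 ** 128              # 128-bit address field

print(f"IPv4: {ipv4_addresses:,}")     # about 4.3 billion
print(f"IPv6: {ipv6_addresses:.1e}")   # about 3.4e38
```

That's roughly 8 × 10^28 times more addresses, which is why address exhaustion disappears as a constraint on M2M.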
Will the Internet2 project and similar efforts to develop the next Internet generation help? The prevailing answer is yes. Internet2 is a nonprofit consortium of more than 200 universities, more than 60 companies, and agencies of the U.S. government that have united to develop and deploy new applications and technology for future generations of the Internet.
Internet2 already runs at 10 Gbits/s, and the consortium is testing IPv6 and developing capabilities like multicasting and quality of service in packet-based systems. Internet2 won't replace the current system, but the research and development conducted in that and parallel efforts will eventually reach the present Internet to improve and expand its capabilities.