The Communications Market Will Survive The Downturn

Jan. 7, 2010
Communications editor Lou Frenzel writes about wired and wireless communications in Electronic Design's 2010 Forecast issue. Some of the topics he covers are the latest in Ethernet and Optical Transport Network technology, WiMAX, Wi-Fi, and smart phones.

We use electronic communication devices all day long, taking them for granted. We listen to news on the car radio. We talk on our cell phones. We use the computer for Internet searches and e-mail. And when we get home, we listen to music on our MP3 player, watch some HDTV, or read the latest bestseller on our e-reader. And that’s just the beginning.

Communications technology keeps getting better, with new devices each year. While the wireless segment is especially hot, the wired arena is still healthy and growing. In both cases, we’re demanding faster data speeds, less power consumption, and mobile capabilities. Fortunately, the industry is ready to deliver.

With wireless reigning as king these days, it pays to ponder the fate of communications over wires, which won’t ever go away. Wires are how communications began, with telegraphy and the telephone back in the 1800s, and they continue today with faster short-range copper cables and even faster fiber-optic cables. While the wired side of communications is mature, it keeps getting better incrementally. Two wired technologies stand out: Ethernet and the Optical Transport Network (OTN).

ETHERNET THRIVES

Ethernet, the ubiquitous local-area network (LAN) technology, has been with us for 37 years. It has evolved with semiconductor and computer technology from the coax cable-based LAN to a carrier-grade optical network with multiple variations in between. The IEEE’s 802.3 Ethernet standards group has never been without some new variation or improvement.

The core of Ethernet is still the 10/100/1000-Mbit/s CAT5e twisted-pair LAN, which is used by more than 90% of all PCs and other computers for networking. Now we have 10-Gbit/s Ethernet, not only with different methods of optical implementation but also a copper-based 10-Gbit/s version that’s growing in use in data centers for server and network interconnectivity.

Then there is the emerging Carrier Ethernet effort, which is transforming Ethernet into a longer-range metro-area network (MAN) and wide-area network (WAN) technology by providing the quality of service (QoS) and management elements that make it competitive with some legacy Sonet/SDH networks at lower cost. And there is more to come.

According to Ethernet Alliance president Brad Booth, the next hot Ethernet variation involves iWARP. The term iWARP is short for Internet Wide Area RDMA Protocol, where RDMA stands for remote direct memory access. The iWARP technology has been kicking around for several years without much progress. However, iWARP is now a standard maintained by the Internet Engineering Task Force (IETF) and promoted by the Ethernet Alliance.

iWARP is a method of reducing transmission latency over the Internet Protocol (IP) using the Transmission Control Protocol (TCP) or Stream Control Transmission Protocol (SCTP). SCTP is a transport layer protocol that can be used in place of TCP or the User Datagram Protocol (UDP) depending on the application. Latency is a problem with standard Ethernet, and this is particularly limiting in systems using 10-Gbit/s Ethernet.

The iWARP process enables very low-latency data transmissions by being able to write data directly from the memory of one computer into the memory of another with minimal operating-system (OS) engagement. It can enable zero-copy data transmission, which uses direct memory access (DMA) to move data straight from memory on the sending node into memory on the receiving node without engaging the CPU on either end.
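To get a feel for what zero-copy buys, consider a rough single-host analogy. Real iWARP requires an RDMA-capable NIC and the OpenFabrics verbs API, but the Python sketch below contrasts a conventional read-and-send loop, which copies every byte through user space, with the kernel’s sendfile() path, which avoids that round trip:

```python
# Illustrative analogy only: true iWARP zero-copy needs an RDMA-capable NIC.
# This sketch contrasts a conventional loop, which copies each chunk from
# the kernel into a user buffer and back, with sendfile(), which lets the
# kernel move file pages straight to the socket with no user-space hop.
import socket

def send_with_copies(sock: socket.socket, path: str) -> None:
    """Conventional path: every byte crosses the user/kernel boundary twice."""
    with open(path, "rb") as f:
        while chunk := f.read(64 * 1024):  # copy 1: kernel -> user buffer
            sock.sendall(chunk)            # copy 2: user buffer -> kernel

def send_zero_copy(sock: socket.socket, path: str) -> None:
    """Zero-copy path: socket.sendfile() wraps os.sendfile() where available."""
    with open(path, "rb") as f:
        sock.sendfile(f)                   # pages move inside the kernel
```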

A combination of hardware and software implements iWARP. The hardware usually resides on a network interface card (NIC) in the form of TCP offload engines (TOEs) that relieve the CPU of this routine networking task. The software takes the form of an open-source stack developed and maintained through the efforts of the OpenFabrics Alliance. The iWARP stack is now deployed in Linux and is available in Windows Server 2008.

While 10-Gbit/s Ethernet is already used in data centers, its use can increase significantly if the latency issue is addressed. With iWARP, 10-Gbit/s Ethernet can expand further in data centers and become more useful in building server clusters, high-performance computing (HPC) systems, and supercomputers. It also can better compete with InfiniBand connections, which up to now have dominated short-distance connectivity in data centers and HPC clusters.

Also, the Ethernet Alliance is promoting Converged Ethernet. This effort demonstrates how 10-Gbit/s Ethernet can deliver a high-performance enterprise data center infrastructure that transports client messaging, storage, and server applications communications over a converged network.

Converged Ethernet uses iWARP for inter-processor communications. It also supports Fibre Channel over Ethernet (FCoE) and Internet Small Computer Systems Interface (iSCSI) for storage. It uses data center bridging protocols such as priority-based flow control and enhanced transmission selection. And, it uses SFP+ direct-attach cables and optical transceivers as well as 10GBase-T copper.

Furthermore, the IEEE’s Ethernet Task Force is nearing completion of its 802.3ba standard, which is designed to give us 40-Gbit/s and 100-Gbit/s data transmission systems over fiber. Its goal is to create a network that supports the traditional Ethernet frame format and to create new media access controller (MAC) layers and physical layers (PHYs) to support 40 and 100 Gbits/s.

We’ll likely see 40 Gbits/s over 10 km of single-mode fiber (SMF), 100 m of optical multimode 3 (OM3) multimode fiber (MMF), 7 m of copper cable, and 1 m of backplane. For 100 Gbits/s, expect reaches of 40 km and 10 km over SMF, 100 m over OM3 MMF, and 7 m over copper cable.

With today’s growing and affordable 10-Gbit/s technology, it’s relatively easy to deliver 40 Gbits/s by using four 10-Gbit/s fibers or putting four 10-Gbit/s streams on different wavelengths over SMF. But 100 Gbits/s is tougher. Some of the schemes being considered are four 25-Gbit/s streams or ten 10-Gbit/s streams on a fiber. It will be interesting to see which emerges as the prime technology. The IEEE is targeting June 2010 for final approval of 802.3ba.
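The lane arithmetic is simple enough to check in a few lines of Python (a sketch of the options named above, not of any particular PHY):

```python
# Back-of-the-envelope lane math for the 802.3ba options discussed above.
def lane_count(total_gbps: int, per_lane_gbps: int) -> int:
    assert total_gbps % per_lane_gbps == 0
    return total_gbps // per_lane_gbps

print(lane_count(40, 10))   # 4  -> four 10-Gbit/s fibers or wavelengths
print(lane_count(100, 10))  # 10 -> ten 10-Gbit/s streams on a fiber
print(lane_count(100, 25))  # 4  -> four 25-Gbit/s streams
```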

If you’re wondering who needs this speed, think again. With the data load on our networks increasing at a geometric rate, virtually all networks are moving toward an overload condition. Thanks to the endless expansion of video content, it is getting tougher for Internet backbones and ISPs to provide sufficient capacity to keep up. It is even worse for wireless systems.

Systems running at 40 and 100 Gbits/s will be welcome as soon as standards are available. While 40-Gbit/s Sonet/SDH systems are available now, these older systems lack a path to even higher rates. With faster fiber, it’s time for a new, more data-oriented protocol for optical systems. And that protocol, OTN, is already in place.

OTN ON TOP

OTN (Optical Transport Network) is the next-generation fiber-optic network technology that is gradually replacing Sonet/SDH and other optical systems for the transport of high-speed data. As traffic on the Internet and wireless networks continues to rise, carriers need a transport system with higher speeds and the ability to accommodate many different protocols and formats. IP/Ethernet services are gradually displacing traditional Sonet/SDH and PDH-based (plesiochronous digital hierarchy) time-division multiplexing (TDM) services, and an improved transport system is needed. OTN is it.

OTN is not new. The ITU approved it back in 2001. But during the downturn of the early 2000s, optical equipment and services suffered, delaying its deployment. Since then, data volume has increased by an order of magnitude, and new systems are desperately needed. The ITU designed OTN to serve that purpose. The relevant ITU standards are G.709 and G.872.

OTN’s goal is to help converge a wide range of services onto one common transport service to reduce operating expenses and new capital expenditures. It provides a common way to deliver high-speed optical services up to 100 Gbits/s while providing network administration, performance monitoring, switching, and fault isolation just like Sonet/SDH.

Network operators want a system that is compatible with dense wavelength division multiplexing (DWDM) and supports operations, administration, maintenance, and provisioning (OAM&P) like Sonet/SDH. At the same time, operators want to be able to carry traffic from Ethernet, Fibre Channel, enterprise system connections (ESCON), and Sonet/SDH, as well as legacy asynchronous transfer mode (ATM) and Frame Relay.

OTN can carry a full 10-Gbit/s Ethernet LAN PHY from IP/Ethernet switches and routers. It can combine multiple signals of different protocols within a wrapper that can be transported over a single wavelength for maximum utilization and cost effectiveness, which is why OTN is sometimes called Digital Wrapper technology.

OTN carries data at four basic rates. OTU1 designates a line rate of 2.7 Gbits/s, letting it transport an OC-48/STM-16 Sonet/SDH signal. OTU2 defines a rate of 10.7 Gbits/s so it can handle OC-192/STM-64 or 10-Gbit/s Ethernet. OTU3 has a rate of 43 Gbits/s and will support OC-768/STM-256 or other 40-Gbit/s protocols. OTU4 covers future 100-Gbit/s Ethernet.
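If you’re curious where those odd line rates come from, G.709 derives each OTUk rate from its Sonet/SDH client rate through a fixed frame-expansion multiplier. A minimal sketch, assuming the standard 255/(239 − k) ratios:

```python
# G.709 rate arithmetic (sketch): each OTUk line rate is the client rate
# scaled by 255/(239 - k) to cover OTN overhead plus RS(255,239) parity.
SDH_CLIENT_GBPS = {1: 2.48832, 2: 9.95328, 3: 39.81312}  # STM-16/64/256

def otu_line_rate_gbps(k: int) -> float:
    return SDH_CLIENT_GBPS[k] * 255 / (239 - k)

for k in (1, 2, 3):
    print(f"OTU{k}: {otu_line_rate_gbps(k):.3f} Gbits/s")
# Prints 2.666, 10.709, and 43.018, matching the rounded 2.7-, 10.7-, and
# 43-Gbit/s figures above. (OTU4 was still being defined at press time.)
```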

A key feature is the addition of a forward error correction (FEC) capability. OTN uses the Reed-Solomon RS(255,239) algorithm. It adds a coding gain of about 5 dB and significantly improves the bit error rate (BER). The result is a larger link budget that can extend the reach by as much as 20 km per link.
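The Reed-Solomon numbers unpack easily. A quick sketch of what RS(255,239) costs and corrects:

```python
# RS(255,239) at a glance: each 255-byte codeword carries 239 data bytes
# plus 16 parity bytes and can correct up to (n - k) / 2 = 8 byte errors.
n, k = 255, 239
parity_bytes = n - k              # 16
correctable = parity_bytes // 2   # 8 byte errors per codeword
overhead = n / k - 1              # ~6.7% line-rate expansion
print(parity_bytes, correctable, f"{overhead:.1%}")
```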

OTN is the optical backbone of the future. The movement to this technology is already underway and is expected to accelerate as Internet and wireless traffic continues to expand. Carriers cannot afford to wait for the forthcoming 100-Gbit/s standards and are moving quickly to 40 Gbits/s. Therefore, new expenditures on DWDM optical networks will go to OTN and 40-Gbit/s systems.

Applied Micro (formerly AMCC) and PMC-Sierra already are making OTN chips. Applied Micro’s Yahara framer/mapper/PHY chip includes the FEC circuitry. Designed for both long-haul and metro optical networks, it works with the company’s Rubicon and Pemaquid chips, which are also used in implementing OTN equipment.

PMC-Sierra’s META 20G device enables OTN on carrier Ethernet switch and router equipment. It supports Carrier Ethernet IP, OTN, and Sonet/SDH on a single chip. The company’s HyPHY 20G device enables the convergence of high-bandwidth data, video, and voice services over OTN metro infrastructures (Fig. 1).

WiMAX FIRST TO 4G

Despite the downturn, wireless has made significant advances on a number of fronts in 2009, and the outlook is rosy. Developments in other areas, particularly next-generation technology, have offset the slow growth in the cellular area.

Fourth-generation (4G) wireless is generally considered to include WiMAX for broadband and Long-Term Evolution (LTE) for cellular. WiMAX has been around for a while and is growing, while LTE is still in the wings. The first deployments of LTE will come in late 2010, but it will be 2011 and beyond before you see any volume of LTE adoption and usage.

In the meantime, WiMAX continues to make progress as a broadband wireless access technology. WiMAX, which is short for Worldwide Interoperability for Microwave Access, is a wireless data communications system designed for wireless MANs. It can provide broadband wireless access (BWA) up to 30 miles (50 km) for fixed stations and 3 to 10 miles (5 to 15 km) for mobile stations. Data rates vary from about 1 Mbit/s at the low end to 20 Mbits/s depending on the service operator and the range.
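Why do rates fall off with distance? Shannon’s capacity bound, C = B log2(1 + SNR), sets the ceiling any channel can carry, and SNR drops as range grows. The sketch below uses hypothetical SNR values for a 10-MHz WiMAX channel; real systems run well below the bound:

```python
# Illustrative only: a Shannon-capacity ceiling for a 10-MHz WiMAX channel.
# The SNR values are hypothetical; deployed links achieve a fraction of
# this bound after coding, framing, and retransmission overhead.
import math

BANDWIDTH_HZ = 10e6  # a common WiMAX channel width

def shannon_ceiling_mbps(snr_db: float) -> float:
    snr = 10 ** (snr_db / 10)
    return BANDWIDTH_HZ * math.log2(1 + snr) / 1e6

for label, snr_db in (("near the basestation", 20),
                      ("mid-cell", 6),
                      ("cell edge", -3)):
    print(f"{label}: <= {shannon_ceiling_mbps(snr_db):.1f} Mbits/s")
```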

While Wi-Fi/802.11 wireless LANs (WLANs) may be faster, their range is limited in most cases to only 100 to 300 ft (30 to 100 m). WiMAX provides DSL-like services to rural areas. It’s also widely used for backhaul in Wi-Fi hot spots, traffic monitoring, and video surveillance systems. And, it’s a candidate for widespread use in the forthcoming rural broadband stimulus programs being planned by the government.

The IEEE 802.16 standard defines WiMAX. The WiMAX Forum offers a means of testing manufacturer equipment for compatibility. The industry group also promotes the development and commercialization of the technology. Clearwire’s wireless high-speed Internet service is a great example of a WiMAX fixed broadband application.

Although hampered by the downturn and funding delays, Clearwire ended 2009 with about 3.9 million subscribers, according to the Yankee Group. The researchers also project as many as 92.3 million subscribers by 2015. ABI Research projects about 2 million WiMAX mobile subscribers by the end of 2009.

WiMAX is showing up as an embedded wireless technology like Wi-Fi as well. It’s now built into more than 40 laptops and netbooks. Intel’s forthcoming WiMAX chipsets for notebooks are expected to be widely adopted to provide global roaming on WiMAX networks. With embedded WiMAX, you can hit basestations up to several miles away instead of having to be within a few hundred feet of a Wi-Fi access point. You can also expect to see WiMAX cell phones later this year.

Overall, the future of WiMAX is excellent. There are more than 500 WiMAX deployments in 145 countries, according to the WiMAX Forum, and that number will continue to grow. Also expect WiMAX to be a major player in the government’s broadband stimulus program. It is estimated that more than 10 million U.S. residents in rural communities do not have a high-speed Internet connection, with “high speed” meaning a data rate of more than 768 kbits/s. WiMAX is expected to fill at least some of that need. A good source of WiMAX info is www.wimax.com.

MATURE WI-FI STILL KICKING

Wi-Fi’s big milestone in 2009 was the IEEE’s final approval of the 802.11n standard. Of course, that did not stop all the chip and equipment vendors from offering pre-standard “Draft n” products that did well. But with the final standard in place, Wi-Fi chips and products are expected to continue to grow.

The Wi-Fi Alliance (WFA) announced its Certified n program, which tests and certifies equipment for compatibility and interoperability. The WFA added four test procedures to the original Draft n program: support for simultaneous transmission of up to three spatial streams; packet aggregation for more efficient transfers; space-time block coding (STBC), a multiple-antenna encoding method that improves reliability; and channel coexistence measures for 40-MHz operation in the 2.4-GHz band.

ABI Research estimates that more than 1 billion Wi-Fi chipsets will be shipped in 2011. Wi-Fi will continue as one of the most dominant wireless protocols around. 802.11n chipsets are now beginning to replace 802.11g chips in single-stream (no MIMO, or multiple-input multiple-output) products and applications.

Furthermore, Wi-Fi will continue to be added to more products such as smart phones, games, portable media players, TV sets, and cameras. Broadcom is still the Wi-Fi chip leader, but Atheros, Intel, Marvell, and Ralink continue to battle for market share. And, both Qualcomm and Quantenna recently entered the fray with their 4x4 MIMO 11n chipsets.

We can expect higher speeds and longer range from Wi-Fi in the coming year and beyond as well. With 11n, MIMO will be implemented in more products, pushing data rates up from the 54-Mbit/s maximum of 11g. We could see rates from 100 to 600 Mbits/s depending on the MIMO configuration. MIMO also will greatly extend the range of Wi-Fi. Under the right conditions, it could compete with WiMAX for some applications.
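Those headline numbers fall out of simple per-symbol arithmetic. A sketch using the standard top-end 11n parameters (a 40-MHz channel with 108 data subcarriers, 64-QAM, rate-5/6 coding, and the 3.6-µs short-guard-interval symbol):

```python
# Where 802.11n's 600-Mbit/s headline rate comes from: bits per OFDM
# symbol divided by symbol time, multiplied by the spatial-stream count.
def dot11n_phy_rate_mbps(streams: int) -> float:
    subcarriers = 108       # data subcarriers in a 40-MHz channel
    bits_per_sc = 6         # 64-QAM
    code_rate = 5 / 6       # highest 11n convolutional code rate
    symbol_us = 3.6         # OFDM symbol with the 400-ns short GI
    return streams * subcarriers * bits_per_sc * code_rate / symbol_us

print(dot11n_phy_rate_mbps(1))  # 150.0 Mbits/s per stream
print(dot11n_phy_rate_mbps(4))  # 600.0 Mbits/s with 4x4 MIMO
```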

Two 802.11 task groups (TGs) are looking at data rate standards greater than 1 Gbit/s for Wi-Fi. The 802.11ac Task Group is working on a standard for use below 6 GHz that could provide 1 Gbit/s. The 802.11ad TG is working on a standard to use in the 60-GHz unlicensed band. This is expected to be used in home networking for video transmissions. Data rates could reach 2.5 Gbits/s over short distances using beam-forming techniques. We won’t see much on this for a while, but we will provide news about any updates and announcements.

Look for more standards that offer incremental improvements. For example, 802.11s promises a way to mesh-network Wi-Fi nodes. 802.11w will offer greater security. 802.11u will provide a way to connect with non-802.11 networks such as cellular. The forthcoming 802.11v will improve Wi-Fi management. And 802.11k will add a radio resource management capability that will enable client radios and access points to be aware of one another.

Furthermore, 802.11z will permit direct-link peer-to-peer connectivity between Wi-Fi enabled devices, bypassing access points. The forthcoming 802.11y will implement Wi-Fi in the new U.S. microwave space of 3650 to 3700 MHz. Additionally, a white space standard is in the works (802.11af). Look for some of these standards in the coming year and deployment in later years.

SMART PHONES CARRY THE MARKET

Last year was good for wireless in general despite the ongoing economic downturn. Specifically, it was the year of the smart phone. But lots of other wireless products, services, and technologies emerged.

For example, look for an impressive array of smart phones as more and more consumers embrace them and the wide variety of features they offer. On the design side, maybe we should prepare for a power-management crisis. The availability of a huge number of apps is really helping to drive the growth of this market. And, expect new cell-phone players. Everyone wants to get into the billion-plus handset market. Even Dell and ZTE now have or will have new products, especially in the huge and growing Chinese market (Fig. 2).

Smart phones have been the smallest handset category for years, with only a few serious models on the market. These included some RIM BlackBerry and Nokia models, the Palm products, and the Apple iPhone. But this year, we saw a huge range of new products and a serious increase in smart-phone sales. The new models included Apple’s iPhone 3G and 3GS, RIM’s BlackBerry Storm2 and Bold, the Palm Pre and Pixi, the Motorola Cliq and Droid (Fig. 3), and a whole batch of phones from Samsung and LG based on the Android OS.

It’s good to see Motorola and Palm back in this business. The Pre and Droid seem to be great products in a tough segment. And let’s hope Nokia gets back on track with its N97 and N900 models (Fig. 4). Add to all these new smart phones the big app movement, and we have a major growth sector and a very interesting future. The next big effort seems to be focused on bringing a better Web experience to the cell phone. New browsers, new user interfaces, and resized Web pages are in the works.

Ari Virtanen, executive VP of wireless solutions at Elektrobit, sees PCs wanting to be smart phones and smart phones wanting to be PCs. The two are moving toward one another, but the outlook is far from clear. The N900, Nokia’s high-end flagship, is a cell phone as well as an Internet tablet (Fig. 5). It has an 800-by-480 touchscreen and a full QWERTY keyboard plus 32 Gbytes of memory. It also features a 5-Mpixel camera, Wi-Fi, and Bluetooth. It is an early step in another incremental movement toward what we can generally call mobile Internet devices (MIDs).

Netbooks are following that trend as they combine office productivity tools and communications. Networking is probably their greatest application right now. We will see more netbooks in 2010, and their massive presence means lots of activity on Wi-Fi and 3G networks. On the handset side, some consumers would like to see office apps on smart phones. The processing power is certainly there, but the screen size and keyboard will limit what can be done. But who knows? The N900 is a step in that direction.

Virtanen also brings up another interesting phenomenon: the proliferation of handset OSs, including Apple’s iPhone OS, Google’s Android, RIM’s BlackBerry OS, Microsoft’s Windows Mobile, Palm’s webOS, and several other versions of Linux. Symbian is still the world leader in terms of total volume thanks to Nokia, but Nokia used its new Linux-based OS, Maemo 5, in the N900.

Is Symbian fading or is Maemo just a better fit? Nokia recently announced its commitment to Symbian, and we can expect some updates and improvements in the near future. Also recently, Samsung, which is the second largest cell-phone handset maker, introduced its bada OS.

This means that each handset company wants control of its own OS so it can steer the direction of the many forthcoming apps. The open OS idea is tempting. But will developers take, say, Android in so many different directions that an app can run only on the phone it was developed for? Maybe Apple and the other companies are right about using a controlled OS. The cell-phone manufacturers and the carriers want to control the apps movement, but it is not clear who will win that battle.

Finally, according to a new study by Parks Associates, more than 30% of the 70 million U.S. broadband households surveyed in 2009 will have at least one smart phone by the end of the year. Also, 40% said they would pay extra for the convenience of having Internet services on their cell phones. As mobile-friendly social networking and navigation services grow in popularity as well, non-text mobile data services will continue to lead the industry in 2010. For now, Internet and e-mail services are eclipsing the adoption of other mobile services like mobile TV and music.

DELAY OF LTE

LTE is the cellular industry’s 4G solution. Development is ongoing, and we can expect LTE in the future, but not quite yet. Trials continue, and service is expected to become available in selected areas of Europe and maybe even here in the U.S. Handsets are scarce, and it is anyone’s guess what kind of battery life they will have. LTE will see gradual adoption, though some carriers will try to get a head start over others. Verizon will probably be first in the U.S. with service, as will NTT DoCoMo in Japan. Look for deployments to increase in 2011 and 2012, says ABI Research.

In the meantime, the operators have continued to build out their 3G networks and extend that investment a year or so more. Verizon is probably in the lead here with its CDMA networks, but AT&T has been very active in adding HSPA networks in major markets. AT&T will continue that approach while Verizon gets to LTE first. The massive infrastructure investment required is slowing the adoption of LTE. And don’t forget, on top of all that new LTE gear, carriers still have to support the older phone technologies.

On the infrastructure side of the business, there are some interesting challenges to meet. Jon Hall of Analog Devices sees several clear trends. One is a movement to a multicarrier common radio platform for basestations. To support the LTE effort while maintaining backward compatibility with existing phones and technologies, a multicarrier capability is essential, and that implies very wide bandwidth. Bandwidths in the 135-MHz range are a target. On top of these tough requirements, there is also pressure to lower costs, reduce size, and, most importantly, reduce power consumption.

Analog Devices, with its broad line of data converters, is on track to meet those needs. Already, digital-to-analog converters (DACs) are fast enough to implement a direct-output transmitter, where the baseband digital data is sent to a DAC that generates the final RF signal without further frequency conversion. But the analog-to-digital converters (ADCs) are not there yet. Downconversion is still a necessary first step on the receive side, though sampling rates and input bandwidths keep climbing, bringing direct multicarrier reception closer to reality.
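The Nyquist arithmetic behind that ADC gap is straightforward. A sketch, with an assumed (hypothetical) 25% guard margin for filter roll-off:

```python
# Why a 135-MHz multicarrier block strains today's ADCs: Nyquist demands
# a sample rate of at least twice the bandwidth for real IF sampling, or
# roughly the bandwidth per converter for a zero-IF I/Q pair. The 25%
# margin for filter roll-off is an illustrative assumption.
TARGET_BW_MHZ = 135.0

def min_fs_real_if_msps(bw_mhz: float, margin: float = 1.25) -> float:
    """Single-ADC real IF sampling: fs >= 2 x bandwidth, plus margin."""
    return 2 * bw_mhz * margin

def min_fs_zero_if_msps(bw_mhz: float, margin: float = 1.25) -> float:
    """Zero-IF I/Q sampling: complex rate >= bandwidth, per ADC."""
    return bw_mhz * margin

print(f"real IF: >= {min_fs_real_if_msps(TARGET_BW_MHZ):.0f} Msamples/s")
print(f"zero IF: >= {min_fs_zero_if_msps(TARGET_BW_MHZ):.0f} Msamples/s per ADC")
```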

Other future basestation goals include a green movement and a backhaul overhaul. Most carriers are looking at basestation power consumption, and they’re on a path to a greener infrastructure that will take years to implement. The operators are also widening their efforts to fix the backhaul bottleneck that is limiting wireless 3G and 4G data services, especially the video explosion.

OTHER DEVELOPMENTS

The femtocell movement has been in the works for years, but there have been few adoptions and even fewer actual customers. Operators are still testing femtos, and lots of reference designs are available. The main issues seem to be both technical and marketing related.

For example, how can the carriers ensure that home femto basestations won’t interfere with one another and their primary macro basestations? And will consumers really pay for improved service at home? Analysts have cut back their projections drastically. Is there a real need, or is this just a technology looking for a market?

Meanwhile, the machine-to-machine (M2M) cellular movement did well in 2009 and is expected to grow massively. 2010 should be a good year as big carriers like AT&T and Verizon start to get involved. The embedded cell phone is also taking off thanks to its inclusion in the Amazon Kindle and other e-readers. Expect similar products and related services to emerge.

Mobile TV is gaining strength as well. More operators are offering broadcast TV service thanks to Qualcomm’s MediaFLO effort. Look for more down the line as additional handsets get TV receivers.

Handheld GPS receivers with built-in maps and direction-finding software have been a big hit in the U.S. Garmin, TomTom, and other companies have done well with double- and triple-digit percentage growth. However, market research firm iSuppli indicates that 2009 was a down year compared to 2008 and predicts that future personal navigation device (PND) sales will be flat, leveling out at about 41.2 million units in 2013. Will PNDs fade away? That’s not likely, but competition from navigation systems built into vehicles and embedded GPS and nav software in cell phones has clearly blunted PND growth.

And, 2009 brought us the white space spectrum when analog TV switched off in June. That means many megahertz of spectrum are now available for new unlicensed wireless projects. So far, we’ve only seen some demo projects for faster rural broadband access, but those worked well, and we can expect to see more. We should see some innovative new products this year, provided the FCC doesn’t change its mind and take that spectrum away to auction it off for broadband efforts.
