Numbers succinctly tell the story of the unmitigated expansion of mobile broadband: The number of cellular mobile subscribers using broadband connections is expected to balloon from 350 million (at the end of 2009) to 1.7 billion by 2014. Of course, with such growth also comes a serious increase in mobile data traffic.
Last December, mobile data surpassed voice traffic on a global basis for the first time in history. The crossover occurred at approximately 140,000 terabytes per month in both voice and data traffic. Data traffic (Fig. 1), which grew globally at a rate of 280% during each of the last two years, is forecast to double annually over the next five years, and reach 1.8 exabytes per month by 2017.
A Strain On Backhaul Networks
Mobile backhaul is the network for transporting mobile traffic between cell sites (BTSs/NodeBs) and radio controllers (BSCs/RNCs). It’s also a major contributor to the high costs of building out and running a mobile network—estimated at approximately 25% to 30% of total operating expenses.
As more demands are placed on supporting increased data traffic, it’s important that mobile operators optimise their networks with the most cost-efficient backhaul techniques. Traditionally, time-division multiplexed (TDM) circuits have interconnected basestations to regional network controllers, an approach that worked fine for voice-only systems or low-bandwidth data traffic. However, the surge in mobile broadband traffic has overloaded TDM circuits, and providers can’t keep pace.
Simply adding more TDM circuits seems like a solution, but that won’t work. The recurring monthly costs for legacy backhaul technologies (PDH, ATM over PDH, and SONET/SDH) increase linearly with traffic. Also, the relatively flat average revenue per user (ARPU) an operator can charge for enhanced services prevents carriers from passing these increased expenses on to consumers.
As a result, to achieve a lower cost per bit, operators are looking to move to packet-based backhaul techniques using IP and Ethernet. Using Carrier Ethernet for wireless backhaul allows operators to support large bandwidth increases from cell sites, while keeping operational costs in check. Operators can significantly reduce their cost per connection by moving from TDM to Ethernet (Fig. 2).
Jumping From TDM To IP/Ethernet Backhaul
Technologies such as IEEE 802.1Q VLAN tagging and Ethernet Operations, Administration, and Maintenance (OAM) have turned Ethernet into a viable technology for transporting services over the mobile backhaul. In a survey of global service providers recently conducted by Infonetics, 100% of service-provider respondents claimed to be deploying IP/Ethernet backhaul in 2010. However, this will be a ‘phased’ migration, as outlined by the MEF 22 Mobile Backhaul Implementation Agreement (Fig. 3).
The first phase is a hybrid implementation: Carrier Ethernet is used for packet offload of data services, and TDM is retained for voice since it requires clock synchronisation across the network for call setup and handover. It’s not an ideal approach, though, because it forces carriers to maintain and pay for two separate networks.
The ultimate goal is phase two, in which a single Carrier Ethernet network is used to backhaul all services. The Infonetics survey shows that 65% of service providers plan to eventually move to a single IP/Ethernet backhaul. Before pursuing this final stage of migration, carriers must have confidence that timing-over-packet (ToP) technologies can satisfy strict clock synchronisation requirements of wireless standards.
The Timing-Over-Packet Roadblock
Unlike TDM, Ethernet wasn’t designed to carry synchronous information. Moreover, it can’t “natively” align clock frequency across devices in the network to the level of accuracy and stability required for the setup, hand-over, and reliability of mobile-phone connections.
To address this issue, MEF 22 recommends the use of IEEE specification 1588 and ITU-T Synchronous Ethernet (SyncE). These ToP technologies can be used to synchronise clock frequency across devices in the Ethernet backhaul network. They significantly improve clock accuracy and stability to satisfy timing requirements of supporting mobile voice subscribers.
SyncE technology achieves frequency synchronisation across Ethernet network devices. The basic function of SyncE interfaces is to derive the frequency from the received bit stream and pass that information up to the system clock. The receive port and bit stream to be used by the system clock is decided by the Ethernet Synchronization Messaging Channel (ESMC) protocol, which communicates the clock “quality level” of each ingress port. It’s important that the device recognise and select the highest quality ingress clock for setting the system clock. That’s because the transmit interfaces lock to the system clock, and then propagate the bit stream to downstream nodes.
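The quality-level selection that ESMC performs can be sketched in a few lines. This is an illustrative model only, not an implementation of the protocol; the QL names follow the ITU-T G.8264/G.781 option I hierarchy, and the function and port names are hypothetical.

```python
# Illustrative ESMC-style clock selection. QL names follow the
# ITU-T G.8264/G.781 option I hierarchy, best to worst; the
# function and port names are hypothetical.
QL_RANKING = ["QL-PRC", "QL-SSU-A", "QL-SSU-B", "QL-EEC1", "QL-DNU"]

def select_system_clock(ports):
    """Pick the ingress port advertising the best quality level.

    `ports` maps port name -> advertised QL string.
    QL-DNU ("do not use") ports are excluded outright.
    """
    usable = {p: ql for p, ql in ports.items() if ql != "QL-DNU"}
    if not usable:
        return None  # no valid reference: hold over on the local oscillator
    # Lower index in QL_RANKING means higher clock quality.
    return min(usable, key=lambda p: QL_RANKING.index(usable[p]))

# eth1 advertises QL-PRC (primary reference clock), so it wins.
print(select_system_clock({"eth0": "QL-SSU-A",
                           "eth1": "QL-PRC",
                           "eth2": "QL-DNU"}))  # → eth1
```

Once the best ingress clock is chosen, the system clock locks to it and every transmit interface propagates that frequency downstream, as described above.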
The fact that clock synchronisation happens at the physical layer, hop by hop, in a similar manner to synchronisation in TDM networks, appeals to carriers. However, in addition to frequency synchronisation, multichannel communications require time-of-day (ToD) synchronisation (an accurate value of the current absolute time) to achieve phase alignment. Thus, IEEE 1588 is necessary for synchronising Ethernet networks carrying many cellular technologies, as well as video streaming and interactive gaming.
The IEEE 1588 standard specifies the Precision Time Protocol (PTP) for network synchronisation. Unlike Synchronous Ethernet, IEEE 1588/PTP is a purely packet-based solution, with the actual clock values being passed inside the payloads of special packets dedicated to that task. The standard establishes a master-slave hierarchy of clocks in a network, where each slave synchronises to a master clock that acts as the primary time source.
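At the heart of PTP is a simple timestamp exchange: the master sends a Sync message (sent at t1, received at t2) and the slave replies with a Delay_Req (sent at t3, received at t4). Assuming a symmetric path, the slave can solve for both its clock offset and the one-way path delay. A minimal sketch of that arithmetic, with illustrative timestamp values:

```python
def ptp_offset_and_delay(t1, t2, t3, t4):
    """Offset/delay computation from one PTP Sync / Delay_Req exchange.

      t1: master sends Sync        t2: slave receives Sync
      t3: slave sends Delay_Req    t4: master receives Delay_Req

    All timestamps in nanoseconds; assumes a symmetric path.
    """
    offset = ((t2 - t1) - (t4 - t3)) / 2   # slave clock error vs. master
    delay = ((t2 - t1) + (t4 - t3)) / 2    # one-way mean path delay
    return offset, delay

# Example: the slave's clock runs 500 ns ahead of the master,
# and the path delay is 1000 ns in each direction.
print(ptp_offset_and_delay(t1=0, t2=1500, t3=2000, t4=2500))
# → (500.0, 1000.0)
```

The slave then steers its local clock by the computed offset; repeating the exchange continuously keeps it locked to the master.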
Version 2 of the IEEE 1588 Precision Time Protocol (IEEE 1588v2) recently introduced two other logical device types that are commonly built into switches and routers used between master and slave clocks:
• A boundary clock (BC) increases system scalability by acting as a slave clock to an upstream master clock, and as a master to multiple downstream slave clocks. In large systems, the introduction of boundary clocks allows for many more slave clocks than a single master could handle.
• A transparent clock (TC) is typically built into each Ethernet switch positioned within a PTP network. It thus mitigates the effects of forwarding delays (caused by packet queuing within the switch, for example) that would otherwise reduce the accuracy of clock recovery.
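A transparent clock compensates for queuing delay by measuring each PTP packet's residence time in the switch and adding it to the packet's correctionField, which the slave subtracts when computing offset. A simplified sketch of that accumulation (function and variable names are illustrative, not from the standard):

```python
def apply_transparent_clock(correction_ns, ingress_ns, egress_ns):
    """Add a packet's residence time in one switch to its PTP
    correctionField, so queuing delay doesn't corrupt clock recovery.

    Illustrative model; field handling is simplified vs. IEEE 1588v2.
    """
    residence = egress_ns - ingress_ns  # time spent queued in this switch
    return correction_ns + residence

# A Sync message crosses two transparent-clock switches:
corr = 0
corr = apply_transparent_clock(corr, ingress_ns=100, egress_ns=850)    # +750 ns
corr = apply_transparent_clock(corr, ingress_ns=2000, egress_ns=2600)  # +600 ns
print(corr)  # → 1350
```

The slave sees a total correction of 1350 ns and removes it from the measured delay, leaving only the (relatively stable) propagation delay in the offset calculation.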
Testing Must Precede Deployment
With industry standards in place, SyncE and IEEE 1588v2 technologies have received wide acceptance throughout the industry, and they’re now finding their way into today’s Ethernet chip sets. However, standards don’t guarantee inter-vendor interoperability—granular details, such as specific field values, are left up to each equipment vendor’s discretion.
SyncE and 1588v2 interoperability testing between different vendors’ devices prior to deployment is critical to ensure network-wide clock synchronisation. Phase and frequency synchronisation with 1588v2 and SyncE was a key focus of the EANTC showcase at Carrier Ethernet World Congress, held September 20-23 in Warsaw, Poland. Test equipment played a critical role in identifying, troubleshooting, and resolving communication barriers between the different vendors’ equipment at the event.
Furthermore, the functionality, performance, and stability of these technologies can be compromised under network stress and heavy load. As you scale traffic, supported protocols, and the number of neighbour devices involved in the exchange and synchronisation of clock frequency, the operation and stability of the timing protocols themselves is put at risk. That may quickly result in missed call handovers, clock frequency drift, and even network downtime. Functional, performance, and stress testing of ToP technologies is therefore required prior to deployment.
Carriers also must understand the tradeoffs between synchronisation scalability and traffic-forwarding performance for network planning and optimisation. Until recently, the only way to achieve this insight was to build a testbed of physical network equipment in a lab. Newly released test tools that support SyncE and IEEE 1588v2, along with the wide range of technologies and traffic present on mobile backhaul networks, will significantly improve carriers’ test budgets and timelines.
Test equipment can simulate real-world mobile backhaul network conditions in a controlled lab environment, so that carriers can evaluate network equipment and services “at scale.” These measurements reduce any risk to the performance and reliability of services carried over an Ethernet mobile backhaul, and build a carrier’s confidence to pursue widespread deployment. The resultant operational savings and flexibility ultimately allow service providers to accommodate continued growth of mobile subscribers and traffic—without breaking their budgets.