Apply Service-Quality Management To Wireless Networks

March 1, 2003
With The Addition Of Data Services, Operators And Service Providers Must Weigh Network Performance And Customer Perception.

Traditionally, mobile network operators and service providers have relied on network-management platforms to optimize the performance of their networks. The telecommunications landscape is changing, however, with the advent of 2.5G and 3G technologies and the promise of innovative data services. As a result, network operators and service providers now need additional solutions to monitor quality of service (QoS). In particular, they must bridge the gap between the conventional methods of managing network performance and the customer's perception of QoS.

In their quest to balance these demands, network operators and service providers have recognized a need for Service-Quality Management (SQM) software. Ideally, such software can manage the entire service-delivery process. In a recent report, the Yankee Group (www.yankeegroup.com) predicted that the market for SQM software would reach $107 million in 2002 and grow to $684 million by 2004.

Service quality can be defined as "the collective effect of service performances which determine the degree of satisfaction of a user of the service" (ITU E.800). In other words, quality is the customer's perception of a delivered service. By service-quality management, we refer to the monitoring and maintenance of end-to-end services for specific customers or classes of customers (FIG. 1).

As a wider variety of services is offered to customers, the impact of network performance on quality of service becomes more complex. It is vital that service engineers identify the network-performance issues that affect customer service. They also must quantify the revenue lost to service degradation.

For example, a major outage on a link will generate numerous alarms. But this problem may have no direct impact on customers, because the traffic can be rerouted. Yet smaller problems, such as a cell site going down in a busy city center, might impact a large number of customers and directly affect the operator's perceived quality of service. The ability to allocate resources to high-priority outages, meaning those that impact customers and reduce profits, dramatically increases overall network efficiency.

Two major software building blocks are required to proactively manage service quality: a powerful data-aggregation engine and an end-to-end service-mapping tool. The data-aggregation engine processes network-event data. End-to-end service-quality data is derived from a number of data sources: the radio-access network, the core network, and the application network (FIG. 2). The application network comprises service platforms such as Wireless Application Protocol (WAP) and Short Message Service (SMS) servers.

The most important sources of network data are the fault-management information, performance measurements, usage data records (UDRs), and Internet service monitors (ISMs). Remember that a plethora of standards exist to define these network measurements. In addition, many network equipment vendors have also defined proprietary measurements. Among the most important standards are the Third Generation Partnership Project (3GPP) recommendations for UMTS and IETF RFCs. To monitor quality of service, most operators thus have to collect and aggregate a very large number of measurements, which are often available in different formats. This is especially true of network operators with multi-vendor network and application platforms.

The most valuable service-management information is derived from discrete event records. These records include UDRs, Internet Protocol Detail Records (IPDRs), ISMs, and more. Today, new data-aggregation engines enable network operators to exploit this information goldmine. In real time, they can aggregate individual customer records into time-series data. By aggregating these records in multiple dimensions, it is possible to evaluate service metrics that were previously unavailable. Access is now provided to the following:

  • End-to-end service views (e.g., aggregated by Access Point Names)
  • Customer-centric views based on individual customers, IMSI ranges, or virtual private networks
  • Geographical service views, derived from the point of entry into the network (e.g., Node-B/CellId or location-based service information)
  • Content-provider views
  • Quality of service requested versus quality of service achieved
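The multi-dimensional aggregation described above can be sketched in a few lines. This is a minimal illustration, not any vendor's engine; the UDR field names (`apn`, `cell_id`, etc.) are invented for the example.

```python
from collections import defaultdict
from datetime import datetime

# Hypothetical, simplified usage data records (UDRs); the field names are
# illustrative, not taken from any particular vendor's format.
udrs = [
    {"timestamp": "2003-03-01T08:03:10", "imsi": "234150000000001",
     "apn": "wap.example", "cell_id": "C-101", "bytes": 5200, "success": True},
    {"timestamp": "2003-03-01T08:07:45", "imsi": "234150000000002",
     "apn": "wap.example", "cell_id": "C-102", "bytes": 0, "success": False},
    {"timestamp": "2003-03-01T08:09:12", "imsi": "234150000000001",
     "apn": "mms.example", "cell_id": "C-101", "bytes": 18000, "success": True},
]

def aggregate(records, dimension, interval_min=10):
    """Group records into time buckets along one dimension (e.g. APN or cell)."""
    buckets = defaultdict(lambda: {"sessions": 0, "failures": 0, "bytes": 0})
    for r in records:
        ts = datetime.fromisoformat(r["timestamp"])
        # Snap the timestamp down to the start of its 10-min aggregation block.
        start = ts.replace(minute=(ts.minute // interval_min) * interval_min,
                           second=0)
        key = (start.isoformat(), r[dimension])
        buckets[key]["sessions"] += 1
        buckets[key]["failures"] += 0 if r["success"] else 1
        buckets[key]["bytes"] += r["bytes"]
    return dict(buckets)

# End-to-end service view: the same records aggregated by APN...
by_apn = aggregate(udrs, "apn")
# ...and a geographical view, aggregated by the cell of entry.
by_cell = aggregate(udrs, "cell_id")
```

The same record stream feeds every view; only the grouping dimension changes, which is what makes the multi-dimensional approach cheap once the records are collected.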

The data-aggregation engine groups individual records according to aggregation rules (FIG. 3). Each rule specifies several parameters. The entity defines the object type for which data will be aggregated (alarm, UDR, etc.). The interval is the duration of each aggregated block (for example, 10 min.). The time period determines which objects will be selected for each aggregation block. The filter decides which objects of the selected entity will be processed. Finally, the grouping criteria determine how the objects that match the filter will be grouped together.
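One way to picture such a rule is as a small declarative structure. The representation below is purely hypothetical; the field names simply mirror the parameters just described (entity, interval, time period, filter, grouping criteria).

```python
# Hypothetical representation of an aggregation rule; the fields mirror the
# parameters described in the text, not any real product's configuration.
rule = {
    "entity": "UDR",                     # object type to aggregate
    "interval_min": 10,                  # duration of each aggregated block
    "time_period": {"days": "Mon-Fri",   # when aggregation blocks are produced
                    "hours": "08:00-17:00"},
    "filter": lambda udr: udr["service"] == "MMS",  # which objects to process
    "group_by": ["apn", "cell_id"],      # how matching objects are grouped
}

def apply_rule(rule, records):
    """Group the records that pass the rule's filter by its grouping criteria."""
    groups = {}
    for r in filter(rule["filter"], records):
        key = tuple(r[dim] for dim in rule["group_by"])
        groups.setdefault(key, []).append(r)
    return groups
```

Keeping the rule declarative means new service views can be added by configuration rather than code changes, which matters when measurement formats vary across vendors.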

Aggregation can be scheduled to run immediately after a data load or according to a pre-defined schedule. In fact, it is similar to running a scheduled report. Think of a report that runs every 10 min. between 8 a.m. and 5 p.m., Monday through Friday, excluding holidays.

The data aggregator is designed to collect data from a diverse range of source types and locations, such as UDRs, performance data, and network alarms. It also gathers data produced by multi-vendor equipment. The aggregator processes large volumes of data, on the order of several hundred megabytes per day. For example, around 60 million usage data records are produced every day by a network with 10 million customers.

The service-mapping tool comes in next. It maps performance data onto service-quality data. Consider a customer using Multimedia Messaging Service, or MMS (FIG. 4). If a video download is interrupted many times during a session, the customer will lose interest, and the operator's revenue will be lost with it. To avoid this situation, key quality indicators (KQIs) such as availability can monitor the QoS offered to customers.

From a customer's point of view, the availability KQI measures how successfully he or she can access and use the MMS service. Many reasons exist for an MMS session to fail, such as loss of a message, variation in throughput due to congestion, or longer-than-usual round-trip time. Monitoring availability lets the operator detect customer-impacting problems, such as long delivery delays, which were a typical problem with early General Packet Radio Service (GPRS) and UMTS implementations.
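An availability KQI of this kind can be computed directly from session records. The sketch below uses invented session fields and an arbitrary 30-second delay threshold purely for illustration; real failure criteria would come from the SLA.

```python
def availability_kqi(sessions):
    """Availability as the fraction of MMS session attempts that succeed.

    A session counts as failed here if the message was lost or its delivery
    delay exceeded 30 s -- simplified criteria chosen for illustration.
    """
    if not sessions:
        return None  # no attempts in this interval, KQI undefined
    ok = sum(1 for s in sessions
             if s["delivered"] and s["delay_s"] <= 30)
    return ok / len(sessions)

# Hypothetical session records for one measurement interval.
sessions = [
    {"delivered": True,  "delay_s": 4},
    {"delivered": True,  "delay_s": 55},   # delivered, but too late
    {"delivered": False, "delay_s": None}, # message lost
]
```

Note that the second session is delivered yet still counts against availability: the customer's perception (a late video message) drives the metric, not the network's view of success.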

With the service-mapping tool, it's possible to derive KQIs from multiple key performance indicators (KPIs) across different service resources (FIG. 5). As defined in TMF GB923, KPIs measure a specific aspect of the performance of either a service resource or a group of service resources of the same type. A KPI is restricted to a specific resource type and derived from network measurements.
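As one way such a mapping might be computed, a KQI can combine per-resource KPIs along the service path. The KPI names and values below are invented, and the independence assumption is a deliberate simplification, not a GB923 formula.

```python
# Illustrative KPIs, one per resource type along an MMS delivery path.
kpis = {
    "radio_access_success_rate": 0.98,   # KPI from the radio-access network
    "core_attach_success_rate": 0.995,   # KPI from the core network
    "mms_server_availability":  0.97,    # KPI from the application platform
}

def end_to_end_kqi(kpis):
    """Treat the service as a chain: end-to-end success is the product of
    per-resource success probabilities (a simplifying independence assumption)."""
    result = 1.0
    for value in kpis.values():
        result *= value
    return result
```

A product model makes the top-down drill-down natural: when the KQI drops, the operator can inspect each factor to find the resource type responsible for the degradation.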

By following this top-down approach, the service-mapping tool provides several benefits. It helps operators manage end-to-end quality of service from a customer's perspective. It also allows them to reuse key performance indicators and key quality indicators across services and products. Lastly, it helps operators drill down to the service elements that are responsible for quality degradations.

For effective service-quality management, however, other ingredients also are needed. An example is a high-performance standard database. To be effective, this database must have a low cost of ownership. It also must be designed to handle large volumes of data and a large number of services. The database should integrate service and network-management applications while supporting multiple technologies (Global System for Mobile Communications (GSM), Universal Mobile Telecommunications System (UMTS), cdma2000, etc.).

Service quality also demands a simple and easy-to-use user interface. With this interface, Network Operations Center (NOC) staff and service managers can monitor service-quality objectives against thresholds. These thresholds may be internal targets for the network operator. Or they could be derived from Service Level Agreement (SLA) definitions.

When the service quality falls below the contracted levels, managers could then initiate corrective actions. They could focus on the service degradations that affect the greatest number of customers. A set of standard reports for different user communities should also be available. Network Operations, for example, may request reports on service capacity, the number of customers affected by service degradation, N-Worst or N-Best services, and N-Worst or N-Best service elements. For new services, marketing and sales may be interested in reports on service usage and service uptake. National regulators may also request historical service quality against given service objectives.
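The prioritization described above, focusing first on breaches that affect the most customers, can be sketched as a simple sort. The services, thresholds, and customer counts here are made up for the example.

```python
# Hypothetical per-service readings: measured KQI, its SLA-derived threshold,
# and the number of customers currently using the service.
readings = [
    {"service": "MMS", "kqi": 0.91, "threshold": 0.95, "customers": 120_000},
    {"service": "WAP", "kqi": 0.97, "threshold": 0.95, "customers": 300_000},
    {"service": "SMS", "kqi": 0.93, "threshold": 0.99, "customers": 800_000},
]

def prioritized_breaches(readings):
    """Services below threshold, ordered by the number of customers affected."""
    breached = [r for r in readings if r["kqi"] < r["threshold"]]
    return sorted(breached, key=lambda r: r["customers"], reverse=True)
```

Here WAP is within target and drops out entirely, while SMS outranks MMS despite a smaller threshold gap, because far more customers are affected.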

In summary, service-quality management is an organization-wide process that affects many functions of a network operator. While today's focus is on the development of service-centric solutions based on network-related aspects, tomorrow's requirements will no doubt include a wider definition of quality of service, ranging from the provisioning of services to customer care. Examples of such key quality indicators include the time to resolve complaints, the time to provision a service, and billing accuracy. But that is the topic of another article.

Network operators are concerned with introducing new services while maintaining and improving quality of service, including perceived quality as measured by the strength of the carrier's brand. To measure it best, such operators should consider adopting a quality-of-service initiative. With new services rolling out, it is time to begin exploring a service-quality-management solution.
