What you’ll learn:
- Newly defined performance metrics for Wi-Fi devices enable repeatable Wi-Fi testing, allowing manufacturers and service providers to validate the performance of the devices used in broadband deployments.
- New Wi-Fi testing based on TR-398i2 focuses on performance testing of real, and often complex, deployment scenarios that reflect how broadband subscribers actually use Wi-Fi.
- Wi-Fi performance testing can be broken into a few key areas: coverage, capacity, and robustness.
Developing and implementing Wi-Fi testing shares many of the challenges you would expect in testing any wireless or RF technology. With the new TR-398 Issue 2 test plan, a strong focus has been on creating a repeatable set of test procedures, setups, and configurations, ensuring the testing can produce consistent pass or fail results against absolute performance requirements.
This focus sets the TR-398 testing apart from much of the other widely available Wi-Fi testing that’s focused on interoperability or functionality of specific features or components of the IEEE Wi-Fi specifications. Combining these two categories of testing—functionality and interoperability—with TR-398 performance testing creates a reliable process that manufacturers and service providers can use to deliver truly carrier-grade Wi-Fi to their subscribers.
Service providers have faced a seemingly endless task of keeping up with growing application bandwidth along with the simultaneous expansion of device counts. In Wi-Fi, where the physical layer is a shared and scarce resource, these two parameters are directly at odds with each other. The problem compounds because subscribers judge their overall internet service by the end-application performance they observe, often on mobile or small devices with limited space available for antennas.
To provide service providers with the most applicable data on the expected performance of their Wi-Fi devices in the field, TR-398 test cases have been designed around performance as an end user would observe it. For example, the testing defines performance requirements for cases using two spatial streams instead of four, or 20-MHz bandwidth configurations for 2.4 GHz instead of 40 MHz.
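To make that concrete, a constrained test profile might be captured in automation along the lines of the sketch below; the field names and values are illustrative assumptions, not definitions from the TR-398 document.

```python
# Illustrative only: a hypothetical test-profile structure capturing the kind
# of constrained, end-user-representative configuration TR-398 favors.
# Field names and values are assumptions, not taken from the test plan.
PROFILE_2G4_80211N = {
    "band": "2.4 GHz",
    "standard": "802.11n",
    "channel_width_mhz": 20,   # 20 MHz on 2.4 GHz rather than 40 MHz
    "spatial_streams": 2,      # two streams rather than four
    "security": "WPA2-PSK",
}
```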
Testing Real-World Scenarios
As such, the overall test coverage of TR-398 Issue 2 focuses on performance testing of real deployment scenarios that reflect how Wi-Fi is used by broadband subscribers (Fig. 1). More specifically, the testing can be broken down into a few key areas: coverage, capacity, and robustness.
Coverage testing
Coverage testing measures items such as spatial consistency to ensure the device under test (a.k.a. the access point or AP) provides the required performance regardless of the angle or orientation toward the station. With spatial consistency as the first dimension of coverage testing, an equally important test is range vs. throughput. This testing verifies the AP achieves a minimum expected level of throughput as the station moves closer to or farther from the AP.
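In automation, a range-vs.-throughput run is essentially a sweep: step the simulated range, measure throughput, and compare against a per-step minimum. The sketch below shows the idea, assuming hypothetical set_attenuation() and measure_throughput_mbps() helpers and illustrative threshold values rather than TR-398 requirement figures.

```python
# Sketch of a range-vs.-throughput sweep. set_attenuation() and
# measure_throughput_mbps() are placeholders for the testbed's real
# attenuator and traffic-generator control; thresholds are illustrative.

def set_attenuation(db: int) -> None:
    """Placeholder: program the RF path attenuators to 'db' dB."""
    print(f"[attenuator] path loss set to {db} dB")

def measure_throughput_mbps(duration_s: int = 30) -> float:
    """Placeholder: run a downlink traffic test and return the average rate."""
    return 0.0  # replace with a real measurement

# Each attenuation step stands in for a greater distance from the AP;
# the minimum expected throughput drops as the simulated range grows.
RANGE_STEPS = [  # (attenuation_dB, minimum_throughput_Mbps) -- illustrative
    (10, 300.0),
    (30, 200.0),
    (50, 100.0),
    (70, 20.0),
]

def run_range_vs_throughput() -> bool:
    all_passed = True
    for atten_db, required_mbps in RANGE_STEPS:
        set_attenuation(atten_db)
        measured = measure_throughput_mbps()
        ok = measured >= required_mbps
        all_passed = all_passed and ok
        print(f"{atten_db} dB: {measured:.1f} Mb/s "
              f"(need >= {required_mbps}) -> {'PASS' if ok else 'FAIL'}")
    return all_passed
```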
In the lab, the range or distance between the devices is implemented using programmable attenuators. Figure 2 shows a simplified view of the test setup that’s used to implement many of the test cases. The AP is placed into a Faraday cage, with near-field antennas coupling the RF signals into the device. This is by design and ensures the device’s antenna design and layout are taken into account when measuring its performance.
The near-field antennas connect to the programmable attenuators, which then connect to the station emulator (an Octoscope Pal-6). The emulator enables the lab to place a precisely controlled connection load of many simultaneous stations on the AP. Finally, the AP device under test also sits on a turntable in the Faraday cage, allowing for automated control of its orientation relative to the near-field antennas. This setup creates a high degree of control and repeatability, especially when the testing is controlled through automation.
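As a rough sketch of how that automation can exercise the turntable for spatial-consistency measurements, the snippet below rotates the device through a set of orientations and records throughput at each angle; rotate_turntable() and measure_throughput_mbps() are placeholder helpers, not the chamber vendor’s actual API.

```python
# Sketch of a spatial-consistency sweep: rotate the DUT through a set of
# orientations and record throughput at each angle. The two helpers are
# placeholders for the real turntable and traffic-generation control.

def rotate_turntable(angle_deg: int) -> None:
    """Placeholder: rotate the AP platform to the given angle and settle."""
    print(f"[turntable] moved to {angle_deg} degrees")

def measure_throughput_mbps() -> float:
    """Placeholder: return measured downlink throughput in Mb/s."""
    return 0.0  # replace with a real measurement

def sweep_orientations(step_deg: int = 45) -> dict:
    """Measure throughput at each turntable position from 0 to 315 degrees."""
    results = {}
    for angle in range(0, 360, step_deg):
        rotate_turntable(angle)
        results[angle] = measure_throughput_mbps()
    return results

if __name__ == "__main__":
    per_angle = sweep_orientations()
    worst = min(per_angle, key=per_angle.get)
    print(f"Lowest throughput at {worst} degrees: {per_angle[worst]:.1f} Mb/s")
```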
Capacity testing
Capacity testing focuses on the maximums supported by the AP device, including verifying that the AP can support a minimum number of stations simultaneously. As the number of IoT devices continues to expand, it’s not unusual to see Wi-Fi networks saturated with 30 or more devices. On another dimension, a test checks for the maximum throughput supported by the device, while other tests measure the AP’s performance under bidirectional throughput (i.e., transmitting and receiving simultaneously).
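A minimal sketch of a multiple-station capacity check is shown below: associate a configurable number of emulated stations and verify each can sustain a modest traffic rate. The EmulatedStation class and the numbers used are assumptions for illustration, not the station emulator’s real API or TR-398 requirement values.

```python
# Sketch of a multiple-station capacity check. EmulatedStation is a
# placeholder for the station emulator's real control API; numbers are
# illustrative rather than TR-398 requirement values.

class EmulatedStation:
    def __init__(self, index: int):
        self.index = index

    def associate(self, ssid: str) -> bool:
        """Placeholder: associate this emulated station with the AP."""
        return True

    def send_traffic_mbps(self, rate: float, duration_s: int) -> float:
        """Placeholder: offer 'rate' Mb/s for 'duration_s' and return the achieved rate."""
        return rate

def run_station_capacity(num_stations: int = 32, per_station_mbps: float = 2.0) -> bool:
    stations = [EmulatedStation(i) for i in range(num_stations)]
    # Every station must associate, then sustain its share of traffic.
    for sta in stations:
        if not sta.associate("dut-ssid"):
            print(f"station {sta.index} failed to associate")
            return False
    for sta in stations:
        achieved = sta.send_traffic_mbps(per_station_mbps, duration_s=60)
        if achieved < per_station_mbps * 0.9:  # 10% margin, illustrative
            print(f"station {sta.index} fell short: {achieved:.1f} Mb/s")
            return False
    return True
```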
The testing is carried out on multiple Wi-Fi technologies, including IEEE 802.11n, IEEE 802.11ac, and IEEE 802.11ax, sometimes referred to as Wi-Fi 4, Wi-Fi 5, and Wi-Fi 6, respectively. IEEE 802.11n testing is run on 2.4 GHz only, since this is the predominant deployment mode. IEEE 802.11ac testing is run only on 5 GHz, as the standard specifies, while IEEE 802.11ax testing is run on both 2.4 and 5 GHz. The pass/fail metrics for the 6-GHz variant of IEEE 802.11ax, a.k.a. Wi-Fi 6E, will be released as part of the forthcoming Issue 3 version of the test plan, which is under development now. Since AP devices will see connections from multiple generations of devices on both the 2.4- and 5-GHz bands, another test case verifies the performance of the AP transmitting or receiving on both bands at the same time.
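A simple way to picture the dual-band case is to drive traffic on both radios at once and hold each band to its own minimum, as in the sketch below; run_traffic() and the thresholds are placeholders, not actual testbed calls or TR-398 figures.

```python
# Sketch of a simultaneous dual-band throughput check. run_traffic() is a
# placeholder for per-band traffic generation; thresholds are illustrative.
from concurrent.futures import ThreadPoolExecutor

def run_traffic(band: str, duration_s: int = 120) -> float:
    """Placeholder: run throughput traffic on one band, return Mb/s."""
    return 0.0  # replace with a real measurement

def run_dual_band(min_2g4: float = 100.0, min_5g: float = 400.0) -> bool:
    # Drive both radios at the same time so the AP must service them together.
    with ThreadPoolExecutor(max_workers=2) as pool:
        fut_2g4 = pool.submit(run_traffic, "2.4GHz")
        fut_5g = pool.submit(run_traffic, "5GHz")
        got_2g4, got_5g = fut_2g4.result(), fut_5g.result()
    print(f"2.4 GHz: {got_2g4:.1f} Mb/s, 5 GHz: {got_5g:.1f} Mb/s")
    return got_2g4 >= min_2g4 and got_5g >= min_5g
```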
Robustness testing
All of this functionality, performance, capability, and coverage would mean little if the AP couldn’t sustain these performance levels over long periods of time. That is the final category of testing: verifying the stability, or robustness, of the AP.
There are two key tests in this category. First is the coexistence test case, which measures the AP’s performance in an environment with other Wi-Fi networks in operation. This scenario exists for subscribers in multi-dwelling buildings or wherever multiple APs are in use (and sometimes misconfigured to overlap in Wi-Fi channel usage).
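Conceptually, the coexistence measurement compares throughput with and without a neighboring network active on an overlapping channel, roughly as sketched below; the helper functions and the allowed-degradation figure are illustrative assumptions, not TR-398 requirements.

```python
# Sketch of a coexistence check: measure throughput with and without a
# neighboring network active on an overlapping channel. Both helpers are
# placeholders for the testbed's real controls.

def set_interfering_network(enabled: bool, channel: int = 6) -> None:
    """Placeholder: bring a neighboring Wi-Fi network up or down."""
    state = "up" if enabled else "down"
    print(f"[interferer] channel {channel} network {state}")

def measure_throughput_mbps(duration_s: int = 60) -> float:
    """Placeholder: return measured DUT throughput in Mb/s."""
    return 0.0  # replace with a real measurement

def run_coexistence(max_degradation: float = 0.5) -> bool:
    set_interfering_network(False)
    baseline = measure_throughput_mbps()
    set_interfering_network(True)
    with_neighbor = measure_throughput_mbps()
    set_interfering_network(False)
    # Require the DUT to retain at least (1 - max_degradation) of its baseline.
    return baseline > 0 and with_neighbor >= baseline * (1.0 - max_degradation)
```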
The second test case in this category is the longest test case in the TR-398 test plan—the long-term stability test case. It could be referred to as a “soak” test case, where the AP is run over a long duration, at a fixed performance level, while carefully observing its performance and watching for degradation, errors, or packet loss. In addition, during the stability test run, other stations beyond those measuring the performance are set up to join and leave the Wi-Fi network at regular intervals.
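A skeleton of such a soak loop might look like the sketch below, with placeholder helpers for traffic measurement, error counters, and the churn stations that join and leave at intervals; the duration, interval, and thresholds are illustrative, not TR-398 values.

```python
# Skeleton of a long-term stability ("soak") loop. All helpers are
# placeholders for the testbed's real APIs; duration, interval, and
# thresholds are illustrative values, not TR-398 requirements.
import time

def measure_interval_mbps(duration_s: int) -> float:
    """Placeholder: run fixed-rate traffic for one interval, return Mb/s."""
    return 0.0  # replace with a real measurement

def read_error_counters() -> int:
    """Placeholder: return cumulative packet errors/losses observed so far."""
    return 0

def churn_stations(join: bool) -> None:
    """Placeholder: have the extra emulated stations join or leave the network."""
    print(f"[churn] stations {'joining' if join else 'leaving'}")

def run_stability(total_hours: float = 24.0, interval_s: int = 600,
                  min_mbps: float = 100.0, max_errors: int = 1000) -> bool:
    deadline = time.time() + total_hours * 3600
    joined = False
    while time.time() < deadline:
        joined = not joined
        churn_stations(join=joined)          # stations join/leave each interval
        rate = measure_interval_mbps(interval_s)
        errors = read_error_counters()
        if rate < min_mbps or errors > max_errors:
            print(f"degradation detected: {rate:.1f} Mb/s, {errors} errors")
            return False
    return True
```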
In summary, that’s a lot of testing! A TR-398 Issue 2 test run, for both Wi-Fi 5 and Wi-Fi 6, takes several days to complete. The end result of that effort is verified confidence in the performance that can be expected of AP devices in real deployments, giving service providers a critical tool in preparing new devices or new software/firmware versions for rollout into the field. Service providers can expect better-performing Wi-Fi networks, fewer support calls, happier subscribers, and ultimately less subscriber churn.