WLAN Benchmarking Mixes Old With New

July 1, 2003
Using Wired-Network-Device Methodologies, Figure Out A WLAN Device’s Performance And Configure It For A Benchmark.

Wireless devices are becoming increasingly common in the networking landscape. Yet unlike most networking devices, wireless products offer no accurate method for users and implementers to gauge their performance. To complicate this matter, a device's environment can have a huge impact on device performance. Unfortunately, existing network benchmarks are not sufficient to test wireless devices. They presume that any performance deficiencies are the result of the device under test (DUT). With wireless networks, however, this is clearly not the case.

This article is the second of a three-part series that focuses on WLAN test methodologies. The first article addressed the value of WLAN performance testing. The last article will examine the testing of IEEE 802.11g as provided by the CENTAUR Lab at the University of Georgia. It is the job of this article to delve into benchmarking.

The goal of any good network benchmark is to provide a repeatable, fair, and quantitative comparison between devices. If vendors, magazines, and customers all used different tests with varying results, the whole point of testing, which is to make comparisons, would be lost.

The most widely used metric for measuring network-device performance is frames or packets per second. Others, such as latency and jitter, also are used. The performance metric in which people seem to be the most interested, however, is the actual forwarding limit of the device before data is dropped.

Additionally, every network technology has an absolute forwarding limit based on signaling rate, packet size, and technology overhead. For Ethernet, these values are well known and understood. Most reputable vendors produce devices that are capable of performing at 100% of these maximum rates (see table).

Although the forwarding limit is an important performance metric, there is much more to benchmarking a network device. The Internet Engineering Task Force (IETF) has developed and documented basic procedures for characterizing network performance in RFC 2544: Benchmarking Methodology for Network Interconnection Devices. This document contains setup and configuration information as well as six tests for benchmarking network devices. In addition, the document recommends frame sizes, test duration times, and traffic burst patterns to be used in the benchmark tests. Of the six tests described, four are relevant to performance benchmarking: throughput, latency, frame loss rates, and back to back. They are described in more detail below:

  • Throughput: Throughput is defined as the maximum rate at which a device can forward traffic without losing data. Throughput tests are normally conducted at a fixed frame size for a specified amount of time. If the device fails to pass all of the transmitted frames in the allocated time, the test is repeated at a lower frame rate. If the device succeeds, the test is repeated with a higher rate.
  • Latency: The latency of a device is defined as the amount of time that it takes the device to forward a frame from the input port to the output port. For store-and-forward devices, which account for almost all modern bridges, this is the interval between the last bit of the frame arriving at the input port and the first bit of the frame appearing at the output port. This value can vary greatly depending on the load and architecture of the device being tested. The variation in latency is known as jitter.
  • Frame loss rate: The frame loss rate is defined as the rate at which packets are dropped for a given frame size and rate. The frame loss rate of a device should be 0 frames/s up to and including the measured throughput value.
  • Back to back: A back-to-back test measures the DUT's ability to forward packets with a minimum interpacket gap. The minimum interpacket gap is the smallest amount of time that a device must wait before it can transmit a subsequent data frame. The result of this test is the maximum number of minimum interpacket gap frames that the device can forward without loss. Line-rate devices have a back-to-back value of infinity.

Although this subset of tests does not characterize every conceivable aspect of a device, the results of these tests will give a good indication of a device's basic forwarding abilities. In addition, many more tests can build on the foundation provided by these simple tests.
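
To make the throughput procedure concrete, below is a minimal Python sketch of one common way to drive such a test: a binary search for the highest loss-free rate. The send_and_count() callback is a hypothetical stand-in for the traffic generator/analyzer, and the search strategy is an assumption; RFC 2544 only requires that the rate be raised after a loss-free trial and lowered after a lossy one.

```python
# Minimal sketch of an RFC 2544-style throughput search.
# send_and_count() is a hypothetical stand-in for the traffic
# generator/analyzer: it offers `rate` frames/s of `frame_size`-byte
# frames for `duration` seconds and returns the number of frames
# that the device under test failed to forward.

def throughput_search(send_and_count, frame_size, max_rate,
                      duration=60, resolution=1.0):
    """Return the highest loss-free rate found, in frames/s."""
    low, high = 0.0, float(max_rate)   # max_rate = theoretical limit
    best = 0.0
    while high - low > resolution:
        rate = (low + high) / 2.0
        lost = send_and_count(rate=rate, frame_size=frame_size,
                              duration=duration)
        if lost == 0:
            best = rate        # no loss: try a higher rate
            low = rate
        else:
            high = rate        # loss observed: back off
    return best
```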

Wireless 802.11 networks and Ethernet share many similarities. Both have similar frame formats, common 48-b media-access-control (MAC) addresses, and the ability to transmit a range of frame sizes. Plus, devices on both networks use carrier-sense multiple-access (CSMA) schemes to determine when other devices are using the transmission medium: collision detection (CSMA/CD) for Ethernet and collision avoidance (CSMA/CA) for 802.11. The existing network-device benchmark methodologies could therefore serve as the foundation for WLAN testing. Several MAC-layer features unique to WLANs, however, will clearly impact performance. Specifically, these features include acknowledgements, retransmissions, and rate shifting. IEEE 802.11-specific benchmark methodologies must take these factors into account.

In a wired network, the medium that carries the electrical signals or light pulses is essentially constant, so the transmitted signal can be expected to arrive intact at the far end of the physical link. The environment of a wireless network is far more variable. The 802.11 specification therefore requires that any wireless station that receives a directed (unicast) frame acknowledge it by sending a 14-B frame back to the transmitter. This acknowledgement, or ACK, can only acknowledge one frame. It contains no information about which frame it is acknowledging. As a result, a wireless station must wait for an acknowledgement after every transmitted frame before resuming transmission.

Because the radio channel can be unpredictable, a couple of problems could occur in the above exchange. The intended recipient may not receive the transmitted frame. Or, the recipient's acknowledgement may not reach the transmitting station. In either case, the sending station will not receive an acknowledgement. It will then attempt to retransmit the message.

A retransmission is identical to the original failed transmission except for one bit. This bit, which must be set in the MAC header, indicates that the frame is a retransmission. Should the retransmission fail, the sending station will continue to send retransmissions until the station reaches its retry limit.

Wireless stations can keep statistics on how many retries are required to transmit a frame to other stations. These statistics enable a sending station to fall back to slower, more robust signaling rates, which can be more efficient than repeatedly retransmitting with a faster, more complex encoding scheme. This action is known as rate shifting. Acknowledgements and retransmissions have analogies at higher network layers, but rate shifting has no wired-network equivalent.
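
The 802.11 standard does not specify how rate shifting must be implemented; each vendor uses its own algorithm. As a rough illustration only, the sketch below follows the familiar auto-rate-fallback idea of dropping to a more robust rate after consecutive failed transmissions and stepping back up after a run of successes. The rate table is 802.11a's; the thresholds are arbitrary assumptions.

```python
# Illustrative auto-rate-fallback-style rate shifting (not defined by
# the 802.11 standard; the thresholds here are arbitrary assumptions).

RATES_MBPS = [6, 9, 12, 18, 24, 36, 48, 54]   # 802.11a signaling rates

class RateShifter:
    def __init__(self, start_index=len(RATES_MBPS) - 1,
                 down_after=2, up_after=10):
        self.index = start_index      # start at 54 Mbps
        self.down_after = down_after  # consecutive failures before stepping down
        self.up_after = up_after      # consecutive successes before stepping up
        self.failures = 0
        self.successes = 0

    def record(self, acked):
        """Update retry statistics after a transmission attempt; return the new rate."""
        if acked:
            self.successes += 1
            self.failures = 0
            if self.successes >= self.up_after and self.index < len(RATES_MBPS) - 1:
                self.index += 1       # try a faster, less robust rate
                self.successes = 0
        else:
            self.failures += 1
            self.successes = 0
            if self.failures >= self.down_after and self.index > 0:
                self.index -= 1       # fall back to a slower, more robust rate
                self.failures = 0
        return RATES_MBPS[self.index]
```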

The nature of the transmission medium for wired networks is constant. In addition, these networks are unable to rate shift. As a result, theoretical maximum frame-rate calculations for existing wired technologies are rather simple and straightforward. In contrast, the situation for WLAN networks is completely different. All three of the current 802.11 specifications require each compliant station to support multiple signaling rates. Additionally, these specifications contain physical-layer (PHY) options that can affect device performance.

For 802.11b devices, four signaling rates are specified: 1, 2, 5.5, and 11 Mbps. Devices on the network also may use a long 144-b preamble or a shorter 72-b preamble. The short preamble shaves 96 µs off of the transmission time of every transmitted frame. It can significantly improve performance, especially for smaller packets. Wired Equivalent Privacy (WEP) also can impact performance. After all, using WEP adds 8 B of overhead to every sent frame.

The 802.11a specification supports eight different signaling rates: 6, 9, 12, 18, 24, 36, 48, and 54 Mbps. Of those eight, 6, 12, and 24 Mbps are required. Again, WEP can have a minor impact on performance. But for 802.11a, this influence is negligible. In fact, it is nonexistent for most frame sizes in the higher data rates.

Out of all of the current 802.11 specifications, 802.11g contains the most complex set of PHY-layer variables. In addition to supporting all eight 802.11a rates, two additional optional rates are specified: 22 and 33 Mbps. 802.11g devices must be backwards compatible with 802.11b devices. They also must provide support for 1-, 2-, 5.5-, 11-, 6-, 12-, and 24-Mbps data rates. Note that there are three separate preamble formats. Two of these formats have additional PHY options. These options make use of either long and short slot times or the long and short preambles of 802.11b.

Aside from the signaling rates and physical-layer options, the maximum frame rate achievable between any two WLAN devices depends on the distance between the devices. As stated, every receiver must acknowledge the receipt of each frame before the transmitter can continue transmission. The signal round-trip time will therefore affect the frame rate. This effect is most noticeable when high data rates and small frame sizes are being utilized.

RATE CALCULATION

To make a maximum frame-rate calculation for 802.11a, some assumptions must be made. Assume that two devices communicating at 54 Mbps are located 5 m apart. Next, assume that there is no rate shifting and that the only traffic on the radio channel is transmitted data. Finally, assume that beacons and management frames are being ignored for the time being.

To find out the maximum theoretical frame rate, one must find the time that it takes to successfully transmit a packet of data across the assumed wireless link. A successful transmission requires a data frame and acknowledgement. Start by calculating the time that is needed for the devices to transmit and process both frames.

To begin, calculate the time required to transmit a frame of 80 B. This is the smallest protocol data unit (PDU) that most 802.11 networks will transmit. It corresponds to a 64-B Ethernet frame, which is the smallest legal Ethernet frame size.

In 802.11a, data is transmitted via Orthogonal Frequency Division Multiplexing (OFDM) symbols. Each symbol has a duration of 4.0 µs. At the selected data rate of 54 Mbps, each OFDM symbol can transmit 216 data bits. The 80-B data frame thus requires [16 service bits + (8 b/B * 80 B) + 6 tail bits]/216 b/symbol = 3.06481481... symbols. Because there is no way to transmit a fractional symbol, the datagram is padded with 0s. The transmitted data is then a multiple of the number of bits per symbol. As a result, the 80-B frame will require four symbols.

Assuming that the channel is clean and ready for use, the transmitting radio must first send a preamble. The receiving radio can then prepare and synchronize its receiver. Next, the radio sends a symbol indicating the signal rate and frame length. Finally, the device transmits the data. The 80-B frame in this example requires 16 µs (preamble) + 4 µs (rate and length) + 4 µs/symbol * 4 symbols = 36 µs to complete the transmission.

Assuming that the intended receiver received and decoded the message correctly, an acknowledgement must be transmitted back to the original source. An ACK is 14 B. At 54 Mbps, an ACK uses [16 service bits + (8 b/B * 14 B) + 6 tail bits]/216 b/symbol = 0.6204... symbols, or one complete symbol. Thus, the radio will need 16 µs + 4 µs + 4 µs/symbol * 1 symbol = 24 µs to transmit the ACK.

Before the receiver may acknowledge the received data, however, it must wait the Short InterFrame Space (SIFS) interval. This interval is dependent on the PHY being used. For 802.11a, it equals 16 µs.

So far, the frame/ACK sequence has taken 36 µs + 16 µs + 24 µs = 76 µs. The calculation is not yet complete. Assuming that the transmitting station receives the acknowledgement frame, the transmitter must wait the Distributed InterFrame Space (DIFS) interval before proceeding. The DIFS is equivalent to the Inter Packet Gap (IPG) in Ethernet. No device may use the radio channel for the DIFS interval after an ACK is received. For 802.11a, the DIFS is 34 µs. Thus, the total time required to transmit an 80-B datagram before another frame can be transmitted is 76 µs + 34 µs = 110 µs.

This 110-µs value is the absolute best case, in which no propagation delay exists between the devices. In the described scenario, the devices are 5 m apart. The speed of light in air is 299,702,589 m/s. The radio waves will therefore take (5 m/299,702,589 m/s) * 1,000,000 µs/s = 0.0167 µs to travel between the devices. To be completely accurate, add 2 * 0.0167 µs = 0.0334 µs to the total time calculation to account for the propagation of the data frame in one direction and the ACK in the other. This small amount of delay may seem insignificant. As distances between stations increase, however, the maximum theoretical frame rate will gradually decrease.

For simplicity, we will neglect this tiny delay and use a frame/ACK total transmission time of 110 µs. Our theoretical maximum frame rate is now:

110 µs/frame * 1 s/1,000,000 µs = 0.000110 s/frame
1 frame/0.000110 s = 9090.90... frames/s

Sending fractional frames is not possible. As a result, our theoretical maximum based on our given assumptions is 9090 frames/s.
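
The arithmetic of this worked example is easy to double-check. The short Python sketch below reproduces the numbers above under the same assumptions (54 Mbps, an 80-B frame, a 5-m separation, and the ACK sent at the data rate).

```python
import math

# All times in microseconds; values follow the 802.11a worked example above.
SYMBOL_TIME = 4.0          # OFDM symbol duration
BITS_PER_SYMBOL = 216      # data bits per symbol at 54 Mbps
PLCP_OVERHEAD = 16 + 4     # preamble plus rate/length symbol
SIFS, DIFS = 16.0, 34.0    # 802.11a interframe spaces

def airtime(frame_bytes):
    """Time to transmit one frame, including PLCP overhead."""
    bits = 16 + 8 * frame_bytes + 6                 # service + data + tail bits
    return PLCP_OVERHEAD + SYMBOL_TIME * math.ceil(bits / BITS_PER_SYMBOL)

data_us = airtime(80)                               # 36 us
ack_us = airtime(14)                                # 24 us
total_us = data_us + SIFS + ack_us + DIFS           # 110 us

# Round-trip propagation delay for 5 m (data frame out, ACK back).
prop_us = 2 * (5 / 299_702_589) * 1_000_000         # ~0.0334 us

print(total_us, prop_us)                            # 110.0 and ~0.0334
print(math.floor(1_000_000 / total_us))             # 9090 frames/s
```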

Knowing the theoretical maximum frame rate for an 80-B frame is interesting. To be really useful, however, one needs to be able to generate these numbers for all frame sizes and signaling rates. Despite the apparent complexity, the above steps can be simplified into a single two-variable equation.

Let frame size = x (in bytes) and data bits per symbol = y. Then for any valid x and y, the theoretical maximum frame rate in frames/s is:

floor(1,000,000/(90 + 4 * [ceiling((16 + 8x + 6)/y) + ceiling(134/y)]))

where all times are in microseconds, the constant 90 µs covers the two preambles with their rate/length symbols (20 µs each), the SIFS (16 µs), and the DIFS (34 µs), and 134 is the number of bits in the 14-B ACK plus its service and tail bits. Here, ceiling(x) returns the smallest integer ≥ x and floor(x) returns the largest integer ≤ x. The verification of this formula is left as an exercise for the reader.

Using this equation, one can calculate the theoretical maximum frame rates for every supported signaling rate and a variety of frame sizes. Figure 1 displays a graph of the results. Note that the theoretical maximum frame rates for 802.11b and 802.11g may be calculated in a similar fashion.
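
Evaluating the formula across rates and frame sizes is straightforward to automate. The following Python sketch implements the equation above and tabulates a few combinations; the frame sizes listed are illustrative, and the same simplifying assumptions apply (clean channel, no rate shifting, ACK returned at the data rate).

```python
import math

RATES_MBPS = [6, 9, 12, 18, 24, 36, 48, 54]         # 802.11a signaling rates
FRAME_SIZES = [80, 256, 512, 1024, 1534]             # illustrative 802.11 PDU sizes, bytes

def max_frame_rate(x, y):
    """Theoretical maximum frames/s for an x-byte frame at y data bits per symbol."""
    data_symbols = math.ceil((16 + 8 * x + 6) / y)    # service + data + tail bits
    ack_symbols = math.ceil((16 + 8 * 14 + 6) / y)    # 14-B ACK
    total_us = 90 + 4 * (data_symbols + ack_symbols)  # preambles + SIFS + DIFS = 90 us
    return math.floor(1_000_000 / total_us)

for mbps in RATES_MBPS:
    bits_per_symbol = mbps * 4                        # 4-us OFDM symbol
    rates = [max_frame_rate(x, bits_per_symbol) for x in FRAME_SIZES]
    print(f"{mbps:>2} Mbps: {rates}")
```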

Now that the threshold of wireless 802.11a device performance has been established, a test methodology can be designed. The DUTs must be allowed to perform as closely as possible to the calculated theoretical limits. They must therefore be configured to mimic the assumptions that were utilized during the calculations.

First, the test must be conducted in a controlled RF environment. For the calculations to be reasonable, every transmitted frame must reach its destination on the first attempt. This will only be possible on a completely clean RF channel. On such a channel, each device will have a strong signal level from its peer. Even on a clean channel, however, a device could rate shift.

In addition, management, broadcast, and multicast frames are usually forwarded at one of the mandatory signaling rates of 6, 12, or 24 Mbps. The transmission time of even small frames is significantly higher at lower data rates than at higher rates. One can therefore minimize the influence of these unavoidable PDUs by fixing the data rate at a constant value. Most 802.11 devices allow the signaling rate to be fixed for all transmitted data. For benchmark testing, the DUT should use a constant signaling rate.

With a clean radio channel and fixed data rates, an 802.11 link will have the same consistent and repeatable properties as a wired data link. Clearly, using existing benchmark methodologies is a reasonable way to test performance.

Now that we know how well wireless devices should be capable of performing and how to configure them for a proper benchmark, we are ready to test them. To set up a proper test bed, some equipment is needed. The main components of this test bed include a control PC, an Ethernet generator, and an RF multipath channel emulator or fader. A benchmark test setup for an access point (AP) and a client device is shown in Figure 2.

Ideally, a benchmark should test one and only one device. Unfortunately, no traffic generators are presently dedicated to 802.11. The test bed will therefore have to test a system that uses Ethernet as input and output, but transmits the data over an 802.11 link. An example of such a system is a fast PC workstation with a WLAN client card and an Ethernet port, configured as a router and an access point. Alternatively, the system can consist of two APs configured as a wireless bridge.

A PC is far from ideal as a wireless-to-Ethernet translator. Its large buffers and inaccurate clock can induce large variances in latency and jitter measurements. When used for comparative tests, however, the same PC can at least act as a control for these fluctuations.

The production of the test traffic requires an Ethernet generator/analyzer. The more functionality the generator/analyzer offers, the more useful it will be. At a minimum, it should support the Address Resolution Protocol (ARP), generate line-rate 100-Mbps Ethernet Layer 3 traffic, and perform RFC 2544-style tests.

As stated previously, benchmarks should be repeatable and performed in a controlled environment. WLAN performance depends on the RF environment, so any repeatable test setup must include controlled RF surroundings. One way to accomplish this is with an RF multipath channel emulator/fader. This type of device provides complete control and repeatability of faded environments for radio-receiver and sensitivity testing. The test setup of a wireless bridge is analogous (Figure 3).

Once the test bed is correctly configured, the communication between the devices under test must be verified at the selected signaling rate. The devices should only be able to communicate through the cabled RF links attached to the RF multipath fader. If one of the channels is disabled in that multipath fader, the DUTs must not be able to communicate with each other. This aspect ensures a proper test configuration. In addition, the devices should be unable to communicate if they are disconnected from the RF multipath fader.

Before beginning any benchmark testing, there are a few details to highlight. First, when using the Ethernet-to-Ethernet testing model, the engineer is testing a system. Either or both of the devices in the system could be responsible for poor tested performance.

Secondly, and most importantly, an 802.11 transmission channel is half-duplex. Only one device can transmit at a time, much like shared Ethernet. The two protocols handle contention differently, though: shared Ethernet detects collisions (CSMA/CD), while 802.11 tries to avoid them (CSMA/CA). If only one device is transmitting, that device will have access to the entire medium without incident. If two devices are transmitting, a collision is likely.

Clearly, collisions reduce network performance. They can become a significant problem when a large number of clients use the network. When using multiple transmission sources, be careful not to oversubscribe the channel. Also, unidirectional traffic should be used when testing throughput. It will minimize unwanted collisions.

Testing may begin once a setup is properly connected and configured and proper IP addresses have been assigned to the DUTs. Select an RFC 2544 test and start measuring performance.

A number of variations can be made to the test bed to check various aspects of the 802.11 protocol. A few examples are listed below:

  • Repeat throughput and frame loss tests with fade models in place. Make comparative runs with rate shifting enabled and disabled. Does the rate-shifting algorithm truly help performance?
  • Make comparative tests between the power-save mode and the constant-access mode.
  • Add attenuation to the signal strength to test receiver sensitivity.
  • Add delay to the signal to simulate radio distance.

Clearly, this test bed could use a few improvements. Using a PC to forward test frames is definitely not an ideal solution. However, many vendors are already working to produce true 802.11 generator and analyzer interfaces. These devices have the added benefit of being able to report MAC-level errors and statistics. Some reportedly have the ability to emulate multiple wireless clients in order to perform real load testing. These two capabilities belong together. After all, performance drops off significantly with a large number of clients and their associated collisions as each client contends for shared network resources.

The popularity of wireless networks ensures that they will become a permanent part of the networking landscape. With the advent of useful testing procedures and equipment, users can be assured that manufacturers will continue to improve their products. This evolution ensures the improvement of the overall wireless experience with each new generation of gear.
