Diverse Roads Lie Ahead For WLANs

Oct. 1, 2003
To Succeed In Today's Market, It's Vital To Identify And Address The Architectural Issues That Span The RF And DSP Domains.

By capturing the imagination of home, enterprise, and mobile users, wireless-local-area-network (WLAN) technologies have attracted both chip and equipment companies. The rapid and unrelenting evolution of WLAN technology has strained the standards-setting bodies and interoperability forums. Product OEM designers and their semiconductor-design counterparts also are struggling to deal with this market's pressures.

Within the past few years, IEEE 802.11-based products have proven to be low-cost, easy-to-use devices for enabling wireless-Ethernet and -Internet connectivity. Products based on the 11-Mbps 802.11b standard are now available in a variety of form factors to suit computers, PDAs, cell phones, and gaming devices.

For enterprise applications, interest in higher-performance WLANs originally led to the 802.11a standard. This standard transmits data at up to 54 Mbps. However, its effective range is less than the range of 802.11b networks. The 802.11a networks also have met with muted market acceptance because they use the 5.2-GHz spectrum. This spectrum is incompatible with the 2.4-GHz spectrum that's used by the many previously installed 802.11b devices. With an 802.11a-only network card, users can't readily communicate with the already deployed 802.11b access points.

The latest WLAN technology to emerge is 802.11g. The 802.11g standard provides several important improvements for WLAN users. For instance, it includes the faster data rates and robust orthogonal frequency division multiplexing (OFDM) of 802.11a. But it operates in the same 2.4-GHz unlicensed ISM frequency band as 802.11b products. Because the radios operate in the same frequency spectrum, 802.11g makes backward compatibility with 802.11b-based products possible. In addition, such compatibility is mandatory according to the 802.11g standard that was ratified by the IEEE this past summer.

For OEMs and ODMs, the existence of 802.11 in a, b, and g versions leads to a plethora of new-product alternatives. This trend is reinforced by their customers' reluctance to discard recently deployed systems. Potential products include g-only, dual-mode (a + b), and multimode devices (a + b + g). Each of these approaches has its own cost, performance, and time-to-market tradeoffs.

The multimode products are the likely long-term market winners. They'll provide the best overall user experience and performance. For example, they'll offer seamless roaming. Such roaming will be supported by the dynamic selection of 802.11a, b, or g, depending on the system capabilities, channel loads, and the make-up of information being exchanged by the user. Multimode products enable customers to take advantage of the larger coverage area that's offered by 802.11g and the higher user density supported by 802.11a. They also support a gradual meshing of enterprise end points, thereby enabling service at 2.4 and 5.2 GHz.

To productize these new technologies, semiconductor solution providers are following varied paths. Radio system architectures continue to play a pivotal role in determining these solutions' overall cost, performance, robustness, size, and power-consumption characteristics. Meanwhile, semiconductor companies are struggling to deliver the best possible technology to wireless-system integrators. But those integrators are reluctant to pay a premium for multimode solutions. The only way to ensure success is to identify and address the architectural design issues that span the radio-frequency IC (RFIC), baseband digital-signal-processor (DSP), and media-access-controller (MAC) functional blocks. Together, those blocks make up a WLAN chip set.

When designing multimode 802.11 solutions, it's no longer viable to independently optimize the RF and DSP sub-blocks. In older wireless systems, the data rates were low enough that second-order RF impairments didn't require compensation. But today's radio systems use very dense I/Q constellations to achieve the required high bit rates. The employed radio architectures therefore need to deliver signals with higher signal-to-noise ratios (SNRs) to the data detector. Using active-distortion-cancellation techniques, the RF + DSP subsystem can meet these stringent requirements for throughput and range.
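To see why dense constellations drive the SNR requirement, consider the back-of-the-envelope simulation below (Python/NumPy, with illustrative parameters rather than figures from any particular chip set). It compares symbol-error rates for QPSK and 64-QAM at the same per-symbol SNR over a simple additive-white-Gaussian-noise channel.

  # Illustrative sketch: denser I/Q constellations demand higher SNR.
  # QPSK vs. 64-QAM symbol-error rate at the same per-symbol SNR over AWGN,
  # using nearest-neighbor detection. Parameter values are illustrative only.
  import numpy as np

  rng = np.random.default_rng(0)

  def square_qam(m):
      """Unit-average-power square M-QAM constellation."""
      k = int(np.sqrt(m))
      levels = np.arange(-(k - 1), k, 2)                  # e.g., -7..7 for 64-QAM
      points = np.array([x + 1j * y for x in levels for y in levels])
      return points / np.sqrt(np.mean(np.abs(points) ** 2))

  def symbol_error_rate(m, snr_db, n_symbols=50_000):
      const = square_qam(m)
      tx_idx = rng.integers(0, m, n_symbols)
      tx = const[tx_idx]
      noise_power = 10 ** (-snr_db / 10)                  # signal power is 1
      noise = np.sqrt(noise_power / 2) * (rng.standard_normal(n_symbols)
                                          + 1j * rng.standard_normal(n_symbols))
      rx = tx + noise
      rx_idx = np.argmin(np.abs(rx[:, None] - const[None, :]), axis=1)  # nearest point
      return np.mean(rx_idx != tx_idx)

  for snr_db in (10, 18, 26):
      print(f"SNR {snr_db:2d} dB: QPSK SER = {symbol_error_rate(4, snr_db):.4f}, "
            f"64-QAM SER = {symbol_error_rate(64, snr_db):.4f}")

At an SNR that leaves QPSK essentially error-free, 64-QAM is still unusable, which is why every decibel of noise or distortion matters at 54 Mbps.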

The performance that is demanded by these new bandwidths and modulation schemes is putting quite a squeeze on radio designs. For proof of this statement, look at the evolution of WLANs. In the early days, WLAN data rates migrated from 1 and 2 Mbps to 5.5 and 11 Mbps. In the process, the RFIC architectures evolved from discrete components to a combination of discrete and integrated circuits, which could then support zero-intermediate-frequency (ZIF), or direct-conversion, radios.

Yet constellation sizes largely remained the same: BPSK (1 bps/Hz) and QPSK (2 bps/Hz). On the DSP side, the design challenges grew to include support for complementary code keying (CCK) modulation in addition to the earlier direct-sequence spread-spectrum (DSSS) modulation. To reach 54-Mbps data rates, OFDM was introduced. It uses 64-QAM constellations (6 bps/Hz). This evolution split the design community into two camps. One camp supported ZIF as the optimal radio architecture, while the other favored very-low IF (VLIF).

The radio subsystem's SNR is a crucial design constraint because of the combination of the bit rate and the dense constellation of current Wi-Fi radios. The final SNR that's seen by the data detector is determined by:

  • The aggregated noise that's injected at the transmitter (thermal noise, phase noise, 1/f noise, quantization noise, and local-oscillator (LO) leakage)
  • The Rayleigh fading and path loss that occur during transmission through the wireless medium
  • The noise injected at the receiver (thermal noise, phase noise, 1/f noise, etc.) combined with distortions due to frequency and DC offsets and I/Q imbalance

The receiver also experiences distortion due to signal clipping and interference from adjacent channels. All of these sources of noise and distortion degrade the quality of the signal that's seen by the data detector. If the final SNR is less than the target number, the DSP needs to perform distortion cancellation.

To design and implement an OFDM radio, the challenge is in understanding the total noise budget and its distribution across the RF and DSP domains. At 6 Mbps, the SNR is limited more by thermal noise and less by signal distortion. At 54 Mbps, distortion makes up a larger share of the total degradation.
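One way to reason about that budget is to express each contributor relative to the signal and sum the powers, as in the rough sketch below (the individual levels are placeholders, not measurements from any actual radio).

  # Illustrative noise-budget aggregation: each noise or distortion contributor is
  # expressed in dBc (power relative to the signal), converted to linear power,
  # summed, and converted back to the composite SNR seen by the data detector.
  # All of the individual levels below are assumed, for illustration only.
  import math

  def composite_snr_db(contributors_dbc):
      total_noise = sum(10 ** (level / 10) for level in contributors_dbc.values())
      return -10 * math.log10(total_noise)

  budget = {
      "thermal noise":          -30.0,
      "phase noise":            -33.0,
      "quantization noise":     -38.0,
      "residual I/Q imbalance": -35.0,
      "residual DC/LO leakage": -36.0,
  }

  print(f"Composite SNR at the detector: {composite_snr_db(budget):.1f} dB")

  # Removing one dominant impairment shows how much headroom it was consuming.
  without_phase = {k: v for k, v in budget.items() if k != "phase noise"}
  print(f"Without the phase-noise term:  {composite_snr_db(without_phase):.1f} dB")

The point is that the detector sees only the sum; whether a given decibel of headroom is reclaimed in the RF design or through DSP-based cancellation is an architectural choice.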

The choice of the radio architecture also carries important implications. A 54-Mbps OFDM radio cannot achieve the required performance without some level of distortion compensation and calibration. The actual degree of that calibration depends on whether the architecture is VLIF, ZIF, or superheterodyne. It also is influenced by the choice of the semiconductor process used in the RFIC (CMOS, BiCMOS, SiGe, etc.). SiGe, for example, inherently provides a much lower noise environment than CMOS.

That required degree of compensation, in turn, impacts the overall system BOM because it can increase die size and off-chip filtering requirements. The compensation level also affects the following: the power dissipation of the overall design; the delivered performance; the robustness of the total solution in various operating scenarios; and the tolerance to manufacturing variations.

Traditionally, radio front ends have been based on heterodyne/superheterodyne architectures. Such architectures use one or more intermediate-frequency (IF) stages to achieve "good" selectivity and sensitivity properties. As a result, heterodyne/superheterodyne radios demand a large number of discrete components. This requirement makes it difficult to achieve high-level integration. To drive WLAN systems to low-cost solutions that are targeted for mass-market applications, the channel-filtering function must be pushed to low frequencies. This task can be accomplished through the use of either ZIF or VLIF radio architectures.

The choice of a ZIF or VLIF architecture should be determined by the targeted SNR, the blocker specifications, and the targeted minimum sensitivity of the radio. When it comes to design, the two architectures present different challenges on four separate points:

  • Receiver I/Q imbalance is always a problem for ZIF. Avoiding the crosstalk that imbalance creates requires orthogonality between the in-phase (I) and quadrature (Q) components (i.e., 90° of separation), which is difficult to achieve in integrated transceivers. In the absence of an adjacent channel, I/Q imbalance isn't a problem for VLIF receivers. (A simple baseband model of this impairment and of DC offset follows this list.)
  • In an OFDM signal, frequency offset causes intercarrier interference. When combined with a DC offset, the frequency offset causes further signal distortion. For VLIF architectures, only intercarrier interference is a problem. Some degree of frequency-offset correction is required for both ZIF and VLIF receivers.
  • As the "Achilles heel" of ZIF receivers, DC offset requires compensation for optimum radio performance. Direct conversion translates the radio signal straight to baseband. As a result, the majority of the gain and filtering are performed in a frequency band from DC to the signal bandwidth. In the process, the signal path's intrinsic DC offsets are amplified. The dynamic range of the circuit is thereby degraded. In addition, a DC offset can be created if the LO signal leaks to the RF front end and self-mixes. In contrast, DC offset isn't a problem with VLIF architectures. The intermediate stage simply rejects it out.
  • 1/f noise also is a primary source of concern for ZIF designs—especially if the RFIC is based on CMOS process technology. For VLIF architectures, 1/f noise doesn't pose a problem.
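The first and third of these impairments (I/Q imbalance and DC offset) are easy to visualize with a simple complex-baseband model, such as the sketch below; the gain, phase, and offset values are assumed for illustration and don't describe any particular transceiver.

  # Illustrative complex-baseband model of two ZIF impairments: receive I/Q
  # gain/phase imbalance and a static DC offset. All parameter values are assumed.
  import numpy as np

  rng = np.random.default_rng(1)

  def iq_imbalance_coeffs(gain_db, phase_deg):
      """Two-parameter imbalance model: the received signal becomes a*x + b*conj(x)."""
      g = 10 ** (gain_db / 20)
      phi = np.deg2rad(phase_deg)
      a = (1 + g * np.exp(-1j * phi)) / 2
      b = (1 - g * np.exp(1j * phi)) / 2
      return a, b

  # Clean, unit-power QPSK symbols
  tx = (rng.choice([-1, 1], 20_000) + 1j * rng.choice([-1, 1], 20_000)) / np.sqrt(2)

  # Assumed impairments: 0.5-dB gain and 3-degree phase imbalance, plus a small DC offset
  a, b = iq_imbalance_coeffs(gain_db=0.5, phase_deg=3.0)
  dc = 0.05 + 0.08j
  rx = a * tx + b * np.conj(tx) + dc          # the conj() term is the I/Q "crosstalk"

  irr_db = 10 * np.log10(abs(a) ** 2 / abs(b) ** 2)
  est_dc = rx.mean()
  print(f"Image-rejection ratio: {irr_db:.1f} dB")
  print(f"Estimated DC offset:   {est_dc.real:.3f} {est_dc.imag:+.3f}j")

In a ZIF receiver, both terms land on top of the wanted signal and must be estimated and removed digitally; a VLIF receiver's IF filtering disposes of the DC term before it reaches the detector.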

Cost is one of the major advantages of a ZIF architecture. ZIF eliminates the need for expensive filters at radio frequencies for image rejection. It also does away with filters at intermediate frequencies for channel selection. As a result, it offers the potential for a relatively high degree of silicon integration. Direct conversion has proven to be a good alternative for select applications. It leads to reduced component count, power consumption, and printed-wiring-board real-estate requirements.

In the 802.11a and 802.11g standards, OFDM is used to increase 802.11 data rates to 54 Mbps. It increases spectral efficiency and allows greater channel throughput. With OFDM, the high-speed data signal is transported over parallel subcarriers within a 20-MHz channel: a 64-point FFT in which 48 subcarriers carry data and four carry pilots. At the top 54-Mbps rate, a 64-QAM constellation is used on each data subcarrier.
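A rough sketch of that subcarrier arithmetic appears below (Python/NumPy; it follows the 802.11a/g numbers but is an illustration, not a compliant modem).

  # Illustrative 802.11a/g-style OFDM symbol construction: QAM symbols are mapped
  # onto the used bins of a 64-point IFFT (48 data + 4 pilots, DC unused) and a
  # 16-sample cyclic prefix is prepended. Not a compliant modem, just the arithmetic.
  import numpy as np

  N_FFT, N_CP = 64, 16
  PILOT_BINS = [-21, -7, 7, 21]
  DATA_BINS = [k for k in range(-26, 27) if k != 0 and k not in PILOT_BINS]   # 48 bins

  def ofdm_symbol(data_syms, pilot_val=1.0):
      """Build one 80-sample time-domain OFDM symbol from 48 data symbols."""
      freq = np.zeros(N_FFT, dtype=complex)
      freq[np.array(DATA_BINS) % N_FFT] = data_syms       # negative bins wrap around
      freq[np.array(PILOT_BINS) % N_FFT] = pilot_val
      time = np.fft.ifft(freq) * np.sqrt(N_FFT)
      return np.concatenate([time[-N_CP:], time])         # cyclic prefix + 64 samples

  # Top-rate throughput check: 48 data subcarriers x 6 bits (64-QAM) x rate-3/4
  # coding, delivered every 4-us symbol (80 samples at 20 Msps).
  print(f"{48 * 6 * 0.75 / 4e-6 / 1e6:.0f} Mbps")         # -> 54 Mbps

  levels = np.arange(-7, 8, 2)                            # 64-QAM levels per axis
  rng = np.random.default_rng(2)
  qam = (rng.choice(levels, 48) + 1j * rng.choice(levels, 48)) / np.sqrt(42)
  print(ofdm_symbol(qam).shape)                           # (80,)

Packing six bits onto each data subcarrier is exactly what drives the SNR requirement discussed earlier.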

The data demodulation of 64-QAM requires a high SNR. OFDM modulation is sensitive to the distortion caused by the combination of DC and frequency offsets. As a result, the ZIF architecture is a poor choice for 802.11a/g receivers.

By their very nature, VLIF receivers filter the OFDM signal so that DC is rejected. This approach avoids the DC-offset issues of ZIF-based architectures. In a VLIF receiver, image rejection can be achieved by providing precise quadrature signals from the local oscillator to the mixers. The channel selection is performed by a polyphase filter. This filter aids in the image rejection of the final downconversion to DC. It also relaxes the dynamic-range requirements on the analog-to-digital converter (ADC).

The key to a successful OFDM radio implementation is to avoid the distortions that compromise data detection. If DC offset is eliminated through a judicious radio-architecture choice, the DSP only needs to perform frequency-offset correction.
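As a concrete (and simplified) picture of that remaining DSP task, the sketch below estimates a carrier-frequency offset from a repeating preamble, in the spirit of the 802.11a/g short training symbols; the preamble contents and the offset value are made up for illustration.

  # Illustrative frequency-offset estimation from a repeating preamble (a
  # Moose-style estimator). The 16-sample repetition period mirrors the 802.11a/g
  # short training symbols; the preamble contents and offset value are assumed.
  import numpy as np

  FS = 20e6            # sample rate, 20 Msps
  L = 16               # repetition period of the training waveform, in samples
  CFO_HZ = 120e3       # assumed carrier-frequency offset to recover

  rng = np.random.default_rng(3)
  block = (rng.standard_normal(L) + 1j * rng.standard_normal(L)) / np.sqrt(2)
  preamble = np.tile(block, 10)                       # one block repeated ten times

  # What the receiver sees: the preamble rotated by the offset, plus a little noise
  n = np.arange(len(preamble))
  rx = preamble * np.exp(2j * np.pi * CFO_HZ * n / FS)
  rx += 0.05 * (rng.standard_normal(len(rx)) + 1j * rng.standard_normal(len(rx)))

  # The phase of the lag-L correlation corresponds to 2*pi*CFO*L/FS
  corr = np.sum(np.conj(rx[:-L]) * rx[L:])
  cfo_est = np.angle(corr) * FS / (2 * np.pi * L)
  print(f"Estimated CFO: {cfo_est / 1e3:.1f} kHz (actual {CFO_HZ / 1e3:.0f} kHz)")

  # De-rotate the samples before OFDM demodulation
  rx_corrected = rx * np.exp(-2j * np.pi * cfo_est * n / FS)

With DC offset already removed by the radio architecture, something of this sort is roughly the extent of the offset correction left to the baseband.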

Because of the distortion demands on zero-intermediate-frequency receivers, some designers are reducing transmit-side distortion so that the receiver's distortion requirements can be relaxed while still meeting overall operating requirements. The WLAN market is governed by standards set by the IEEE, however. Tightening the transmitter to loosen the allowable receive distortion can create interoperability problems between vendors, a situation that would certainly dampen WLAN industry growth.

Given the SNR requirements on 54-Mbps wireless communications at both 2.4 and 5.2 GHz, range is very limited. Often, range is a key barrier to satisfactory enterprise deployment. Figure 1 shows how the received signal strength (SNR at the data detector) varies with Rayleigh fading. For future WLAN systems, the use of multiple-input, multiple-output (MIMO) configurations is being studied. Aside from the potential for increased throughput, a MIMO-enabled WLAN solution could provide increased range. At the receiver DSP input, a two-antenna receiver system provides a much higher SNR level with decreased fading depth.

A MIMO configuration could potentially use a dual-antenna WLAN access point with a single-antenna legacy client (FIG. 2). In the receive mode, the time-division-duplex (TDD) access point would use a maximal-ratio-combining (MRC) algorithm. In transmit mode, it would utilize a maximal-ratio-transmission (MRT) algorithm. Compliant products could be implemented and deployed in the field without any changes to either the 802.11a/g standard or the legacy-client installed base.
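The receive-side combining can be illustrated with a few lines of simulation (the channel and SNR values are assumed, not taken from any measured system): each antenna's samples are weighted by the conjugate of its channel estimate and summed, so a deep fade on one branch is masked by the other.

  # Illustrative two-branch maximal-ratio combining (MRC). Channel and SNR values
  # are assumed; the point is that branch SNRs add, masking deep fades on either antenna.
  import numpy as np

  rng = np.random.default_rng(4)
  n_sym, snr_db = 20_000, 8
  noise_var = 10 ** (-snr_db / 10)

  # QPSK symbols over two independent, flat Rayleigh-faded branches
  s = (rng.choice([-1, 1], n_sym) + 1j * rng.choice([-1, 1], n_sym)) / np.sqrt(2)
  h = (rng.standard_normal((2, n_sym)) + 1j * rng.standard_normal((2, n_sym))) / np.sqrt(2)
  w = np.sqrt(noise_var / 2) * (rng.standard_normal((2, n_sym))
                                + 1j * rng.standard_normal((2, n_sym)))
  r = h * s + w                                        # per-antenna received samples

  # MRC: weight each branch by conj(h), sum, normalize by the combined channel gain
  mrc = np.sum(np.conj(h) * r, axis=0) / np.sum(np.abs(h) ** 2, axis=0)
  single = r[0] / h[0]                                 # single-antenna reference

  def qpsk_ser(est):
      det = (np.sign(est.real) + 1j * np.sign(est.imag)) / np.sqrt(2)
      return np.mean(det != s)

  print(f"Single-antenna symbol-error rate: {qpsk_ser(single):.4f}")
  print(f"Two-branch MRC symbol-error rate: {qpsk_ser(mrc):.4f}")

The transmit-side MRT case is the mirror image: the access point applies the conjugate weights before transmission so that the two signals add coherently at the legacy client's single antenna.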

With the growing use of high-bandwidth applications, such as High-Definition Television (HDTV) delivery, the future of Wi-Fi radios is likely to stay the course on OFDM technology, even as implementations deviate from today's designs. To extend both range and bandwidth, future OFDM technology will employ MIMO techniques (FIG. 3). With the SNR improvements provided by properly calibrated dual antennas, future WLAN systems may even be capable of switching dynamically between throughput and range enhancements. Ultimately, as a client moves within the radio coverage of an access point, the link will simply adjust its throughput dynamically.
