What are some of the challenges facing basestation designers today?
The five biggest are the numerous wireless standards that must be accommodated, increasingly complex frequency assignments, the need for greater operating efficiency, pressure to reduce capital expenditures (CAPEX), and scalability.
Why must designers deal with so many standards?
Most new basestations benefit from being able to handle multiple air interfaces. New basestations that implement Long Term Evolution (LTE) or WiMAX that can also support past air interfaces like WCDMA, cdma2000, and GSM will be in strong demand. A typical basestation will still have to support single-channel and multiple-channel GSM as well as, say, LTE.
What are the frequency issues?
Basestations will still use their assignments in the 800- to 950-MHz, 1800- to 1950-MHz, and 2110- to 2170-MHz range but will extend to new frequencies like 1.7, 2.3, 2.6, and 3.5 GHz and even 700 MHz. A basestation must be flexible enough to change as new subscribers and services are added.
Why is operating efficiency so vital?
Power amplifiers (PAs) in a basestation typically account for up to 70% of overall site power consumption. This stems from the poor efficiency inherent in amplifying signals that use higher-order modulation. Deploying efficiency-improvement algorithms in the digital domain significantly improves PA efficiency.
Lowering power consumption greatly reduces the operating expenses (OPEX) of a network operator. Furthermore, with the green movement demanding less power usage and a lower carbon footprint, carriers not only can help the environment but also cut many millions of dollars from their bottom line.
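To make the OPEX argument concrete, here is a rough back-of-the-envelope calculation. The 70% figure comes from the article; the site power, electricity price, and efficiency numbers are illustrative assumptions only.

```python
# Hypothetical numbers: why PA efficiency dominates site OPEX.
# Assumes PAs draw 70% of total site power (per the article) and an
# electricity price of $0.12/kWh; all figures are illustrative.

def annual_energy_cost(site_power_w, price_per_kwh=0.12):
    """Cost of running a load continuously for one year."""
    hours_per_year = 24 * 365
    return site_power_w / 1000 * hours_per_year * price_per_kwh

# Baseline site: 2 kW total, of which 70% (1.4 kW) is the PA subsystem.
total_w = 2000.0
pa_w = 0.70 * total_w
other_w = total_w - pa_w

# Raising PA efficiency from 10% to 40% delivers the same RF output
# with one quarter of the DC input power.
pa_w_improved = pa_w * (0.10 / 0.40)
improved_total_w = other_w + pa_w_improved

saving = annual_energy_cost(total_w) - annual_energy_cost(improved_total_w)
print(f"Baseline: {total_w:.0f} W, improved: {improved_total_w:.0f} W")
print(f"Annual saving per site: ${saving:.0f}")
```

Even with these modest assumptions, the saving per site runs to four figures annually; multiplied across tens of thousands of sites, the "many millions of dollars" claim follows.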
What is the issue with CAPEX?
CAPEX increases as new standards and systems are implemented. Carriers are constantly under the gun to reduce CAPEX. However, new FPGA-based equipment featuring a new multimode design using digital pre-distortion and other technologies is smaller, lower in cost, and more power efficient.
What does scalability mean?
It means designs should be able to be used in a variety of configurations, not only the macrocell. Today, microcell, picocell, and femtocell basestations are more common than ever, and great savings accrue if a single design can be used for all or most configurations.
What are some of the major objectives in a new basestation design?
Besides the usual lower development costs and faster time-to-market, new designs must be more flexible so new air interface standards may be quickly changed or updated. The design also must be more reliable. A big objective is to reduce both the initial CAPEX and OPEX associated with the equipment.
What part of the basestation design will yield the greatest benefits?
The transmitter is the primary target, as many functions now implemented with ASICs and other individual ICs can be combined in an FPGA, with lower costs and higher reliability. Of course, any circuitry associated with the PAs will yield major gains in efficiency and lower power consumption and heat.
What does the transmitter architecture look like?
The figure shows a typical UMTS WCDMA transmit chain. Note the individual segments with their approximate costs and power consumption levels. This design uses receive diversity and digital pre-distortion in the PA.
Yet these sections can be implemented in a single FPGA. Six chips become one, cost is cut to less than a third, and power consumption is almost halved, all while improving reliability. And, the FPGA’s programmability provides the flexibility to handle multiple frequencies for different air interfaces.
How does digital pre-distortion (DPD) improve performance?
It improves the efficiency of the PAs. All modern cell-phone standards use air interfaces that need a linear PA.
WCDMA and LTE are both broadband techniques designed to achieve greater capacity and spectral efficiency. Higher-order modulation schemes like QAM demand this linearity.
Most PAs are LDMOS class AB designs that rarely achieve an efficiency greater than 10%, so 90% of the total power used ends up as waste heat. The inefficiency is inherent in the class AB design, but it also results from backing off the amplifier output to handle signals with a high peak-to-average power ratio (PAR) and to prevent distortion, which causes adjacent-channel power leakage.
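A short sketch shows why multicarrier signals force this back-off: the PAR of a sum of many subcarriers is far higher than that of a single tone. The subcarrier count and sample sizes below are arbitrary, illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def papr_db(x):
    """Peak-to-average power ratio of a complex envelope, in dB."""
    power = np.abs(x) ** 2
    return 10 * np.log10(power.max() / power.mean())

n = 4096                      # time samples
subcarriers = 256             # OFDM-like: sum of random-phase tones
t = np.arange(n)
phases = rng.uniform(0, 2 * np.pi, subcarriers)
ofdm_like = sum(np.exp(1j * (2 * np.pi * k * t / n + p))
                for k, p in zip(range(1, subcarriers + 1), phases))

single_tone = np.exp(1j * 2 * np.pi * 5 * t / n)

print(f"single tone PAR:  {papr_db(single_tone):.1f} dB")  # ~0 dB
print(f"multicarrier PAR: {papr_db(ofdm_like):.1f} dB")
```

A constant-envelope tone has 0-dB PAR, while the multicarrier composite typically lands around 8 to 11 dB, which is the headroom a class AB PA must reserve without crest factor reduction.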
How does the DPD fix that?
The solution is to drive the amplifier harder to get more power, which introduces some signal distortion. But if you can predict the type of distortion and then pre-distort the signal in a reverse manner, the distortion is cancelled out. This extends the linear region of the operating range and produces more output power at an efficiency approaching 40%. With DPD, a smaller amplifier running at higher efficiency can then deliver the desired output power.
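The core idea can be seen in a toy example. Assume a memoryless PA whose gain compresses with a cubic term (the model and coefficient are assumptions for illustration, not a real device); applying a simple third-order inverse first largely cancels the compression.

```python
import numpy as np

# Toy illustration of pre-distortion (not a production DPD algorithm).
def pa(x):
    return x - 0.1 * x**3          # assumed PA model: cubic gain compression

def predistort(x):
    return x + 0.1 * x**3          # first-order inverse of the cubic term

x = np.linspace(-1, 1, 201)        # normalized input drive
direct = pa(x)
linearized = pa(predistort(x))

err_direct = np.max(np.abs(direct - x))
err_dpd = np.max(np.abs(linearized - x))
print(f"max deviation from linear, no DPD:   {err_direct:.4f}")
print(f"max deviation from linear, with DPD: {err_dpd:.4f}")
```

Cascading the inverse with the PA shrinks the worst-case deviation from a straight line by roughly a factor of three in this toy model; real DPD uses higher-order and adaptive inverses for much deeper cancellation.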
To implement the DPD function, the transmitter first develops the signal to be transmitted. It is then digitally upconverted and put through a crest factor reduction (CFR) process before being sent to the DPD block—an algorithm implemented in the FPGA using a combination of DSP elements and a soft processor. But for the algorithm to know how to pre-distort the signal, it must know what the actual transmitter output looks like.
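The crest factor reduction step mentioned above can be sketched minimally as hard-clipping the complex envelope to a target PAR. A real CFR block also filters the clipping noise to stay within spectral masks; that step is omitted here, and all parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

def papr_db(x):
    p = np.abs(x) ** 2
    return 10 * np.log10(p.max() / p.mean())

def clip_cfr(x, target_papr_db):
    """Clip the envelope magnitude at target_papr_db above the RMS level."""
    rms = np.sqrt(np.mean(np.abs(x) ** 2))
    limit = rms * 10 ** (target_papr_db / 20)
    mag = np.abs(x)
    scale = np.where(mag > limit, limit / mag, 1.0)
    return x * scale               # phase preserved, peaks flattened

# Complex Gaussian noise as a stand-in for a multicarrier envelope.
x = (rng.normal(size=8192) + 1j * rng.normal(size=8192)) / np.sqrt(2)
y = clip_cfr(x, target_papr_db=6.0)
print(f"PAR before CFR: {papr_db(x):.1f} dB, after: {papr_db(y):.1f} dB")
```

Cutting several dB of PAR lets the PA run closer to saturation; DPD then cleans up the distortion introduced by that harder drive.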
The PA output is sampled with a directional coupler, amplified, and downconverted to an IF, where it is digitized. The ADC output becomes the feedback the predistortion circuit uses to compute the output signal. This signal then goes to the DAC to be converted to analog and upconverted to the final frequency before it is applied to the PA. The result is a clean, undistorted signal at high power and improved efficiency.
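One common way the feedback samples drive coefficient estimation is "indirect learning": fit a polynomial that maps the captured PA output back to the PA input, then copy that polynomial into the pre-distortion path. This sketch uses a real-valued signal and the same assumed cubic PA model for simplicity; the orders and model are illustrative, not the specific algorithm in the FPGA.

```python
import numpy as np

rng = np.random.default_rng(2)

def pa(x):
    return x - 0.1 * x**3                # assumed memoryless PA model

x = rng.uniform(-1, 1, 2000)             # transmit samples (real, for simplicity)
y = pa(x)                                # feedback capture (ADC output)

# Basis: odd-order polynomial terms of the feedback signal.
basis = np.column_stack([y, y**3, y**5])
coeffs, *_ = np.linalg.lstsq(basis, x, rcond=None)

def predistort(u):
    return coeffs[0] * u + coeffs[1] * u**3 + coeffs[2] * u**5

lin = pa(predistort(x))
rms_err = np.sqrt(np.mean((lin - x) ** 2))
print(f"residual distortion (rms): {rms_err:.5f}")
```

The least-squares fit approximates the PA's inverse over the observed drive range, so the cascaded pre-distorter plus PA is nearly linear; in hardware the same fit runs continuously so the coefficients track temperature and aging.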
The distortion compensation algorithm is very intricate, as it has to compensate for complex nonlinear interactions and memory effects. The key is to simplify the implementation process as much as possible. This lets engineers achieve the results they need without becoming too embroiled in the algorithmic complexities and saves many hours of development time.
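The memory effects referred to above are commonly handled with a memory-polynomial model, in which each output sample depends on delayed input samples as well as the current one. This sketch only builds the regression matrix for such a model; the nonlinearity order and memory depth are illustrative assumptions.

```python
import numpy as np

def memory_poly_basis(x, order=5, depth=3):
    """Columns: x[n-m] * |x[n-m]|^(k-1) for odd k <= order, 0 <= m < depth."""
    n = len(x)
    cols = []
    for m in range(depth):
        # Delay the signal by m samples, zero-padding at the start.
        xd = np.concatenate([np.zeros(m, dtype=x.dtype), x[:n - m]])
        for k in range(1, order + 1, 2):          # odd orders: 1, 3, 5
            cols.append(xd * np.abs(xd) ** (k - 1))
    return np.column_stack(cols)

x = np.exp(1j * np.linspace(0, 4 * np.pi, 100))   # dummy complex signal
B = memory_poly_basis(x)
print(B.shape)   # 3 delays x 3 odd orders = 9 columns
```

Fitting coefficients over this matrix (for example with least squares against the feedback capture) accounts for the PA's short-term memory, which a purely memoryless polynomial cannot.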
The evolving Long Term Evolution (LTE) wireless standard being put forth by the 3rd Generation Partnership Project (3GPP) promises to offer higher data speeds for both download and upload transmissions and reduce packet latency on wireless networks. Xilinx has been on the forefront of 3GPP LTE systems development and many of its FPGA products are already used in the advanced development platforms of top-tier wireless equipment manufacturers. To keep pace with the evolution of the standard and deliver the most efficient solutions to support it, Xilinx has introduced new design resources for advanced LTE designs that leverage the performance and flexibility of its Virtex®-5 SXT programmable platform.
LTE DIGITAL FRONT END
Xilinx offers a robust development platform for an LTE digital front end (DFE). The solution is made up of a highly optimized set of IP blocks for Digital Up Conversion (DUC), Digital Down Conversion (DDC) and Crest Factor Reduction (CFR) connected together to form a complete LTE radio sub-system. In addition, the DFE design is compatible with existing Digital Pre-Distortion (DPD) designs from Xilinx, enabling systems architects to rapidly implement all the digital system elements necessary for a full performance, commercial LTE system. This development system allows designers to accelerate the design of LTE baseband designs using Xilinx FPGAs and significantly shortens design time compared to using older ASSP and ASIC design methods.
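For readers unfamiliar with what a DUC does, here is a minimal sketch: interpolate a baseband signal to a higher rate and mix it to an IF with a numerically controlled oscillator (NCO). The crude windowed-sinc filter and all rates below are illustrative; the Xilinx DUC core uses optimized polyphase filter structures.

```python
import numpy as np

fs_in = 7.68e6                 # baseband sample rate (example value)
interp = 4                     # interpolation factor
fs_out = fs_in * interp
f_if = 10e6                    # IF center frequency (example value)

n = 512
t = np.arange(n) / fs_in
baseband = np.exp(1j * 2 * np.pi * 100e3 * t)   # 100-kHz complex tone

# 1) Zero-stuff to the higher rate.
up = np.zeros(n * interp, dtype=complex)
up[::interp] = baseband

# 2) Low-pass filter to remove the interpolation images
#    (crude 65-tap windowed sinc; passband gain ~interp restores amplitude).
taps = np.sinc(np.arange(-32, 33) / interp) * np.hamming(65)
filtered = np.convolve(up, taps, mode="same")

# 3) Mix to IF with an NCO; the real part goes to the DAC.
nco = np.exp(1j * 2 * np.pi * f_if * np.arange(len(filtered)) / fs_out)
if_signal = np.real(filtered * nco)
print(f"output rate: {fs_out / 1e6:.2f} Msps, samples: {len(if_signal)}")
```

A DDC is the same chain run in reverse: mix down with an NCO, filter, and decimate back to the baseband rate.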
The LTE DFE design is highly configurable and has been architected to support seven single- and multi-carrier scenarios. It provides an optimized solution for each chosen configuration, enabling designers to select the one that meets the targeted system needs without paying a penalty in design area, helping reduce total system cost and power. The seven scenarios provided are:
• Single-carrier 5-, 10-, 15-, and 20-MHz bandwidths
• Dual-carrier 5- and 10-MHz bandwidths
• Four-carrier 5-MHz bandwidth
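The seven scenarios above can be tabulated as simple (carriers, bandwidth) pairs, which makes the total occupied bandwidth of each configuration explicit. This is just bookkeeping mirroring the list in the text.

```python
# The seven DFE carrier scenarios: (name, carrier count, per-carrier MHz).
scenarios = [
    ("1x5 MHz", 1, 5), ("1x10 MHz", 1, 10),
    ("1x15 MHz", 1, 15), ("1x20 MHz", 1, 20),
    ("2x5 MHz", 2, 5), ("2x10 MHz", 2, 10),
    ("4x5 MHz", 4, 5),
]
for name, carriers, bw in scenarios:
    print(f"{name}: {carriers} carrier(s), {carriers * bw} MHz total")
```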
Xilinx programmable platforms allow specific radio designs to be easily customized for 3GPP-LTE. A full suite of easy-to-use design tools is available, including the Xilinx System Generator for DSP, the industry’s leading tool for high-performance DSP design.
LTE CHANNEL UPLINK & DOWNLINK
For high performance and low latency uplink and downlink, Xilinx supports its LTE solution with optimized IP from its LogiCORE™ library. Its Turbo Encoder and Decoder cores integrate the key functional blocks necessary for a practical LTE baseband channel system, for both the uplink and downlink functions.
Each LogiCORE IP block is controlled via a GUI in the Xilinx Coregen™ software, which generates a design optimized to user-specified parameters. This streamlines development and verification, making integration much simpler. The designs generated by this tool exploit all the features of Xilinx XtremeDSP™ technology, resulting in faster performance, reduced power dissipation, and lower system latency.
Learn more about this comprehensive portfolio of wireless IP, reference designs and collateral at: http://www.xilinx.com/esp/wireless.