As 4G wireless (3GPP Release 8) begins to be deployed around the world, basestation vendors are searching for new ways to reduce wireless infrastructure costs. Most 4G basestation vendors use an architecture that locates baseband processing at the base of the cellular tower, while placing the radio processing, including the analog-to-digital converters (ADCs), digital-to-analog converters (DACs), digital pre-distortion (DPD), and analog RF processing, in remote radio heads (RRHs) at the top of the tower.
Within the RRHs, channel counts are rapidly increasing due to a twofold to fourfold increase in antenna counts that’s required for multiple-input multiple-output (MIMO) processing. Wireless customers who want the fastest 4G performance (up to 300 Mbits/s for Long-Term Evolution or LTE and up to 1 Gbit/s for LTE-Advanced/3GPP Release 10) will require 4x4 MIMO processing, with four separate antennas at the mobile device and four antennas at the basestation. In some three-sector basestations, ADCs and DACs may have to service up to 24 antennas.
By The Numbers
ADCs and DACs for wireless infrastructure are among the most sophisticated, most expensive, and most profitable data converters on the market. Wireless ADCs for 4G must digitize between 65 MHz and 100 MHz of spectrum. While individual carriers such as AT&T, Verizon, Orange, and China Mobile may not own 65 MHz of contiguous bandwidth, carrier consolidation has caused surviving operators to own multiple, non-contiguous carriers in their country’s wireless frequency allocation.
RRH equipment for non-contiguous frequency allocations becomes less expensive when ADCs and DACs can handle the country’s entire frequency band, regardless of which bandwidth the carrier owns. With a typical 4x oversampling ratio, a 65-MHz bandwidth will require a data converter that can sample at 250 Msamples/s or faster.
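The sample-rate arithmetic above can be sketched in a few lines of Python. (The 4x ratio is the typical figure cited here; note that 65 MHz x 4 works out to 260 Msamples/s, so the 250-Msample/s number describes the class of converter rather than an exact requirement.)

```python
# Back-of-envelope sample-rate estimate for covering a 65-MHz band.
bandwidth_mhz = 65      # full national band the converter must cover
oversampling = 4        # typical oversampling ratio cited in the text

sample_rate_msps = bandwidth_mhz * oversampling
print(sample_rate_msps)  # 260, i.e. a converter in the 250+ Msample/s class
```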
The complex orthogonal frequency division multiplexing (OFDM) modulations of 4G wireless require about 75 dB of dynamic range to recover signals at the required bit error rates (BERs). Thus, 4G wireless ADCs need from 14 to 16 bits per sample. Today these bits are sent at 2 bits per low-voltage differential signaling (LVDS) pair.
Some quick back-of-the-envelope math identifies a key problem for next-generation wireless, though. With 24 antennas and 16 pins per antenna (eight LVDS pairs), nearly 400 high-speed signal traces must be routed, and RRH boards quickly become unmanageable. Converter vendors are also moving to dual and quad packages (two or four converters per IC) to support the antenna counts of LTE and LTE-Advanced, but this puts more high-speed LVDS pins in close proximity to their neighbors, worsening crosstalk. Is there a better solution?
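That back-of-the-envelope math can be made concrete with a short sketch, using only the figures given above: 16 bits per sample, 2 bits per LVDS pair, and two package pins per differential pair.

```python
# Rough LVDS pin count for a three-sector RRH serving 24 antennas.
bits_per_sample = 16   # 14- to 16-bit wireless ADCs; worst case used here
bits_per_pair = 2      # 2 bits carried per LVDS pair
pins_per_pair = 2      # each differential pair occupies two package pins

pairs_per_antenna = bits_per_sample // bits_per_pair   # 8 pairs
pins_per_antenna = pairs_per_antenna * pins_per_pair   # 16 pins
total_pins = 24 * pins_per_antenna
print(total_pins)  # 384 high-speed pins on one board
```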
Standards To The Rescue
In 2006, the JEDEC JESD204 standards group defined how digital devices (FPGAs and ASICs) would use high-speed serializer-deserializer (SERDES) links to send and receive samples from multi-channel ADCs and DACs. In 2011, the JESD204B committee (of which I was a member) completed the JESD204B spec revision.
New features in the JESD204B specification include support for SERDES rates up to 12.5 Gbits/s, deterministic latency, and harmonic clocking (the ability to derive a high-speed data converter clock from a lower-speed input clock with deterministic phasing). JESD204B continues to use the familiar 8B10B encoding for physical-layer transmission and synchronization, the same that PCI Express and USB 3.0 use.
A quad, 250-Msample/s, 16-bit ADC using JESD204B would generate 16 Gbits/s of real-time sample data, or 20 Gbits/s on the wire after 8B/10B encoding, which could be transmitted across two 10-Gbit/s JESD204B lanes or four 5-Gbit/s lanes. To take advantage of JESD204B's improvements, basestation vendors will have to invest in expensive Altera Stratix-IV or Xilinx Virtex-6 FPGAs to support 10-Gbit/s SERDES rates, since the SERDES rates of less expensive FPGAs (such as the Altera Arria II GX family and the Xilinx Virtex-5 LXT family) are limited to about 3.5 Gbits/s.
Alternately, users could implement the earlier JESD204A standard, whose lane rates are limited to 3.125 Gbits/s. But in that case, basestation vendors have a lane-count problem: 20 Gbits/s delivered via JESD204A requires seven lanes, and FPGAs with eight SERDES lanes are also pricey.
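The lane-count arithmetic for both scenarios follows directly from the line rate. A minimal sketch (the `lanes_needed` helper is illustrative only, not part of any JESD204 tooling):

```python
import math

# Quad 250-Msample/s, 16-bit ADC: payload rate, then 8B/10B line rate.
channels, fs_msps, bits = 4, 250, 16
payload_gbps = channels * fs_msps * bits / 1000   # 16 Gbits/s of samples
line_gbps = payload_gbps * 10 / 8                 # 20 Gbits/s on the wire

def lanes_needed(line_rate_gbps, lane_rate_gbps):
    """Smallest whole number of SERDES lanes that carries the line rate."""
    return math.ceil(line_rate_gbps / lane_rate_gbps)

print(lanes_needed(line_gbps, 10.0))   # 2 JESD204B lanes at 10 Gbits/s
print(lanes_needed(line_gbps, 5.0))    # 4 JESD204B lanes at 5 Gbits/s
print(lanes_needed(line_gbps, 3.125))  # 7 JESD204A lanes
```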
As a novel alternative, if ADC vendors integrated compression of captured samples into their ADCs, and matching decompression into their DACs, JESD204 lane counts and lane speeds could be reduced by a factor of three or four. The figure illustrates the effects of compression ratios between 3:1 and 4:1 on the signal-to-noise ratio (SNR) of a 20-MHz LTE downlink test model signal that was sampled at 491.52 Msamples/s in-phase and quadrature (I&Q), using the Samplify Prism 3 compression algorithm.
In lossless mode, Prism achieved 3.16:1 compression on the LTE test model signal. For DPD applications where an SNR of about 65 dB is required, even 4:1 compression maintains 68.5-dB SNR. The Prism compression algorithm lets users select the per-channel compression ratio, allowing base transceiver station (BTS) vendors to trade off system performance and cost. Compression ratio selection (in increments of 0.05) enables BTS vendors to fine-tune the uplink, downlink, and DPD feedback performance for various macrocell, microcell, and femtocell configurations.
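Assuming compression scales the line rate linearly, its effect on lane count can be sketched as follows. (The 3.25-Gbit/s lane rate and the 3.16:1 and 4:1 ratios are the figures cited in this article; a real design would also budget for framing overhead.)

```python
import math

line_gbps = 20.0   # quad 250-Msample/s, 16-bit ADC after 8B/10B encoding
lane_gbps = 3.25   # compression-enabled SERDES lane rate cited in the text

# Lanes required at no compression, lossless 3.16:1, and lossy 4:1.
lanes = {ratio: math.ceil(line_gbps / ratio / lane_gbps)
         for ratio in (1.0, 3.16, 4.0)}
print(lanes)  # {1.0: 7, 3.16: 2, 4.0: 2}
```

The drop from seven lanes to two is where the threefold-or-better savings in lane count comes from.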
The economic benefits of a compression-enabled threefold reduction in JESD204 lane counts and lane speeds can be significant. For a data converter vendor that sells 100,000 SERDES channels in a 4G wireless ADC, a $2 million non-recurring engineering (NRE) charge for a 10-Gbit/s SERDES design or intellectual property (IP) block adds $20 of cost per SERDES lane. At typical wireless ADC margins, that translates to an $80 end-user price delta per SERDES lane.
In contrast, if a compression-enabled 3.25-Gbit/s SERDES lane cost $7 (an end-user price delta of $21), basestation vendors would realize savings of about $60 per SERDES lane, with no degradation in wireless performance. Basestation vendors should consider compression as an innovative, less expensive way to get 10-Gbit/s SERDES performance in data converters from 3-Gbit/s SERDES lanes.
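The per-lane economics reduce to simple arithmetic. A sketch using only the figures given above (the 4x cost-to-price multiplier is implied by the $20 cost and $80 price delta):

```python
# NRE amortization and end-user price deltas from the article's example.
nre_dollars = 2_000_000
serdes_channels = 100_000

cost_10g = nre_dollars / serdes_channels  # $20 per 10-Gbit/s lane
price_10g = cost_10g * 4                  # $80 end-user delta at 4x markup

price_3g = 21                             # stated delta for a $7, 3.25-Gbit/s lane
savings = price_10g - price_3g
print(savings)  # 59.0, i.e. about $60 saved per SERDES lane
```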