Using ATE for Efficient DigRF Interface Testing

Although the term DigRF may lead to initial impressions of a digital signal somehow integrated into an RF signal path, this is not the case. DigRF is a published standard that describes a digital interface between baseband and RFIC chips.

The first version of this standard, DigRF v1, is widely used in RF cellular devices and addresses 2G/2.5G GSM formats. The Mobile Industry Processor Interface (MIPI) Alliance has already defined a 3G version of the standard, and work is in progress on a new version to address long-term evolution (LTE) cellular formats.

With DigRF standardization, the analog-to-digital conversion is no longer part of the baseband processor; it now resides on the RFIC, effectively moving the baseband IC design to a less expensive digital-only process. As a result, the challenges in DigRF testing are primarily digital, not RF. This is an important distinction that helps us focus on the right techniques for solving test challenges related to DigRF interfaces on 2G/2.5G, 3G, and upcoming 3.9G/4G devices.

During receiver testing, baseband data is collected by transmitting large amounts of interleaved in-phase (I), quadrature (Q), and padding data through the DigRF interface. With increased physical layer speeds and symbol rates, later DigRF versions use high-speed serial transceiver lanes to transfer data and control information between the RFIC and baseband IC. Test time reduction techniques and efficient protocol-aware test methodologies are critical for such tests.

Many system and board-level models for testing DigRF transceivers are designed using custom FPGAs to emulate the baseband IC. It is not practical to design a different ATE digital card for each generation of the DigRF interface, so we have developed the concept of protocol-aware solutions for ATE emulation of the baseband interface. Testing DigRF interfaces requires knowledge of the major features of the physical interface and anticipating the resulting protocol challenges.

Physical Interface Challenges

As the sample rate of the baseband analog-to-digital converter (ADC) has increased with new cellular standards, so has the capacity of the digital interface. Beginning with DigRF v1, the progression of the DigRF physical layer can be considered in three categories: data transfer, control interface, and clock synchronization.

Data Transfer
DigRF v1 uses a single RxTxData pin to perform data transfer. It is a single-ended, bidirectional pin that both sends and receives symbol information at 26 Mb/s.

During receiver operation, the first and last serial I/Q data bits are framed by an RxTxEn pin. Typical frame signals pulse high to mark the start of every symbol, but the RxTxEn pin is held high for the entire receive sequence.1 This poses some challenges during receiver testing since the single rising RxTxEn edge is the only marker for the start of valid I/Q data, and the ATE must monitor the interaction between the two pins during protocol analysis.

For 3G cellular, the I/Q sample rate increases from 270.833 kS/s to 7.68 MS/s. To address the increase in data rate, the interface was changed to a low-voltage differential signaling (LVDS) interface to enhance noise immunity and minimize power consumption.

The interface uses separate, unidirectional receive (RX) and transmit (TX) lanes to send I/Q data between the baseband and RFIC chips. The maximum data rate for the 3G DigRF standard is 312 Mb/s with enough capacity for 8-bit I/Q data words per symbol and two multiplexed RX signals for spatial receiver diversity.2

One disadvantage of the LVDS interface is the relatively weak drive strength of the LVDS output buffers. In a real-world application, the RFIC and baseband chips are very close together so this is not a problem. However, for test applications, we need to design the loadboard with a minimum trace length from the device output to a buffer or ATE digital channel.

As we move toward the LTE cellular format, the I/Q sample rate increases to 30.72 MS/s. We can derive this rate from the LTE downlink frame structure, which defines 15,360 samples per 0.5-ms slot, corresponding to a 30.72-MS/s sample rate.3,4

At this rate, the future 4G DigRF LVDS interface must operate well above 1 Gb/s, challenging the efficiency of protocol-aware solutions that must keep up with the higher-speed data links. Assuming 8-bit to 16-bit I and Q words per sample, two RX signals, and 20% protocol overhead, the data rate would have to be between 1.2 Gb/s and 2.4 Gb/s. This big jump from the 3G standard pushes the ATE digital requirements above the traditional 1-Gb/s barrier toward higher-performing digital interface cards.
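These bounds can be reproduced with a few lines of arithmetic (a sketch; the flat 20% overhead multiplier is a simplification of the real framing overhead):

```python
# Back-of-the-envelope bound on the 4G DigRF lane rate, using the figures
# quoted in the text.
SAMPLE_RATE = 30.72e6   # LTE I/Q sample rate, S/s
RX_SIGNALS = 2          # two multiplexed RX streams for receiver diversity
OVERHEAD = 1.20         # 20% protocol overhead as a flat multiplier

def digrf_rate(bits_per_word):
    # One I word and one Q word per sample, per RX signal, plus overhead.
    return SAMPLE_RATE * 2 * bits_per_word * RX_SIGNALS * OVERHEAD

print(f"{digrf_rate(8) / 1e9:.2f} to {digrf_rate(16) / 1e9:.2f} Gb/s")  # 1.18 to 2.36 Gb/s
```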

Control Interface
A separate control interface for DigRF v1 consists of the CtrlClk, CtrlData, and CtrlEn pins. It follows a simple protocol and is used to communicate control information between the baseband and RFIC chips. For example, the baseband IC sends the requested RF channel number to the RFIC over the control interface before beginning an RF transmit or receive sequence. The control interface also is used to communicate gain control settings, receive and transmit power levels, and a variety of other register settings to the RFIC.

The later DigRF standards use the RX and TX data lanes to communicate the control information between the baseband and RFIC chips, reducing pin count and using the protocol to determine which information is the control data and which is the I/Q data.

Clock Synchronization
All of the DigRF standards share a SysClk output pin that transmits a constant clock reference from the RFIC to the baseband IC. A SysClkEn pin is asserted by the baseband IC to enable the RFIC clock output. It is critical for data to be properly aligned to the clock edges of SysClk, and either hardware or virtual protocol-aware algorithms can perform this alignment.

After a device is powered on and sent through the proper reset sequence, the SysClk signal is present at the output of the RFIC. However, the phase is not always aligned to the ATE timing system.

In most transceiver applications, a free-running clock source, 52 MHz for example, provides the reference clock for the PLL. For a test program, the best solution is to use a low-phase-noise RF source that can be locked to the ATE reference clock to avoid frequency drift during testing. For DigRF v1, if the test is run without the specified edge alignment to the SysClk output, the data may be invalid.

Rerunning the capture pattern costs valuable test time and threatens test stability. For the latest DigRF standards, the protocols use embedded clock schemes but still rely on the SysClk output to define the time base and phase alignment for the system operation, similar to the common 10-MHz clock distribution in ATE systems.

Alignment with the SysClk signal can be done in two simple ways. A fast clock spec search method sweeps a strobe location on the SysClk pin across the tester period in predefined increments. The sweep can be automated and should take only a few milliseconds to execute.

Depending on the required accuracy, the search can be performed using a binary search algorithm, a simple linear sweep, or a combination of the two. Figure 1a illustrates the spec search swept over the period of the SysClk output.

Once the ideal SysClk strobe location is found, the compare edges of the data interface are set in relation to the SysClk signal, and the receiver test can be executed immediately. The search remains valid until the part is powered down or reset.
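A minimal sketch of such a spec search, assuming a hypothetical `strobe_passes(t)` callback that runs the SysClk compare pattern with the strobe at offset `t` and reports whether it passed (the real tester API will differ):

```python
def sweep(strobe_passes, period, step):
    """Strobe offsets in [0, period) at which the SysClk compare pattern passes."""
    n = int(period / step)
    return [i * step for i in range(n) if strobe_passes(i * step)]

def find_strobe(strobe_passes, period, coarse=1.0, fine=0.1):
    """Coarse linear sweep to locate the passing window, then a fine sweep
    over that window; returns the window center as the strobe placement."""
    hits = sweep(strobe_passes, period, coarse)
    if not hits:
        raise RuntimeError("no passing strobe location found")
    lo, hi = min(hits) - coarse, max(hits) + coarse
    fine_hits = [t for t in sweep(strobe_passes, period, fine) if lo <= t <= hi]
    return (min(fine_hits) + max(fine_hits)) / 2.0

# Example with a simulated 5-ns passing window inside a 19.2-ns tester period:
center = find_strobe(lambda t: 3.0 <= t < 8.0, period=19.2)  # ~5.45 ns
```

The coarse/fine combination mirrors the article's point that a linear sweep, a binary refinement, or a mix of the two can be chosen based on the required accuracy.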

Another technique that bypasses a spec search routine is a single-shot, multiple-edge capture. During this method, the SysClk output is over-sampled by inserting multiple edges throughout the tester period. After a single pattern execution, the individual edge failures are analyzed to determine the optimal edge location for the receiver test.

SysClk should be oversampled at a rate of at least 8x the clock rate to avoid marginal edge placement. Figure 1b illustrates the execution of a multiple-edge capture and the selection of the optimum location.
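The edge-selection step can be sketched as follows; `edge_fails` stands in for the per-strobe failure flags read back after the single pattern execution, and the period is treated as circular so a passing window that wraps around is handled correctly:

```python
def best_edge(edge_fails):
    """Pick the optimal strobe from one over-sampled pattern execution.

    edge_fails[i] is True when the i-th strobe in the tester period failed.
    The center of the longest passing run is the safest edge location.
    """
    n = len(edge_fails)
    best_len, best_start, run = 0, None, 0
    for i in range(2 * n):                  # walk the list twice to handle wrap-around
        if not edge_fails[i % n]:
            run = min(run + 1, n)
            if run > best_len:
                best_len, best_start = run, i - run + 1
        else:
            run = 0
    if best_start is None:
        raise RuntimeError("every strobe location failed")
    return (best_start + best_len // 2) % n

# Eight strobes per period, passing window in the middle of the period:
best = best_edge([True, True, False, False, False, False, True, True])  # index 4
```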

Figure 1. Phase Alignment to the DigRF RFIC Time Base

Protocol Test Challenges

While there is some flexibility in the number of bits per sample, samples per symbol, and consequently, the number of padding bits per frame, a common receiver test for DigRF v1 follows a frame structure in which 16-bit I and Q samples are interleaved, followed by 64 bits of padding. This padding data is not used for data analysis, and the focus of test time reduction for DigRF v1 is to eliminate these 64 bits from each frame of data.

In some cases, the I/Q ADC may sample at twice the symbol rate, leaving only 32 bits of padding in a DigRF v1 frame. A simple calculation shows why there are 96 bits per frame for a GSM receiver test. Since the GSM sample rate is 270.833 kS/s, and the serial interface speed is 26 Mb/s, the total number of bits for each I/Q data sample is

(26 × 10⁶ b/s) / (270.833 × 10³ S/s) = 96 bits
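In code, the same frame arithmetic (using the figures quoted above) looks like this:

```python
# Frame arithmetic for the DigRF v1 GSM receiver test.
SERIAL_RATE = 26e6        # RxTxData interface speed, b/s
SYMBOL_RATE = 270.833e3   # GSM symbol rate, S/s

bits_per_frame = round(SERIAL_RATE / SYMBOL_RATE)   # 96 bits per symbol period
padding_1x = bits_per_frame - 2 * 16                # 16-bit I + Q leaves 64 pad bits
padding_2x = bits_per_frame - 2 * 2 * 16            # 2x-rate sampling leaves only 32
```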

Beyond DigRF v1, the protocols become more intricate, introducing complexity in analyzing the data stream for control and I/Q data information and requiring a certain level of intelligence in the ATE capture algorithms for efficient testing. Here are just a few examples of the challenges in testing the 3G DigRF receive stream:
• Nondeterministic data arrival times.
• Possible combinations of state and receiver information in the data stream.
• Possible phase shifts during long pattern execution.
• Receiver diversity.

The challenge of programming the capture sequence can be simplified by capturing the entire receiver bit stream and using a protocol-aware implementation to separate the data into I/Q symbols according to the different protocol rules. For modern tester architectures with very fast data transfer between the digital pin memory and the tester workstation, this is the simplest test design and still provides excellent throughput. Both virtual and hardware protocol-aware solutions become more critical for 3G and 3.9G/4G implementations as the speed and complexity of the DigRF interface and protocols approach that of more traditional high-speed digital serial links such as Serial ATA and PCI Express®.

Capturing DigRF I/Q Receiver Data

Before detailing the ATE capture method, we first need to understand the generic architecture of a DigRF transceiver device. Figure 2 shows a high-level block diagram of a DigRF receiver chain, and the following steps describe the basic data flow through the receiver chain:
• Program the device to receive an RF signal on a specific channel, typically a CW signal for gain and SNR tests.
• Tune the PLL and divider circuitry to mix the RF signal down to a predefined IF.
• Digitize the IF signal with the I/Q ADC at a given conversion rate.
• Once the signal is digitized, a digital IF processing block mixes down the signal to a lower baseband frequency. The result is a digital baseband signal.
• The DigRF interface block processes the digital I/Q data according to the DigRF protocol before sending the data to the baseband IC.

Figure 2. High-Level DigRF Receiver Block Diagram

Note an important distinction between the baseband test and the RFIC test strategy. Typically, the baseband DigRF interface is tested with predefined data. In other words, when the baseband IC is tested in transmit mode, there is a known set of bits to compare, and the ATE pattern can be defined accordingly. For an RFIC receiver test, this is not the case.

Analysis of DigRF Receiver Patterns
When testing ADCs, the data received at the tester is a digital representation of an analog signal that cannot be analyzed with a predefined capture sequence of low and high compare states. DigRF receiver testing is no exception since there is variability of both the RF signal phase and its expected value at the DigRF interface.

However, we still can define a digital pattern to analyze the data in one of two ways. First, we can specify locations in the pattern where the DigRF receiver interface outputs data, capture the entire bit stream, and post-process the results. Second, we can set a low expected state for all receive bits and analyze the data based on which bits failed the pattern.

Using either ATE digital pattern definition, the result is a stream of binary data that represents I and Q data as defined by the specific DigRF protocol. The next step is to implement a virtual protocol-aware algorithm to reconstruct the I/Q data for analysis. This can be implemented in software and tailored to the protocol and different receive states of the DigRF RFIC.

The reconstruction algorithms can perform many tasks, some of which are the following:
• Serial-to-parallel conversion.
• Bit alignment across multiple pins.
• Selective discard of data.
• Sample decimation.

While the protocol for 2G DigRF modulation formats is a simple one, a virtual algorithm can be modified quickly and easily to handle other, more complex DigRF standard formats. Following is some example pseudo-code for a generic DigRF v1.12 receiver measurement:

function ReadRxDataAndCalculate
Define array for RxTxData capture, size 96*2117.
Define array for RxTxEn capture, size 96*64.
Activate/Run capture pattern.
Fill RxTxData and RxTxEn capture arrays with results.
Find rising RxTxEn edge and assign to variable enRise.
Align starting point of RxTxData array to enRise location.
Convert RxTxData array to 16-bit parallel words.
Discard 2 words per frame of padding data.
Separate the 2084-pt interleaved I and Q array data.
Calculate Gain and SNR.
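The pseudo-code above can be fleshed out as a plain software routine. The sketch below runs on ordinary bit lists standing in for the tester capture arrays, uses the 96-bit frame with one 16-bit I word, one 16-bit Q word, and 64 padding bits described earlier (the exact word/padding split depends on the oversampling mode), and assumes MSB-first two's complement words; the gain and SNR math is omitted:

```python
FRAME_BITS, WORD_BITS, PAD_BITS = 96, 16, 64       # DigRF v1 frame: I word, Q word, padding
DATA_WORDS = (FRAME_BITS - PAD_BITS) // WORD_BITS  # = 2

def reconstruct_iq(rxtxdata, rxtxen):
    """Virtual protocol-aware reconstruction of a captured DigRF v1 receive stream.

    rxtxdata and rxtxen are lists of 0/1 capture bits; returns (i_samples, q_samples).
    """
    # Align to the single rising RxTxEn edge that marks the start of valid I/Q data.
    en_rise = next(i for i in range(1, len(rxtxen))
                   if rxtxen[i] and not rxtxen[i - 1])
    bits = rxtxdata[en_rise:]

    i_samples, q_samples = [], []
    for f in range(len(bits) // FRAME_BITS):
        frame = bits[f * FRAME_BITS:(f + 1) * FRAME_BITS]
        words = []
        for w in range(DATA_WORDS):                     # padding words are discarded
            word = 0
            for b in frame[w * WORD_BITS:(w + 1) * WORD_BITS]:
                word = (word << 1) | b                  # serial-to-parallel, MSB first
            if word >= 1 << (WORD_BITS - 1):            # two's complement sign extend
                word -= 1 << WORD_BITS
            words.append(word)
        i_samples.extend(words[0::2])                   # de-interleave I and Q
        q_samples.extend(words[1::2])
    return i_samples, q_samples
```

From the reconstructed I and Q arrays, gain and SNR follow from standard waveform analysis, exactly as the last line of the pseudo-code indicates.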

For production test of DigRF receivers, the goal is to provide an ATE test time as close as possible to the theoretical pattern execution time. Even with efficient protocol-aware algorithms, the overhead generally relates to the number of bits that are captured for processing.

To minimize this overhead, we want to avoid capturing data that does not provide any useful information. This makes the fail-only data analysis technique an efficient and simple method for capturing DigRF receiver data. Both techniques are illustrated in Figures 3a and 3b using the DigRF v1.12 protocol as an example.

Figure 3. Protocol Aware Failure Analysis

For fail-only data analysis, the DigRF receiver test pattern is set to expect all low states. When the pattern is run, all of the low states are deemed passing bits while the high states are considered failing bits.

Since most receiver tests are performed at low RF input signals, nearly all of the bits of small, positive I/Q samples also will be ignored. In this scenario, assuming a two’s complement format, the run-time analysis is limited to only small, negative I/Q samples, reducing the amount of transferred data and the computation required to reconstruct the I/Q data.

A few requirements are necessary to enable this technique. First, the protocol-aware algorithm must have the capability to reconstruct the full I/Q data bits according to the individual failure locations. Second, the reconstruction must be on the order of a few milliseconds or less to feasibly save test time. Once reconstructed, the RxTxEn rising edge and RxTxData bits are aligned.
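A sketch of the fail-only reconstruction, assuming the failure addresses have already been aligned to the RxTxEn rising edge and stripped of padding bits (both assumptions about the upstream processing):

```python
WORD_BITS = 16

def samples_from_fails(fail_addrs, n_samples):
    """Rebuild two's complement I/Q words from fail-only capture results.

    The pattern expects all lows, so every failing bit address marks a 1 in
    the serial stream; passing bits are 0 and never need to be transferred.
    fail_addrs are bit offsets into the aligned data stream, MSB first.
    """
    words = [0] * n_samples
    for addr in fail_addrs:
        w, bit = divmod(addr, WORD_BITS)
        if w < n_samples:
            words[w] |= 1 << (WORD_BITS - 1 - bit)
    return [w - (1 << WORD_BITS) if w >= 1 << (WORD_BITS - 1) else w
            for w in words]

# A small negative sample fails on nearly every bit, a small positive one on
# almost none -- which is why low-level receiver tests transfer so little data:
samples = samples_from_fails(list(range(16)) + [30], n_samples=2)  # [-1, 2]
```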

Finally, a single 2,048-pt 2G DigRF receiver test contains 196,608 RxTxData bits for analysis. There may be more than 150,000 failures in each test. With multiple bands and gain settings tested for a given device, the ATE architecture must have large failure memory behind each digital pin and the capability to quickly access and analyze the data.

A Flexible Approach to DigRF Receiver Testing

In this article, we’ve shown the progression of the DigRF standards from the simplest 2G protocol to the more complex protocols for 3G standards and beyond. While test challenges exist for each protocol, a generic approach to capturing and analyzing the DigRF receiver data provides the flexibility necessary to keep up with the constantly evolving DigRF standards.

This flexible approach, using protocol-aware solutions in software, hardware, or a combination of both, is the simplest way to ensure fastest time to market for testing to the various DigRF standards. Some ideas such as the fail-only data analysis help maximize the throughput by focusing data analysis on a limited set of data.

Although many devices today use the DigRF v1.1 standard, the test challenges for the newer DigRF standards will bring the complexity of RF receiver testing in line with similar serial high-speed interfaces. This progression will surely ignite debate on DigRF loop-back testing and other DFT techniques that may alter the next-generation DigRF protocols, making a flexible ATE protocol-aware solution even more necessary in the future.


1. DigRF Baseband/RF Digital Interface Specification, Version 1.12, Digital Interface Working Group, 2004.
2. Using the Agilent DSO80000B Series Real-Time Oscilloscope to Validate the DigRF v3 Cellular Phone Digital Interface, Agilent Technologies, Application Note 1598, September 2007.
3. 3GPP Long Term Evolution: System Overview, Product Development, and Test Challenges, Agilent Technologies, May 2008.
4. Long Term Evolution (LTE)/System Architecture Evolution (SAE), 3rd Generation Partnership Project (3GPP), May 2008.


The author wishes to acknowledge Jeffrey Tang, Verigy Shanghai Application Development Center, who was an important contributor to the original research, and Mike Kozma, Verigy Americas Customer Team, for contributions to this article.

About the Author

Richard Lathrop is a senior technical consultant with Verigy. Since joining the company, then Agilent Technologies, in 2000, he has focused primarily on RF transceiver test issues. Mr. Lathrop holds an M.S. in engineering from the University of Texas at Austin and a B.S. in applied physics from Yale University. Verigy, 10100 N. Tantau Ave., Cupertino, CA 94105, 408-864-2900, e-mail: [email protected]

October 2008
