According to several recent market studies, roughly 750 million cellular handsets were shipped in 2006, and shipments are likely to exceed one billion by 2011. These figures are just one indication that the cellular handset market is rapidly becoming a commodity industry with lower profit margins per product.
While commoditization generally is beneficial for consumers because it drives innovation, shrinking profit margins pose a real problem for today's test engineers. As margins decrease, manufacturers face the challenge of reducing the cost of production test.
Reducing the test time of today's wireless handsets is difficult because of increasingly complex architectures. For example, many wireless handsets require testing at multiple cellular bands, according to multiple cellular standards, and for adherence to noncellular standards such as GPS, Bluetooth, and Wi-Fi.
As the pressure to reduce test time increases, escalating product complexity requires more and more tests on the production line. Accordingly, engineers must find more time-efficient ways to test wireless handsets.
While growth in the cellular industry has produced pressure to reduce test costs, several key innovations in the PC industry enable higher-performance PC-based PXI test instrumentation. In fact, today's multicore central processing units (CPUs) dramatically reduce the test times of wireless handsets through the use of parallel processing. Engineers can realize the benefits of multicore systems by applying a variety of parallel programming techniques that enable independent processes to execute concurrently.
New Technologies for Automated Test
Until recently, innovations in processor technology resulted in computers with CPUs that operate at ever-higher clock rates. However, as clock rates approach their practical physical limits, new processors are instead being designed with multiple cores. With these multicore processors, automated test applications achieve the best performance and highest throughput by using parallel programming techniques. However, programming applications to take advantage of multiple processors has traditionally been a significant challenge.
LabVIEW offers a suitable programming environment for multicore processors because of its intuitive environment for creating parallel algorithms. In addition, LabVIEW block diagrams are compiled as multithreaded applications. As a result, engineers can optimize automated test systems using multicore processors to achieve the best performance.
PXI Express modular instruments enhance this benefit because of the high data transfer rates possible with the PCI Express bus. So with today's software-defined instrumentation, engineers can vastly improve the speed of making parallel measurements using multicore CPUs.
Parallel Programming Techniques
Two specific parallel programming techniques in LabVIEW—task parallelism and data parallelism—can improve system performance on multicore processors. As an overview, task parallelism involves configuring code so that multiple measurements are executed in parallel.
Data parallelism, on the other hand, describes measurement algorithms that divide a large data set into subsets. Each subset then can be processed in parallel. For both programming techniques, better processor utilization is achieved by balancing the processing load between multiple cores.
Basics of Task Parallelism
The strategy behind task parallelism is to configure code so the compiler assigns independent measurements to unique operating system threads. This can be done by allowing multiple measurement subroutines to share and operate on the same set of raw data concurrently.
In LabVIEW, dataflow wires determine the order in which subroutines execute. By wiring the same data to the inputs of two or more measurement subroutines, those subroutines are handled by the operating system as independent threads.
This technique is illustrated in Figure 1.
Figure 1. Example of LabVIEW Task Parallelism
LabVIEW makes identical copies of the raw data in memory and assigns each measurement to a unique thread. Upon execution, the operating system dynamically schedules each thread on a processing core.
On multicore processors, overall measurement time is reduced through more efficient use of the CPUs. Task parallelism yields the greatest benefits when a DUT requires a large number of measurements.
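The same task-parallel pattern can be sketched in a text-based language. The snippet below is a hypothetical Python analog, not LabVIEW code: two stand-in measurement routines (the names and the simulated IQ data are illustrative assumptions) operate concurrently on one shared set of raw data, each in its own thread.

```python
# Hypothetical sketch of task parallelism: two independent
# "measurements" run concurrently on the same raw IQ data.
from concurrent.futures import ThreadPoolExecutor

import numpy as np

def measure_mean_power(iq):
    # Stand-in for one measurement routine (e.g., transmit power).
    return float(np.mean(np.abs(iq) ** 2))

def measure_peak_power(iq):
    # Stand-in for a second, completely independent routine.
    return float(np.max(np.abs(iq) ** 2))

# Simulated unit-amplitude IQ waveform (illustrative only).
raw_iq = np.exp(1j * np.linspace(0.0, 2.0 * np.pi, 1024))

with ThreadPoolExecutor(max_workers=2) as pool:
    mean_future = pool.submit(measure_mean_power, raw_iq)
    peak_future = pool.submit(measure_peak_power, raw_iq)
    mean_pwr, peak_pwr = mean_future.result(), peak_future.result()
```

As in the LabVIEW case, the operating system is free to schedule the two threads on separate cores; with heavier numerical routines that release the interpreter lock, the measurements can execute truly in parallel.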
For example, error vector magnitude (EVM) and output RF spectrum (ORFS) are two measurements that require completely different measurement algorithms:
• EVM shows the phase and amplitude error of each demodulated symbol compared with the ideal symbol. In a typical cell phone, EVM is used to characterize the modulation quality of a transmitter and reveal problems such as carrier leakage and phase noise. Maximum likelihood detection, the algorithm required to perform an EVM measurement, is quite processor intensive. At a high level, it requires clock recovery through a phase-locked loop (PLL) and may need digital filtering as well.
• ORFS is used to characterize the power output of the transmit signal at a series of offsets from the carrier. The purpose of this measurement is to determine the level of emissions that a particular carrier will emit into adjacent channels. For a typical cellular transmitter, factors such as spurs in the LO and insufficient filtering will yield a poor ORFS measurement.
As Figure 2 illustrates, ORFS is a frequency-domain measurement. It requires an FFT of the baseband data to return power vs. frequency for the transmit signal. While an FFT is a processor-intensive computation, it can be performed completely independently of the EVM measurement. As a result, using task parallelism as a programming technique can significantly reduce measurement times on multicore processors.
Figure 2. Output of a Typical ORFS Measurement
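The core of a frequency-domain power measurement like ORFS can be approximated in a few lines. This is a simplified sketch with an assumed sample rate and a simulated test tone, not the standard-defined ORFS procedure:

```python
# Simplified sketch: power vs. frequency from baseband IQ via an FFT.
# The sample rate and tone offset below are illustrative assumptions.
import numpy as np

fs = 1_000_000.0                          # assumed sample rate: 1 MS/s
n = 4096
t = np.arange(n) / fs
iq = np.exp(2j * np.pi * 100_000.0 * t)   # simulated tone 100 kHz off carrier

# FFT the baseband data and convert to a normalized power spectrum in dB.
spectrum = np.fft.fftshift(np.fft.fft(iq))
power_db = 10.0 * np.log10(np.abs(spectrum) ** 2 / n ** 2 + 1e-30)
freqs = np.fft.fftshift(np.fft.fftfreq(n, d=1.0 / fs))

peak_offset = freqs[np.argmax(power_db)]  # frequency of strongest emission
```

In a real ORFS routine, the power would then be evaluated at the standard-specified offsets from the carrier rather than simply at the peak.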
Basics of Data Parallelism
Data parallelism, the second parallel programming technique, operates by dividing data into subsets and processing each subset in parallel with identical measurement routines. After processing each subset in parallel, the results are compiled to complete the measurement. Generally, data parallelism is most effective on large data sets where the overhead of copying the data is small compared to the overall processing time.
In LabVIEW, data parallelism can be implemented by acquiring raw IQ data in chunks and passing these to a queue structure. As observed in Figure 3, multiple dequeue function calls retrieve and process individual data sets in parallel. In this scenario, the same measurement algorithm operates on each subset of data, and the results are compiled to return the overall measurement.
Figure 3. Example of LabVIEW Data Parallelism Using the Queue Structure
The queue structure yields several benefits. First, it provides an easy mechanism for dividing a waveform into subsets, which is necessary for parallel execution. Second, it reduces the latency of the measurement because the first subset can be processed before the entire waveform has been acquired.
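A rough Python analog of this producer/consumer queue pattern might look like the following. The chunk count and the simple per-chunk energy computation are illustrative stand-ins for a real measurement routine:

```python
# Hedged sketch of data parallelism with a queue: workers dequeue
# waveform subsets, process them in parallel, and the partial
# results are compiled into one overall measurement.
import queue
import threading

import numpy as np

work = queue.Queue()
results = []
results_lock = threading.Lock()

def worker():
    """Dequeue subsets and process each one independently."""
    while True:
        chunk = work.get()
        if chunk is None:                           # sentinel: no more data
            work.task_done()
            return
        partial = float(np.sum(np.abs(chunk) ** 2)) # per-chunk energy
        with results_lock:
            results.append(partial)
        work.task_done()

threads = [threading.Thread(target=worker) for _ in range(2)]
for t in threads:
    t.start()

# "Acquire" a waveform and enqueue it in subsets, as in Figure 3.
waveform = np.ones(1000, dtype=complex)
for chunk in np.array_split(waveform, 4):
    work.put(chunk)
for _ in threads:
    work.put(None)                                  # one sentinel per worker

work.join()
total_energy = sum(results)                         # compile the final result
```

Note that, as in the LabVIEW implementation, chunks can be enqueued while earlier chunks are still being processed, which is what reduces measurement latency.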
For cellular test, one of the best examples of a measurement that benefits from data parallelism is phase and frequency error (PFER). In this measurement, a demodulated IQ waveform is compared with the desired phase and frequency on a symbol-by-symbol basis.
To complete this measurement, a series of FFTs must be performed on subsequent data sets. Because each of these calculations can be made completely independently of one another, the overall measurement time can be reduced by dividing the processing load between multiple threads.
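In Python, this divide-and-process idea could be sketched with a thread pool mapping an FFT over independent waveform subsets. The chunk count and per-subset routine here are illustrative, not the actual PFER algorithm:

```python
# Illustrative sketch: splitting one waveform into subsets and
# computing an FFT on each subset in a separate thread.
from concurrent.futures import ThreadPoolExecutor

import numpy as np

def chunk_spectrum(chunk):
    # Independent per-subset FFT, standing in for a PFER-style routine.
    return np.abs(np.fft.fft(chunk))

# Simulated waveform, divided into independent subsets.
waveform = np.exp(1j * np.linspace(0.0, 200.0 * np.pi, 8192))
chunks = np.array_split(waveform, 8)

# Each FFT is independent of the others; NumPy's FFT can release the
# interpreter lock, so the threads may run concurrently on separate cores.
with ThreadPoolExecutor(max_workers=4) as pool:
    spectra = list(pool.map(chunk_spectrum, chunks))
```

The per-subset results in `spectra` would then be compiled, as described above, to return the overall measurement.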
Performance Benchmarks
To characterize the real-world benefits of parallel programming techniques, AmFax benchmarked its GSM test solution against a competing one. In this scenario, AmFax compared the performance of a multicore PXI and LabVIEW-based test system with a leading traditional benchtop instrument.
To ensure a fair comparison, AmFax configured a common test routine for each instrument in NI TestStand. In this sequence, each instrument was abstracted at a common programming layer. By following identical test sequences for each instrument, the comparison isolated the effect of measurement speed on overall test time.
Using this benchmark architecture, AmFax verified that the multicore programming approach was between 50% and 90% faster than the traditional instrumentation method. As a result, AmFax reduced the overall test time for a typical cellular handset to 43 seconds vs. 93 seconds using traditional instruments.
For example, individual measurements of transmit power (TXP), power vs. time (PVT), and PFER each take 4.7 milliseconds, which is effectively real time. Even ORFS, the most time-consuming measurement, can be reduced to 5 milliseconds.
The benchmark results suggest that parallel programming techniques enable more efficient processor utilization, even on dual-core processors. As CPUs evolve to include more cores, a combination of task and data parallelism can balance the processing load among them. Future quad-core and eight-core processors will likely reduce test times even further.
Conclusion
As wireless devices increase in complexity and volume, the pressure to reduce the cost of wireless handset test will continue as well. Fortunately, multicore processors provide software-defined instruments with a high-performance test solution.
Today's multicore processors significantly improve test times in single device testing. Additionally, these processing times can be reduced further with the implementation of parallel DUT configurations. As multicore processors continue to evolve, test engineers will see improvements in overall test time.
About the Author
Mark Jewell is the business development manager for AmFax. He graduated from Sussex University with a BSc Honours in physics and trained as an RF communications engineer at Marconi before moving into commercial sales. Over the last 20 years, Mr. Jewell has worked with many leading cellular manufacturers and operators around the world on infrastructure projects and product enhancement. +44 1258 480777, e-mail: [email protected]
Steven Bird is a software team leader at AmFax and works on RF and wireless test. He has industrial experience in the aerospace, defense, and communications industries. e-mail: [email protected]
AmFax Ltd., Unit 3, Clump Farm Industrial Estate, Blandford Forum, Dorset, U.K. DT11 7TD
David A. Hall is a product marketing engineer for RF and communications at National Instruments. He graduated from Pennsylvania State University with a B.S. in computer engineering and has been with NI since 2004. National Instruments, 11500 North Mopac Expwy., Austin, TX 78759, 512-683-5661, e-mail: [email protected]