For years, computing has trended toward distributed architectures that divide processing among multiple computing nodes. For example, one of the most powerful supercomputers in the world sits not in a university laboratory but is distributed among PlayStation 3 video consoles around the world performing protein folding. And Google search queries do not run on a single supercomputer but on a network of more than 450,000 low-cost interconnected servers.
Even the computer you may be using contains multiple computing nodes, such as a GPU to process and display graphics and a CPU that distributes processing among its multiple cores. Test systems are no different: they, too, will have to adapt to distribute acquisition, processing, and storage.
The trend toward software-defined instrumentation is giving engineers unprecedented control over automated test systems and enabling new types of applications. Much of this is due to engineers’ ability to access the raw measurement data and analyze and process it for their exact needs. With higher digitization rates and channel counts, the amount of available data is increasing at exponential rates. Within five years, some high-performance test systems will be processing petabytes of data per day.
Beyond the sheer amount of data being acquired, much of it will need to be processed in real time. Applications such as RF signal processing benefit immensely if demodulation, filtering, and FFTs can be performed as the data is acquired. For example, it becomes possible to move beyond power-level triggering in RF applications and create custom triggers based on the frequency-domain content of the signal.
Peer-to-peer computing uses a decentralized architecture to distribute processing and resources among multiple nodes. This is in contrast to traditional systems, in which a central hub is responsible for transferring data and managing processing.
In automated test systems, peer-to-peer may take the form of acquiring data directly from an instrument like a digitizer and streaming it to an available field programmable gate array (FPGA) for inline signal processing. Other systems will use it to offload processing to other test systems or high-performance computers.
Data Transport in Peer-to-Peer Systems
New high-performance distributed architectures are required to transfer and process all of this data. These architectures will share three key characteristics:
- High-Throughput, Point-to-Point Topologies—The architecture must be able to handle the transfer of many gigabytes of data per second while allowing nodes to communicate with each other without passing data through a centralized hub.
- Low Latency—Data will need to be acquired and often acted upon in fractions of a second. There cannot be a large delay between when the data is acquired and when it reaches a processing node.
- User-Customizable Processing Nodes—The processing nodes must be user-programmable so that analysis and processing can meet exact test system needs.
Very few distributed architectures have been able to meet all three of these criteria. For example, Ethernet provides an effective point-to-point topology with a diverse set of processing nodes, but its relatively high latency and modest throughput make it poorly suited to inline signal processing and analysis.
Figure 1. Bandwidth vs. Latency of Data Buses
The architecture with the most initial success and future promise in meeting these criteria is PCI Express (PCIe). The bus at the core of every PC and laptop for much of the last decade, PCIe was specifically designed for high-throughput, low-latency transfers. It provides throughput of up to 16 GB/s, soon to reach 32 GB/s, with latencies of less than a microsecond (Figure 1).
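To put those numbers in perspective, consider a rough back-of-envelope calculation; the digitizer channel count and sample rate below are illustrative assumptions, not figures from any particular instrument.

```python
# Back-of-envelope data-rate check for a hypothetical digitizer.
channels = 2              # two acquisition channels (assumption)
sample_rate = 2e9         # 2 GS/s per channel (assumption)
bytes_per_sample = 2      # 16-bit samples

data_rate = channels * sample_rate * bytes_per_sample / 1e9   # GB/s
print(f"Sustained data rate: {data_rate:.0f} GB/s")           # 8 GB/s

pcie_throughput = 16.0    # GB/s, as quoted above
gige_throughput = 0.125   # GB/s, theoretical Gigabit Ethernet

print("Fits within PCIe bandwidth:", data_rate <= pcie_throughput)    # True
print("Fits within Gigabit Ethernet:", data_rate <= gige_throughput)  # False
```

Even this modest hypothetical front end saturates Gigabit Ethernet many times over while consuming only half of the PCIe bandwidth quoted above.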
PCIe is already seeing use as a distributed architecture in military and aerospace applications. In defining its next-generation test systems, the U.S. Department of Defense Synthetic Instrument Working Group identified PCIe as the only bus capable of providing the data throughput and latency required for user-customizable instrumentation. This architecture now is seen in synthetic instruments from BAE Systems that use PCIe to stream downconverted and digitized RF data directly to separate FPGA processing modules for inline signal processing.
Inline Signal Processing
FPGAs are programmable silicon with hardware-timed execution that enables a high level of determinism and reliability. This comes from being a truly parallel architecture: each independent processing task has its own dedicated section of the chip and can function autonomously without any influence from other logic blocks. As a result, adding more processing does not affect the performance of other parts of the application. All of these features make FPGAs ideally suited for tasks like data decimation and inline signal processing in automated test applications.
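As a rough behavioral model of the kind of inline data decimation an FPGA might perform, the sketch below averages fixed-size blocks of samples; it is host-side Python, not FPGA code, and the decimation factor is an arbitrary assumption.

```python
import numpy as np

def decimate_by_averaging(samples: np.ndarray, factor: int) -> np.ndarray:
    """Reduce the data rate by averaging each block of `factor` samples.

    On an FPGA this would run as a free-running pipeline keeping pace with
    the acquisition clock; here it serves only as a behavioral model.
    """
    usable = len(samples) - len(samples) % factor   # drop any partial tail block
    blocks = samples[:usable].reshape(-1, factor)
    return blocks.mean(axis=1)

# Example: reduce a 1,000,000-sample record by a factor of 10.
raw = np.random.randn(1_000_000)
reduced = decimate_by_averaging(raw, 10)
print(len(raw), "->", len(reduced))   # 1000000 -> 100000
```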
Using peer-to-peer computing, it now is possible to stream data directly from instrumentation to FPGAs for inline analysis and process control. One common example is real-time frequency-domain triggering in RF applications.
Figure 2. NI FlexRIO FPGA Module With a Frequency-Domain Trigger to the VSA
As seen in Figure 2, the vector signal analyzer (VSA) uses peer-to-peer streaming to send data directly to the NI FlexRIO FPGA module, where it is windowed, converted to the frequency domain, and then compared against a mask. When the data exceeds this mask, the FPGA module asserts a digital trigger on the PXI backplane. Once the VSA receives this trigger, it uses its normal acquisition memory to capture a record of data, including pretrigger samples. It then is possible to access this record from the host through the VSA driver for additional processing or storage.
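The trigger logic itself would be compiled to the FPGA, but its behavior can be modeled in a few lines of host-side code. In the sketch below, the window, FFT length, and mask are illustrative assumptions rather than parameters of the system in Figure 2.

```python
import numpy as np

def frequency_domain_trigger(iq_block: np.ndarray, mask_db: np.ndarray) -> bool:
    """Return True when the block's spectrum exceeds the mask in any bin.

    Behavioral model of the FPGA trigger path: window, FFT, magnitude,
    then a per-bin comparison against a limit mask.
    """
    windowed = iq_block * np.hanning(len(iq_block))
    spectrum = np.fft.fftshift(np.fft.fft(windowed))
    power_db = 20 * np.log10(np.abs(spectrum) + 1e-12)
    return bool(np.any(power_db > mask_db))

# Illustrative use: a 1,024-point complex baseband block and a flat mask.
n = 1024
block = (np.random.randn(n) + 1j * np.random.randn(n)) / np.sqrt(2)
mask = np.full(n, 40.0)   # arbitrary limit of +40 dB in every bin
if frequency_domain_trigger(block, mask):
    print("Assert trigger")   # in hardware: assert a PXI backplane trigger line
```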
PXI MultiComputing
PCIe provides the highest bandwidth and lowest latency for peer-to-peer systems, but it was originally designed as the local bus of a single computer, which poses a challenge. As such, PCIe has the concept of a root complex that is responsible for assigning all resources and managing data transfers.
If two root complexes are connected, as when two PCs are linked over cabled PCIe, questions arise such as which system owns the bus and which resources are available to each computer. To resolve this, a nontransparent bridge (NTB) is used to separate the PCI domains and isolate resources. Through the NTB, each system maintains its own resources on the PCIe bus and communicates with its peer by mapping memory addresses to physical memory on the remote system.
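As a rough illustration of the resulting programming model (not an actual PXImc or vendor API), the sketch below assumes a hypothetical driver that exposes the NTB's outbound memory window as a device file; bytes written into the mapped region are translated by the bridge into writes against the peer system's physical memory.

```python
import mmap
import os
import struct

DEVICE = "/dev/ntb_peer0"   # hypothetical device node exposed by an NTB driver
WINDOW_SIZE = 4096          # size of the mapped aperture (assumption)

# Map the outbound NTB window into this process's address space.
fd = os.open(DEVICE, os.O_RDWR)
window = mmap.mmap(fd, WINDOW_SIZE)

# Write a small message (length header followed by payload) for the peer system.
payload = b"result block"
window[0:4] = struct.pack("<I", len(payload))
window[4:4 + len(payload)] = payload

window.close()
os.close(fd)
```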
NTB technology has been in use for more than a decade but only in vendor-specific solutions. In late 2009, the PXI Systems Alliance (PXISA) released a new specification called PXI MultiComputing (PXImc) that standardizes the NTB hardware and software to ensure that multiple vendors’ products work together.
Figure 3. Peer-to-Peer Test System Using PXImc
Using PXImc, it is possible to create a vendor-interoperable system that allows communication over PCIe among multiple PXI systems as well as laptops, high-performance computers, and even stand-alone instruments (Figure 3). While enabling entirely new capabilities for PXI systems, the specification also ensures backward compatibility with the more than 1,500 PXI controllers, modules, and chassis already available.
PXISA member companies are hard at work developing PXImc-compatible products, and some already have demonstrated working prototypes. For example, at NIWeek 2010, NI previewed a system using PXImc to simulate the adaptive optics algorithms needed to control the mirror of the European Southern Observatory’s Extremely Large Telescope. This PXI system featured four interconnected servers attaining 50 GFLOPS and 600 MB/s of sustained throughput between each processing node. In other words, this is enough processing power to simultaneously control more than 320,000 inverted pendulums.
Peer-to-peer computing using high-performance distributed architectures is early in its application, and many innovations are still to come. With exponentially growing amounts of data and increasingly complex testing requirements, test engineers will need to learn how best to apply these new technologies to create smarter test systems.
About the Author
Matthew Friedman is a senior product manager for PXI and automated test with National Instruments. He holds a B.S. degree in computer science from the University of Colorado at Boulder. National Instruments, 11500 N. Mopac Expwy., Austin, TX 78759, 512-683-5435, e-mail: [email protected]