Looking ahead to 2011, the two leading trends in test will be the use of multiple computing architectures in a test system, known as heterogeneous computing, and the application of higher-level software abstraction tools to implement IP-to-the-pin capability on FPGA-based reconfigurable instrumentation. Additional trends include a growing focus on organizational test integration and investment in the tools and architectures needed to design flexible system software stacks.
Heterogeneous Computing

Automated test systems have always consisted of multiple types of instruments, each best suited to different measurement tasks. This same specialization is now affecting how we perform computation in a test and measurement system. Applications like RF spectrum monitoring, for example, require inline, custom signal processing and analysis not possible using a standard PC CPU.
To address these needs, engineers will have to turn to heterogeneous computing architectures that distribute processing and analysis among different computing nodes. The most common nodes in test systems are central processing units (CPUs), graphics processing units (GPUs), FPGAs, and cloud computing resources.
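To make the idea of distributing work among nodes concrete, here is a toy sketch of how a spectrum-monitoring pipeline might be partitioned. The stage names and the partitioning are hypothetical illustrations, and each "node" is just an ordinary Python function standing in for the real hardware.

```python
# Toy partitioning of a stream-processing pipeline across compute nodes.
# Each "node" below is a plain function standing in for real hardware.

def fpga_decimate(samples, factor=2):
    """Inline, deterministic front-end work suited to an FPGA."""
    return samples[::factor]

def gpu_power(samples):
    """Bulk data-parallel math suited to a GPU."""
    return [x * x for x in samples]

def cpu_report(powers, threshold):
    """Decision logic and reporting suited to the host CPU."""
    return sum(1 for p in powers if p > threshold)

raw = [3.0, 0.1, 0.2, 2.5, 4.0, 0.1]
over = cpu_report(gpu_power(fpga_decimate(raw)), threshold=5.0)
print(over)  # prints 2: two decimated samples exceed the power threshold
```

The value of this structure is that each stage can later be moved to the node that suits it without rewriting the others, which is exactly the flexibility a heterogeneous architecture is meant to provide.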
While heterogeneous computing provides new and powerful computing architectures, it also introduces additional complexity into test system development, most notably the need to learn a different programming paradigm for each type of computing node. For instance, to fully utilize a GPU, programmers must restructure their algorithms to expose massive data parallelism and translate the algorithm math into the GPU's graphics-oriented programming model.
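The restructuring step can be sketched in a few lines. This is a minimal illustration, using NumPy array operations as a stand-in for a GPU kernel: the scalar version processes one sample at a time, while the data-parallel version expresses the same math as one elementwise multiply and one reduction per channel, the shape a GPU exploits.

```python
import numpy as np

def channel_power_scalar(samples):
    """CPU-style formulation: a per-sample loop."""
    total = 0.0
    for x in samples:
        total += x * x
    return total / len(samples)

def channel_power_parallel(blocks):
    """Data-parallel formulation: elementwise multiply plus a reduction
    along each channel, computed for all channels at once."""
    blocks = np.asarray(blocks, dtype=np.float64)
    return (blocks * blocks).mean(axis=1)

blocks = [[1.0, 2.0, 3.0, 4.0],
          [0.5, 0.5, 0.5, 0.5]]
print(channel_power_parallel(blocks))  # prints [7.5  0.25]
```

On a real GPU the same reshaping applies, but the elementwise and reduction steps would run as kernels across thousands of threads rather than as NumPy calls.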
Meanwhile, FPGAs often require the knowledge and use of low-level hardware description languages like VHDL to configure specific processing capabilities. Fortunately, work is underway to abstract the complexities of specific computing nodes so heterogeneous computing can enable many new possibilities in test system development.
IP-to-the-Pin

Moore’s Law is bringing FPGA performance in line with that of ASICs. This performance gain, combined with the advantage of being software-defined, has created a market shift toward FPGA-based designs for both electronic devices and test instrumentation; the Gartner research firm stated in a 2009 report that FPGAs now hold a 30-to-1 edge over ASICs in design starts.
This common programmable core enables engineers to deploy design building blocks, known as intellectual property (IP) cores, to both the device under test (DUT) and reconfigurable instruments. These IP cores implement functions and algorithms such as control logic, data acquisition, digital protocols, encryption, math, and signal processing.
The ability for test engineers to embed design IP directly in their test instrumentation for system-level test can dramatically shorten design verification and validation while reducing production test time and improving fault coverage. This capability is called IP-to-the-pin.
Moore’s Law will continue to accelerate this trend by providing more powerful FPGAs. Vendors are also beginning to integrate FPGAs with devices such as processors and data converters to deliver more performance and user programmability even closer to the pin.
Another element of this trend is the increasing availability and capability of high-level synthesis (HLS) tools, such as NI LabVIEW FPGA, for test engineers. This abstraction makes FPGA design accessible to more engineers and provides a platform for programming at the system level.
Organizational Test Integration
For the past two decades, organizations have sought to improve the performance of test teams in design and production by drawing clear boundaries around these groups and allowing them to improve independently. However, this strategy has started to generate diminishing returns. To meet the increasing time-to-market and cost pressures of next-generation products, companies are now turning to organizational test integration.
Best-in-class companies are integrating test organizations in design and production to decrease test development time, reduce costs, and improve quality. For example, by improving the new product introduction process and involving production test earlier in design, organizations can develop test systems faster and reduce time-to-market.
In addition, the increased use of test automation in design and production has shown both teams that common software and instrumentation platforms can be used across the organization, reducing capital and training costs. Finally, test teams are developing reusable software components that not only reduce development time but also increase quality by providing more reliability and repeatability of measurements.
System Software Stack
With the increasing role of automated test software during the past decade, today’s industry-leading companies are putting greater emphasis on designing more robust system software stacks to ensure maximum longevity and reuse. Most companies are moving away from monolithic test applications built around hard-coded values and direct driver calls to the instruments. Instead, they achieve modularity through separate yet tightly integrated layers for test management software, application software, and driver software.
Two key technology components gaining increased usage are process models and hardware abstraction layers. The process model plays a primary role in separating all of the test steps in a sequence from the non-test tasks, such as reporting, database logging, importing test limits, and DUT tracking. Hardware abstraction layers separate the test application software from the instrument hardware, which minimizes the time and costs associated with migrating or upgrading test systems.
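A hardware abstraction layer can be sketched in a few lines. In this minimal illustration, with hypothetical class and method names, the test step is written against an abstract instrument interface, so swapping instrument vendors changes only which driver class is constructed, not the test code itself.

```python
from abc import ABC, abstractmethod

class DmmInterface(ABC):
    """Hardware abstraction layer: test code sees only this interface."""
    @abstractmethod
    def measure_voltage(self) -> float: ...

class VendorADmm(DmmInterface):
    def measure_voltage(self) -> float:
        # A real driver call (e.g., a vendor API) would go here;
        # a fixed reading stands in for it in this sketch.
        return 5.01

class VendorBDmm(DmmInterface):
    def measure_voltage(self) -> float:
        return 4.98

def voltage_test_step(dmm: DmmInterface, limit_lo: float, limit_hi: float) -> bool:
    """A test step written against the abstraction, not a specific instrument."""
    return limit_lo <= dmm.measure_voltage() <= limit_hi

# Migrating to a different instrument changes only this constructor call:
print(voltage_test_step(VendorADmm(), 4.9, 5.1))  # prints True
print(voltage_test_step(VendorBDmm(), 4.9, 5.1))  # prints True
```

This is the mechanism by which a hardware abstraction layer minimizes migration and upgrade costs: the sequence of test steps is untouched when the underlying instrument changes.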
As DUTs continue to advance, you must constantly evaluate the impact of these innovations on your test technologies and methods. Applying these 2011 trends to your test strategy will help keep your organization ahead of the industry and equip you to meet the growing demands on your business.