Five Key Trends Highlight 2012’s Automated Test Outlook

Jan. 17, 2012
NI's Richard McDonell takes you inside the 2012 Automated Test Outlook, revealing trends and providing insights that will affect how you approach test this year.

Fig. 1. Test organization transformation includes four maturity levels.

Fig. 2. Test development software is shifting from microprocessor-only tools to also include HDL abstractions, high-level synthesis, and hardware-agnostic models of computation, improving test system performance and lowering development time through more portable measurement algorithms.

Fig. 3. The inherent parallelism achieved by improving the integration of RF design and test enables engineers to bring products to market more quickly than before.

Each year, National Instruments publishes The Automated Test Outlook, a comprehensive view of key technologies and methodologies impacting the test and measurement industry. We’ve found that one of the biggest challenges for test engineers and managers is keeping up with technology trends.

National Instruments’ broad knowledge of technology trends and its interaction with companies across many sectors give us a unique vantage point on the test and measurement market. For the latest edition, we have divided the outlook into five categories. In each category, we highlight a major trend that will significantly influence automated test in the coming one to three years.

Transforming Test Into A Strategic Asset

In tough economic conditions, companies are more diligent in looking for opportunities to gain a competitive advantage and simultaneously grow revenue, profits, and customer loyalty. To achieve such results, companies have leveraged popular business improvement strategies such as six sigma, lean manufacturing, capability maturity model integration (CMMI), and agile product development.

Another strategy is to take a support function within a company and elevate and strategically leverage it as a marketplace differentiator. For example, the role of information technology (IT) has changed dramatically over the last two decades and has evolved from a standard support function to a critical asset.

An emerging trend for electronics manufacturing companies is the use of product test for competitive differentiation. This has elevated the test engineering function from a cost center to a strategic asset.

This shift is exemplified by a recent global NI survey of test engineering leaders, which showed that their top goal over the next one to two years is to reorganize their test organization structure for increased efficiency. This strategic alignment delivers significant business impact by reducing the cost of quality and improving a company’s financials by getting better products to market faster.

Research has revealed that “optimized” is the ideal maturity level—where a test engineering organization provides a centralized test strategy that spans the product life cycle. This optimized organization develops standardized test architectures with strong reuse components, enables dynamic resource utilization, and provides systematic enterprise data management and analysis that results in company-level business impact.

Companies making this transformation must be committed to a long-term strategy because, according to NI research, it generally takes three to five years to realize the full benefit. It takes a disciplined investment strategy focused on innovation to transform the test organization through the four maturity levels: ad-hoc, reactive, proactive, and optimized (Fig. 1).

Each level includes elements of people, process, and technology. The right people are required to develop and maintain the cohesive test strategy. Process improvements are required to streamline test development and reuse throughout product development. Finally, the latest technologies must be tracked and incorporated to improve system performance while lowering cost.

This phased approach enables organizations to realize benefits early on—after the completion of just one or two projects. Examples of these transition projects include:

  • Standardized test architecture/process (ad-hoc -> reactive): Adopting standardized software and hardware architectures and test methodologies improves productivity, including faster test code development and increased test asset utilization.
  • Test total cost of ownership (TCO) financial model (reactive -> proactive): A TCO financial model for test can enable you to calculate the business productivity metrics and financial metrics (return on investment, payback period, net present value, internal rate of return, etc.) of test improvement initiatives, as the sketch after this list illustrates.
  • Enterprise test data management (proactive -> optimized): Developing a comprehensive test data infrastructure that spans across sites with universal access improves real-time decision-making.
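
A minimal sketch of such a TCO calculation is shown below in Python. The discount rate and cash flows are purely hypothetical, and a real model would also capture capital, maintenance, floor-space, and labor costs.

```python
# Minimal sketch of a test TCO financial model, assuming hypothetical cash flows.

def npv(rate, cash_flows):
    """Net present value of cash flows, where cash_flows[0] occurs today."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

def payback_period(cash_flows):
    """Number of periods until cumulative cash flow turns positive (None if never)."""
    total = 0.0
    for t, cf in enumerate(cash_flows):
        total += cf
        if total >= 0:
            return t
    return None

# Hypothetical example: $500k up-front standardization cost, $200k/year savings.
flows = [-500_000, 200_000, 200_000, 200_000, 200_000, 200_000]
print("NPV at 10% discount rate:", round(npv(0.10, flows)))
print("Payback period (years):  ", payback_period(flows))
```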

Portable Measurement Algorithms

Over the past 20 years, the concept of user-programmable, microprocessor-based measurement algorithms has become mainstream, allowing test systems to rapidly adapt to custom and changing test requirements. This approach is called software-defined or virtual instrumentation.

If the microprocessor initiated the virtual instrumentation revolution, then the field-programmable gate array (FPGA) will usher in its next phase. FPGAs have been used in instruments for many years.

For instance, today’s high-bandwidth oscilloscopes collect so much data, it is impossible for users to quickly analyze all of it. Hardware-defined algorithms on these devices, often implemented on FPGAs, perform data analysis and reduction (e.g., averaging, waveform math, and triggering), compute statistics (e.g., mean, standard deviation, maximum, and minimum), and process the data for display, all in an effort to present the results to the user in a meaningful way. 
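
To make the idea concrete, the following Python sketch performs on a host processor the same kind of block averaging and summary statistics an instrument’s FPGA computes in hardware; the waveform, block size, and noise level are illustrative.

```python
import numpy as np

# Host-side sketch of the kind of data reduction an instrument FPGA performs in
# hardware: average blocks of raw samples, then report summary statistics so only
# a small result set (not the full record) reaches the user.

def reduce_waveform(samples, block_size=1024):
    # Trim to a whole number of blocks and average each block (decimation by averaging).
    n_blocks = len(samples) // block_size
    blocks = samples[: n_blocks * block_size].reshape(n_blocks, block_size)
    averaged = blocks.mean(axis=1)
    stats = {
        "mean": float(samples.mean()),
        "std": float(samples.std()),
        "max": float(samples.max()),
        "min": float(samples.min()),
    }
    return averaged, stats

# Illustrative 1M-sample noisy sine record.
t = np.linspace(0, 1, 1_000_000)
raw = np.sin(2 * np.pi * 50 * t) + 0.1 * np.random.randn(t.size)
display_trace, summary = reduce_waveform(raw)
print(len(display_trace), summary)
```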

While these capabilities are of obvious value, there is lost potential in the closed nature of these FPGAs. In most cases, users cannot deploy their own custom measurement algorithms to this powerful processing hardware.

Open FPGAs on measurement hardware offer many advantages over processor-only systems. Because of their immense computational capabilities, FPGAs can deliver higher test throughput and greater test coverage, reducing test time and capital expenditures.

The low latency of FPGA measurements also provides the ability to implement tests that are not possible on a microprocessor alone. Their inherent parallelism offers true multi-site test, even more so than with multicore processors. And finally, FPGAs can play a key role in real-time test hardware sequencing and device under test (DUT) control.
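
The sketch below illustrates the multi-site idea on a host processor using a thread pool; the per-site routine is hypothetical, and an open FPGA goes further by dedicating independent hardware resources to each site rather than sharing processor cores.

```python
from concurrent.futures import ThreadPoolExecutor
import time

# Conceptual sketch of multi-site test: the same measurement routine runs against
# several DUT sites at once. On a multicore CPU the sites share processor time;
# an open FPGA can dedicate independent hardware resources to each site.

def test_site(site_id):
    # Hypothetical per-site measurement; a real routine would drive instruments.
    time.sleep(0.5)                      # stand-in for stimulus + measurement time
    return site_id, "PASS"

sites = [0, 1, 2, 3]
start = time.time()
with ThreadPoolExecutor(max_workers=len(sites)) as pool:
    results = list(pool.map(test_site, sites))
print(results, f"elapsed: {time.time() - start:.2f} s")   # ~0.5 s, not 2.0 s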

The emerging trend of open FPGAs in test systems was articulated in the 2010 National Instruments Automated Test Outlook (http://zone.ni.com/devzone/cda/tut/p/id/11287), and a growing number of open FPGA products, available from both Agilent and National Instruments, are on the market today.

While hardware options continue to reach the market, most test and measurement algorithms, developed for execution on microprocessors as part of the virtual instrumentation revolution, are simply not easily portable to FPGAs.

It takes significant expertise and time to develop verified, trusted FPGA measurement IP, which is why most FPGAs in instrumentation hardware today use only fixed, vendor-defined algorithms and aren’t user-programmable.

In the 2011 Automated Test Outlook, National Instruments discussed heterogeneous computing (http://zone.ni.com/devzone/cda/tut/p/id/12570), or distributing algorithms across a variety of computing architectures (CPUs, GPUs, FPGAs, and the cloud), selecting the optimal resource for algorithm implementation.

While heterogeneous computing is a very powerful concept from a hardware architecture perspective, each of these targets presents unique programming challenges, and measurement algorithm portability between them can be difficult.

To overcome this tension, the industry is attacking these challenges with advances in development tools that promise to provide algorithm portability across hardware targets, making the advantages of FPGAs available to all engineers developing test systems (Fig. 2).

The first set of such tools can be broadly classified as those that provide hardware description language (HDL) abstraction. HDLs describe gate- and signal-level behavior in a text-based manner, while HDL abstractions attempt to provide higher-level design capture, often in a graphical or schematic representation.

These tools include Xilinx System Generator for DSP, Mentor Graphics Visual Elite HDL, and the National Instruments LabVIEW FPGA Module. While they do provide a much lower barrier to adoption of FPGA technology than HDL, they do not completely abstract some of the hardware-specific attributes of FPGA design such as pipelining, resource arbitration, DSP slice architecture, and on-chip memories. As such, algorithms still require rework when ported to an FPGA, motivating future advances in development tools.

High-level synthesis (HLS) tools provide the ability to capture algorithms at a high level, then independently specify performance attributes for a given implementation such as clock rate, throughput, latency, and resource utilization. This decoupling provides algorithm portability, as the specific implementation is not part of the algorithm definition. Moreover, algorithm developers do not need to incorporate hardware-specific considerations into their design such as pipelining and resource arbitration.

The concept of HLS has been around for more than 20 years, but the tools on the market are only just becoming mature enough to be viable. Offerings include Synopsys Synphony, Xilinx AutoESL, Cadence C-to-Silicon, and Mentor Graphics Catapult C. While these tools do offer advantages over HDL abstractions, they only target FPGAs or application-specific integrated circuits (ASICs) and not other computing platforms such as microprocessors and GPUs.

Attempting to address some of the limitations of these HLS tools, National Instruments recently announced beta software that incorporates the familiar LabVIEW dataflow diagram with the advantages of HLS. This promises to provide a path for the large number of LabVIEW-based measurement algorithms to an FPGA implementation, without compromising microprocessor execution or requiring significant algorithm redesign for FPGA implementation. It is not yet ready for mainstream adoption, but initial results are very promising.

The last step in the evolution of development tools focuses on coupling measurement portability across hardware targets with multiple models of computation and design capture. These models of computation might include the LabVIEW dataflow diagram, DSP diagrams for multi-rate signal processing in RF and communication applications, textual math for capture of textbook-like formulas, or state machines for digital logic and protocols. 

Take, for instance, a future system-on-a-chip (SoC) such as the Xilinx Zynq extensible processing platform, which couples an ARM microprocessor with an FPGA. This silicon offers tremendous potential for heterogeneous computing, yet programming it will be difficult as separate languages and models of computation are required for the microprocessor and the FPGA.

Ideally, a multitude of models of computation would be supported for all targets, allowing you to capture your algorithm in the most efficient manner and then deploy it to the best execution target for a given application. Depending on the business needs, “best” could mean highest performance, most cost effective, or fastest time-to-market. Tools that support hardware-agnostic models of computation are currently under development and are an inevitable outcome based on the needs of today’s test system developers.

While hardware-agnostic measurement algorithms and high-level synthesis tools may not yet be mainstream, open FPGAs are becoming increasingly prevalent in automated test systems. The benefits of FPGAs in test are already worth the increased development investment in many applications, and as graphical system design software tools improve and development time and complexity decrease, the number of these applications will only rise.

Just as microprocessors and the associated software development environments and measurement algorithms brought about the virtual instrumentation revolution, user-programmable FPGAs and new graphical system design software tools will define test systems of the future.

PCI Express As System And Interface Bus

Automated test systems have always relied on PCs to provide central control of all instrumentation hardware and to automate the test procedure. PCs in various form factors, such as desktops, workstations, industrial, and embedded, have been used for this purpose.

These PCs have provided various interface buses such as USB, Ethernet, serial, GPIB, PCI, and PCI Express that have been utilized for interfacing instrumentation hardware in automated test systems. Since they play such a critical role in an automated test system, it’s imperative for the test and measurement industry to track the progression of the PC industry and exploit any new technologies for increasing capabilities and performance while lowering the cost of test.

Over the last 10 years, PCs have evolved rapidly in multiple dimensions. As predicted by Moore’s law, the processing capabilities of CPUs have increased by more than 75 times in the past decade. Besides the dramatic increase in processing capabilities, another significant trend has been the emergence of serial communication interfaces and the demise of parallel communication interfaces.

PCI Express has replaced PCI, AT, and ISA as the default internal system bus for interfacing peripheral system devices to the CPU. External interface buses such as USB and Ethernet have replaced parallel port, SCSI, and other parallel communication buses. With the proliferation of wireless communications standards such as Wi-Fi and Bluetooth, the consolidation of external physical interfaces on PCs is another recently emerging trend.

Based on current adoption rates, the PCI Express bus, utilized as both an internal system bus and external interface bus, is becoming the interface of choice for automated test systems. It offers the ideal combination of high data bandwidth and low latency and is an extremely pervasive technology since it’s a fundamental element of every PC. It has also started to blur the boundaries between a system bus and an interface bus and will likely continue to dissolve this delineation.

As a system bus, the serial PCI Express bus has various inherent advantages over parallel buses such as PCI and VME. Technical challenges with PCI and VME, such as timing skew, power consumption, electromagnetic interference, and crosstalk across parallel buses, become more and more difficult to circumvent when trying to increase their data bandwidth.

Besides being a technically superior bus, PCI Express has seen continuous improvements in its data transfer capabilities since its release in 2004. PCI Express 2.0 was released in 2007, doubling the data rate from PCI Express 1.0. PCI Express 3.0 was released in 2010, doubling the data rate over PCI Express 2.0.
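
The per-lane figures below are the generally published signaling rates and line encodings rather than numbers quoted in the outlook; the short Python sketch converts them into effective bandwidth per lane, showing the roughly twofold gain at each generation.

```python
# Effective per-lane, per-direction PCI Express bandwidth from the published
# signaling rates and line encodings (figures not taken from the article).
generations = {
    "PCIe 1.x": (2.5e9, 8 / 10),     # 2.5 GT/s, 8b/10b encoding
    "PCIe 2.0": (5.0e9, 8 / 10),     # 5.0 GT/s, 8b/10b encoding
    "PCIe 3.0": (8.0e9, 128 / 130),  # 8.0 GT/s, 128b/130b encoding
}

for gen, (rate, efficiency) in generations.items():
    bytes_per_s = rate * efficiency / 8          # bits -> bytes after encoding overhead
    print(f"{gen}: ~{bytes_per_s / 1e6:.0f} MB/s per lane per direction")
# PCIe 1.x: ~250 MB/s, PCIe 2.0: ~500 MB/s, PCIe 3.0: ~985 MB/s
```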

Although PCI Express has been consistently modified, these improvements have not come at the cost of compatibility. PCI Express uses the same software stack as PCI and provides full backward compatibility.

Automated test and measurement platforms that leverage PCI Express as the internal system bus, such as PXI, can leverage all these advances to continue to offer more and more capabilities at low cost. Such platforms, based on their technically superior capabilities, will likely become the central core of all automated test systems.

As an interface bus connecting PCs to instruments and instruments to instruments, PCI Express addresses many prior issues of interface buses, such as GPIB and Ethernet, by significantly lowering latency and dramatically increasing data bandwidth. This can have a major impact on lowering test times, as GPIB and Ethernet interface buses fundamentally constrain the overall efficiency of a test system by limiting the rate of data transfer and adding latency to every transaction.

Furthermore, since CPUs don’t natively provide access to these external interfaces, some form of conversion usually occurs inside the PC to translate these external interfaces into the internal system bus, which is PCI Express.

PCI Express offers better performance over these other external interfaces and is directly available from the CPU in a PC. Thus, it removes the bottleneck imposed by these other external interface buses and allows for test times to be lowered significantly.
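
A back-of-the-envelope model makes the point. The latency and bandwidth figures below are illustrative assumptions, not measured values for any particular instrument.

```python
# Back-of-the-envelope model of how bus latency and bandwidth bound test time.
# The latency/bandwidth figures are illustrative assumptions, not measured values.

def transfer_time(transactions, bytes_per_transaction, latency_s, bandwidth_Bps):
    # Each transaction pays a fixed latency plus its payload transfer time.
    return transactions * (latency_s + bytes_per_transaction / bandwidth_Bps)

scenario = dict(transactions=10_000, bytes_per_transaction=4096)
gpib = transfer_time(**scenario, latency_s=1e-3, bandwidth_Bps=1e6)   # ms-class latency, MB/s-class bandwidth
pcie = transfer_time(**scenario, latency_s=1e-6, bandwidth_Bps=2e9)   # µs-class latency, GB/s-class bandwidth
print(f"GPIB-class bus: {gpib:.1f} s,  PCI Express-class bus: {pcie:.3f} s")
```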

The use of PCI Express as an external interface bus is not new. PCI-SIG, the official governing body of the PCI Express specification, supports an external implementation of PCI Express, formally known as Cabled PCI Express. Released in 2007, this implementation of PCI Express provides a transparent way to extend this system bus to interface external devices.

Modular instrumentation platforms such as PXI already use Cabled PCI Express to provide flexible and low-cost control options. Cabled PCI Express formally only supports its use with copper cables, which limits the physical separation between the PC and the device to 7 m. However, when used in conjunction with electro-optical transceivers, this technology can be extended over fiber cables to provide more than 200 m of physical separation and electrical isolation.

PCI Express has been an excellent choice for interfacing PCs directly to devices. But in isolation, it can’t be used as an interface between intelligent systems that have their own independent PCI Express domains.

The use of PCI Express non-transparent bridges (NTBs) can address this challenge. An NTB logically separates the two PCI domains while providing a mechanism for translating certain PCI transactions in one domain into corresponding transactions in the other, thus enabling PCI Express to be used as a communication interface between intelligent systems.

NTBs can be used in a system configuration to interface multiple intelligent sub-systems and also to interface physically independent systems when used in conjunction with Cabled PCI Express or Thunderbolt. The PXI MultiComputing (PXImc) specification, released by the PXI Systems Alliance (PXISA) in November 2009, standardizes the usage of NTBs and provides the framework for creating complex high-performance test and measurement systems.
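
Conceptually, an NTB exposes an address aperture on each side and redirects accesses within that aperture into the other domain’s address space, as in the sketch below; the addresses and window size are purely illustrative and do not reflect any vendor’s API.

```python
# Conceptual sketch of non-transparent bridging: each side sees only a window of
# addresses, and the bridge translates accesses in that window into the other
# PCI Express domain. Addresses and window sizes here are purely illustrative.

class NonTransparentBridge:
    def __init__(self, local_window_base, remote_base, window_size):
        self.local_window_base = local_window_base
        self.remote_base = remote_base
        self.window_size = window_size

    def translate(self, local_address):
        offset = local_address - self.local_window_base
        if not 0 <= offset < self.window_size:
            raise ValueError("address falls outside the NTB aperture")
        return self.remote_base + offset

# System A writes to its local aperture; the NTB redirects the transaction into
# system B's address space.
ntb = NonTransparentBridge(local_window_base=0x9000_0000,
                           remote_base=0x4000_0000,
                           window_size=0x0010_0000)
print(hex(ntb.translate(0x9000_1234)))   # -> 0x40001234
```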

Based on current technology trends in the PC industry, such as the dominance of serial communication interfaces, I/O consolidation, and the pervasiveness of wireless communication, PCI Express is expected to emerge as the leading external interface bus in addition to being the default choice for a system bus.

Automated test systems that leverage PCI Express, in its various implementations, are positioned to offer the highest performance, most flexibility, and lowest cost, and they are likely to become the default choice for automated test and measurement applications.

Explosion Of Mobile Devices

One of the biggest trends in automated test over the last three decades has been the shift toward PC-based modular platforms that use the latest off-the-shelf computing technologies with increasingly powerful processors, new I/O buses, and more advanced operating systems.

While this trend is likely to continue, a completely new class of intelligent, ultra-portable computing devices, namely tablets and smart phones, has emerged in recent years and offers new opportunities for forward-thinking organizations to leverage off-the-shelf technologies in automated test systems.

Two major differences separate today’s tablets and smart phones from earlier devices. First, the shift from using a stylus to multi-touch interactions with the device has generally resulted in a better, more intuitive user experience for consumers. Second, nearly ubiquitous Wi-Fi and cellular wireless coverage has provided connectivity for these devices almost anywhere in the world.

Most industry experts predict that tablets and smart phones will occasionally replace, but more typically augment, the more powerful and ubiquitous desktop and laptop computers. Tablets and smart phones provide a convenient solution for portable content consumption, whereas traditional PCs excel at content creation.

When the Nielsen Company surveyed consumers in 2011 to study why they were using tablets instead of traditional PCs, the top reasons cited included user experience improvements like superior portability, ease of use, faster startup time, and longer battery life.

While tablets and smart phones can’t replace the PC or PC-based measurement platforms like PXI, they appear on a path to offer unique benefits as extensions to a test system. The two principal use cases for mobile devices within automated test are the monitoring and control of test systems and the viewing of test data and reports.

Many technicians, engineers, and managers would like to access the status of a test system directly from a tablet or smart phone. This is convenient when the test system is nearby, but is especially useful when a test system is located on the other side of the world.

A tablet or smart phone can instantaneously check on a remote test system or even change its mode of operation. This requires the test system to have access to either a local intranet or the public Internet. Intranet access allows remote monitoring from mobile devices located on the same campus or with VPN access to the intranet, while a test system connected to the public Internet can theoretically be accessed by any mobile device anywhere.

Similarly, test engineers, managers, and technicians may also want to view consolidated test reports that characterize previous tests and display test trends. In this use case, the test systems themselves do not need to be connected to the network so long as their data is available to another computer with network access. This secondary machine monitors test systems, analyzes test results, and creates test reports that can be delivered to remote users.

While providing a test organization with mobile access to important information via tablets and smart phones is an attractive proposition, there are key challenges to building such a solution. These challenges lie in two main areas: transmitting data between the mobile client and the test system (or proxy), and creating client-side applications to interpret and display that data.
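
As a minimal sketch of the first challenge, the Python standard-library server below exposes test-system status as JSON over HTTP for a mobile client to poll; the endpoint name and status fields are hypothetical, and a production solution would add authentication and encryption.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# Minimal sketch of the server side of a mobile-monitoring solution: the test
# system (or its proxy machine) publishes status over HTTP as JSON, and a tablet
# or smart-phone app polls it. Endpoint name and status fields are hypothetical.

STATUS = {"station": "ATE-01", "state": "running", "units_tested": 1284, "yield_pct": 97.3}

class StatusHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/status":
            body = json.dumps(STATUS).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_error(404)

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), StatusHandler).serve_forever()
```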

The explosion of mobile devices, such as tablets and smart phones, offers compelling benefits to engineers, technicians, and managers who need remote access to test status and results. While the technology exists today to develop solutions that allow for monitoring or remote reporting via mobile devices, it will require new expertise within a test organization to unite the networking, Web services, and mobile app portions of the solution.

Integration Of RF Design And Test

Today’s engineers face steeper time-to-market requirements than ever before. In the consumer electronics and semiconductor markets, shortening time-to-market requirements are often driven by ever-shortening product lifecycles. Time-to-market statistics vary widely across industries and product complexity.

For example, in the medical and aerospace/defense industries, products or systems often take nearly a decade to develop. By contrast, pressure to bring products to market more quickly is most intense in industries with shorter lifecycles, such as the semiconductor marketplace.

According to a recent survey conducted by the Global Semiconductor Alliance, the average time-to-market for a new semiconductor part was 19 months for new designs and 14 months for spins of existing designs.

The ratio of product life cycle to product-development time in semiconductors is half that for a mobile phone and a third that for an automobile. And for the growing ranks of “fab lite” or fabless players, R&D excellence is the key differentiating factor.

Fortunately, a proven and growing trend for decreasing average time-to-market is improving the integration of design and test to parallelize the two processes. To understand this concept, it’s important to define the discrete stages of the product development schedule. Generally, we can break this process into four steps: research and modeling, design and simulation, verification and validation, and manufacturing.

Historically, product development has been completed in this precise order with little parallelism built into the process. Since different groups or departments often execute each phase of development using disparate tools, the potential for inefficiency in product development is substantial.

One emerging technique that introduces greater parallelism into product development, and hence faster time-to-market, is the integrated use of both simulation software and test software throughout the development process. There are two primary applications of integrating design and test software:

  • Using sophisticated models to provide detailed product performance data
  • Using sophisticated models to parallelize product development processes

Engineers frequently use electronic design automation (EDA) tools to develop sophisticated behavioral models for a new design. Unfortunately, the modeled design is often verified using measurement criteria that are ultimately different from those used to verify the final product.

In fact, because the tool chains used for design and test were historically different, it’s nearly impossible to use the same measurement algorithms for design through production test. A growing trend is to use a common tool chain for design through test to introduce measurements earlier into the design flow (Fig. 3).

Consider the design of a WCDMA/LTE cellular RF power amplifier (PA) using EDA simulation software. Traditionally, RF EDA software reports only general RF characteristics such as expected 1-dB compression point, harmonics, and gain. However, for an amplifier designed for the cellular domain, engineers will be required to perform additional measurements that are specific to the WCDMA and LTE standards.

These measurements, including error vector magnitude (EVM) and adjacent channel leakage ratio (ACLR), are specifically defined by standards bodies and traditionally measured using test equipment such as an RF vector signal analyzer. Going forward, one example of continued integration between software for both design and test is to introduce these final measurement algorithms into the simulation software.
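
For reference, EVM is the RMS magnitude of the error vector between measured and ideal symbols, normalized to the RMS reference magnitude; the article does not give the formula, so the Python sketch below applies that standard definition to hypothetical QPSK symbols.

```python
import numpy as np

# EVM is the RMS error-vector magnitude between measured and ideal (reference)
# symbols, normalized to the RMS reference magnitude. The QPSK symbols and
# noise level below are purely illustrative.

def evm_percent(measured, reference):
    error_power = np.mean(np.abs(measured - reference) ** 2)
    ref_power = np.mean(np.abs(reference) ** 2)
    return 100.0 * np.sqrt(error_power / ref_power)

rng = np.random.default_rng(0)
ideal = (rng.choice([-1, 1], 1000) + 1j * rng.choice([-1, 1], 1000)) / np.sqrt(2)
measured = ideal + 0.03 * (rng.standard_normal(1000) + 1j * rng.standard_normal(1000))
print(f"EVM: {evm_percent(measured, ideal):.2f} %")
```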

In the second use case, a growing trend is the use of design and simulation software to create behavioral models of a particular product and then use those models to accelerate the product verification and validation and manufacturing test processes. Traditionally, one source of inefficiency in the product design process is that development of test code for a particular product often begins only after the first physical prototypes are available for testing.

One way to accelerate this process is to use the software prototype of a given design as the DUT when writing either characterization or production test code. Using this approach, development time for both characterization and production test software can be parallelized with product design, resulting in an overall reduction in time-to-market.
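
The sketch below illustrates the approach: the test routine is written once against a common DUT interface and exercised first with a behavioral model, then with the physical prototype once it exists. All class and method names are hypothetical.

```python
# Sketch of writing test code against a common DUT interface so the same test
# runs first on a behavioral model and later on the physical prototype.
# All class and method names here are hypothetical.

class SimulatedAmplifier:
    """Behavioral model exported from the design/simulation environment."""
    def measure_gain_db(self, freq_hz):
        return 28.0 - 1e-9 * freq_hz          # simple illustrative gain roll-off

class HardwareAmplifier:
    """Same interface, backed by real instruments once a prototype exists."""
    def measure_gain_db(self, freq_hz):
        raise NotImplementedError("drive a signal generator and analyzer here")

def gain_flatness_test(dut, freqs_hz, min_gain_db=26.0):
    # The production-test logic is written once, against the interface only.
    return all(dut.measure_gain_db(f) >= min_gain_db for f in freqs_hz)

print(gain_flatness_test(SimulatedAmplifier(), [1e9, 1.5e9, 2e9]))
```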

Consider the development approach chosen by Medtronic on a recent pacemaker design. Its engineers were able to utilize the new Mentor Graphics SystemVision SVX Client environment for NI LabVIEW software to connect measurements to their initial design.

As a result, the engineering team was able to begin development of a LabVIEW-based testbench before physical hardware was ever produced. The inherent parallelism achieved by this approach fundamentally enables engineers to bring products to market more quickly than before.

Visit ni.com/ato for further information on these trends in The Automated Test Outlook and a deeper look at trends from years past.
