Each year National Instruments gathers input from thousands of industry experts, customers, researchers, and suppliers to assess the key technologies and methodologies that are poised to shape the landscape of automated test in the coming year and beyond. NI then compiles these conclusions in its annual Automated Test Outlook. The 2013 edition is organized into five categories and discusses the following major trends:
- Business Strategy: Test Economics
- Architectures: Big Analog Data
- Computing: Software-Centric Ecosystems
- Software: Test Software Quality
- I/O: Moore’s Law Meets RF
Test Economics

Executives look to their engineering and manufacturing operations to maintain their competitive advantage in the marketplace. These leaders consistently measure common financial and business metrics, such as return on invested capital (ROIC), return on assets (ROA), time to market, profit margin, and product quality, to drive improvements in product development schedules and processes. How they measure their test organizations, however, is far less standardized.
To successfully meet this challenge and justify strategic investment in test, best-in-class test organizations are proposing transformation initiatives backed with financial metrics including return on investment (ROI), cost per unit tested, annual test costs and savings, payback periods on strategic investments, and the breakdown of capital versus noncapital costs in a test organization.
This trend defines the total cost of ownership (TCO) equation and shows how proper TCO modeling uncovers the full lifetime cost of a test asset. It also provides a financial framework for justifying future strategic investments.
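The metrics above lend themselves to simple arithmetic. The sketch below shows one plausible shape of such a model; the function names and dollar figures are hypothetical, chosen only to illustrate how TCO, cost per unit tested, and payback period relate.

```python
# Back-of-the-envelope test-economics model.
# All figures are hypothetical, for illustration only.

def total_cost_of_ownership(capital, annual_operating, years):
    """Lifetime cost: upfront capital plus recurring operating costs."""
    return capital + annual_operating * years

def cost_per_unit(tco, units_tested):
    """Spread the lifetime cost over every unit the asset tests."""
    return tco / units_tested

def payback_period(investment, annual_savings):
    """Years needed to recover a strategic investment from its savings."""
    return investment / annual_savings

# Hypothetical tester: $250k capital, $40k/year to operate, 5-year life
tco = total_cost_of_ownership(250_000, 40_000, 5)   # 450_000
unit_cost = cost_per_unit(tco, 900_000)             # 0.50 per unit
payback = payback_period(100_000, 40_000)           # 2.5 years

print(f"TCO: ${tco:,}  cost/unit: ${unit_cost:.2f}  payback: {payback} yr")
```

Even a model this small makes the capital-versus-noncapital split visible: the `capital` term is paid once, while `annual_operating` recurs for the asset's whole life.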
Big Analog Data
In test and measurement applications, engineers and scientists can collect vast amounts of data. For every second that the Large Hadron Collider at CERN runs an experiment, the instrument can generate 40 terabytes (1 TB = 10^12 bytes) of data.
For every 30 minutes that a Boeing jet engine runs, the system creates 10 terabytes of operations information. For a single journey across the Atlantic Ocean, a four-engine jumbo jet can create 640 terabytes of data. Multiply that by the more than 25,000 flights flown each day, and you get an understanding of the enormous amount of data being generated.
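As a rough sense of scale, the flight figures above can be compounded in a quick back-of-the-envelope calculation. This deliberately treats every one of the 25,000 daily flights as if it produced the full transatlantic volume, which overstates the total; the point is only the order of magnitude.

```python
# Back-of-the-envelope scale check using the figures from the text:
# 640 TB per transatlantic flight, ~25,000 flights per day.
TB = 10**12  # bytes in a (decimal) terabyte

per_flight = 640 * TB
flights_per_day = 25_000
daily_bytes = per_flight * flights_per_day

# Express the total in exabytes (10**18 bytes).
daily_eb = daily_bytes / 10**18
print(f"~{daily_eb:.0f} EB of flight data per day")  # ~16 EB
```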
That’s “Big Data.” As it becomes both necessary and easier to capture large amounts of data, engineers will face challenges in creating end-to-end solutions that require a close relationship between automated test and IT equipment. This trend shares how companies are leveraging IT providers to offer integrated and bundled solutions for automated test applications.
Software-Centric Ecosystems

The transition underway in mobile devices offers insight into an important trend for test and measurement: the power of the software-centric ecosystem. Early mobile telephones were first built to make calls and later to send text messages, but the vendor almost completely defined the capabilities.
Once the software on these devices was opened up to the user, capabilities ranging from music players to cameras to e-mail quickly followed. But the effectiveness of the transition was more than just an open software experience. Apple and later Google built robust ecosystems around their products and created a sizeable community of developers for “apps” that accelerated their usefulness.
This same concept is impacting test. Communities of developers and integrators, building on standard software platforms, are extending the functionality of complex hardware into applications previously impossible with commercial off-the-shelf technology, including robust hardware abstraction layers enabled by standard instrument drivers and highly abstracted FPGA programming with third-party code integration.
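A hardware abstraction layer of the kind mentioned above can be sketched in a few lines. The interface and class names below are illustrative, not any vendor's actual driver API; the point is that the test logic programs against an abstract instrument, so implementations from different community or vendor sources can be swapped in without touching the test.

```python
# Minimal sketch of a hardware abstraction layer (HAL) for test
# instruments. Names are hypothetical, not a real driver API.
from abc import ABC, abstractmethod

class Voltmeter(ABC):
    """Abstract interface the test sequence programs against."""
    @abstractmethod
    def measure_voltage(self) -> float: ...

class SimulatedVoltmeter(Voltmeter):
    """Stand-in used when real hardware is unavailable."""
    def measure_voltage(self) -> float:
        return 3.30

class VendorXVoltmeter(Voltmeter):
    """Wrapper that would delegate to a vendor's instrument driver."""
    def measure_voltage(self) -> float:
        raise NotImplementedError("call the vendor driver here")

def run_test(dmm: Voltmeter, low: float = 3.1, high: float = 3.5) -> bool:
    # The limit check never changes when the instrument is swapped.
    reading = dmm.measure_voltage()
    return low <= reading <= high

print(run_test(SimulatedVoltmeter()))  # True
```

Standard instrument drivers play the role of `VendorXVoltmeter` here: because every driver presents the same interface, the surrounding ecosystem can extend or replace hardware underneath unchanged test code.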
This trend explains how the level of productivity and collaboration delivered by software-centric ecosystems will have a profound effect on system design over the next three to five years and greatly impact the value derived from automated test systems.
Test Software Quality
The importance of software in automated test applications has increased rapidly over the last decade due to the need for highly customizable, flexible, and capable measurement systems. The trend toward increased product complexity and capability spans all industries, and it has had a direct impact on the complexity and capability of test software and on the importance of test system reliability, performance, and accuracy. This trend reveals how companies are investigating best practices to manage and mitigate the risk of test software by finding software defects as early as possible.
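One widely used practice for finding test-software defects early is unit testing the measurement code itself against known-good values. The helper below is a hypothetical example, not taken from any NI product; it checks a standard RF conversion (0 dBm is 1 mW by definition, 30 dBm is 1 W) before the code ever touches a production test station.

```python
# Catching a test-software defect early with a unit test.
# dbm_to_watts is a hypothetical measurement helper.

def dbm_to_watts(power_dbm: float) -> float:
    """Convert RF power from dBm to watts: P_W = 10**(dBm/10) / 1000."""
    return 10 ** (power_dbm / 10) / 1000

def test_dbm_to_watts():
    # Anchor the conversion to values known by definition.
    assert abs(dbm_to_watts(0) - 0.001) < 1e-12   # 0 dBm = 1 mW
    assert abs(dbm_to_watts(30) - 1.0) < 1e-9     # 30 dBm = 1 W

test_dbm_to_watts()
print("unit tests passed")
```

A defect caught here costs a code change; the same defect caught on the production floor costs retest, rework, or escaped product.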
Moore’s Law Meets RF
The development pace and proliferation of mobile devices have leveraged Moore’s law, with a 24.9 percent compound annual growth rate (CAGR) forecast for 2011-2017. Unfortunately, traditional instrumentation has not kept pace with this growth affordably or efficiently.
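Compounding makes that forecast concrete. Taking the stated 24.9 percent CAGR over the six years from 2011 to 2017:

```python
# Implied overall growth from a 24.9% CAGR over 2011-2017.
cagr = 0.249
years = 2017 - 2011          # 6 compounding periods
growth_factor = (1 + cagr) ** years
print(f"{growth_factor:.1f}x growth over {years} years")  # ~3.8x
```

Roughly a 3.8x expansion in six years: this is the pace that fixed-architecture box instruments struggle to track.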
Because of extremely stringent performance requirements, the instrumentation space has often relied on traditional discrete design methodologies. These methods deliver world-class accuracy and stability, but the resulting box instruments are often expensive and complicated to design, and by failing to leverage Moore’s law they can quickly fall behind the pace of change of the devices they aim to test. This trend describes how users of RF instrumentation will soon benefit from three developments that put RF instrumentation on a trajectory matching Moore’s law: advanced CMOS technology, greater FPGA utilization, and reduced form factors.
For More Details
Visit ni.com/ato to download the full-length PDF of the 2013 Automated Test Outlook (officially available on January 14, 2013). You can also access test outlooks from prior years to gather further insight for your organization.