How AI Will Transform Test and Measurement
What you’ll learn:
- Views on the ways AI is changing test & measurement.
- How the trustworthiness of AI can be put to the test.
Artificial intelligence (AI) has the potential to transform every stage of product lifecycles, from design and manufacturing to operation and maintenance. While this statement may seem obvious, exactly what this will look like for test and measurement products and processes is still coming into focus (Fig. 1).
As might be expected in a notoriously conservative industry like test and measurement, AI has to date been used mostly for modest, incremental improvements. Tools that parse documentation to help users set up and operate products were an obvious place to start.
Going a step further, some products now offer AI-based features, such as embedded neural networks or AI-driven optimization of specific parameters. For scientists and engineers managing large datasets, software tools that post-process data, detect anomalies, track trends, and guide decision-making are powerful, but they remain bespoke.
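That kind of bespoke post-processing can be as simple as flagging outliers against a rolling baseline. The following is a minimal sketch of the idea, assuming a rolling z-score test with an illustrative window size and threshold; it is not any particular vendor's tool.

```python
import numpy as np

def rolling_zscore_anomalies(samples, window=50, threshold=4.0):
    """Flag indices where a sample deviates from the trailing
    window's mean by more than `threshold` standard deviations."""
    samples = np.asarray(samples, dtype=float)
    anomalies = []
    for i in range(window, len(samples)):
        ref = samples[i - window:i]        # trailing reference window
        mu, sigma = ref.mean(), ref.std()
        if sigma > 0 and abs(samples[i] - mu) > threshold * sigma:
            anomalies.append(i)
    return anomalies

# Example: a stable measurement trace with one injected glitch
rng = np.random.default_rng(0)
trace = rng.normal(loc=1.0, scale=0.01, size=500)
trace[300] += 0.2                          # simulated transient fault
print(rolling_zscore_anomalies(trace))     # typically prints [300]
```

Real tools layer trend tracking and decision support on top of checks like this one, but the core pattern of comparing new measurements against a learned baseline is the same.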
User-Prompt Builds
However, change is afoot, and these incremental improvements are giving way to more ambitious AI implementations. Capabilities like Generative Instrumentation, recently announced by Liquid Instruments alongside the new Moku:Delta software-defined instrumentation platform, allow AI to build entire instruments and test setups based on user prompts (Fig. 2).
The user describes what they want to accomplish, and AI determines and configures countless parameters across multiple instruments to achieve the best results. If a required feature doesn’t exist, AI can create and deploy it to a user-programmable FPGA to augment the capability of standard instruments in real time.
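The internals of Generative Instrumentation aren't public, but the general pattern (natural-language intent in, validated instrument settings out) can be sketched. In the hedged example below, `prompt_to_config`, `OscilloscopeConfig`, and all the parameter values are hypothetical stand-ins, not the Moku:Delta API.

```python
from dataclasses import dataclass

# Hypothetical parameter schema; real instruments expose far more settings.
@dataclass
class OscilloscopeConfig:
    channel: int
    sample_rate_hz: float
    timebase_s: float
    trigger_level_v: float

def prompt_to_config(prompt: str) -> OscilloscopeConfig:
    """Toy stand-in for an AI planner that maps user intent to settings.
    A real implementation would call a language model, then validate the
    result against the instrument's capabilities before applying it."""
    if "ringing" in prompt.lower():
        # Fast transient: short timebase, high sample rate.
        return OscilloscopeConfig(channel=1, sample_rate_hz=1e9,
                                  timebase_s=1e-6, trigger_level_v=0.5)
    # Conservative default for unrecognized requests.
    return OscilloscopeConfig(channel=1, sample_rate_hz=1e6,
                              timebase_s=1e-3, trigger_level_v=0.1)

cfg = prompt_to_config("Capture the ringing on my power rail at turn-on")
print(cfg)
```

The validation step in the comment is the crucial one: settings the AI proposes must be checked against what the hardware can actually do before they are applied.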
But can the results be trusted?
The promise of AI to unlock innovation and accelerate product development can’t be ignored, even within traditionally risk-averse fields. Test and measurement vendors pride themselves on testing every corner of their products prior to release. As AI makes its way into products, that exhaustive pre-release testing will no longer be possible; some of the responsibility will shift to users as they develop never-before-seen capabilities with AI (Fig. 3).
Trust Put to the Test
As with any application of AI, blindly trusting it to deliver the right answer every time is a bad idea. At a minimum, incorrect test results are an inconvenience that costs time; at worst, they can jeopardize the safety of end users. The usual guidance is a good start: Start simple, choose training data carefully, and maintain expert human oversight. But when working with a test system, users have a few other tools at their disposal.
Auditing the code to ensure that what it’s doing makes sense is a potentially arduous, but thorough, approach. This can be done with a hardware description language (HDL) for FPGA customizations or with a software programming language like Python for instrument configurations. Another way to build trust in the design and its results is to create self-tests that visualize signals and measurements, validating that the system has been configured or customized as intended.
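To make the self-test idea concrete, a minimal, vendor-neutral sketch follows: inject a known reference signal, measure it, and verify the result before trusting AI-generated settings. The `measure_frequency` helper is a stand-in for whatever acquisition call your instrument’s API actually provides.

```python
import numpy as np

def measure_frequency(trace, sample_rate_hz):
    """Estimate the dominant frequency of a captured trace via FFT.
    Stand-in for a real instrument measurement call."""
    spectrum = np.abs(np.fft.rfft(trace - trace.mean()))
    freqs = np.fft.rfftfreq(len(trace), d=1.0 / sample_rate_hz)
    return freqs[spectrum.argmax()]

def self_test(expected_hz=1e3, sample_rate_hz=1e6, tolerance=0.01):
    """Inject a known reference tone and verify that the measurement
    chain reports it within tolerance before trusting the setup."""
    t = np.arange(0, 0.1, 1.0 / sample_rate_hz)
    reference = np.sin(2 * np.pi * expected_hz * t)   # known stimulus
    measured = measure_frequency(reference, sample_rate_hz)
    error = abs(measured - expected_hz) / expected_hz
    assert error < tolerance, f"Self-test failed: measured {measured:.1f} Hz"
    return measured

print(f"Self-test passed: {self_test():.1f} Hz")
```

A check like this takes seconds to run, and it catches the most dangerous failure mode: a system that looks configured but is silently measuring the wrong thing.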
Today, AI is transitioning from a tool to an assistant. Perhaps one day it will be a true partner (and hopefully not a boss!). We can draw an analogy between these early days of AI and the first days of the internet.
“You can’t trust what you read online” was a common refrain at the end of the last millennium. But we developed strategies that made the internet not just a usable source of information, but perhaps the most valuable one. To get the most out of AI, we need an upgraded toolset and a fundamentally more modern, flexible approach to test and measurement.