The decision to move to a modular, or software-defined, test system for either lab test or production test should not be made lightly. It represents a significant investment and a paradigm shift in your approach. But it can have significant benefits if it’s done correctly and for the right reasons. On the other hand, a software-defined test system is not the answer for everyone’s test requirements.
Why Go Modular?
The first question is whether a modular architecture is right for a given scenario. There are a number of obvious advantages to adopting one. For one thing, a modular architecture means assembling a collection of smaller functional blocks, which makes for a system assembled with a specific purpose in mind. You can custom-build the system to fit your exact measurement criteria, which is harder to do with traditional benchtop instrumentation (Fig. 1).
Along with that come associated cost benefits from choosing only exactly what is needed for the application, as opposed to a benchtop system that may go above and beyond those specific requirements. There are also potential space savings, because a rack-mounted modular setup’s footprint is likely to be competitive with that of bench instruments.
The best scenario for a modular architecture, of course, is if you’re looking to implement automated test, where modularity can provide significant benefits. “As you move through a traditional design cycle, from simulation to prototyping and debugging, and then to validation and manufacturing test, you gain more benefits from automation as you go,” says Jean-Manuel Dassonville, outbound manager for modular solutions at Agilent Technologies. “Modular architectures fit pretty well with the speed requirements that are typical of test automation.”
“Modular is dominant where you need any type of automation,” says Matthew Friedman, senior product manager for automated test at National Instruments. “It’s production test, but is used in validation and verification as well.”
Multiple instruments have to be able to communicate with the device under test (DUT), obtain responses, and then decode those responses. When you look at automation on the production side, it’s a very high-throughput, low-latency architecture that provides massive increases in test speed.
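In spirit, each automated test step follows the same loop: stimulate the DUT, capture its response, decode it, and judge it against limits. A minimal Python sketch of that loop (the `FakeDut` class and its command string are invented stand-ins for a real instrument/DUT link):

```python
# Hypothetical sketch of one automated test step: stimulate the DUT, read
# back its response, decode it, and compare against limits. FakeDut and the
# command string are invented stand-ins for a real instrument/DUT link.

class FakeDut:
    """Stand-in for a real DUT transport; returns a canned response."""
    def write(self, command):
        self.last_command = command     # stimulus sent to the DUT

    def read(self):
        return "VOLT 3.28"              # a real DUT would return live data

def run_test_step(dut, stimulus, lo_limit, hi_limit):
    dut.write(stimulus)                        # stimulate
    raw = dut.read()                           # obtain the response
    value = float(raw.split()[1])              # decode it
    passed = lo_limit <= value <= hi_limit     # judge against limits
    return value, passed

value, passed = run_test_step(FakeDut(), "MEAS:VOLT?", 3.0, 3.5)
print(value, passed)   # -> 3.28 True
```

In production, this loop runs thousands of times with minimal latency per iteration, which is where the modular architecture's throughput advantage shows up.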
On the other hand, today’s benchtop instruments have their own set of advantages. If you’re troubleshooting a prototype board, it’s a simple matter to turn it on, hit a run key, and grab your measurement.
Also, time to measurement is very short because you don’t have to write a script to run your test; all of the features are built into the instrument, so there is no programming needed to make a measurement. Thus, the modular system provides greater overall flexibility, while the benchtop instrument delivers faster access to standard measurement tasks.
Perhaps the most critical aspect of making a decision on a modular approach to test is to consider the nature of the measurement tasks at hand. Any number of tasks may come into play, such as data analysis or stimulus generation (digital or analog). Data analysis may involve analog or digital signals, or perhaps microwave or RF. Depending on the nature of the stimulus, some products and architectures may fit better with that measurement.
If you’re looking primarily for high-end measurements with leading-edge accuracy, chances are that a benchtop instrument will better fit your needs. However, if you’re making measurements that are less on the leading-edge and you need to make many of them at high speed, you should strongly consider a modular architecture.
Another aspect that should influence your selection of measurement hardware for a modular system is whether you will be testing a single product, a product family, or multiple product families. Depending on the scenario at hand, you want to specify your test system so its specifications are superior to those of the DUT(s). The temptation will be to widen the test system’s scope as much as possible to have a common platform for several programs. But be careful in pursuing such a strategy. A number of pitfalls can arise:
- To accommodate different product lines, the complexity of the core test system increases, increasing nonrecurring, recurring, and material costs.
- Maintaining configuration control is difficult among a larger group of modules.
- Obsolescence issues increase.
- Costs increase on high-production-rate product lines that require multiple test systems, even though a DUT may use only a small portion of the test system’s capabilities.
- Designing test systems for a new product line becomes difficult because of the constraints to use only the existing capabilities of the system.
- Keeping up with state-of-the-art technology grows more difficult as test capabilities start to stagnate.
Building a Strong Core
Determining the measurement needs of your test system is a good first step. Then you can begin architecting your hardware framework. A good approach is to first pinpoint a suitable test platform that can serve as the core or nucleus of your test system.
You can choose from many platforms, most of which are based on one of the four most commonly used instrument backplanes/buses: PXI, GPIB, USB, and LAN. Because each of these buses has its own advantages and limitations, you often have to build hybrid test systems based on multiple platforms. Even so, it is often a best practice to pick a prominent or core platform for your test architecture.
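One practical consequence of a multi-bus mix is that a single I/O layer such as VISA can address instruments on any of these buses through a uniform resource string. As a rough sketch (the resource-string formats below follow the standard VISA convention; the specific addresses are made-up examples), the bus family can be read straight off the string's interface prefix:

```python
# The resource-string formats below follow the standard VISA convention;
# the specific addresses are made-up examples. A small helper reads the
# bus family straight off the interface prefix.

def bus_of(resource):
    """Return the bus family encoded in a VISA resource string."""
    prefix = resource.split("::")[0]             # e.g. "GPIB0"
    family = prefix.rstrip("0123456789")         # strip the interface index
    return {"TCPIP": "LAN"}.get(family, family)  # VISA calls LAN "TCPIP"

print(bus_of("GPIB0::22::INSTR"))                     # -> GPIB
print(bus_of("USB0::0x0957::0x1796::MY123::INSTR"))   # -> USB
print(bus_of("TCPIP0::192.168.1.50::inst0::INSTR"))   # -> LAN
print(bus_of("PXI0::15::INSTR"))                      # -> PXI
```

Because every instrument, whatever its bus, is reached the same way, the choice of a core platform becomes a performance and logistics decision rather than a software one.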
A number of factors come into play when you’re choosing a core platform. For one, when selecting a controller, you first need to assess the worst-case computational power and throughput rates.
Another factor is the ease with which you can scale or modify your system. This is especially important if your test system has the potential to change during the course of its lifetime. One example of this is if you are building a system to test a product family that is continually expanding. In such a case, you may need to add new functionality to the system without making significant changes that could force you to redesign your test rack.
The platform that serves as the core of your test system must be able to address a significant portion of your test system needs. Thus, if your system requires the ability to make low-level dc measurements along with high-speed rise time measurements, you must select a platform that can accommodate mixed-signal instrumentation. In general, you should choose a core platform that accommodates at least 80% of your test system’s measurement needs.
One example of an embedded controller for a functional test system is Agilent’s M9036A, a 2.4-GHz dual-core PXIe module (Fig. 2). The three-slot-wide module is designed for use with Agilent’s M9018A PXIe chassis and is capable of integrating legacy PXI instruments into the chassis’ hybrid slots while providing up to 8 Gbytes/s of system bandwidth with dual x8 express links.
A look at the broad modular marketplace will reveal the availability of multiple architectures, such as PXI, AXIe, and PCI Express. These open platforms provide test engineers with a broad choice of modules for assembly into a system. Moreover, each of these platforms addresses different price/performance spaces.
There are also a few proprietary modular architectures at different price points. You may have to choose between open and proprietary architectures, keeping the specific application in mind.
In addition to measurement needs and the nature of the tests you will be executing, other criteria come into play when deciding to go modular or not. One of these is connectivity. How will you make connections to the DUT? This leads into system topologies, which are an important consideration. This is especially critical when the objects of measurement are RF and/or microwave signals. In such cases, the lengths and layout of how the test system connects to the DUT are a factor that could influence the accuracy of measurements.
In many cases, engineers end up with a hybrid system consisting of a mix of modular architectures and benchtop instruments. A basic example of a hybrid approach involves a power supply. The DUT requires a power supply that may be different from one test to another. Traditionally, power supplies are either benchtop units or rack-mount types. They can, however, be found in modular architectures as well. That may lead to another variety of hybridization, where a modular power supply may be an LXI or GPIB type, while the rest of the modular test setup is PXI or AXIe.
As mentioned previously, each instrument bus and platform has its own distinct advantages and disadvantages. By building hybrid systems that are based on multiple instrument buses, you can take advantage of the strengths of several different test platforms.
A hybrid architecture also increases your test system’s flexibility by allowing you to choose from a larger pool of instruments. Such flexibility is especially important if you’re building a complex and dynamic test system that will change over time. The first step toward building a hybrid architecture is choosing a core platform that can communicate with instruments based on a variety of instrument buses.
If a system is a hybrid of modular and traditional instruments, it’s important to then settle on a software platform that can address this mixture of topologies. Here, each vendor of test equipment has differing strategies.
Because it’s in both markets, Agilent’s software strategy is to provide a platform that interoperates with all kinds of systems. “We know that most customers work with hybrid systems, so it is important to have software that can speak to multiple kinds of systems,” says Agilent’s Jean-Manuel Dassonville. “It’s also important that the software foundation can speak with instruments bearing different brands.”
Agilent’s Connection Expert software platform connects and identifies modules and manages communication among modules as well as between modular and benchtop instruments. It also manages relationships between Agilent’s equipment and test gear from other vendors. Connection Expert is part of Agilent’s IO Libraries Suite 16.2.
PXI, or PCI eXtensions for Instrumentation, is a popular mainstream protocol for the backbone of modular test systems. It’s important to understand that different flavors of PXI are available in the market. Traditional PXI modules are based on the technology developed in 1997 at National Instruments. The later generation, PXI Express (PXIe), which is based on PCI Express, is supplanting them.
When assembling a test system, it’s important to know that modules may not all have the same backplane connector depending on which flavor of PXI they are based on. Ensure that the chassis for your modules is designed to accommodate each of the PXI specifications. Some chassis are dedicated to one technology, others can accommodate a mix, and still others are hybrids with a backplane that handles both traditional PXI and PCI Express.
According to Adlink Technology, its PXES-2590 is the first all-hybrid, nine-slot chassis (Fig. 3). Able to house both PXI and PXIe modules, its four-lane topology provides system bandwidths of up to 8 Gbits/s. Because all slots are hybrids, the chassis affords maximum flexibility in terms of module positioning, allowing modules to be installed in preferred locations.
The chassis’ slot lineup (one system slot, seven hybrid slots, and one timing slot) addresses flexibility in another important way. Many PXI modules are two, three, or even four slots wide. In the past, one of the only ways to accommodate these wider modules was with an 18-slot chassis. Having seven hybrid slots in play is likely enough for many users, and you gain portability in the bargain.
Other features also set the PXES-2590 chassis apart from competitive offerings. For one, an optional integrated LCD and keyboard outfits the chassis for more convenient portable usage. For another, Adlink has addressed the cooling challenges of a compact PXIe chassis with a well-thought-out thermal management scheme. Intelligent chassis management includes automatic fan-speed control, chassis status monitoring and reporting, and remote chassis power on/off control.
Once you’ve selected your equipment, which is likely to be a mix of benchtop instruments and modules, the next step in constructing the system is control and communications.
“Say you’re using a digitizer to measure a signal and want to know that it’s being done properly,” says Agilent’s Jean-Manuel Dassonville. Some vendors provide PC-based soft front-panel tools that enable you to start making measurements without having to write any control code. The soft front panel aids in achieving quick time to measurement by acting as a troubleshooting medium.
Eventually, you will want to begin writing scripts for more precise control of the instruments in your system. Thus, the next step is to begin writing programs to control the equipment.
Scripts come in many flavors and can be written at various levels of abstraction. At high levels of abstraction, such as in the context of an instrument’s graphical user interface (GUI), there is a convenience factor in that the GUI helps to mask the intricacies of the syntax. More advanced users will want to work at command-line level, or a lower level of abstraction, which provides more specific control of the instrument’s features.
Often, users prefer a mix of fully automatic operation and manual control. Typically, the design environment will be at one given abstraction layer. But if you need more granular control over a given instrument, you’d then want to use the more specific and detailed commands that are available at lower layers of abstraction.
One wrinkle is that you’ll typically have to learn a different syntax for various pieces of equipment. A digital multimeter understands one set of commands and a digitizer responds to a different set. There are various ways around this issue. One is the basic command-line interface, but it requires you to know these various syntaxes.
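Another common workaround, shown here as a hypothetical Python sketch (the command strings and the `FakeLink` transport are invented for illustration), is to wrap each instrument's dialect behind a shared method so the test code itself carries no vendor-specific syntax:

```python
# Hypothetical sketch: each instrument's SCPI dialect is hidden behind a
# common measure_volts() method, so the test code carries no vendor-specific
# command strings. FakeLink and the commands are invented for illustration.

class FakeLink:
    """Stand-in transport that records commands and returns a canned reply."""
    def __init__(self, reply):
        self.reply = reply
        self.sent = []

    def query(self, cmd):
        self.sent.append(cmd)
        return self.reply

class Dmm:
    def __init__(self, link):
        self.link = link

    def measure_volts(self):
        return float(self.link.query("MEAS:VOLT:DC?"))

class Digitizer:
    def __init__(self, link):
        self.link = link

    def measure_volts(self):
        # Same quantity, but this instrument speaks a different command set.
        return float(self.link.query("FETC:WAV:VOLT?"))

# Test code sees one interface regardless of the underlying syntax.
for inst in (Dmm(FakeLink("4.998")), Digitizer(FakeLink("5.002"))):
    print(round(inst.measure_volts(), 2))
```

This is essentially what instrument drivers do for you; the drawback is that someone still has to write and maintain a wrapper per instrument.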
A Better Mousetrap
A better way is through emerging platforms that enable you to control the instruments in your modular/benchtop hybrid system without having to master each instrument’s syntax. Agilent’s Command Expert goes above and beyond the soft front-panel approach by helping the user write commands for the various instruments and ensuring that they are syntactically correct. The tool combines instrument commands, documentation, syntax checking, and command execution under one interface.
Command Expert works with instruments that use Standard Commands for Programmable Instruments (SCPI) or IVI-COM drivers. The software’s search capability displays commands and documentation that contain the search term. It also provides example command sequences that can be used directly in Command Expert.
The bottom line is that the tool generates code that will control the test platform. If you’re working with Visual Studio, LabView, or Matlab, it generates code that is reusable in a hierarchical fashion. For each of the abstraction layers, you have an environment that helps you develop code at the appropriate level.
National Instruments’ approach to test development and management comes in the form of its TestStand software, which is a ready-to-run test management suite. You can use NI TestStand to develop, execute, and deploy test system software (Fig. 4).
In addition, you can develop test sequences that integrate code modules written in any test programming language. Sequences also specify execution flow, reporting, database logging, and connectivity to other enterprise systems. Finally, you can deploy test systems to production with easy-to-use operator interfaces.
TestStand includes the Sequence Editor, which users can employ to create test sequences that automate the execution of code modules. Each code module executes a test on the DUT and returns measurement information to NI TestStand. Users can log test result information in a report or database automatically. In addition, systems written in NI TestStand can integrate with source code control, requirements management, and data management systems.
After development is complete, the NI TestStand Deployment Utility helps create a distribution or installer of code modules, test sequences, and related files to deploy automated test systems to production. In addition, NI TestStand helps deployment by providing simple operator interfaces that can execute the test system and reduce operator error. And, users can create custom operator interfaces.
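To make the pattern concrete, here is a bare-bones Python sketch of the sequencing idea that a tool like TestStand automates (this is not TestStand itself, and the step functions are invented examples): a test executive runs a list of code modules, collects each step's result, and emits a simple report.

```python
# A bare-bones sketch of a test executive: run a sequence of code modules,
# collect each step's (name, pass/fail, detail) result, and build a report.
# This only illustrates the pattern a tool like TestStand automates; the
# step functions below are invented examples, not real measurements.

def power_on_test():
    return ("power_on", True, "3.3 V rail OK")

def rf_level_test():
    return ("rf_level", True, "-10.2 dBm within limits")

def run_sequence(steps):
    results = [step() for step in steps]
    passed = all(ok for _, ok, _ in results)
    report = "\n".join(
        f"{name}: {'PASS' if ok else 'FAIL'} ({detail})"
        for name, ok, detail in results
    )
    return passed, report

passed, report = run_sequence([power_on_test, rf_level_test])
print("UUT PASS" if passed else "UUT FAIL")   # -> UUT PASS
```

A commercial test executive layers flow control, limit handling, database logging, and operator interfaces on top of this core loop, which is why few production teams build their own.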
Hardware Vs. Software
A critical distinction in considering a test system’s topology is that in some cases, the underlying foundation for a given measurement may lie in hardware, and in others, in software. For example, you might be making a frequency-domain measurement in which you examine an RF signal’s demodulation according to a given format. Some aspects of the measurement may be more hardware-centric while other algorithms are on the software side of the equation.
In bench instruments, all of the hardware and software associated with any given measurement resides in the same enclosure. With modular instruments, the hardware and software elements required for a given measurement must be assembled. So, it’s equally critical to use both the correct hardware blocks and the proper measurement algorithms. Agilent’s strategy in this respect is to make the bench-instrument software available to modular systems. Users then don’t have to write their own algorithms, which saves considerable time.
Another means of lending flexibility and versatility to a modular test system is through the inclusion of field-programmable logic in the form of FPGAs. National Instruments chose this path for its hardware. “We find it exciting to provide FPGAs in our systems,” says National Instruments’ Matthew Friedman. “FPGAs allow us to open up new ways for users to create software-defined instruments.”
With FPGAs in the hardware mix, users can program the device to create customized instruments that fit their needs exactly. “In the past, modular equipment was vendor-defined. You got a box with predefined functionality,” says Friedman. “With the PXI architecture, you can have multiple places in which to program an FPGA.” In this way, users can create customized measurements, host-based processing, and analysis.
What About Calibration?
Even though a modular test system brings advantages in terms of flexibility and customization, it also can present some challenges. One of these is system calibration and support. Benchtop instruments are designed to be calibrated all at once, making the process relatively straightforward. But when using a modular system, it’s a bit more complex. At least to some extent, modular system calibration requires the equipment vendor’s services.
It’s often impractical, if not impossible, to simply unplug one module within a larger system and have it calibrated in isolation. In such cases, the whole system needs to be calibrated together. But there are ways around this. For example, you may have modules for which calibration can be done at a modular level as opposed to a subsystem level.
It’s Not All Or Nothing
There’s no need to be a purist about how you assemble a test strategy. The world of test is not going all modular, nor will it stay all benchtop. One form factor doesn’t fit all test scenarios. Benchtop instruments are often better for some requirements, while modular excels at others. Very often, the result is the hybrid system described above.
A key concern for designers working with hybrid systems is ensuring a unified approach to system management. The best case is a software framework that ties it together. It’s also not uncommon to have more than one type of modular element coexisting, most typically PXI, PCI Express, and AXIe.
If your requirements involve the generation of high-bandwidth signals, a PXI system may not fit the bill, whereas the other options would fare better. No matter what your mix of modules is, though, they need to be treated the same way from the point of view of the controller. Once you’ve achieved that kind of software framework, you gain the benefit of mixing modules that offer high performance with others that save cost and/or space.
The Future For Modular Test
Modular test systems are beginning to see applications in new areas these days. “We see the PXI space expanding in two directions,” says Agilent’s Jean-Manuel Dassonville. “The first is that there are some new domains which in the past were dedicated to benchtop instruments, and these are RF and microwave. With MIMO (multiple-input multiple-output) technology on the rise, modular test architectures bring advantages to the table.”
A second emerging use case for modular systems is in the validation stage of the design cycle, and even in earlier stages of design activity. Historically, modular instruments have seen their greatest utility in test automation, which implies usage in the final stages of the design cycle and in manufacturing test. “Using modular instruments in validation and R&D means a more interactive application and not so much automation,” says Dassonville.
Thus, even though a platform may be built for easy automation, it still may need to enable the sort of interactivity that’s required in earlier stages of the design cycle. In this space, Agilent offers the software from its benchtop instrumentation with its modular instruments. As a result, a modular or hybrid system user sees a software front end that facilitates that interactivity, enabling use of the modular architecture during the design phase.
National Instruments’ Matthew Friedman agrees that there’s growth in the validation space for modular systems. “I’d say it’s growing in validation, with customers bringing modular test to the design phase. A great example is TriQuint Semiconductor, which performs some of the best characterization out there. TriQuint has moved to a PXI-based architecture and has seen substantial speedups in characterization.”
PXI has become a dominant architecture for modular systems and is projected to be a $1 billion market by 2017 (Fig. 5). The PXI architecture makes for a powerful yet simple modular system, says Friedman. “You can program the way you did 20 years ago with standard drivers. But you can go one step further now and create specific processing and triggering. You get the simplicity you are used to but can dive in and make changes when necessary.”