In these times of economic uncertainty and generally tough business conditions, the cost of upgrading test instrumentation can be difficult to justify. Sometimes there’s no choice. Your current gear just can’t do the job. Often, however, the choice is not so straightforward. You might be able to squeak by with current equipment, but upgrading to newer or different instruments would provide benefits.
Many factors inform upgrade decisions, and their relative importance generally depends on the situation. For example, equipment decisions for production test applications are often driven strongly by test throughput considerations. Those for R&D applications may be driven more by resolution, accuracy, noise, and related factors.
Other applications, such as those in student laboratories, will likely demand equipment that combines modest performance, low cost, and high durability and reliability. Although factors such as ease of use and flexibility influence all equipment decisions, they can be difficult to weigh against other factors.
Measuring voltage, current, capacitance, and charge with accuracy, resolution, and precision under a variety of conditions is the foundation of modern electronic test equipment. Perhaps surprisingly, today’s instruments don’t show major improvements in these areas over instruments produced 10 or more years ago. Noise quickly becomes the limiting factor for sensitive measurements. Fortunately, newer instruments do provide many tools for minimizing the impact of noise.
The latest generation of instruments also provides real improvements over earlier generations in a variety of other areas. Many of these improvements are the result of greatly increased embedded computing capability and more sophisticated digital signal processing. Many of them can even save you money.
Of course, precision measurements don’t represent the whole equipment story. Many tests also require precision sources of stimuli. Here, too, state-of-the-art sources tend to provide dramatically higher performance than older models.
Separating Signals From Noise
Measuring tiny signals in ever-present electrical noise requires a variety of strategies and techniques depending on the test environment. Fortunately, today’s instruments are significantly better at dealing with noise than those of earlier generations.
Internal instrument noise has been reduced through the use of improved power supplies and components, as well as thoughtful circuit design and layout. Digital processing offers a variety of filtering algorithms, from simple averaging to sophisticated multi-pole filters that are useful for extracting signals from noise. Improved synchronization functions can increase rejection of line frequency or other periodic noise signals.
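The principle behind line-frequency rejection can be shown with a minimal sketch: averaging readings over exactly one full power-line cycle cancels the periodic pickup, because a sinusoid integrates to zero over its period. The signal level, noise amplitude, and sample rate below are illustrative assumptions, not any instrument’s actual behavior:

```python
import math

def mean(samples):
    return sum(samples) / len(samples)

# Simulated reading: a 5 V DC signal contaminated with 0.1 V of 60-Hz
# line-frequency pickup, sampled at 6 kHz (100 samples per line cycle).
SAMPLE_RATE = 6000.0
LINE_FREQ = 60.0

def noisy_sample(n):
    t = n / SAMPLE_RATE
    return 5.0 + 0.1 * math.sin(2 * math.pi * LINE_FREQ * t)

# Averaging over exactly one line cycle cancels the periodic noise;
# the result is very nearly the true 5.0 V level.
one_cycle = [noisy_sample(n) for n in range(100)]
print(round(mean(one_cycle), 6))
```

This is the same idea behind integrating a measurement over an integer number of power-line cycles: the synchronization matters more than the averaging time itself.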
Often, the best strategy for dealing with noise is to keep it out of the system. New connectors and cable systems are designed to minimize signal loss and maximize noise rejection in the connections between the device under test (DUT) and instrument.
Inching Closer To The Ideal
An “ideal” instrument is one that has no effect on the circuit or DUT. But real instrument inputs (and outputs) and the cables used to connect them to the DUT have non-ideal characteristics that alter DUT behavior and affect measurement accuracy.
No one has yet produced the ideal instrument, but the latest models have inched far closer to that goal than their predecessors. For example, two generations ago, 10-MΩ input resistance was typical on voltmeters’ low dc voltage ranges. Now, 10-GΩ or 100-GΩ input resistance specifications are common.
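The practical impact of that change is easy to quantify: the meter’s input resistance forms a voltage divider with the source resistance of the circuit under test. The source voltage and resistance in this sketch are illustrative values chosen to make the comparison visible:

```python
# Loading error: a voltmeter's finite input resistance forms a voltage
# divider with the source resistance of the circuit under test.
def measured_voltage(v_source, r_source, r_input):
    return v_source * r_input / (r_source + r_input)

V_SOURCE = 1.0      # true open-circuit voltage, volts (illustrative)
R_SOURCE = 100e3    # 100 kOhm source resistance (illustrative)

old = measured_voltage(V_SOURCE, R_SOURCE, 10e6)   # 10 MOhm input
new = measured_voltage(V_SOURCE, R_SOURCE, 10e9)   # 10 GOhm input

# The 10 MOhm input reads about 1% low; the 10 GOhm input's
# loading error is on the order of 0.001%.
print(f"10 MOhm input reads {old:.6f} V, 10 GOhm input reads {new:.6f} V")
```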
New Measurement Types
Newer instruments often support new types of measurements that can improve test results. For example, several technology trends are driving the requirement to test devices using narrow pulses rather than continuous signals.
One of these trends is the ongoing scaling of semiconductor devices. The characteristics of very small devices can change while test signals are applied, producing misleading results. Another is the proliferation of higher-power semiconductor devices, which can quickly overheat when tested prior to packaging at near-typical operating power levels.
In both of these circumstances, testing with very short pulses helps ensure greater accuracy. Unfortunately, older test equipment often is unable to deliver optimally short, precision pulses or may not be able to measure quickly enough to capture short pulses. Newer equipment is designed to support pulse testing.
New Instrument Architectures
Although source measure units (SMUs) have been available for at least a decade and a half, the performance and capability of the latest generation of these instruments continue to advance. An SMU can source or sink voltage or current while simultaneously measuring voltage and current, making it useful for testing semiconductor, nanotechnology, and other devices.
Combining source and measure capabilities in a single instrument provides a variety of advantages over the use of separate instruments. Instrument setup and configuration are simplified, and performance is often better due to the tight coupling and synchronization. Many newer SMUs also offer excellent pulse mode testing capability.
As mentioned previously, newer instruments benefit greatly from increased embedded computing power. More and more equipment manufacturers are leveraging this increased computing power to deliver advanced capabilities and improved performance.
Many modern instruments provide built-in data analysis functions that previously required external software and computers, such as calculating statistics on data as it’s acquired and comparing data to limits while providing go/no-go output signals. In addition to data analysis, many newer instruments provide data visualization functions, either using graphical displays or by generating Web pages viewable on a connected computer.
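The kind of processing described above can be sketched in a few lines. The class and readings below are hypothetical, intended only to illustrate running statistics plus a per-reading go/no-go limit comparison; they do not represent any vendor’s actual firmware or API:

```python
import math

class LimitChecker:
    """Accumulates running statistics and flags out-of-limit readings,
    mimicking the on-board limit testing many instruments provide."""
    def __init__(self, low, high):
        self.low, self.high = low, high
        self.n = 0
        self.total = 0.0
        self.total_sq = 0.0
        self.failures = 0

    def add(self, reading):
        self.n += 1
        self.total += reading
        self.total_sq += reading * reading
        in_limits = self.low <= reading <= self.high
        if not in_limits:
            self.failures += 1
        return in_limits  # go/no-go result for this reading

    @property
    def mean(self):
        return self.total / self.n

    @property
    def stddev(self):
        return math.sqrt(max(0.0, self.total_sq / self.n - self.mean ** 2))

checker = LimitChecker(low=4.9, high=5.1)
for r in [5.02, 4.98, 5.05, 5.20, 4.95]:  # one reading out of limits
    checker.add(r)
print(checker.n, checker.failures, round(checker.mean, 3))
```

Doing this inside the instrument means the go/no-go signal is available immediately, without a round trip to a controlling computer.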
Programmability is another powerful advanced capability common in newer instruments. Keithley, for example, offers many products with built-in scripting capability. Users can extend an instrument’s basic functionality by loading scripts into the instrument to perform advanced functions, sometimes eliminating the need for an external computer and costly software. Often, performance is improved by eliminating the delays associated with transferring data and commands between instrument and controller during testing.
Cost Of Ownership
Instrument downtime for repair or calibration can be costly, especially in production test applications. Older instruments are often less reliable than newer models, and they may be more difficult to have repaired when they do fail. Instrument manufacturers strive to reduce these ongoing costs of ownership, but it can be a challenge.
The same designs that allow more sensitive, accurate measurements and increased resolution place greater demands on the stability of internal circuits and require more areas of adjustment during calibration. The increased reliance on digital processing and associated firmware can lead to new types of failures caused by programming bugs.
Fortunately, the same strategies that have conquered these issues in other complex electronic systems are also effective for instruments. Widespread use of systems-on-a-chip (SoCs), ASICs, FPGAs, and other highly integrated components helps keep component count reasonable.
Flash memory allows firmware upgrades in the field to address firmware bugs. Embedded computing capability allows for built-in self test and auto-adjustment using precision internal standards. Modern instruments often have accuracy and other key characteristics specified for intervals of up to two years, making it easier to select an appropriate calibration interval.
So even though they offer higher performance and increased capability, many of today’s instruments can go longer between calibrations, are easier to calibrate, and have longer mean time between failures than previous generations of instruments.
Earlier generations of instruments were often designed primarily for standalone benchtop use. Now, instruments are almost always connected to a computer, and often they’re incorporated into larger systems with other test hardware. In addition to the long-standard GPIB protocol used for instrument-to-computer communications, most instruments now offer LAN (Ethernet) and USB interfaces.
Many instruments that provide a LAN interface comply with the LXI standard, which describes basic capabilities all LXI instruments must provide and a set of optional but standardized, advanced capabilities. The LXI standard helps ensure the interoperability and compatibility of LAN-based instruments. This expanded choice of communications buses offers test system integrators the flexibility to choose the best option for each system.
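One small, concrete piece of that interoperability is the IEEE 488.2 `*IDN?` identification query, which every compliant instrument answers with four comma-separated fields: manufacturer, model, serial number, and firmware revision. The helper below parses such a reply; the instrument reply string shown is hypothetical:

```python
from typing import NamedTuple

class Identification(NamedTuple):
    manufacturer: str
    model: str
    serial: str
    firmware: str

def parse_idn(reply: str) -> Identification:
    """Parse the comma-separated response to the IEEE 488.2 *IDN? query."""
    fields = [f.strip() for f in reply.strip().split(",")]
    if len(fields) != 4:
        raise ValueError(f"unexpected *IDN? response: {reply!r}")
    return Identification(*fields)

# Hypothetical reply, as a LAN-connected instrument might return it:
idn = parse_idn("KEITHLEY INSTRUMENTS INC.,MODEL 2602,1234567,1.4.2\n")
print(idn.manufacturer, idn.model)
```

In a real system the reply would come back over GPIB, USB, or a LAN connection rather than from a literal string.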
The complexity of modern instruments means ease of use is both more important and harder to achieve than with older instruments. The number of functions and parameters available has increased dramatically, and the impact of incorrect configuration or programming errors can be subtle and hard to detect.
Manufacturers struggle to maintain ease of use in the face of this increased complexity. Graphical displays with menus, forms, and online help are becoming more common ways for users to navigate the large number of options.
LXI-compatible instruments have standard configuration Web pages for common functions and often have extended Web pages for more complicated configuration and setup. Most instruments (and all LXI instruments) provide an instrument driver to simplify remote control from computer software.
It’s Not All Roses
In addition to the potential benefits of upgrading older test equipment, there are also some disadvantages, including the cost of new instruments. But for some applications, one significant impediment to upgrading an existing test system is correlating new data with old data.
Users need to be confident that measurements made before and after a test system change agree with one another, which rarely occurs without some adjustment. The testing and adjustment process can be time-consuming and difficult.
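One common form of that adjustment is fitting a simple gain/offset correction between paired readings of the same devices taken on both systems. This least-squares sketch uses hypothetical paired readings; real correlation studies typically involve many more samples and careful uncertainty analysis:

```python
# Least-squares linear correlation between readings of the same DUTs
# on the old and new systems: new ~= gain * old + offset.
def linear_fit(xs, ys):
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    gain = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    offset = (sy - gain * sx) / n
    return gain, offset

# Hypothetical paired readings from the two systems:
old_readings = [1.00, 2.00, 3.00, 4.00]
new_readings = [1.02, 2.01, 3.00, 3.99]

gain, offset = linear_fit(old_readings, new_readings)
print(round(gain, 4), round(offset, 4))
```

Once the gain and offset are characterized, old data can be mapped into the new system’s frame (or vice versa) for comparison.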
When upgrading an existing test system, new instrumentation typically isn’t backward compatible with that being replaced. There’s an inevitable learning curve associated with any new instrument, and if the old instrument was controlled via test software, the software will likely require modification to work with the new instrument.
There’s no simple formula for deciding if upgrading test equipment is worthwhile. It’s important to consider factors other than just instrument cost and performance. The best approach is to choose the potential advantages most likely to be realized and weigh their importance and value to the test situation. Do the same with the disadvantages and compare results.