Virtual Instrumentation Saves Money

Today, the buzzword in test is virtual instrumentation. Even in companies suffering from corporate downsizing, virtual instrumentation is a reasonable goal because it requires fewer personnel than traditional test methods. And by taking advantage of virtual instrumentation, you can test more at a lower cost and in less time.

Virtual instrumentation systems are based on graphical and text-based application development tools, complete with features for code generation, instrumentation configuration, in-depth analysis and display of data in comprehensible formats. These systems use industry-standard hardware for data acquisition and signal generation, including traditional IEEE 488 and RS-232 instruments, VXI instruments and plug-in data acquisition (DAQ) boards.

Although virtual instrumentation is very appealing, it can also be very confusing. Where do you start among hundreds of system-configuration options?

First, you must develop a test strategy. Because virtual instrumentation adds flexibility, it is even more important that traditional design methodologies and forethought play a major role in system design. Then, you must understand each configuration option to determine the bottom-line effect it will have on product profitability.

To help you develop a test strategy, it is important to examine the components necessary to build a virtual instrumentation system: signal definition, hardware I/O definition, host controller definition and software integration definition.

Signal Definition

Begin your test-strategy definition by looking bottom-up at the signals your system will monitor or control. What information do you need to extract from your signals? How many signals will you monitor or control?

All signals are analog, time-varying waveforms, but the information they convey can have varying importance in different test scenarios. For example, a pacemaker generates a pulsating output signal similar to a square wave. In one test strategy, the signal’s duty cycle could be important; in another, its frequency and amplitude may be of little interest.
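
To make the duty-cycle case concrete, here is a minimal C sketch (illustrative, not from any instrument library) that estimates duty cycle from a buffer of digitized samples; the comparison threshold is application-specific.

/* Estimate duty cycle as the fraction of samples above a threshold.
   Assumes the pulse waveform has already been digitized. */
double duty_cycle(const double *samples, int n, double threshold)
{
    int i, high = 0;

    for (i = 0; i < n; i++)
        if (samples[i] > threshold)
            high++;
    return (double)high / (double)n;   /* fraction of time spent high */
}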

Signal information can be classified into one of five signal types: digital on-off, digital pulse-train, analog DC, analog time domain and analog frequency domain. Referring to Figure 1, you can divide digital signals into two types and analog signals into three types.

The first type of digital signal is the on-off signal. Digital on-off is relative because logical on or logical off may be 1 V or 1,000 V. A simple example of a low-voltage digital on-off input signal is sensing a switch closure. An important parameter for test specifications that include digital on-off signals is the drive current required by each signal line.
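
As a minimal C sketch of the switch-closure example, polling one digital input line might look like the following; daq_read_line() is a hypothetical stand-in for whatever digital-input call your driver software provides.

/* Sense a switch closure on one digital input line.
   daq_read_line() is hypothetical; substitute your driver's call. */
extern int daq_read_line(int board, int line);

int switch_is_closed(void)
{
    return daq_read_line(0, 3) != 0;   /* board 0, line 3: illustrative */
}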

Pulse-train, the second type of digital signal, consists of a series of state transitions. Information is contained in the number of state transitions, the rate of occurrence, or the time between one or more state transitions.

For example, by placing optical encoders on a motor’s rotating shaft, you can gather important performance information from the generated pulse-train signal output. An important parameter for test specifications that include digital pulse-train signals is the maximum frequency of transitions required by each signal line.
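
For illustration, a minimal C sketch (assuming the pulse train has already been sampled into a buffer of logic levels) counts rising edges and converts the count to an average frequency over the capture window. In practice, a counter/timer on the DAQ hardware does this job; the software version works only when the sample rate comfortably exceeds the transition rate.

/* Count low-to-high transitions in a sampled pulse train and
   convert the count to an average frequency. */
double pulse_frequency(const unsigned char *level, long n, double sample_rate)
{
    long i, edges = 0;

    for (i = 1; i < n; i++)
        if (level[i] && !level[i - 1])   /* rising edge */
            edges++;
    return (double)edges * sample_rate / (double)n;   /* edges per second */
}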

Figure 1 also shows analog signals divided into three types: analog DC, analog time domain and analog frequency domain. Analog DC is a slowly varying or static signal, such as from temperature, flow or pressure transducers. DC signals convey information as a level or amplitude at a given instant.

Analog time-domain signals convey information in amplitude and in how the amplitude varies with time. In these signals, you are often interested in the waveform shape.

To measure the slope or peak of an analog time-domain signal, take a precisely timed sequence of individual amplitude measurements or points that are close enough together to reproduce the characteristics of the waveform shape adequately. In addition, the start and stop points of the measurements must occur within the region where important information resides. Test specifications for analog time-domain signals must include the required resolution of the signal to be sampled.
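
The guiding rule for "close enough together" is the Nyquist criterion: sample at more than twice the highest frequency component of interest. As an illustrative C sketch (not from the article), the following fragments locate a waveform's peak and estimate its slope from adjacent samples, assuming the sample rate already satisfies that criterion.

/* Locate the peak sample of a digitized waveform. */
int peak_index(const double *samples, int n)
{
    int i, peak = 0;

    for (i = 1; i < n; i++)
        if (samples[i] > samples[peak])
            peak = i;
    return peak;
}

/* Approximate the slope (amplitude change per second) at sample i. */
double slope_at(const double *samples, int i, double sample_rate)
{
    return (samples[i + 1] - samples[i]) * sample_rate;
}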

The last type of analog signal is the analog frequency-domain signal. Similar to a time-domain signal in that its information varies with time, a frequency-domain signal conveys its information in the different frequency components of the waveform.

An example where frequency-domain signals can reveal important data is in the design of better golf clubs. By placing transducers along a club shaft, you can measure the fundamental and harmonic frequencies at each point along the shaft. Frequency and amplitude at each measurement location affect the distance a club can drive a golf ball.
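
How do you get from time samples to frequency content? As an illustrative C sketch, here is one bin of a naive discrete Fourier transform; production systems use an FFT library, and the 2/n amplitude scaling assumes a single-sided spectrum.

#include <math.h>

#define PI 3.14159265358979323846

/* Magnitude of one frequency bin of a naive DFT; bin k corresponds
   to frequency k * sample_rate / n. */
double dft_magnitude(const double *x, int n, int k)
{
    double re = 0.0, im = 0.0;
    int i;

    for (i = 0; i < n; i++) {
        double angle = 2.0 * PI * (double)k * (double)i / (double)n;
        re += x[i] * cos(angle);
        im -= x[i] * sin(angle);
    }
    return 2.0 * sqrt(re * re + im * im) / (double)n;
}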

You may need signal conditioning and/or isolation for each of your signals. Signal conditioning, advantageous on transducer inputs because of their nonlinearity, is often performed in software or in external hardware. Signal conditioning is also used to amplify small input signals and to filter unwanted signal components.
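
As a sketch of signal conditioning done in software, the following C fragments linearize a transducer reading with a polynomial and average out high-frequency noise. The coefficients are placeholders for illustration, not values from any real transducer table.

/* Correct transducer nonlinearity with a second-order polynomial;
   the coefficients below are illustrative, not calibration data. */
double linearize(double volts)
{
    const double a0 = 0.0, a1 = 25.0, a2 = -0.45;

    return a0 + a1 * volts + a2 * volts * volts;
}

/* Average a block of samples to filter unwanted high-frequency noise. */
double block_average(const double *x, int n)
{
    double sum = 0.0;
    int i;

    for (i = 0; i < n; i++)
        sum += x[i];
    return sum / (double)n;
}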

Signal isolation, on the other hand, is important in protecting the low-voltage analog-to-digital conversion circuitry of your input hardware. Isolation is common in systems where the differences in common-mode voltage affect input signal measurements.

For example, common-mode isolation is recommended when measuring a small voltage riding on top of a 60-Hz, 1,000-V signal. You may also require signal isolation for digital signals. With relays, low-voltage digital output signals can switch high-voltage loads, such as motors or pumps.

Hardware I/O Definition

With a complete understanding of input/output signal requirements, the next component of your test strategy is your input/output hardware. There are many choices for input/output hardware; some are vendor proprietary, while others are based on IEEE or other industry standards. Probably the most common instrumentation architectures used for virtual instrumentation-based test are GPIB, VXI, PC-based plug-in DAQ boards and serial instruments (Table 1).

GPIB-based instrumentation has been the most widely used architecture for test since the late 1970s. With GPIB, you can control individual, function-specific instrumentation as a single multifunction system.
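
For example, reading a DC voltage from a SCPI-style multimeter over GPIB might look like the following C sketch. It assumes National Instruments' NI-488 library; the GPIB address (5) and the command string are illustrative and instrument-specific.

#include <stdio.h>
/* Assumes the NI-488 C library (decl.h on DOS, ugpib.h on UNIX). */

int read_dc_volts(double *volts)
{
    char reply[64];
    int ud = ibdev(0, 5, 0, T10s, 1, 0);   /* board 0, address 5, 10-s timeout */

    if (ud < 0)
        return -1;
    ibwrt(ud, "MEAS:VOLT:DC?", 13L);       /* instrument-specific SCPI query */
    ibrd(ud, reply, (long)sizeof(reply) - 1);
    reply[ibcnt] = '\0';                   /* ibcnt holds the bytes read */
    sscanf(reply, "%lf", volts);
    ibonl(ud, 0);                          /* take the device handle offline */
    return 0;
}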

The advantage of GPIB-based instrumentation is its individuality. You can easily use the instrument, with its own built-in front panel and control knobs, off-line.

For virtual instrumentation-based systems, however, using GPIB instrumentation can pose problems. Some disadvantages include performance limitations because of protocol translations that occur for data transfer, hardware circuitry duplication within the instrument and host controller, and signal cabling because of each instrument’s own input/output connectivity.

Designed specifically for virtual instrumentation, VXI-based instruments combine GPIB-style capabilities with high-performance register-based communications. VXI eliminates much of the duplication in technology common in GPIB-based virtual instrumentation systems. It has a modular instrument-on-a-card topology designed for high-performance, direct memory-mapped communication. In addition to performance, VXI substantially reduces system size requirements.

The disadvantage of a VXI-based system traditionally has been integration. Software is much more important because VXI instruments have no CRT or knobs. Not only must you write software to execute system tests, you often must write code just to find out if an instrument is operational. This presents a double challenge during development because you cannot tell whether a problem lies in the software or the instrument. Recently, advances have been made to eliminate this difficulty.

The original VXI specification defined electrical and mechanical requirements for baseline operability, but did not impose restrictions that might narrow VXI system-level technology and applications. With a goal of making VXI easier to use, the multivendor VXIplug&play Systems Alliance was formed. This alliance has defined common standards for software, including instrument drivers, soft front panels, installation and packaging, and system frameworks to ease out-of-the-box operation and application development.
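
Under VXIplug&play, instrument I/O goes through the VISA library standardized by the alliance. A minimal C sketch of checking whether an instrument is operational might look like the following; the resource name "VXI0::5::INSTR" (a message-based instrument at logical address 5) is illustrative.

#include <visa.h>

int query_instrument_id(char *id)   /* id must point to a large-enough buffer */
{
    ViSession rm, instr;

    if (viOpenDefaultRM(&rm) < VI_SUCCESS)
        return -1;
    if (viOpen(rm, "VXI0::5::INSTR", VI_NULL, VI_NULL, &instr) < VI_SUCCESS) {
        viClose(rm);
        return -1;
    }
    viPrintf(instr, "*IDN?\n");   /* standard identification query */
    viScanf(instr, "%t", id);     /* read the terminated reply string */
    viClose(instr);
    viClose(rm);
    return 0;
}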

PC-based instrumentation is the next common architecture for virtual instrumentation-based test. This architecture presents the widest variety of hardware configurations because no formal standard exists. The advantages of PC-based instrumentation include extreme cost-effectiveness and flexibility. The disadvantages include limited frequency measurement bandwidth and integration complexities.

Until recently, PC-based test was considered only for systems with moderately low channel counts of analog DC signals, such as from thermocouples, or analog time-domain signals under 25 kHz in bandwidth. Newer PC microprocessors made PC-based virtual instrumentation a reality.

The bottleneck of the 16-bit AT bus for PC I/O, however, has sometimes limited the data throughput to memory that plug-in virtual instrumentation systems need. A new 32-bit bus, the Peripheral Component Interconnect (PCI), is beginning to replace the traditional AT bus and brings new capabilities to PC-based test.

First proposed in 1991, the PCI bus is now available in many computers and workstations. PC manufacturers with PCI capability include Compaq, Apple, DEC, Hewlett-Packard and IBM. PCI solves the major performance limitations of the AT bus with its 32-bit bus operating at 33 MHz, automatic configuration and bus master capabilities.

Portability is a substantial benefit of PC-based instrumentation. Early portable test systems required host computers that were hefty in size and weight and accepted standard ISA plug-in boards.

Today, portable computers weigh less than 6 lb and have a PCMCIA I/O interface. PCMCIA cards can be inserted into a portable computer without dismantling it or switching off the power. PCMCIA DAQ cards offer sampling rates of 100 kS/s, and with ASIC technology, higher-performance cards will be available in the near future.

PC-based instrumentation vendors have not formed an effective alliance, as the VXI instrumentation vendors have, to solve integration complexities. However, many vendors have invested heavily in driver-level software to remove the complexities associated with register-level programming and have added sophisticated capabilities such as data and buffer management in high-level instrument drivers.

The final common instrumentation architecture is based on serial RS-232 or RS-485. Serial RS-232 is a point-to-point protocol that requires a dedicated serial port for each instrument. Serial RS-485 gives multidrop capability similar to GPIB.

Because each is serial-based, transfer benchmarks are often stated in b/s. The advantages and disadvantages of serial-based instrumentation are the same as those of GPIB, with one more problem: these protocols are so poorly standardized that you often spend more time debugging communication protocols with a serial bus analyzer than writing application code.
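
For example, opening an RS-232 instrument from a UNIX host (one of the operating systems discussed later) might look like this C sketch using POSIX termios; DOS and Windows hosts use different serial APIs, and the 9,600-baud, 8-N-1 settings are illustrative and must match the instrument.

#include <fcntl.h>
#include <termios.h>
#include <unistd.h>

int open_serial(const char *dev)   /* e.g., "/dev/ttya" on a Sun host */
{
    struct termios tio;
    int fd = open(dev, O_RDWR | O_NOCTTY);

    if (fd < 0)
        return -1;
    tcgetattr(fd, &tio);
    cfsetispeed(&tio, B9600);
    cfsetospeed(&tio, B9600);
    tio.c_cflag |= (CLOCAL | CREAD);   /* local line, enable receiver */
    tio.c_cflag &= ~PARENB;            /* no parity */
    tio.c_cflag &= ~CSTOPB;            /* one stop bit */
    tio.c_cflag &= ~CSIZE;
    tio.c_cflag |= CS8;                /* eight data bits */
    tcsetattr(fd, TCSANOW, &tio);
    return fd;
}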

Component interconnect requirements are another factor to consider in your input/output hardware evaluation. Many development-cost overruns occur because of poorly designed UUT-to-instrument interconnects.

Finally, consider file I/O and network communication requirements in hardware I/O selection. Even though most are controlled through software, these I/O options must be an integrated part of the system.

To summarize, many hardware architectures for virtual instrumentation-based test are in use, each with trade-offs. The dominating factors you should consider in selecting hardware are reliability, safety, cost per channel, low-level drivers, availability and experiences in previous applications.

Host Controller Definition

The real power of virtual instrumentation is its capability to leverage the general-purpose PC industry. It does this by moving the processor, memory and bus I/O out of the instrument. In essence, you are now free to choose only the components absolutely necessary to achieve the cost and performance requirements of your application.

When selecting your host controller, keep in mind that in a virtual instrumentation-based system, the processing performance of your host controller determines your application performance. If your selection does not align with the direction the industry is heading, your future opportunities to upgrade to the latest technologies diminish.

Appropriate host-controller selection is closely tied to your development tools and operating system selection. Many computer-based test developers originally used Basic to simplify programming of GPIB and serial message-based instruments. The C language, developed for systems programming, has equipped developers with modern concepts such as data structures, modular software design and register-level system programming.

Virtual instrumentation development tools build on the past by adding an instrumentation-specific framework, complete with acquisition, analysis and presentation, to the software-development task. These development tools use object-oriented techniques to maximize code reuse. They take advantage of general-purpose computers and maximize profitability by letting you quickly and easily design your own application-specific instruments.

Many of the virtual instrumentation development tools work with mainstream operating systems such as DOS, Windows, Macintosh OS and UNIX. The popular UNIX operating systems in test are Sun OS and HP-UX.

Often the choice of a host controller is set by standards within the organization or group. Select a host controller that aligns with industry technologies and gives you the chance to choose integrated, consistent software development tools that make each option discussed in the hardware I/O definition easier to use and integrate into a system. Select an operating system that is widely available, familiar and reliable. Finally, look at software maintenance cost over the life cycle of the tester. You will need to stay current.

Software Integration Definition

The final step in your bottom-up test-strategy definition is to understand how virtual instrumentation development tools bring a test system together. With these tools, you can take a slightly different approach to code development that reduces development time and increases code reuse.

Virtual instrumentation-based systems center around the end-user interface. Your application-code development should also center around the end-user interface. An example code-design approach that follows this methodology is listed in Table 2.
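
As one concrete step from that approach, Table 2's test-design method pairs each bottom-level module with an encapsulated self-test. A minimal C sketch (the voltage-scaling module and its 100-degrees-per-volt factor are hypothetical):

#include <assert.h>

/* Hypothetical bottom-level module: convert a voltage to degrees C. */
double volts_to_degrees(double volts)
{
    return volts * 100.0;   /* illustrative scale factor */
}

/* Encapsulated self-test: exercises the module in isolation so
   higher-level code can trust and reuse it. */
void test_volts_to_degrees(void)
{
    assert(volts_to_degrees(0.0) == 0.0);
    assert(volts_to_degrees(1.5) == 150.0);
}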

No definitive standards exist for programming with virtual instrumentation development tools. There are, however, commonly used practices, techniques and methods.

When evaluating virtual-instrumentation development tools, ask the product vendor about the availability of guides for software validation, programming style and training. In addition, request user references and the location of code-sharing bulletin boards. Each of these can go a long way in preventing you from reinventing the wheel.

Conclusion

Virtual instrumentation-based test reduces development time and increases profitability because it empowers you with the ability to design instruments that exactly meet your needs. Virtual instrumentation combines general-purpose computers with a variety of instrumentation hardware, so shared resources perform the acquisition, analysis and presentation of your data.

To maximize the power of virtual instrumentation in your next test system, begin by understanding the signals you will monitor or control. Establish or use existing test-system design methodologies to evaluate and define your hardware I/O, host controller and software tools. Finally, ask your product vendors lots of questions.

About the Author

Greg Crouch is a Regional Sales Manager for National Instruments. Previously, he worked at Lockheed, Fort Worth, as a design engineer of automated test equipment. Mr. Crouch has a B.S.E.E. degree from Texas A&M University. National Instruments, 6504 Bridge Point Parkway, Austin, TX 78730-5039, (512) 794-0100.

Table 2. An Example Code-Design Approach

End-User Interface Design: Use the graphical interface tools to develop the end-user interface that represents application requirements without entering any code.

Functional Design: Use formal design methodologies to design the text or graphical data-flow architecture, division of labor and hierarchical structure of the application.

System I/O Design: Use existing or newly created prototype code to exercise bottleneck areas such as data acquisition, disk I/O or screen graphing output. Use this code to evaluate signal-acquisition hardware components and determine performance requirements. Some of these modules may become the first level developed for and used by your application.

Library Structure Design: Use an internal standard or create your own disk organization of libraries for storing and managing your code modules.

Code Design: Code your modules from the bottom up, using the user-interface code as a visualization of where the bottom-up structure is going.

Test Design: Test your code modules from the bottom up. Encapsulated testing provides dependable, reusable code.

Documentation Design: Document your source code thoroughly throughout the process.

Table 1. Common Instrumentation Architectures

GPIB
Advantages: Instrument is easily used off-line. Is an industry standard (IEEE 488).
Disadvantages: Performance is limited because of protocol translations. Technology duplication can occur within the instrument and host controller. Excessive signal cabling can occur because of each instrument's own input/output connectivity.

VXI
Advantages: Eliminates duplication in technology. Modular instrument-on-a-card topology specifically designed for high-performance, direct memory-mapped communication. Substantially reduces system real-estate requirements. Is an industry standard (IEEE 1155).
Disadvantages: Is sometimes difficult to integrate (recently, advances have been made in VXIplug&play to eliminate this difficulty).

PC-Based DAQ
Advantages: Is cost-effective. Is flexible.
Disadvantages: Frequency measurement bandwidth is limited. Integration can prove complex.

Serial
Advantages: Instrument is easily used off-line.
Disadvantages: Performance limitations can occur because of protocol translations. Technology duplication can occur within the instrument and host controller. Signal cabling can be difficult because of each instrument's own input/output connectivity. Protocols are poorly standardized.

Copyright 1995 Nelson Publishing Inc.

April 1995
