Low-Frequency Data Collection Challenges the Acquisition Envelope
Data collection is one of the most common instrumentation tasks, with applications ranging from capturing fast pulses in a research lab to monitoring engine responses on a dynamometer. This gathering of information from a sensor or UUT may occur in microseconds or continue for hours. Surprisingly, low-frequency collection systems often face the same challenging data-capacity demands as high-speed units.
Aggregate Data Rate
While the digitizer of an acquisition system may capture at a very high sample rate (up to 5 GS/s in today’s fastest VXI instruments), it typically runs only one or two channels with 8-bit resolution. Also, it normally acquires for only a few milliseconds; for example, observing the output of a pulsed laser in a laboratory.
The digitizer’s very high speed is balanced by its low channel count, brief acquisition time and lower resolution (compared to instruments having 12 to 16 bits). The volume of data per unit time is manageable.
In contrast, a low-frequency data-collection system may have dozens of channels and 16 bits of resolution and monitor test points for many hours or even days. Its maximum sample rate may be only hundreds of kilohertz, but the cumulative effect of all those channels and bits can produce an aggregate data rate similar to that of the faster instrument. In other words, the amount of data that must be moved and stored per unit of time is massive.
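The back-of-the-envelope comparison is simple. The figures below are illustrative, not drawn from any particular instrument:

```python
# Aggregate rate = channels x sample rate x bytes per sample.
# All numbers here are hypothetical, chosen to mirror the comparison above.

def aggregate_rate_bytes_per_s(channels, sample_rate, bits):
    """Sustained data rate in bytes per second."""
    return channels * sample_rate * (bits / 8)

# One 8-bit channel at 5 GS/s, but only for a few milliseconds.
fast = aggregate_rate_bytes_per_s(1, 5e9, 8)       # 5 GB/s, in bursts

# 64 channels, 16 bits, 500 kS/s each -- running for days.
slow = aggregate_rate_bytes_per_s(64, 500e3, 16)   # 64 MB/s, sustained

print(f"fast burst: {fast/1e9:.0f} GB/s, slow sustained: {slow/1e6:.0f} MB/s")
```

The burst rate of the fast digitizer dwarfs the slow system's rate, but the slow system never stops, so its cumulative volume is what strains storage.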
Continuous data collection at any sample rate requires either a large buffer memory or direct storage onto a fast hard disk. Fast acquisition systems employ a buffer memory to contain incoming data as it accumulates.
But there is a threshold at which buffer memory requirements become impracticably large. The buffer data must be offloaded periodically to a mass storage device, interrupting data capture. A well-designed low-frequency data-collection system, however, can keep going indefinitely, thanks to the huge capacities available in today’s disk arrays.
Elements of a Data-Collection System
Applications for data-collection systems are broad and varied. They include routine tasks like fault prediction monitoring in a power plant as well as more “glamorous” jobs in wind tunnels and engine test cells.
A data-collection system consists of five major functional blocks: the digitizer, a signal conditioner (which controls signal levels and bandwidths going into the digitizer), a mass storage controller, a fast storage medium (a hard disk or disk array) and control software (Figure 1). These functions may be partitioned in various ways and integrated to differing degrees, but every data-collection system shares the same basic blocks.
Thanks to its instrumentation-quality measurement environment, its modularity and its open architecture, VXI has become the preferred data-collection toolset. VXI is built on concisely specified standards for shielding, cooling and power supply conditioning.
Among all the platforms available, only the VXI architecture provides the synchronization features essential to many data acquisition tasks. A wealth of ancillary VXI hardware includes DMMs and switches often needed to carry out measurements related to data collection.
The architecture is supported by a host of advanced software packages for program development and data analysis. VXIplug&play technology ensures the platform’s ease of use and interoperability among diverse products.
Lastly, VXI can provide an efficient, optimized path to the data storage media. Using the fast data channel and a dedicated local hard disk, specially designed VXI mass storage controllers can easily surpass the disk throughput of conventional PC storage architecture. After all, DOS disk transactions were designed for use with office word processors and bookkeeping software.
A well-designed VXI data-collection system is greater than the sum of its parts. If the functional modules are designed to complement one another and are coordinated by a matching software package, then the interactions between the modules will proceed with the minimum overhead. Mismatches and redundant functions are eliminated, and with them excess cost and inefficiency.
Intelligent Data Collection
Even among VXI components, individual module architectures can affect the system’s ultimate data-handling capacity and effectiveness. Just as VXI mainframes, switches and other modules have benefited from on-board intelligence, data collection can be made “intelligent.”
Instead of using post-collection data reduction to trim test results to manageable proportions, the system can be programmed to acquire only certain narrowly defined events, even while a storm of other activity occurs at the monitored point. The system samples only when it needs to, yet retains the key information needed to reconstruct the entire history of the acquired event. The amount of information to be stored is reduced drastically.
Several characteristics distinguish the intelligent data-collection approach:
• Conditional triggering.
• Independent sample-rate allocation.
• Trigger-event time tagging.
Conditional triggering allows the digitizer to identify mutually exclusive or inclusive conditions before acquiring data. For example, you might set up an AND gating of a threshold (voltage) trigger, an external event trigger and a VXI TTL trigger to enable an acquisition. Even though millions of UUT cycles might go by, only those events that meet all three conditions would be recorded. The more trigger variables a digitizer allows, the less data it has to acquire and store.
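The AND gating described above amounts to a simple boolean condition evaluated on every cycle. A minimal sketch, with illustrative trigger names:

```python
# A sketch of conditional trigger logic: acquisition is enabled only when
# all three trigger sources coincide. Names are illustrative, not from any
# particular digitizer's programming model.

def arm_acquisition(threshold_ok, external_event, vxi_ttl):
    """AND gating: acquire only when all three conditions coincide."""
    return threshold_ok and external_event and vxi_ttl

# Millions of UUT cycles may pass; only cycles meeting all three fire.
cycles = [(True, True, False), (True, False, True), (True, True, True)]
fired = [c for c in cycles if arm_acquisition(*c)]
print(len(fired))  # only the last cycle qualifies
```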
Independent sample-rate allocation is a fancy name for tailoring the system’s sample rates to use the minimum acceptable rate at each test point. If 16 test points must be observed, for instance, eight at 10 kS/s and eight at 2 kS/s, there is no need to run all channels at the 10-kS/s rate required by the faster test points.
Some VXI digitizers can be partitioned into multiple banks, with each bank independently programmable for a sample rate. In this example, some banks would run at 10 kS/s and the others at 2 kS/s—an 80% savings in accumulated data points on the slower banks.
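The 80% figure follows directly from the ratio of the two rates. A quick check of the arithmetic:

```python
# Samples accumulated per second if all 16 channels ran at the fast rate,
# versus splitting them into two banks as described above.
fast_rate, slow_rate = 10_000, 2_000     # S/s
all_fast = 16 * fast_rate                # 160,000 samples/s
split = 8 * fast_rate + 8 * slow_rate    # 96,000 samples/s

# On the eight slower channels alone, the savings is 80%.
savings = 1 - slow_rate / fast_rate
print(f"{savings:.0%}")  # 80%
```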
Trigger-event time tagging associates a specific time (typically the elapsed time from the initial trigger) with each packet of acquired data. Every captured event is identified by its time tag. This allows you to home in on an event and discard unwanted information that occurred before or after that time.
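One way to picture time tagging is as a timestamp field attached to each stored packet. A sketch with hypothetical field names:

```python
# A sketch of trigger-event time tagging: each acquired packet carries the
# elapsed time from the initial trigger, so events can be located later.
# Field names are illustrative.
from dataclasses import dataclass

@dataclass
class Packet:
    t_elapsed_s: float   # seconds since the initial trigger
    samples: list

log = [Packet(0.0, [1, 2]), Packet(3600.5, [7, 8]), Packet(7200.1, [3, 4])]

# Home in on events near the one-hour mark and discard the rest.
window = [p for p in log if 3500.0 <= p.t_elapsed_s <= 3700.0]
print(len(window))  # 1
```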
Intelligent Data Collection in Action
To illustrate intelligent data collection, let’s look at an automotive dynamometer test. Dyno testing is fundamental to the design of automotive fuel and ignition systems, cooling systems and drive trains.
The test is a microcosm of data-collection requirements. The system must acquire both high and low voltages, including temperature and pressure sensor outputs; analog and digital waveforms of differing speeds; and the DC voltages in the electrical system.
In our example, a certain throttle position is suspected to be related to an infrequent but drastic drop in manifold pressure. The engine temperature also seems to be a factor. It might be necessary to go through many thousands of cycles of acceleration/deceleration to gather enough data to evaluate the problem.
The first step in data collection creates a test scenario. Here we define test parameters and boundary conditions. In a well-integrated data-collection system, this facility is built into the control software.
The software guides the user through the necessary steps: setting the sample rate and channel partitions, programming the signal conditioning, setting the trigger conditions and allocating storage space for the acquired data (Figure 2). Ideally, the test scenario should prevent you from setting up dangerous or meaningless conditions for the test. Successful test scenarios can be saved and reused.
In the case of our dynamometer test, we partitioned the digitizer into four independent banks, each monitoring a different point:
• Throttle position sensor (2-ms sampling period).
• Manifold pressure sensor (1-ms sampling period).
• Cooling system temperature sensor (10-ms sampling period).
• Crankshaft speed sensor (500-µs sampling period).
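Assuming 16-bit samples, the four banks above produce a modest but relentless aggregate rate. A rough sizing, using our own arithmetic rather than figures from the test:

```python
# Per-bank sampling periods from the test scenario, in seconds.
periods = {
    "throttle_position": 2e-3,
    "manifold_pressure": 1e-3,
    "coolant_temperature": 10e-3,
    "crankshaft_speed": 500e-6,
}
bytes_per_sample = 2  # 16-bit resolution (assumed)

# Each bank contributes (bytes per sample) / (sampling period) bytes/s.
rate_bytes = sum(bytes_per_sample / p for p in periods.values())
week_bytes = rate_bytes * 7 * 24 * 3600

print(f"{rate_bytes:.0f} B/s, {week_bytes/1e6:.0f} MB per week")
```

A few kilobytes per second sounds trivial, but multiplied over weeks of continuous running it reaches gigabytes, which is exactly why conditional triggering matters.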
Remember that a wide range of voltages must be monitored. While the manifold pressure sensor might put out microvolt-level signals, the crankshaft sensor could put out voltages approaching 100 V, depending on engine speed. This underscored the need for stringent conditioning of the signal going to the digitizer.
While these modest sampling speeds wouldn’t accumulate much data in the short term, remember that this dynamometer test could go on for weeks—even months. By sampling at the lowest feasible rate on each test point, we conserved disk space, maximizing the length of time available to run the test.
To increase our sampling efficiency, we set up conditional triggers to ensure that only relevant data was captured. In this case, the temperature sensor threshold trigger was ANDed with the throttle position sensor. When the AND condition was satisfied, we acquired the manifold pressure reading for a period of time. Again, intelligent data collection—in this case, conditional triggering—reduced the amount of data that must be captured.
Figure 3 shows the trigger gating setup and presents an overview of digitizer activity. Remember that the acquired waveforms are synchronized with one another, meaning any point on any waveform can be correlated with an equivalent point on the others. This is of great value in revealing cause-and-effect relationships.
After storing several weeks of uninterrupted data, potentially thousands of individual records, we can evaluate trends and recurring phenomena to distill a solution for the manifold pressure problem.
Summary
Low-frequency data collection looks deceptively simple. But when high channel count, high resolution and long sample times are accounted for, a data collection system’s aggregate data rate approaches that of any high-speed acquisition system. Special measures must be taken to manage and minimize the data from long-term test and monitoring cycles.
About the Author
Marvin Speer has been employed by Tektronix for 11 years and currently is the VXI Product Manager. He has undergraduate and graduate degrees from the University of Northern Colorado. Tektronix, P.O. Box 500, Beaverton, OR 97077, (800) 426-2200.
Copyright 1996 Nelson Publishing Inc.
March 1996