New Synthetic Instrumentation Methods Solve Tough System-Level Test Problems

Today's electronic components and products are evolving faster than ever, with design-to-production life cycles shrinking to just six months in most commercial applications. In addition, device content and topologies are migrating from single-function to multifunctional components, and then to entire subsystems and systems, often delivered as a single assembly (such as smartphones and iPhone-type devices).

Furthermore, direct software control and device configuration are now commonplace for multi-carrier power amplifiers (MCPAs) and software-defined radios (SDRs). And devices like RFICs can operate, and be tested, only in a mixed-signal environment, often under real-time conditions.

When all of these considerations are taken into account, the characteristics and requirements of the ideal design-for-test-and-manufacturing (DFT&M) solution emerge quite rapidly, and they are readily satisfied by "synthetic" instruments. Quite simply, unlike static rack-and-stack test systems, synthetic instruments can systematically evolve in concert with the device under test (DUT).

What is synthetic test?
Traditional test system providers take a combination of benchtop instruments or instrument-specific modules and rack them up with the appropriate interconnect cabling and connectors between the instruments and the product. They then add software that makes calls to the functional capabilities embedded in these instruments. This is better known as the "rack-and-stack" approach to test system development. Traditional instruments employed today in standalone configurations, or as part of a test system, include oscilloscopes, digital multimeters, spectrum analyzers, and frequency counters.
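
As a point of reference, here's a minimal sketch of that traditional, instrument-centric style of control, using the widely used PyVISA library to drive a hypothetical GPIB-connected multimeter with standard SCPI commands (the address and instrument are illustrative assumptions):

```python
# Traditional rack-and-stack control: the test program talks to a whole
# boxed instrument through its vendor-defined command set (SCPI over GPIB).
import pyvisa

rm = pyvisa.ResourceManager()
dmm = rm.open_resource("GPIB0::12::INSTR")   # hypothetical DMM address

dmm.write("CONF:VOLT:DC 10,0.001")           # configure the boxed instrument
reading = float(dmm.query("READ?"))          # one box, one measurement function
print(f"DC voltage: {reading:.6f} V")
```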

Synthetic instruments "synthesize" the stimulus and measurement capabilities found in traditional instruments through a combination of software application programming interfaces (APIs) and measurement algorithms, hardware modules, and system-level calibration software based on core instrumentation functional building blocks. The concept of synthetic instrumentation finds its roots in the well-accepted technologies and techniques behind radar and EW transmitters and receivers, SDRs, mobile handsets, wireless infrastructure, components and subsystems, and other communications systems designed and fielded today.
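
To make the idea concrete, the following sketch shows a spectrum-style measurement "synthesized" from two generic building blocks; the simulated digitizer stands in for a real hardware module, and all names and values are illustrative:

```python
# Synthesizing a spectrum measurement from generic building blocks:
# a digitizer block supplies raw samples, a DSP block turns them into
# a spectrum. Neither block "is" a spectrum analyzer; the combination is.
import numpy as np

def digitizer_block(fs=1e6, n=4096):
    """Stand-in for a hardware digitizer module: returns sampled data."""
    t = np.arange(n) / fs
    return np.sin(2 * np.pi * 100e3 * t) + 0.01 * np.random.randn(n)

def spectrum_block(samples, fs=1e6):
    """DSP building block: windowed FFT -> power spectrum in dB."""
    win = np.hanning(len(samples))
    spec = np.fft.rfft(samples * win)
    power_db = 20 * np.log10(np.abs(spec) / len(samples) + 1e-12)
    freqs = np.fft.rfftfreq(len(samples), 1 / fs)
    return freqs, power_db

freqs, power_db = spectrum_block(digitizer_block())
print(f"Peak at {freqs[np.argmax(power_db)]/1e3:.1f} kHz")
```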

The synthetic architecture also enhances the ability to upgrade the test system, as well as the systematic handling of obsolescence. When an upgrade or obsolescence situation arises, it's only necessary to add or replace the functional blocks directly impacted, not the entire instrument suite or the associated measurement and test applications. This reduces the cost of handling obsolete instruments, in addition to reducing the technical risks associated with the effort.

With the synthetic system, an organization can create a wide array of signal types, including digital, analog, power, RF, and microwave. This is accomplished by using modular hardware components, systems software, and modular measurement and applications software. The architecture of the synthetic system provides the unique ability to exploit the hardware, measurement, and applications capability separately and together. Hardware-agnostic software measurement libraries are also protected from the risk of re-development as hardware capability evolves.

The synthetic system addresses multi-industry test issues. It is defined by combining modular hardware and software components to form a powerful new class of test instrumentation that offers distinct advantages over the one-box, one-function capability of traditional rack-and-stack instruments. Through the synthetic system architecture, it's possible to utilize multiple parallel paths to improve testing time and throughput by four to 10 times over traditional rack-and-stack configurations.
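
A minimal sketch of where that throughput gain comes from: assuming independent test sites (the per-site test function below is hypothetical, with a sleep standing in for instrument I/O), parallel paths run concurrently instead of serially:

```python
# Exploiting parallel stimulus/measurement paths: independent DUT sites
# are tested concurrently rather than one after another.
from concurrent.futures import ThreadPoolExecutor
import time

def test_site(site_id):
    """Hypothetical per-site test: stimulus, settle, measure."""
    time.sleep(0.5)                      # stands in for real instrument I/O
    return site_id, "PASS"

start = time.monotonic()
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(test_site, range(4)))
print(results, f"elapsed {time.monotonic() - start:.2f} s")  # ~0.5 s, not 2 s
```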

Key benefits of synthetic test systems include:

  • Reduced cost per unit of test
  • Improved test time and throughput
  • Reduced test equipment needs and test system configurations
  • Faster and more accurate measurements
  • Simplified system-level calibration
  • Reduced capital, maintenance, and ownership costs
  • Reduced product obsolescence and upgrade issues
  • Future-ready platforms for next-generation measurement algorithm development
  • Platform independence and a system-reuse model
  • Abstraction of test applications and measurement software from systems hardware and software configuration.

The synthetic system reduces upgrade and product-obsolescence issues. Hardware, system software, application, and measurement functional blocks can each be replaced completely and independently as required. This reduces upgrade expenses and the system-integration risks that can compromise a test application configuration.

The synthetic system also resolves recalibration issues, because calibration and functional test loops are programmed directly into the system functional blocks. This allows calibration routines to be executed at run time and on a continuous basis, eliminating the pre-scheduled downtime that's often required to recalibrate a given system. In addition, the continuous, embedded calibration capability enhances the overall integrity of the test system.
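
One plausible way to embed such a run-time calibration loop is sketched below; the class, the coefficients, and the refresh interval are illustrative assumptions, not a specific vendor implementation:

```python
# Continuous, embedded calibration: correction coefficients live with the
# measurement path and are refreshed at run time, so no scheduled
# recalibration downtime is needed.
import time

class MeasurementPath:
    def __init__(self):
        self.gain, self.offset = 1.0, 0.0
        self._last_cal = 0.0

    def _self_calibrate(self):
        """Stand-in for an embedded cal loop against an internal reference."""
        self.gain, self.offset = 0.998, 0.002   # would be measured, not fixed
        self._last_cal = time.monotonic()

    def measure(self, raw_reading, cal_interval_s=60.0):
        if time.monotonic() - self._last_cal > cal_interval_s:
            self._self_calibrate()              # calibrate in-line, at run time
        return raw_reading * self.gain + self.offset

path = MeasurementPath()
print(path.measure(1.2345))
```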

The concept of synthetic instrumentation treats the hardware in the same way that object-oriented programming treats software "modules." Consequently, object-oriented software matches perfectly with the synthetic instrumentation approach. Advanced synthetic implementations utilize software objects that correlate one-to-one with the functional hardware modules, and they implement stimulus and measurement algorithms as software objects as well.

Encapsulating all required information about a functional hardware module makes it easy for intelligent software to combine modules in different configurations and determine the final stimulus or measurement capability. Simply put, if you know the transfer function for each of the modules, then you can combine them to create complex stimuli or measurements. Since the modules are treated as objects, one or more sets of calibration coefficients can also reside in each object. Thus, modules can be combined in many different ways while still maintaining high-quality, sensitive system-level calibration all the way out to the interfaces of the unit under test.
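
The sketch below illustrates the idea under a deliberately simplified linear model: each module object carries its own calibration data, and the system-level transfer function out to the UUT plane is the composition of the per-module transfer functions. All module names and values are illustrative:

```python
# Modules as objects: each carries its own transfer function and calibration
# coefficients, so any chain of modules yields a system-level correction
# out to the UUT interface.
class Module:
    def __init__(self, name, gain_db):
        self.name = name
        self.cal = {"gain_db": gain_db}   # calibration data lives in the object

    def transfer_db(self):
        return self.cal["gain_db"]

def path_gain_db(modules):
    """System-level transfer = composition of the module transfer functions.
    In dB, composing linear gains reduces to summing them."""
    return sum(m.transfer_db() for m in modules)

# Source -> amplifier -> cable -> UUT connector
chain = [Module("synth", 0.0), Module("amp", 20.0), Module("cable", -1.5)]
requested_dbm = -10.0
print(f"Level at UUT plane: {requested_dbm + path_gain_db(chain):.1f} dBm")
```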

Further benefits of this object-oriented approach include the ability to provide integrated diagnostics and fault prediction for the test system and the DUT. Integrated diagnostics are very difficult to perform in traditional test systems that use sequential programming techniques. With the object-oriented synthetic system approach, intelligent software can easily monitor the operational state and status of the hardware and software objects to provide real-time diagnostics. Adding trending to the system can also provide fault prediction capabilities.
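
As one hedged illustration of trend-based fault prediction, the snippet below fits a line to a made-up drifting health metric from a monitored module object and extrapolates when it would cross a failure limit:

```python
# Fault prediction by trending: poll each object's health metric over time
# and extrapolate its drift to predict when it will cross a failure limit.
import numpy as np

def predict_crossing(times, values, limit):
    """Fit a line to the trend and estimate when it reaches the limit."""
    slope, intercept = np.polyfit(times, values, 1)
    if slope <= 0:
        return None                       # no degrading trend
    return (limit - intercept) / slope    # predicted time of failure

hours = np.array([0, 100, 200, 300])
lo_leakage = np.array([0.10, 0.13, 0.17, 0.20])   # illustrative drift data
eta = predict_crossing(hours, lo_leakage, limit=0.5)
print(f"Predicted limit crossing at ~{eta:.0f} h of operation")
```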

The synthetic systems environment is agnostic to the choice of hardware platform and interfaces. The solution provider can mix PXI, LXI, PCI, PCI-X, VXI, GPIB, Ethernet, USB, or any other communications bus, including switch-fabric architectures, because the interfaces between hardware and software modules are also treated as plug-in objects.
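
A minimal sketch of interfaces as plug-in objects, assuming nothing beyond the Python standard library: the measurement code talks to an abstract transport, and a raw-socket SCPI transport (port 5025 is the conventional LXI raw-SCPI port) is just one interchangeable implementation. The class names are illustrative, not a specific product API:

```python
# Interfaces as plug-in objects: measurement code never sees the bus type,
# so GPIB, LXI/Ethernet, USB, etc. become interchangeable plug-ins.
from abc import ABC, abstractmethod
import socket

class Transport(ABC):
    @abstractmethod
    def query(self, command: str) -> str: ...

class LxiTransport(Transport):
    """Raw-socket SCPI over Ethernet (LXI instruments commonly listen on 5025)."""
    def __init__(self, host, port=5025):
        self.addr = (host, port)

    def query(self, command):
        with socket.create_connection(self.addr, timeout=2) as s:
            s.sendall((command + "\n").encode())
            return s.recv(4096).decode().strip()

def identify(bus: Transport) -> str:
    return bus.query("*IDN?")   # works for any Transport implementation

# e.g. identify(LxiTransport("192.0.2.10"))  # would query a LAN instrument
```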

The hardware configuration is abstracted from the software components and modules, providing the ability to reconfigure the test system at will. The synthetic approach thus yields an extremely powerful systems architecture, one that makes reuse and scalability possible.

An approach to a synthetic test system
The ideal DFT&M solution provides all necessary data and control interfaces, plus simulation, operational, and emulation characteristics (i.e., the test environment) required by the DUT to function and communicate while it's being tested, as if it were embedded in the actual system environment (Fig. 1).

As the level of integration and complexity of the DUT increases, the operational behavior, communication modes, functional aspects, and types of data handled by the test environment (TE) must be able to adapt through software reconfiguration and dynamic reallocation of test resources.

Given the multitude of both internal and external I/O (interface) processes taking place simultaneously in the DUT, the TE has to be able to perform multiple complex measurements that are accurately controlled and finely synchronized. Often this occurs under real-time conditions, on the same block of sampled and processed data, without losing continuity of the overall stimulus, response, and data-streaming process.

This concurrent and finely synchronized process ensures that the cause-and-effect relationships among the results, and the behaviors of the parameters being controlled and analyzed, are consistently maintained. Synchronization also provides accurate, low-uncertainty, and highly correlated data to the DFT&M engineer.
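
To illustrate why sharing one sampled block matters, the sketch below computes several measurements from a single simulated capture; because they come from the same samples, the results are mutually consistent by construction (the signal and noise values are illustrative):

```python
# Several measurements computed from one captured block: because they share
# the same samples and timestamp, the results stay mutually correlated.
import numpy as np

fs = 1e6
t = np.arange(8192) / fs
block = np.sin(2 * np.pi * 50e3 * t) + 0.005 * np.random.randn(t.size)

rms = np.sqrt(np.mean(block**2))                      # power measurement
spec = np.abs(np.fft.rfft(block * np.hanning(block.size)))
peak_hz = np.fft.rfftfreq(block.size, 1 / fs)[np.argmax(spec)]  # frequency
crest = np.max(np.abs(block)) / rms                   # crest factor

print(f"RMS {rms:.3f}, peak {peak_hz/1e3:.1f} kHz, crest {crest:.2f}")
```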

With time-to-market pressures continuously shrinking the DUT's development cycle, the test system should provide all test-environment capabilities for the feasibility assessment, development, integration, validation, production, and maintainability phases of the DUT. Having multiple, segregated test equipment and test groups for each of the various stages of modern DUT life cycles is impractical, both economically and in terms of the availability of test resources (i.e., equipment and personnel). What's needed, then, is a design-for-test-and-manufacturing test environment composed of synthetic solutions and resources that can adapt dynamically, seamlessly, and cost-effectively to the various life-cycle phases of the DUTs (Fig. 2).

Rising DUT integration and complexity, coupled with shrinking development phases, will also make it harder for traditional test equipment to provide highly specialized, useful, and completely up-to-date test functions, procedures, and associated results (e.g., the pushbutton and data-display style of a traditional instrument). Instead, test solutions will include software-configurable measurement consoles (Fig. 3).

Such consoles utilize libraries of fundamental measurements that the DFT&M engineer will be able to modify, integrate, sequence, and augment through simple scripting, graphical, and other human-interface techniques to best suit the latest test requirements for the new DUT. Following this line of reasoning, an increasing level of functional and control/data I/O interaction will most certainly occur between the DUT and the TE. This will require even more common and integrated road maps and a higher degree of cooperation between the DUT developer and the TE provider.
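
A toy sketch of such a console, assuming a simple registry pattern (not any particular product's API): fundamental measurements are registered by name, and the engineer's "script" is just an ordered list of measurement names:

```python
# A measurement "console" as a library plus a user script: fundamental
# measurements are registered by name, and the DFT&M engineer sequences
# them with plain data rather than modifying the system itself.
import statistics

MEASUREMENTS = {}

def measurement(name):
    def register(fn):
        MEASUREMENTS[name] = fn
        return fn
    return register

@measurement("mean")
def mean(samples): return statistics.fmean(samples)

@measurement("peak")
def peak(samples): return max(samples)

def run_sequence(script, samples):
    """Execute a user-authored test sequence against captured data."""
    return {step: MEASUREMENTS[step](samples) for step in script}

print(run_sequence(["mean", "peak"], [0.1, 0.4, 0.3]))
```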

It's important to realize that, even though we usually don't state it explicitly, DUT developers have been embedding IP test cores (IPTCs) in their devices for many years. These IPTCs can be rather simple in nature when they support low-level built-in-test (BIT) functionality. They can be more complex when supporting externally stimulated and controlled boundary-test functions, where multiple vectors and combinations can be used to verify the overall performance of a DUT through interactions limited to the I/O level.

This scenario is becoming even more common in software-defined devices, in which the test interface is enacted mostly through intelligent software interaction between the DUT and the TE. Here, the optimal performance of the DUT's built-in test capabilities will only be achieved when the TE can "see beyond," and effectively communicate across, the DUT/TE interface. Again, this implies close collaboration and coordinated road-map and development activities between the DUT developer and the TE provider.

From these considerations, one could argue that in this rapidly changing environment, traditional test solutions based on multiple, individually tailored instruments won’t be able to keep pace with the rate of change in design, test, and maintenance requirements, as well as test ensemble complexity. In fact, engineering roles are rapidly merging.

There's an urgent need for a DFT&M solution that can adapt rapidly to the performance requirements of new devices. This new test system will also enable custom measurements that aren't rigidly bound by specific functionalities and human-interface characteristics associated with traditional test equipment.

In this scenario, truly synthetic solutions allow the DFT&M engineer to work seamlessly from inception to maintenance of products. This highly integrated, modular, software-driven synthetic architecture also favors proactive and cost-effective management of obsolescence by limiting impact to single, integrated modules, not the entire test system.

As synthetic test solutions gain acceptance and rapidly expand their foothold, it's necessary to ensure that the DFT&M engineer selects truly synthetic systems, with hybrid integration capability to accommodate various standards (Fig. 4).

In general, proven synthetic test system solutions are provided by companies that also routinely develop and integrate system components. These companies have experience and expertise in integrating all system elements within a fully calibrated, synchronized, and traceable-to-standard test environment. Only then will a truly synthetic (i.e., measurement-based) system perform accurately, reliably, and consistently, delivering a truly superior DFT&M solution.

Ultimately, fully synthetic, measurement-based, highly integrated, software-configurable, adaptive test environments are the way of the future.
