The Value of an Optimized Engineering Test Station

Engineering productivity has a profound impact on IC product development, affecting time to market, cost of test, yield, and quality of results. To successfully produce the next generation of devices, IC developers face several practical engineering and business challenges, such as reliably testing higher-performance designs, keeping the test cycle short without impacting quality, speeding design turns, and characterizing and maximizing device performance.

To meet these challenges, there is a growing need for engineering test, the process of verifying and characterizing new devices prior to high-volume production. This process is important not only to ensure that ICs will work, but also to improve yields and promote greater reliability.

New devices also are running at rapidly increasing clock and data rates and with far greater complexity than in the past. This is the result of multiple-bus architectures that allow different sections of a device to effectively run at different speeds. Current test technology is hard-pressed to meet the timing demands of these new devices, and programming the equipment to handle the tasks can be difficult. As a result, the design community is relying more on dedicated engineering testers to get the job done quickly and cost effectively.

Why Engineering Test?

Examining a design’s performance on real devices rather than through simulations offers many advantages. First, it eliminates simulation/device discrepancies. Second, analyzing the behavior of several prototypes or preproduction versions allows designers to better understand and predict differences among production copies of what theoretically are identical devices.

Testing real devices is particularly valuable when the goal is to push the performance envelope of an existing device design by increasing its speed and possibly lowering its operating voltage. Higher speeds, higher circuit densities, and lower voltages all tend to aggravate the consequences of crosstalk and other noise effects that are difficult to model with available EDA tools.

Engineering Test Confirms Design

Engineering test stations are designed to meet a very different requirement than production test systems, commonly referred to as automated test equipment (ATE). ATE is targeted at the high-volume testing of devices where the designs already have been verified. These systems can perform comprehensive go, no-go tests on production devices at the rate of hundreds or even thousands of devices per hour. But for these systems, test throughput is far more important than test interactivity. Often, setting up an ATE for a new device requires extensive tester knowledge and programming skills. Even once this is done, the system is restricted in the richness of device debug information and the ease of control available to the design engineer.

An engineering test station complements production ATE by quickly getting the device ready for production. Engineering test stations are designed to allow easy setup of tests and quick change of test parameters and patterns.

Once a device design is thoroughly debugged, the critical test information is passed on to production ATE so that the production ramp-up can occur as quickly as possible. Most importantly, the design is confirmed and reliable before starting production.

Device Timing Challenges

Only a few years ago, a majority of digital devices operated at data rates of 20 Mb/s to 50 Mb/s. More recently, we have seen these data rates increase, with 66 Mb/s and 100 Mb/s as the norm and some pushing further still.

But devices have been running at much higher speeds internally. To improve performance, microprocessors and other complex devices have been equipped with circuitry that multiplies the external clock to achieve a higher internal (core) clock speed.

Initially, these clocks ran at twice the data rate, but soon core-to-bus ratios of 3:1 and higher became common. Today, core clock rates of 300 MHz to 400 MHz are common in high-performance consumer devices, while I/O data rates, which have remained below 100 Mb/s, now are a significant bottleneck to performance.

As a result, devices are moving to multiple bus architectures, with some buses running at far higher speeds than in the past. For example, a microprocessor may use a dedicated high-speed bus to communicate with an external cache at I/O rates up to the full core speed.

New devices may perform multiple I/O transactions simultaneously, with as many as four or five groups of pins working independently, each at different data rates derived from the core clock. Figure 1 illustrates timings on a device with two buses running at 2:1 and 3:1 ratios to the core clock.

Most test systems available today are not well equipped to tackle the complex timing requirements of devices such as these. They are not designed to support multiple data rates, and most run at a maximum base frequency slower than the data rate needed to fully test the device at speed.

Test engineers often resort to tester pin multiplexing or use special test modes to provide fast data to the device. But pin multiplexing can be prohibitively expensive, and 2× or similar modes often place severe restrictions on I/O control or per-vector timing flexibility.

Often, it takes the skills of an expert test engineer to manage the special tester modes and work with the test patterns. Modifications or what-if scenarios, often run by designers in an engineering environment, can be very difficult to set up and require the skills of the test expert who designed the test program. In other cases, these changes are not possible at all.

Cycle-by-Cycle Timing Control

It would be beneficial to step back and examine a different approach to tester timing. If the tester were allowed to run at the highest data rate required by the device, part of the problem would be solved. For example, a data pin could provide a drive data edge or a sample (compare) edge to examine device outputs at a rate of 500 Mb/s. A clock pin could drive two edges, creating a return-to-zero or return-to-one pulse every tester cycle, at frequencies up to 500 MHz. No special modes or multiplexing would be required.

Also important for engineers performing device debug is the capability to have cycle-by-cycle control over edge placement and tester cycle time (device frequency). This capability is essential for performing speed-path analysis using the cycle stretch-and-shrink technique (see sidebar).

But what can be done about the added complexity of multiple buses running at different rates? In general, these can be thought of as running at ratios based on the core clock. If the tester already is operating at the full core clock rate, then each transaction on a bus can be viewed as a waveform made up of one or more tester cycles. In this way, it is easy for the tester to create the timings required by the device because the tester is operating in much the same way as the device.

Timing Partition Approach

To enhance timing test techniques, you can group independent bus pins into timing partitions. Then these pins can be managed as a group using waveforms created for those pins. Each pin continues to have unique, settable edge timings, while the length (in number of tester cycles) of each waveform determines the data rate seen by those pins.

For example, pins on one bus may be controlled by a waveform that is two tester cycles in length, while pins on another bus may use a waveform three cycles in length. This would correspond to the 2:1 and 3:1 core-to-bus ratios seen in Figure 1. Waveforms need not always be the same length, so the ratios used for a group can even change in the course of a pattern when necessary.

The engineer’s view of a pattern can be greatly simplified using this approach. The concept of a device cycle now is separated from the more restrictive tester cycle. Now, the source pattern data, which is viewable and editable through graphics tools, represents actual transactions on a device port, rather than tester-cycle-based events (Figure 2). This timing system interprets each pin’s data and applies it in the tester according to waveforms defined for that pin.

If a waveform spans several tester cycles, you can specify the behavior each pin should follow in each tester cycle. Using this capability, it is possible to create traditional data formats, such as return-to-complement or surround-by-complement, where desired (Figure 3).
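The waveform expansion described above can be sketched in a few lines of code. The following is a minimal illustration only, with hypothetical names, not any tester's actual programming interface: each pin's device-cycle data is mapped onto tester cycles according to a per-pin waveform, whose length sets the core-to-bus ratio and whose per-cycle behaviors can create formats such as surround-by-complement.

```python
# Minimal sketch (hypothetical names): expand one pin's device-cycle pattern
# data into tester-cycle events using a waveform definition. The waveform is
# a list of per-tester-cycle behaviors; its length sets the bus-to-core ratio.

def expand_pattern(device_data, waveform):
    """Map one pin's device-cycle data onto tester cycles.

    device_data : sequence of bits, one per device transaction
    waveform    : per-tester-cycle behaviors; 'D' drives the data bit,
                  '~' drives its complement
    """
    tester_cycles = []
    for bit in device_data:
        for behavior in waveform:
            if behavior == 'D':
                tester_cycles.append(bit)
            elif behavior == '~':
                tester_cycles.append(1 - bit)
    return tester_cycles

# Bus A runs at a 2:1 core-to-bus ratio (two tester cycles per transaction).
bus_a = expand_pattern([1, 0, 1], ['D', 'D'])

# Bus B runs at 3:1 with a surround-by-complement waveform: the data bit is
# preceded and followed by its complement within each transaction.
bus_b = expand_pattern([1, 0], ['~', 'D', '~'])
```

Because each waveform is defined per pin, different pin groups run at different ratios from the same tester-cycle clock, just as the buses in Figure 1 do.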

Understanding and properly applying the timing system of any test system can be the greatest challenge in determining test requirements. In an engineering system, demand the capability to provide I/O data rates that meet any bandwidth requirements you expect to encounter within the next several years.

Look for a system that can provide high data rates and support multiple bus ratios without requiring special modes that limit flexibility and hamper debug. Look for a system that represents device timing and pattern information in device-oriented formats to ensure that data is easy to interpret and modify.

Deep Capture Memory

Engineering test is all about results. The actual response obtained from the part is important to understanding why a device is failing. It is not sufficient to merely know which pins failed a test or where the first failing vector was, as in production test. Product and design engineers want to see a detailed pattern display showing the forced data, the expected response, and the actual response captured from the device at each vector. ATE may offer capture memory but can only capture a few hundred or few thousand vectors. While this may seem like a lot, test patterns can be far larger, which means that it may be impossible to find out what failures occurred during the majority of vectors in a pattern run.

Look for a system with a very large capture memory, ideally, as deep as the system’s force/expect memory. This allows complete device response data for most large patterns to be captured in a single run, in less than a second.

Capturing data in a single run guarantees that the entire failure image is self-consistent. If the pattern must be re-run to collect a failure signature, there is a risk the response will not be identical in subsequent runs, especially if failures are intermittent or marginal.

Once test data is captured, it can be reviewed using interactive tools. Then it is saved to files so that it can be reviewed later or processed by off-line utilities to aid in fault diagnosis. One common analysis technique calls for different failure signatures to be stored and compared. Then, software tools can compare the results and help pinpoint problem areas.
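The signature-comparison step can be illustrated with a short sketch. The data layout below is an assumption for illustration only: each failure signature is treated as a set of (vector, pin) pairs that miscompared, and simple set operations pinpoint what changed between two runs or test conditions.

```python
# Minimal sketch (hypothetical data layout): compare two failure signatures,
# each a set of (vector, pin) pairs that miscompared, to highlight failures
# unique to each run and those common to both.

def compare_signatures(sig_a, sig_b):
    """Return failures only in A, only in B, and common to both."""
    return sig_a - sig_b, sig_b - sig_a, sig_a & sig_b

# Two example runs, perhaps captured at different supply voltages.
run1 = {(1021, 'D3'), (1022, 'D3'), (1187, 'A0')}
run2 = {(1021, 'D3'), (1187, 'A0'), (2044, 'D7')}

only_a, only_b, common = compare_signatures(run1, run2)
```

Failures common to both runs point to hard faults, while failures unique to one condition suggest marginal, condition-dependent behavior worth closer study.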

Mobility for Interface Needs

Engineering test activities often require that the tester be used with a variety of analysis and handling equipment: microprobe or e-beam stations, temperature forcing systems, or even production probers or handlers. To meet these requirements, an engineering test station should be mobile, without the requirements for chilled-water plumbing, hard-wired power, and compressed air that often restrict placement of some test systems. This mobility also makes it easier to move the test station to where it’s needed during different stages of a project or among multiple projects.

Device interfacing—delivering the signals to the device—is a challenge that must be recognized as well. An engineering test station must be able to interface to a wide variety of equipment types and still provide extremely high-bandwidth transmission lines between the tester and the device.

The signal delivery system must offer a precision impedance-matched environment from both the driver and receiver of the test system to each terminal of the device under test. To aid in this, the number of connectors along any path should be kept to a minimum, and where they are required, matched impedance connectors should be used.

Another essential feature of any high-speed test system is support for time-domain reflectometry (TDR), which precisely measures each signal path and automatically adjusts all timing edges for the best accuracy, as seen by the device.

Test-Setup Software

In engineering test, your most important view of a test system is through its software. The suite of software provided with a test system determines how you interact with the system and what it can do. Equally important, the design of the software determines who is able to use the system: what skills are required, how much training is necessary, and how difficult it is to perform interactive changes to the test setups and interpret meaningful results.

Engineering test software helps you easily and rapidly gather valuable information about a device. Simple, interactive test software tools should respond in real time, allowing you to quickly change testing parameters to meet each new problem-solving challenge. Software tools encourage you to experiment and ask what-if questions, giving you the opportunity to probe deeper and push the design envelope.

Most engineering test systems are set up, rather than programmed, to test a device. The ideal software tools should allow device engineers to build tests by creating tables of information resembling a device’s spec sheet. The software then should quickly and easily execute these tests and provide immediate feedback. The tool suite must provide a user-friendly graphical user interface, enabled by such current software technology as the Java programming language.

Once a set of tests is developed, a test sequencer should allow you to build and modify a sequence of tests to run, cut and paste to reorder tests, and insert other tests to create new flows, all without requiring programming. Shmoo plots and even external utilities may need to be sequenced to complete full device evaluation or characterization.

Occasionally, you will want to share components of a test plan, such as a table of timing parameters, with others. An engineering test station should be able to export a device’s complete setup and pattern information in a clear, readable ASCII format that easily can be parsed by other tools. On import, it should read files in this same format, facilitating the interchange of setups between different test plans and providing a clear target for auto-generation tools.
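To make the value of a readable interchange format concrete, here is a minimal sketch. The "name value unit" layout below is hypothetical, not any particular vendor's file format; the point is that plain ASCII lines like these are trivial for external tools to generate and parse.

```python
# Minimal sketch of a readable ASCII interchange format for a timing table.
# The layout is hypothetical; it shows why plain "name value unit" lines are
# easy for auto-generation tools and parsers to handle.

def export_timing(params):
    """Serialize {name: (value, unit)} to ASCII lines."""
    return "\n".join(f"{name} {value} {unit}"
                     for name, (value, unit) in params.items())

def import_timing(text):
    """Parse the same format back into a dictionary."""
    params = {}
    for line in text.strip().splitlines():
        name, value, unit = line.split()
        params[name] = (float(value), unit)
    return params

setup = {"t_cycle": (2.0, "ns"), "t_drive": (0.5, "ns"), "t_compare": (1.8, "ns")}
assert import_timing(export_timing(setup)) == setup  # round-trip check
```

A format that round-trips cleanly like this gives auto-generation tools a clear target and lets setups move between test plans without manual rework.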

User-Adaptable Software

An engineering test station’s software should not limit itself to file interchange for communications with external tools. Sometimes, manufacturers develop special test techniques that they would like to automate and integrate with their engineering test methodology. Or they may use custom software packages that help auto-generate test setups, move tests to production systems, or operate across a network to centralize data collection. Many times, these packages are site-specific and proprietary.

A flexible client/server software architecture, supporting client utilities written in Java or C++, can ease integration with any site-specific software. External utilities can hook up to the test software, even to a system running a live device, and work in conjunction with the system’s standard graphics tools to provide a very flexible and powerful suite of tools to the designer.

Client programs can immediately receive updates when a test condition or test result changes. Languages like Java even provide an easy way to build graphical interfaces for specific tasks.
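The live-update idea behind such a client/server architecture can be sketched as a simple publish/subscribe pattern. The class and method names below are assumptions for illustration, not an actual tester API: clients register callbacks with the server, and any change to a test condition is pushed to every registered client immediately.

```python
# Minimal sketch (interface names are assumptions): a test-server object that
# pushes updates to registered clients whenever a test condition changes,
# illustrating the live client/server idea.

class TestServer:
    def __init__(self):
        self._clients = []
        self._state = {}

    def subscribe(self, callback):
        # A client registers a callback to receive (key, value) updates.
        self._clients.append(callback)

    def set_condition(self, key, value):
        # Changing a test condition notifies every live client immediately.
        self._state[key] = value
        for notify in self._clients:
            notify(key, value)

server = TestServer()
seen = []
server.subscribe(lambda k, v: seen.append((k, v)))
server.set_condition("vdd", 1.8)   # every subscribed client sees this change
```

The same mechanism lets a remote engineer's display track a live session, since their client is simply one more subscriber to the server's updates.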

Again, the importance of true interactivity and ease of use of an engineering test station’s software tools cannot be emphasized enough. Expect the tools to provide clear, device-oriented presentation of data and immediate feedback of parameter changes and test results.

Look for a package that easily adapts to customization and links to other software resources. Most of all, remember that an engineering test station should be thought of as a powerful instrument that can be used by many engineers on a design and debug team, not just specially trained test engineers.

Cooperative Engineering Environment

Increasingly, a design team for a new IC includes multiple groups located at separate sites, often geographically distant. Moreover, the locations where silicon is designed, produced, and tested are often not on the same continent.

To support a cooperative engineering environment where users can share data and even interact on a single system testing a device, an engineering test station should provide a client/server architecture. Using this technology, multiple live clients can be updated immediately when a change is made. An engineer at one site could set up a test, then allow another user to connect and view the same pattern data, including captured errors. Control could be passed to the remote user, who may wish to change some conditions or run a shmoo, for example.

Conclusion

Incorporating an optimized engineering test station into a product’s development cycle can help IC designers more effectively develop and release new devices. Test stations designed for engineering test can deliver the critical capabilities needed for this task—speed, flexibility, and ease of use.

The chosen system should have a timing system capable of accommodating devices with high-data-rate requirements and multiple independent buses. It should offer powerful analysis features, such as deep capture memory, to pinpoint failures anywhere within a pattern run.

The software needs to be very interactive and easy to learn. Then, the tester can be viewed as a powerful tool to help device engineers accomplish their goals, not a programming challenge that has to be mastered. However, the software should provide flexibility and extensibility for those applications where customization can provide added benefits.

Finally, the system should facilitate cooperation between work groups by allowing results and even tester control to be shared over worldwide networks.

With a well-planned engineering test strategy and the proper choice of equipment, manufacturers will be able to detect and eliminate design flaws sooner, increase performance and yield, and bring products to market faster.

About the Author

Peter M. Bego is a senior applications engineer in the Advanced Product Development Marketing group at Integrated Measurement Systems. He has 15 years of experience in the test industry. After receiving a degree in electrical and computer engineering from the University of California/Davis, Mr. Bego held senior applications and management positions at Teradyne and Megatest. Integrated Measurement Systems, 9525 SW Gemini Dr., Beaverton, OR 97008, (503) 626-7117.

The Stretch-and-Shrink Approach

As the operating frequency of a device increases, a point will be reached when the device begins to fail. At this frequency, a signal path within the device no longer will be able to keep up, generating faulty data. This incorrect data will propagate to the device outputs and will be seen as a pattern failure at a certain vector location. Your goal is to determine what circuitry inside the device caused this problem and what circuitry should be redesigned if the device is to operate faster.

In today’s pipelined parts, many cycles may be required to propagate the bad data to a device output. As a result, studying device operation at the failing vector may not be sufficient to find the problem. You need to find out which clock cycle actually induced the failure and what operation within the device was the most sensitive to that frequency.

One technique becoming popular involves stretching or shrinking particular test cycles until the device passes or fails, a practice that spotlights areas in the design most susceptible to speed-path problems. The frequency of the test is changed on individual cycles by modifying the clock period on a cycle-by-cycle basis.

Using an iterative approach, the cycle or group of cycles responsible for the speed-path failure can be isolated. Figure 4 shows a sequence of test vectors with one shrink cycle and the resulting failure in a later vector.

The first step in this process is to tighten the timing of a device working at a nominal frequency until it fails. Armed with a passing and failing clock frequency, you then can modify the clock period of individual cycles.

Figure 5 illustrates how a pulse ripple technique can be applied to a sequence of test vectors. In this example, starting from a passing condition, you can shrink the first clock pulse and execute a functional test. If this test passes, restore the first pulse to its normal width and shrink the second pulse. This process is repeated until the test fails.

A similar technique does not restore the modified pulses to their original timing, but leaves a sequence of modified pulses as shown in Figure 6. In some cases, this domino technique makes it easier to isolate a region of the pattern responsible for the fault before zeroing in on individual vectors.

These techniques can be used to modify a single core cycle or sequences of two or more core cycles, if desired. To find a speed-path fault, some manufacturers begin with a test pattern that passes completely, then speed it up until the device fails.

Others start from a failing condition and slow individual vectors until the test passes. Because this technique is applicable with either approach, it is commonly referred to as the cycle stretch-and-shrink technique. You always can apply this technique by performing each step manually. But in many cases, you will want to consider an automated process that can quickly apply this iterative approach to a large pattern or more than one pattern.
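The automated form of the pulse ripple search can be sketched as follows. The device model here is a stand-in invented for illustration; a real setup would reprogram the tester's per-cycle period and rerun the pattern at each step.

```python
# Minimal sketch of automating the pulse ripple technique: shrink one clock
# cycle at a time (restoring the previous one) until the functional test
# fails. The failing cycle exposes the speed path.

def find_speed_path(run_test, n_cycles, nominal_ns, shrunk_ns):
    """Return the index of the first cycle whose shrunk period fails the test."""
    for i in range(n_cycles):
        periods = [nominal_ns] * n_cycles
        periods[i] = shrunk_ns          # shrink only cycle i this iteration
        if not run_test(periods):
            return i                    # this cycle exposed the speed path
    return None                         # no single shrunk cycle caused a failure

# Stand-in for a real device: fails whenever cycle 5 runs faster than 2.4 ns.
def fake_device(periods):
    return periods[5] >= 2.4

cycle = find_speed_path(fake_device, n_cycles=10, nominal_ns=3.0, shrunk_ns=2.0)
```

The domino variant described above would simply skip restoring earlier cycles, leaving each shrunk pulse in place to bracket a failing region before narrowing to individual vectors.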

Copyright 1998 Nelson Publishing Inc.

November 1998
