Path to Widespread Adaptive Test Revealed at CAST Workshop

The latest release of the International Technology Roadmap for Semiconductors (ITRS) identifies adaptive test as the most essential test process and methodology change needed to lower test costs. Unfortunately, achieving that vision will require a massive transformation of a fragmented test ecosystem in which diverse data formats, transport protocols, and levels of database accessibility currently amount to a vast “Tower of Babel.”

At a recent workshop organized by the Collaborative Alliance for Semiconductor Test (CAST), a SEMI special interest group, however, a path to enabling adaptive test (AT) was presented that is based on industry-wide standards for datalog formats, test cell communications, and HTTP-based representational state transfer, or REST, architecture. While early in development, the standards-based approach to industry-wide adaptive test was met with optimism by the diverse audience of chip manufacturers, fabless companies, test equipment providers, and AT solution providers.

According to the ITRS, AT is “the practice of going beyond the use of fixed limits and invariant test flows and operations, by using data that has been collected during the process of IC manufacturing to influence, change, or “adapt” how a device or system is tested, or to alter its manufacturing flow.” The concept of AT is that all parts don’t need to be tested in the same way and that cost-effective, quality-conscious test flows must identify and screen for variation by designing new test methods capable of constantly adjusting test conditions, test content, and test flows. The SEMI CAST Workshop, entitled “Implementing Adaptive Test,” explored the barriers to AT—primarily the lack of data format functionality and data communications standards—and presented the progress of several CAST efforts to overcome these obstacles.

Peter Maxwell, from Aptina and a member of the ITRS AT subgroup, began the workshop with an overview of AT. Interest in AT is high, he said, because of its proven ability to lower test costs and achieve better quality and reliability. AT is a critical concept for more effective and rapid yield learning and can enable optimized multi-chip packages. AT can modify production test processes by changing test conditions, such as voltages or clock frequencies; test content, such as adding or deleting specific tests or test patterns; and test limits, altering pass/fail specifications. AT can also alter manufacturing flows, adding or deleting test insertions such as burn-in, and can modify test outcomes by changing the binning of some die based on post-test analysis of the die’s test results.
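Changing test limits based on collected data is one of the simplest forms of AT to illustrate. The sketch below, a minimal illustration and not any vendor's actual algorithm, derives pass/fail limits from the measured population itself (a mean ± k·sigma window) instead of applying fixed limits; the function names, readings, and the k factor are all assumptions for the example.

```python
# Minimal sketch of one adaptive-test idea: derive pass/fail limits
# from data collected during manufacturing (a dynamic mean +/- k*sigma
# window) instead of using fixed limits. All names and values here are
# illustrative assumptions.
from statistics import mean, stdev

def adaptive_limits(measurements, k=2.0):
    """Compute pass/fail limits from the population itself."""
    mu, sigma = mean(measurements), stdev(measurements)
    return mu - k * sigma, mu + k * sigma

def screen(measurements, k=2.0):
    """Return a pass/fail flag per part against the adaptive limits."""
    lo, hi = adaptive_limits(measurements, k)
    return [lo <= m <= hi for m in measurements]

# Seven parts cluster near 1.0; the eighth is a statistical outlier
# that fixed limits might have passed.
readings = [1.01, 0.99, 1.02, 0.98, 1.00, 1.03, 0.97, 5.0]
flags = screen(readings)
```

A production scheme would typically compute the window from a rolling reference population rather than the lot under test, but the principle of data-driven limits is the same.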

The difficulty in implementing AT comes primarily from the lack of a common data model. Collected data must be organized into a structured model that captures the identified and meaningful variations necessary for analysis and control. The data model must account for various decision algorithms that address response time and latency, appropriate scope (part, wafer, or lot), test variables, and database types. The fastest data latency and accessibility starts with real-time requirements in the tester, followed by local database needs in the test cell, which require one to two seconds of latency. Beyond this, production databases require accessibility in minutes, and worldwide access follows via cross-company databases consisting of design, foundry, OSAT, and even end-user data from boards and products.
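The tiered latency model described above can be sketched as a simple routing rule: each AT decision is answered by the broadest-scope data store that still fits its time budget. The tier names and latency figures below follow the paragraph above; the function and its interface are assumptions for illustration only.

```python
# Sketch of the latency tiers described in the text: route each AT
# decision to the broadest-scope data store that can still respond
# within the decision's time budget. Budgets are illustrative.
TIERS = [
    ("tester", 0.001),            # real-time, inside the test program
    ("test-cell DB", 2.0),        # local database, ~1-2 s latency
    ("production DB", 300.0),     # site-level database, minutes
    ("cross-company DB", 86400.0) # worldwide design/foundry/OSAT data
]

def pick_tier(max_latency_s):
    """Return the broadest-scope tier fast enough for the budget."""
    for name, latency in reversed(TIERS):
        if latency <= max_latency_s:
            return name
    raise ValueError("no tier satisfies the latency budget")

# A per-touchdown retest decision needs millisecond data; a lot
# disposition can wait on the production database.
retest_source = pick_tier(0.01)
disposition_source = pick_tier(3600.0)
```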

At the workshop, representatives from ST Microelectronics, Qualcomm, Texas Instruments, Optimal Test, Nvidia, Maxim, and others discussed the status of AT tools, AT implementations, and the difficulties in achieving full deployment. Historically, local test cell decisions have been the sole responsibility of the test floor, but AT decisions are made throughout the manufacturing process. Data must be carefully structured so that it is available when a decision needs to be made. Algorithms will be owned by multiple involved parties, including the wafer fab, design house, and test floor. Some algorithms may originate from commercial providers; others, from the involved parties themselves. These algorithms must execute smoothly alongside one another in a real-time environment. The underlying data infrastructure must support the mission-critical data collection and access required throughout the entire supply chain, likely spanning multiple companies, geographic areas, and cultures.

Enabling Industry-Wide Adaptive Test

The SEMI CAST group has been addressing key aspects of AT since 2010 through workgroups organized to address the standard test data format (STDF) and test cell communications. The CAST STDF work group has been working on creating a schema for the existing STDF standard to enable efficient access to data for new applications, including adaptive test. Ajay Khoche, leader of the work group, introduced his group’s efforts to propose a new STDF standard called Rich Interactive Test Database, or RITDB. The proposed standard recommends that STDF be converted to an SQLite database design to permit queries and enable ATE data logging to serve in a comprehensive data model for adaptive test and other database applications. RITDB allows data from any data source in the test cell to be easily collected and used. SQLite was chosen because it is a scalable, flexible, public domain SQL database that is widely available and supported by a wide range of software tools and libraries.
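The practical payoff of moving datalogs from a sequential STDF file into SQLite is the ability to run ad hoc queries. The sketch below illustrates that idea with Python's built-in sqlite3 module; the table name, columns, and rows are illustrative assumptions, not the proposed RITDB schema.

```python
# Sketch of the RITDB idea: test results stored as a queryable SQLite
# database rather than a sequential STDF log. The schema shown here is
# an illustrative assumption, not the work group's actual proposal.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE results (
    lot_id TEXT, wafer_id TEXT, part_id INTEGER,
    test_num INTEGER, test_name TEXT, value REAL, passed INTEGER)""")
rows = [
    ("LOT1", "W01", 1, 100, "vdd_leakage", 0.12, 1),
    ("LOT1", "W01", 2, 100, "vdd_leakage", 0.95, 0),
    ("LOT1", "W01", 3, 100, "vdd_leakage", 0.10, 1),
]
conn.executemany("INSERT INTO results VALUES (?,?,?,?,?,?,?)", rows)

# The kind of ad hoc question a flat STDF file cannot answer without a
# full sequential parse: per-wafer yield as a single SQL query.
yield_pct = conn.execute(
    "SELECT 100.0 * AVG(passed) FROM results WHERE wafer_id = 'W01'"
).fetchone()[0]
```

Because every major language ships with SQLite bindings, the same file could feed a tester-resident AT algorithm, a test-cell dashboard, and an offline analysis tool without format conversion.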

In support of the standard’s development, an STDF data translator has been developed to test and verify the applicability of RITDB for data logging. The translator is a standalone Java GUI application that allows the user to convert one or more STDF files into corresponding RITDBs (an SQLite file is created for each STDF file) and is available free of charge through the work group.

The other key foundation for adaptive test was presented by Teradyne’s Lorenzo Simonini, leader of the CAST Test Cell Communications Work Group. That group's recommendations focused on ATE interface standards that would not disrupt legacy data applications, such as prober/handler communications, but would provide an additional data path to accommodate data-driven AT and other applications. The recommendation was to build an architecture based on REST using HTTP commands between clients and servers. RESTful architectures conventionally consist of clients and servers: clients initiate requests to servers; servers process requests and return appropriate responses; and requests and responses are built around the transfer of representations of resources. Other features of the proposed architecture are the use of HATEOAS, an abbreviation for Hypermedia as the Engine of Application State, for customization of optional data, and JSON as the MIME format, for brevity, human readability, and to eliminate the need for special data formats. In support of the proposal, Simonini presented a comprehensive “use case” document that summarized the target needs of the test cell and began to define the message list for standard test cells.
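To make the REST, JSON, and HATEOAS ingredients concrete, the sketch below stands up a toy HTTP server whose root resource carries a hypermedia link that a client follows to discover a second resource. The resource paths, field names, and payloads are assumptions for illustration; they are not the CAST work group's message list.

```python
# Sketch of a REST-style test-cell interface: plain HTTP, JSON
# payloads, and HATEOAS links so a client can discover resources
# instead of hard-coding paths. All paths and fields are illustrative
# assumptions, not the proposed CAST standard.
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class TestCellHandler(BaseHTTPRequestHandler):
    RESOURCES = {
        "/testcell": {
            "state": "testing",
            # HATEOAS: the response itself tells the client where to go next.
            "links": [{"rel": "lot", "href": "/testcell/lot"}],
        },
        "/testcell/lot": {"lot_id": "LOT1", "parts_tested": 42},
    }

    def do_GET(self):
        body = self.RESOURCES.get(self.path)
        if body is None:
            self.send_error(404)
            return
        data = json.dumps(body).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(data)))
        self.end_headers()
        self.wfile.write(data)

    def log_message(self, *args):  # keep the demo quiet
        pass

server = HTTPServer(("127.0.0.1", 0), TestCellHandler)  # ephemeral port
threading.Thread(target=server.serve_forever, daemon=True).start()
base = f"http://127.0.0.1:{server.server_port}"

# Client side: fetch the root resource, then follow its HATEOAS link.
root = json.loads(urllib.request.urlopen(base + "/testcell").read())
lot = json.loads(
    urllib.request.urlopen(base + root["links"][0]["href"]).read())
server.shutdown()
```

Because the interface is ordinary HTTP and JSON, any tool on the test floor, from a tester OS to a browser-based dashboard, can consume it without a special driver.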

A new workgroup was proposed at the workshop that would use many of the same concepts for database-to-database accessibility. The new work group would focus on developing a data-sharing method for the purpose of enabling adaptive test using a standardized RESTful API that could be used across the spectrum of data sources from design, through production, to field data such as RMAs. Wesley Smith of Galaxy Semiconductors proposed the RESTful solution for semiconductor test, similar to those now widely used by many of the largest e-commerce and cloud-based services on the Internet. Like the Test Cell solution, data access is done entirely through easily-applied HTTP, and the results contain a map to the data hierarchy. The architecture supports standard API development where data are easily consumable by every major programming paradigm and allows any level of hierarchy to be displayed. Data can be highly compressed using gzip so bandwidth requirements are minimized.
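The two transport ideas mentioned above, a hierarchy map delivered as JSON and gzip compression to minimize bandwidth, can be sketched in a few lines. The lot/wafer/part hierarchy shown is an illustrative assumption, not a schema from the proposal.

```python
# Sketch of the database-access proposal's transport: hierarchical
# results serialized as JSON and gzip-compressed to minimize
# bandwidth. The lot -> wafer -> part hierarchy is an illustrative
# assumption.
import gzip
import json

hierarchy = {
    "lot": "LOT1",
    "wafers": [
        {"wafer": "W01",
         "parts": [{"id": 1, "bin": 1}, {"id": 2, "bin": 3}]},
    ],
}

# Server side: serialize and compress the response body.
raw = json.dumps(hierarchy).encode()
compressed = gzip.compress(raw)

# Client side: any HTTP client that sends Accept-Encoding: gzip
# recovers the original hierarchy map transparently.
restored = json.loads(gzip.decompress(compressed))
```

For realistic payloads of millions of repetitive test records, gzip typically shrinks the JSON by an order of magnitude; for the tiny example above, the compression header overhead dominates.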

Attendees at the workshop agreed that the combination of RITDB and a RESTful test cell and database architecture could provide the optimum foundation for industry-wide data communications supporting adaptive test and other big data or cloud-based applications in semiconductor test. This standards-based architecture is now planned to be supported by a glossary of terms for common test attributes and objects that can be applied to databases and APIs for AT and other applications. The approach also allows for extendable, custom, and proprietary labeling for nonstandard and future applications. While additional work is required and more industry input is needed, the workshop response suggests that comprehensive adaptive test, as envisioned by the ITRS, could be realized across the full test ecosystem.

Next steps in validating this vision for implementing AT will occur immediately through the CAST work groups during ongoing teleconferences and meetings. An additional workshop on implementing adaptive test is anticipated for the second quarter of 2014. Once industry inputs have been obtained, industry-wide standards are anticipated to be developed through the consensus-based SEMI Standards process. To participate in work group discussions and standards development, contact the work group leaders and SEMI staff at [email protected].
