Industry Happenings

ITC again ascending

A generally upbeat tone highlighted ITC 2014 in Seattle. The technical sessions were well-attended, and this year’s firing-line questions from industry experts added a great deal of interest.

In his keynote address, Synopsys’ Aart de Geus covered test topics ranging from finFETs to silicon brain augmentation, with systemic complexity as the underlying theme. According to de Geus, yet another 10x gain in test efficiency is needed. By providing lower-cost, better ICs, he said, you don’t just differentiate your product; you add value by changing your customer’s opportunity space.

This year’s Optimal+ customer presentation was given by Carl Bowen of AMD, who discussed applying adaptive test technology to AMD production. Both companies underestimated the amount of work required, and also the nearly immediate gains that would result, which helped change attitudes among detractors, Bowen said. He stressed the need for a champion and for knowing who supported and who resisted the new idea. Where you can show that the new approach truly delivers better results, the detractors have little to fall back on.

The test compression session began with Global Foundries’ Brian Keller describing merged test patterns for different types of cores, which allowed multiple core types to be tested simultaneously. A lively Q&A concluded that wrappers were central to the method’s success.

Janusz Rajski of Mentor Graphics presented an improved compression technique called isometric test compression. The scheme includes a template register that controls the points at which the scan data is allowed to change state: it doesn’t have to change, but it can. Within a test cube, even less data than previously thought is absolutely necessary; the template register is configured to retain this data while the ATPG provides the rest, supporting a reduced toggling rate as well as high test compression.
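As a rough picture of the change-template idea, the toy Python sketch below (not Mentor’s implementation; the template, data, and names are invented for the example) shows how repeating the previous scan value everywhere except at template-enabled positions keeps the toggle count low:

```python
# Toy illustration of a change template: the scan-in value may change state
# only where the template holds a 1; elsewhere the previous value repeats.
# This is a simplified sketch, not the published isometric-compression scheme.

def expand_scan_stream(template, change_bits, initial=0):
    """Rebuild a full scan-in stream from the template and the much shorter
    list of values supplied at the allowed change points."""
    stream, value, idx = [], initial, 0
    for allowed in template:
        if allowed and idx < len(change_bits):
            value = change_bits[idx]   # data may change here (but need not)
            idx += 1
        stream.append(value)           # otherwise repeat the previous value
    return stream

# 16 shift positions but only four allowed change points
template    = [1,0,0,0, 0,1,0,0, 0,0,1,0, 0,0,0,1]
change_bits = [1, 1, 0, 1]            # the data the ATPG actually supplies
print(expand_scan_stream(template, change_bits))
# -> [1,1,1,1, 1,1,1,1, 1,1,0,0, 0,0,0,1]: only two transitions in 16 bits
```

Because the value is free to change only at the template-enabled positions, most shift cycles simply repeat the previous bit, which is what holds the toggling rate down while the short list of change values carries the care-bit information.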

In the “Big Data” session, Ali Ahmedi from the University of Texas at Dallas expanded upon the good-die/bad-neighborhood idea by giving greater weight to die closest to a given die. He also proposed including data from other wafers in the same lot, on the assumption that they would have similar properties. His method retains a die’s X-Y location but represents the die’s likely performance more accurately because it draws on more data.
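A minimal sketch of the general idea, assuming inverse-distance weighting within a wafer and a reduced weight for sister wafers in the lot (both assumptions for illustration, not details taken from the paper):

```python
# Toy distance-weighted "bad neighborhood" score; names and weights are
# illustrative assumptions, not Ahmedi's published algorithm.
import math

def neighborhood_risk(die_xy, wafer, lot_wafers=(), radius=2, lot_weight=0.25):
    """Estimate failure risk for the die at die_xy from nearby results.

    wafer      -- {(x, y): 0 for pass, 1 for fail} on the same wafer
    lot_wafers -- iterable of such dicts from other wafers in the lot,
                  included at a reduced weight
    Closer neighbors count more (weight = 1/distance)."""
    x0, y0 = die_xy
    num = den = 0.0
    for scale, results in [(1.0, wafer)] + [(lot_weight, w) for w in lot_wafers]:
        for (x, y), fail in results.items():
            d = math.hypot(x - x0, y - y0)
            if 0 < d <= radius:          # skip the die itself, limit the window
                w = scale / d
                num += w * fail
                den += w
    return num / den if den else 0.0

wafer  = {(5, 5): 0, (5, 6): 1, (6, 5): 1, (7, 7): 0}
sister = {(5, 6): 1, (6, 6): 0}
print(round(neighborhood_risk((5, 5), wafer, [sister]), 3))
```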

T. M. Mak from Global Foundries discussed silicon interposer testing in the first paper of the “Not Your Dad’s Board Test” session. Functionally, interposers are similar to double-sided PCBs, but the top-side track density of up to 500 traces/mm (about 100x the density of a fine-trace PCB) is too tight to probe; the bottom-side microbumps can be probed, however. Placing a conductive elastomer sheet on the top side provides continuity between the bottom-side microbumps and the top-side traces. One type of fault this might expose is a TSV with defective insulation; because silicon is semiconducting, each TSV must be insulated. Adding active test circuitry and determining the extent of any cracks are ongoing investigations.

In the “Validation: Pre-Silicon, Emulation, Post-Silicon” session, Mentor’s Kenneth Larsen discussed the role of hardware emulators in the development of very large ICs. Where software simulation might run a scan test at a 1-Hz rate, emulation operates at megahertz speeds; one 64-hour simulation ran on Mentor’s Veloce emulator in about a minute. Emulation also helps to verify the quality of the tools and processes used in post-silicon test, Larsen said.

And in an analog/mixed-signal session, ON Semiconductor’s R. Vanhooren addressed the use of analog fault models in test coverage and component quality. He explained that improvement in mixed-signal IC defect levels (in ppm) had slowed considerably around 2010 and claimed that undiagnosed analog faults were the root cause of a growing number of automotive electronic-system problems. Using the example of a power-on test circuit, Vanhooren showed that defect-specific masking combined with a genetic algorithm could improve test coverage.
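As a rough sketch of the search step only, the toy genetic algorithm below evolves a subset of tests toward maximal simulated defect coverage; the coverage data, fitness function, and GA parameters are invented for the example and are not from Vanhooren’s paper:

```python
# Toy GA for test selection: chromosomes are include/exclude bits per test,
# fitness rewards covered defects and lightly penalizes test count.
import random

random.seed(1)

covers = [{0, 1}, {1, 2, 3}, {3, 4}, {4, 5, 6}, {0, 6, 7}]  # defects each test detects
N_TESTS, POP, GENERATIONS = len(covers), 20, 40

def fitness(sel):
    covered = set().union(*(covers[i] for i in range(N_TESTS) if sel[i]))
    return len(covered) - 0.1 * sum(sel)

def mate(a, b):
    cut = random.randrange(1, N_TESTS)      # one-point crossover
    child = a[:cut] + b[cut:]
    child[random.randrange(N_TESTS)] ^= 1   # single-bit mutation
    return child

pop = [[random.randint(0, 1) for _ in range(N_TESTS)] for _ in range(POP)]
for _ in range(GENERATIONS):
    pop.sort(key=fitness, reverse=True)
    elite = pop[: POP // 2]                 # keep the fitter half
    pop = elite + [mate(random.choice(elite), random.choice(elite))
                   for _ in range(POP - len(elite))]

best = max(pop, key=fitness)
print("selected tests:", best, "fitness:", round(fitness(best), 2))
```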

In his keynote address, Patrice Godefroid from Microsoft Research presented software test techniques and trends. Dr. Godefroid is credited with developing the scalable automated guided execution (SAGE) test approach, also called whitebox fuzzing, which combines fuzzing with dynamic test generation. In contrast to static test generation, dynamic test generation modifies the input test data in response to the output observed when the program runs. Dynamic test generation that starts from seed input data is called directed automated random testing (DART), and fuzzing is the random, gradual modification of input data to eventually find a combination that yields the desired output.
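The loop below is a toy approximation of that feedback idea; it is not SAGE, which instruments real binaries and uses a constraint solver to negate path conditions. Here the solver is replaced by simple did-we-reach-a-new-branch feedback, and the target program and magic bytes are hypothetical:

```python
# Toy coverage-guided fuzzing loop: keep any mutated input that exercises new
# behavior and mutate it further. A stand-in for whitebox fuzzing's
# solver-driven input generation, for intuition only.
import random

random.seed(0)
MAGIC = b"BUG\x7f"                            # hypothetical guarded input

def program_under_test(data):
    """Stand-in target: nested byte checks guarding a 'defect'."""
    reached = set()
    for i, expected in enumerate(MAGIC):
        if i < len(data) and data[i] == expected:
            reached.add(f"check{i}")          # one branch per nested check
        else:
            return reached
    reached.add("defect")                     # all checks passed
    return reached

def mutate(data):
    data = bytearray(data)
    data[random.randrange(len(data))] = random.randrange(256)
    return bytes(data)

seen, corpus = set(), [b"\x00" * len(MAGIC)]  # arbitrary seed input
for trial in range(200_000):
    candidate = mutate(random.choice(corpus))
    reached = program_under_test(candidate)
    if reached - seen:                        # new behavior: keep it and reuse it
        seen |= reached
        corpus.append(candidate)
    if "defect" in reached:
        print(f"defect-triggering input {candidate} found after {trial + 1} trials")
        break
```

The feedback is what makes the search practical: blind mutation of four bytes would almost never hit the guarded path, while keeping partially successful inputs lets the loop home in on it one byte at a time.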

To verify Windows security, the SAGE test application has been running on a dedicated group of 100 test PCs for the last five years, Godefroid noted. Not all faults have the potential to cause serious and costly problems, but faults do represent possible entry points for hackers. Within limits, it’s OK for users to find and report small performance issues—the customer is the ultimate tester, he said.

The role of test has changed and continues to change. With approaches such as Agile, software test becomes a continuous part of each developer’s job. Godefroid compared the near 1:1 developer-to-tester ratio prevalent at software manufacturers years ago with the 10:1 ratio achieved today.
