
Test Vision 2020: From train-station clocks to AI’s threat to test engineers

July 16, 2018

San Francisco, CA. In conjunction with SEMICON West, Test Vision 2020 took place July 11-12 with the theme “The Next Step in Intelligent Test.” The event covered automotive, big-data, DUT-interfacing, RF, and memory applications. Presentations ranged from factory lunch-whistle clock calibration to the potential for artificial intelligence to make test engineers obsolete. Participants also weighed in on the original meaning of the “2020” in Test Vision 2020.

General chair Stacy Ajouri of Texas Instruments welcomed attendees Wednesday morning, and program chair Derek Floyd of Advantest followed up with a description of the event. The technical program got underway when Ben Brown of Xcerra presented the “2017 Best ATE Paper Award.” He noted that past nominees and award winners reflect significant collaboration across companies and continents. While lowering the cost of test is always an issue, he said, the nominated and winning papers indicate that the industry prizes performance.

This year’s nominated papers covered topics including advanced SerDes jitter characterization and low-loss devices at 80 GHz and above. The winning paper described a method for low-cost dynamic-error detection in linearity testing of SAR ADCs.

Nimot Jain of the Indian Institute of Technology accepted the award on behalf of his coauthors at IIT and Texas Instruments India. He summarized their work, noting that the uSMILE algorithm is ineffective if SAR ADC linearity is predominantly affected by dynamic errors resulting from incomplete settling of the output voltages of the DACs employed in the ADC’s successive-approximation architecture. “We extended uSMILE to detect dynamic errors as well,” he said.
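The paper itself holds the specifics of the extended uSMILE method. As a hedged illustration of the underlying problem only, the toy Python model below (the settling model and all names are my own, not from the paper) shows how incomplete DAC settling during successive approximation produces code-dependent linearity errors, visible as DNL in a ramp-based code-density test.

```python
def sar_convert(vin, nbits=8, settle=1.0):
    """Convert vin (0..1 full scale) with an nbits SAR ADC.

    `settle` is the fraction of each DAC step that actually settles
    before the comparator decides (1.0 = ideal, <1.0 = dynamic error).
    Crude toy model; not the uSMILE method described in the paper.
    """
    code, vdac = 0, 0.0
    for bit in range(nbits - 1, -1, -1):
        step = 2 ** bit / 2 ** nbits      # bit weight in full-scale units
        vtest = vdac + settle * step      # incompletely settled DAC level
        if vin >= vtest:
            code |= 1 << bit
            vdac = vtest                  # carry the settled level forward
    return code

def max_dnl(nbits=8, settle=1.0, npts=1 << 16):
    """Ramp-based code-density test: worst-case |DNL| in LSBs."""
    hist = [0] * (1 << nbits)
    for k in range(npts):
        hist[sar_convert((k + 0.5) / npts, nbits, settle)] += 1
    ideal = npts / (1 << nbits)
    return max(abs(h / ideal - 1) for h in hist)

# An ideally settled DAC yields essentially zero DNL; 5% incomplete
# settling produces gross, code-dependent nonlinearity.
print(max_dnl(settle=1.0), max_dnl(settle=0.95))
```

The point of the sketch is that the error is dynamic rather than static: it depends on which bits were kept on the way to a given code, which is why a static-linearity model alone cannot capture it.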

Keynoter draws on test history

Gayn Erickson, CEO of Aehr Test Systems, delivered the Test Vision 2020 keynote address. He asked rhetorically why a burn-in guy might be addressing a test workshop. But he said he still considers himself a test guy, tracing his experience in memory test, for example, back through Verigy, Agilent, and Hewlett-Packard—a history that many in the audience, including yours truly, remember well.

Erickson posed several questions facing the industry: Are we providing enough test coverage? What failure rate is good enough? How long do semiconductors and sensors need to remain reliable? And what are the market impacts of failure rates on critical applications including security, health monitoring, and biosensing?

He commented on the life expectancies of cars employing internal combustion engines or electric motors. An internal-combustion engine, he pointed out, is constantly blowing itself apart but nevertheless can power a car for 200,000 miles over maybe 15 years. In contrast, an AC induction motor can last as long as 40 years in constant use. Can semiconductors, sensors, and batteries keep up?

Should you own a 40-year-lifetime electric vehicle, perhaps boredom will prompt you to trade it in long before the semiconductors, sensors, and batteries give out. But in the rideshare model, where fleet owners determine when to retire a vehicle, the electronics might be required to keep pace with the reliability of the electric motor. And whereas semiconductor makers talk of defects in parts per billion, Erickson said the key spec for drivers and riders is “walk homes per million.” People who have had to walk home after a breakdown are unlikely to purchase the same brand again. In addition, should a failure affect an autonomous vehicle or an ADAS system, walking home might turn out to be the best outcome.
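To see why parts-per-billion chip specs land in the per-million-vehicle territory Erickson describes, consider a back-of-envelope calculation. The numbers below are hypothetical, chosen only for illustration, not figures from the talk; the mechanism is simply that a vehicle is stranded if any one of its many chips fails.

```python
# Hypothetical inputs, for illustration only
chips_per_car = 1000     # semiconductor devices per vehicle
chip_fail_ppb = 10       # annual field-failure rate per chip, ppb

p_chip = chip_fail_ppb * 1e-9
# Probability that at least one chip in the car fails in a year
p_car = 1 - (1 - p_chip) ** chips_per_car
walk_homes_per_million = p_car * 1e6
print(round(walk_homes_per_million, 2))
```

Aggregating a thousand ppb-class parts per vehicle multiplies the rate a thousandfold, turning parts-per-billion component specs into the per-million failure rates that drivers and riders actually experience.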

Erickson extended his talk beyond automotive applications, commenting on topics from secure electronic payments to biometrics. As to the former, he said, many restaurants in Shanghai no longer accept credit cards—the city is going walletless, and you need your smartphone to pay. You may engage with an online delivery service that can electronically unlock your front door to leave a package in the foyer. What are the semiconductor reliability consequences for such applications?

Any test and measurement process, Erickson implied, has inherent limitations. He recounted an anecdote, apparently from many years ago, of a plant operator in a Texas town whose key responsibility was blowing a whistle to signal lunchtime each workday at noon, based on the reading of his watch. The operator’s engineering-student son asked how he knew the watch was accurate. The operator said he reset his watch by the clock at the train station as he passed through each morning. The engineering student then asked the obvious follow-up: how did he know the station clock was accurate?

The operator was nonplussed and didn’t respond for a couple of days. Finally, he admitted that he had queried the stationmaster, who explained he reset his clock every day at noon on hearing the plant lunch whistle.

The calibration drift experienced by the ostensible timekeepers in this Texas town may have been mitigated by a Fermi solution, or maybe not. The test problems of the future may involve smears of probability on a stochastic fabric, in which the vagaries of timing variations (the plant whistle vs. the train-station clock) blur probabilistically with what in the old days we considered well-defined, deterministic corner cases involving parameters like voltage and temperature.

But for the present, Erickson suggested, it might be best to optimally deploy the tools that are currently well understood: wafer-level and package-part test, testing at hot and cold extremes, functional and parametric ATE, performance grading, system-level test (SLT), and all of the above. Success, he said, will involve optimal selections of sampling, test vehicles, lot-acceptance test, statistical process control, just enough test, and the optimal percentage of burn-in. He concluded by noting the industry will move forward to increase confidence in test results.

The role of AI and machine learning in test

A Thursday event at Test Vision 2020 offered the audience an opportunity to weigh in on the potential role of AI and machine learning in test. The session, hosted by Paul Berndt of Microsoft, allowed attendees to vote by smartphone or laptop on various questions.

It was almost a foregone conclusion in this event that the semiconductor test industry will need more data-science and machine-learning expertise. The key question was where this expertise should come from. A solid majority of participants thought the test-engineering community had to add data-science skills. The alternative would be competing for talent with industries ranging from banking to medicine, insurance, and advertising, and data scientists from those fields would lack the domain expertise the semiconductor test industry requires.

Participants suggested many applications for AI in test. For example, automotive semiconductors will be continuously running BIST on themselves. AI could possibly facilitate immediate feedback (including failure type and die ID) across the entire supply chain.

The good news is that an overwhelming percentage of participants expressed skepticism about autonomous test—AI, they predicted, would not make test engineers obsolete.

Participants suggested several possible topics for next year’s Test Vision 2020: counterfeit chip and hardware Trojan detection, faster yield learning, test-flow optimization, test avoidance at wafer level, field-failure prediction, self-repair, load-board maintenance, photonics test, over-the-air test, and AI processor test.

What does “2020” mean?

An additional question posed to the audience was, to what does the “2020” in Test Vision 2020 refer? One possibility is the state of the test industry in the year 2020, an answer preferred by at least one participant, who said the name would need to change, as the year 2020 is fast approaching.

The other alternative? It refers to the “2020” spec optometrists use for perfect vision. There is no definitive answer, but history buffs might want to read my report on the original version, which occurred at ITC 2007 under the banner “ATE Vision 2020.” Here is the relevant excerpt from my interview at the time with then workshop program chair Scott Davidson:

Q. The title of ATE Vision 2020—does that refer to the year 2020 or the “2020” spec of perfect vision?

A. The latter. The year 2020 is a little bit too far out to predict. The goal of the workshop is to look five to 10 years out.

And by the way, when asked in 2007 whether a similar event could be repeated, Davidson said, “I think there may be enough material that we don’t cover this year that we could hold a similar workshop in the future.”

About the Author

Rick Nelson | Contributing Editor

Rick is currently a Contributing Technical Editor. He was Executive Editor for EE from 2011 to 2018. Previously he served on the staffs of several publications, including EDN and Vision Systems Design, and he has received awards for signed editorials from the American Society of Business Publication Editors. He began his career as a design engineer at General Electric and Litton Industries and earned a BSEE degree from Penn State.
