Electronic Design

What's All This Quality Stuff, Anyhow?

WELLLL - I have been figuring for years that someday I would write about "quality." But aside from saying that we're all, of course, in favor of the HIGHEST POSSIBLE QUALITY, what else is there to say? Wouldn't that make a very short column? Yet despite the truism just listed above, very few of us readers drive a Rolls Royce.

So perhaps we should admit that we're all interested in very good quality at a reasonable price - a Volkswagen or Ford price, maybe even a Cadillac, Acura, or Mercedes price - very good quality, appropriate quality, but not necessarily the highest possible quality.

I still recall very plainly that VW used to advertise and brag that they had more QC inspectors than they had cars, and those old VW beetles had pretty good quality. Of course, it might be that the inspectors forced the production engineers to go back and solve the design and manufacturing problems. Maybe that's a back-door way to achieve good quality. Anyhow, I still like my old Beetle, running reliably at 287,000 miles. It's not the highest quality, maybe not as good as a Rolls Royce, but it is appropriate quality.

I recollect the story of one of the pioneering transistor companies, back in the '60s. They had agreed to ship to their customers transistors with an AQL (Acceptable Quality Level) of 2%, which was pretty good for those days. So the tester would test 98 good parts and put them in the box. Then, following her instructions, she would add 2 bad transistors to finish off the box, thus bringing the quality to the exact level desired. This went on for some time, until one of the customers got suspicious, because the two bad transistors were always in the same corner of the box! Then things were changed ....

Now let's get serious. I propose that not everybody really wants to buy parts with the highest quality and the smallest AQL. Let's consider a few practical examples. If I'm testing some gates or flip-flops, the output had better be about right, with very high confidence, on every test. If I make 20,000 bad dice in 1,000,000, and I only reject 19,700 of them, then 300 bad parts escape, and you as a purchaser might be quite displeased to receive an incoming defect rate of about 300 ppm. So, it is pretty clear-cut that for ordinary commodity ICs, the parts and the testing should be very good so that no bad parts get to the customers. Many IC manufacturers do well at this.
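The arithmetic in that example is worth spelling out. Here's a minimal sketch in Python, using exactly the numbers above, of how a test that catches most (but not all) of the bad dice still leaves a few hundred ppm of escapes:

```python
# Back-of-the-envelope escape-rate arithmetic, using the numbers above.
total_dice = 1_000_000
bad_dice = 20_000      # 2% of production is bad
caught = 19_700        # the test rejects most, but not all, of them

escaped = bad_dice - caught              # bad parts that ship anyway
shipped = total_dice - caught            # everything that passed the test
defect_ppm = escaped / shipped * 1e6     # outgoing defect rate, in ppm

print(escaped)             # 300 bad parts escape
print(round(defect_ppm))   # ~306 ppm -- roughly the 300 ppm quoted above
```

The point of the sketch: even a test that rejects 98.5% of the bad dice ships hundreds of defects per million, which is why commodity-IC testing has to be very good indeed.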

Ah, but what about precision analog components? Precision op amps, 16-bit DACs, low-noise references? Can they always be tested with full precision and confidence?

What about a high-gain op amp, such as the OP-07? If the output swing is a 10-V step (either + or -), then the maximum input error (delta V) must be less than 50 µV - which implies an open-loop gain of at least 200,000. Further, it must be less than 50 µV with a VERY HIGH confidence level every time you measure it. What if it is 34 µV p-p sometimes when you test it, and 38 µV p-p other times, and 30 µV another time? Then if you see 47 µV on another part, it might read 51 µV p-p if you test it again, and then the part would fail the test. So, your testing must be done with good repeatability and low noise.

Is it realistic to try to meet an AQL of 0.01% on such a test? Is it reasonable? Is that what you want to pay for? Would an AQL of 0.2% be just as reasonable? Let's look at another spec. The OP-07A is guaranteed for 0.60 µV p-p of noise voltage, max, in a bandwidth of 0.1 Hz to 10 Hz. When you test an amplifier for noise in a fairly small bandwidth like that, you must not expect to get exactly repeatable results. So if you measure 0.42 µV one time and 0.48 µV another time, you have to be very careful about the repeatability. If there is much noise, the guard-bands must get bigger and bigger until the spec is a LOT more conservative than what you're actually measuring. As it is, the OP-07s have 0.35 µV p-p typical, yet we're barely able to guarantee 0.6 µV p-p.
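One way to picture the guard-band problem: the internal test limit must be tightened below the published spec by at least the measurement scatter. A minimal sketch, using the 0.60-µV spec and 0.35-µV typical from the text, plus an assumed repeatability figure (this is not an actual NSC test limit):

```python
# Guard-banded test limit (repeatability figure is assumed, for illustration).
spec_limit_uV = 0.60       # published max noise, 0.1 Hz to 10 Hz, OP-07A
repeatability_uV = 0.06    # assumed scatter of the noise measurement

# The tester must enforce a limit tightened by the measurement scatter:
test_limit_uV = round(spec_limit_uV - repeatability_uV, 2)
typical_uV = 0.35          # typical OP-07 noise, from the text

print(test_limit_uV)                # 0.54 -- the internal limit
print(typical_uV < test_limit_uV)   # True: typical parts still pass easily
```

The noisier the measurement, the bigger the guard-band, and the wider the gap between what you measure (0.35 µV typical) and what you can guarantee (0.60 µV max).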

What if we said that this spec was only guaranteed at an AQL of 0.5%? Could we then guarantee 0.45 µV p-p? Would you prefer to buy that as a spec? Would that part be of any significantly lower "quality" than the one with 0.60 µV max? I myself would sooner buy the part with the looser (0.5%) AQL and the tighter spec, as I think it is a more realistic kind of spec with more realistic testing. The yield to the manufacturer might be better, the testing might cost less, and the actual noise performance might be better. What do you think? Comments invited.

At one time, one of the major suppliers of precision analog parts used to guarantee reference ICs with a certain long-term stability. In the fine print, they said that 90% of their production would meet that spec, even though they did not test for this. I dunno, but 90% seems a little rough as a quality level. Still, at times, that may be an appropriate quality level. But the company has stopped saying that ....

Now, I will get into the MEAT of the topic. I recall reading the writings of such people as W. Edwards Deming, Genichi Taguchi, and several other promoters of "quality." They argued that we should not rely on testing for high quality. We should not be trying to "test high quality" into our products, but instead we should build them with such high quality that testing is superfluous. That's what Mr. Deming said, and he was considered "the dean of the quality movement." When I first started writing this column in August of 1993, Mr. Deming was very much alive and kicking, and I was hoping to start a dialog with this feisty old-timer. However, Mr. Deming unfortunately died on December 20, 1993, and I'm disappointed that I cannot duel with him verbally. Still, maybe some of Mr. Deming's colleagues will be in a mood to hold up his end of the arguments, or explain what Mr. Deming thought.

NOW, Mr. Deming's philosophy seemed to say to the entire semiconductor industry: "Do not rely on testing. Just build everything perfect." Well, when we build good op amps, we may get a yield of 80 or 90 or maybe 93% at "wafer-sort," when we probe and test the chips on the wafer. We ink any dice that fail any test, and lots of good dice remain. After we assemble all of the good dice, we test the packaged circuits again, repeating many of the tests to weed out anything that may have gone sour, or worsened after assembly, or that may run poorly at a hot or cold temperature. Now we have a yield loss of only 1 or 2 or 3% at this "classification test" after assembly. Still, this is infinitely far away from Mr. Deming's philosophy. Why are we semiconductor manufacturers so thick-headed? Why can't we get anything right? Hmmmmm.

If you buy LM324 quad op amps in moderate volume from NSC, you pay about 38 cents and you get 4 good op amps with a quality level of about 99.9996% per package (about 1-ppm reject rate per amplifier). That may not be the best op amp you can buy, but it's pretty darned good at the price. And I claim that it's the testing that makes these amplifiers so good. If you looked in our "scrap pile," you would find some amplifiers that literally meet the published spec - but we have installed tests to make sure they're rejected. Hey, we reserve the right to reject op amps even though they meet the published specs, because we determined that there's something not quite right about them. Do we get this excellent quality by doing no test, or minimum testing? Hell no.
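The per-package figure quoted above follows directly from the per-amplifier reject rate. A quick sanity check, assuming the four amplifiers in a package fail independently:

```python
# Per-package quality from a ~1-ppm per-amplifier reject rate.
amps_per_package = 4
reject_per_amp = 1e-6            # about 1 ppm per amplifier

p_good_package = (1 - reject_per_amp) ** amps_per_package
print(f"{p_good_package:.4%}")   # 99.9996% -- the figure quoted above
```

With rates this small, the package reject rate is essentially just 4 × 1 ppm = 4 ppm, i.e., 99.9996% good packages.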

Ten years ago, we were shipping some of the highest-quality amplifiers in the industry, with an AQL of perhaps 99.92%. Pretty good in its day. But we're doing better now - by a factor of 100 or 200. WHY? Because we have better tests. If you took some 1984 op amps and ran them through our 1994 testers, we might pass some that we failed before - because our tests are better. And we would surely fail some that were passed back in 1984 - because our testing is better.

Over the past few months, I've asked several friends who work in Quality functions, test engineering, etc., and looked around myself trying to find out the answer to these questions: If the whole semiconductor industry is acting in contradiction to the "no testing" philosophy, has anybody made statements specifically rebutting it? And has anybody explained why Deming's advice does not apply to the semiconductor industry?

As of yet, I haven't found anyone who has ever heard this stated. SO, you're going to hear it right HERE for the first time: There may be many kinds of manufacturing where testing to achieve high quality is the wrong way to do it. But in the semiconductor manufacturing business, testing is EXACTLY the right thing to do to get high quality.

NOW, I will not say that testing is exactly the cause of the good quality. We design these circuits to be of very high quality, very manufacturable. Our wafer fab usually has excellent quality, so these ICs usually do work well with high yields. But if a run has fewer parts than normal that meet all specs, we don't let that affect the quality of the parts we ship. We do our testing, and the test results tell us what's causing us yield loss.

When we go back to solve a basic design or manufacturing problem that hurts the yield, that may also help the real quality, too. Yet I claim that it's the testing that drives the quality. What kind of IC, or car for that matter, would you buy if you were assured it was so good that no testing was needed?? How would the manufacturer know that the quality really was very good if he did not do tests?

If a Taguchi expert told you how to change your design, but decided not to check his results because his engineering was of such high quality that he didn't have to do any checking, would you buy that? NOTE - almost all of our customers are now accepting our testing as quite adequate. So when they buy circuits from us, they do not have to retest them for conformance to the guaranteed specifications at their "incoming inspection." That seems reasonable.

But of course when the customer assembles 22 or 46 or 79 components onto a board, then it's time to do some more testing for that assembly. I recently heard several friends complain about the poor quality of some electronic stuff they bought recently - some made in the U.S.A., and some imported, too. Right out of the box, the equipment failed to work, and you could tell that it had never been tested. Are you readers seeing that, too? Are some manufacturers starting to take "no testing" literally? If they talked themselves out of final test, a final functional test, that's pretty scary ....

I had a talk with Melodie McClenon, manager of our Data Acquisition Test Engineering group. She agreed that we do find it important, occasionally, to add tests. They're necessary to screen out the few parts with bad quality which customers occasionally reject and gripe about. Often, a test is added to guard retroactively against some kind of a "Quality Accident," where a customer finds quite a few parts out of spec. Later, some manager may propose to delete that test as "superfluous." Melodie says she sometimes has to fight like heck to keep the test in, and she can usually do this by proving that the test, while usually unnecessary, is keeping a 1-ppm failure rate from being shipped.

I agreed that if, on most days, the test takes a few milliseconds and cuts a 1-ppm reject mode out of the shipments, and it's ALSO able to help avoid the possibility of some disastrous bad parts being shipped on 1 day out of 1,000, that sounded like good judgment to me. I told her that I was on her side, and if she needed anybody to help her, I would fight like hell for her. It's always good to find a friend with ideas worth fighting for.

I have heard some guys argue that we should improve the yield of our op amps to 100%. I'm skeptical of that idea. At present, we have optimized for the lowest total cost for a good die, including the cost of testing. If we try to optimize the yield, with special wide spacings, special coatings, special processes, special redundant circuits, and so on, we might get a much lower percentage of bad dice, and a slightly higher percentage of good dice. But there would be fewer good dice per wafer - and fewer per dollar. Would these fewer dice be of higher quality? Higher reliability? How much better? And, if they are a little better, how much extra would you like to pay for them? Would you like to buy some with no testing? Just think about all of the money you would save by cutting out the testing ....
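The good-dice-per-dollar argument is easy to put numbers on. Here's a sketch with entirely made-up wafer costs, die counts, and yields, purely to illustrate how a "yield-optimized" layout with wide spacings can lose money even as the yield percentage goes up:

```python
# Cost per good die: ordinary layout vs. a "yield-optimized" one.
# All numbers here are assumed, purely for illustration.
wafer_cost = 500.0             # dollars per processed wafer

# Ordinary layout rules: many small dice, decent yield
good_a = 10_000 * 0.90         # 9000 good dice per wafer
cost_a = wafer_cost / good_a   # ~5.6 cents per good die

# Wide spacings, redundancy, etc.: 30% fewer dice fit, yield rises to 98%
good_b = 7_000 * 0.98          # 6860 good dice per wafer
cost_b = wafer_cost / good_b   # ~7.3 cents per good die

print(cost_b > cost_a)         # True: better yield, worse economics
```

The yield percentage improved from 90% to 98%, yet every good die got more expensive - which is the sense in which the ordinary layout is already optimized for the lowest total cost per good die.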

Other people have argued that it's bad to have a low yield, because the quality must be bad. Nobody would want to buy an IC made with a yield worse than 50%, right? Well, almost every computer in the world has a processor chip that came out of fab with a wafer-sort yield much lower than 50%. That low a yield would sound pretty dumb for an op amp or gate or counter. But for a big processor, a yield of 10 or 20 or 30 or 40% may be quite rational, and very reasonable compared to making smaller dice with higher yield. It certainly doesn't mean that the dice which pass all tests are of poor or marginal quality. So far, we're doing exactly what Mr. Deming says not to do - yet our quality and reliability keep getting better, and cheaper. And everybody in the Semiconductor industry is doing the same.
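One standard way to see why a big processor die can rationally yield far below 50% is the classic Poisson defect-density model, Y = exp(-D·A). This model isn't from the column; it's a textbook approximation, and the defect density below is an assumed number for illustration:

```python
import math

# Classic Poisson yield model: Y = exp(-D * A), where D is the defect
# density (defects per cm^2) and A is the die area. D is assumed here.
D = 0.5                     # defects per cm^2, an illustrative value

small_op_amp_cm2 = 0.02     # a tiny op-amp die
big_processor_cm2 = 3.0     # a large processor die

y_small = math.exp(-D * small_op_amp_cm2)   # ~0.99: yields in the 90s
y_big = math.exp(-D * big_processor_cm2)    # ~0.22: well under 50%

print(round(y_small, 2), round(y_big, 2))   # 0.99 0.22
```

Same fab, same defect density: the small die yields in the high 90s while the big die yields in the 20s - and the parts that pass are every bit as good.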

One engineer pointed out that to plan to ship a part with the "correct" amount of quality, one might want to know if the intent is to ship a "Yugo," or a "Rolls Royce," or perhaps a "Honda Civic." If we wanted to sell the LM324 as the Yugo of the industry, we could surely increase the yield by shipping parts that just met every published spec - but that might be wrong. The LM324 may have a Yugo level of specs, but we want to make sure it's tested out to a high grade of quality, a "Honda" grade of quality and reliability. Therefore, when you buy one, it comes with some awfully good quality. Do you know anybody who actually builds in worse quality on cheaper parts? Our customers, such as HP, Delco, or IBM, would never stand for that ....

Recently, I was surprised to learn that most of the early NSC linear ICs did not have any protective coating of Vapox or Nitride over the die. So, 24 years ago, LM108AHs came with a metal wire just flying through the air above the die, above the aluminum metallization, with no Vapox passivation over the die.

Were those old LM108s and LM101As less reliable than what we make these days? Maybe. But they have shown us a lot of proven reliability. Are new LM308ANs in epoxy mini-DIP packages less reliable than dice in hermetic packages? Well, the new mini-dip ICs are all lead-bonded with automatic machines, with much less variability and better repeatability than the old human-controlled lead-bonders. These ICs are ALL awfully reliable nowadays, and have excellent quality.

(Note, technically we must avoid confusing reliability with quality. Yes, a part that fails after a customer puts it to work can make the customer just as unhappy as a part that never met spec in the first place. But we have to treat those two problems separately. We want to provide parts with excellent quality and excellent reliability, and we have to be careful not to hurt one while helping the other. For example, if we do a burn-in test, we must make sure it doesn't harm a part's reliability.)

Now, a great majority of what Mr. Deming said makes excellent sense to me, and I'm pleased to recommend his ideas as an excellent legacy of his career. He observed, for example, that when making cars or airplanes or lawn mowers where the quality is randomly poor, and you're forced to go back and rebuild or replace the engine or the transmission (or the paint) before you can ship the product, it's a LOUSY way to do business.

So, do any readers have any opinions on testing? Have you seen anybody else's good, convincing statements or arguments on this topic? I'd love to hear from you. I do know that NSC has always done some testing on every component they shipped, and it's unlikely that we'll change that policy. In fact, more testing usually helps provide higher quality, compared to less testing.

As a final note, I sent an early letter on this topic to Mr. Deming at his home. He did not reply at that time, but it's not possible to draw any conclusions from that. I just wish I had been able to find a good excuse to talk to him earlier!

Comments invited! / RAP
Robert A. Pease / Engineer

Originally published in Electronic Design, February 7, 1994.

RAP Update: This column generated a LOT of interest and fan mail. About 62 letters. A number of readers wanted to argue that I was wrong. "Bob, if you just increased your quality a little bit more, you wouldn't have to do testing ..." WRONG. I will be publishing "QUALITY II" in a few months ....
