Thank you for your questioning series on Fuzzy Logic and the many claims that have been made for it. Maybe F.L. has a lot going for it, but I have yet to find a clear case where apples have been compared with apples. I hope a proponent of F.L. can clearly show one. I have read three of the recommended texts on F.L. and numerous magazine articles, and still remain unconvinced. (Though I must admit the articles by James Sibigtroth of Motorola are persuasive.) The Sendai subway train is often quoted as an example of how good F.L. is. But where is a complete and analytical comparison with a good proportional controller implemented in conventional logic?
What has especially clouded the case for F.L., for me at any rate, is the puerile, insulting, and trivial treatment of its supposed advantages. Let me give you a couple of quotes:
- "Fuzzy Logic lets you assign values instead of crude categories. For instance, if we arbitrarily defined old as over 40, a woman who was 41 would be offended by the old label. With F.L., we can assign membership values to the young and old sets." Really! I'd just call her 41 and avoid any fuss. It's not that I'm a feminist, it's just my analog upbringing.
- "F.L. does not constrain us to use one or zero, instead we can use intermediate values." Oh great, here I've been thinking that 8-bit micros can support 256 discrete values. Oh well, I learn something new each day.
With this sort of crappola being quoted as reasons why we should embrace F.L., I can get a tiny bit cynical about it. Please, can we see some serious proof from a F.L. proponent?
BARRY T LENNOX
Barry, we haven't stopped looking for good Fuzzy comparisons. When we find one, we'll let you know.—RAP
Dear Mr. Pease:
I myself might have had an encounter with Roy McCammon's truck-stabilized riometer (March 21 issue) at McMurdo Station, Antarctica a couple of years ago.
I was asked to look at the instrument in question, which was indicating random anomalous bursts of RF energy once or twice a day. The chart recorder indicated only increases during these episodes, so the problem has to be noise or RFI, right? We couldn't use the "diagnostic truck" because some genius had measured the gasoline in the tilted main storage tank at the deep end only, so we were running out of this fuel, and it was three months until the next tanker was to arrive.
So we checked the power supply, walked the cable lines, wiggled all the connectors, took the remote receiver and antenna apart, tried to blame the Navy's 7-kW HF transmitters, tried to blame the power system, scratched ourselves in various places, had a beer, and went to bed.
Next day, out of ideas and probably out of instinct (I grew up with vacuum tubes), I started pounding on the equipment racks. Lo and behold, the bursts appeared.
The problem turned out to be a matrix switch on the riometer's output, the product of some helpful fellow who had never gone to soldering school.
Why did the output rise during the fault? Two devices shared the low-impedance signal: one was high-impedance, and the other had a substantial positive input bias current. When the bad joint opened, that bias current had nowhere to go and pulled the floating node upward. I don't think it was one of your parts. The lessons? 1. If you haven't proven the problem, you probably fixed the symptom. 2. If the problem is smarter than you are, kick the damn thing.
Just as I said in my book on p. 143—when working with intermittents, apply a little FORCE!—RAP
Thank you for the voice of sanity on the QUALITY stuff. We are a semiconductor division of a large power-supply company. Ever since we first started, we have been using our own, practical, appropriate standards of quality to ship the best parts we could. And we have had to fight people, quality experts even, who want to impose rigid standards on us that would actually cause us to ship more bad parts to customers, even though we would then conform to a "high quality program."
(Personally, I am of the opinion that it is impossible to create a documentation and specification system that makes a process idiot proof. I confess to having picked up a phrase of yours as an operating motto: Thinking is required.)
Our quality philosophy comes down to this: If it smells fishy, don't ship it. How do you know when it's fishy? Smell it, taste it. In other words, test the hell out of it.
- And look at the data.
- And use your intelligence.
- And be skeptical.
There are parts that pass all of the data-sheet specs, but have a parameter that falls outside the usual distribution (note that I do not say "normal" distribution—more later). If you look at the histogram of the parameters of a large number of units, these are the parts in the "tails" of the distribution that trickle away from the main body of the material, yet remain within specification. Those are fishy. They're not the same as the others. Get rid of 'em. Common sense, and many years of analyzing customer returns and field failures, tells me that these are likely to be the parts that give customers conniptions, even though the data sheet tells me there's nothing wrong with them.
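The tail screening described above is easy to sketch in code. The following is a minimal illustration, not Astec's actual procedure: the "main body" window here is a hypothetical median-plus-or-minus-k-times-IQR rule, with k chosen arbitrarily.

```python
import statistics

def screen_tails(readings, spec_lo, spec_hi, k=3.0):
    """Flag parts that pass the data-sheet spec but sit in the tails
    of the distribution, away from the main body of the material."""
    xs = sorted(readings)
    q1 = xs[len(xs) // 4]            # rough first quartile
    q3 = xs[(3 * len(xs)) // 4]      # rough third quartile
    iqr = q3 - q1
    med = statistics.median(xs)
    lo, hi = med - k * iqr, med + k * iqr
    good, suspect, reject = [], [], []
    for x in readings:
        if not (spec_lo <= x <= spec_hi):
            reject.append(x)         # fails the data sheet outright
        elif lo <= x <= hi:
            good.append(x)           # in spec and in the main body
        else:
            suspect.append(x)        # in spec, but "fishy": get rid of it
    return good, suspect, reject

# Tight cluster near 5.00, plus one in-spec straggler at 5.9
good, suspect, reject = screen_tails(
    [4.99] * 20 + [5.0] * 20 + [5.01] * 20 + [5.9],
    spec_lo=4.0, spec_hi=6.0)
print(len(good), suspect, reject)    # 60 [5.9] []
```

The straggler at 5.9 passes a 4-to-6 spec with room to spare, but it is not the same as the others, which is the whole point of the letter.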
Historically, we built product exclusively for our own mother company. But one of the reasons a tiny little 40-person division like ourselves is now generating sales to major outside customers is that our philosophy hasn't changed. When you have only one customer, it's in your best interest to make damn sure that the customer will like every part you ship, and "like" is not an objective criterion by any means. "Liking" our product includes, but is not limited to, complying with the data sheet. "Liking" our part also means, "Does it function?" Bizarre as it may seem on the surface, I bet any field applications engineer will be able to tell you about cases where the failed part met every data-sheet parameter, yet didn't function.
More importantly, "liking" our product means, "Is it the same as all of the other material we've been shipping?" The IC engineers in this division wish in their hearts that our customers would actually design with data sheet min and max tolerances in mind. But the fact is, most of the time they just look at a few typical sample pieces and see if their circuit works. Often the IC is just one of dozens of components on a board, and a pretty insignificant one at that, and if they were to actually try to design around a full-spec tolerance of our chip, they couldn't build their product. Yet, in reality, they do build their product, and a lot of it, to a high-quality level, and make a profit on it along the way.
So we take the view that, to consistently receive our paychecks, we have to protect the customers from themselves to a large extent. We have to understand their designs thoroughly, from the point of view of the chip, so that we can anticipate where normal variations in our process will cause their designs to fall out of bed. And usually this means that we have to put intelligent guard bands around our typical distributions. We have to test the parts to know that what we ship is the same as it's always been.
There's another benefit to this philosophy. It produces de facto control limits on our process that reflect the actual "quality" our customers require. And we also find that there are parameters that, while spec'd in the data sheet, really don't matter to the customer. For those, an intelligent guard band on testing is simply the test resolution within the data-sheet limit.
This brings us back to parameter distributions and the so-called quality programs. We have a number of parameters, like any analog IC manufacturer, that have to be trimmed. Offset voltage of amplifiers and the output voltage of references are the obvious examples. Acceptable process variations create product parameter distributions that do not meet specification, so the product must be designed to allow this parameter to be trimmed. We have had customers and their quality experts insist we "fix" our fab process so that this wasn't necessary, or redesign the product so that the parameter in question becomes "independent" of process. Well, when your products have a bandgap reference, it's a little difficult to design the reference value to be independent of Vbe, since it is that very dependence which forms the basis of the circuit in the first place! If you want a tight reference, you're going to have to trim it.
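To make the Vbe dependence concrete, here is a back-of-envelope first-order bandgap calculation. The emitter-area ratio and PTAT gain below are hypothetical values chosen to land near 1.2 V, not any particular product's design:

```python
import math

# First-order bandgap: Vref = Vbe + M * (kT/q) * ln(N)
k_over_q = 8.617e-5      # Boltzmann constant over electron charge, V/K
T = 300.0                # junction temperature, K
N = 8                    # emitter-area ratio of the delta-Vbe pair (hypothetical)
M = 10.3                 # PTAT gain, chosen here to put Vref near 1.2 V

def vref(vbe):
    """Reference voltage as a function of the transistor Vbe."""
    return vbe + M * k_over_q * T * math.log(N)

# Process spread in Vbe moves Vref millivolt-for-millivolt -- the
# reference depends on Vbe by construction, so a tight output
# tolerance has to come from trimming, not from the fab.
for vbe in (0.63, 0.65, 0.67):   # plausible lot-to-lot Vbe spread
    print(round(vref(vbe), 4))   # 1.1837, 1.2037, 1.2237
```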
Now, trimming produces a little oddity that really confuses quality people. When you get done trimming a reference, for example, you have taken all of the parts and modified their circuitry so that their reference value falls within a very narrow window. The final distribution looks like a box, with absolutely vertical sides and an almost uniform distribution of material within the limits. Then, generally, we get one of two questions:
- "What happened to the rest of the parts?" In other words, they think we've artificially lopped off the sides of the distribution, which is actually true, but not in the way that they think.
- "How do you expect to stay within tolerance?" In other words, they calculate the mean and standard deviation on the histogram we sent them, set the 3-sigma limits, and complain that we are going to be out of spec.
Both questions indicate, of course, that they think our quality is going to be lousy because they don't see a NORMAL distribution nicely centered within large process guard bands. And we have to explain, patiently and concisely, that they aren't going to get, ever, any parts that fall outside the spec window, due to the very nature of the trim procedure.
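The arithmetic behind that explanation is easy to check. A uniform "box" of width w has a standard deviation of w divided by the square root of 12, so naive 3-sigma limits reach about 0.87 w on each side of the mean, well past the hard edges of the trim window, even though no part can ever land out there. A quick simulation (the 2.5-V window numbers are hypothetical):

```python
import random
import statistics

random.seed(0)
# Post-trim reference voltages: a uniform box inside the trim window,
# e.g. 2.495 V to 2.505 V (hypothetical numbers for a 2.5-V reference).
lo, hi = 2.495, 2.505
parts = [random.uniform(lo, hi) for _ in range(10_000)]

mean = statistics.mean(parts)
sigma = statistics.pstdev(parts)

# Naive "3-sigma limits" computed from the shipped histogram:
three_sigma_lo = mean - 3 * sigma
three_sigma_hi = mean + 3 * sigma

# The 3-sigma band extends beyond the trim window...
print(three_sigma_lo < lo and three_sigma_hi > hi)   # True
# ...yet not one part falls outside the window, by construction.
outside = [v for v in parts if not (lo <= v <= hi)]
print(len(outside))                                  # 0
```

This is exactly the confusion the letter describes: the 3-sigma calculation predicts out-of-spec parts that the trim procedure makes impossible.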
An important footnote to trimming: It is absolutely critical to lop off the "tails" of the distribution, as described above, before trimming the parameter. Designers often include enough range in their trim scheme to allow these tails to be trimmed back into the main distribution, at which point you'll never find the anomalous parts to get rid of them. This is definitely one of those places where testing is critical to maintaining the quality level.
Senior Design Engineer
Astec Semiconductor Div.
Marc, I agree with you on ALL points. Any time you have to re-educate a "Quality Expert," you're doing us all a favor, and I'll support you on that. I hope I have been helpful in deflating the infallibility of "Quality Experts."—RAP
I am responding to your Feb. 7 column regarding Dr. Deming and the testing of semiconductor products.
While I am by no means a student of the history of quality control, I believe that you are mistaken in your assertion that Dr. Deming spoke of a need to eliminate all testing for a product and manufacturing process to be deemed "good quality."
Dr. Deming's claim to fame was the application of statistical methodology in the direct management of production processes by the operators and first-line supervisors themselves. It was he who advocated putting the quality assurance into the hands of the operators, as opposed to separate inspectors.
Now, it shouldn't take a PhD to realize that an application of statistical methodology implies the collection and analysis of data. What data? Why, the results of in-process testing, of course.
In-process testing can be as simple as the visual inspection of a molded coffee cup or as complicated as the complete parametric analysis of a semiconductor product.
Deming, Juran, and the lot all advocate the liberal use of production process feedback, applied as early in the process as possible. Their gospel has always been one of pushing the testing "up the supply chain" and eliminating the need for redundant testing, as epitomized by the classic model of an incoming inspection department that examines everything coming in the door.
J DAVID EGNER
Compliance Engineering Manager
PowerHouse Systems Inc.
Menlo Park, Calif.
David, we do lots of testing early in our fab, and intermediately, too. But we still depend on Final Test, and so does every other semiconductor maker. If our tester is down, we can't ship!—RAP
Re: Your Feb. 7 column on quality. Stripped of its prose, the argument against end-of-line (EOL) testing invariably boils down to the idea that given 100% confidence in the raw materials and the manufacturing process, you need not test the finished product.
I gleaned from your article that the combination of a 40% yield manufacturing process with a 99.995+% confidence EOL test is an economically favorable way of producing high-quality goods in the semiconductor industry, despite the high rejection rate and the cost of testing. This is so because the rejected goods have little intrinsic value, and the tests are quick and cheap compared with the cost of increasing the confidence in the production line.
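A back-of-envelope version of that trade-off can be put in code. All the cost figures below are hypothetical; only the 40% yield and the 99.995% test confidence come from the letter:

```python
def eol_economics(yield_rate, unit_cost, test_cost, test_confidence):
    """Rough cost and outgoing quality with a 100% end-of-line test.

    yield_rate      -- fraction of units off the line that are actually good
    unit_cost       -- cost to build one unit (rejects have little salvage value)
    test_cost       -- cost to run the EOL test on one unit
    test_confidence -- fraction of bad units the test catches
    """
    escapes = (1 - yield_rate) * (1 - test_confidence)  # bad units the test misses
    shipped = yield_rate + escapes                      # fraction of starts that ship
    cost_per_shipped = (unit_cost + test_cost) / shipped
    dppm = escapes / shipped * 1e6                      # defective parts per million shipped
    return cost_per_shipped, dppm

cost, dppm = eol_economics(yield_rate=0.40, unit_cost=1.00,
                           test_cost=0.05, test_confidence=0.99995)
print(round(cost, 2), round(dppm))   # 2.62 75
```

With these (made-up) costs, each shipped unit carries about 2.6x the build-plus-test cost, yet the outgoing defect level is on the order of 75 ppm, which is the letter's point: cheap tests plus cheap rejects can beat an expensive push for higher line confidence.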
This would tend to support the notion that the world is not black and white, but a shade of gray. For any given manufactured good, the economically optimal balance between production-line confidence and EOL testing is not 100% vs. 0%, but somewhere in-between.
You have paraphrased my points, setting them in a new light very nicely. A little practicality beats platitudes almost every time. Deming knew that.—RAP
All for now. / Comments invited! RAP / Robert A. Pease / Engineer
Mail Stop D2597A
P.O. Box 58090
Santa Clara, CA 95052-8090