A couple of years ago, some friends suggested I join them in a seminar on “the Taguchi Method.” It’s the hot new method to optimize a manufacturing process, they assured me. So I hiked over to building 27 and watched the lecture on a video session. At first, I was just making notes in the workbook and on scratch paper I had brought along. But soon I found myself getting suspicious and angry.
The first example was a special “widget” amplifier in which the basic design, we were told, did not meet its specs with a good yield. The professorial lecturer assured us the problem was that the betas of the transistors were too low and too variable, leading to sloppy tolerances on the finished product. What to do? The lecturer claimed that the solution was to specify higher beta (by just a factor of two) on the transistors, so that the circuit’s sensitivity to beta dropped to virtually zero, and the production spread got narrower. Ah, I thought, very nice, but what exactly is that circuit? WHY, exactly, did doubling the beta cause fewer problems and a tighter production spread? And why would tripling or quadrupling the beta cause the problems to recur? I vowed to find out.
Then the video lecturer went on to study the problem of a high-pressure molding machine that could make little plastic spoons. There were several variables—the temperature of the plastic, the pressure, the mold time, etc. Now if you’ve never studied the Taguchi method, I think I can safely say that its analysis method is primarily geared to design several tests that combine changes of the variables. Then you can analyze the test results to see which variables are causing the most harm, and which ones you should tweak to optimize the centering of the output and achieve the best, tightest distribution. In theory, it’s hard to argue that such a plausible, logical, methodical approach could go wrong.
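To make the flavor of that analysis concrete, here is a minimal sketch of a two-level, three-factor experiment on a hypothetical spoon-molding process, with each variable’s “main effect” estimated by simple averaging. Every factor level and yield figure below is invented for illustration, and this is the generic design-of-experiments idea, not Mr. Taguchi’s specific orthogonal-array recipe.

```python
# Hypothetical two-level full-factorial experiment on three molding
# variables. All numbers here are invented for illustration.

import itertools

# Factor levels: (low, high) for temperature (C), pressure (psi), mold time (s)
factors = {"temp": (180, 220), "pressure": (800, 1200), "time": (10, 20)}

# Invented yield (%) for each of the 2^3 = 8 runs, in the order produced
# by itertools.product over the coded (0 = low, 1 = high) levels.
yields = [71, 74, 78, 82, 70, 73, 77, 80]

runs = list(itertools.product((0, 1), repeat=3))  # coded levels per run

def main_effect(factor_index):
    """Average yield at the high level minus average yield at the low level."""
    hi = [y for r, y in zip(runs, yields) if r[factor_index] == 1]
    lo = [y for r, y in zip(runs, yields) if r[factor_index] == 0]
    return sum(hi) / len(hi) - sum(lo) / len(lo)

for i, name in enumerate(factors):
    print(f"{name:9s} main effect: {main_effect(i):+.2f}")
```

A big main effect flags a variable worth tweaking; a near-zero effect flags one that, in this idealized noise-free data, hardly matters. Note how tidy the arithmetic is when the data are round numbers with no noise in them.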
But I began to get more and more suspicious. Why did all the data have nice, simple round numbers as results? Why was there no NOISE on the measurements? Why did all of the data come out nice and LINEAR? Where I come from, I do not expect the data to be perfectly linear and noise-free and to come out in round numbers.
It looked to me as if somebody were laying down some funny data to make it easy for us to analyze. That made me pretty nervous, so I was on my toes, watching for any other questionable statements.
The lecturer then belittled the operators of the machine—“Joe thought he knew how to run the machine better than anybody else, but the Taguchi method found an optimum point that Joe never thought of…” Again, I was suspicious, but maybe Joe knows something that the computers don’t. For example, it’s easy to prove, by math or by computer, that in many cases you can get the best acceleration and performance from a car if you slip the clutch a lot. All very true, until the clutch burns out.
As soon as I saw the full glory and simplicity of the Taguchi method, I spotted its first, deadly serious weakness. The advantage claimed for the Taguchi method is that it proposes a MINIMUM number of tests, the analysis of which will indubitably lead to an optimum solution. But these methods only check the design-center zero point and the design-center full-scale point once.
The method implies you can check the design-center zero offset in the morning, and the design-center full-scale of the meter at noon, and take data all day long. And you’ll never have to recheck the design-center zero reading or the gain, because that would be wasteful and redundant.
Mr. Taguchi, pack up your “methods” and get out of my lab. DO NOT TALK to any of my engineers or technicians. Do not suggest any other ways to be more “efficient” or “optimized.” Just get out.
When I have my technicians taking data, they all know to record some design-center or “reference” zero data and gain data, at the beginning and at the end, and a few times in the middle. And when they’re just sitting around sipping a cup of coffee, they tend to keep their eyes on the experiment. If it has any drifty tendencies or “jumpy” habits, they can spot that. And, as a “sanity check,” they normally repeat some of the early tests, down near the end of the testing, to make sure the data are more-or-less repeatable. IS that “efficient”? Well, it’s the right way to take data, and that’s the important thing. One of my old supervisors, Tom Milligan, used to tell his technicians, “If you see something funny, Record Amount of Funny.” And we call that Milligan’s Law, in his honor. So, I suspect Mr. Milligan would have thrown Mr. Taguchi out of his lab, too.
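That data-taking discipline can be sketched as a simple sanity check: bracket the run with reference zero and gain readings, and repeat an early test point at the end of the day. The function, readings, and tolerances below are all invented for illustration.

```python
# A minimal sketch of disciplined data-taking: bracket the run with
# reference readings and repeat an early test at the end. The tolerances
# and readings here are invented, not from any real instrument.

def check_run(ref_zeros, ref_gains, first_point, repeat_point,
              zero_tol=0.01, gain_tol=0.005, repeat_tol=0.02):
    """Return a list of warnings; an empty list means the run looks sane."""
    warnings = []
    if max(ref_zeros) - min(ref_zeros) > zero_tol:
        warnings.append("zero reference drifted during the run")
    if max(ref_gains) - min(ref_gains) > gain_tol:
        warnings.append("gain reference drifted during the run")
    if abs(repeat_point - first_point) > repeat_tol:
        warnings.append("early test did not repeat at the end")
    return warnings

# Example: zero and gain references taken at start, middle, and end of
# the day, plus one early test point repeated near the end.
print(check_run(ref_zeros=[0.000, 0.004, 0.018],
                ref_gains=[1.000, 1.001, 1.002],
                first_point=3.30, repeat_point=3.31))
```

A scheme that takes each reference reading exactly once, in the name of “efficiency,” has no way to raise any of these warnings.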
Back to Taguchi: the lecturer pointed out that they would manipulate the variables they could control—the temperature of the plastic, the pressure, the molding time. But they would not worry about the viscosity of the incoming plastic material, because they didn’t have any control over it. So they would ignore it. Good Heavens!! If there is a big variable (like the sun coming up in the morning?) over which you have no control, that doesn’t mean you should not monitor it and try to compensate for it. Let’s say you’re trying to mold little plastic spoons, and the Taguchi method has told you an optimum setting for the time, temperature, and pressure. Suddenly a new batch of plastic material comes in with a much lower viscosity. Are the optimums going to change? You bet! And if I keep monitoring the viscosity, I can look in my “cookbook” to see where I should start searching for a new optimum. The Taguchi method sets a very poor example—there are no plans for keeping track of the viscosity and its changes, or for compensating for them.
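That “cookbook” idea can be sketched in a few lines: suppose you previously found good setpoints at two reference viscosities (every number below is invented), then a new batch arrives. Instead of ignoring the change, interpolate a starting point for the new search.

```python
# Sketch of compensating for a monitored-but-uncontrolled variable.
# The "cookbook" entries are invented optima at two reference viscosities.

cookbook = {
    # viscosity (Pa*s): (temperature C, pressure psi, mold time s)
    100.0: (200.0, 1000.0, 15.0),
    140.0: (192.0, 1100.0, 13.0),
}

def starting_setpoints(viscosity):
    """Linearly interpolate setpoints between the two cookbook entries."""
    (v0, s0), (v1, s1) = sorted(cookbook.items())
    frac = (viscosity - v0) / (v1 - v0)
    return tuple(a + frac * (b - a) for a, b in zip(s0, s1))

# A new batch comes in at a viscosity between the two known ones:
# start the new optimization search from the interpolated setpoints.
print(starting_setpoints(120.0))
```

A linear interpolation is the crudest possible cookbook, but even that beats pretending the viscosity never changed.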
Early on in the lecture, we were told that the Taguchi method tries to minimize the “loss to society.” We’re told that widows and orphans will suffer if we don’t make things correctly; that is, with the highest reliability. But then every example we were given was simply a procedure to maximize production yields and lower the production cost. Suddenly it began to rankle me that we were first motivated to be nice to widows and orphans, but at the end of the day, it turned into another pitch to improve manufacturing costs. I’m in favor of both objectives, but I don’t like to see the approaches muddied with unclear thinking.
As I went along with the rest of the lecture, I spotted other discrepancies. I noted them all in my workbook and prepared to ask the lecturer some serious questions. First, I had to find his address—apparently nobody expected anybody to ask any real questions. But I did find his address, at one of the respected eastern technical schools. I sent him a nice letter. A month later, I sent a second copy in hopes of getting a response. I really did want to find out about that Widget circuit that allegedly worked better with moderately high beta.
But the professor responded with platitudes, just congratulating me on my perceptions. He answered none of my questions. So I wrote again, stating that I would like to know the circuit of the Widget amplifier, along with the answers to several other questions.
The professor claimed I had asked so many questions that he could not answer them all. So he answered none of them. I wrote again, asking just one question—what exactly is that Widget circuit?
Again he responded in platitudes. He observed from my business card that my title was “scientist,” and scientists don’t have to follow the teachings of Mr. Taguchi as engineers should (hogwash). And he still evaded my question, and he still didn’t include a schematic.
I wrote back politely, stating that if he expected us engineers to accept this whole Taguchi method on his say-so, on faith, never questioning anything, he was probably going to find some massive rejection. I explained that maybe in other parts of the world, you can sell engineers simplistic solutions as profound enlightenment. But around here, we don’t just buy something on faith—we compare it with other methods that have worked well in the past.
I explained that I could not possibly let my engineers or technicians use the Taguchi method. Our measurement methods are much more skeptical, and they have a chance of working despite nonlinearities and noise, something I would never trust the Taguchi method to do. I also explained that if I found anybody else pushing the Taguchi method around my area, I would sit them down and explain the weaknesses and disadvantages of the method. And I would point out that I believed the Taguchi people were trying to sell them an optimization technique for a “Widget” amplifier—one that apparently never existed. After that, I never heard from the professor again.
So, I caution you—if any new whiz-bang method promises results that sound too good to be true—maybe they are too good to be true. Recently, the announcers on National Public Radio promised that there would be a special program at 11:00 P.M. with “incredible” new political developments. At 11:00, John Hockenberry told about how Richard Nixon was entering the 1992 political races… Yes, it was so late at night that it took me several minutes to realize that only 23 hours of April 1 had passed by, and there was still one hour yet to go! Yes, the announcer was right—it really was “incredible”…absolutely unbelievable!
One of the people who reviewed my final draft of this subject, Dan Callahan, had some experience with Taguchi studies, and he agreed that I wasn’t just imagining my complaints. He suggested a book by William Diamond, Practical Experiment Designs for Engineers and Scientists*. I immediately hiked over to our library and checked it out, and it’s quite thoughtful indeed (especially by comparison). This book allows for the possibility of different types of noises, nonlinearities, errors, and glitches in the data. It advises the experimenter how to deal with many of these aspects of reality. “The best experiment design results from the combined effort of a skilled experimenter who has had basic training in experiment design methods, and of a skilled statistician who has had training in engineering or science. The statistician alone cannot design good experiments…” That makes sense to me. I can’t say I understand everything in it—I’m a little rusty on my statistical mathematics—but I like everything I understand. Much better!
All for now. / Comments invited! / RAP / Robert A. Pease / Engineer
(The opinions expressed herein are those of the writer and do not necessarily reflect the views of National Semiconductor Corporation.)
*Practical Experiment Designs for Engineers and Scientists, William Diamond. New York: Van Nostrand Reinhold, 1981 (347 pages).