In an earlier column I discussed Attribution Theory, which emphasizes the importance of the meaning that people attach to events. This perspective can influence behavior more than the objective reality of the incident itself. A nice example occurs when an engineer tries a design approach that misses the mark.
Our reaction to failure depends more on what we believe that failure means than on its actual consequences. Consider three possible meanings:
- Clever engineers are undermining the program to make management look bad.
- Stupid engineers are making foolish mistakes.
- Courageous engineers exploring uncharted territory are learning about unforeseen obstacles.
If we believe some sly engineers are sabotaging the program, the simplest solution is to motivate them to do the right thing. We can do this with severe punishments, generous rewards, or both. For example, incentives might align the financial interest of management and the engineers. If motivation is the problem, let's encourage those who are capable but misguided.
On the other hand, if we believe that absent-minded engineers are making ridiculous mistakes, then motivation will not solve the problem. We need courses showing them how to "do it right the first time." The process should be designed to prevent mistakes. We need checklists for everything and tripwires everywhere. Our plan will be so brilliant that even a monkey would excel by following it.
Both of these interpretations embed judgments about the competence and motivation of engineers. They also share an underlying assumption that all failures are bad. The danger of such a belief is that it prompts engineers to conceal all but their triumphs. Unfortunately, this creates the illusion that mistakes don't exist: because only successes are reported, the reasoning that led to the mistakes goes undocumented.
Our third interpretation is obviously the most self-serving and appealing to engineers. Because failure is a necessary path to learning, all errors are viewed as positive. After all, we cannot venture into the unknown without experiencing some mishaps. From a positive point of view, this interpretation values the learning aspect of failure. On the negative side, it establishes a tolerance for failure that may prove dangerous.
Where does the truth lie? I believe it's dishonest to assert that engineers are never stupid, stupid to assert that they're usually dishonest, and inaccurate to classify all failures as unavoidable. In reality, a design process produces both good and bad failures. Bad ones come from rediscovering that we need to connect two wires to make current flow. Such failures create no new learning for the organization and should be eradicated as quickly as possible.
On the other hand, good failures occur if we generate useful new data. The information content of these failures is high when a sensible approach doesn't work for subtle, unexpected reasons. If we don't record and publicize such outcomes, we'll forget them, leaving future engineers to try the same unsuccessful strategies.
An overly simplistic view of failure will make you label mistakes as always good or always evil. Either way, you're likely to be mistaken.