Snrg, the product manager at the Bison Valley Ax Works, was slaving late one night on a specification for the Mammoth Whacker II. He carefully chipped out the words, "During use, the head must stay on the ax," on his stone tablet. (Specification changes were less frequent in the days before word processors.) While this level of brevity and precision may exceed that of modern marketers, it falls short of what a design team needs to know.
Six months later, the designers had finished, and Tbzf, the test engineer, looked at the specification. (In those days, product development was conducted VERY sequentially.)
"What test can I use to see if the ax head stays on?" he wondered. "Perhaps I can spin around rapidly in circles holding the ax in both hands. Perhaps I can get Marketing and Engineering to spin around in circles with other axes, too." So, Tbzf wrote his test plan. The Mammoth Whacker II then passed its tests with flying colors, and it was launched in the market with great fanfare.
Several months later, the Mammoth Whacker User Group (MWUG) gathered for a meeting. One could not help but notice the angry MWII users waving headless axes at the emcee. Apparently, as the green vines that secured the ax heads got older, they became brittle and failed at rather inconvenient times—such as when a user inspired by fear of death began swinging his ax with adrenaline-powered zeal. Clearly, the head did not "stay on the ax."
Even today, developers assume that their tests validate that they have met the product's requirements—but this isn't true. Tests merely validate that the product passes tests. The product that the user receives is less a result of its specification than of its test plan. Many requirements aren't measurable as they appear in specifications. They only become measurable when a test plan is defined.
With this in mind, what should you, the designer, do?
First, whenever you write a requirement, ask someone, "How will we test this?" If you can't define a test, your requirement is probably too fuzzy.
Second, get a test engineer involved when you write the specification. If you don't have a test engineer on the program yet because it is too "early," complain to management. You'll be operating with a poorly defined requirement until you know how you will measure it. There is no better use of a test engineer's time than making sure that a new product's requirements are specified in a testable form.
Third, when you interview customers, don't simply discuss requirements. Discuss testing. When the customer says that he wants the product to be reliable, explain how you plan to test for reliability. You might discover that your definition of this term differs substantially from the customer's.
For example, I recall a company that was testing its product to determine if it was waterproof through a standard extended immersion test. It used this technique because the product requirement said that the product must be waterproof. In fact, the customer simply wanted the product to work if it got wet. This is a totally different design problem, and a much easier one to solve.
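The shift from "waterproof" to "works if it gets wet" can be captured directly in the test definition itself. As a minimal sketch—the function name, the spray duration, and the pass criterion below are illustrative assumptions, not details from the story:

```python
# Hypothetical sketch: restating a vague requirement as a measurable test.
# "The product must be waterproof" (untestable as written) becomes
# "the unit must still operate after a 5-minute spray" (measurable).
# The 5-minute threshold is an assumed example value, not a real standard.

REQUIRED_SPRAY_MINUTES = 5.0  # assumed acceptance threshold


def meets_wet_operation_requirement(operates_after_spray: bool,
                                    spray_minutes: float) -> bool:
    """True if the unit was sprayed long enough and still works afterward."""
    return spray_minutes >= REQUIRED_SPRAY_MINUTES and operates_after_spray


# A unit that works after a 10-minute spray passes; a unit that stops
# operating fails, no matter how long it endured the spray.
print(meets_wet_operation_requirement(True, 10.0))   # True
print(meets_wet_operation_requirement(False, 60.0))  # False
```

Writing the check this way forces the team to commit to a concrete pass/fail criterion, which is exactly the conversation the designer and customer should have had before an immersion test was ever chosen.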
Remember, the product you ultimately get is the one you test for.