For decades I had heard the expression "garbage in, garbage out" and nodded my head reverently. Of course, I thought, no answer could be more accurate than the accuracy of its input assumptions. Only recently have I realized that this isn't always true.
Let me provide a simple example. Estimate the square root of an uncertain number ranging between two and four. The input might be estimated as three, plus or minus 33%. To what accuracy can you estimate the answer? 1.73, plus 15% minus 18%. Interestingly, the answer is more accurate than the input. A more extreme example is raising the number one to the nth power. Even if n is highly uncertain, the answer is not. Though simple, these examples illustrate that there's little truth to the notion that the accuracy of an answer can never be higher than the accuracy of its input assumptions.
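The square-root arithmetic can be checked directly. This is a quick sketch in Python; the specific numbers simply restate the example above.

```python
import math

# Input: an uncertain number between 2 and 4, i.e. 3 plus or minus 33%.
lo, mid, hi = 2.0, 3.0, 4.0
out_lo, out_mid, out_hi = math.sqrt(lo), math.sqrt(mid), math.sqrt(hi)

# Express the output range as percentages of the central estimate.
plus = (out_hi - out_mid) / out_mid * 100   # roughly +15%
minus = (out_mid - out_lo) / out_mid * 100  # roughly -18%
print(f"{out_mid:.2f} +{plus:.0f}% / -{minus:.0f}%")  # prints "1.73 +15% / -18%"
```

The output band (+15%/-18%) is about half as wide as the input band (plus or minus 33%), which is exactly the point: the square root compresses uncertainty.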
This might be irrelevant if it didn't have a direct application in the world of product development. We always counsel development teams to estimate their project at a high level of detail. They groan and explain that it's silly to do a detailed estimate on something they have never done before. But, there are several important reasons for performing such a detailed estimate.
First, by doing a detailed estimate, we are more likely to carefully examine the task structure of the project. This enables us to identify dependencies between tasks, and it increases the likelihood of identifying all of the tasks.
Next, we are more likely to find opportunities for overlap, which enables us to reduce cycle time. If you treat tooling as one big 16-week task, it can't be any shorter than 16 weeks. But, if you break it into eight smaller tasks, you begin to ask if tool steel could be procured before drawing release, or if rough machining could begin before final tolerances are known.
Furthermore, by breaking the project into many pieces, we can produce an estimate of the overall project that's inherently more accurate than the estimate of any of its component parts. The improvement in accuracy occurs because the noise in the component tasks is never perfectly correlated. When we aggregate many tasks, their uncorrelated errors combine root-sum-square, like orthogonal vectors, rather than adding linearly like scalars (identical to the problem of combining uncorrelated noise sources in communications system design).
For instance, if I break a project into 100 equal activities and estimate each of these activities with an error of plus or minus 50%, then the accuracy of my overall estimate of the project will approach plus or minus 5%.
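The root-sum-square combination above is a one-line calculation. A sketch, using the same 100-task, plus-or-minus-50% figures from the example:

```python
import math

# 100 equal tasks, each estimated with +/-50% error, treated as one
# standard deviation of uncorrelated noise per task.
n_tasks = 100
task_error = 0.50

# Uncorrelated errors combine root-sum-square ("as vectors"), so the
# total absolute error grows as sqrt(n) while the total size grows as n.
# Relative error of the rolled-up estimate:
total_error = math.sqrt(n_tasks * task_error**2) / n_tasks
print(f"{total_error:.0%}")  # prints "5%"
```

In general the relative error of the aggregate shrinks by a factor of the square root of the number of tasks: here, 50% divided by the square root of 100 gives 5%.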
This improvement in the quality of rolled-up schedule estimates has another subtle but important implication. It makes it very dangerous to track the detailed accomplishment of project activities with the expectation that they should be accomplished according to their original plan. Because variance is much higher at the detailed level, conformance to the plan will be inherently low. This can tempt you to waste time trying to explain variances that are normal and healthy.
A better approach is to track the aggregate accomplishment rate of project work, because it smooths out the variances in individual activities. Subtasks with low individual schedule conformance can aggregate into an overall schedule conformance that's quite high. So remember, garbage in does not always lead to garbage out.
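A small simulation illustrates this aggregation effect. The project size, task durations, and noise range below are hypothetical numbers chosen only to show the contrast between individual and aggregate conformance.

```python
import random

random.seed(0)  # fixed seed so the sketch is reproducible

# Hypothetical project: 50 subtasks planned at 10 days each; actual
# durations vary by up to +/-50% around plan, uncorrelated across tasks.
plan = [10.0] * 50
actual = [p * random.uniform(0.5, 1.5) for p in plan]

# Individual conformance: how many tasks land within 10% of their plan?
within = sum(abs(a - p) / p <= 0.10 for a, p in zip(actual, plan))
print(f"tasks within 10% of plan: {within}/{len(plan)}")

# Aggregate conformance: total actual vs. total plan.
dev = abs(sum(actual) - sum(plan)) / sum(plan)
print(f"aggregate deviation from plan: {dev:.1%}")
```

Most individual tasks miss a 10% tolerance, yet the project total typically lands within a few percent of plan, because the overruns and underruns largely cancel. Tracking the total avoids chasing explanations for each individual miss.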