WYSIWYG, "What You See Is What You Get," helped define visual word processors. It also applies to debuggers. With debuggers we see individual threads execute. But what we see is not what we get: the programs may not work. Yes, we have the threads down cold. We've been doing thread-level debugging since the days of Borland's and Microsoft's IDEs. We have it all: watchpoints, soft breakpoints, hard breakpoints, trace buffers, and so on. Some of the latest debuggers even display data in graphical form.
Unfortunately, the traces often work when the program doesn't. This isn't unexpected. We create programs by the classic "divide and conquer" technique—breaking an application down into smaller and smaller subsets, down to the basic blocks, and then implementing from the bottom up.
Nice technique. But then we debug the small pieces and assume that the overall program works. Bad assumption. We do top-down design with bottom-up implementation and debug. Shouldn't we also check at the program level? In fact, once the basic code runs, shouldn't we debug at the macro level, and then, if something goes south, drop down to the micro level and find out why?
Years ago, Mitch Kapor, who created the Lotus 1-2-3 killer app, devised another program. It was a presentation package for prototyping program interfaces by defining screen sequences. Similarly, many applications can be represented as a black box displaying the program's major events.
Most applications can be characterized by their execution events. For a car motor controller, the event sequence is top dead center (TDC), open the intake valves, inject fuel, fire the plugs, open the exhaust valves, and so on. Given an RPM, the program's execution can be presented as a sequence of events, in effect a visual spec. We can use it to verify code execution at the application level. We can also use it for debugging.
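To make the idea concrete, here is a minimal sketch of that kind of application-level check, assuming a hypothetical setup where the debugger or trace buffer yields a list of named events; the event names and the `first_deviation` helper are illustrative, not from any real tool.

```python
# Expected per-revolution event cycle for the motor controller example.
# The names are placeholders standing in for whatever markers the
# instrumentation actually emits.
EXPECTED_CYCLE = ["TDC", "open_intake", "inject_fuel", "fire_plugs", "open_exhaust"]

def first_deviation(trace, expected=EXPECTED_CYCLE):
    """Compare a recorded event trace against the expected cycle.

    Returns the index of the first out-of-order event, or None if the
    trace matches the spec for as long as it runs.
    """
    for i, event in enumerate(trace):
        if event != expected[i % len(expected)]:
            return i  # macro-level failure: drop to micro-level debugging here
    return None

# A healthy trace cycles through the spec; a faulty one deviates.
good = ["TDC", "open_intake", "inject_fuel", "fire_plugs", "open_exhaust", "TDC"]
bad = ["TDC", "inject_fuel", "open_intake", "fire_plugs", "open_exhaust"]
print(first_deviation(good))  # None: the program-level spec holds
print(first_deviation(bad))   # 1: fuel injected before the intake opened
```

The point of the sketch is the workflow, not the code: the event sequence serves as an executable spec, and only when the macro-level check flags a deviation do we reach for the thread-level tools.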
Today, we can watch our code's threads execute, yet the application can still go belly up. Let's up the tool ante and debug at the program level too, so that what we see is what we really get.