Virtutech ran a survey at Embedded Systems Conference West 2005 to see what developers were most concerned about, and debugging was at the top of their list. Other recent surveys have reported similar results. Chris Lanfear, Venture Development Corp., has noted that there are often as many as 100 errors for every thousand lines of code.
These numbers are not that different from results I have heard about for the last thirty years. Likewise, the cost of fixing errors seems to remain constant. If the cost of a programmer fixing an error during development is one unit, then it takes ten units when the application moves into the hands of testers and 100 units when it gets out in the field (assuming it is not already on the way to Mars).
This brings me back to a common gripe about development: debugging tools are still in the dark ages. Yes, symbolic debugging is as old as the hills and the latest debuggers have more bells and whistles than anyone can appreciate, but the approach and tools for debugging are essentially the same as they have been for years. For example, complex breakpoints and scripting support are common in today's debuggers. Analog Devices' VisualDSP++ adds the ability to graph data from the unit under test to provide a developer with better insight into a system's activity. Texas Instruments' Code Composer Studio Tuning Edition displays DSP pipelining information.
Tracing is another debugging technique that has seen incremental improvement. For example, LynuxWorks' Spyker (see the figure) can hook into an application without changing the executable. It is based on the Linux Trace Toolkit created by Karim Yaghmour of Opersys. Tying tracing to hardware makes a trace tool more powerful as well. The Green Hills Software TimeMachine can be used with hardware trace capture and lets you dial back in time to see what an application was doing.
While all of these features are needed and useful, they do not really change the way debugging and development is done. Essentially, a developer writes some code, tests it, and then pulls out the debugger or trace program when something does not work.
Test-driven development (TDD) is a procedural improvement that has grown out of the agile software movement and extreme programming (XP) techniques. TDD starts with a test that essentially defines the requirements of a system or function so that it can be built to address the test. Of course, multiple tests are used for more complex systems, but the idea scales well.
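The test-first flow can be sketched in a few lines of C. This is a minimal illustration, not any particular framework's API; the function name and the clamping requirement are hypothetical examples.

```c
#include <assert.h>

/* Hypothetical function under test: clamp a sensor reading to a valid
   range. Under TDD, the test below is written first, fails, and the
   function is then implemented to satisfy it. */
int clamp_reading(int value, int lo, int hi)
{
    if (value < lo) return lo;
    if (value > hi) return hi;
    return value;
}

/* The test that drove the implementation: it states the requirement
   (readings stay within [lo, hi]) before any code existed. */
void test_clamp_reading(void)
{
    assert(clamp_reading(50, 0, 100) == 50);   /* in range: unchanged */
    assert(clamp_reading(-5, 0, 100) == 0);    /* below range: clamped */
    assert(clamp_reading(250, 0, 100) == 100); /* above range: clamped */
}
```

For a larger system there would be many such tests, but each one is written the same way: requirement first, implementation second.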
Development tools like those based on Unified Modeling Language (UML) take this to a graphical extreme, which is good. Development environments like Java incorporate support such as JUnit to formalize what a test is and how it works.

Improving Testing

So, what's missing? One thing is better debugging feedback about an application's state. Hardcore coders sometimes like binary dumps and the ability to navigate a symbol table or symbolic stack, but this forces the developer to repeatedly navigate to the data they want. Programmers need preformatted, preferably graphical, displays of information within an application. For example, temperature data might be presented more effectively as a slider or thermometer, and may even require conversion from the raw data that a microprocessor sees. The ability to set limits that change the color of the display might also prove useful in some instances.
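The temperature example hints at the kind of conversion such a display needs. As a hedged sketch, assume a 10-bit ADC reading that maps linearly onto a sensor range of -40 to +125 degrees C; the function names and the mapping are illustrative, not from any real part:

```c
/* Hypothetical conversion: a 10-bit ADC reading (0-1023) from a sensor
   whose output spans -40 to +125 degrees C. A debugger that knew this
   mapping could render the value on a thermometer widget instead of
   showing the raw integer. */
double adc_to_celsius(int raw)
{
    return -40.0 + (raw / 1023.0) * 165.0;
}

/* A limit check like this could drive the display-color idea
   (e.g., turn the readout red above a threshold). */
int over_limit(int raw, double limit_c)
{
    return adc_to_celsius(raw) > limit_c;
}
```

The point is that the debugger, not the application, should own this presentation logic.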
While this type of environment might seem extravagant, the same was true for other debugging advances like symbolic debugging and tracing. The idea behind these improvements is to highlight problems easily and make it easier to determine the cause sooner.
While many graphical programming environments, such as National Instruments' LabVIEW, have begun to provide this ability, only rudimentary data presentation is available when users query specific data. It is possible to extend an application to provide this richer presentation, but then the debugging code becomes part of the application. We all know PRINT statements can be added to help debug an application, and we know how annoying it can be to extract, comment out, or conditionally disable them.
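One common way C programmers tame those PRINT statements is to route them through a macro that compiles away in release builds. A minimal sketch (the macro and function names are my own, not from any particular codebase):

```c
#include <stdio.h>

/* Debug output compiles to nothing unless DEBUG is defined, so the
   statements need not be hunted down and deleted by hand. */
#ifdef DEBUG
#define DBG_PRINT(...) fprintf(stderr, __VA_ARGS__)
#else
#define DBG_PRINT(...) ((void)0)
#endif

int scale(int x)
{
    DBG_PRINT("scale: x = %d\n", x);  /* vanishes in release builds */
    return x * 2;
}
```

This is more manageable than raw prints, but the debug code is still woven into the application, which is exactly the problem better tooling should solve.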
This added code is often included in a more formalized way, from ASSERT macros in C/C++ to Eiffel's "Design by Contract." Eiffel is a programming language whose much more advanced version of ASSERT-like specifications provides compile-time and runtime checks. A free version of EiffelStudio is available as a download at Eiffel Software's website. Check it out if you have not tried this approach to application design and implementation.
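A rough C approximation of the contract style uses ASSERT on entry and exit of a routine. This sketch only hints at what Eiffel does; in Eiffel the precondition and postcondition are part of the routine's declared interface, while in C they are merely runtime checks that NDEBUG removes:

```c
#include <assert.h>

/* Integer square root with an ASSERT-style contract. */
int int_sqrt(int n)
{
    assert(n >= 0);                     /* precondition: caller's obligation */
    int r = 0;
    while ((r + 1) * (r + 1) <= n)
        r++;
    /* postcondition: r is the largest integer whose square is <= n */
    assert(r * r <= n && (r + 1) * (r + 1) > n);
    return r;
}
```

Even this crude form catches misuse at the boundary where it happens, rather than letting a bad value propagate.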
The main point is that programmers are willing to invest a good deal of time learning and using debugging tools if they have long-term benefits. For example, Eiffel's Design by Contract methodology requires a bit more typing and thought, but Eiffel programs have a much higher success rate and fewer bugs than the typical C program, especially as the size of the application grows.
I think that it comes down to how the development and debugging tools work for the developer. I agree with the adage: test first, test often. I also think that it should be second nature. Part of this is provided by tools like the Eclipse JUnit plug-in for Java, but this is just a start. Testing and debugging need to be closely tied, and presentation of data during debug is key. It must be manageable and it should be the starting point for development, not the tool that gets pulled out of the box when a problem occurs.
Eclipse, the open source integrated development environment (IDE), provides the best place for experimenting with enhanced debugging because its plug-in environment is well documented. We will just have to see if someone will take up the torch.
Overall, the debugging process is improving incrementally, but it is one area devoid of useful, cross-platform standards. I would love to see improvements in the way testing is incorporated into the development process. Let me know if you implement some.
Green Hills Software
Venture Development Corp.