These days, bringing a product to market is a frenetic experience no matter how it's done. The combination of a narrow market-opportunity window and tight development budgets dictates severe constraints. So it's no surprise that software testing suffers from neglect.
Today's wireless systems have to cope with an increasing number of communication protocols and standards. At the same time, they have to maintain a very high level of reliability. System reliability now depends on software to a larger extent than ever before. As the complexity of software grows, so does the need for testing that can identify the software's defects. Clearly, a more efficient method of detecting bugs is needed.
Currently, the typical method of software testing is known as "system-level" testing. With this approach, a testing team waits for the software developers to deliver all of the code for a product. The team then exercises the software through some sort of user interface. The problem with this approach is that the system-testing team spends a substantial amount of time tracking down low-level software bugs. These bugs should have been detected (and fixed) before the code ever left the developer's desk.
In addition, a large number of defects go undetected. It's simply impossible to simulate all potential error conditions through the product's user interface. To properly test product software, one must perform more than a system-level test on the product as a whole. The testing process needs to start with the unit testing of individual software modules.
Unit testing is familiar to most software developers. With this approach, each of a product's individual units is tested in isolation from the other parts of the program. To assure product reliability, this testing should be conducted throughout the entire software-development cycle. It is simply inefficient to wait until the end of development to conduct such testing. After all, software bugs, errors, and omissions can easily go undetected until it's too late.
When these problems occur late in the development cycle, a product's introduction can be severely hampered and delayed. Because revenue generation rides on the timely introduction and successful reception of a new product, unit-level testing is the best way to red-flag any software design problems up front.
Consider, for example, the wireless devices that are used by soldiers in the field. Soldiers depend on their equipment to report their current location, transmit target coordinates, and broadcast requests for support. Their devices have a battery as the power source. Somewhere in each device is a section of software that monitors that battery and makes sure that power is available to properly operate the device.
For safety, the software also monitors several alert conditions, such as insufficient battery charge, overcharge, or an abnormally rapid power drawdown. To guarantee that the software will perform appropriately, each of these conditions can easily be tested during unit test. Waiting until system test to check these alerts, on the other hand, might mean deliberately producing batteries that exhibit each of these faults. Developers would then run the risk of damaging the device so that it can't be reliably tested.
For any software design, unit testing is critical for the initial run of the development cycle. Yet this same kind of testing must be repeated whenever software is modified or used in a different environment. Aside from unit testing any new features that have been added, regression tests must be performed to verify that all existing features are still intact. It doesn't matter whether those features are few or numerous. They all must be tested to ensure that the software continues to meet specifications and—ultimately—performs error-free once the device is in use.
To achieve successful unit-level testing, the developer must have specific, unambiguous, and testable requirements for his or her software. Adhering to these requirements is the only way to verify that the software is doing what it's supposed to do. The code implementation also must exhibit the following specific properties:
- Settability: the ability to set initial conditions before executing the software
- Controllability: the ability to control the path of software code execution during testing
- Visibility: the ability to actually see and verify the output data and results
- Repeatability: the ability to obtain consistently repeatable results
Unit testing can add to the workload of the software developer. This extra effort can be minimized, however, by using commercially available unit-test-automation tools. These tools assist in the development and execution of tests. They also provide impartial documentation regarding the completeness of the testing. Given the same amount of development time, a better and more robust product can then be delivered. To paraphrase a familiar TV commercial, developers can ensure, "You really can hear me now."