Automated Testing of Functional Software

A well-organized approach to automated functional software testing can achieve significant cycle-time and quality improvements. By integrating automated testing into the software development program, you gain a number of benefits:

Reduced cycle time by decreasing product and integration test time.

Improved quality by increased test coverage for regression testing (retesting to detect faults introduced by modification).

Standardized testing and reproducible results.

These points were illustrated in a pilot project implemented on a two-way radio communications base transceiver at Motorola’s Design Center in Toronto, Canada. The challenge was to develop an automated test system that could become an integral part of the software development process.

An MC68356 integrated DSP and microprocessor served as the host platform for signal processing and control of the transceiver. Hardware test instruments included a communications analyzer, a digital oscilloscope, a multifunction signal source, and a signal-switching control unit.

The test system exercises combinations of 16 digital inputs and outputs, two RF signal paths, and 10 baseband signal paths and measures the product's response to them. The test equipment was connected via the IEEE 488.2 instrument bus and controlled over Ethernet using an IEEE 488.2-to-Ethernet controller.
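In a text-based environment, the same instruments could be reached through a VISA library; the short Python sketch below is only an illustration, and the gateway host name, GPIB address, and use of pyvisa are assumptions rather than part of the original system.

    import pyvisa

    # Open an instrument on the GPIB bus behind a LAN-to-GPIB (VXI-11) gateway.
    # The host name "gpib-gateway.local" and primary address 14 are placeholders.
    rm = pyvisa.ResourceManager()
    analyzer = rm.open_resource("TCPIP0::gpib-gateway.local::gpib0,14::INSTR")
    print(analyzer.query("*IDN?"))  # standard IEEE 488.2 identification query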

Test System Requirements

At the time the software testing project was initiated, no new features were planned for the completed base-station firmware. The main objective of test-case automation was to eliminate manual regression testing for future maintenance releases. A test case is a sequence of testing steps required to test a single function in the firmware. A test script provides the instructions for each of the steps in the test case.

The system was designed to provide a standardized library of low-level functions that supported a modular test platform. The four hierarchical layers are the test manager and front end, the test-case definitions, hardware and software utility functions or tasks, and a hardware abstraction layer that interfaces with the test hardware (Figure 1).
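The layering is independent of the implementation language. As a minimal Python sketch of the same idea, with every function name hypothetical, the four layers stack like this:

    # Layer 4: hardware abstraction -- "actions," each performing a single instrument instruction.
    def set_primary_source_level(level_dbmv): ...
    def measure_audio_level(): ...

    # Layer 3: reusable hardware and software tasks built from actions.
    def measure_audio_frequency_response(frequencies_hz): ...

    # Layer 2: test-case definitions that sequence tasks to verify one requirement.
    def audio_delay_rx_test_case(results_file): ...

    # Layer 1: test manager and front end -- selection, codeplugs, and the execution framework.
    def run_selected_tests(selected_cases):
        for case in selected_cases:
            case()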

The test-software development environment is based on National Instruments’ LabVIEW software. The LabVIEW virtual instruments (VIs) are managed using the Clearcase Multi-Version File System. This revision control system is essential to maintaining a controlled test-software development environment.

The test manager and front end consist of a set of VIs that controls the test selection process, computes the estimated completion time, calls hardware initialization and diagnostic routines, manages different station codeplugs (configuration or identity data files), and provides a framework for executing the user-defined test cases.

Tests are divided into three categories relating to the product's mode of operation: stand-alone transceiver, system mode, and miscellaneous. For each selected test, the test manager ensures that the appropriate codeplug is loaded, opens an HTML-based test description, and launches the test script. The VIs were designed so that test cases can be added to the test-manager framework with little difficulty.
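As a rough illustration of that flow, the Python sketch below walks a selection list, programs the required codeplug only when it changes, opens the HTML description, and launches the script; the registry, file names, and helper functions are hypothetical.

    import webbrowser

    # Hypothetical registry: the codeplug, description, and script for each test case.
    TEST_CASES = {
        "audio_delay_rx": {
            "codeplug": "standalone.cp",              # configuration/identity data file
            "description": "docs/audio_delay_rx.html",
            "script": lambda: print("running audio delay test"),
        },
    }

    def load_codeplug(required, current):
        """Program a station codeplug only if it differs from the one already loaded."""
        if required != current:
            print(f"programming codeplug {required}")
        return required

    def run_selected(selected):
        current_codeplug = None
        for name in selected:
            entry = TEST_CASES[name]
            current_codeplug = load_codeplug(entry["codeplug"], current_codeplug)
            webbrowser.open(entry["description"])     # show the HTML test description
            entry["script"]()                         # launch the test script

    run_selected(["audio_delay_rx"])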

The test-case definitions determine the actual sequence of events that performs the functional testing. These modules are responsible for initializing the appropriate test equipment, routing the audio signals, sending serial interface commands to the base station, setting source levels and frequency, calling hardware or software subtasks, defining expected measurement results and specifications, and recording the measured data.

Each test case may contain multiple subtests. These VIs provide the instructions to test a specific functional requirement.

For example, a test case to measure audio delay in the over-the-air receive path would perform the following functions (a code sketch follows the list):

Initialize equipment.

Write header information to the output file.

Set the base station to a predefined channel.

Read the station’s RF receive frequency.

Set the RF generator frequency to this frequency and set an RF level.

Route the audio output from the station to the audio analyzer.

Call the hardware task that measures audio delay.

Record the output.
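A compact Python rendering of that sequence might look like the sketch below. Equipment initialization (the first step) is assumed to have been done by the framework, and the station, audio_switch, and measure_audio_delay helpers, along with the SCPI command strings and the -47-dBm level, are illustrative assumptions rather than the actual VIs.

    def audio_delay_rx_test(rf_gen, station, audio_switch, measure_audio_delay, results):
        """Sketch of the audio-delay test case; rf_gen is an already opened instrument handle."""
        results.write("Audio delay, over-the-air receive path\n")   # write header information
        station.set_channel(1)                       # set the base station to a predefined channel
        rx_freq_hz = station.read_rx_frequency()     # read the station's RF receive frequency
        rf_gen.write(f"FREQ {rx_freq_hz} HZ")        # set the generator to that frequency (assumed SCPI)
        rf_gen.write("POW -47 DBM")                  # set an RF level (illustrative value)
        audio_switch.route("station_audio_out", "audio_analyzer")   # route station audio to the analyzer
        delay_ms = measure_audio_delay()             # call the hardware task that measures audio delay
        results.write(f"measured audio delay: {delay_ms} ms\n")     # record the output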

Reusable tasks that provide a modular library of functions are grouped into hardware and software tasks in the test-software hierarchy. A hardware task is a VI that interacts with the test equipment. A software task is a VI that does not interact with the test equipment.

Some examples of hardware tasks include VIs that find the audio level needed to achieve a desired transmitter modulation, measure the receiver audio frequency response, find the RF level that opens the receive audio path, or measure the received signal strength indication (RSSI) response of the station. Some examples of software tasks include VIs that write data to an output file, generate a list of numbers based on start-stop-increment inputs, or open an HTML description of a test specification using a web browser.
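Two such tasks might look like the Python sketches below: a software task that expands start-stop-increment inputs into a list, and a hardware task that raises the RF level until the receive audio path opens. The action callables passed into the hardware task and the 0.1-V threshold are assumptions for illustration.

    # Software task: generate a list of numbers from start/stop/increment inputs.
    def sweep_list(start, stop, increment):
        values, x = [], start
        while x <= stop + 1e-9:            # small tolerance for floating-point steps
            values.append(round(x, 6))
            x += increment
        return values

    # Hardware task: step the RF level upward until the receive audio path opens.
    # set_rf_level and measure_audio_level stand in for action-layer calls.
    def find_squelch_opening(set_rf_level, measure_audio_level, levels_dbm, threshold_v=0.1):
        for level in levels_dbm:
            set_rf_level(level)
            if measure_audio_level() > threshold_v:   # audio present, so the path has opened
                return level
        return None                                   # path never opened over the swept range

    # Example: search from -120 dBm to -100 dBm in 0.5-dB steps.
    # opening = find_squelch_opening(set_rf_level, measure_audio_level, sweep_list(-120, -100, 0.5))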

The test equipment interfaces with the test software through actions. Essentially, an action is a mini-driver that performs a single hardware instruction. Each action has only one input or one output.

This hardware abstraction layer model does not make any assumptions about the capabilities of the test equipment. Some examples of actions are SetPrimarySourceLevel.vi, which sets the primary RF level in decibels referenced to 1 mV; MeasureAudioLevel.vi, which reads the audio level in volts; and GenerateDPLSequence.vi, which modulates an RF carrier with a digital access code.

Each VI contains the GPIB commands, specific to the equipment on the test rack, needed to control the test hardware remotely over the GPIB bus. The hardware abstraction layer interfaces with the test equipment on the basis of the desired action rather than making explicit reference to the type of test hardware.
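In a text-based framework, the action layer would reduce to a set of one-call wrappers around the instrument bus, as in the hedged Python/pyvisa sketch below; the addresses and command strings are placeholders for whatever the equipment on the rack actually accepts.

    import pyvisa

    _rm = pyvisa.ResourceManager()
    _source = _rm.open_resource("GPIB0::10::INSTR")      # primary RF source (assumed address)
    _analyzer = _rm.open_resource("GPIB0::14::INSTR")    # audio analyzer (assumed address)

    # Each action issues a single hardware instruction and has one input or one output.
    def set_primary_source_level(level_dbmv):
        """Set the primary RF level in dB referenced to 1 mV (placeholder command string)."""
        _source.write(f"POW {level_dbmv} DBMV")

    def measure_audio_level():
        """Read the audio level in volts (placeholder query string)."""
        return float(_analyzer.query("MEAS:ALEV?"))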

Cycle-Time Improvement Analysis

Some basic development metrics were collected and used to quantify the cycle-time impact of the automated testing system. Feeding this data into a simple mathematical model not only allows the effectiveness of the project to be evaluated, but also highlights some important pitfalls and strategies for automated testing.

Cycle-time improvement is a ratio of the staff hours potentially spent manually testing to the development effort needed to create the automated test platform. The model assumes two phases of development:

An initial release of the automated test software created by average to expert users, providing the test-software architecture and a substantial library of modular tasks.

Subsequent incremental test-case generation by novice test-software users, factoring in a learning curve and a decreasing rate for creating lower-level modular tasks.

The cycle-time improvement resulting from automated testing can be represented by Equation 1:

X_I = N·(S + M_O + M_I·N_I) / (T + D_O + SUM[j = 1 to N_I](1 + C_1·exp(-C_2·j)))   (1)

where:

N = the total number of times the test system is used for automated product and integration testing.

N_I = the number of test cases developed after the initial release.

C_1 and C_2 = curve-fitting coefficients that model learning-curve and code-reuse effects.

The remaining symbols are defined in Table 1.

Figure 2 shows a plot of Equation 1 using the metrics of Table 1 as inputs. The cycle-time improvement is a function of how often the system is used and the number of opportunities to add test cases using the existing test-code foundation.

If no test-script development occurs beyond the initial release, the system would need to be used at least 10 times before a cycle-time improvement begins to be realized and up to 100 times before a 10× improvement is realized. For the scenario where 40 test cases are added to the baseline library, these figures drop to roughly seven uses and under 70 uses, respectively.
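The first of these figures can be checked directly from Equation 1 and Table 1. The short Python sketch below evaluates the no-new-test-case scenario (N_I = 0, so the learning-curve sum drops out):

    # Break-even check of Equation 1 using the Table 1 metrics with N_I = 0.
    M_O, M_I, S = 40.0, 1.7, 0.2      # manual test time, per-test manual time, automated setup/run time (hours)
    D_O, T = 390.0, 26.0              # initial development effort and training (hours)

    def cycle_time_improvement(n_uses, n_new_cases=0, incremental_effort=0.0):
        numerator = n_uses * (S + M_O + M_I * n_new_cases)   # hours of manual testing displaced
        denominator = T + D_O + incremental_effort           # hours invested in the automated system
        return numerator / denominator

    print(cycle_time_improvement(10))     # ~0.97: roughly break-even after 10 uses
    print(cycle_time_improvement(100))    # ~9.7: approaching a 10x improvement near 100 uses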

Cycle-Time Improvement

What are the benefits, and how can these systems be used most effectively? The simulation results in Figure 2 show that significant cycle-time improvement is not achieved unless automated test cases are used early and often in the product development cycle. Also, with a good test-software development framework, new cases can be added easily.

Increasing the cycle-time improvement can be accomplished by reducing both the initial development and the incremental development efforts. Both of these goals can be achieved by planning the architecture of the test-hardware configuration and by following these guidelines for test-software development:1

Understanding the requirements of the system.

Adopting an architecture that facilitates the efficient development, integration, and maintenance of the test software.

Adopting development standards and being disciplined.

The emphasis is on test-system software development because the effort required to implement the hardware platform is fixed and, if properly planned, remains largely unmodified. It is apparent that the organization and planning of the test software have a greater impact on long-term cycle-time reduction.

Although an obvious benefit of such an automated system is the capability to easily perform regression testing, the results also show that it is important to take advantage of the short-term benefits of test automation to realize a significant cycle-time improvement.1 For example, while the automated tests are run once per release for regression testing purposes, testing in the product-firmware development phase might require the same tests to be executed dozens of times. Neglecting to plan for test automation during product development will result in missed cycle-time reduction opportunities.

With an automated test platform available, development test plans should be designed to exploit the capabilities of the test system. Test plans can be defined after the product requirements have been created. Once a test plan is defined, its test cases are easily translated into test-script code.

Designers responsible for creating a software module would also implement product-level test scripts for its functions. In addition, embracing a design-for-auto-test paradigm may promote the addition of diagnostic features in the product code to further facilitate automated testing.

Conclusions

In general, formulating an automated strategy provides more efficient, robust, and repeatable testing. This allows designers to place more emphasis on design by spending less time testing, increasing quality and reducing cycle time.

This effort was a pilot project for future product development as well as a means of creating a baseline library of test software. Aside from the value of the experience gained for future products, this pilot project possibly will be cost-justified, yielding at least a 3× cycle-time improvement based solely on regression-testing demands.

Reference

1. Kaner, C., “Pitfalls and Strategies in Automated Testing,” Computer, Vol. 30, No. 4, April 1997, pp. 114-116.

About the Authors

Anthony Gerkis is a design engineer at Motorola’s Toronto Design Center, working in system architecture and design for wireless digital integrated networks. He holds B.A.Sc. and M.A.Sc. degrees. Motorola Design Center, 3900 Victoria Park Ave., Toronto, Canada M2H 3H7, (416) 756-5893, e-mail: [email protected].

Ajay Arora is working at Motorola through the company’s industrial internship program. He currently is completing a degree in electrical engineering at McMaster University.

Table 1

Software Testing Development Task                       Time (hours)    Symbol

INITIAL RELEASE DATA
Perform Tests Manually                                  40.0            M_O
Initial Development Effort                              390.0           D_O
Training (usage and development)                        26.0            T
Setup and Execution of Test Software                    0.2             S

SUBSEQUENT RELEASE DATA
Average Time to Manually Perform Individual Test        1.7             M_I
Initial Incremental Development Effort per Test Case    26.0            f(C_1, C_2)
Minimum Incremental Development Effort per Test Case    6.5             f(C_1, C_2)

Copyright 1999 Nelson Publishing Inc.

October 1999

