Analyzing software metrics is an important facet of predicting quality in test-system software because it helps reduce complexity. Metrics analysis provides a quantitative assessment of how well the code is structured, which directly relates to how easily the application can be maintained and enhanced.
Additionally, there are well-established visualization techniques for rapidly analyzing code modules throughout the development process. These techniques provide the basis to re-engineer code where necessary and avoid costly rework and maintenance efforts. Where complexity cannot be reduced through redesign, the metrics analysis indicates where concentrated inspections, debugging, and testing should be directed.
Why Software Metrics Are Important
When schedules are compressed, a natural tendency is to iterate into the final software release. This usually is done by coding, then executing the application, and then fixing the code where apparent issues are present. This process is repeated until there are few apparent issues to address.
For even moderately complex applications, this process is unacceptable for many reasons. First, it is costly in terms of both time and money to repeat a manual testing process several times. It also is likely that software created in this fashion is difficult to maintain and is not scalable, making it very difficult to create the next revision.
While there can be no substitute for adequately designing and testing software, given the need to keep development time short, it is desirable to have predictive measures of quality and performance early in the development cycle. These measures give project managers the information necessary to estimate the degree of project completeness, assign tasks and resources appropriately, and make necessary changes when doing so is less expensive and less time-consuming. Software metrics are predictive measures of quality because they identify where complexity can be reduced or where more testing should be applied.
Visualization Techniques
Applying visualization to software metrics is a valuable technique for rapid analysis and decision making. A popular method of visualizing software metrics for a given code module is the Kiviat diagram. This diagram allows multiple components with varied ranges to be shown on the same chart. Each software measurement (metric) is assigned a radial axis. The upper and lower limits for each metric are indicated by the outer and inner concentric circles, respectively. The measured value for each metric then is plotted on its axis, and the connecting lines form a visual pattern of code complexity.
The key to analyzing complexity with the Kiviat diagram is pattern recognition and the use of proper limits. The metrics should be grouped by similar function on the diagram. For instance, all the metrics relating to memory usage should be adjacent to each other so this category is easily identifiable.
As various code modules are analyzed, patterns will develop that indicate if the code is too complex and where the complexity can be minimized. Appropriate limits can be established by gathering data on past projects, using accepted practices, and continually building an internal reference.
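The interaction of measured values and limits on the radial axes can be sketched in code. The following Python fragment is only an illustration of the normalization idea; the metric names and limit values are invented for the example and are not taken from any article data.

```python
# Sketch: normalize metrics onto Kiviat radial axes (metric names and
# limits below are illustrative, not real project data).

def normalize_metric(value, lower, upper):
    """Map a raw metric value onto a 0..1 radial axis.

    0.0 corresponds to the inner circle (lower limit) and 1.0 to the
    outer circle (upper limit); a result above 1.0 exceeds the limit.
    """
    return (value - lower) / (upper - lower)

# Metrics grouped by category so related axes sit adjacent on the diagram.
limits = {
    "total_memory":   (2_000, 40_000),   # bytes
    "diagram_memory": (1_000, 20_000),
    "num_nodes":      (5, 50),
    "num_diagrams":   (1, 8),
}

measured = {"total_memory": 35_000, "diagram_memory": 18_000,
            "num_nodes": 99, "num_diagrams": 10}

radial = {name: normalize_metric(measured[name], lo, hi)
          for name, (lo, hi) in limits.items()}

exceeded = [name for name, r in radial.items() if r > 1.0]
print(exceeded)  # the node and diagram counts exceed their outer limits
```

Plotting each normalized value on its axis and connecting the points produces the recognizable complexity pattern; values that spill past the outer circle stand out immediately.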
The metrics that are analyzed and the limits being applied should vary depending on the type of code. A typical test application comprises several types of code modules such as:
- User interfaces.
- Functional test modules.
- Instrument drivers.
- Analysis routines.
- Common utilities.
Each module type has a dissimilar set of requirements and should be analyzed separately. For instance, the amount of memory used by a user-interface module will be very different from that of a low-level utility.
Inspecting individual code modules with the Kiviat diagram is useful for an in-depth analysis. For moderately sized and large applications, it becomes unreasonable to inspect each code module independently, so it also is important to visualize software metrics from the project perspective.
One method is to define an additional category of metrics that is used as test criteria. These may be duplicates of metrics in other categories or new ones altogether. The key is that this set of test criteria is scanned automatically for all modules in the project, and an overall pass or fail status is provided for each code module. This at-a-glance information can be used as a rough inspection that filters out which modules need further inspection with the Kiviat diagram.
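A project-level scan of this kind reduces to a simple loop over modules and limits. The sketch below assumes hypothetical module names, metrics, and limit values; it shows the shape of the filter, not any particular tool's implementation.

```python
# Sketch: automated pass/fail scan of test-criteria metrics across a
# project (module names, metrics, and limits are hypothetical).

def scan_project(modules, criteria):
    """Return {module_name: passed}. A False entry means at least one
    test-criteria metric exceeds its limit, flagging the module for
    closer inspection with the Kiviat diagram."""
    results = {}
    for name, metrics in modules.items():
        results[name] = all(
            metrics.get(metric, 0) <= limit
            for metric, limit in criteria.items()
        )
    return results

criteria = {"num_nodes": 50, "num_diagrams": 8, "global_writes": 4}

modules = {
    "poly_eval.vi":  {"num_nodes": 99, "num_diagrams": 10, "global_writes": 0},
    "mean_value.vi": {"num_nodes": 12, "num_diagrams": 2,  "global_writes": 1},
}

status = scan_project(modules, criteria)
print(status)  # the first module fails the rough inspection, the second passes
```

The at-a-glance pass/fail column this produces is exactly the rough filter described above: only the failing modules warrant a per-module Kiviat review.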
How Metrics Relate to a Graphical Language
While software metrics visualization is well defined for conventional, text-based languages like C++, it is a new methodology for graphical languages such as LabVIEW. A graphical language is not a visual, syntax-for-syntax equivalent of C++; it is a different paradigm with some similarities and some differences. Consequently, it is unreasonable to map the identical set of metrics from the text-based paradigm to the graphical paradigm.
The most obvious example is the lines-of-code metric. In the text-based paradigm, this is a common and very useful metric. In the graphical paradigm, it has no meaning.
This does not prevent a useful methodology of metrics visualization from being applied to the graphical language. The same benefits that have proven successful for text-based languages can be realized in graphical languages as long as appropriate metrics and categories are defined. An appropriate set of metrics might include those shown in Table 1.
Table 1. Graphical Language Metrics
Memory Usage: Total Memory, Data Space Memory, Code Memory, Diagram Memory, Front-Panel Memory
Front Panel: Number of Inputs, Number of Outputs, Total Objects, Width of Panel, Height of Panel, Percentage of Screen Area
Icon Connector: Total Number of I/O, Number of Inputs, Number of Outputs
Wiring Diagram: Number of Structures, Number of Nodes, Number of Diagrams, Depth Level, Width of Panel, Height of Panel, Percentage of Screen Area, Number of Attribute Reads, Number of Attribute Writes
Data Coupling: Total Number of Read/Writes, Number of Global Reads, Number of Global Writes, Number of Local Reads, Number of Local Writes
External Calls: Total Number of Calls, Number of DLL Calls, Number of CIN Calls
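For tool builders, a taxonomy like Table 1 is easy to capture as plain data. The Python sketch below shows one possible representation that an automated scanner could iterate over; the category grouping and metric names follow Table 1, while the data structure itself is only an illustration.

```python
# Sketch: Table 1's category/metric taxonomy as plain data a metrics
# scanner could iterate over. Grouping by category also mirrors the
# Kiviat-diagram advice to keep related axes adjacent.

GRAPHICAL_METRICS = {
    "Memory Usage": ["Total Memory", "Data Space Memory", "Code Memory",
                     "Diagram Memory", "Front-Panel Memory"],
    "Front Panel": ["Number of Inputs", "Number of Outputs", "Total Objects",
                    "Width of Panel", "Height of Panel",
                    "Percentage of Screen Area"],
    "Icon Connector": ["Total Number of I/O", "Number of Inputs",
                       "Number of Outputs"],
    "Wiring Diagram": ["Number of Structures", "Number of Nodes",
                       "Number of Diagrams", "Depth Level",
                       "Width of Panel", "Height of Panel",
                       "Percentage of Screen Area",
                       "Number of Attribute Reads",
                       "Number of Attribute Writes"],
    "Data Coupling": ["Total Number of Read/Writes", "Number of Global Reads",
                      "Number of Global Writes", "Number of Local Reads",
                      "Number of Local Writes"],
    "External Calls": ["Total Number of Calls", "Number of DLL Calls",
                       "Number of CIN Calls"],
}

total = sum(len(metrics) for metrics in GRAPHICAL_METRICS.values())
print(total)  # 31 individual metrics across 6 categories
```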
Applying Metrics Visualization
Our example focuses on a simple analysis routine used as an elemental component within a test application. This analysis routine finds y for a given x value in a polynomial equation (y = a₀ + a₁x + a₂x² + a₃x³ + …). The project listing showed that this module was one of several failing the test criteria metrics, indicated by an easily recognizable fail mark.
This prompted further inspection of the polynomial-analysis routine using the Kiviat diagram (Figure 1a). It showed that both memory and wiring diagram components were exceeding the acceptable limits. Table 2 (see below) shows the specific metrics of interest that exceeded the limits for the polynomial function.
Table 2. Failures of Original Design (detailed table not reproduced; the failing metrics were in the Memory and Wiring Diagram categories)
The analysis prompted a code redesign to make these areas more efficient. The resulting function met its objective by reducing the values of the metrics that had previously failed (Figure 1b).
It is important to note that the limits are somewhat arbitrary, based on internal coding guidelines and best practices or past performance. In this case, the limits prompted a change in the code. Table 3 (see below) shows the relative comparison of the metrics values with the original and revised code.
Table 3. Metrics Comparison (columns: Metric, Value – Original, Value – Revised, Improvement; table values not reproduced)
To understand why the revised code was better, we first must analyze the approach taken to produce the intended algorithm. The design of the function is simple. An array of coefficients and a value for x are passed into the function representing a polynomial equation (y = a₀ + a₁x + a₂x² + a₃x³ + …) as shown in Figure 1c (see March 2001 issue of Evaluation Engineering). If only two coefficients are passed in, then it is a first-order equation of the form y = a₀ + a₁x. In this regard, the function adapts its order to the number of coefficients.
In the original algorithm, this adaptation was done explicitly for each possible order between 0 and 7. The incoming array of coefficients first was sized to determine the order. A unique case existed for each possible order between 0 and 7, and the logic flow then was exercised in one of these unique cases to extract the coefficients from the array. The resulting coefficients were passed out of the case to the mathematical equation that determined the Y value. Since there were eight unique cases [0 .. 7] and each case provided the resulting coefficients, the number of nodes was exceedingly high (99), and the number of diagrams also was high (10).
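The revised, scalable approach can be sketched in a text-based language for comparison. This Python version is an illustrative stand-in, not the article's actual graphical code: the polynomial order is implied by the length of the coefficient array, so no per-order cases are needed.

```python
# Sketch: scalable polynomial evaluation in the spirit of the revised
# design, expressed in Python rather than the graphical language.

def poly_eval(coefficients, x):
    """Evaluate y = a0 + a1*x + a2*x^2 + ... for any number of
    coefficients; the order is implied by len(coefficients)."""
    y = 0.0
    for power, a in enumerate(coefficients):
        y += a * x ** power
    return y

# Two coefficients -> first-order equation y = a0 + a1*x
print(poly_eval([1.0, 2.0], 3.0))         # 7.0
# Works equally well beyond a fixed coefficient limit
print(poly_eval([0.0] * 9 + [1.0], 2.0))  # 512.0 (x^9)
```

A single loop replaces the eight explicit cases, which is precisely why the node and diagram counts drop; Horner's rule could reduce the arithmetic further, but even this direct form removes the per-order branching.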
The number of nodes and diagrams (isolated sections of code) in a graphical language are the best representatives of lines of code or function points in a text-based language. These text-based metrics typically are the gauge for determining the bug count within a function or within the overall application.
It is well established that there are fewer bugs with fewer lines of code and fewer function points. As a result, we can safely say that the original algorithm in our example was much more likely, by roughly 60% to 75%, to produce errors than the revised code.
Another important advantage of the revised code is that it is scalable to a polynomial of any order, that is, to any number of coefficients, whereas the original algorithm supported up to only seven coefficients (Figure 1d, see March 2001 issue of Evaluation Engineering). This has ramifications for both testability and maintainability.
One way to trap errors in any code is to perform sufficient testing before release. Unit testing is a comprehensive method of exercising an individual function. For the original algorithm in our example, we would need at least eight separate tests to properly cover the explicit cases that processed the coefficients. In the revised code, we could be confident with two or three tests since the algorithm was scalable and did not have explicit cases to handle the coefficients. Also, the diagrams metric closely tracks the proper number of test cases: it was 10 for the original algorithm and four for the revised code.
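To illustrate why the scalable design needs so few tests, here is a hypothetical unit-test set in Python. The poly_eval function is an illustrative stand-in for the revised routine (defined here so the sketch is self-contained); a constant, a linear, and one higher-order case cover its single execution path.

```python
# Sketch: the small unit-test set a scalable design permits. poly_eval
# is a hypothetical stand-in for the revised routine, not actual code
# from the article's project.

def poly_eval(coefficients, x):
    return sum(a * x ** p for p, a in enumerate(coefficients))

def test_constant():
    assert poly_eval([4.0], 10.0) == 4.0                 # order 0

def test_linear():
    assert poly_eval([1.0, 2.0], 3.0) == 7.0             # order 1

def test_high_order():
    assert poly_eval([0.0, 0.0, 0.0, 1.0], 2.0) == 8.0   # x^3

for test in (test_constant, test_linear, test_high_order):
    test()
print("all tests passed")
```

By contrast, a per-order implementation would need one such test for every explicit case, so the test count grows with the number of cases rather than staying fixed.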
From the perspective of maintainability, the original algorithm is limited to seven coefficients. While this may be sufficient to cover a wide range of applications, it reduces its flexibility as a generic analysis module that can be used in any environment.
To make changes to this algorithm without a redesign, additional cases would have to be added to cover a larger set of coefficients. In addition to adding cases, each of the original ones would have to be modified as well. The result would be an even more complex function, having a higher likelihood of errors and requiring additional tests to be created.
Most importantly, this cycle would repeat the next time a user exceeded the fixed number of coefficients. In short, the complexity, time, and cost associated with each fix all increase sharply, and the underlying problem still is not solved.
We chose to analyze a simple function for several reasons. First, it points out how the right tools and analysis can identify issues that need to be addressed in a large and complex software application. An automated inspection tool, OverVIEW™, was used to scan the metrics against user-defined limits. From the project level, we were able to determine which modules needed further analysis. A peer review may not catch these items, and even if it does, it is less expensive to apply automation and solve the issue before the code goes to review.
The second point of this example highlights how the simplest of code modules can be improved and how this relates to other modules within the overall project. Since most large applications comprise several hundred, and sometimes thousands, of individual modules, imagine how much complexity can be reduced if this example is extrapolated to all the modules in the project that fail the metrics inspection.
Metrics and Project Management
There are many facets to project management as it relates to a software application. The primary task is to understand and manage the complexities to keep the project on schedule and within budget. Software metrics analysis is a vital component of this task because it provides the quantitative measure of complexity related to the project.
Resource Allocation
One dimension of managing a complex project is to assign tasks and allocate resources for development activities. Visualizing software metrics from the project and individual module perspectives provides the necessary information to make critical decisions early in the development process when it is less expensive and easier to make changes.
Standards Compliance
In addition to allocating resources, the project manager must deliver a product that meets or exceeds expectations. It may be part of an internal coding standard to assure that all code is compliant with a given set of metrics and limits. Project-level metrics analysis applied to this user-defined set gives confidence and proof that the entire project complies with department standards.
Conclusion
Having quantitative information about a software application is key to properly managing the project and vital to assuring quality. Metrics visualization has been used within text-based languages and now can be effectively applied to graphical languages such as LabVIEW.
There are several aspects to metrics analysis. Some relate to optimization, and some relate to performance. In all cases, metrics analysis allows issues to be identified earlier in the development cycle where it is less expensive to redesign, making code more maintainable, easier to test, and more reusable. This analysis also can be used as a valuable project management tool to quantitatively assess the development process and properly assign resources to keep projects on track and within budget.
About the Authors
Gregory Swanson is responsible for product and service marketing at TimeSlice. He has been in the PC automation industry for the past eight years in various development and marketing roles. Mr. Swanson holds B.S.E.E. and M.S.E.E. degrees from the University of Minnesota.
Lee Globus is responsible for strategic product development at TimeSlice. He has been involved with production testing and software engineering for the past 13 years and holds a B.S.E.E. degree from California State University and a master’s degree in software engineering from St. Thomas University.
TimeSlice, 5012 Upton Ave., Suite 200, Minneapolis, MN 55410, 612-270-1890, www.tslice.com.
Published by EE-Evaluation Engineering
All contents © 2001 Nelson Publishing Inc.
No reprint, distribution, or reuse in any medium is permitted
without the express written consent of the publisher.
March 2001