Electronic Design
Making Unit Testing Practical for Embedded Development

The idea of unit testing has been around for many years. "Test early, test often" is a mantra that concerns unit testing as well. However, in practice, not many software projects have the luxury of building and maintaining a decent and up-to-date unit test suite. This may change, especially for embedded systems, as the demand for delivering quality software continues to grow. International standards such as IEC-61508, ISO-26262, and DO-178B require module testing for a given functional safety level. Unit testing at the module level helps to achieve this requirement. Yet, even if functional safety is not a concern, the cost of a recall—both in terms of direct expenses and in lost credibility—justifies spending a little more time and effort to ensure that our released software does not cause any unpleasant surprises.

Nevertheless, testing embedded system software presents a unique challenge. Since the software is being developed on a different platform than the one it will eventually run on, you cannot readily run a test program in the actual deployment environment, as is possible with desktop programs. There are several explanations for this disconnect. Engineers may not yet have target hardware, hardware may cost too much to give software developers access to it, the full environment may be difficult to replicate in a development shop, and so forth.

This article provides an overview of how unit testing can help developers of embedded systems software address this challenge. In a nutshell, it recommends leveraging stubbing to perform a significant amount of testing from the host environment or on a simulator. This allows you to start verifying code as soon as it is completed—even if the target hardware is not yet built or available for testing. As a result, the majority of the problems with the application logic can be exposed early—when error detection and remediation is easiest and fastest—and target testing can focus on verifying the interface between the hardware and the software.

Unit Testing Basics

Unit testing is a well-known concept. Essentially, it involves taking a single function or method of a class (a unit) and invoking it with a given set of parameters. Then, when the execution finishes, the outcome is checked against the expected result. Code that accomplishes this is called a test case. Checking the outcome is usually done with some form of assertion. For example, assume you have the following function "foo":

int foo(int a, int b) { return b - a - 1; }

A test case might look like this:

void test_foo() { int ret = foo(1, 2); assertTrue(ret == 0, "Wrong value returned!"); }

Often, "unit testing" refers not only to test cases invoking a single function or method, but also to test cases invoking an interface to a module or library. In other words, the terms "module" and "unit" testing are commonly used interchangeably.

Unit Testing Benefits

There are a number of benefits to unit testing. When creating a unit test case, the developer tests at a very low level and can drive execution to parts of the code that are normally not covered by high-level functional tests. That way, "corner cases" and the handling of abnormal situations can be tested.

The second important benefit stems from the fact that doing unit testing forces the developer to write "testable" code. This usually results in code that is better decomposed, not overly complex, and all-around better designed.

Another benefit is that suites of unit test cases establish a great safety net for your application—so you do not have to be afraid of modifying it. That is especially important if you want to refactor your code, or when you deal with old, legacy code that you do not know well any more. Typically, in such situations, developers are afraid to touch anything for fear of introducing errors. With this safety net, you can modify code with the confidence that if you break something, you will be alerted immediately. That translates to better productivity and better code.

Last but not least, unit test cases expose errors very early in the development cycle. According to well-known studies, fixing an error early is much cheaper than fixing that same error late in the integration test phase or in the system test phase. These reasons led to the invention of Test-Driven Development (TDD), which calls for the developer to create a unit test case for each piece of functionality before starting to implement it.
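A TDD workflow can be sketched in C as follows. The function clamp() and its test are purely illustrative assumptions, not taken from the article; under TDD, the test below would be written first and would fail until the implementation exists:

```c
#include <assert.h>

/* Step 2: the simplest implementation that makes the test pass. */
int clamp(int value, int lo, int hi)
{
    if (value < lo) return lo;
    if (value > hi) return hi;
    return value;
}

/* Step 1: written before the implementation, expressing desired behavior. */
void test_clamp(void)
{
    assert(clamp(5, 0, 10) == 5);   /* in range: unchanged  */
    assert(clamp(-3, 0, 10) == 0);  /* below range: clamped */
    assert(clamp(42, 0, 10) == 10); /* above range: clamped */
}
```

The cycle then repeats: add the next failing test, extend the implementation until it passes, and refactor with the test as a safety net.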

Unit Testing Obstacles

If unit testing is so great, then why isn't it done on every project? Probably because it inevitably involves a certain amount of work—even for simple cases.

Recall the simplistic example from above. First, arguments to the function do not have to be simple types. They may be complicated structures that need to be initialized properly for the test to make any sense. Second, the function under test does not have to return a simple type. It can also refer to external variables, which again do not have to be simple types. Finally, the function "foo" may call another one, "goo", which, for example, talks to a real-world hardware sensor, file, database, network socket, or USB port, or receives user input from a GUI, and thus will not operate properly in isolation.

To prepare a useful unit test case for this non-trivial "foo" requires a lot of work: proper initialization of all variables that the function under test depends on, stubs/drivers for functions that we do not want to call (like "goo"), intelligent post-condition checking, and so on. Then all of this has to be built and run, and it must recover gracefully if a problem occurs. The final steps involve preparing a report that shows the test execution results and which lines, statements, or branches were covered during execution. And all of this must be maintained as the code evolves.
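As a sketch of what such a stub might look like in C (the names foo and goo and the sensor-style behavior are illustrative assumptions, not from a real project), the test build links a stub in place of the hardware-dependent "goo":

```c
#include <assert.h>

int goo_stub_value;   /* value the stub will return                  */
int goo_stub_calls;   /* lets the test verify goo() was invoked      */

/* Stub linked in place of the real goo(); no hardware is touched. */
int goo(void)
{
    ++goo_stub_calls;
    return goo_stub_value;
}

/* Function under test: depends on goo(), which normally reads hardware. */
int foo(int threshold)
{
    return goo() > threshold ? 1 : 0;
}

void test_foo_with_stub(void)
{
    goo_stub_value = 100;          /* simulate a high reading */
    assert(foo(50) == 1);
    goo_stub_value = 10;           /* simulate a low reading  */
    assert(foo(50) == 0);
    assert(goo_stub_calls == 2);   /* dependency was exercised twice */
}
```

Because the stub is controlled from the test, both branches of foo() can be driven without the real device.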

Sound like a lot of work? It is. This is probably the #1 reason why unit testing is so rare in real-world software projects.

Why Unit Test Embedded Systems Software?

In the context of embedded software development, unit testing is an even greater challenge. On the one hand, it is simpler because often only C code is used—and when C++ is used, it is only a simplified subset of it. However, on the other hand, unit test cases need to be deployed on a target board, or at least on a simulator. The code prepared for testing, together with all the test cases and test data, must be transferred to the target board, then executed. Finally, test outcomes must be collected and transferred back to the host, which is where they can be analyzed. This adds additional overhead on top of the work described in the previous section.

Despite this overhead, there are significant benefits. Applying unit testing in the host environment or on a simulator allows testing to start much earlier (concurrent with code development) and largely decouples the testing task from the availability of target hardware. One of the premises of unit testing is that code is tested in isolation from the rest of the system, which is emulated by stub functions in such a scenario. In this manner, most of the functionality of the code under test can be verified independent of the rest of the system, without running on the target hardware. This has two extremely important benefits for embedded developers.

  • Unit testing lets you start the test cycle before the hardware is available, and you can perform the initial testing directly on the development platform (rather than on the target). Early testing gives the team more time to find and repair defects. In addition, early testing distributes test efforts across the product-development cycle and helps prevent the 11th-hour testing rush.
  • Unit testing promotes a "divide-and-conquer" strategy that lets you partition complex systems so they can be tested in quasi-independent modules. Test tools manage any module-to-module software dependencies and use stubs to simulate them (Fig. 1).
Protecting Code Integrity as Applications Evolve

One of the most significant worries for developers of complex systems is that code modifications might change or break existing functionality. To address this concern, you can create a baseline unit test suite that captures the project code's current functionality. To detect changes from this baseline, you run your evolving code base against this test suite on a regular basis. Because unit tests can test parts of the system's code in isolation, such a regression suite can be continuously executed without having access to the target hardware. This type of testing does not exclude separate application regression tests, which test the overall application.

The resulting test suite serves as a change-detection safety net; you can rest assured that if you accidentally break existing functionality, you will be notified immediately. As new target hardware becomes available, you can leverage your host-based tests to validate that the code operates properly on the target hardware under realistic conditions. Running the regression suite on a host system does not eliminate the need for equally well-automated system tests on the target hardware.
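A baseline regression suite does not require elaborate infrastructure. The following minimal C sketch (the test functions and runner are illustrative assumptions) registers tests in a table and reports failures through the return code, so a nightly host build can flag regressions automatically:

```c
#include <stdio.h>

typedef int (*test_fn)(void);   /* returns 1 on pass, 0 on fail */

/* Placeholder regression tests; real ones would exercise application code. */
static int test_addition(void)    { return 2 + 2 == 4; }
static int test_subtraction(void) { return 5 - 3 == 2; }

/* Run every registered test and report a summary; a nonzero return value
 * lets a build script or CI job fail the build on any regression. */
int run_suite(void)
{
    struct { const char *name; test_fn fn; } tests[] = {
        { "addition",    test_addition },
        { "subtraction", test_subtraction },
    };
    int failed = 0;
    for (unsigned i = 0; i < sizeof tests / sizeof tests[0]; ++i) {
        int ok = tests[i].fn();
        printf("%-12s %s\n", tests[i].name, ok ? "PASS" : "FAIL");
        if (!ok) ++failed;
    }
    return failed;
}
```

Adding a new test is just one table entry, which keeps the suite cheap to maintain as the code evolves.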

Verifying Error Handling

System test scenarios are further complicated by requirements for reliable error handling in consumer products. Since the use scenarios cannot be reliably predicted, systems cannot be designed and tested just for the nominal case. Rather, the system must be verified to handle a broad range of incorrect and unexpected inputs.

Error testing is also much simplified by unit testing with stubs. In general, testing error conditions at the application level can be very time consuming, because putting the application into the "proper state of error" may require preparing relatively complex input data and driving the application into the appropriate state out of a large state space. In contrast, it is very easy to test error handling for a given function using an approach of "error simulation." For example, a function whose error handling is connected with its inputs is easy to test:

float signalToNoiseRatio(float signal, float noise, MODE mode)
{
    if (MODE_MEASUREMENT == mode)
    {
        if (signal < 0 || noise < 0)
        {
            handle_bad_data();
        }
    }
    /* ... compute and return the ratio ... */
}

In this case, it is simple to test the call to handle_bad_data() in context because the expression of the corresponding if statement is controllable from the inputs to the function.
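A test case for this directly controllable case might look like the sketch below. Since the article shows only an abridged body for signalToNoiseRatio(), the MODE type, the return values, and the handler instrumentation are illustrative assumptions; the point is that a single negative input drives execution into handle_bad_data():

```c
#include <assert.h>

typedef enum { MODE_MEASUREMENT, MODE_CALIBRATION } MODE;

static int bad_data_reported;                  /* observed by the test */
static void handle_bad_data(void) { bad_data_reported = 1; }

/* Simplified stand-in for the function under test; the normal-path
 * computation is an assumption, since the article abridges it. */
float signalToNoiseRatio(float signal, float noise, MODE mode)
{
    if (MODE_MEASUREMENT == mode) {
        if (signal < 0 || noise < 0) {
            handle_bad_data();
            return 0.0f;
        }
    }
    return noise != 0.0f ? signal / noise : 0.0f;
}

void test_bad_input_triggers_handler(void)
{
    bad_data_reported = 0;
    signalToNoiseRatio(-1.0f, 2.0f, MODE_MEASUREMENT);  /* negative signal */
    assert(bad_data_reported == 1);   /* error handler was reached */
}
```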

More often, however, control conditions are not directly controllable from the function interface, but rather depend on a specific system state, as in the example below. Putting the system in that error state may be quite complicated, or may even involve polling the status of a device interface, so this condition needs to be simulated in a test case.

void shutDown()
{
    if (uploadingData())
    {
        userMessage("Cannot execute shutdown while uploading data");
        recoverShutDown();
    }
    else
    {
        // shut down indeed
    }
}

Using advanced test tools that support "smart stubs" (for instance, Parasoft C/C++test), unit testing for complex error conditions is no more difficult than the previous case. "Smart stubs" allow execution of the original function being stubbed as well as any specific behavior necessary for testing. Practically speaking, while the application being tested is clearly not in the error state, the function being tested is invoked as if the specific application error had actually occurred, hence the term "error simulation." In the above example, testing the error handler requires the stub for uploadingData() to have at least one case where it returns TRUE.
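Without a dedicated tool, the same error-simulation idea can be hand-rolled in C, as in this sketch (the control flag and the recorded state are illustrative assumptions; commercial "smart stubs" automate this wiring):

```c
#include <assert.h>

/* Test control knob: lets the stub report an upload in progress. */
static int simulate_upload_in_progress;
static int shutdown_recovered;            /* records the error path */

/* Stubbed dependencies; no real device or GUI is involved on the host. */
int uploadingData(void) { return simulate_upload_in_progress; }
void userMessage(const char *msg) { (void)msg; }
void recoverShutDown(void) { shutdown_recovered = 1; }

void shutDown(void)
{
    if (uploadingData()) {
        userMessage("Cannot execute shutdown while uploading data");
        recoverShutDown();
    }
    /* else: shut down indeed */
}

void test_shutdown_blocked_during_upload(void)
{
    simulate_upload_in_progress = 1;   /* error simulation: force TRUE */
    shutdown_recovered = 0;
    shutDown();
    assert(shutdown_recovered == 1);   /* error handler executed */
}
```

The application never has to be put into a real upload state; the stub makes shutDown() behave as if it were.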

Additional Considerations

Testing on a host system may imply that the compiler used to build the code differs from the compiler used to produce code for the target hardware. If the cross-compiler vendor also supplies a compiler for the development platform (for example, the native compilers from Green Hills Software), take this route. Alternatively, you can freely use the GNU Compiler Collection (GCC), available for many host systems. Although keeping the code portable between the host and target compilers may slightly increase the software-maintenance cost, the benefits of early testing outweigh this expense.
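One common way to keep code portable between host and target builds is to isolate hardware access behind a small function and select the implementation at compile time. This is only a sketch; the macro name TARGET_BUILD, the register address, and the stub variable are illustrative assumptions:

```c
#include <stdint.h>

/* Test double used on the host build; the test harness sets this value. */
uint8_t stubbed_status = 0;

uint8_t read_status_register(void)
{
#ifdef TARGET_BUILD
    /* Cross-compiled build: read the real memory-mapped register. */
    return *(volatile uint8_t *)0x40001000u;
#else
    /* Host build: return the stubbed value instead of touching hardware. */
    return stubbed_status;
#endif
}
```

The rest of the application calls read_status_register() identically in both builds, so only this thin layer differs between the host and target compilers.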

Unit testing is unlikely to uncover error conditions caused by synchronization errors at the application level, or errors that occur at the interfaces with real devices. However, in the development of embedded software, unit testing helps you identify many types of defects much earlier, improving the overall efficiency of your system development and removing test bottlenecks.

Functional Safety Relevance of Unit Testing

The issue of certification in relation to functional safety is one of the key issues of today's and tomorrow's electrical/electronic/programmable electronic systems. New functionalities increasingly touch the domain of safety engineering. Each function that is required to keep a risk at an accepted level is called a safety function. To achieve functional safety, these functions need to fulfill safety function requirements (what the function does) and safety integrity requirements (the likelihood of a function behaving in a satisfactory manner). Future development and integration of the functionalities containing safety functions will further strengthen the need to have safe system development processes and to provide evidence that all reasonable safety objectives are satisfied. With the trend of increasing complexity, software content, and mechatronic implementation, there are rising risks of systematic failures and random hardware failures. An international standard, IEC-61508, includes guidance to reduce these risks to a tolerable level by providing feasible requirements and processes.

Safety Integrity Levels

Safety Integrity Level (SIL), as defined by the IEC-61508 standard, is one of four levels (SIL1-SIL4) corresponding to the range of a given safety function's target likelihood of dangerous failures. Each safety function in a safety-related system needs to have the appropriate safety integrity level assigned. An E/E/PE safety-related system will usually implement more than one safety function. If the safety integrity requirements for these safety functions differ, unless there is sufficient independence of implementation between them, the requirements applicable to the highest relevant safety integrity level shall apply to the entire E/E/PE safety-related system. According to IEC-61508, the safety integrity level for a given function is evaluated based on either the average probability of failure to perform its design function on demand (for a low demand mode of operation) or on the probability of a dangerous failure per hour (for a high demand or continuous mode of operation).

The IEC-61508 standard specifies the requirements for achieving each safety integrity level. These requirements are more rigorous at higher levels of safety integrity in order to achieve the required lower likelihood of dangerous failures.

Other Safety Standards

IEC-61508 is not the only functional-safety standard. Some of the others are derived from it to address particular industry specifics, while others were developed independently. Some are more strict (for instance, those related to airborne systems) while others are more relaxed. The underlying concepts, though, are similar, so unit testing proves indispensable almost everywhere. Discussing all functional-safety standards is far beyond the scope of this article, but we briefly mention a few below for reference. For more details on a particular standard, see the related reference documents or contact specialists in that domain.

ISO-26262 – This is the adaptation of IEC-61508 to comply with needs specific to the application sector of E/E systems within road vehicles. As of September 2011, this standard was still under publication.

ASIL (Automotive Safety Integrity Levels) – This is the equivalent of SIL defined by the ISO-26262 standard. It specifies the necessary safety measures for avoiding an unreasonable residual risk, with D representing the most stringent level and A representing the least stringent level.

DO-178B – "Software Considerations in Airborne Systems and Equipment Certification" is a standard for software in airborne systems and equipment used on aircraft and engines. It is industry-accepted guidance for satisfying airworthiness requirements.

IEC-60880-2 – This is the adaptation of IEC-61508 used in safety systems of nuclear power plants.

EN-5012X/EN-50128/EN-50129 – This is the adaptation of IEC-61508 used for rail transportation.

Wrap up

Admittedly, unit testing is not free. Work is required to set it up properly, and time and effort are required to maintain it effectively. For embedded systems software development, unit testing presents additional challenges, which can be overcome in the ways discussed in this article. You need to understand this before you start; otherwise, you're likely to be disappointed. On the other hand, unit testing can give you huge benefits, such as helping you to create better code, build a regression test suite, achieve a desired Safety Integrity Level, or obtain DO-178B certification.
