System security moves up the "must-have" scale

It's essential to balance reliability, security and time-to-market for today's embedded applications.

Embedded systems designers often believe they must make difficult tradeoffs between security, reliability, and time-to-market. Most decide to prioritise time-to-market, with reliability second and security a distant third. But times change. Security is emerging as a requirement for any device with network connectivity, so third isn't so distant anymore.

Advances in microprocessors, operating systems and development tools also mean these tradeoffs are no longer as difficult to balance. It's possible to design for higher reliability, build the necessary foundation for security, and even get to market faster.

This can only be done by choosing the right architecture—a combination of processor, operating system, and software components—and designing from the outset for high levels of reliability. Some amount of reliability can be "tested into" a product using proven architectures and development techniques. But high reliability—the kind you need to build a bulletproof solution—has to be designed in from the start. It can't be added on or tested into existence.

It's engineering common sense to design an application in a modular way, dividing the application into software components and controlling their interaction through well-defined interfaces. But delivering the highest levels of reliability takes more: the ability to partition, isolate, and separate, not just modularise.

For the highest levels of reliability, modules must be placed in memory regions that are isolated from one another. In addition, the processor's memory-management hardware, along with a suitable operating system, must be used to control communication and machine resources as well as enforce separation.

Hardware separation can, unsurprisingly, provide higher levels of reliability. What may be surprising is that it can also get complex products to market faster.

Key points

There are six key points an engineer should consider. The first two are to modularise and then partition your design, so that each module carries out a single function, is isolated from all the others, and (optionally) is monitored by a watchdog process in another module that restarts it if it fails. The result is that your system is more reliable than its least reliable component; in other words, you can meet system reliability goals with less effort.

It's also essential to keep your modules simple: complexity creates vulnerabilities (a security concern) and non-repeatable behaviour (a development and reliability concern). A heterogeneous multicore design adds complexity. If possible, avoid this source of complexity by obtaining the necessary processing power and functionality in a single processor.

The third step is to make sure you're using a processor and operating system with non-bypassable memory management. Building a non-trivial reliable or secure product without memory protection is impossible. Memory protection lets you partition your system and practice "defence in depth." A vulnerability in one part of the system can't cascade to affect the rest.

Fourth, keep everything out of the kernel. The kernel is the only component of your system that every other part of the system depends on, so it has to be rock solid. Anything that you run in kernel space can only lower reliability and open the door to security problems.

Fifth, choose a kernel architecture that has met recognised standards for security and reliability. Anyone can say that their operating system is reliable and secure; few have independent verification of their claims. In the security domain, choose an architecture based on the Separation Kernel Protection Profile being developed by the National Security Agency (NSA). It's the emerging standard for secure architectures, and it requires Common Criteria assurance at EAL6+.

Finally, make sure you use tools that enforce a high-security coding standard. For example, when writing code in C, it's easy to fall into bad practices that can leave your system vulnerable. MISRA C—a safe subset of C—was developed by the Motor Industry Software Reliability Association for the automotive industry, where much of the code is reliability- and safety-critical. It's best to adopt the MISRA standard internally and choose tools that enforce it.

Things not to do

There are certain things to avoid when trying to build a secure embedded network product. First, don't underestimate the value of reliability: not only does it make products more popular and more profitable, it can reduce development headaches as well.

It's essential not to put off testing and bug-fixing until the product is "feature-complete." Your goal should be that when the product is feature-complete, you can ship it. Products get to market faster when they're reliable from the start and kept reliable throughout development, using tools such as memory leak detection, run-time error checking, and hardware protection to find latent bugs early. Such testing should run continuously: when an engineer introduces a bug into an otherwise clean product, it's spotted and can be fixed immediately, while the offending code change is still fresh in the engineer's mind.

It's also important not to assume you can re-design for security or reliability later in the cycle. If security is even remotely on the horizon, or if you know reliability will become important, design for them now. They can't be bolted on later.

Finally, if you're building a network product, don't assume it can't be a target. It may not contain valuable data now, but simply by being on a network it may provide access to information beyond the product itself. And as your product matures, it may come to hold information that makes it worth attacking in its own right—with threat technology evolving so quickly, previously uninteresting targets like your product may become worthwhile ones.

Of course, systems with the very highest levels of security—the kind that fly airplanes and will someday drive cars—need a reliability and security-oriented architecture, as well as a very rigorous development process. But any products with high functionality, including consumer products and industrial control products, can benefit from the same underlying technology, along with a more economical development process to deliver reliability, security, and time-to-market together.

Wayne Meyer is Blackfin product manager for Analog Devices.
