Without software, life as we know it would grind to a halt. There would be no Web, no e-commerce, and no way to manage today's incredibly complex business and manufacturing environments. In biological terms, software has become a keystone species: Everything else benefits from it, everything else depends on it.
The fact is, software is like a teenager on the cusp of adulthood. It has grown immensely in recent years, but we are just beginning to glimpse its potential. To realize that potential, it must clean up its act. Developers must assume that in today's increasingly complex and highly connected environments, the unexpected will occur. From day one, they must embed the appropriate safeguards into their applications.
The situation today is reminiscent of the 1930s, when France completed a masterpiece of military engineering called the Maginot Line. Bristling with over 50 forts, it provided the country's eastern frontier with a virtually impregnable line of defense—until one day, the German army simply walked around it.
Sadly, when it comes to software reliability and security, the "Maginot mindset" reigns supreme. Applications, even OSs, are still being designed with the tacit—and erroneous—assumption that bugs and malware won't get around the verification efforts, authentication protocols, and other protective measures that constitute software's Maginot Line.
The reality is very different. Hard-to-detect programming errors make their way past test and verification teams and into the final product, as anyone who has experienced the Blue Screen of Death will attest. Viruses and hackers, meanwhile, can infiltrate a networked system using tactics that the system's designers didn't, or perhaps couldn't, anticipate. As systems everywhere become more software-intensive and more connected, the potential for such vulnerabilities will only grow, not just on desktops and servers but in billions of embedded devices.
What's at stake here isn't simply the protection of applications or data. Rather, the very ability of software to usher in the next wave of innovation hangs in the balance.
Take the Web services industry. On the one hand, it holds immense potential for simplifying the task of monitoring, configuring, and provisioning remote devices, from industrial controllers to telematics systems to HVAC control units. On the other hand, this connectivity opens the possibility that such devices will be infiltrated by malevolent parties or applications.
Fortunately, solutions are at hand. There are approaches to persistent storage, for instance, that can place "bubbles" around files and memory, preventing unauthorized access. Likewise, there are approaches to partitioned scheduling that can prevent poorly written or malicious code from starving critical tasks of CPU time.
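To make the partitioning idea concrete, here is a toy sketch in Python. It is an illustration only, with hypothetical names; a real partitioning scheduler (such as the time-and-space partitioning defined by ARINC 653) enforces these budgets inside the kernel. The point is the same: each partition is guaranteed a fixed share of CPU time per cycle, so a runaway task in one partition cannot starve another.

```python
# Toy model of partitioned (budget-based) scheduling.
# All names are hypothetical; real enforcement happens in the OS kernel.

class Partition:
    def __init__(self, name, budget_pct):
        self.name = name
        self.budget_pct = budget_pct  # guaranteed CPU share per cycle
        self.tasks = []

    def add_task(self, name, demand_ms):
        self.tasks.append({"name": name, "demand_ms": demand_ms})

def run_cycle(partitions, cycle_ms=100):
    """Give each partition its guaranteed slice of the cycle.
    A task that exhausts its partition's budget is preempted,
    not allowed to overrun into another partition's time."""
    schedule = []
    for p in partitions:
        slice_ms = cycle_ms * p.budget_pct // 100
        used = 0
        for task in p.tasks:
            granted = min(task["demand_ms"], slice_ms - used)
            schedule.append((p.name, task["name"], granted))
            used += granted
            if used >= slice_ms:
                break  # budget exhausted: remaining work waits for the next cycle
    return schedule

critical = Partition("critical", 60)
critical.add_task("control-loop", 40)
best_effort = Partition("best-effort", 40)
best_effort.add_task("runaway", 10_000)  # misbehaving task demanding 10 seconds

for entry in run_cycle([critical, best_effort]):
    print(entry)
```

Even though the runaway task demands 10,000 ms, it is granted only its partition's 40 ms slice, and the critical control loop runs unimpeded.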
Don't forget protected-mode computing, a critical first step toward ensuring the reliability of virtually any software-rich device. Many device designers and application developers, especially in the embedded space, fail to embrace memory protection, even though it can contain faults and prevent errant processes from corrupting the code or data of other processes. With the proliferation of low-cost, MMU-enabled embedded processors, such protection is becoming more affordable. In fact, developers of connected devices must ask whether they can afford not to use it.
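The containment that memory protection buys can be demonstrated with a small POSIX-only Python sketch: a child process that dereferences an invalid address is stopped by an MMU-trapped fault, which the OS delivers as SIGSEGV. Only the offending process dies; the parent's memory is untouched and it carries on.

```python
# Demonstrates fault containment via MMU-backed process isolation.
# POSIX-only sketch: relies on os.fork(), so it won't run on Windows.
import os
import signal
import ctypes

def errant_task():
    # Read from address 0: the MMU raises a fault, which the OS
    # delivers as SIGSEGV -- terminating only this process.
    ctypes.string_at(0)

pid = os.fork()
if pid == 0:
    errant_task()
    os._exit(0)  # never reached
else:
    _, status = os.waitpid(pid, 0)
    if os.WIFSIGNALED(status) and os.WTERMSIG(status) == signal.SIGSEGV:
        print("child crashed with SIGSEGV; parent unaffected")
```

On a system without memory protection, the same wild access could silently corrupt another task's data instead of being trapped at the faulting instruction.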
Of course, these techniques are no substitute for sound development practices. Developers must employ every tool and methodology at their disposal to ensure that their code is clean, modular, efficient, thoroughly tested, and well-protected. The problem is that no one has devised a method for producing 100% bug-free code. And no test suite can exhaust every scenario that a complex software system may encounter, in part because the number of potential scenarios in such systems is almost limitless.
So despite all reasonable precautions, faulty code or disgruntled hackers can find their way into our systems. Rather than pretend that this won't happen, I suggest that software developers, designers, and managers adopt a mission-critical mindset and build systems to contain—and intelligently recover from—such problems. Never assume the fortifications will hold. First, do everything possible to prevent problems. Then, assume they will occur anyway, and take appropriate measures.
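The "assume failure, contain it, recover from it" mindset can be sketched in a few lines of Python. This is a hypothetical supervisor, not any particular product's recovery mechanism: it runs a task, treats each failure as expected rather than fatal, restarts up to a limit, and escalates only when recovery fails.

```python
# Sketch of a contain-and-recover supervisor (hypothetical example).

def supervise(task, max_restarts=3):
    """Run task; on failure, contain the error and restart,
    up to max_restarts attempts, then escalate."""
    for attempt in range(1, max_restarts + 1):
        try:
            return task()
        except Exception as exc:
            print(f"attempt {attempt} failed: {exc}; restarting")
    raise RuntimeError("task kept failing; escalating to operator")

# A flaky task that succeeds on its third try.
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise IOError("transient fault")
    return "recovered"

print(supervise(flaky))  # two contained failures, then "recovered"
```

The key design choice is that failure handling is part of the architecture from day one, not an afterthought bolted on once the fortifications have already been breached.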