Formal Verification: Vital for Safety- and Security-Critical Software
What you'll learn:
- Why testing alone cannot assure correctness in complex safety-critical software, and how edge cases and undefined behavior can evade validation efforts.
- How formal verification is used to exhaustively analyze all possible executions, enabling engineers to prove absence of defect classes and produce defensible assurance evidence.
- The role of formal verification in meeting certification standards, reducing security vulnerabilities, and boosting reliability in aerospace, defense, and other mission-critical systems.
In space platforms, defense systems, avionics, and other mission-critical environments, functions that were once implemented entirely in hardware are now governed by increasingly complex software. This shift brings clear benefits: higher performance, easier adaptability, and faster system evolution.
However, it has also introduced new challenges, particularly in establishing confidence that software will behave correctly under all operating conditions.
In safety- and security-critical domains, failures are not mere inconveniences; they can result in mission failure, which could mean the loss of expensive assets or, worse, direct risk to human life. Assurance requirements therefore go beyond showing that software works under expected conditions. Engineers must establish that entire classes of failures have been systematically identified and eliminated.
As software continues to grow in size and complexity, assurance practices have had to evolve as well. Traditional approaches based primarily on testing and review struggle to scale as systems incorporate concurrency, timing constraints, hardware interaction, and long operational lifetimes. Formal verification has emerged as a complementary technique that addresses these limitations by providing mathematical guarantees about program behavior.
The Practical Limits of Software Testing
Testing remains a key part of any development process, particularly in regulated environments. Unit testing, integration testing, and system-level validation are all indispensable for confirming functional behavior and performance. However, software testing has an inherent limitation that becomes more pronounced in critical systems: it can reveal the presence of defects, but it cannot prove their absence.
Even extensive test campaigns exercise only a fraction of possible execution paths. Rare timing interactions, extreme input values, and unexpected environmental conditions may never be encountered during testing yet still exist in deployed systems. In complex embedded software, these rare conditions often trigger the most serious failures.
Many high-consequence defects are not obvious logic errors. Instead, they arise from subtle issues such as incorrect assumptions about initialization order, boundary conditions that fail only at extremes, or improper handling of arithmetic and memory operations. These defects can remain dormant for long periods, surfacing only under specific operational circumstances.
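A classic instance of a boundary condition that fails only at extremes is the midpoint calculation in a binary search. The sketch below is illustrative (the function names are this author's, not drawn from any particular codebase): the "obvious" version passes ordinary tests yet invokes undefined behavior for very large indices.

```c
#include <limits.h>

/* The "obvious" midpoint of an index range. For typical inputs it
 * works and test suites pass -- but when lo + hi exceeds INT_MAX,
 * the addition is signed overflow: undefined behavior in C. */
static int midpoint_naive(int lo, int hi) {
    return (lo + hi) / 2;
}

/* Corrected version. Given 0 <= lo <= hi, the subtraction hi - lo
 * cannot overflow, so every execution is well-defined. */
static int midpoint_safe(int lo, int hi) {
    return lo + (hi - lo) / 2;
}
```

Point testing with typical inputs rarely distinguishes these two functions; an exhaustive analysis of all input pairs does.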
In aerospace and defense systems, where deployed software may operate for decades and updates are costly or infeasible, discovering such defects after deployment isn’t an acceptable risk.
Formal Verification: “Exhausting” All Possibilities
Formal verification approaches software assurance from a fundamentally different perspective. Instead of executing code with selected inputs, it models program behavior mathematically and reasons about all possible executions with respect to defined properties.
This exhaustive reasoning allows engineers to determine whether specific classes of failures can occur at all. When a violation is identified, it corresponds to a concrete execution scenario. When a property is proven, it holds across all modeled conditions. This level of certainty is particularly valuable in safety- and security-critical contexts, where assurance claims must be explicit and defensible.
This approach routinely exposes defects that remain invisible to testing and traditional static analysis, not because they're rare corner cases, but because they emerge only when all possible behaviors are considered. Rather than relying on probabilistic confidence derived from coverage metrics, formal verification provides clear answers about software behavior. It enables engineers to move from assumptions about correctness to demonstrable guarantees.
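As a concrete sketch of what "proving a property over all executions" means (the function and its informal contract are illustrative, not taken from any particular tool), consider a saturating addition. A test suite samples a handful of input pairs; a prover attempts to discharge the stated property for every pair.

```c
#include <limits.h>

/* Saturating addition: clamps to INT_MIN/INT_MAX instead of
 * overflowing.
 *
 * Property (informal contract) a prover would check for ALL inputs:
 *   - no execution path performs signed overflow (no undefined
 *     behavior);
 *   - the result equals a + b when that sum is representable,
 *     and is clamped otherwise. */
int sat_add(int a, int b) {
    if (a > 0 && b > INT_MAX - a) return INT_MAX;  /* a + b would exceed INT_MAX */
    if (a < 0 && b < INT_MIN - a) return INT_MIN;  /* a + b would fall below INT_MIN */
    return a + b;                                  /* provably in range here */
}
```

When the proof succeeds, the guarantee covers inputs no test campaign could enumerate; when it fails, the tool reports a concrete input pair that violates the property.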
When these issues are addressed, engineers gain assurance not through increased coverage, but by proving that defined classes of failures are impossible within the analyzed code.
Undefined Behavior and Latent Risk in Critical Software
Low-level software written in languages such as C and C++ is especially vulnerable to undefined behavior. Language standards intentionally leave certain operations undefined to enable compiler optimization. While this flexibility improves performance, it also introduces uncertainty that’s difficult to manage through testing alone.
Undefined behavior can result from signed integer overflow, invalid pointer arithmetic, or the use of uninitialized memory. These issues often escape detection during testing and code review because they may not manifest consistently. Yet they undermine both reliability and security, and in adversarial environments they can become exploitable weaknesses.
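To make the uninitialized-memory case concrete (the record type and function names here are hypothetical), consider a struct whose fields are set on some paths but not others:

```c
#include <string.h>

/* Hypothetical message record; names are illustrative. */
struct msg {
    int len;
    unsigned char payload[16];
};

/* The defect pattern: code such as
 *
 *     struct msg m;
 *     m.payload[0] = 0x7f;
 *     return m.len;        // m.len was never written
 *
 * reads an uninitialized object, which is undefined behavior in C.
 * Tests frequently pass anyway because stack memory happens to
 * contain zero on the tested paths.
 *
 * Defensive fix: establish a defined value for every field on every
 * path before any other logic runs. */
static void msg_init(struct msg *m) {
    memset(m, 0, sizeof *m);  /* all fields now have defined values */
}
```

An exhaustive analysis flags the uninitialized read on every path where it can occur, regardless of what the stack happened to contain during testing.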
Formal verification is particularly effective at addressing this class of problems. By analyzing every feasible execution path, it can identify all instances of undefined behavior in a codebase. Because this analysis is exhaustive, it doesn’t depend on runtime coverage or specific test inputs. Entire categories of low-level defects can, therefore, be eliminated at the source.
Certification Expectations for Safety-Critical Software
Many static-analysis techniques rely on heuristics or pattern-based detection. While useful for early feedback, these approaches often trade completeness for scalability. This tradeoff can result in false positives that consume engineering time or missed defects that undermine assurance objectives.
Sound formal verification techniques take a different approach. They’re designed so that reported issues correspond to real behaviors and proven properties provide guarantees rather than likelihoods. This distinction is critical in regulated environments where assurance arguments must withstand independent review.
Certification frameworks such as DO-178C in avionics, ISO 26262 in automotive systems, and comparable defense standards increasingly emphasize systematic defect elimination and traceable assurance evidence. Testing is still required, but it's no longer sufficient on its own to justify claims of safety and reliability. Formal verification supports these expectations by providing objective evidence that specific classes of runtime errors cannot occur.
Relevance in Space and Defense Systems
Space and defense systems impose some of the most demanding requirements on software assurance. These systems often operate in extreme environments, have long service lifetimes, and cannot be easily modified once deployed. They’re also subject to rigorous certification, accreditation, and audit processes.
Formal verification supports a proactive assurance strategy in these environments. By identifying and eliminating defect classes early in development, engineering teams reduce reliance on late-stage testing and post-deployment mitigation. Formal analysis results can also contribute to safety cases and compliance documentation, strengthening the overall assurance argument presented to certification authorities.
As systems become more autonomous and interconnected, informal reasoning and testing alone don’t scale to the complexity involved. Exhaustive analysis makes a difference in maintaining confidence over long operational lifetimes.
Security, Reliability, and Safety Are Closely Linked
In safety-critical systems, reliability and security are closely connected. Many security vulnerabilities stem from the same low-level defects that threaten system stability, including memory corruption and arithmetic errors. In defense environments, where adversaries may actively search for exploitable weaknesses, eliminating these defects reduces attack surface.
Formal verification doesn’t replace secure architecture design or threat modeling. Rather, it strengthens the foundation by providing guarantees at the implementation level. By proving the absence of entire classes of runtime errors, engineers can focus security efforts on system-level concerns instead of defensive coding around undefined behavior.
This approach is increasingly important as software supply chains grow more complex and systems combine components developed by multiple organizations.
Integrating Formal Verification into Engineering Practice
Formal verification is sometimes perceived as requiring specialized languages or research-oriented workflows. In practice, modern approaches are designed to analyze production code written in widely used languages and integrate into established development environments.
Formal verification complements testing and review rather than replacing them. Exhaustive source-level analysis identifies defects early, reducing downstream debugging effort and increasing confidence before system-level validation begins. This layered assurance strategy aligns well with existing safety-critical development practices.
As tools and processes mature, formal verification becomes less of a specialized activity and more of a standard component of high-assurance software engineering.
Toward Demonstrable Correctness
The growing complexity of critical and defense software demands assurance techniques that scale accordingly. Traditional validation methods remain necessary, but they’re no longer sufficient on their own to justify the level of confidence required by these systems.
Formal verification offers a path toward demonstrable correctness by enabling exhaustive reasoning about software behavior. In domains where failure isn’t an option, this capability is becoming an essential part of responsible system design.
For developers and software engineers working on safety- and security-critical systems, formal methods are a practical response to the realities of modern, software-driven engineering.
About the Author
Caroline Guillaume
CEO, TrustInSoft
Caroline Guillaume is the Chief Executive Officer of TrustInSoft. She has an extensive background in the critical software industry, notably at Thales Digital Identity and Security, where over 14 years she held sales leadership roles including VP of Sales - Software Monetization Europe and VP of Banking and Telecom Solutions Sales, based in Singapore. She previously served as director of Product Marketing at Gemplus. Caroline holds an engineering degree from Télécom SudParis.