Verification Evolves Into Lean, Mean Bug-Stomping Machines

Sept. 11, 2008
As chip design gets larger, verification methodologies get smarter. Not only do they help you ask the right questions, they also let you know when you've gotten all the answers that really matter.

We all want our next-generation Pocket Rocket to do new stuff (and do the old stuff better), as well as get smaller, run longer, and cost less. We also don’t necessarily want to wait for the holiday season for it to hit the shelves. We gadget freaks are often rather impatient in that regard.

For the design team, these marketplace realities don’t exactly lead to a leisurely existence. Instead, they mean negotiating the extraordinary challenges of making it all happen and getting it right in an increasingly compressed timeframe. The system-on-a-chip (SoC) that gives wings to your Pocket Rocket is becoming more expensive to design with each new generation, and even more expensive to verify that it does what you think you designed it to do.

With 65-nm process technologies becoming mainstream and 45/40-nm processes emerging, average gate counts are on the rise. In turn, this makes the functional-verification bottleneck even more burdensome (Fig. 1). For designers, design is the rock and verification is the hard place, and they find themselves in between. In this article, we’ll examine how verification methodologies are changing to cope with the sheer size of today’s designs.

WHERE WE'VE COME FROM As process technologies have moved down the curve toward the deep-submicron nodes, there have been distinct approaches to how engineers write their testbenches or the suite of test vectors that are run against their designs in simulation. The most typical approach in years past was to use directed tests. “Such tests are reasonably high in quality, as they were usually written by the same people who designed the circuit,” says Mark Olen, marketing manager for the SoC business unit of Mentor Graphics.

However, SoCs have become so complex that even the designers themselves don’t have the time or comprehensive insight required to write enough tests to cover all of the functionality, including all of the corner cases caused by any number of minor (or major) changes to the RTL along the way. Face it, even very smart people are apt to forget things.

Then came constrained-random testing, which solves the quantity problem. “If you want lots of testbench sequences, use constrained random testing. However, this comes at the expense of quality,” says Olen (Fig. 2). “Your engineers are giving up control and direction of test generation to a random number generator, guided by algebraic constraints.”

The constraint solvers used today for random test generation are quite sophisticated, allowing engineers to write constraints that guide the randomness of the testing to some degree. This enables constrained-random methodologies to cover more of a complex design than directed tests. Still, a gap remains between what was designed and what gets tested—and that chasm tends to keep verification engineers up at night. If they knew how to bring the two together, they would. But they don’t.
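What “a random number generator guided by algebraic constraints” looks like in practice can be sketched in a few lines of SystemVerilog. The transaction fields, address range, and weighting below are purely illustrative, not taken from any particular vendor’s flow:

```systemverilog
// Minimal constrained-random transaction; fields and limits are illustrative only.
class bus_txn;
  rand bit [31:0] addr;
  rand bit [7:0]  len;
  rand bit        is_write;

  // Algebraic constraints steer the random generator toward legal, interesting stimulus.
  constraint legal_addr    { addr inside {[32'h0000_0000 : 32'h0000_FFFF]}; }
  constraint legal_len     { len > 0; len <= 64; }
  constraint mostly_writes { is_write dist { 1 := 7, 0 := 3 }; } // bias writes 70/30
endclass

module tb;
  initial begin
    bus_txn t = new();
    repeat (10) begin
      if (!t.randomize()) $error("randomization failed");
      $display("addr=%h len=%0d write=%b", t.addr, t.len, t.is_write);
    end
  end
endmodule
```

Every call to randomize() produces another legal transaction, which is exactly why quantity is cheap here while hitting a specific corner case is not.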

WHERE WE'RE GOING The logical conclusion is that to improve the quality of verification, while achieving greater efficiency in the process, there needs to be automation applied to directed testing and less emphasis on randomness. “We see the trend moving back to engineers wanting to make sure they test everything,” says Adnan Hamid, CEO of Breker Verification Systems. “They’re saying, ‘I don’t have time to write all the tests so I want a tool to generate the tests.’”

Verification is headed in the direction of the so-called “intelligent testbench” (a term widely attributed to analyst Gary Smith of Gary Smith EDA). Such technologies are taking shape, largely under the auspices of a few startups such as Breker, NuSym, and Certess, as well as in at least one large, established EDA company—Mentor Graphics. Some of these technologies are “white box” in nature, providing a clear view of the design’s internals, while others are more of a “black-box” approach.

Meanwhile, other EDA houses are attempting to leverage the powerful verification capabilities inherent in the SystemVerilog language to instill more intelligence into the tried-and-true constrained-random methodology. Such approaches are usually augmented by increased reliance on assertions.
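Assertions give those constrained-random environments a set of always-on checkers. As a minimal sketch, a SystemVerilog Assertion for a hypothetical request/grant handshake (the signal names and the four-cycle bound are assumptions for illustration) might look like this:

```systemverilog
// Hypothetical handshake checker: every request must be granted within four cycles.
module req_gnt_checker (input logic clk, rst_n, req, gnt);
  property p_req_gets_gnt;
    @(posedge clk) disable iff (!rst_n)
      req |-> ##[1:4] gnt;
  endproperty

  // Fires on any trace, random or directed, that violates the handshake.
  assert property (p_req_gets_gnt)
    else $error("req not granted within 4 cycles");
endmodule
```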

SMARTENING UP So what exactly is an “intelligent testbench,” anyway? That depends on who you ask. “Keep in mind that the true intelligent testbench includes the automation of the tool flow, including formal verification, and the automation of the methodology,” says Smith. Not every vendor necessarily complies with that caveat, though.

Breker Verification Systems’ approach was forged in the company’s roots inside Advanced Micro Devices. “In CPUs, random testing never made sense,” says Hamid. “We went to directed test cases and started using a graph-based approach. Now the big breakthrough was to combine graphs with a constraint solver.”

Graph-based approaches to describing functionality require the design team to think through all of the system’s possible behaviors, which brings a directed test-like element to the process. They also lend themselves to the testing of complicated and arbitrary behavior by breaking it down into bite-sized pieces.

In the past, graph-based approaches to verification have failed because the graphs ended up as large and unmanageable as the problem they were trying to solve. Hamid says that the graphs produced by Breker’s Trek tools are manageable size-wise.

Breker terms its technology “coverage-model-based directed-test generation.” In practice, it requires verification engineers to consider all of the inputs to whatever portion of the design is under test. This can be done in either black-box or white-box fashion. They must consider each input’s possible cases. The result is a graph of the inputs (Fig. 3).

Next, designers must independently think about all possible outcomes, or behaviors that need to be checked, as they determine which inputs feed which outputs. The resulting graph of inputs and outputs, combined with constraints, constitutes what Breker terms a “coverage model.” That model carries enough information for Breker’s tool flow to generate directed test cases.
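Breker’s coverage models are graphs with constraints rather than SystemVerilog covergroups, but the underlying idea, enumerating the cases of each input and crossing them against the outcomes they feed, can be sketched with generic functional coverage. The DMA transaction, its fields, and its bins below are hypothetical:

```systemverilog
// Illustrative coverage model: enumerate the input cases, enumerate the outcome
// classes, and cross them. (Generic SystemVerilog coverage, not Breker's notation.)
class dma_txn;
  rand bit [1:0] channel;   // which of four DMA channels
  rand bit       burst;     // single vs. burst transfer
  bit      [1:0] resp;      // observed response class

  covergroup cg;
    cp_chan  : coverpoint channel;
    cp_burst : coverpoint burst;
    cp_resp  : coverpoint resp { bins ok = {0}; bins retry = {1}; bins err = {[2:3]}; }
    // The cross is the "are we finished?" question: every input case against every outcome.
    x_all    : cross cp_chan, cp_burst, cp_resp;
  endgroup

  function new();
    cg = new();
  endfunction
endclass
```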

“It all comes back to the coverage models,” says Hamid. “They answer the fundamental questions about verification, which are: What are the proper questions to ask about our designs; what are the answers to those questions; and when are we finished asking them?”

Importantly, Breker’s methodology provides a straight answer to the final, all-important question about coverage. By building a single graph that describes the system’s behaviors, it eliminates the problem of having to hand-write all of the directed tests that apply to those behaviors.

“If you considered all the behaviors you want to test, and all the ways in which that behavior can be induced, and they’re all represented in the coverage model, you’re done,” says Hamid. “The problem has been that until we got to this notion of coverage models, random testing never let us know if all the behaviors have been tested.”

Breker’s flow also performs static reachability analysis to determine which system states are essentially dead ends (Fig. 4). This further improves effective verification coverage, because it eliminates the need to write directed tests for these “don’t cares.”

HITTING THE TARGETS A somewhat different approach to functional verification comes from NuSym, as described by its CEO, Venk Shukla, who’s not a fan of blindly throwing random tests at a design. “Imagine a wall studded with thousands of buttons, all about an inch apart,” says Shukla. “You’re standing across the room from that wall, blindfolded but armed with a machine gun and unlimited ammunition, and you want to shoot every one of those buttons. Chances are you will hit 70% or 80% of the buttons easily. But hitting the rest is very difficult and will take an enormous amount of ammo.”

The analog in random testing is that you can generate millions of random tests, which find some problems but miss others. Designers then must try to figure out how to modify the testbench to exercise the areas of the design that it hasn’t reached. Without considering the internals of the design, such an approach is bound to exercise the same parts of the design over and over and continually miss others, leaving many bugs on the table.

NuSym’s stance is that there’s plenty of information about the design within the design itself and the existing testbench. “Our approach is that without changing anything in terms of how the testbench is written, the tool should be smart enough to read the internals of the design and then incrementally change the tests to do the things that make sense,” says Shukla.

On the first pass over a design, NuSym’s simulator works much as other verification approaches do, throwing lots of random tests at the design to find bugs. But as it iterates, the tool applies what it learns about the design’s internals to the subsequent passes. As a result, the tool reaches higher coverage faster and with fewer simulation cycles.

Furthermore, the simulator can determine which points in the design carry dependencies. “If there’s a nested if-else, and if the ‘if’ is not hit, the simulator knows you won’t hit the ‘elses,’” says Shukla. “Or, it can determine that a particular coverage point is dependent only on these five input variables, so it knows that we need to play with just those five input variables and change the values.” NuSym doesn’t guarantee 100% coverage, but it does ensure improved coverage and greater insight into designs. Much like Breker’s tools, NuSym’s simulator provides feedback on “don’t cares” and why they aren’t being covered.
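The nested if-else situation Shukla describes looks something like the following hypothetical fragment. The inner branches cannot be covered until the outer condition holds, and the whole decision depends on just two inputs, so a tool that reads the design’s internals knows exactly which knobs to turn:

```systemverilog
// Hypothetical DUT fragment: err_code's inner branches are reachable only when
// mode == 2'b10, and the entire decision depends on nothing but mode and len.
module err_decode (input  logic [1:0] mode,
                   input  logic [7:0] len,
                   output logic [3:0] err_code);
  always_comb begin
    err_code = 4'h0;
    if (mode == 2'b10) begin          // outer "if": the gate to everything below
      if (len == 0)      err_code = 4'h1;
      else if (len > 64) err_code = 4'h2;
      else               err_code = 4'h3;
    end
  end
endmodule
```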

The tool takes in standard Verilog and a standard testbench, whether it’s written in Vera, Verilog, or C/C++. Its usage, says Shukla, is no different than any other simulator. The tool has run on designs as large as 20 million gates. However, only a few select customers use it, and it’s yet to be publicly announced.

RULES AND GRAPHS An overall approach much like Breker’s is used in Mentor Graphics’ verification portfolio, as exemplified in its inFact intelligent testbench automation tool. Like Breker’s Trek, inFact uses what computer scientists would recognize as Backus-Naur forms (BNFs) to define system behavior.

When using inFact to create a testbench, input is written in the form of rules (Fig. 5). Those rules define a “grammar” of stimulus and expectations about the results from those stimuli. This can be done in standard languages, including SystemVerilog, SystemC, and/or C/C++. Existing code in the e or Vera testbench languages may also be used. Rules can describe any sort of behavior, including design specifications, protocols, interfaces, or something else entirely.

The rules are then compiled into graphs, which essentially comprise a flow chart of system behavior. That chart describes all of the legal behaviors performed by the system, which are then transformed into the constructs that make up the test sequences to be used in simulation.
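inFact’s rule language and graph compiler are its own, but SystemVerilog’s built-in randsequence construct gives a feel for grammar-driven stimulus: each production is a rule, and every traversal of the grammar is one legal test sequence. The productions and weights below are invented for illustration:

```systemverilog
// Grammar-driven stimulus in plain SystemVerilog (illustrative; not inFact syntax).
module tb;
  initial begin
    randsequence (test)
      test        : setup transfers teardown;
      transfers   : repeat ($urandom_range(1, 4)) transfer;
      transfer    : read := 2 | write := 3 | burst_write := 1;  // weighted alternatives
      setup       : { $display("setup"); };
      teardown    : { $display("teardown"); };
      read        : { $display("read"); };
      write       : { $display("write"); };
      burst_write : { repeat (4) $display("write (burst)"); };
    endsequence
  end
endmodule
```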

Users gain complete control over the generation of testbench sequences when using inFact’s algorithms. “One of the things this technology can do is eliminate redundant tests,” says Olen. “Once you’ve created your first testbench sequence and simulated it (with Mentor’s Questa simulator), inFact knows it’s been through that path and knows not to do it again.”

Users can also turn on or off various parts of the graphs dynamically so they don’t waste simulation cycles on portions of a design known not to be in working shape. In addition, inFact has a self-test mode in which it exercises every aspect of a graph before its use, essentially testing the testbench before being applied to the design itself.

VERIFYING THE VERIFICATION Another tack in intelligent testbench technology is to step back a level and ensure that the quality of your verification methodology is up to snuff. Certess takes that direction with its Certitude tool. The goal is to provide an objective measurement of a given verification environment’s quality.

“If your design has bugs to be found, three things need to happen,” says Mark Hampton, CTO of Certess. “First, you need an input sequence that exercises the functionality of the design where the bug is located. Second, the erroneous value must propagate through the verification environment to where a check is being made. Finally, the checker must indicate that the test has been failed.”

To determine how well the setup is working, Certess’ method injects artificial bugs into a design at known points. Users then run their verification environment and see if it can find them (Fig. 6).
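Certess’ fault model is proprietary, but the flavor of an injected fault can be shown with a simple hypothetical mutation of one RTL line. An environment that never stimulates this logic, never propagates the wrong flag to a checker, or has no checker watching it will pass both versions, which is exactly the weakness the method exposes:

```systemverilog
// Hypothetical design fragment with one injected fault ("mutation").
module fifo_flags (input  logic [3:0] wr_ptr, rd_ptr,
                   input  logic       wrapped,
                   output logic       fifo_full);
  // Original line:
  //   assign fifo_full = (wr_ptr == rd_ptr) && wrapped;
  // Injected fault: '&&' replaced with '||'. A healthy verification environment
  // must stimulate this logic, propagate the wrong fifo_full value to a checker,
  // and have that checker report the failure.
  assign fifo_full = (wr_ptr == rd_ptr) || wrapped;
endmodule
```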

“EDA companies develop technologies around things that can be quantified. Unfortunately, in the case of verification, we have had, in the past, a very poor metric for measuring the quality,” says Hampton. “We’ve only had measurements of the stimuli quality. It’s not good at telling us if it has propagated or whether the checkers are working.”

Certess’ technology is built on the foundation of a software-testing technique known as “mutation analysis,” which likewise injects artificial errors into code. This notion of mutations, which Certess refers to as faults, can be applied to any formalized design description, including graphs or state charts.

While it’s not independent of the design, Certess’ approach is independent of the verification tools being used. Today, Certess focuses on simulation-based verification, but it could extend to qualification of formal methodologies in the future. Currently, the company’s offerings support VHDL, Verilog, and SystemVerilog with limited customer availability for C.

CLOSING THE LOOP Even as newer methodologies continue to appear from startups, established EDA vendors continue to improve existing verification methodologies based on constrained-random stimulus. While users will typically have a target for verification coverage of their design and a time limit that goes along with it, reliance on simulation regressions and random test generation makes it very difficult to know when you’ll reach the goal.

“Today, it’s an open-loop process,” says Swami Venkat, senior director of marketing for verification products at Synopsys. “Stimulus generators receive no feedback regarding how much coverage has been achieved and where or what the holes are. It’s one-way traffic. The lack of automation and the open-loop nature of the process hamper coverage convergence.”

From Synopsys’ perspective, coverage convergence has to happen at both the pin level (be it for an intellectual-property block or an entire system) and the transaction level. At the pin level, Synopsys prescribes a combination of its Magellan formal tool and VCS simulator.

“Assuming a goal of 100% coverage, the simulator is able to reach, say, 50% coverage through dynamic regression testing,” says Venkat. Given the design constraints, Magellan then performs reachability analysis to determine which states are unreachable. “If the design can never get into certain states, your 100% coverage is less than you thought,” says Venkat.

For coverage closure at the transaction level, VCS now provides intelligent feedback from coverage to constraints so there’s more sharing of information. It doesn’t generate stimulus for coverage that’s already been achieved.
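How VCS closes that loop internally isn’t public, but the concept, feeding coverage results back into what gets randomized next, can be hand-rolled in a few lines. The packet class and its coverage bookkeeping below are purely illustrative:

```systemverilog
// A crude, hand-rolled version of the closed loop: record which cases have been
// hit and constrain the next randomization away from them. (Illustrative only.)
class pkt;
  rand bit [3:0] kind;
endclass

module tb;
  int unseen[$];   // stand-in for a coverage database: which 'kind' values remain
  pkt p = new();

  initial begin
    for (int k = 0; k < 16; k++) unseen.push_back(k);   // nothing covered yet
    while (unseen.size() > 0) begin
      // Feedback from coverage into the constraints: only still-uncovered kinds are legal.
      if (!p.randomize() with { kind inside {unseen}; }) $error("randomize failed");
      foreach (unseen[i])
        if (unseen[i] == p.kind) begin
          unseen.delete(i);   // mark this kind as covered
          break;
        end
      $display("generated kind=%0d, %0d holes left", p.kind, unseen.size());
      // <drive p into the design here>
    end
    $display("all 16 kinds exercised with no redundant stimulus");
  end
endmodule
```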

A SEQUENCE-BASED APPROACH For Mike Stellfox, principal verification solutions architect at Cadence Design Systems, the notion of the intelligent testbench is colored by his history at Verisity prior to its acquisition by Cadence. “We basically started with Specman 10 years ago. Since that time, the notion of an ‘intelligent testbench’ has evolved to include some additional capabilities on top of that,” he says.

To help users adapt to this kind of verification methodology, the first added capability is capturing a verification plan in a form that lends itself well to an intelligent testbench. “Before we had intelligent testbenches, people created directed test plans. But as they move to intelligent testbenches, the test plan tells you what you need to test,” says Stellfox.

Cadence’s methodology now captures verification plans that are easily abstracted and connected to a set of functional coverage metrics that tell you what is being tested by your testbench. The second area Cadence has extended is the manner in which stimuli are described.

“Essentially, if you rely on constrained-random stimulus, you will eventually cover all the combinations, but there’s a finite state space. So one of the areas we pioneered is defining a formal semantic for stimulus generation that we call sequences,” says Stellfox.

Sequences comprise a layer on top of constrained-random stimulus generation that lets users specify high-level, parametrizable scenarios of stimulus and then randomize those. “The controlled order of the stimulus allows you to get to the areas of interest more quickly,” says Stellfox.

“All this was pioneered in Specman and based on the e language. Cadence has also focused on SystemVerilog and e for testbenches, so we’ve applied these same capabilities to both languages. If you look at the SystemVerilog OVM, which is based on a lot of the work on the e side, this includes these same concepts of sequences,” Stellfox says.
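In spirit, a sequence looks roughly like the standalone sketch below: the scenario’s parameters are randomized once, and the individual items, still constrained-random, are then emitted in a controlled order. The class names and fields are hypothetical, and this is plain SystemVerilog rather than the actual OVM/e sequence API:

```systemverilog
// A scenario layered on top of constrained-random items.
class wr_item;
  rand bit [31:0] addr;
  rand bit [31:0] data;
endclass

class burst_write_seq;
  rand bit [31:0]   base_addr;
  rand int unsigned burst_len;
  constraint c_len { burst_len inside {[4:16]}; }

  task body();
    wr_item item;
    for (int i = 0; i < burst_len; i++) begin
      item = new();
      // Each item is still randomized, but ordered and related within the scenario.
      if (!item.randomize() with { addr == base_addr + 4 * i; })
        $error("item randomization failed");
      $display("write addr=%h data=%h", item.addr, item.data);
      // <hand item to the driver here>
    end
  endtask
endclass

module tb;
  initial begin
    burst_write_seq seq = new();
    if (!seq.randomize()) $error("sequence randomization failed");
    seq.body();
  end
endmodule
```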

Cadence’s approach starts with the Incisive Enterprise simulator, which allows designers to build reusable testbenches in SystemVerilog, e, or a combination of the two. On top of that, the Incisive Verification Builder automatically builds testbenches based on user responses to prompts.

THE LINK TO DEBUGGING No matter how intelligent your testbench or how thorough your verification coverage, it’s worthwhile to remember that none of it matters unless your debugging environment can extract useful and meaningful data from the verification results. “For a designer or verification engineer, it’s difficult to trace back and understand what’s happening when things go wrong in simulation. The comprehension is the key component,” says Thomas Li, director of product marketing for SpringSoft’s Verdi debug product.

The advent of SystemVerilog has brought new challenges to debug. “There are a lot of new characteristics like dynamic data and object-oriented concepts. This creates new challenges in terms of how to capture behavior,” says Li. “For the design itself, it’s straightforward. But for the testbench, it’s not that simple. There will be objects created and functions happening in real time. We need to move to a higher level of abstraction to record information into a single database.”

To that end, Verdi brings the concept of intelligent logging. It uses a unified database with signal-level details and higher abstraction-level transactions from testbench components. The tool makes it easy to instrument and record optimally during simulation. Instrumentation can be placed anywhere in your code or even in your logging classes in your libraries.

Engineers will also need to understand the RTL code, typically written by multiple people/teams across different geographies. For such design code, Verdi offers powerful capabilities in terms of source code browsing/tracing, schematics, and more. It enables designers to visualize the design in various ways, such as waveforms and tables of activity. It also provides facilities for analyzing testbench code in greater detail through a smooth link between logging, which gives an overall picture of the debug environment, and interactive means of diving into the testbench details.
