Hidden Verification Cost Needs Visibility

April 24, 2006

There is a little-discussed cost of verification that is consuming excessive disk space, simulation cycles, and engineering effort, while also playing havoc with the predictability of verification completion. Whenever a testbench discovers a mismatch between expected and actual results, engineers must trace the bug back to its cause. To do this, they need the values of the signals in the causal logic over the period of time from the cause to the mismatch. Both that time window and the subset of design signals actually involved in producing the wrong value are extremely difficult to determine ahead of time, and that uncertainty has far-reaching consequences.

First, the engineers need to decide whether they’re going to simply record everything so they can debug right away after a simulation or emulation run. Of course, this slows the simulation down dramatically, usually by about 5X. If the test case is short enough, this is probably acceptable. But if it’s long, as in graphics and networking applications, then they may be forced to guess which design sub-tree is likely to be causing the error, and dump only those signals. Unproductive time is spent making this decision. Say they decide to record everything and wait the requisite time. What if the disk fills before the run completes? Or say they dump only a subset of signals, and later find that the actual cause is in some logic they didn’t initially suspect. In either case, they’re stuck and have to run the test case yet again.

There is a better way. Intelligent automation can dramatically improve the efficiency of this process and make it more predictable. There are three requirements: 1) remove the need to decide what to record; 2) cut the overhead of gathering the necessary information; and 3) regenerate all the signal values using the recorded values for just a subset of signals.
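
To make the third requirement concrete, here is a minimal sketch of the idea in Python: record only the signals that cannot be re-derived (register outputs and primary inputs in this toy model) and recompute every downstream combinational value from them on demand. The gate-level table, signal names, and two-input gate set are invented purely for illustration; a real tool would derive this information from the design itself rather than a hand-built dictionary.

```python
# Minimal sketch of "data expansion": re-deriving unrecorded combinational
# values from a recorded subset (registers and primary inputs only).
# The netlist, signal names, and gate set here are hypothetical examples.

# Combinational fan-in cone: each derived signal is (operator, input signals).
NETLIST = {
    "n1":  ("AND", ("req", "grant_q")),   # req, grant_q are recorded
    "n2":  ("NOT", ("busy_q",)),          # busy_q is recorded
    "err": ("AND", ("n1", "n2")),         # the mismatching output
}

def expand(recorded, netlist):
    """Fill in every combinational value from the recorded subset."""
    values = dict(recorded)

    def evaluate(sig):
        if sig in values:                  # recorded or already derived
            return values[sig]
        op, ins = netlist[sig]
        operands = [evaluate(i) for i in ins]
        if op == "AND":
            values[sig] = all(operands)
        elif op == "OR":
            values[sig] = any(operands)
        elif op == "NOT":
            values[sig] = not operands[0]
        return values[sig]

    for sig in netlist:
        evaluate(sig)
    return values

# One simulation cycle's recorded data: only flops and primary inputs.
cycle_42 = {"req": True, "grant_q": True, "busy_q": False}
print(expand(cycle_42, NETLIST))
# -> {'req': True, 'grant_q': True, 'busy_q': False, 'n1': True, 'n2': True, 'err': True}
```

The shape of the idea is the point: if the state elements and inputs are captured, the rest of the waveform can be regenerated instead of stored.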

User experience with such an approach has shown that the amount of data required to gain full visibility is only about 20% of the total, and that the overhead of producing this data, relative to running the test case without recording anything, is only about 20%. That means users have much more flexibility: they can always record what they need to debug as soon as a testbench flags an error, or, in the worst case, re-run the test quickly and predictably without having to guess what to dump.
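
As a rough back-of-the-envelope comparison using these figures together with the roughly 5X full-dump slowdown mentioned earlier (the 10-hour baseline run and 1 TB full-dump size are made up purely for illustration):

```python
# Back-of-the-envelope comparison using the figures quoted in the article.
# The 10-hour baseline run and 1 TB full-dump size are invented for illustration.
baseline_hours   = 10.0     # test case with no recording (hypothetical)
full_dump_factor = 5.0      # "usually by about 5X" slowdown when recording everything
ve_overhead      = 0.20     # ~20% runtime overhead with visibility enhancement
ve_data_fraction = 0.20     # ~20% of the full-dump data volume
full_dump_bytes  = 1e12     # 1 TB full dump (hypothetical)

print(f"full dump:    {baseline_hours * full_dump_factor:.0f} h, "
      f"{full_dump_bytes / 1e12:.1f} TB")
print(f"ve recording: {baseline_hours * (1 + ve_overhead):.0f} h, "
      f"{full_dump_bytes * ve_data_fraction / 1e12:.1f} TB")
# full dump:    50 h, 1.0 TB
# ve recording: 12 h, 0.2 TB
```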

This new technique is called “visibility enhancement.” It bridges the gap between the data-generating processes (simulators, emulators, FPGA prototypes, and even early silicon running in prototype systems) and the data-consuming process (the debugger). The paradox is that the data-generating process is impeded by the need to record data, while the debugger does a much better job when more data is available. Filling the need for massive quantities of data with minimal overhead on the verification tools provides a huge cost savings.

This is critical when you consider that verification is inherently “visibility challenged” and is becoming more so as designs get larger and more complex. This is more obvious in some cases than others. For example, real silicon, packaged and inserted in a board, certainly makes it very difficult to observe signal values during operation of the chip. In the old days, we could connect logic-analyzer probes to the wires connecting TTL logic. Now, we need to insert special logic to make signals observable. This idea covers a range of techniques collectively known as “design for debug,” or DFD. Visibility enhancement technology works with DFD methodologies to identify where instrumentation should be placed, to expand the data for greater visibility, and to raise the level of abstraction from gates to RTL. This last point deserves closer consideration: the netlist that represents the design that actually gets built is not as easy to understand as the RTL that precedes it. Engineers are generally more familiar with, and have an easier time understanding, the higher-level abstraction. So, by taking the data generated by the low-level circuit, mapping it back to the RTL, and doing the data expansion in RTL, visibility enhancement technology enables a huge leap in design comprehension.
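
As an illustration of the gate-to-RTL step, the sketch below regroups single-bit values from a flattened gate-level capture into the RTL vector they came from. The netlist names, RTL names, and mapping table are all hypothetical; in practice the correlation would come from synthesis name-mapping data rather than a hand-written table.

```python
# Illustrative sketch of raising debug data from gate level to RTL.
# The gate-level names, RTL names, and mapping are invented; a real flow
# would derive this correlation from synthesis output, not a hand-written table.

# Synthesis typically flattens and renames registers; this table records how
# a gate-level dump name corresponds to an RTL signal and bit index.
GATE_TO_RTL = {
    "top/u_ctrl/state_reg_0_/Q": ("top.u_ctrl.state", 0),
    "top/u_ctrl/state_reg_1_/Q": ("top.u_ctrl.state", 1),
    "top/u_ctrl/state_reg_2_/Q": ("top.u_ctrl.state", 2),
}

def to_rtl_view(gate_values):
    """Regroup single-bit gate-level samples into RTL-level vectors."""
    rtl = {}
    for gate_name, bit_value in gate_values.items():
        rtl_name, bit_index = GATE_TO_RTL[gate_name]
        rtl.setdefault(rtl_name, {})[bit_index] = bit_value
    # Pack the per-bit dictionaries into integers for display at RTL.
    return {name: sum(v << i for i, v in bits.items()) for name, bits in rtl.items()}

# One sampled cycle from a gate-level (DFD or FPGA) capture: bits 2..0 = 1,0,1
sample = {
    "top/u_ctrl/state_reg_0_/Q": 1,
    "top/u_ctrl/state_reg_1_/Q": 0,
    "top/u_ctrl/state_reg_2_/Q": 1,
}
print(to_rtl_view(sample))   # -> {'top.u_ctrl.state': 5}
```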

FPGA prototypes have the same visibility challenges, but are somewhat more flexible than the actual chip. There are tools available for inserting observability logic into FPGAs to acquire debug data. However, this comes at a cost in area and performance. Visibility enhancement technology works with these tools in the same way it works with DFD approaches.

It may seem that emulators and simulators are immune to these visibility problems because they are capable of recording any signal value at any time during operation of the design. However, consider the cost of observation. Asking an emulator to record value changes is like asking a bullet train to stop at every local station. The speed advantage is lost in all the picking up and dropping off. Therefore, the advantage of using visibility enhancement techniques with emulation is clear: record less, slow down less, and still get excellent visibility. And the gate-to-RTL correlation is important here too, as emulators use netlists to program their FPGAs. Likewise, simulation can record any signal value at any time. The overhead, while not as severe as with emulation, is still large.

Visibility is the hidden cost of verification that is long overdue for a major trim. New visibility enhancement techniques can improve the predictability of verification and the productivity of engineers by automatically determining what signals are essential, expanding the data to fill in the values for signals not recorded, and ultimately raising the level of abstraction from netlist to RTL for easier comprehension.
