
When One Plus One Has To Be Less Than One

May 15, 2009

At a recent customer meeting, an observation really hit home. We were discussing verification at the block level and at the chip level, as well as using mixed levels of abstraction. Everybody seemed to agree on one specific obstacle that, if fixed, would improve verification productivity.

With verification now happening at different levels of abstraction, the work done at earlier stages often isn’t reused efficiently further down the road. We discussed specific solutions that connect high-level models directly to RTL verification to drive “live verification data.”

Next, we discussed the necessity of formal verification between high-level models and RTL. Our customer’s lead verification engineer had been notably quiet until then. When the necessity of formal verification was questioned, he became very vocal.

He stated that “they could not afford to do more verification.” He went on to argue that “everything they should consider doing, like using high-level models for verification, is only worth the effort if the overall verification effort decreases as a result.”

To be precise, he suggested he could add new steps to the current process only if the current workload plus the additional workload of those steps added up to less work overall. It took a little while for me to let that math settle in, but in retrospect this comment precisely explains one of the key issues preventing mainstream adoption of system-level design: the return on investment (ROI) for system-level design is not quantifiable “enough,” which often deters metric-driven verification teams.
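To make his arithmetic concrete (the symbols here are just illustrative shorthand, not his): call the current verification workload W, the cost of the added step A, and the downstream simulation and rework that step eliminates S. The new total is W + A − S, and he would only accept the step if W + A − S < W, which reduces to S > A. In other words, the new step has to remove more work than it brings with it.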

Down Memory Lane

This math, which was difficult to understand at first sight, reminded me of my first chip development project. We developed a chipset to enable motion vector estimation for HDTV encoding. At the time, logic synthesis was still in its infancy.

I was responsible for one of the chips, which performed fast Fourier transforms (FFTs) on incoming video data. In addition, I was responsible for system verification, which entailed making sure that the combination of four of my FFT chips and two other chips would perform as intended and specified. We followed a thorough development flow with detailed inspections of the written specifications and detailed code reviews involving various team members.

The three chips were entered using schematic entry, and the correctness of the layout was verified using layout versus schematic (LVS) tools. To verify the system, we actually didn’t use the gate-level schematic. We instead recoded everything using Verilog RTL.

Compared to the gate-level schematics, Verilog RTL really looked like a high-level language and was much easier to handle. Its simulation was much faster than simulating at the gate level. And with line-based Verilog debuggers, debug was much easier than it had been at the gate level.

Bottom line: We had added the additional step of coding RTL on top of the existing gate-level description, and as a result, we reduced the overall time and effort it took to get to the verified chip and system. The additional effort of coding and verifying the RTL was easily made up for by faster simulations combined with reduced effort for implementation and debug.

Your Turn

So what does this mean for mainstream adoption of system-level design? The industry needs to make the ROI of system-level technologies clearly quantifiable. Yes, more exploration early in the design flow may result in electronic products meeting their specifications better. But what value does one attach to that?

Yes, starting software development before RTL is available will reduce the overall time it takes to get the chip into production. But, again, how can we quantify that value? Can the additional effort it takes to create a virtual platform be offset by the reduction in the overall effort it would otherwise take to get results? In all the projects I have been involved with, that has clearly proven to be the case. But we need better and more quantifiable ROI data to back it up.

When development teams fully embraced logic synthesis, the step from the gate level to RTL was well quantifiable. Trying to implement designs whose complexity followed Moore’s Law at the gate level simply became impossible to manage. Coding and verifying in Verilog RTL, synthesizing from RTL to gates, and avoiding gate-level re-verification by means of equivalence checking measurably resulted in lower effort overall.

Well, there is hope yet. To my surprise, in a recent survey we did at DVCon, over 50% of the respondents told us that they are already running embedded software on embedded processors in their designs to verify the surrounding hardware. We also know that simulating RTL in conjunction with TLM processor models runs 20 to 50 times faster than the pure-RTL equivalent.

Suddenly, for this case, the value of system-level design becomes easily quantifiable. The speedup gained by replacing RTL processor models with their TLM equivalents, while still verifying the surrounding RTL hardware, can be measured directly. And the effort it takes to make that swap is easily offset by the savings in simulation time.
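As a purely hypothetical back-of-the-envelope illustration (the numbers are mine, not from the survey): if a pure-RTL regression burns 100 hours of simulation and the TLM processor models deliver even the low end of that range, a 20x speedup, the same regression drops to roughly five hours. A few days of effort to integrate and validate the TLM models is recovered within the first handful of regression runs, and every run after that is net savings.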

In addition, less quantifiable advantages come into play as well. With the embedded software becoming part of the testbench for verifying the hardware, it can be re-used across virtual platforms, the actual RTL, FPGA prototyping, emulation, and even post-silicon validation. Bringing hardware and software together that early also reduces the risk of finding defects at the interface between hardware and software during late integration, potentially at a point at which they can no longer be corrected easily.

So it turns out the math I learned that day can hold true. Sometimes one plus one results in something less than one.

About the Author

Frank Schirrmeister

Frank Schirrmeister is Senior Director at Cadence Design Systems in San Jose, responsible for product management of the Cadence System Development Suite, accelerating system integration, validation, and bring-up with a set of four connected platforms for concurrent HW/SW design and verification.
