2010 Will Change The Balance In Verification

Jan. 8, 2010
Complexity will drive change in verification and system-level design, specifically in terms of hardware/software co-verification, says contributing editor Frank Schirrmeister. Embedded software is now being used to verify the hardware it runs on.

What will 2010 bring for verification and system-level design? The semiconductor “beasts” that need to be verified are getting more and more complex. These beasts are developed at smaller technology nodes, and with the declining number of design starts, there are fewer of them per year. Programmability plays a significant role in both ASIC and ASSP designs, since users have to deal with more and more processors.

In FPGA designs, software programmability on processors within the FPGA complements the programmability of the hardware itself. Other forms of programmability are also finding more adoption, specifically configurable and extensible processors that allow the optimized design of application-specific subsystems.

Some of the verification drivers depend on the application domains. The International Technology Roadmap for Semiconductors (ITRS) differentiates networking, consumer portable, and consumer stationary as separate categories in the system-on-a-chip (SoC) domain. For networking, the die area remains largely constant but the number of processor cores used in this application domain increases significantly.

THE CONSUMER DOMAINS

Increases in processing power largely drive the consumer portable domain, which still has short product lifecycles. In the consumer stationary domain, functionality is largely defined, and the main goal is to buy more processing performance. Yet software still has to be distributed across different processing engines.

The reuse of pre-developed intellectual property (IP) is increasing across all domains. According to Semico, reuse is approaching 60%. In absolute terms, the average number of IP blocks designed into chips will be well over 50 in 2010 and will approach 70 or more by 2012.

The biggest driver of change, embedded software, is starting to dominate chip development efforts. It becomes the long pole in projects, since the availability of software determines when a chip can go into volume production. Not only is software becoming more complex, it also has to deal with multicore hardware architectures. While the shift to multicore may solve the hardware side, it causes significant issues for software because there is no general solution for automatic parallelization, except in narrowly defined special cases such as graphics processing.

The beasts that the industry will try to verify going forward are much more complex designs with a significant number of reused IP blocks and a significant effort focused on software distributed across processors. And good luck getting the verification done with the increasing portion of analog/mixed-signal content.

Luckily, divide and conquer applies, and verification can be tackled in several different steps. Homing in on the software, more customers say they not only use embedded software to define functionality, but also use it to verify the surrounding hardware. This shift is quite fundamental. It has the potential to move the balance from today’s verification, which is done primarily in SystemVerilog, to verification using embedded software running on the processors that the design under development likely contains anyway.

CHALLENGES FOR ENGINEERS

On the downside, this means verification engineers may have to learn another language for testbench development, namely C/C++ compiled to an embedded processor. But there are very substantial benefits to this. The vision is that software-driven verification can be reused across different phases of a project.

First, users start developing verification scenarios that exercise the device under test (DUT), even before RTL becomes available, by developing testbenches in C or C++ on virtual platforms that represent the design in development. Once RTL becomes available, these tests can be refined and reused both in RTL simulation and in execution of the RTL on FPGA prototypes and/or emulators. And once silicon is available, the same software-based tests can be executed to verify the manufactured SoC.
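As an illustration, here is a minimal sketch of what such a software-driven test might look like in embedded C++. The timer peripheral, its register map, and its base address are invented for illustration; in practice, only the base address (or a linker symbol) would typically change as the same test moves from a virtual platform to an FPGA prototype or first silicon.

```cpp
// Minimal sketch of a software-driven verification test for a hypothetical
// memory-mapped timer peripheral. All names and addresses are assumptions
// made for illustration, not a real device's register map.
#include <cstdint>

namespace {

constexpr std::uintptr_t kTimerBase = 0x40001000u;   // hypothetical base address

// Access a 32-bit register at a given offset from the peripheral base.
volatile std::uint32_t& reg(std::uintptr_t offset) {
    return *reinterpret_cast<volatile std::uint32_t*>(kTimerBase + offset);
}

constexpr std::uintptr_t kLoad   = 0x00;  // countdown start value
constexpr std::uintptr_t kCtrl   = 0x04;  // control register
constexpr std::uintptr_t kStatus = 0x08;  // status flags

constexpr std::uint32_t kCtrlEnable = 1u << 0;
constexpr std::uint32_t kStatusDone = 1u << 0;

}  // namespace

// Returns 0 on pass, nonzero on fail; how the result is reported (UART print,
// host-side scoreboard, JTAG readback) depends on the execution platform.
int timer_selftest() {
    reg(kLoad) = 100u;           // program the countdown value
    reg(kCtrl) = kCtrlEnable;    // start the timer

    // Poll for completion, bounded so a broken design cannot hang the test.
    for (std::uint32_t i = 0; i < 1000000u; ++i) {
        if (reg(kStatus) & kStatusDone)
            return 0;            // pass: hardware signaled completion
    }
    return 1;                    // fail: no completion observed within bound
}

// Bare-metal entry point, or called from a boot loader / test harness.
int main() {
    return timer_selftest();
}
```

Because the test is plain embedded C/C++, the same source carries from ISS execution on a virtual platform to RTL simulation with a processor model, to emulation, FPGA prototyping, and post-silicon bring-up.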

The enabling technologies that allow these verification tasks to be executed throughout the different project phases fall under the umbrella of prototyping. First, users apply virtual prototyping before RTL is available, typically assembling transaction-level models together with instruction set simulators (ISSs) of the required processor cores into a virtual platform. Virtual platforms can execute very fast when kept at a loosely timed abstraction level.
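For readers unfamiliar with how such platforms are built, the following is a minimal, loosely timed sketch in SystemC/TLM-2.0. The "TrafficGen" initiator is a stand-in for an instruction set simulator, and the module names and latency figure are illustrative assumptions rather than any particular vendor's model.

```cpp
// Minimal loosely timed TLM-2.0 sketch: a traffic generator (standing in for
// an ISS) issues blocking transactions to a simple memory model. Timing is
// annotated and approximate rather than cycle-accurate.
#include <systemc>
#include <tlm>
#include <tlm_utils/simple_initiator_socket.h>
#include <tlm_utils/simple_target_socket.h>
#include <cstdint>
#include <cstring>

struct Memory : sc_core::sc_module {
    tlm_utils::simple_target_socket<Memory> socket;
    unsigned char storage[256] = {};

    SC_CTOR(Memory) : socket("socket") {
        socket.register_b_transport(this, &Memory::b_transport);
    }

    // Blocking transport: perform the access and annotate an approximate delay.
    void b_transport(tlm::tlm_generic_payload& trans, sc_core::sc_time& delay) {
        sc_dt::uint64 addr = trans.get_address();
        unsigned char* ptr  = trans.get_data_ptr();
        unsigned int   len  = trans.get_data_length();

        if (trans.is_write())
            std::memcpy(&storage[addr], ptr, len);
        else
            std::memcpy(ptr, &storage[addr], len);

        delay += sc_core::sc_time(10, sc_core::SC_NS);  // approximate latency
        trans.set_response_status(tlm::TLM_OK_RESPONSE);
    }
};

struct TrafficGen : sc_core::sc_module {
    tlm_utils::simple_initiator_socket<TrafficGen> socket;

    SC_CTOR(TrafficGen) : socket("socket") { SC_THREAD(run); }

    void run() {
        tlm::tlm_generic_payload trans;
        sc_core::sc_time delay = sc_core::SC_ZERO_TIME;
        std::uint32_t data = 0xCAFEBABE;

        // Write one word, then read it back, as an ISS-driven test might.
        trans.set_command(tlm::TLM_WRITE_COMMAND);
        trans.set_address(0x10);
        trans.set_data_ptr(reinterpret_cast<unsigned char*>(&data));
        trans.set_data_length(4);
        trans.set_streaming_width(4);
        socket->b_transport(trans, delay);

        data = 0;
        trans.set_command(tlm::TLM_READ_COMMAND);
        socket->b_transport(trans, delay);
        sc_assert(data == 0xCAFEBABE);   // read-back check

        wait(delay);  // synchronize the accumulated annotated time once
    }
};

int sc_main(int, char*[]) {
    TrafficGen initiator("initiator");
    Memory memory("memory");
    initiator.socket.bind(memory.socket);
    sc_core::sc_start();
    return 0;
}
```

The speed of loosely timed platforms comes from exactly this style: blocking function calls with annotated, approximate delays instead of cycle-by-cycle signal activity.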

The more accurate users require the models to be, the slower they will run. Later in the flow, FPGA prototyping offers higher accuracy than virtual platforms at execution speeds well beyond RTL simulation. Finally, once the first chip samples are available, hardware prototypes become viable and later evolve into development kits for software developers.

About the Author

Frank Schirrmeister

Frank Schirrmeister is Senior Director at Cadence Design Systems in San Jose, responsible for product management of the Cadence System Development Suite, accelerating system integration, validation, and bring-up with a set of four connected platforms for concurrent HW/SW design and verification.
