System-Level Verification Takes More Than Just Speed

Sept. 12, 2006
Electronic devices continue to reign as the "darlings" of the consumer market. The ultra-cool graphics and feature sets of handheld devices, games, and MP3 players—just to name a few—are straining the boundaries of complexity that designers and verification engineers can predictably deliver. The consumer market, driven by demand for features and integration, puts intense pressure on system-level verification solutions to keep pace with that growing complexity. While there is no doubt that performance is king in system-level verification, new techniques can further ease adoption and streamline the process.

Verification, especially at the system level, has steadily grown to represent a dominant percentage of the entire project schedule. It causes designers and other specialists to spend less time on innovation and differentiation and more on verification. That’s why engineers are turning to higher levels of abstraction with SystemC transaction-level models (TLMs) and hardware-based acceleration or emulation as essential techniques for system-level verification. Much of the EDA revenue addressing the system-level verification market is coming from these tools today. As these approaches become more popular and necessary, there are a few little-known secrets that will aid in customer adoption and overall satisfaction.
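To make the abstraction concrete, below is a minimal sketch of what a SystemC transaction-level model might look like, written in the blocking-transport style of the OSCI TLM library. The module names, address map, data values, and 10-ns delay are purely illustrative and are not drawn from any product or methodology discussed in this article.

```cpp
// Minimal loosely-timed SystemC TLM sketch: one initiator issues a write and a
// read to a simple memory model over the blocking transport interface.
// Module names, address map, and the fixed delay are illustrative only.
#include <systemc>
#include <tlm.h>
#include <tlm_utils/simple_initiator_socket.h>
#include <tlm_utils/simple_target_socket.h>

struct Memory : sc_core::sc_module {
  tlm_utils::simple_target_socket<Memory> socket;
  unsigned int mem[256];

  SC_CTOR(Memory) : socket("socket") {
    socket.register_b_transport(this, &Memory::b_transport);
  }

  // Blocking transport: model the access functionally, annotate a fixed delay.
  void b_transport(tlm::tlm_generic_payload& trans, sc_core::sc_time& delay) {
    unsigned int index = trans.get_address() / 4;
    unsigned int* data = reinterpret_cast<unsigned int*>(trans.get_data_ptr());
    if (trans.is_write()) mem[index] = *data;
    else                  *data = mem[index];
    delay += sc_core::sc_time(10, sc_core::SC_NS);
    trans.set_response_status(tlm::TLM_OK_RESPONSE);
  }
};

struct Initiator : sc_core::sc_module {
  tlm_utils::simple_initiator_socket<Initiator> socket;

  SC_CTOR(Initiator) : socket("socket") { SC_THREAD(run); }

  void run() {
    tlm::tlm_generic_payload trans;
    sc_core::sc_time delay = sc_core::SC_ZERO_TIME;
    unsigned int data = 0xCAFE;

    trans.set_address(0x40);
    trans.set_data_ptr(reinterpret_cast<unsigned char*>(&data));
    trans.set_data_length(4);

    trans.set_command(tlm::TLM_WRITE_COMMAND);
    socket->b_transport(trans, delay);          // write 0xCAFE to address 0x40

    trans.set_command(tlm::TLM_READ_COMMAND);
    socket->b_transport(trans, delay);          // read it back
  }
};

int sc_main(int, char*[]) {
  Initiator init("init");
  Memory    mem("mem");
  init.socket.bind(mem.socket);
  sc_core::sc_start();
  return 0;
}
```

Because each transfer is modeled as a single function call rather than cycle-by-cycle signal activity, a model like this runs orders of magnitude faster than RTL, which is exactly what makes TLMs attractive for early software bring-up and architectural exploration.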

These secrets to enhanced performance involve serving system-level verification with a good dose of verification process automation (VPA). Some of the most important VPA-based methods include common debug environments that can successfully handle multiple levels of abstraction and multiple engines (simulation, formal, acceleration, and emulation), re-usable verification IP (VIP), and central views or databases that can aggregate "total system coverage."

If you take a bird's-eye look at a system-level verification project, there are actually many steps (or stages) involved in building a working device. Typically the phases involve architectural exploration and validation, block- and chip-level functional verification, hardware/software co-verification, and post-silicon verification. In almost every stage, designers, software engineers, and system architects take advantage of sophisticated transaction-level modeling approaches and powerful acceleration/emulation solutions to get more system-level cycles. While there is no doubt that the performance of these approaches will impact the verification runs at each stage, performance alone is not enough. What pure performance won't do is measure your progress, establish "check points," or smooth the management and transition plans for each stage. Nor will it deliver a debug environment you can get up and running with immediately at your preferred abstraction level. It is this lack of predictability, planning, and familiarity that sets up questionable and extremely risky methods or sign-off points at each stage.

The interesting point here is that when you mention system-level verification, most users immediately equate it with brute speed or raw performance. They want to devour the verification tasks at hand with the biggest and baddest engines on the market. While performance is extremely important, your overall system-level verification project quality and schedule predictability can be further enhanced with newly introduced and often overlooked complementary methods and applications.

For instance, how "simulation-like" is your debug environment? Without a common debug environment, integration issues will slow both your simulation and your emulation runs. How much time are you spending bringing up a new design's verification environment? Can you track and measure the turnaround time from the moment a problem is identified until you have found the root cause, fixed it, re-compiled the design, and re-run your regression suite? Does your environment offer a rich set of verification IP that is re-usable across multiple engines? How about your interface with SystemC simulators? With SystemC becoming one of the premier system-level languages, it is extremely important to have an infrastructure that enables re-use of your SystemC reference models throughout the verification flow. By paying close attention to all of these process-automation procedures, you and your extended teams will greatly improve your experience with these powerful underlying verification engines.
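One low-effort way to get that kind of re-use is to keep the reference model itself in plain C++, then expose it once to SystemC testbenches and once to SystemVerilog testbenches through DPI-C. The sketch below assumes that approach; the class and function names (RefAdder, ref_adder_compute) are invented for illustration and are not a standard API.

```cpp
// A sketch of one way to keep a single C++ reference model reusable across the
// flow: the model itself is plain C++, a SystemC scoreboard can instantiate it
// directly, and a thin DPI-C shim exposes the same object to a SystemVerilog
// testbench running in simulation or on an accelerator.

// --- reference model: engine-agnostic, plain C++ ---------------------------
class RefAdder {
 public:
  // Golden behavior the DUT must match: saturating 16-bit add.
  unsigned compute(unsigned a, unsigned b) const {
    unsigned sum = a + b;
    return sum > 0xFFFF ? 0xFFFF : sum;
  }
};

// --- DPI-C shim: lets a SystemVerilog testbench call the same model --------
// On the SV side this would be imported as:
//   import "DPI-C" function int unsigned ref_adder_compute(
//       input int unsigned a, input int unsigned b);
extern "C" unsigned ref_adder_compute(unsigned a, unsigned b) {
  static RefAdder model;          // one shared golden-model instance
  return model.compute(a, b);
}
```

The same golden behavior can then back a transaction-level scoreboard during architectural validation and a signal-level checker during simulation or emulation, without anyone maintaining two separate models.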

Another concern in driving system verification to closure is fully understanding the goal of each phase and measuring against the plan. A proper plan has goals for each phase that make progress obvious and measurable. Are you done, or have you reached a hand-off point? Have all of your success criteria been met, for each engine (simulation, formal, and emulation or acceleration) and within each specialized part of the design? My bet is that the answer to most of these simple questions is a rather weak "I'm not really sure" or "We're doing the best we can."

Fortunately, there are software- and hardware-based tools and methodologies available that provide complete system-level process automation and management. Verification plans built around a system-level approach need metrics and reporting built in that roll up to management. This allows system-level project teams to better understand the entire verification climate and to budget time and resources to address emerging problems.
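As a rough illustration of what "metrics that roll up" might look like in practice, the sketch below aggregates per-engine coverage numbers against per-phase goals from a hypothetical verification plan. All of the phase names, goals, and percentages are invented for the example.

```cpp
// Minimal plan-to-metrics rollup sketch: each phase of the plan carries a
// goal, each engine reports its measured coverage, and one report shows which
// hand-off criteria are met. Data here is invented for illustration.
#include <cstdio>
#include <string>
#include <vector>

struct EngineResult {
  std::string engine;    // "simulation", "formal", "emulation", ...
  double coverage_pct;   // measured functional coverage for this engine
};

struct PlanPhase {
  std::string name;                  // e.g. "block-level functional"
  double goal_pct;                   // sign-off criterion from the plan
  std::vector<EngineResult> results; // per-engine measurements
};

int main() {
  std::vector<PlanPhase> plan = {
      {"block-level functional", 95.0,
       {{"simulation", 97.2}, {"formal", 88.5}}},
      {"hw/sw co-verification", 90.0,
       {{"emulation", 76.4}}},
  };

  for (const auto& phase : plan) {
    std::printf("%s (goal %.1f%%)\n", phase.name.c_str(), phase.goal_pct);
    for (const auto& r : phase.results) {
      bool met = r.coverage_pct >= phase.goal_pct;
      std::printf("  %-12s %6.1f%%  %s\n", r.engine.c_str(),
                  r.coverage_pct, met ? "met" : "NOT met");
    }
  }
  return 0;
}
```

In a real flow the measurements would come from the coverage databases produced by the simulation, formal, and emulation runs, but the rollup step, comparing each measurement against a plan goal, is what gives project managers an at-a-glance view of where each phase actually stands.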

As the separate teams working on individual blocks, and the software designers working on firmware or specific drivers, begin to incorporate these methods in familiar environments with their preferred languages, they will immediately become more efficient. They will begin to adopt team-based practices and methods that directly improve the overall verification process at each stage of the complete system. While the performance of the verification cycles at each stage is extremely important, it is really the performance of the entire team running in sync that will deliver the quality and predictability you need.

