Electronic Design

Solving The Modeling Accuracy And Availability Crisis Head On

Design verification has quickly evolved into one of the most critical steps in the system-design process. In many types of systems, it was previously possible to use a limited simulation environment to verify critical sections of custom logic, such as ASIC or FPGA modules, while leaving system verification to the lab. But difficulties associated with debugging in the lab have forced project teams to spend more time evaluating Verilog and VHDL simulation as a method to verify system designs. The greatest roadblock for system simulation is the accuracy and availability of simulation models.

To better understand the simulation-modeling crisis, consider the Motorola MPC8260 as an example. This microprocessor integrates six previously separate devices into a single chip. It includes a PowerPC microprocessor, a communications-processing module, and a system interface unit, as well as approximately 45 peripherals. Its data book, 1006 pages thick, is typical of the devices used in today's embedded systems, for which many project teams must now find simulation-modeling solutions.

In the past, engineers used three basic types of models to do system simulation. The first type, the bus functional model (BFM), can generate different bus transactions for a given device, usually a microprocessor. Hardware engineers like the BFM because it's easy to drive from a test bench. Unfortunately, it's created by an engineer using information from a data book, a process that always leads to accuracy issues.
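The idea behind a BFM can be sketched in a few lines of Verilog: a testbench calls tasks that generate bus cycles, and nothing inside the processor is modeled at all. The bus protocol and signal names below are invented for illustration, not taken from any particular device:

```verilog
// Sketch of a bus-functional model: the testbench drives transactions
// through tasks rather than simulating the processor's internals.
// Signals (cs_n, wr_n, addr, data) are hypothetical.
module cpu_bfm (
  input             clk,
  output reg        cs_n,
  output reg        wr_n,
  output reg [31:0] addr,
  inout      [31:0] data
);
  reg [31:0] wr_data;
  reg        drive;
  assign data = drive ? wr_data : 32'bz;  // tri-state when not writing

  initial begin
    cs_n  = 1'b1;
    wr_n  = 1'b1;
    drive = 1'b0;
  end

  // One single-beat write cycle: assert chip select and write strobe,
  // drive address and data for one clock, then release the bus.
  task bus_write(input [31:0] a, input [31:0] d);
    begin
      @(posedge clk);
      addr    <= a;
      wr_data <= d;
      drive   <= 1'b1;
      cs_n    <= 1'b0;
      wr_n    <= 1'b0;
      @(posedge clk);   // target samples address and data on this edge
      cs_n    <= 1'b1;
      wr_n    <= 1'b1;
      drive   <= 1'b0;
    end
  endtask
endmodule
```

A testbench can then exercise a peripheral with calls like `bfm.bus_write(32'h0000_1000, 32'hDEAD_BEEF);`, which is exactly why hardware engineers find BFMs convenient to drive. The catch, as noted above, is that every timing detail in such a task is transcribed by hand from the data book.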

The second type of model, the register-transfer level (RTL) or full-functional model, is created using the RTL or gate-level description of the chip. Accuracy is excellent because it's derived from the code used to produce the silicon, but performance is extremely slow. Sometimes these models are encrypted to protect the source code from users. It's not uncommon for such a model to require hours of simulation just for the reset sequence of a complex device.

The third type of model is the instruction set simulator (ISS). Software engineers use the ISS to start executing code early in the project, before any prototypes are available. This model offers good performance, but it makes no attempt at accuracy.
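The contrast with the other model types is easiest to see in code: a minimal ISS is just a fetch-decode-execute loop over opcodes, with no notion of bus cycles or timing. The four-instruction accumulator machine below is invented purely for illustration and sketched in Python rather than the compiled languages production simulators typically use:

```python
# Toy instruction-set simulator: interprets opcodes quickly, but models
# no bus activity or cycle timing. The ISA here is hypothetical.
LOAD, ADD, STORE, HALT = range(4)

def run(program, memory):
    """Fetch-decode-execute loop over (opcode, operand) pairs."""
    acc, pc = 0, 0
    while True:
        op, arg = program[pc]   # fetch
        pc += 1
        if op == LOAD:          # decode and execute
            acc = memory[arg]
        elif op == ADD:
            acc += memory[arg]
        elif op == STORE:
            memory[arg] = acc
        elif op == HALT:
            return memory

mem = {0: 2, 1: 3, 2: 0}
run([(LOAD, 0), (ADD, 1), (STORE, 2), (HALT, 0)], mem)
# mem[2] is now 5
```

Because each instruction is a single dictionary lookup and branch rather than thousands of simulated gate evaluations, software teams can execute real firmware at useful speeds long before silicon exists.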

Aside from accuracy and performance questions, all three software models suffer from limited availability. Due to support and legal issues, very few semiconductor companies are willing to give out source code, or even an encrypted version of it, to more than a few key customers. Yet that source code is necessary to create RTL models. Both the BFM and the ISS must be created by a modeling engineer, which typically takes anywhere from four months to more than a year.

Compounding the problem, EDA vendors have created co-verification tools that use these modeling methods. But the tremendous time-to-market benefits of co-verification can be lost if too much time is spent debugging model problems.

But there's another way! For most devices, the best way to address the accuracy, performance, and availability issues is with a modeling solution that uses hardware to control actual silicon, which serves as the model. As a result, the model is guaranteed to be 100% accurate, including device errata. Plus, with this type of model, high-performance hardware, coupled with advanced synchronization techniques between the model and the logic simulator, provides the performance required for system simulation. Best of all, engineers no longer need to wonder whether a simulation problem is in the model or in the design.
