Shorter time-to-market cycles and the increasing densities of both programmable logic devices (PLDs) and system-on-a-chip (SoC) ICs have made design simulation an essential part of the development cycle. The growing practice of designing hardware at a higher level of abstraction than the register-transfer level (RTL) is making simulation an even more integral part of the design process. Effective simulation improves design reliability while shortening the development cycle. Yet even with modern software tools, producing simulations for complex designs can be difficult and error-prone.
The benefits of simulation on any single project are readily apparent. Simulation lets design engineers test a wide range of conditions that may be difficult to generate with prototypes. Because design changes are easier and less expensive to make, designers can shorten cycles and improve quality by verifying functionality earlier in the design cycle. Simulation and modeling tools also provide an environment in which designers can better observe and analyze their design's behavior.
This is so important that some logic vendors see a competitive differentiator in developing and providing their own simulation tools. Others license tools from specialty vendors, such as Model Technology, and offer several levels of simulation, depending on the needs of the design team.
But simulation modules developed for smaller projects aren't easily scalable to larger, highly complex designs, which makes it difficult to build an effective test bench. There is also little guidance on how detailed a simulation model must be to capture the implementation at a level that provides adequate information (see "Modeling Of Embedded Processors In PLDs," p. 96).
Virtually all EDA tool chains include facilities for simulation. PLD vendors, like Altera Corp. and Xilinx Inc., give away entry-level versions of simulators with their devices. These versions let new HDL designers experiment by creating small HDL designs and simulation test benches. In fact, that may be part of the problem with simulation: techniques that work for these small exercises don't automatically scale to complex designs.
Critical to effectively using HDL simulation is the ability to build test benches that are both scalable and reusable as designs grow more complex. Engineers who develop a methodology emphasizing these goals will find that simulation pays off in better designs and less rework. Plus, new software tools are enabling engineers to implement simulation in new and powerful ways to better debug and verify more complex designs.
Using Simulation In Design
Simulating a design is standard practice for ASIC designers and others working with high-density devices. Verilog and VHDL simulators replaced older simulators, which generally had separate languages for netlist, models, and test bench uses (Fig. 1).

Simulation can occur at three points in the design process: at the register-transfer level, during functional design and simulation, and at the gate level. Once the RTL design is created, it must be verified to ensure that its functionality is as intended. At this point, the test bench should be created or expanded so that it covers the range of tests needed to exercise the design. It will be used throughout the FPGA flow to verify the functionality of the design at the register-transfer, functional, and gate (timing) levels.
A test bench, a separate body of VHDL or Verilog code connected to the design's inputs and outputs, is an integral component of simulation. Because the test bench stays constant through synthesis and place-and-route, it can be used at each stage to verify functionality. For large designs, however, building test models and test benches poses significant challenges.
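The pattern is simple enough to sketch in a few lines. Because C/C++ test benches come up later in this article, here is a minimal illustration in C++ rather than an HDL: a hypothetical saturating-adder design under test (DUT) is driven with stimulus and checked against a reference model. The DUT, the names, and the stimulus pattern are invented for illustration only.

```cpp
#include <algorithm>
#include <cstdint>
#include <iostream>

// Hypothetical design under test (DUT): a saturating 8-bit adder.
// In a real flow this would be the RTL, functional, or gate-level model.
static uint8_t dut_sat_add(uint8_t a, uint8_t b) {
    unsigned sum = unsigned(a) + unsigned(b);
    return static_cast<uint8_t>(sum > 0xFF ? 0xFF : sum);
}

// The test bench: separate code that drives the DUT's inputs and checks its
// outputs against a reference model. It stays constant while the DUT changes.
int main() {
    int errors = 0;
    for (unsigned a = 0; a < 256; a += 17) {
        for (unsigned b = 0; b < 256; b += 13) {
            unsigned expected = std::min(a + b, 255u);   // reference model
            uint8_t got = dut_sat_add(uint8_t(a), uint8_t(b));
            if (got != expected) {
                std::cerr << "a=" << a << " b=" << b << ": got " << int(got)
                          << ", expected " << expected << "\n";
                ++errors;
            }
        }
    }
    std::cout << (errors ? "FAIL" : "PASS") << "\n";
    return errors ? 1 : 0;
}
```

In a real flow, the same checking code would be pointed in turn at the register-transfer, functional, and gate-level views of the design.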
Two ways exist to implement a test bench. One is to build your own over time, adding the components needed to test the features of specific designs. Here, it's critical to design both components and tests that are modular and reusable across different designs, and even across different design teams.
This approach requires careful planning and close coordination between the various design teams, as well as the use of a central repository for tests. You're not likely to see the full benefits of this home-grown approach in the first couple of projects. But performed properly, it could become one of your most valuable design tools.
The other way to implement a test bench is to buy a commercial model library, such as the FlexModel library from Synopsys Inc. These models won't implement your specific design, so you will still have work to do on your test benches. But they commonly model the off-the-shelf components and IP that you're using, and they have the advantage of already being optimized and verified. Because these libraries vary in their offerings, you will have to search for suitable components. FlexModel, for example, includes a wide range of microprocessors, controllers, and bus interfaces. Commercial libraries jump-start the simulation test bench, but they are by no means a complete answer.
Test bench automation tools enable engineers to more easily specify and create the test bench environment. These tools provide built-in test generation, reactive response checking, and functional coverage analysis. They should include a high-level verification language, too, that will increase engineering productivity and verification quality by eliminating much of the laborious process of creating high-coverage test benches.
Performance is a key aspect of simulation, especially as designs grow larger and more complex. If the design encompasses hundreds of thousands or millions of gates, simulating all aspects of it and running the test cases many times can consume an inordinate amount of debugging and verification time.
One way to improve simulation performance is to speed up the simulator itself through more efficient code. That lies in the realm of the simulator vendor, but it's incumbent on the design team to seek out the simulator that most efficiently serves its purposes.
The second is to employ another simulation technique altogether. Traditional event-driven simulators are highly interactive, letting designers observe results at any point in the simulation and insert new boundary conditions or other modifications while the simulation is running.
These traditional simulators, however, have a significant disadvantage: complexity. They combine timing, initialization, and functional verification into a single process, which can seriously lengthen the time spent in the simulator. An event-driven simulator laboriously predicts signal delays with precise timing. It adds new events to a "timing wheel" and works its way along the wheel, simulating each event at its scheduled time. It thus produces a timing-correct model of the design at each moment, but at the expense of overall simulation speed.
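As a rough, toy-scale illustration of that timing-wheel mechanism (not any vendor's implementation), the C++ sketch below keeps an ordered map of scheduled events, processes them in time order, and lets each event schedule further events after an assumed gate delay.

```cpp
#include <cstdint>
#include <functional>
#include <iostream>
#include <map>
#include <vector>

// A toy event-driven kernel: the "timing wheel" is modeled as an ordered map
// from simulation time to the events scheduled at that time.
struct Kernel {
    using Event = std::function<void()>;
    std::map<uint64_t, std::vector<Event>> wheel;
    uint64_t now = 0;

    void schedule(uint64_t delay, Event e) { wheel[now + delay].push_back(std::move(e)); }

    void run() {
        while (!wheel.empty()) {
            auto slot = wheel.begin();              // earliest scheduled time
            now = slot->first;
            auto events = std::move(slot->second);
            wheel.erase(slot);
            for (auto& e : events) e();             // events may schedule new ones
        }
    }
};

int main() {
    Kernel k;
    bool a = false, y = false;
    // An inverter with a 2-time-unit delay: when 'a' changes, 'y' follows later.
    auto drive_a = [&](bool v) {
        a = v;
        k.schedule(2, [&, v] { y = !v; std::cout << "t=" << k.now << " y=" << y << "\n"; });
    };
    k.schedule(0, [&] { drive_a(true); });
    k.schedule(5, [&] { drive_a(false); });
    k.run();
}
```

Every signal change, however minor, becomes an entry on the wheel, which is where the accuracy and the cost of event-driven simulation both come from.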
Event-driven simulators have been used successfully for small designs, or for module debugging of larger designs when a high degree of interactivity was needed and the module sizes were small enough to compile and simulate quickly. You can continue to use your event-driven simulator to develop initialization vectors throughout the simulation process. But traditional event-driven software simulators might not be able to provide the performance required for verifying high-density designs in limited time.
An alternative to event-driven simulation is cycle-based simulation. Because cycle-based simulators evaluate the circuit only once per clock phase, instead of performing many evaluations per clock as event-driven simulators do, their performance can be orders of magnitude higher.
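By contrast, a cycle-based kernel needs no event queue at all: it simply evaluates the synchronous next-state logic once per clock edge. A minimal sketch, assuming a made-up two-stage pipeline as the design under simulation:

```cpp
#include <cstdint>
#include <iostream>

// Toy cycle-based evaluation of a two-stage pipeline: no event queue, no gate
// delays, just one evaluation of the next-state logic per clock cycle.
struct Pipeline {
    uint32_t stage1 = 0, stage2 = 0;
    void cycle(uint32_t in) {
        uint32_t next1 = in * in;        // stage 1: square the input
        uint32_t next2 = stage1 + 1;     // stage 2: increment stage 1's result
        stage1 = next1;                  // both registers update on the clock edge
        stage2 = next2;
    }
};

int main() {
    Pipeline p;
    for (uint32_t t = 0; t < 6; ++t) {
        p.cycle(t);
        std::cout << "cycle " << t << ": stage2=" << p.stage2 << "\n";
    }
}
```

The speedup comes from exactly this simplification, which is also why cycle-based simulators work best on mostly synchronous designs.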
Cycle-based simulators, such as Cadence Design Systems' SpeedSim, were designed with speed as their primary objective. Cadence says that SpeedSim can reduce a design's normal simulation time by a factor of ten to 100 while drastically cutting down on the memory and other system resources used. SpeedSim employs different simulation algorithms that make it easier to verify large designs, and Cadence claims it has been used for production verification of designs with over five million gates.
For simulating RTL code directly, Synopsys' Cyclone optimizes the source code utilizing cycle-based algorithms that were designed to work with synthesizable RTL descriptions. Like SpeedSim, Cyclone is proficient at simulating large designs with mostly synchronous functions, minimal legacy gate-level code, and a test bench that follows a set of Cyclone modeling guidelines. The simulator accepts a broad range of RTL constructs, including complex types, asserts, wait statements, while loops, text I/O, and sparse memories.
To assist with identifying problematic coding styles, the Cyclone analyzer performs both syntax and synthesis policy checking. The simulator compiles the test bench and design under test and alerts the designer to the location of hidden design and coding-style problems, including unintentional asynchronous feedback, latches inferred from incompletely specified processes, and unintentionally multidriven nets.
Analyzing Results
To get the most out of a simulation, data collection and analysis must be both robust and flexible. The problem is that virtually every simulation problem is different, and the generated results have to be analyzed in an almost infinite number of ways. While simulation tools come with some predefined graphs for visualization, it's unrealistic to expect enough predefined analysis features to satisfy all design needs.

Analysis tools should support various graphical and text-based techniques that let engineers visualize the simulation results and cross-reference them back to the original design. Such techniques may include waveform displays, source-code editors, register windows, and design-hierarchy viewers.
Instead, many simulation tools have turned to scripting languages to provide the power and flexibility needed to customize the user interface and data-analysis components of the simulation tool. The most popular of these is Tcl/Tk. Tool Command Language, or Tcl, is an open-source scripting language used by hardware and software developers that has become a critical component in thousands of applications. With a simple, programmable syntax, it can be used either as a standalone application or embedded in other programs.
Tk is a graphical user interface toolkit that makes it possible to create powerful GUIs quickly. Also open source, it has proven so popular that it now ships with all distributions of Tcl.
Using Tcl/Tk, designers can produce buttons, menus, wizards, and other components that not only make more customized data analysis possible, but in many cases also customize the simulation environment itself. Synopsys employs Tcl to provide a command interface to its simulation tools (Fig. 2). Model Technology's ModelSim incorporates Tcl/Tk not just for customizing analysis and user interface, but to provide an interface to other programs as well, including commercial or home-grown data-analysis tools.
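To see how a tool can expose its internals through Tcl in this way, consider the minimal C++ sketch below. It embeds a Tcl interpreter and registers a single, hypothetical run_cycles command that a user script could call; the command name and the surrounding "simulator" are invented for illustration and do not correspond to any particular product's command set.

```cpp
#include <tcl.h>
#include <iostream>

// Hypothetical simulator state that the custom Tcl command manipulates.
static long g_cycles_run = 0;

// Implements a made-up "run_cycles <n>" command for the embedded interpreter.
static int RunCyclesCmd(ClientData, Tcl_Interp* interp, int objc, Tcl_Obj* const objv[]) {
    if (objc != 2) {
        Tcl_WrongNumArgs(interp, 1, objv, "count");
        return TCL_ERROR;
    }
    int n = 0;
    if (Tcl_GetIntFromObj(interp, objv[1], &n) != TCL_OK) return TCL_ERROR;
    g_cycles_run += n;                               // stand-in for advancing the simulation
    Tcl_SetObjResult(interp, Tcl_NewLongObj(g_cycles_run));
    return TCL_OK;
}

int main() {
    Tcl_Interp* interp = Tcl_CreateInterp();
    Tcl_CreateObjCommand(interp, "run_cycles", RunCyclesCmd, nullptr, nullptr);

    // A user's analysis script would normally come from a file; it is inline here.
    const char* script = "run_cycles 100; run_cycles 50";
    if (Tcl_Eval(interp, script) == TCL_OK)
        std::cout << "total cycles: " << Tcl_GetStringResult(interp) << "\n";
    else
        std::cerr << "script error: " << Tcl_GetStringResult(interp) << "\n";

    Tcl_DeleteInterp(interp);
    return 0;
}
```

Once a handful of commands like this are registered, users can build their own analysis scripts, menus, and GUIs on top of them with Tcl/Tk rather than waiting for the tool vendor to anticipate every need.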
HDLs let designers organize and programmatically analyze highly complex designs with tools like HDL simulators. In other words, analyzing simulation results has replaced visually inspecting schematics as the way functional design verification is done.
This makes it important to have a modeling and design language that reduces the effort necessary to code and simulate test cases. Often, the experience of the design team and the tools available for the project dictate the language of choice. But, engineering teams seeking reusable code and processes over a number of projects are open to using different languages for different parts of the project, or to changing languages if the productivity advantages are clear.
Furthermore, no existing language is ideal for simulation environments and test benches. Verilog, for example, offers no facilities for dynamic memory allocation, and it isn't well designed for text processing or built-in input. Verilog 2000 addresses some of the current limitations, but even assuming these are implemented, shortcomings remain.
Designers also are experimenting with general-purpose programming languages, like C, C++, and even Java, for hardware modeling and simulation. In significant design projects, it's becoming accepted practice to perform the initial design exploration in a general-purpose programming language, typically C or C++, before implementing the design in an HDL.
As alternatives to the traditional design languages, however, C and C++ have limitations. The existing RTL-based design methodology succeeds because an RTL model in Verilog or VHDL can be verified by simulation or formal means and then transformed into a design implementation in a way that is provably correct. C/C++ lacks the features needed to model hardware designs unambiguously, so some engineers are proposing language modifications or extensions to make it more appropriate for the purpose.
The key to using C/C++ as a hardware description language is to provide the required data types, concurrency model, and hierarchy support. Without these features, any method of hardware description in the language becomes arbitrary and loses the consistency between execution (simulation) results and synthesis results. In the case of C or C++, the required HDL semantics can be provided by C++ classes that extend the language to bridge the gap.
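The sketch below gives a flavor of how a class library can graft such semantics onto C++: a Signal class keeps separate current and next values (mimicking non-blocking assignment), and Module instances are evaluated "concurrently" each cycle before all signals update together. It is written from scratch for illustration and is not the Cynlib API or any other vendor's library.

```cpp
#include <iostream>
#include <vector>

// A signal holds a current value, visible to all readers this cycle, and a
// pending value written this cycle; commit() moves pending to current, which
// mimics the non-blocking update semantics of an HDL.
template <typename T>
class Signal {
    T cur{}, nxt{};
public:
    T read() const { return cur; }
    void write(T v) { nxt = v; }
    void commit() { cur = nxt; }
};

// A module is a unit of hierarchy whose behavior is evaluated once per cycle.
struct Module {
    virtual void evaluate() = 0;
    virtual ~Module() = default;
};

// Two modules communicating through signals: a counter and a comparator.
struct Counter : Module {
    Signal<int>& out;
    explicit Counter(Signal<int>& o) : out(o) {}
    void evaluate() override { out.write(out.read() + 1); }
};

struct Threshold : Module {
    Signal<int>& in; Signal<bool>& hit;
    Threshold(Signal<int>& i, Signal<bool>& h) : in(i), hit(h) {}
    void evaluate() override { hit.write(in.read() >= 3); }
};

int main() {
    Signal<int> count; Signal<bool> over;
    Counter c(count); Threshold t(count, over);
    std::vector<Module*> modules{&c, &t};

    for (int cycle = 0; cycle < 5; ++cycle) {
        for (auto* m : modules) m->evaluate();   // "concurrent" evaluation on old values
        count.commit(); over.commit();           // then all signals update together
        std::cout << "cycle " << cycle << ": count=" << count.read()
                  << " over=" << over.read() << "\n";
    }
}
```

Because the semantics live in the classes rather than in ad hoc coding conventions, simulation results stay consistent with what a synthesis flow would later produce from the same description.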
Several vendors have proposed such extensions to a general-purpose programming language like C/C++, or to a hardware design language like Verilog, in an attempt to overcome the limitations of any single language in hardware modeling and simulation. The resulting languages are either hybrids of two or more existing languages or supersets of one or more of them.
According to Peter Flake, cofounder and chief technology officer of Co-Design Automation, such a language requires hardware features, including concurrency, events, and timing, along with software features like user-defined data-type structures, pointers, and storage. To better support verification, the language also should include hierarchical and concurrent state machines, support for dynamic arrays and queues, the ability to replicate some of the process-management features found in operating systems, plus the ability to monitor and execute a protocol.
A Superset-Type Language
An example of a superset-type language is Co-Design Automation's SuperLog (Fig. 3). Based on Verilog, it's extended with features from both C and VHDL, including the C typedef instruction for creating user-defined data types, C operators, and control constructs. SuperLog introduces a number of extensions that make it a cross between a general-purpose language and an HDL, which gives it the potential to serve as a single environment for end-to-end specification and design.

Utilizing a general-purpose programming language for initial system specification also creates the problem of translation into a usable hardware description. When the design process is finished and the high-level design parameters have been decided upon, the resulting program must be translated into an RTL model in Verilog or VHDL so that the design can be implemented using standard simulation and synthesis tools.
This rewriting step might be termed a design gap because it's not only an entirely manual process, but also a complete rewrite of the design from one type of language to another. The result is a host of problems stemming from the inability to verify the equivalence of the two design representations. There's simply no guarantee that the design prototyped in a general-purpose language is the same design that's implemented using, for example, VHDL.
This is where tools like CynApps' Cynlib come in. Cynlib's approach is similar to that of SuperLog: an architectural model can be iteratively refined from a very high level to a detailed implementation model in a continuous fashion. The model remains executable the whole time, so there's no discontinuous jump from one representation to another. At the lowest levels, executing the model provides a cycle-accurate simulation of the final hardware. Once simulation tools can be fully extended across the various languages used at different points in the design process, engineers will be able to work at higher levels of abstraction while shortening debugging and verification time later in the cycle.
In practice, many tools support mixed-language simulation. For instance, for design definition, Cadence's SpeedSim accepts any combination of synthesizable Verilog constructs, RTL or gate-level, including UDPs. Cadence supports C/C++ for test bench construction, claiming that these languages make the most efficient test benches, but SpeedSim also accepts many Verilog behavioral test bench constructs. Model Technology's ModelSim offers mixed Verilog and VHDL simulation, as well as either language individually.
The Role Of IP
IP plays a key role in the design-cycle time for SoCs and other complex designs. Today's SoC designs reuse IP from earlier, or legacy, designs and from third-party developers. The IP can be represented in various formats, such as VHDL, Verilog, or C. This requires that simulation and verification tools understand multiple design languages, and that the circuitry, including the IP, functions correctly.

Simulators typically treat IP as a black box, because module-level verification isn't necessary. Model Technology's ModelSim lets third-party IP providers deliver compiled code, which is then combined in the simulator with the designers' own code to form a complete design for simulation.
With SoC designs extending into several million gates, simulation performance and memory issues become critical. As a result, simulation environments are beginning to move toward 64-bit platforms, such as Sun SPARC and Intel Itanium processor systems. The primary need is for memory and storage capacity capable of handling very large code models. Model Technology has already announced support for selected 64-bit simulation platforms, and others will follow over the next year.
When high-density FPGAs are used to create increasingly large and complex designs, simulation represents a way to more fully debug and verify the design in a shorter period of time. State-of-the-art simulation tools, coupled with other advanced verification tools and techniques, give design engineers a powerful arsenal to attack the burgeoning verification challenges. Advances such as higher performance, 64-bit platform support, and more alternatives in mixed-language and multilanguage support enable engineers to spend less time in the simulation phase of the design.
Companies Mentioned In This Report
Altera Corp., (408) 544-7000, www.altera.com
Cadence Design Systems, (408) 943-1234, www.cadence.com
Co-Design Automation, (877) 626-3374, www.co-design.com
CynApps, (408) 588-4000, www.cynapps.com
Model Technology, (503) 641-1340, www.model.com
Synopsys Inc., (650) 584-5000, www.synopsys.com
VeriBest, (888) 482-3322, www.mentor.com/pcb/
Xilinx Inc., (408) 559-7778, www.xilinx.com