For some time, designing and testing PLDs has taken place primarily in the world of hardware description languages (HDLs). Of course, most PLDs are eventually mounted on a pc board of some sort when they're deployed in the real world. Traditionally, the design process employs HDL test benches to ensure that the FPGA/CPLD (field-programmable gate array/complex programmable logic device) will operate correctly when fed with signals from the rest of the pc board. Another method has emerged, though the two aren't mutually exclusive. Correct PLD operation can be confirmed by simulating the programmable device as part of the pc-board-level circuit in which it will operate.
Here's how the process works. A PLD, designed and simulated in HDL, has its symbol placed within a schematic and is simulated as part of the entire pc board. This involves using a new technology called co-simulation. Basically, co-simulation integrates the simulation data produced by HDL and Spice simulation engines and generates a single set of results, which can be viewed as waveforms, or analyzed further. Given the increasing use of FPGAs/CPLDs in today's designs, such capability provides an important new tool to PLD/pc-board engineers.
Several factors are driving the increasing use of programmable devices in today's circuit designs. One is the pressure on industrial, commercial, and consumer electronics manufacturers to miniaturize their offerings. Cell phones, wireless e-mail devices, MP3 players, and PDAs are the showcase products.
Next, as more designs become predominantly or fully digital, PLDs become feasible alternatives to pc boards, especially as they advance to handle larger circuit sizes. (State-of-the-art FPGAs provide hundreds of thousands to millions of gates.) Finally, price can be reduced. A PLD can cost dramatically less than a finished pc board. All of these facts have led to more designs employing programmable devices.
FPGAs/CPLDs imply HDLs
The large number of gates that programmable chips can now support drives down the size and cost of the design. This means, however, that the usual design methods for implementing circuits on pc boards are no longer practical for circuits destined for FPGAs/CPLDs. Most notably, schematic entry of a design with gate counts typical of FPGAs/CPLDs is impracticably cumbersome, if not impossible.
When designing with a PLD, the engineer tries to program the chip to behave in a certain way, as opposed to describing the electrical components of the PLD circuit. This difference is the fundamental reason why HDLs are utilized. They describe the behavior of a device, rather than the underlying circuitry.
Originally conceived to be behavioral-level languages, VHDL and Verilog HDL are by far the two most common HDLs. They're text-based, so for FPGAs/CPLDs, design entry is performed using a text editor instead of the typical schematic-capture tools employed for pc-board designs.
For simulation, a fundamental step in the design flow of any chip, the programmable device needs to be modeled. Spice simulation works well for SSI- or MSI-level digital devices by representing them with transistor-based equivalents. As a result, it's extremely accurate. But transistor-based equivalents become unwieldy once the number of gates gets too large. This problem exists for any sufficiently large digital chip, whether it's a programmable device or a microprocessor.
For this reason, FPGAs/CPLDs aren't modeled in Spice; instead, an HDL is used for simulation. By the same token, the benefits of co-simulation for PLD design can also be exploited to model and simulate any complex digital device within the board-level environment of which it will be a part.
As already noted, almost all PLDs today are programmed using either VHDL or Verilog HDL. Both are industry-standardized (VHDL as IEEE 1076-1993 and Verilog as IEEE 1364). Consequently, each language offers a nonproprietary method for designing with chips from a variety of different vendors. Figure 1 shows a portion of typical VHDL source code, plus simulation results, for an arithmetic logic unit (ALU).
The complexity of programmable devices, and the fact that engineers can't observe their internal circuitry, make simulation an essential step in the design flow for all FPGA/CPLD-based designs. To perform any simulation, the device/circuit being simulated must be supplied with stimuli. Providing those stimuli is the purpose of an HDL test bench.
An ideal test bench lets the engineer verify the functionality of a design by applying input stimuli to the device under test (DUT) and then monitoring its response. This monitoring may be performed by collecting data to a log file for further analysis or, more typically, by plotting it to a waveform viewer integrated with the HDL simulator.
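This apply-and-monitor loop can be sketched in ordinary code. The sketch below is in Python rather than an HDL, purely to illustrate the pattern; the DUT here is a hypothetical 2-to-1 multiplexer, and all names are made up:

```python
# Test-bench pattern: apply input stimuli to a DUT, monitor its response,
# and log each result for later analysis (a stand-in for a waveform viewer
# or log file).

def dut_mux2(a, b, sel):
    """Device under test: a 2-to-1 multiplexer (stand-in for an HDL model)."""
    return b if sel else a

def run_test_bench(dut, vectors, expect):
    """Drive each stimulus vector, compare against the expected response."""
    log, failures = [], 0
    for vec in vectors:
        got = dut(*vec)
        ok = got == expect(*vec)
        failures += not ok
        log.append((vec, got, ok))   # collected data for further analysis
    return failures, log

# Exhaustive stimuli for this tiny DUT; real benches can rarely be exhaustive.
vectors = [(a, b, s) for a in (0, 1) for b in (0, 1) for s in (0, 1)]
failures, log = run_test_bench(dut_mux2, vectors, lambda a, b, s: b if s else a)
```

The same separation holds in an HDL flow: the DUT is one description, the stimulus/checking code is another, and only the log or waveform output connects them.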
After the device itself is fully described, the test bench is created by writing additional HDL code. That code is then used to drive the simulation of the PLD acting as the DUT. In many cases, verification via co-simulation is more accurate than an HDL test bench because the latter is based on functional and timing specifications extracted from the original circuit.
Co-simulation bypasses the specification-extraction step, allowing the whole pc-board circuit to be simulated directly. Because the whole circuit's operation must usually meet a given specification, co-simulating often allows for relaxing the design constraints on the HDL description.
Using a test bench to verify correct behavior of the HDL code (which will ultimately be used to program the PLD) is a common, reliable, and generally mandatory step. Writing a test bench has its challenges, though, and can be a very time-consuming process. In fact, complex designs may lead to test benches that are more complicated than the designs to which they apply.
The test bench may also contain coding errors, or it might not provide proper coverage of the FPGA's functionality. The latter problem arises because the test bench can simulate the environment in which the design will function only to the degree that the appropriate stimuli can be known (and then modeled) by the engineer. It may not be possible to adequately conceive of, or describe, all the stimuli to which the chip will be subjected. Finally, in an attempt to avoid overlooking something, the test bench is often written by a different engineer than the one who wrote the original HDL code.
With simulation evolving to the point of integrating Spice, VHDL, Verilog HDL, C-code, and other development languages, the designer can now augment the test bench by using co-simulation. An important consideration is the fact that co-simulation lets multiple simulation engines interact with each other in real time, which is especially important for PLD designers.
For example, it lets users perform schematic capture of a circuit for implementation on a pc board while using HDL-modeled FPGAs/CPLDs as part of that circuit. The HDL model for the FPGA/CPLD comes from the first step in the PLD design process once the HDL code has been simulated (perhaps even with the use of a test bench) separately from the pc-board circuitry.
With this new technology, PLDs use VHDL or Verilog HDL code as simulation models, while other discrete parts or ICs (analog, or smaller-scale digital devices) utilize Spice. With Spice, VHDL, and Verilog HDL interacting in the background, the entire board can be simulated, and the PLD receives the appropriate signals from its surrounding components and connections, just like in the real world. The user sees integrated simulation results, displaying the pc board's overall behavior with the FPGA/CPLD taken into account.
To synchronize the interaction between the Spice and HDL simulators, co-simulation requires unique, custom technology. Any individual simulation engine is already a very sophisticated piece of software, built on extremely complex code. The key technical difficulty is the incompatibility of the basic principles of each simulation technique. The most critical tasks are coordinating "time" and passing control back and forth between simulation engines. The word "time" is in quotes here not because it's relative in Einstein's sense, but because different simulation techniques treat time differently.
Spice simulation typically uses an iterative approach, moving forward and backward in time, as it attempts to create equations that accurately describe the electrical behavior of the circuit elements. These equations are frequently high-order, nonlinear differential equations that get "written" as matrices in the simulator and then solved (or at least such an attempt is made).
A difficulty arises here. If the matrix can't be resolved, the calculation engine instructs the simulation engine to step back in time! The calculation is then run again, with complex mathematical techniques applied to assist. Although Spice may be quite happy to hop back and forth in time, HDL simulators have trouble doing so.
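The reject-and-retry behavior can be sketched as an adaptive time-stepping loop: when the nonlinear solve at a proposed time point fails to converge, the step is rejected and retried with a smaller timestep from the last accepted point. This toy Python sketch solves one scalar nonlinear equation per step by Newton iteration, standing in for the full matrix solve; the circuit equation and all numbers are invented for illustration:

```python
def newton(f, df, x0, tol=1e-9, max_iter=8):
    """Newton iteration on a scalar equation; returns (root, converged)."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / df(x)
        x -= step
        if abs(step) < tol:
            return x, True
    return x, False

def transient(v0=1.0, t_end=1.0, dt=0.25, dt_min=1e-6):
    """Backward-Euler integration of dv/dt = -v**3 with step rejection."""
    t, v, points = 0.0, v0, [(0.0, v0)]
    while t < t_end:
        h = min(dt, t_end - t)
        # Backward-Euler residual: v_new - v + h*v_new**3 = 0
        f = lambda x, v=v, h=h: x - v + h * x**3
        df = lambda x, h=h: 1 + 3 * h * x**2
        v_new, converged = newton(f, df, v)
        if converged:
            t, v = t + h, v_new
            points.append((t, v))
            dt = min(dt * 2, 0.25)   # regrow the step after a success
        else:
            dt = h / 2               # reject: "step back in time," retry smaller
            if dt < dt_min:
                raise RuntimeError("timestep underflow: no convergence")
    return points

pts = transient()
```

Real Spice engines use far more elaborate step control, but the essential point survives in the sketch: accepted time points only ever move forward, while rejected attempts rewind to the last good solution.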
HDL simulators only have the concept of moving forward in time. Further complicating matters, HDL simulators tend to work in far larger units of time than do Spice simulators, so one engine is almost always waiting for the other to provide the necessary information to begin again.
Finally, HDL simulation is "event driven," meaning calculations take place only when something happens (e.g., a state change). Spice, by contrast, is time-step driven. This dichotomy leads to many problems in deciding which engine has control at what point in the overall simulation.
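This handoff can be sketched in miniature: the digital side keeps an event queue and only moves forward, the analog side advances in small time steps, and a coordinator never lets the analog engine run past the next digital event, since that event may change its inputs. All class and signal names below are illustrative inventions, not any real product's architecture:

```python
import heapq

class DigitalEngine:
    """Event-driven: time only moves forward, work happens at scheduled events."""
    def __init__(self, events):
        self.queue = list(events)          # (time, signal, value) tuples
        heapq.heapify(self.queue)
        self.now = 0.0
        self.trace = []

    def next_event_time(self):
        return self.queue[0][0] if self.queue else float("inf")

    def advance_to(self, t):
        while self.queue and self.queue[0][0] <= t:
            et, sig, val = heapq.heappop(self.queue)
            self.now = et
            self.trace.append((et, sig, val))   # evaluate processes here
        self.now = max(self.now, t)

class AnalogEngine:
    """Time-step driven: integrates forward in small steps (toy model)."""
    def __init__(self, dt=0.1):
        self.now, self.dt, self.steps = 0.0, dt, 0

    def advance_to(self, t):
        while self.now + self.dt <= t:
            self.now += self.dt            # one matrix solve per step in real Spice
            self.steps += 1

def cosimulate(digital, analog, t_end):
    """Lock-step coordinator: the analog engine never runs past the next
    digital event, because that event may change its inputs."""
    t = 0.0
    while t < t_end:
        t = min(digital.next_event_time(), t_end)
        analog.advance_to(t)               # catch the analog side up first
        digital.advance_to(t)              # then fire the digital event(s)

dig = DigitalEngine([(0.25, "clk", 1), (0.5, "clk", 0), (0.75, "clk", 1)])
ana = AnalogEngine(dt=0.1)
cosimulate(dig, ana, t_end=1.0)
```

The sketch omits what makes real co-simulation hard, such as analog step rejection forcing a rewind the digital engine cannot perform, but it shows why control must pass back and forth at event boundaries.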
Fortunately, obtaining these advanced capabilities is possible, and using them can be quite straightforward. One of the very first products to offer such functionality is the newest version of Multisim, from Electronics Workbench. Multisim provides schematic capture, a component database, Spice, and VHDL or Verilog simulation.
Typically, engineers write, simulate, and debug the HDL code for their FPGAs/CPLDs with a VHDL or Verilog design tool. At this point, they can use a test bench to verify that the design behaves correctly for particular stimuli. They may then employ a schematic-entry tool to create the pc-board-level design, combining analog and digital components with the programmable device(s), which remain modeled in an HDL.
Next, the entire circuit, including the PLD, is simulated using co-simulation. The designer then troubleshoots the circuit and the FPGA/CPLD, making corrections where necessary. The most advanced software offers all of the above capabilities in one integrated tool.
Figure 2 shows a simple ALU chip, the principal component of a pc-board display controller/driver. (For this example, the ALU is designed from an FPLA made by Altera Corp., San Jose, Calif.) Figure 3 depicts the test-bench simulation results of two 4-bit operands (s, which counts 0 to F, and r, which is fixed) added together. The output is shown at ALU_Upper_STD_Logic and ALU_Lower_STD_Logic.
Once all of the test vectors have been input and the outputs observed, the test bench asserts its internal "done" signal. The test bench exercises the HDL description and helps check that its results (usually outputs) match those expected by the functional specification, within the time constraints given in the timing specification.
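The sweep described above can be mimicked in a few lines of ordinary code: s counts from 0 to F with r held fixed, each sum is checked against the expected value, and a "done" flag is raised after the last vector. The Python model below is only a stand-in for the HDL description, and the fixed value of r is invented (the article doesn't give it):

```python
def alu_add(r, s):
    """Stand-in for the HDL ALU: add two 4-bit operands."""
    total = (r + s) & 0x1F            # 5 bits: carry plus 4-bit sum
    upper = total >> 4                # like ALU_Upper_STD_Logic (carry bit)
    lower = total & 0xF               # like ALU_Lower_STD_Logic (4-bit sum)
    return upper, lower

r = 0x3                               # fixed operand (value chosen arbitrarily)
done = False
results = []
for s in range(0x10):                 # s counts 0 through F
    out = alu_add(r, s)
    assert out == divmod(r + s, 0x10) # expected result per the specification
    results.append(out)
done = True                           # bench asserts "done" after the last vector
```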
To co-simulate Spice and HDL together, the HDL description must be brought into the co-simulation environment. In Multisim, the component symbol is created with a Component Editor, and the component model associated with this symbol is simply a reference to the underlying HDL source-code file. The user must also specify a process technology (e.g., TTL, ALS TTL, CMOS, etc.), which determines the delay and loading models of the component. The newly created component's simulation behavior is thus defined by the HDL description, with timing and loading set by the chosen technology.
The user can then add the interface and analog portions of the circuit to the schematic. Simulation can be accomplished through virtual instruments, such as logic analyzers and oscilloscopes, or through transient analysis. The inputs and outputs of the HDL description become the pins of the newly created component. Note that the VHDL vector-type inputs and outputs expand into several pins on the new schematic component.
For example, in the schematic of Figure 2, the ALU_Lower_STD_LOGIC is shown as 4 bits driving the decoded seven-segment display. Figure 4 depicts the co-simulation results. The upper four traces show the 4 bits of ALU_Lower_STD_LOGIC, while the lower four traces show the changing of the s input.
Finally, the PLD and the pc board must be taken from the abstract to the real. The pc board goes through the steps of layout and, as is commonly necessary, autorouting. In this step, the PLD is simply one of the footprints on the board.
Now comes the time to program the FPGA/CPLD. The engineer may prefer to synthesize the design using a tool offered by the chip vendor, often free of charge to purchasers of its devices. The tools needed for the last step, place-and-route for the PLD, must be obtained from the device vendor, as they require knowledge of proprietary technology.
Functionality defined by the VHDL Initiative Towards ASIC Libraries (VITAL) can be used to generate HDL code that includes propagation delays in a standard delay format. VITAL (IEEE 1076.4) is a standard method for annotating timing information onto a VHDL simulation model. Vendors also commonly use it to create vendor-specific libraries corresponding to their supported VITAL primitives. Armed with propagation delays based on place-and-route results, or from standard libraries of vendor-specific primitives, the engineer can co-simulate the PLD in an even more accurate simulation environment.
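The effect of back-annotation can be illustrated abstractly: a gate model that starts with a default unit delay has that delay replaced by a value obtained from place-and-route results. Everything below, including the delay figure, is invented for illustration; real flows read the delays from a standard-delay-format file into a VITAL-compliant model:

```python
class Gate:
    """Toy gate model whose propagation delay can be back-annotated."""
    def __init__(self, func, delay_ns=1.0):
        self.func = func
        self.delay_ns = delay_ns       # pre-layout default: unit delay

    def output_event(self, t_in_ns, *inputs):
        """Time and value of the output event for inputs arriving at t_in_ns."""
        return t_in_ns + self.delay_ns, self.func(*inputs)

and_gate = Gate(lambda a, b: a & b)

# After place-and-route, annotate the extracted delay (number is made up),
# much as a delay file back-annotates a VITAL-compliant VHDL model.
and_gate.delay_ns = 2.7

t_out, value = and_gate.output_event(10.0, 1, 1)
```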
Co-simulation is a powerful new tool in the engineer's arsenal for designing and verifying mixed-mode circuits that include devices like FPGAs/CPLDs as part of the pc-board design process. The method helps eliminate design mistakes, shorten time-to-market, and improve the likelihood that all relevant test cases are covered.