Facing Up To Today’s FPGA Verification Challenges

Sept. 1, 2004
Fifteen years ago, verification of FPGA designs was easy, but as the size of FPGAs has increased, so have the verification challenges, Jerry Kaczynski explains.

Today it is not unusual for FPGA users to have to deal with more than one language in their designs. At earlier stages of design development it may be necessary to interface HDL simulation with environments using domain-specific languages, such as Matlab. To speed up testbench simulations, routines written in C/C++ are frequently used. Sometimes, when simulation is still too slow, hardware acceleration may be necessary.

In the last two years, embedded systems have found their way into the FPGA domain, adding one more headache: how to test both software and hardware in a simulation environment not prepared for this task. Here we analyse sample solutions to the problems mentioned above, but first, the lessons of history.

When Xilinx released the first FPGA in 1985, the XC2064 chip and its 1,000-gate size seemed impressive. Probably no one predicted that by the year 2004 the size of an FPGA would be 10,000 times larger. As long as design size remained within the range of several thousand gates, schematic design entry and a good gate-level simulator were enough to create and verify the entire design. Hardware description languages started to sneak into schematic designs in the shape of HDL macros and, as designs migrated into tens of thousands of gates, they gained importance. By the time FPGAs reached 100,000 gates, HDLs were bound to displace schematic entry and gate-level simulators.

The two most important factors were:

  • The impossibility of managing all-schematic designs at this level of complexity,
  • The necessity to synthesise HDL macros before gate-level simulation.

Although HDL simulators had been available since the late 1980s, the lack of efficient HDL synthesis tools prevented wider adoption of an HDL-only FPGA design flow. When the speed of HDL simulation started to approach that of gate-level simulation, synthesisers became more efficient, and schematic tools turned into block diagram editors able to generate HDL netlists, it was time to switch FPGA design flows to HDLs.

VHDL and Verilog were quickly joined by traditional programming languages (C/C++) and domain-specific languages (Matlab). In the following sections we demonstrate how an FPGA designer can deal with the challenges created by this diversity.

VHDL was the first hardware description language that gained popularity in the FPGA design world. When the size of FPGAs started to grow, Verilog solution providers working mainly in the ASIC domain realised the opportunity to enter the FPGA market. Right now both VHDL and Verilog are used in large FPGA designs.

The first HDL simulators usually dealt with only one language. When two languages had to be handled in one design, co-simulation using separate VHDL and Verilog simulators was the obvious solution. However, frequent data exchange between separate simulation engines can have a negative effect on the performance of the entire design simulation. That is the main reason why single-kernel simulators are now the most popular verification tools.

Although there are differences in scheduling mechanisms used in Verilog and VHDL simulations, similarities prevail. It is possible to create one simulation engine (kernel) that meets the requirements of both hardware description languages. When paired with matching compilers and elaborators, a single kernel simulator creates the optimum environment for verification of mixed language designs. Benefits are:

  • The use of one simulation engine means that designers don't have to struggle with configuring multiple tools to co-simulate properly.
  • The growing size of designs creates pressure to increase speed and reduce resource usage during simulation; a single kernel makes any kind of simulation optimisation easier than separate kernels.
  • A single kernel simulator can easily be turned into a VHDL-only or Verilog-only simulator via licensing options, eliminating the need for the software vendor to maintain multiple tools.

Single kernel simulators supporting Verilog and VHDL are very popular and should be the first choice for anybody working in a mixed-language environment.

LANGUAGE INTERFACE IN HDL SIMULATION

Large FPGA designs usually need advanced verification algorithms. Some of those algorithms, even if they can be implemented in VHDL or Verilog, do not simulate efficiently in the HDL environment. That is why modern simulators provide an interface to routines written in traditional programming languages. Typical applications include:
  • Encoding functions without native support in HDLs (e.g. trigonometric functions in Verilog).
  • Accessing functions of the operating system.
  • Accessing hardware devices (logic analysers, data collection units, etc.)

From its very beginning, VHDL has provided open access to programming language routines via foreign architectures and subprograms. This approach enables a very efficient connection between the simulator and user-written routines, but requires excellent knowledge of the simulator's application program interface (API). Even if developers have no problems with the use of a given simulator's API, the chances are that whatever works now will not be portable to other simulation platforms.

Verilog took a slightly different approach. Its standard contains a description of a C language procedural interface, better known as the programming language interface (PLI). We can treat PLI as a standardised simulator API for routines written in C or C++. The most recent extensions to PLI are known as the Verilog Procedural Interface (VPI); the solution enabling a similar interface between VHDL and C/C++ is in the final stage of development and is called VHPI (VHDL Procedural Interface).
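As a concrete illustration of the PLI/VPI mechanism, consider the minimal sketch below, written in C against the standard vpi_user.h header defined by the Verilog standard. It registers a hypothetical $my_sin system function that computes a sine value in plain C, matching the trigonometric-function use case mentioned earlier; the function name and file organisation are illustrative only, and the way the compiled shared library is loaded into the simulator remains tool-specific.

/* Minimal VPI sketch (illustrative only): a $my_sin system function
   returning sin(x) as a real value. Assumes a simulator supporting the
   IEEE 1364 VPI and providing the standard vpi_user.h header. */
#include <math.h>
#include "vpi_user.h"

static PLI_INT32 my_sin_calltf(PLI_BYTE8 *user_data)
{
    (void)user_data;                                   /* not used in this sketch */
    vpiHandle call = vpi_handle(vpiSysTfCall, NULL);   /* handle to the $my_sin call */
    vpiHandle args = vpi_iterate(vpiArgument, call);   /* iterator over the arguments */
    vpiHandle arg  = vpi_scan(args);                   /* first (and only) argument */
    s_vpi_value val;

    val.format = vpiRealVal;
    vpi_get_value(arg, &val);                          /* read the argument as a real */
    vpi_free_object(args);                             /* release the iterator */

    val.value.real = sin(val.value.real);              /* compute the result in plain C */
    vpi_put_value(call, &val, NULL, vpiNoDelay);       /* return the function result */
    return 0;
}

static void my_sin_register(void)
{
    s_vpi_systf_data tf = {0};
    tf.type        = vpiSysFunc;                       /* a system function, not a task */
    tf.sysfunctype = vpiRealFunc;                      /* it returns a real value */
    tf.tfname      = "$my_sin";
    tf.calltf      = my_sin_calltf;
    vpi_register_systf(&tf);
}

/* Simulators scan this table when the compiled library is loaded. */
void (*vlog_startup_routines[])(void) = { my_sin_register, NULL };

In the Verilog source the new function is then called like any built-in system function, for example y = $my_sin(x); the C file is compiled and linked into a shared library that the simulator loads through its usual PLI/VPI loading mechanism.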

PLI and VHPI give design and verification engineers developing C/C++ routines a mechanism that shields them from the low-level details of simulator operation that are irrelevant to the verified design's functionality. Since PLI (or VHPI) is standardised, both the C code and the matching PLI calls should be much easier to port between different simulation platforms. But one non-standard area still remains: the connection of the PLI (VHPI) engine with the simulation kernel.

The procedures involved here vary dramatically between different simulators and may look like black magic to designers who are not professional C/C++ programmers.

Fortunately, a little goodwill on the part of the simulator vendor can eliminate this last hurdle. A small applet or wizard (like the one shown in Figure 1) should be able to create the low-level interface files.

A user preparing C code for connection with the simulator has to fill in several simple fields related only to the C code being connected and the PLI/VHPI routines that have to be used. After completion of the wizard, two .cpp files are created. One contains all the low-level routines required to connect the simulator with the PLI/VHPI engine and does not have to be modified by the user. The other contains placeholders for both the pure C functions and the related PLI/VHPI routines. After entering their code, the user compiles and links both files, receiving a dynamically linked library that can be used during simulation.

CO-SIMULATION WITH DOMAIN SPECIFIC LANGUAGES

Frequently, HDLs are not the best choice for the initial description of a digital system. If the design has to implement advanced mathematical operations, Matlab is a very convenient environment for quick verification of ideas. For many DSP designs using algorithms published in C, a toolset similar to Celoxica's DK2 with Handel-C support will be the best choice.

In both cases we are dealing with domain-specific languages used for the description of the design. Once the initial description is verified in its native form, the designer faces the task of implementing that description in hardware. Some solutions translating domain-specific language descriptions directly into a vendor-specific netlist may exist, but the traditional approach involves a gradual translation of the original files to HDLs. The key issue here is maintaining design integrity during DSL-to-HDL translation. Of course, various co-simulation solutions exist, but the effort required to make them work may discourage designers.

Let's consider the case of converting one block of a Matlab description of the design to a VHDL design unit. Once the designer has a VHDL model with functionality identical to the original Matlab description, they need to create an interface between the data types of both environments. For every port of the VHDL entity they have to specify at least a typecast (a pair of matching data types in VHDL and Matlab). If the port happens to be a vector, there are several additional tasks: specifying the number of bits in the integer and fractional parts, the quantisation method and the overflow handling mechanism.
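To make those vector-port parameters more concrete, the short C sketch below (purely illustrative, not taken from any vendor tool) quantises a real-valued Matlab-side sample into a signed fixed-point code with a given number of integer and fractional bits, using round-to-nearest quantisation and saturation as the overflow handling mechanism.

/* Illustrative sketch only: mapping a real-valued sample to a signed
   fixed-point word with int_bits integer and frac_bits fractional bits,
   round-to-nearest quantisation and saturation on overflow. */
#include <math.h>
#include <stdint.h>

int32_t quantise_fixed(double x, int int_bits, int frac_bits)
{
    int    total_bits = int_bits + frac_bits;            /* width of the VHDL vector */
    double scaled     = x * pow(2.0, frac_bits);         /* shift into fixed-point scale */
    double rounded    = floor(scaled + 0.5);             /* round-to-nearest quantisation */
    double max_code   =  pow(2.0, total_bits - 1) - 1.0; /* largest signed code */
    double min_code   = -pow(2.0, total_bits - 1);       /* smallest signed code */

    if (rounded > max_code) rounded = max_code;          /* saturate on positive overflow */
    if (rounded < min_code) rounded = min_code;          /* saturate on negative overflow */
    return (int32_t)rounded;
}

For example, with int_bits = 4 and frac_bits = 4 the value 1.37 maps to the 8-bit code 22, i.e. 1.375 after quantisation; the same choices have to be made consistently on the Matlab and VHDL sides for the co-simulation data to match.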

Then there is the task of convincing Matlab's Simulink that the VHDL descriptions are ready for co-simulation. Fortunately, Matlab provides a convenient black-box mechanism.

Once the black-box is created, and before starting co-simulation in Simulink, the designer may have to adjust some additional parameters. When co-simulation is running, the Scope block from the Matlab environment can be used to visualise native Matlab signals and the ports of the black-boxes; to observe internal black-box signals it is necessary to use an HDL simulator.

It is important to note here that Matlab provides an open method of adding blocksets for co-simulation; the actual blockset creation is the task of the user connecting his or her simulator. Good HDL simulators should provide an automated method of generating Simulink blocksets. In the Windows environment these will usually take the form of wizards.

Figure 2 presents a sample solution: a wizard is started for each HDL module that should have its own black-box for co-simulation. Upon completion of all wizard sessions specifying a common output directory, a Simulink blockset is created automatically in the specified location.

SPEEDING-UP SIMULATIONS

As the size of an FPGA design grows, the decrease in pure HDL simulation performance becomes noticeable. When verification procedures take hours to execute, it is time to think about hardware acceleration. ASIC designers had been implementing hardware acceleration of HDL simulation for some time before FPGA designers were forced to follow in their footsteps. One important difference between accelerating ASIC and FPGA simulations is that while there is no target silicon available yet when an ASIC design is verified, the FPGA designer has access to the target silicon all the time – it just requires programming. Consequently, ASIC designers have to use costly emulators, but FPGA designers can get similar results using a prototyping board.

FPGA designers can use two methodologies for speeding up their simulations:

  • EMULATION assumes that the entire design is synthesised and implemented, then pushed into the FPGA on a hardware board connected to the computer where the HDL simulator is installed. During verification, the HDL simulator provides stimulus for the design running in hardware, reads the design's response and processes the received data (a conceptual sketch of this loop follows the list).

    While this methodology assures the maximum verification speed permitted by the board and its interface with the computer, design visibility may be insufficient for advanced debugging.

  • ACCELERATION assumes that only a part of the design is pushed into hardware; the rest is kept in the HDL simulator environment and co-simulates with the hardware part. Efficient communication protocols between the board and the simulation kernel are required to maintain a significant increase in verification speed, but even when they are available, the wrong selection of design modules pushed into hardware may nullify the gains introduced by acceleration. That's why profiling of the design being verified is essential: modules occupying a significant portion of simulation time should be pushed into hardware first.

    Although acceleration is slower than emulation, it is easier to implement when a high level of design visibility is required during verification. FPGA designs with a high percentage of original HDL code will probably benefit more from acceleration. Designs with heavy use of IP cores and previously created and verified modules may only need emulation.
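The emulation loop mentioned above can be sketched conceptually in C. The board-access stand-ins below (board_write_stimulus, board_read_response) and the trivial +1 "design" are entirely hypothetical placeholders for whatever interface and design a particular prototyping board setup provides; the sketch only shows the stimulus/response cycle driven from the host side.

/* Conceptual sketch only: the stimulus/response cycle a host-side environment
   performs against a design programmed into a prototyping board. The board_*
   functions are hypothetical stand-ins for a real board interface
   (PCI, USB, Ethernet, ...); here they simply model a +1 circuit. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

static uint32_t last_stimulus;                                          /* fake board state */
static void     board_write_stimulus(uint32_t v) { last_stimulus = v; } /* drive design inputs */
static uint32_t board_read_response(void)        { return last_stimulus + 1; } /* read outputs */

/* Reference model used to check the response read back from the board. */
static bool response_ok(uint32_t stimulus, uint32_t response)
{
    return response == stimulus + 1;
}

int main(void)
{
    int errors = 0;
    for (uint32_t cycle = 0; cycle < 100; cycle++) {
        board_write_stimulus(cycle);               /* provide stimulus to the hardware */
        uint32_t response = board_read_response(); /* read the design response */
        if (!response_ok(cycle, response)) {       /* process the received data */
            printf("Mismatch in cycle %u: got 0x%08X\n", (unsigned)cycle, (unsigned)response);
            errors++;
        }
    }
    printf("Emulation run finished with %d error(s)\n", errors);
    return errors != 0;
}

In an acceleration flow the same kind of cycle runs for every transaction exchanged between the simulation kernel and the board, which is why the efficiency of the communication protocol dominates the achievable speed-up.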

THE FPGA CHALLENGES

Changes in the methods of creating and verifying FPGA designs have been evolutionary rather than revolutionary. But once the size of FPGAs became large enough to place an entire microprocessor inside, revolution had to come.

The nature of an SoC is dramatically different from a traditional, hardware-only FPGA design: the system software running on the embedded microprocessor is an integral part of the system, not just a way of designing it. Traditional flows used in FPGA development or software development always leave some part of the SoC unverified. Of course, it is possible to develop the system hardware and system software independently, verifying the system after the prototyping stage.

This approach has several important flaws:

  • The design verification cycle is longer (each error detected on the hardware side requires re-creation of the prototype)
  • Visibility of the design during verification may be insufficient
  • Hardware designers are forced to use slow MPU models during simulations
  • System software developers are using inaccurate C models of hardware.

There are two promising solutions that address at least some of the problems mentioned above: SystemC and SystemVerilog. Both have interesting features, but both are still in the development stage. There are some success stories describing projects developed using both solutions, but they come from big design houses dealing with ASICs or even discrete systems. The question is: what does an FPGA designer have to do if he or she works on an SoC project right now? There are several systems that integrate existing solutions to provide an environment for the FPGA designer.

In conclusion, designers face many different challenges while working on their projects. Fortunately, there are many solutions to choose from, both currently available and in development. We should expect more powerful, user-friendly tools that will help designers meet the new challenges that will inevitably appear as the size of FPGAs grows.
