Increase Visibility Into FPGA-Based Prototypes

July 19, 2007
Auto-interactive visibility enhancement technology facilitates the analysis and debug of FPGA-based prototypes.

One of the most significant challenges facing verification teams using prototypes based on field-programmable gate arrays (FPGAs) is understanding the prototyped system’s internal behavior when it fails to perform as expected. A key factor with regard to analyzing and debugging these designs is the difficulty in observing internal signals.

Today’s state-of-the-art FPGAs provide tremendous capabilities with regard to both capacity and performance. Members of the Xilinx Virtex-5 family, for example, can contain hundreds of thousands of logic cells that may be configured as logic, RAM, or shift registers. Furthermore, this programmable logic can be used with hard IP blocks, such as megabits of RAM and hundreds of 25-by-18 multiplier/DSP functions, all running at up to 550 MHz.

These devices, which may also include multiple hard and/or soft processor cores and associated peripherals, can be used as powerful prototyping platforms for ASIC and system-on-a-chip (SoC) components.

New tools, improved methodologies, and higher levels of abstraction are helping engineers to experiment with different macro- and micro-architectures, as well as increase their overall design productivity.

In terms of verification, the sheer size and complexity of these designs, coupled with their dramatically increasing software content, make FPGA prototyping an appealing option, both to increase verification throughput via hardware acceleration and to provide an early software-development platform. However, successful prototyping requires due consideration of what happens when the device doesn’t operate as expected and the engineer must debug.

As was previously noted, a key factor with regard to analyzing and debugging prototyped designs is the difficulty in observing internal signals. The problem is that there may be tens of thousands of these signals, but only a limited number of input/output (I/O) pins on the device by which these signals may be exposed to the outside world.

Furthermore, the act of observing internal signals impacts both design and verification. Selecting the appropriate signals to monitor is a non-trivial task, and modifying the design to observe these signals consumes engineering and FPGA resources. Also, it takes time to capture, dump, and record any signal values that are being observed.

Depending on the approach used, the tasks of accessing and analyzing signals internal to an FPGA can be complex, tedious, and time-consuming. Having said this, the overall process can be broken down into just five main steps:
1. Determine a set of signals to be observed.
2. Modify the design to observe the selected signals.
3. Observe and retrieve data while the FPGA is operating in-situ.
4. Map the retrieved data to the original RTL representation.
5. Compute data for additional signals that were not in the initial observed set.

This article first discusses the limitations of existing techniques with regard to performing these activities. Next, an emerging visibility-enhancement technology is introduced; this new approach couples the auto-interactive selection of a reduced set of signals to be observed with “data expansion” techniques that fill in the missing pieces, namely the unobserved signal values.

Limitations of Conventional Techniques

As just mentioned, locating, analyzing, and debugging problems in FPGAs using traditional approaches can be extremely tedious and time-consuming. The reasons for this can be summarized in brief.

The first step in the process is to decide which signals need to be observed (captured and dumped). But any increase in the number of signals to be observed increases the logic resources required to capture them and the time taken to convey their data values to the outside world. For both of these reasons, it’s possible to observe only a limited number of signals at any particular time (for any particular verification run, that is).

The problem here is that selecting the best signals to monitor is a non-trivial task. For example, a register that appears to be a prime candidate for monitoring may actually provide limited visibility into the design's operation. By comparison, a seemingly innocuous register may provide a great deal of visibility into the design.

Once a set of signals to monitor has been selected, the design must be modified so as to allow the signals to be observed directly, or to allow them to be captured and dumped to the outside world. In the broadest sense, this is referred to as Design-for-Debug (DFD). In the case of the former approach, the design may be augmented with multiplexers and control logic that can be used to present selected internal signals to the outside world via primary output pins. Generally speaking, implementations of this approach tend to be homegrown and ad hoc, and they require significant effort to gain limited insights as to what is happening inside the chip.

An alternative technique is to use internal logic analyzers (ILAs). These may be homegrown, but are more commonly provided (along with a corresponding configuration application) by the FPGA vendor or by a specialist third-party vendor. Each ILA is constructed using a combination of configurable logic cells and RAM blocks. The control logic for the ILA is designed in such a way as to allow a specified trigger condition (or combination of trigger conditions) to initiate the capture of one or more specified signals and to store attributes associated with these signals, such as data values and time stamps, in on-chip memory. At some stage, these values have to be dumped to the outside world. A common technique in this case is to use the chip’s JTAG port.
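To make the capture mechanism concrete, the following Python sketch models an ILA’s behavior at a very high level: a trigger condition arms a fixed-depth buffer that records time-stamped samples of the probed signals. The class, signal names, and buffer depth are purely illustrative and are not tied to any vendor’s tool.

```python
from collections import deque

class SoftILA:
    """Toy software model of an internal logic analyzer: a trigger condition
    arms a fixed-depth buffer that records time-stamped samples."""

    def __init__(self, probes, trigger, depth=1024):
        self.probes = probes        # signal names to capture
        self.trigger = trigger      # function(sample_dict) -> bool
        self.buffer = deque(maxlen=depth)
        self.triggered = False

    def clock(self, cycle, sample):
        """Call once per captured clock cycle with {signal_name: value}."""
        if not self.triggered and self.trigger(sample):
            self.triggered = True
        if self.triggered:
            self.buffer.append((cycle, {p: sample[p] for p in self.probes}))

# Hypothetical usage: trigger on a FIFO overflow flag, then record two pointers.
ila = SoftILA(probes=["fifo_wr_ptr", "fifo_rd_ptr"],
              trigger=lambda s: s["fifo_overflow"] == 1)
```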

Designing your own ILAs takes time and effort. In fact, it can be difficult to determine whether you’re debugging the design itself or the ILA. Even when using robust and proven ILAs from the FPGA vendor, the design still needs to be recompiled every time a new set of signals is selected for monitoring. Recompilation can take hours, so it’s desirable to minimize the number of times this task needs to be performed.

Following design modification and recompilation, a verification run is performed and data from the internal signals is captured. In order for this data to be usable by downstream debug tools, it must contain specific attributes. In addition to the logic values themselves, the data must include the full hierarchical instance name of each signal, along with the relative operational time (time stamps) of each data transition. Also, the dumped data’s file format should be an industry standard, such as VCD or FSDB.
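As a rough illustration of what such a dump contains, here is a minimal Python sketch that writes captured samples to a bare-bones VCD file. It assumes flat, full hierarchical signal names and integer values; real dump utilities also handle scopes, four-state values, and compressed formats such as FSDB.

```python
def write_vcd(path, timescale, signals, samples):
    """signals: {hierarchical_name: bit_width};
    samples: list of (time, {hierarchical_name: int_value})."""
    ids = {name: chr(33 + i) for i, name in enumerate(signals)}  # short VCD ids
    with open(path, "w") as f:
        f.write(f"$timescale {timescale} $end\n")
        for name, width in signals.items():
            f.write(f"$var wire {width} {ids[name]} {name} $end\n")
        f.write("$enddefinitions $end\n")
        last = {}
        for t, values in samples:
            f.write(f"#{t}\n")
            for name, val in values.items():
                if last.get(name) != val:        # record transitions only
                    f.write(f"b{val:0{signals[name]}b} {ids[name]}\n")
                    last[name] = val

# Hypothetical two-sample dump of a 3-bit state register.
write_vcd("capture.vcd", "1ns", {"top.u_ctrl.state": 3},
          [(0, {"top.u_ctrl.state": 1}), (10, {"top.u_ctrl.state": 4})])
```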

In the case of proprietary solutions, it may be necessary to add these attributes into the signal data stream and/or to translate in-house formats into their industry-standard counterparts. Fortunately, ILAs supplied by the FPGA vendors and specialist vendors typically capture the required data and use industry-standard formats.

The data gathered from ILAs is usually associated with the gate-level view of the FPGA. Designers, however, are more familiar with the design’s RTL representation. Thus, to facilitate the debug process, it’s necessary to map the gate-level instances into the RTL view. This isn’t as simple as it sounds, because in many cases there isn’t a one-to-one correspondence between the gate-level instances and the RTL view. Many conventional and in-house solutions fail to provide this capability.

Following a verification run, it’s invariably necessary to access and analyze additional signals in order to track the problem down. When using a conventional design flow, designers must return to the first of the five steps enumerated earlier. That is, they have to select a new set of signals, modify the design and recompile it, perform a new verification run, map the new data to the RTL, and then analyze the results. This is a process that must be repeated many times.

A Visibility-Enhancement Approach

To address the limitations of traditional FPGA prototype debug environments, an approach has emerged that provides enhanced visibility into the design’s inner workings. To be fully effective, visibility-enhancement tools and techniques must be applied to every step in the flow.

As before, the first step in the process is to decide which signals need to be observed. Based on the erroneous outputs being exhibited by the system, designers usually have a “feel” for one or more functional blocks of interest, such as the memory controller and/or bus-arbiter blocks.

As a rule of thumb, one needs to be able to observe approximately 15% of the signals internal to a block (usually registers, internal memory locations, and primary inputs/outputs to the block). This will provide 95% to 100% visibility in the context of the automatic data-expansion techniques discussed later in this section.
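As a back-of-the-envelope illustration of how that rule of thumb interacts with on-chip resource budgets, the small Python helper below estimates a probe count and trace-buffer size for a block; the single-bit-probe and fixed-depth assumptions are simplifications made purely for the sake of the example.

```python
def capture_budget(block_signals, observe_fraction=0.15,
                   trace_depth=2048, bits_per_probe=1):
    """Estimate probes needed for the ~15% rule of thumb and the block-RAM
    bits a fixed-depth trace of those probes would consume."""
    probes = int(block_signals * observe_fraction)
    return probes, probes * bits_per_probe * trace_depth

# Example: a 10,000-signal block needs roughly 1,500 probes and ~3 Mbits of trace RAM.
print(capture_budget(10_000))   # (1500, 3072000)
```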

Unfortunately, resource limitations may not permit capture of all of these signals. In this case, it’s obviously preferable to select those signals that provide the best bang for the buck. As a result, visibility-enhanced signal selection includes the concept of “influence-ability,” or the amount of downstream logic each signal influences. Determining the minimum set of essential signals required to debug the selected blocks means analyzing the assertions, the RTL, and/or the gate-level netlist; in some cases all three must be examined to assess influence-ability. To debug assertion failures, for example, visibility-enhanced signal selection will analyze the design and the selected assertions to extract the minimal set of signals required to debug each assertion.
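To illustrate the influence-ability idea, the following sketch ranks candidate capture points by the size of their downstream (fan-out) cones in a netlist represented as a simple directed graph. The graph and signal names are hypothetical, and commercial tools use considerably more sophisticated cost functions.

```python
from collections import deque

def fanout_cone_size(netlist, start):
    """Count how many downstream signals are reachable from `start`.
    `netlist` maps each signal to the signals it drives."""
    seen, work = set(), deque([start])
    while work:
        sig = work.popleft()
        for dst in netlist.get(sig, ()):
            if dst not in seen:
                seen.add(dst)
                work.append(dst)
    return len(seen)

def rank_by_influence(netlist, candidates):
    """Order candidate capture points by the amount of logic they influence."""
    return sorted(candidates,
                  key=lambda s: fanout_cone_size(netlist, s),
                  reverse=True)

# Hypothetical fragment of a memory-controller block.
netlist = {"cmd_state": ["rd_en", "wr_en"], "rd_en": ["rdata_vld"],
           "wr_en": [], "rdata_vld": []}
print(rank_by_influence(netlist, ["cmd_state", "rd_en", "wr_en"]))
```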

Furthermore, if the designers explicitly define a set of signals they wish to observe (where such selection can be made in the RTL and/or gate-level netlist), the visibility-enhanced signal selection tool will automatically identify any registers, memory elements, and primary I/Os that must be captured in order to observe the specified internal signals.
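A complementary sketch of that backward traversal, again over a hypothetical netlist graph, shows how a tool might walk from a requested internal signal back to the registers, memory outputs, and primary inputs that actually need to be probed:

```python
def essential_capture_set(drivers, is_state_point, targets):
    """Walk the fan-in cones of `targets` back to registers/memories/primary I/Os.
    `drivers` maps each signal to the signals that drive it;
    `is_state_point(sig)` is True for registers, memory outputs, and inputs."""
    capture, visited, work = set(), set(), list(targets)
    while work:
        sig = work.pop()
        if sig in visited:
            continue
        visited.add(sig)
        if is_state_point(sig):
            capture.add(sig)                    # must be physically probed
        else:
            work.extend(drivers.get(sig, ()))   # keep walking the fan-in cone
    return capture
```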

Once a set of signals to monitor is selected, the visibility-enhancement environment will automatically communicate with the FPGA vendor’s and/or third-party tools to modify the design by adding the appropriate ILAs. In the event that there aren’t sufficient resources to capture all of the desired signals, the environment will base its selection on those signals deemed to have more influence, as discussed above.

When a verification run is performed, the visibility-enhanced environment will automatically record and/or provide any information required by downstream analysis and debug environments; this information will include logic values, the full hierarchical instance name of the signal, and the relative operational times of any data transitions. Also, the dumped data file will be in an industry-standard format, such as VCD or FSDB.

As noted earlier, the data gathered from ILAs is usually associated with the gate-level view of the FPGA. To understand what’s happening in such gate-level logic, one must be able to correlate the gate-level data back to the RTL representation of the design, or even to a system-level description.

Due to synthesis and optimization, however, not every signal in the gate-level representation will have a corresponding signal in the RTL representation. To address this issue, the visibility-enhancement environment must somehow localize signal correspondence. One technique is to automatically generate structural dependency graphs and employ approximate graph-matching algorithms. This approach imitates the process employed by humans, whereby one often locates corresponding areas by looking at registers in the fan-in and fan-out cones.
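The following simplified Python sketch hints at how such matching might work: each register is summarized by a crude structural signature (its fan-in and fan-out counts), and every gate-level register is paired with the RTL register whose signature is closest. Real correlation engines also exploit name fragments, hierarchy, and iterative neighborhood refinement; this is only an illustrative starting point.

```python
def signature(graph, reg):
    """Crude structural signature: (fan-in count, fan-out count) of a register."""
    return (len(graph["fanin"].get(reg, ())), len(graph["fanout"].get(reg, ())))

def match_gate_to_rtl(gate_graph, rtl_graph, gate_regs, rtl_regs):
    """Pair each gate-level register with the RTL register whose signature
    is closest (Manhattan distance between the two signatures)."""
    matches = {}
    for g in gate_regs:
        g_sig = signature(gate_graph, g)
        best = min(rtl_regs,
                   key=lambda r: abs(signature(rtl_graph, r)[0] - g_sig[0])
                                 + abs(signature(rtl_graph, r)[1] - g_sig[1]))
        matches[g] = best
    return matches
```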

Perhaps the most significant aspect of visibility enhancement is its on-the-fly data expansion capability (see the figure). This capability, however, relies on all of the previous points, especially visibility-enhanced signal selection. The signals to be observed are selected specifically to facilitate automatic data expansion.

Here’s the idea behind data expansion. Often, the designer may wish to display and analyze signals that weren’t in the captured set. Rather than modify the design and perform a new verification run, it’s preferable to interpolate the missing data. Consequently, the visibility-enhancement environment uses data expansion to fill in the gaps in the captured data.

In particular, such data expansion can populate the signals internal to blocks of combinational logic that sit between registers whose signals were captured. To maximize performance, the expansion is done on-the-fly, or dynamically, only for the logic under investigation rather than statically for all design logic. A comparison of a traditional design environment with its visibility-enhanced counterpart is illustrated in the table.
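A toy version of that expansion step, assuming one cycle of captured register values and a table of combinational functions evaluated in topological order, might look like the Python fragment below; the gate functions and signal names are invented for the example.

```python
def expand(captured, comb_gates, order):
    """Recompute unobserved combinational signals from captured register values.
    `captured`: {signal: value} from the ILA dump for one cycle.
    `comb_gates`: {output: (fn, [input signals])} for the logic under debug.
    `order`: outputs listed in topological order (inputs before outputs)."""
    values = dict(captured)
    for out in order:
        fn, ins = comb_gates[out]
        values[out] = fn(*(values[i] for i in ins))   # fill in the missing value
    return values

# Hypothetical cycle: two captured registers feed an unobserved AND/NOT pair.
captured = {"req_q": 1, "gnt_q": 0}
comb = {"busy": (lambda a, b: a & b, ["req_q", "gnt_q"]),
        "idle": (lambda a: 1 - a,    ["busy"])}
print(expand(captured, comb, ["busy", "idle"]))
# {'req_q': 1, 'gnt_q': 0, 'busy': 0, 'idle': 1}
```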

Visibility-enhancement technology can dramatically speed the process of locating, isolating, and understanding the causes of error symptoms in FPGA-based prototypes (similar techniques can be applied to FPGA-based emulation and software simulation).

In a typical design, registers account for approximately 20% of the signals. Using visibility-enhancement technology allows designers to use these signals as the basis for determining the values on the remaining 80% of the signals, which equates to an approximate fivefold increase in visibility. In turn, users of this technology report a fourfold reduction in debugging times. In other words, every hour spent debugging without visibility-enhancement technology can be reduced to only 15 minutes with this technology.

As for the future, the data-expansion capabilities provided by a visibility-enhanced environment provide the basis for using internal FPGA signal data in conjunction with advanced debugging techniques that are typically considered only in the context of software simulation. If the device contains sophisticated internal buses, for example, the expanded data could be viewed at the transaction level, thereby making it easier to understand the device’s operation. Careful integration of the data-expansion technique in the context of the debugger could provide reductions in both verification run time and the resulting captured data file sizes. Such an environment would empower automated guided debug along with advanced analysis and tracing capabilities.

Conclusion

One of the most significant challenges facing design and verification teams employing FPGA-based prototypes is to understand the internal behavior of the system when it fails to perform as expected. A visibility-enhanced verification and debug environment addresses this by:
• aiding in the selection of the signals to be observed
• working with (and negotiating with) the other tools to modify the design to capture the selected signals
• capturing all of the required data and attributes necessary to drive downstream tools
• using advanced techniques to automatically map between the system, RTL, and gate-level views
• performing data expansion to interpolate values for signals that weren’t captured
