TLM-Based Verification Finds Strength In Standards

June 22, 2010
Transaction-level modeling has proved a valuable tool for verification and debugging. Learn how standards such as OSCI's TLM 2.0 and Accellera's SCE-MI have helped usher TLM-based verification into the mainstream for SoC design teams.

Figures: rapid prototyping system, HAPS-60 systems, UMRbus architecture, SCE-MI 2.0 transaction interface, TLM-2.0 transactor adapter, Synopsys use-model table.

In design automation, as in other areas of the electronics industry, technologies and methodologies find broad acceptance through standardization. For example, the widespread adoption and eventual standardization of the Verilog hardware description language helped make RTL synthesis viable in the late 1980s.

In the verification realm, new and emerging standards are behind the broadening acceptance of transaction-based verification methodologies. Standards like the Open SystemC Initiative's (OSCI's) TLM (transaction-level modeling) 2.0 and Accellera's Standard Co-Emulation Modeling Interface (SCE-MI) have led to a groundswell of interest in transactions. In addition, flows now use hardware acceleration and emulation to give transaction-based verification a turbo boost in speed.

Why Use Transactions?

In modeling system designs, there are usually three goals to be achieved, says Ran Avinun, group director of system design and verification product management at Cadence Design Systems. “One is early software development, a second is early system definition, and a third is the creation of an executable specification, which you want initially for architectural tradeoffs,” Avinun says.

Where do transactions fit in, and why would a designer want to start from transaction-level models and eventually move those models into hardware acceleration? For many users, the answer lies in much faster simulation. “If you write models as TLMs, or you communicate through transaction-based verification, simulation runs faster,” says Avinun.

Another benefit of using TLMs is faster and easier debugging. “In general, if you write TLMs, you generate fewer bugs and spend less time debugging problems. It also gives you an opportunity to distinguish between functionality and implementation,” says Avinun.

“You want to write a model that represents functionality, and then separate the constraints. Those can be clocking constraints, or process node-specific things that might change over time. It’s easier to reuse models as you move from application to application or node to node,” Avinun says.
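
As a rough illustration of that separation (a minimal sketch; the peripheral, register map, and delay value are hypothetical, not drawn from any of the vendors quoted here), a block's functional behavior can live in plain method calls while its timing stays in a separate annotation that can change from node to node without touching the logic:

```cpp
// Minimal sketch of a functional (untimed) transaction-level model.
// The behavior lives in read()/write(); timing is a separate annotation that
// can be tightened or retargeted later without touching the functionality.
#include <cstdint>
#include <map>

struct TimerModel {
    // Functional view: a register file accessed by address.
    void write(uint32_t addr, uint32_t data) { regs[addr] = data; }
    uint32_t read(uint32_t addr)             { return regs[addr]; }

    // Timing view, kept apart from the functionality; a loosely timed wrapper
    // can apply this delay, while an untimed testbench simply ignores it.
    static constexpr unsigned kAccessDelayNs = 10;  // hypothetical constraint

private:
    std::map<uint32_t, uint32_t> regs;
};
```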

How Transactions Are Used

At least five use models have become predominant when it comes to TLMs, according to Frank Schirrmeister, director of product marketing for system-level solutions at Synopsys (see the table). At the top of the list is a reuse scenario.

In such cases, you would already have a good deal of your design in RTL. Here, the best solution is a mixed-mode simulation methodology in which the existing RTL is run on FPGAs. Meanwhile, TLMs for new blocks within the design are run as a virtual prototype.

Second on the list is a verification use model. In this scenario, you start with a testbench and then develop a virtual prototype before you have RTL available. This is accomplished with untimed TLMs.

“People start with untimed models to define the verification scenarios they want to cover,” says Schirrmeister. “Can my phone receive a call while I play a game and download something? You can easily define those early on with the virtual platform because they are in software running on the processor. You then use them later on in the project.”

A third use model is in evaluating the connections between the system and the outside world. These connections can be in the form of physical or virtual I/O.

“For USB, for example, you want to connect to a real-world interface at full fidelity. But if the interface doesn’t exist yet, you can hook it up in virtual fashion so you can start software development,” says Schirrmeister.

While USB is an effective example, Schirrmeister also cites cases of design teams using this methodology to hook up air interfaces into their cell phones, bridging the FPGA hardware to the software on the virtual side.

The fourth use model is for remote software development scenarios. To satisfy such requirements when the physical hardware doesn’t yet exist, the answer is an early software-development environment in the form of a virtual prototype. “In this scenario, you create a development environment in which the software developer doesn’t even need to know if he’s executing on an FPGA prototype or a virtual platform,” says Schirrmeister. This is, as he wryly puts it, the “keep the software developers out of the lab” approach.

A fifth and final use model is an even simpler software-development scenario involving hardware prototypes on FPGAs. “It turns out that FPGAs are not that great for running processors, because they’re focused more on DSP,” says Schirrmeister. Moving the processor model into software and connecting it to the hardware prototype gives a better speed balance, since the processor workload stays on the software side. Further, by abstracting away portions of the software, you get very fast execution.

Tools And Flows Evolving

There are three needs for TLMs to fulfill going forward. The two most obvious are embedded software development and design verification. “Verification engineers need simple, direct tests,” says Schirrmeister. Beyond that, comprehensive verification of systems-on-a-chip (SoCs) represented in TLMs will call for constrained-random test generation, checking monitors, and coverage collection. These techniques, already standard practice at RTL, will in time propagate into the TLM domain and virtual prototyping.
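
What that might look like at the transaction level is sketched below: a constrained-random generator produces bus transactions while a crude coverage model records which addresses have been exercised (the transaction fields and the coverage metric are illustrative, not tied to any particular tool):

```cpp
// Sketch: constrained-random transactions plus a simple coverage monitor,
// the kind of flow expected to migrate into the TLM domain.
#include <cstdint>
#include <cstdio>
#include <random>
#include <set>

struct BusTxn { uint32_t addr; uint32_t data; bool is_write; };

int main() {
    std::mt19937 rng(42);
    std::uniform_int_distribution<uint32_t> addr_dist(0x00, 0xFF);
    std::set<uint32_t> covered;   // crude coverage model: word addresses hit

    for (int i = 0; i < 1000; ++i) {
        BusTxn t{addr_dist(rng) & ~3u,              // constrain: word-aligned
                 static_cast<uint32_t>(rng()),
                 (rng() & 1) != 0};
        covered.insert(t.addr);
        // A checking monitor would compare the DUT's response against a
        // reference model here; omitted for brevity.
    }
    std::printf("address coverage: %zu of 64 word-aligned addresses\n",
                covered.size());
    return 0;
}
```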

A third vector for TLMs that has yet to coalesce is a direct link to implementation. “We call this flow ‘TLM to GDSII,’” says Schirrmeister. “There were two worlds in the past. One focused on virtual platforms and the other on high-level synthesis. We think that down the road, those two worlds will migrate into a single world.”

For many designers, as well as EDA vendors, the question is how to link between the virtual platform and high-level synthesis (HLS) flows.

“TLM has always tried to be the link between these worlds,” says Brett Cline, vice president of marketing and sales at Forte Design Systems. “The problem has been that the standards only considered verification and not synthesis. There were very basic things missing in the TLM specifications that are fundamental for hardware design. There was no specific reset mechanism, for example.”

Efforts within OSCI ultimately led to revisions of the TLM 1.0 standard that gave birth to TLM 2.0. “We took OSCI TLM 1.0 and put some extensions on it, doing the things you’d expect to need for synthesis,” says Cline. “TLM 2.0 is a more synthesis-aware standard that is very focused on bus-based systems.” TLM 2.0 includes a number of transaction application programming interfaces (APIs) for bus-based systems.
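
For reference, those bus-oriented APIs revolve around a generic payload object passed through blocking or non-blocking transport calls. The sketch below shows roughly what a loosely timed initiator looks like against the standard TLM-2.0 headers; the module name, socket name, and address are illustrative, and the socket would need to be bound to a matching target inside sc_main.

```cpp
// Minimal TLM-2.0 loosely timed initiator sketch (requires a SystemC/TLM-2.0
// installation, e.g. the Accellera reference implementation).
#include <cstdint>
#include <systemc>
#include <tlm>
#include <tlm_utils/simple_initiator_socket.h>

struct Initiator : sc_core::sc_module {
    tlm_utils::simple_initiator_socket<Initiator> socket;  // bus-style socket

    SC_CTOR(Initiator) : socket("socket") { SC_THREAD(run); }

    void run() {
        uint32_t data = 0xCAFEF00D;
        tlm::tlm_generic_payload trans;             // the standard bus payload
        sc_core::sc_time delay = sc_core::SC_ZERO_TIME;

        trans.set_command(tlm::TLM_WRITE_COMMAND);
        trans.set_address(0x1000);
        trans.set_data_ptr(reinterpret_cast<unsigned char*>(&data));
        trans.set_data_length(4);
        trans.set_streaming_width(4);
        trans.set_response_status(tlm::TLM_INCOMPLETE_RESPONSE);

        socket->b_transport(trans, delay);          // blocking transport call
        if (trans.is_response_error())
            SC_REPORT_ERROR("Initiator", "write transaction failed");
    }
};
```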

In Cline’s view, virtual platforms and HLS are separate because of the divide between verification engineers and implementation. “People came at it from two angles. You either were a verification person, looking at virtual platforms, or you were an implementation person and doing hardware design in SystemC,” Cline says. “Now the verification people have realized that TLMs represent a viable path to implementation without rewriting. Meanwhile, the implementation people were realizing how to get things into a system model that ran really fast.”

In the past, most vendors and users have used virtual platforms and/or HLS in isolation, says Schirrmeister. “TLM 2.0 was created to help with early software development and high-performance simulation, but with less thought about HLS. We are actually driving the TLM 2.0 standard to address HLS with a synthesizable subset. That’s the direction and vectors that industry needs to address,” he says.

Hardware Happenings

An important aspect of transaction-based verification is the hardware that enables extremely high-speed verification with TLMs. Recently, Synopsys launched its HAPS-60 series of rapid prototyping systems as part of its Confirma platform. Based on Xilinx Virtex-6 FPGAs, the HAPS-60 systems are the latest answer to the “build-versus-buy” decision that traditionally comes with rapid prototyping.

The series consists of three models: the HAPS-61 (one FPGA; up to 4.5-Mgate capacity), the HAPS-62 (two FPGAs; 9-Mgate capacity), and the HAPS-64 (four FPGAs; 18-Mgate capacity). In addition to doubling the capacity of the earlier HAPS-50 series, these systems sport performance of up to 200 MHz.

A high-level overview of the components of a Confirma rapid prototyping system (Fig. 1) starts with RTL design files, which go through synthesis. The design is then partitioned onto the rapid prototyping board. The system’s Confirma software performs this partitioning, and it is aware that it is targeting a HAPS board. The user can then instantiate the interfaces required to simulate the prototype as well as link the design into other environments as needed for co-simulation and transaction-based verification.

Older generations of rapid prototyping systems ran up against bandwidth limitations that stemmed from FPGA pin counts not keeping pace with the size and speed of designs. In the past, the answer to this issue was interconnect multiplexing, which serves as a stopgap but ultimately limits overall system performance.

The HAPS-60 systems get around these bandwidth limitations using automated high-speed time-division multiplexing. The system’s software automatically inserts time-division multiplexing logic rather than forcing users to insert it manually (Fig. 2, left). “Doing it the old way would require digging into the RTL design files,” says Doug Amos, business development manager of solutions marketing at Synopsys.

The automated approach results in a 1-Gbit/s data rate that’s coupled with automatic timing synchronization. This translates into up to seven times more effective pin bandwidth and an average system performance gain of 30% (Fig. 2, right).
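
As a purely illustrative model of the technique (the 8:1 ratio and bit packing here are assumptions for the example, not Synopsys’s implementation), time-division multiplexing packs several design signals into fast serial slots on one physical pin and unpacks them on the receiving FPGA:

```cpp
// Illustrative model of pin time-division multiplexing: several slow design
// signals share one fast physical pin, one bit per fast-clock slot. An 8:1
// ratio is assumed here purely for the example.
#include <cassert>
#include <cstdint>

// Transmit side: pack the parallel signals sent during one design-clock period.
uint8_t tdm_pack(const bool signals[8]) {
    uint8_t frame = 0;
    for (int slot = 0; slot < 8; ++slot)
        frame |= static_cast<uint8_t>(signals[slot] ? 1u : 0u) << slot;
    return frame;
}

// Receive side: unpack, relying on synchronized slot counters on both FPGAs.
void tdm_unpack(uint8_t frame, bool signals[8]) {
    for (int slot = 0; slot < 8; ++slot)
        signals[slot] = ((frame >> slot) & 1u) != 0;
}

int main() {
    bool in[8]  = {true, false, true, true, false, false, true, false};
    bool out[8] = {};
    tdm_unpack(tdm_pack(in), out);
    for (int i = 0; i < 8; ++i)
        assert(in[i] == out[i]);   // the values survive the shared pin
    return 0;
}
```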

The inclusion of the UMRbus architecture makes the HAPS-60 systems particularly applicable to transaction-based verification (Fig. 3). The UMRbus is a high-performance, low-latency communication bus that affords connections to all onboard FPGAs, memories, registers, and other resources.

“The UMRbus is used for overall board control,” says Amos. It enables remote access to the entire system for configuration and monitoring purposes. Numerous design interaction and monitoring features (Fig. 3, right) are included. “The user can control the design, access it, augment it, read back memories, and debug it,” says Amos.

The UMRbus also enables a number of advanced modes, including transaction-based verification and co-simulation (Fig. 3, left). “Users can write programs to implement the various design interaction and monitoring features,” says Amos. The systems feature a number of host-based debug modes that have traditionally been associated with emulation.

When it comes to transaction-based verification, the HAPS-60 systems can vastly reduce verification time through use of the SCE-MI 2.0 transaction interface (Fig. 4). “This is what SCE-MI was developed for,” says Amos. “The SCE-MI interface lets us have the software perform transactions, pass them into hardware, and the hardware recreates the transaction. This technology is used in emulator-type environments to mimic what the real world would do.”
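
The software side of such a transaction interface is typically a thin proxy routine. In SCE-MI 2.0’s function-based use model, the call crosses into the hardware through the SystemVerilog DPI; the sketch below assumes a hypothetical exported SystemVerilog function, drive_axi_write, inside a BFM instance at top.xtor, so the names are illustrative rather than taken from the HAPS or SCE-MI documentation.

```cpp
// Sketch of the C++ (software) side of a DPI-based transactor proxy, in the
// spirit of the SCE-MI 2.0 function-call use model. The exported SystemVerilog
// function drive_axi_write() and the scope "top.xtor" are hypothetical.
#include <cstdint>
#include "svdpi.h"   // standard SystemVerilog DPI header

// Prototype of the (hypothetical) function exported by the HDL-side BFM:
//   export "DPI-C" function drive_axi_write;
extern "C" void drive_axi_write(int addr, int data);

// The untimed testbench calls this; the DPI call crosses into the hardware
// transactor, which expands the message into cycle-accurate pin activity.
void write_word(uint32_t addr, uint32_t data) {
    svSetScope(svGetScopeFromName("top.xtor"));   // select the BFM instance
    drive_axi_write(static_cast<int>(addr), static_cast<int>(data));
}
```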

The HAPS-60 systems now enable this kind of emulator-style methodology to be implemented on a rapid-prototyping system. SCE-MI allows high-level concepts to be used in this prototyping space. “This system blurs the line between prototyping and emulators, and SCE-MI is an enabler for this,” says Amos. The result is performance that can be up to 10,000 times faster than simulation when a simplified testbench is run on the HAPS hardware.

Support For TLM 2.0

Another vendor of hardware that supports transaction-based verification, EVE, recently added support for the TLM 2.0 standard to its ZeBu line of emulation platforms. TLM 2.0 is OSCI’s interface standard for SystemC model interoperability and reuse. “It’s more like transaction-based co-emulation for us, given that we bring an emulator into the picture,” says Lauro Rizzatti, general manager of EVE-USA.

EVE has implemented support for TLM 2.0 through a transactor adapter (Fig. 5). The adapter supports multiple targets and initiators, blocking and non-blocking transport interfaces, and the loosely timed (LT), loosely timed with temporal decoupling (LTD), and approximately timed (AT) coding styles.

At the system level, users can integrate the TLM-2.0 transactor adapter with virtual platforms as well as with advanced SystemVerilog hardware verification environments. At the emulator level, the ZeBu TLM-2.0 transactor adapter is an open architecture that interoperates with other ZeBu transactors, whether taken from EVE’s transactor catalog or created with ZEMI-3, EVE’s behavioral SystemVerilog compiler for transactor bus-functional models (BFMs). ZEMI-3 is designed to make it easy to write cycle-accurate BFMs that exchange messages with a C++ or SystemVerilog testbench.
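
To make the structure of such an adapter concrete, here is a minimal, generic sketch of a TLM-2.0 target that accepts blocking transport calls from a SystemC virtual platform and would relay them toward an emulator-side transactor. This is not EVE’s implementation; the forwarding call is a hypothetical placeholder.

```cpp
// Generic TLM-2.0 target adapter sketch: accepts blocking transport calls and
// hands them to an emulator-side transactor. forward_to_emulator() is a
// hypothetical placeholder, not EVE's actual transactor API.
#include <cstdint>
#include <cstdio>
#include <systemc>
#include <tlm>
#include <tlm_utils/simple_target_socket.h>

// Placeholder for the call that would cross into the emulator transactor.
static void forward_to_emulator(bool is_write, uint64_t addr, unsigned len) {
    std::printf("%s 0x%llx (%u bytes)\n", is_write ? "WRITE" : "READ",
                static_cast<unsigned long long>(addr), len);
}

struct TlmToEmuAdapter : sc_core::sc_module {
    tlm_utils::simple_target_socket<TlmToEmuAdapter> socket;

    SC_CTOR(TlmToEmuAdapter) : socket("socket") {
        // Loosely timed path; an AT-style adapter would also register
        // nb_transport_fw and run the four-phase protocol.
        socket.register_b_transport(this, &TlmToEmuAdapter::b_transport);
    }

    void b_transport(tlm::tlm_generic_payload& trans, sc_core::sc_time& delay) {
        forward_to_emulator(trans.get_command() == tlm::TLM_WRITE_COMMAND,
                            trans.get_address(), trans.get_data_length());
        trans.set_response_status(tlm::TLM_OK_RESPONSE);
        delay += sc_core::sc_time(10, sc_core::SC_NS);  // nominal latency
    }
};
```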

According to Rizzatti, 70% to 80% of EVE customers use ZeBu in transaction-based mode. “They might also drive it through simple C calls in cycle-based mode, as opposed to transaction mode. But even if they do that, they still use transaction mode for the benefits it affords,” he says.

The introduction of TLM 2.0 support takes EVE’s emulators a step further into interoperability, says Ron Choi, EVE’s director of marketing. “For years we had a transaction-level interface. But it was always through a specific API. That was a perfectly usable methodology but there is now more desire for a standards-based approach,” he says.

The TLM 2.0 transactor adapter solves the problem of designers having to write different code to interface between different products. “In general, ESL tools have always had the ability to connect to an RTL simulator through a programming-language interface (PLI) and to an emulator through an API calling C/C++ functions,” says Rizzatti. “That requires them to write wrappers around different interfaces. They had to write in their own interoperability. A better approach is to use TLM 2.0, which defines the interoperability layer so it shields the user from lower-level implementation. In this way, it doesn’t matter if they use SystemC models.”
