Virtual Platforms And TLMs Going Mainstream

Dec. 27, 2011
EDA has begun to grow, and nothing within EDA is growing faster than the electronic system-level (ESL) segment. Here's what you can look for in ESL tools and methodologies in 2012.

Fig. 1. ITRS data shows that SoC complexity is fast outstripping the ability to add enough designers to fill the available gates in a given amount of silicon real estate. (courtesy of Calypto Design Systems)

Fig. 2. Software issues in consumer electronics will affect the entire supply chain. (courtesy of Synopsys)

Fig. 3. More design teams will use emulation systems to verify RTL subsystems, marrying TLM 2.0 representations with SystemVerilog testbenches. (courtesy of EVE-USA)

In 2011, Synopsys made the biggest splash in the EDA pool when it acquired Magma Design Automation. The teaming of these two companies may well result in some interesting doings in 2012 on the RTL-to-GDSII front.

Meanwhile, most eyes are on the front end of the design process as electronic system-level (ESL) tools and methodologies slowly but steadily make their way into the mainstream. As a whole, EDA is beginning to grow once again as a market segment.

Within EDA, the fastest growing segment is ESL, with vendors reporting revenues to the EDA Consortium of about $250 million over the last four quarters. The upswing in revenues points to increasing adoption of ESL tools and methodologies.

ESL Adoption On The Upswing

Several factors lie behind the growing interest in ESL among design teams. For one thing, ESL design flows and methodologies have begun to solidify somewhat.

“Historically, most ESL adoption has been in verification,” says Brett Cline, vice president of sales and marketing at Forte Design Systems.

Designers would write transaction-level models (TLMs) of their system hardware for early verification efforts. But that has been a very fragmented market, with little cohesion between high-level models and downstream tools and flows. Models written for one tool might not work with another, so much of the effort invested in high-level modeling was lost to the rest of the flow.

This is changing, however, in large part because of the TLM 2.0 standard, which goes a long way toward standardizing the interfaces between models. As a result, models will be better able to communicate with tools and with each other.
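
To make that concrete, here is a minimal sketch of the pattern TLM 2.0 standardizes: an initiator passes a generic payload through a socket to a target's b_transport method. The sockets, payload, and blocking-transport call are the standard TLM 2.0 API; the module names, register contents, and 10-ns latency are purely illustrative.

    #include <cstring>
    #include <iostream>
    #include <systemc>
    #include <tlm>
    #include <tlm_utils/simple_initiator_socket.h>
    #include <tlm_utils/simple_target_socket.h>

    // Target: a trivial memory-mapped register behind a TLM 2.0 socket.
    struct Target : sc_core::sc_module {
      tlm_utils::simple_target_socket<Target> socket;
      unsigned int reg = 0xCAFE;

      SC_CTOR(Target) : socket("socket") {
        socket.register_b_transport(this, &Target::b_transport);
      }

      // The blocking transport interface defined by the TLM 2.0 standard.
      void b_transport(tlm::tlm_generic_payload& trans, sc_core::sc_time& delay) {
        if (trans.get_command() == tlm::TLM_READ_COMMAND)
          std::memcpy(trans.get_data_ptr(), &reg, sizeof(reg));
        else
          std::memcpy(&reg, trans.get_data_ptr(), sizeof(reg));
        delay += sc_core::sc_time(10, sc_core::SC_NS); // modeled access latency
        trans.set_response_status(tlm::TLM_OK_RESPONSE);
      }
    };

    // Initiator: any model that speaks the generic payload can drive the target.
    struct Initiator : sc_core::sc_module {
      tlm_utils::simple_initiator_socket<Initiator> socket;

      SC_CTOR(Initiator) : socket("socket") { SC_THREAD(run); }

      void run() {
        unsigned int data = 0;
        tlm::tlm_generic_payload trans;
        sc_core::sc_time delay = sc_core::SC_ZERO_TIME;
        trans.set_command(tlm::TLM_READ_COMMAND);
        trans.set_address(0x0);
        trans.set_data_ptr(reinterpret_cast<unsigned char*>(&data));
        trans.set_data_length(sizeof(data));
        socket->b_transport(trans, delay); // the standardized call
        std::cout << "read 0x" << std::hex << data << std::endl;
      }
    };

    int sc_main(int, char*[]) {
      Initiator init("init");
      Target targ("targ");
      init.socket.bind(targ.socket);
      sc_core::sc_start();
      return 0;
    }

Because both sides speak the generic payload, the same initiator could be rebound to any compliant target model, which is precisely the interoperability the standard is after.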

Yet the fact remains that there is still no standardized output from virtual modeling to downstream tools, nor a standardized input to ESL synthesis. One might point to the synthesizable subset of SystemC, but not all tools handle the same subset. That’s in contrast to the mature days of Verilog, when all of the synthesis tools on the market could handle more or less the same inputs.

A Tour Through The ESL Landscape

Because high-level design encompasses a number of aspects, it is helpful to look at it in a segmented way. There are four major areas: the earliest stages of architectural exploration, the development of hardware blocks, software development, and system integration.

According to Frank Schirrmeister, senior director of product marketing in Cadence’s System and Software Realization group and one of Electronic Design’s Contributing Editors, trends are emerging in the first of those four areas, the pre-partitioning phase of system definition (see “The Next Level Of Design Entry—Will 2012 Bring Us There?”).

For one thing, more people are becoming interested in using UML or the MathWorks’ MATLAB language. These techniques can be used to describe functionality at a very high level without tying that functionality explicitly to either hardware or software.

The next step will be to connect these high-level descriptions of functionality either to software implementation or to hardware implementation by generating the shell of a SystemC model. “That’s the next level of high-level synthesis,” says Schirrmeister.
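
What such a generated shell might look like is sketched below, under the assumption of a simple streaming filter. The module name, ports, and TODO markers are hypothetical, standing in for whatever boundary a generator would emit from the UML or MATLAB description.

    #include <systemc>
    #include <tlm>
    #include <tlm_utils/simple_target_socket.h>

    // Hypothetical shell emitted from a high-level functional description.
    // Only the module boundary is generated; the behavior remains a stub.
    struct FilterBlock : sc_core::sc_module {
      tlm_utils::simple_target_socket<FilterBlock> ctrl; // control/config port
      sc_core::sc_fifo_in<int>  samples_in;              // streaming input
      sc_core::sc_fifo_out<int> samples_out;             // streaming output

      SC_CTOR(FilterBlock) : ctrl("ctrl") {
        ctrl.register_b_transport(this, &FilterBlock::b_transport);
        SC_THREAD(process);
      }

      void b_transport(tlm::tlm_generic_payload& trans, sc_core::sc_time&) {
        trans.set_response_status(tlm::TLM_OK_RESPONSE); // TODO: generated stub
      }

      void process() {
        while (true) {
          int s = samples_in.read();
          // TODO: behavior from the high-level description goes here
          samples_out.write(s);
        }
      }
    };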

The key to this kind of technology will be the fabric that connects all of the functional blocks, such as ARM’s AMBA fabric or the Open Core Protocol-International Partnership (OCP-IP) fabric. Once you begin connecting elements of the design across the fabric, you can begin analyzing bus traffic using an accurate representation of the fabric. Such techniques will become more critical as multicore architectures proliferate and cache-coherence issues mount.
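
A rough sketch of the idea, using a deliberately simplified single-initiator, single-target fabric: transactions are forwarded and tallied as they pass through. A real AMBA or OCP fabric model adds arbitration, routing, and accurate timing, but the monitoring hook looks much the same. The names and the 5-ns latency here are illustrative.

    #include <systemc>
    #include <tlm>
    #include <tlm_utils/simple_initiator_socket.h>
    #include <tlm_utils/simple_target_socket.h>

    // Illustrative fabric model that forwards transactions and
    // collects simple bus-traffic statistics along the way.
    struct Fabric : sc_core::sc_module {
      tlm_utils::simple_target_socket<Fabric>    in_socket;
      tlm_utils::simple_initiator_socket<Fabric> out_socket;
      unsigned long reads = 0, writes = 0, bytes = 0;

      SC_CTOR(Fabric) : in_socket("in"), out_socket("out") {
        in_socket.register_b_transport(this, &Fabric::b_transport);
      }

      void b_transport(tlm::tlm_generic_payload& trans, sc_core::sc_time& delay) {
        (trans.is_read() ? reads : writes)++;       // tally traffic by type
        bytes += trans.get_data_length();
        delay += sc_core::sc_time(5, sc_core::SC_NS); // modeled fabric latency
        out_socket->b_transport(trans, delay);        // forward to the target
      }
    };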

Implementing Hardware Blocks

A second segment is the implementation of hardware blocks, where there are two broad trends to consider. One is that intellectual property (IP) reuse continues to rise in importance. No one wants to build a functional block from scratch when they can reuse one from a library, whether from within their own organization or from an IP vendor. Thus, there will continue to be issues with integrating reused IP and qualifying that integration effort.

The other broad trend in hardware implementation is high-level synthesis (HLS), which concerns implementation of new IP blocks. HLS has come a long way in terms of adoption, says Schirrmeister. System-level methodologies have historically been strongest in Europe and Japan, but this also is changing.

“We expect to do a fairly large portion of our business this year in the U.S.,” says Forte’s Cline. Within two years, Forte expects a majority of its business to be done domestically. Korea also reportedly is a fast-growing adopter of ESL tools and methodologies.

Within the U.S., numerous sectors are increasing their adoption of HLS. Cline cites growth in video processing, wireless design, and graphics processing. “The latter covers both datapath and control logic, and ESL’s detractors have always said that ESL doesn’t work well in control logic,” says Cline.

The consumer electronics sector is being drawn toward ESL in a big way, says Shawn McCloud, vice president of marketing at Calypto Design Systems. A prime example is the image processing done in cellular handsets to correct for distortion created by low-cost lens systems.

On the horizon are efforts to obtain feedback from RTL analysis on the HLS tools’ output and then feed that back into the HLS flow for further iteration. “In the future, you might run something through silicon place and route and get early feedback on congestion,” says Cline. “You would put that code back into HLS to tweak it and create a different architecture. That is something that will mature a little more.”

A key advantage of HLS is its ability to standardize RTL coding styles. In the future, this will influence certain aspects of RTL design, especially coding for power efficiency. Certain RTL coding styles are well known to minimize power consumption, and HLS tools automatically apply such best practices in their RTL output, making that code well suited for downstream RTL synthesis.
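
As a hedged illustration of the kind of source an HLS tool consumes, consider a clocked SystemC thread with an explicit enable. The guarded update gives the tool a register enable it can map onto clock gating in the generated RTL; the module is hypothetical, and what a particular tool actually infers is tool-dependent.

    #include <systemc>

    // Illustrative synthesizable-style SystemC: the enable condition
    // exposes a register enable an HLS tool can turn into clock gating.
    SC_MODULE(Accumulator) {
      sc_core::sc_in<bool> clk, rst, en;
      sc_core::sc_in<sc_dt::sc_uint<16> >  din;
      sc_core::sc_out<sc_dt::sc_uint<32> > dout;

      SC_CTOR(Accumulator) {
        SC_CTHREAD(run, clk.pos());
        reset_signal_is(rst, true);
      }

      void run() {
        sc_dt::sc_uint<32> acc = 0;
        dout.write(0);
        wait();
        while (true) {
          if (en.read()) {      // idle cycles leave acc untouched:
            acc += din.read();  // a natural clock-gating opportunity
            dout.write(acc);
          }
          wait();
        }
      }
    };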

Why High-Level Synthesis?

There are three key drivers behind HLS adoption. First is system-on-a-chip (SoC) complexity, which, according to ITRS data, is rising rapidly (Fig. 1). At 65 nm, gate density was in the neighborhood of 300 kgates/mm². At 32 nm, that figure was up to 1.2 Mgates/mm². That translates into about 60 million gates on a die measuring 50 mm².

The problem is that, given existing RTL methodologies, an RTL engineer can generate about 200 kgates/year. At that rate, filling a 60-million-gate die would take on the order of 300 designer-years. So if systems houses want to take advantage of process shrinks, only so much can be gained by hiring more designers. They will need tools that enable each designer to create more gates per year.

The second key driver is power integrity. Historically, systems houses have addressed power consumption through supply scaling. As they move to smaller process geometries, they scale down the power rails. But VDDs are already down to 0.7 V and the physics around leakage, thermal issues, and IR drops pose insurmountable limitations. Below 45 nm, power density scales up in nonlinear fashion.

The industry has hit an inflection point on this issue. HLS tools will be relied upon for efficient power optimization even before RTL is created. At the RTL level, automated power optimization becomes mandatory, inserting better clock gating and exploiting the light-sleep modes of memory devices.

“Architectures are increasingly important to differentiate because of power limitations,” says Johannes Stahl, director of product marketing for system-level solutions at Synopsys. Expect even more pressure to optimize for power at the earliest architectural definition levels of the design cycle. Design teams will need to look at the power architecture issues at very high levels of abstraction.

The final key driver is verification, which is becoming exorbitantly expensive. The Wilson Research Group conducted a study on functional verification from 2007 to 2010 and found that the average percentage of total time engineering teams spent on verification jumped from 50% to 56% over that span. There also was a 58% increase in the number of verification engineers. Verification has become a key reason to adopt ESL if only because it can help deliver cleaner RTL to the logic-synthesis flow.

Hardware Meets Software

In typical system design cycles, software development begins long before target hardware exists on which to verify software functionality. This is where transaction-level modeling and virtual platforms (VPs) come in, and these technologies will play an increasingly important role in future ESL flows.

Software development is being made massively more complex by multicore architectures (Fig. 2). “It’s not trivial to distribute software across cores,” says Synopsys’s Stahl. This is true in many sectors, including consumer, where innovation increasingly comes from architectural choices.

Likewise, in the automotive market, software is growing steadily more complex. Many safety features are implemented in software, a direct consequence of the ISO 26262 functional safety standard for road vehicles. “This will likely cause a major methodology shift,” says Stahl.

Virtual platforms serve two functions. One is software/hardware codesign, where designers optimize and verify their system with software in mind. In the other, the VP serves as a high-level hardware model that’s delivered to software developers before the actual target hardware exists.

The next step will be bringing the TLM platform and prototyping environment together with TLM synthesis, says Calypto’s McCloud. Doing so centers on HLS, but it also involves verification, using automatically performed C-to-RTL equivalence checking to ensure no errors have been introduced in synthesis.

Making Models That Matter

Going forward, the models used in transaction-level modeling and those fed into high-level synthesis will have very different needs. TLMs must execute at 200 to 300 MHz to be able to run software and achieve reasonable coverage. Because they need not model all of the nuances of actual hardware, they can execute that fast. Models fed into HLS, by contrast, must carry specifics about interfaces and hardware protocols to synthesize properly.

Look for a move to models that execute at the higher speeds required for transaction-level work but also have enough detail to be synthesizable in an HLS flow. Calypto Design has done work in this area using a technology it calls “multi-view I/O,” which is a means of changing a transaction-level interface to a pin-level or HLS interface for implementation.
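
The sketch below illustrates the general two-view idea rather than Calypto’s actual mechanism: one functional core, wrapped once as a fast transaction-level model and once at the pin level with the cycle detail HLS needs. All names are hypothetical.

    #include <systemc>
    #include <tlm>
    #include <tlm_utils/simple_target_socket.h>

    // One shared functional core, two interface "views" of it.
    inline int scale(int x) { return 2 * x; } // the shared behavior

    // Fast transaction-level view for running software against.
    struct ScalerTLM : sc_core::sc_module {
      tlm_utils::simple_target_socket<ScalerTLM> socket;
      SC_CTOR(ScalerTLM) : socket("socket") {
        socket.register_b_transport(this, &ScalerTLM::b_transport);
      }
      void b_transport(tlm::tlm_generic_payload& trans, sc_core::sc_time&) {
        int* p = reinterpret_cast<int*>(trans.get_data_ptr());
        *p = scale(*p);                     // whole operation in one call
        trans.set_response_status(tlm::TLM_OK_RESPONSE);
      }
    };

    // Pin-level view carrying the clock and port detail HLS needs.
    SC_MODULE(ScalerPins) {
      sc_core::sc_in<bool> clk;
      sc_core::sc_in<sc_dt::sc_int<32> >  din;
      sc_core::sc_out<sc_dt::sc_int<32> > dout;
      SC_CTOR(ScalerPins) { SC_METHOD(step); sensitive << clk.pos(); }
      void step() { dout.write(scale(din.read().to_int())); }
    };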

TLMs and virtual platforms are finding new applications in many areas, according to Bill Neifert, chief technology officer at Carbon Design Systems. “Verification is the number-one area for growth in virtual platforms,” he says.

Early adopters of virtual platforms used them for architectural exploration in the beginning stages of design cycles. Now, the trend is for them to move into later stages such as firmware development. In turn, firmware teams are using VPs to drive verification of their work. In addition, system integrators use the firmware results as part of the verification suite for the overall SoC.

“This is not yet a mainstream use, but leading-edge customers who have used VPs for a while are branching out into verification now in a big way,” says Neifert.

Additionally, VPs are now seeing use in defining system power requirements. Design teams have begun to realize that making architectural decisions that positively impact power early in the process has huge advantages. An emerging trend is to get software running on a cycle-accurate VP early in the process and use the platform to generate power vectors. Hand-crafted power vectors are notoriously inaccurate, but it’s relatively easy to instrument the VP so that it generates power data on the fly as the software runs.

The result is a more accurate view of power consumption while the design is still in flux. Teams then can use that information to make better decisions about software, hardware, and partitioning.
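
Here is a minimal sketch of such instrumentation, assuming a monitor wrapped around a memory interface and placeholder per-access energy figures; a production flow would calibrate these numbers against characterized hardware.

    #include <iostream>
    #include <systemc>
    #include <tlm>
    #include <tlm_utils/simple_initiator_socket.h>
    #include <tlm_utils/simple_target_socket.h>

    // Illustrative power monitor: accumulates an energy estimate per
    // transaction as software runs on the virtual platform. The per-access
    // energy figures are placeholders, not characterized data.
    struct PowerMonitor : sc_core::sc_module {
      tlm_utils::simple_target_socket<PowerMonitor>    in_socket;
      tlm_utils::simple_initiator_socket<PowerMonitor> out_socket;
      double energy_pj = 0.0;

      SC_CTOR(PowerMonitor) : in_socket("in"), out_socket("out") {
        in_socket.register_b_transport(this, &PowerMonitor::b_transport);
      }

      void b_transport(tlm::tlm_generic_payload& trans, sc_core::sc_time& delay) {
        energy_pj += trans.is_read() ? 12.0 : 18.0; // hypothetical pJ/access
        out_socket->b_transport(trans, delay);      // pass the access through
      }

      void end_of_simulation() override {
        double seconds = sc_core::sc_time_stamp().to_seconds();
        if (seconds > 0.0)
          std::cout << "avg power: " << (energy_pj * 1e-12) / seconds
                    << " W" << std::endl;
      }
    };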

Hardware/Software Integration

The last area of what can be called ESL is the integration of hardware and software after partitioning decisions are made. The notion of prototyping enters the picture here. It’s also where ESL bumps up firmly against RTL.

“There are four gears to this car, one might say,” says Cadence’s Schirrmeister. One is transaction-level modeling, another is RTL simulation, a third is emulation/acceleration, and the fourth is FPGA-based prototyping. “These are four different ‘gears’ for putting hardware and software together before you have actual silicon,” says Schirrmeister.

The connectedness of these engines is where the future lies and where Cadence and other EDA vendors will concentrate their efforts. The trend in this regard is to optimize hardware execution of parallel blocks in the design.

“Some of it already works, as in RTL simulation being combined with emulation so you have different levels of speed,” says Schirrmeister. For example, Cadence’s Incisive platform can serve as the front end to both RTL simulation on a host processor and the execution of RTL on an emulation platform.

This leads into considerations of how best to choose a prototyping platform. “For multiprocessor designs with a graphics or video engine in parallel, it’s clear that the processor itself can be prototyped best on the host using VPs,” says Schirrmeister.

But blocks such as video decoders or graphics engines are so compute-intensive that they do not map well on the host. “Those items are best kept in hardware in the emulation box or on FPGA-based rapid prototyping boards,” Schirrmeister says.

Thus, a growing trend is for designers to more carefully consider their prototyping and emulation vehicles. Here is where standards such as TLM 2.0 and the Standard Co-Emulation Modeling Interface (SCE-MI) play an important role.

According to Lauro Rizzatti, general manager of EVE-USA and one of Electronic Design’s Contributing Editors, interest is growing in ESL co-emulation, particularly in Asia, and in the U.S. to a lesser degree (see “Social Media And Streaming Video Give EDA Cause For Optimism”).

“Design teams have been asking us to prove that our ZeBu emulation systems can play into the ESL environment by providing performance and cycle accuracy for anything described at RTL level,” says Rizzatti.

Co-emulation, the marriage of high-level models with RTL, has paved a path to adoption on a larger scale. In the U.S., the main driver is accelerating software development ahead of silicon. TLM 2.0 will help with hardware debugging using SystemVerilog testbenches (Fig. 3). According to Rizzatti, in the past year six Asian systems houses have asked EVE to integrate ZeBu with their ESL environments via TLM 2.0, a clear sign of the growth in co-emulation.

Standards on the Move

If Synopsys’s acquisition of Magma Design Automation was the biggest event in EDA last year, the second biggest might have been the merger of Accellera with the Open SystemC Initiative (OSCI). Now known as the Accellera Systems Initiative, the combined organization is in a better position than ever to positively impact the broader adoption of ESL through its standards efforts.

According to Accellera Systems Initiative chairman Shishpal Rawat, it is very important that the interoperability of flows be based on industry standards. “This way, users can define the flow that best fits their need,” Rawat says.

Rawat sees a continued move from best-in-class point tools to full flows. Thus, it’s critical for future standards efforts that the EDA vendors monitor users’ needs, while users reciprocate by making vendors aware of their concerns.

Together, vendors and users bring these observations back into the Accellera Systems Initiative, which discusses, forms, and ratifies standards. This embodies a trend toward a common platform on which system design standards, IP standards, and chip standards are formed.

Within Accellera’s verification IP technical steering committee, work has already accounted for OSCI’s TLM 2.0 standard and leveraged it in developing some of the Universal Verification Methodology’s (UVM) verification IP. Look for this kind of synergy to be nurtured going forward.

“I think we will ensure that they collaborate on the next generation of the UVM,” says Rawat.
