IP Modeling Holds Key To Easier Design Reuse

Aug. 6, 2001
IP models must be accurate and provide reasonable simulation speed before design reuse really hits the mainstream.

On the surface, designing complex systems-on-a-chip (SoCs) with reusable blocks of intellectual property (IP) looks like a breeze. Just throw together your architecture, drop those functional blocks into your design, and send it over the wall for physical implementation. Unfortunately, things aren't so simple in practice.

The topic of IP usage, never mind reuse, is anything but simple in real life. If it were, we wouldn't find so many vendors of EDA tools purporting to have "the answer." One thing is certain, though. Any IP block has to be modeled in a way that accurately captures its functionality and behavior and enables simulation, both as a standalone block and as part of the overall SoC or ASIC.

The designer evaluating a piece of IP wants models that permit checking out the IP, preferably in the context of an overall design, in a reasonable amount of time and at a level of abstraction that won't take weeks to simulate. In the meantime, the provider of said IP hopes to protect itself and its true IP while still giving the designer enough meat and detail to thoroughly evaluate the core in question. Modeling can be carried out on a number of levels, but with those options come tradeoffs that will affect the design process downstream (see the table).

Validating a piece of IP on its own is the easiest part of the equation. Depending on the source of the IP, it often is a given that the block is good and accurately modeled. If it came from a large, well-known supplier of IP cores, it will probably work if used properly.

Established IP suppliers jump through hoops to ensure that their IP works as advertised. Dave Wood, director of product marketing for register-transfer-level (RTL) cores at Mentor Graphics' Inventra IP Division, maintains that cores coming out of the company are exhaustively simulated and prototyped in hardware. "From an IP provider's point of view, our models have to work at the bit level. We must get the model as close to bit level as possible, which still gives us very good verification speed," he says.

But if it's a block with suspect provenance, then caveat emptor is the rule of the day. For example, you might consider a standards-based piece of IP for incorporation into a design, such as a Bluetooth radio. In cases like this, where the technology embodied in the core is in the early stages of its life, the core's track record in silicon just isn't there yet. The core might work, but the IP integrator will have to do a considerable amount of homework. Therefore, a method for evaluating and verifying IP is required.

One can verify a block of IP in several ways, thus ensuring the accuracy of the models used for simulation, according to James Hakewill, director of product engineering at ARC Cores. "There are at least five levels of accuracy for modeling and verification: instruction-set simulation, cycle-accurate simulation, HDL simulation, hardware emulation, and the ultimate in accuracy, actual silicon," he explains.

A user most likely wouldn't create a chip to verify that a piece or pieces of IP work. That's more the type of job for the IP provider. For example, ARC Cores creates test chips for its user-configurable processors for demonstration purposes. That leaves IP consumers with the other, less costly, and nominally less accurate methods. Each one brings tradeoffs, primarily in the area of speed versus accuracy.

In most cases, IP consumers will want to begin their evaluation efforts with instruction-set simulation. With this kind of simulation model, designers compile code for the IP core to run and then simulate its execution, checking the states of the core's outputs on each clock cycle. Typically, one instruction is executed per clock cycle. In the end, you will have checked whether the core put the ones and zeroes on the bus that you wanted it to with each tick of the clock. The downside to this type of simulation model is that it conveys little or no detailed timing information.
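In skeletal form, an instruction-set simulator is little more than a fetch-decode-execute loop that advances one instruction per simulated cycle. The C++ sketch below illustrates the style; the three-instruction accumulator ISA, the register, and the program are all invented for this example and represent no vendor's actual model.

```cpp
// Minimal instruction-set-simulator sketch. The toy ISA is hypothetical,
// invented purely to show the simulation style: one instruction per
// cycle, visible register and output state, no gate-level timing.
#include <cstdint>
#include <cstdio>
#include <vector>

enum Op : uint8_t { LOAD, ADD, OUT };          // toy opcodes
struct Insn { Op op; uint8_t imm; };

int main() {
    std::vector<Insn> program = {{LOAD, 5}, {ADD, 3}, {OUT, 0}};
    uint8_t acc = 0;                           // accumulator register
    uint64_t cycle = 0;                        // one instruction per "cycle"
    for (const Insn& i : program) {
        switch (i.op) {
            case LOAD: acc = i.imm;  break;
            case ADD:  acc += i.imm; break;
            case OUT:  std::printf("cycle %llu: out=%u\n",
                                   (unsigned long long)cycle, acc);
                       break;
        }
        ++cycle;   // only instruction-level state; no rise or fall times
    }
    return 0;
}
```

Note what's missing: there are no clock edges or nanosecond delays anywhere, which is exactly why this style runs fast and why it conveys no detailed timing.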

Distinctions must be made among what some call "instruction-set" simulation, what others call "cycle-accurate" or "cycle-true" simulation, and fully detailed "timing-accurate" simulation. If, as in the latter case, you want to know what's happening in your simulation nanosecond by nanosecond, looking at the rise and fall times of clocks, then you must run your simulations using the RTL code (VHDL or Verilog) for your core.

The tradeoff between cycle-accurate or instruction-set simulation and highly detailed timing-accurate simulation, of course, is in the speed of the simulation runs (Fig. 1). The higher the level of abstraction, the lower the level of detail, but the faster the simulation runs. You will see much more detail at lower levels of abstraction, such as RTL, but the simulation runs can be very long and tedious.

For some at this early stage of the design process prior to implementation, though, what's important isn't detailed timing verification, but a coarse level of functional verification. Moreover, some argue against using RTL in this coarse functional verification, claiming that it can be accomplished much more efficiently at a higher level of abstraction.

Herman Beke, CEO of Adelante Technologies, is an advocate of taking the functional checks of IP to a higher level of abstraction, specifically C or C++. "Simulating in C goes very fast and checks the real functionality of your block without you knowing anything about implementation, because there is no implementation," he says. "The type of bus and the word length you will use are all detailed implementation information that you haven't even thought about yet. You only care if it works at this point."
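A functional model of the sort Beke describes can be as simple as a pure input-to-output function. In the C++ sketch below, a 4-tap FIR filter stands in as a hypothetical block; the double-precision arithmetic is a placeholder for a fixed-point word length that, at this stage, hasn't been chosen yet.

```cpp
// Untimed functional model: pure behavior, with no bus, no clock, and
// no word length. The 4-tap FIR filter is a hypothetical example block.
#include <cstddef>
#include <vector>

std::vector<double> fir4(const std::vector<double>& x,
                         const double (&h)[4]) {
    std::vector<double> y(x.size(), 0.0);
    for (std::size_t n = 0; n < x.size(); ++n)
        for (std::size_t k = 0; k < 4 && k <= n; ++k)
            y[n] += h[k] * x[n - k];   // the only question: does it work?
    return y;
}
```

Because nothing here depends on implementation decisions, such a model can be exercised against large test sets very quickly, which is precisely the appeal Beke cites.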

In many cases, the next step is a mixed-mode simulation where one or more blocks are coded in RTL, while others are still at a higher level of abstraction. The resulting simulation can suffer from what Beke terms an "accuracy mismatch." Such simulation runs are slower than runs made with all blocks coded in C. Beke finds some incongruity in the fact that some designers at this point will use techniques to strip away the details, and thus the accuracy, of the lower-level blocks.

Beke proposes an intermediate approach. At DAC, Adelante announced that its AR/T Designer tool can automatically generate cycle-true models in C. The process starts from a functional description of the block, generated in C code by Adelante's TurboCoder. Then, using AR/T Designer, users create and optimize an architecture for the block. The tool creates an RTL description of the block in VHDL or Verilog. Now it will also provide the cycle-true C model for functional simulation.

Modeling and simulation above RTL is a focus for other tool vendors too. AXYS Design Automation believes that bridging what Stefan Tamme, vice president of marketing and sales, calls the "hardware/software gap" is key to success in SoC design. "Verilog and VHDL are great tools, but they were never intended for system-level design," he says. "We believe C is the next meaningful step, because most engineers know it very well. In our minds, there's no need to come up with new languages, as C does the job fine."

Better models are absolutely crucial. "Models are the crucial building block because they're really the content of the simulator," Tamme says. To that end, AXYS Design Automation's tools approach the "software/hardware gap" by concentrating on cycle-accurate C models for the processor subsystem: MaxSim is a simulation environment for synchronous multicore SoC simulation, while MaxCore is a toolset for automatic generation of processor models and tools.

The AXYS concept is to model the processor and DSPs, as well as their associated memory, early in the design process, even in the system definition phase. This enables software design and integration to begin much earlier and critical system architecture decisions to be made before it's too late or too costly to implement changes.

Also in the camp of those who advocate higher levels of abstraction for IP modeling and verification is Prem Jain, Cynergy System Design's CEO. Jain pushes for "clock-level" accuracy in IP models as opposed to "cycle-level" accuracy. Jain points out that in the real world, SoCs will have IP running from more than one clock. So in Jain's view, instruction-set simulation is inadequate. Cynergy's tools convert the RTL code into what's termed a "clock-accurate" C model for simulation (Fig. 2).

"In the RTL itself for an IP block, you have the clock accuracy. But it's not protected, and in simulation, it's slow," Jain remarks. "Perhaps more importantly, RTL is farther downstream in the design process. In SoC design, it's important that customers get the specification early in the design process."

Jain also notes that hand-coded C or C++ models can suffer from a lack of maintenance, and hence poor accuracy. "Such models must represent time manually, as there's no inherent concept of time in the C language," Jain explains. "Moreover, there's no agreed-upon standard among suppliers for representing time in the same way. So everybody represents time differently in-house, and the C models can't be integrated." Cynergy's RTL C, or RTL-accurate C models, purport to represent timing in a standard fashion, enabling designers to integrate models supplied from different sources in an automated manner.
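Jain's point about time is easy to see in code. In the C++ sketch below, two hypothetical vendors' models each carry a homegrown notion of time, and the integrator is left writing glue to reconcile them; neither convention comes from any real supplier.

```cpp
// Why hand-coded C models clash on timing: C/C++ has no built-in notion
// of simulated time, so each model invents one. Both conventions below
// are hypothetical.
#include <cstdint>

// Vendor A advances time implicitly: one call equals one clock cycle.
struct ModelA {
    void clock_tick() { ++cycle_; }
    uint64_t cycle_ = 0;
};

// Vendor B expects absolute time in nanoseconds pushed in from outside.
struct ModelB {
    void evaluate_at(double time_ns) { now_ns_ = time_ns; }
    double now_ns_ = 0.0;
};

// The integrator must hand-write glue for every pair of models, which
// is exactly the effort a shared timing convention would eliminate.
void step_system(ModelA& a, ModelB& b, double clock_period_ns) {
    a.clock_tick();
    b.evaluate_at(a.cycle_ * clock_period_ns);
}
```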

Design and validation of IP at higher abstraction levels is finding support among synthesis-tool vendors as well. Pradeep Fernandes, director of technical marketing at Get2Chip, takes the position that IP should be addressed at levels of abstraction above RTL for the sake of flexibility (again, see the table).

Vendors of so-called "star" IP cores, such as CPUs and DSPs that can form the basis for an SoC, typically offer a customizable RTL core that was verified in their environment. "When integrating an RTL block, users need to be aware that the block is verified for a particular I/O protocol," Fernandes says.

"That's a plus for the IP provider, because he decides the verification strategy," he continues. "But it's a minus for the IP user, because he's tied to that I/O protocol. It becomes tough for both of them. Say the designer wants a particular pipelined rate that's different from what was verified. It's not easy for the IP provider to change the design to meet the user's requirements. The tradeoff might be more area, or power consumption."

Fernandes and Get2Chip propose moving up one level of abstraction to the functional level to perform high-level validation of IP. "At the higher level of abstraction, you're not committed to any particular throughput, latency, or I/O protocol. The designer can determine how he wants it to work. He can quickly, with constraints, change the protocol, pipelining, or throughput, and get a new architecture that's tuned for his application. Once that happens, IP adoption and reuse will be very rapid," Fernandes says.

Verification and modeling of IP at the block level is a critical problem. After all, if the pieces don't work, the whole won't work either. But verification of IP in context also is a major stumbling block to the widespread adoption of SoC technology. Fortunately, it's being addressed in a number of ways.

System-level modeling and verification becomes even more important when you're working with cutting-edge IP, says Norm Kelly, director of IP product marketing at Synopsys Inc. "I need a way to verify my entire system, incorporating all of the IP blocks that I've purchased. That's where the IP modeling requirements come in," he explains. "If I'm trying to verify a system with as many as 30 or 40 different blocks of IP, I'd better have a means of doing so that models each block at a pretty high level of abstraction. If I can't, my simulations will take forever, if I can run them at all. Also, I won't do a good job of verifying the system."

The real issue in full-SoC verification is interoperability, says Richard Curtin, chief operating officer of @HDL Inc. "IP, on its own, can often be validated against some standard, like a PCI bus controller," he says. "But there are different standards for the integration aspect. Yes, you can validate a MIPS core on its instruction processing. But when you plug that core in, how do you know you're using it correctly?"

The approach to functional verification undertaken by @HDL is a hybrid of two distinct techniques: static model checking and dynamic (intelligent-random) testbenches with links to simulation.

Formal techniques are mathematical, static methods for fully exercising a model's state machines without a simulator. But such tools have capacity limitations. @HDL's approach folds in the automatic generation of dynamic testbenches, performing automatic property extraction on the HDL code for simulation. The company's @Verifier tool also enables designers to write their own properties using simple Verilog extensions. This way, they can add constraints related to the interoperability between blocks, easing users into model checking at the system level (Fig. 3). Moreover, the model checking occurs before synthesis, helping to ensure that the RTL is clean before taking that step. By implementing hybrid static and dynamic model checking, the tool ferrets out state-machine deadlocks, multicycle-path logic errors, false-path logic errors, and multiclock-domain synchronization errors before they can propagate into gates and physical layout.

Another approach to SoC-level IP verification is taken by Forte Design Systems. The company uses a hierarchical verification approach that lets designers verify very large designs of 1 billion gates or more; verification and modeling of such large systems is quite a challenge at the RTL. Forte's GigaScale verification methodology and QuickBench 5.5 testbench automation software now give designers the ability to use their choice of languages for design and verification. Among the supported languages are Forte's Cynlib and C++ for verification speed and system modeling; Forte's RAVE language for detailed, complex verification; and Verilog and VHDL for traditional design implementation.

Forte's GigaScale Hub is a key factor in the tool's ability to handle large designs. Essentially, the Hub is a transaction manager that allows Forte to plug other tools and capabilities into QuickBench. It also let the company add C++ capability to the tool. "We believe that for very large systems, C++ is an important thing to have," says Jacob Jacobssen, Forte's president and CEO. Jacobssen anticipates that high-level simulation will probably be done in C++ while the fully implemented modules are coded in Verilog, with the Hub allowing both to run simultaneously.

The C++ capability also permits use of that language for IP modeling and distribution, says Brett Cline, Forte's director of marketing. IP vendors can compile the source code for their models and distribute the object file, making the source code secure. Cline cites the case of one Forte customer, an IP vendor selling to system houses. "In the past, they had to create a number of different representations of their low-level RTL and send that out for five platforms and five different simulators, which is a real nightmare for internal management," Cline explains. "Now the customer can model at a very high level in Cynlib, where they're achieving 25-times to 50-times speed improvements over RTL by doing dataflow modeling. The dataflow model has a bus-functional model wrapped around it, so it has the right protocols attached to it."
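The structure Cline describes can be pictured as two layers: a fast, untimed dataflow core and a protocol-aware shell around it. The C++ sketch below shows that shape only in outline; the register map, the stand-in arithmetic, and all names are hypothetical, and this is not Forte's Cynlib API.

```cpp
// Dataflow core wrapped in a bus-functional model (BFM): the core stays
// fast and untimed, while the wrapper speaks the bus protocol the SoC
// expects. Hypothetical single-register bus, for illustration only.
#include <cstdint>
#include <deque>

struct DataflowCore {                  // untimed: consumes/produces tokens
    std::deque<uint32_t> results;
    void push(uint32_t v) { results.push_back(v * 2u); }  // stand-in math
};

struct BusFunctionalWrapper {          // adds the protocol layer
    DataflowCore core;
    static constexpr uint32_t DATA_REG = 0x0, RESULT_REG = 0x4;

    void bus_write(uint32_t addr, uint32_t data) {
        if (addr == DATA_REG) core.push(data);
    }
    bool bus_read(uint32_t addr, uint32_t& data) {  // true if data valid
        if (addr != RESULT_REG || core.results.empty()) return false;
        data = core.results.front();
        core.results.pop_front();
        return true;
    }
};
```

The design choice is that only the thin wrapper carries protocol detail, so the bulk of the simulation time is spent in the fast dataflow layer.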

For designers wishing to work at the RTL, certain techniques can determine whether the RTL code is functionally correct in terms of the designer's intentions. Recently, Real Intent Inc. introduced its Verix RTL formal verification tool, which performs hierarchical, scalable verification of RTL starting from the block level and working up to full system level.

Verix analyzes the RTL to automatically instantiate RTL integrity checks, which are broken down into desired sequence checks (DSCs) and undesired sequence checks (USCs). It's important that a formal verification system support both USC and DSC. Proving that undesired sequences can never exist requires true exhaustive analysis. It's much more difficult than proving that a desired sequence can exist. Users can additionally write their own checks with the native HDL. The system verifies the conditions specified in the checks for all paths throughout the system, not just at the block level.
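The asymmetry between the two kinds of checks is easy to demonstrate: a desired sequence needs only one witness run, while an undesired sequence must be ruled out across every reachable state. The C++ sketch below makes that concrete with a brute-force reachability sweep of a tiny, invented six-state machine; real tools such as Verix work symbolically on full RTL rather than by explicit enumeration.

```cpp
// Toy illustration of an undesired-sequence check: exhaustively
// enumerate every reachable state of a small invented FSM and confirm
// the "bad" state is never among them.
#include <cstdio>
#include <queue>
#include <set>

int next_state(int s, int input) {      // toy transition function,
    return (s + (input ? 2 : 4)) % 6;   // states 0..5, reset state 0
}

int main() {
    const int BAD = 3;                  // the undesired state
    std::set<int> seen = {0};           // start from the reset state
    std::queue<int> work;
    work.push(0);
    while (!work.empty()) {             // breadth-first reachability sweep
        int s = work.front(); work.pop();
        for (int in = 0; in <= 1; ++in) {
            int ns = next_state(s, in);
            if (seen.insert(ns).second) work.push(ns);
        }
    }
    // Only because every reachable state was visited can unreachability
    // be claimed as a proof rather than a failure to find the bug.
    std::printf("bad state reachable: %s\n",
                seen.count(BAD) ? "yes" : "no");
    return 0;
}
```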

Building In Verification
One interesting way to ensure that IP is verifiable is to include assertion checks in the RTL code from the start. To that end, Verplex Systems Inc. has placed its verification library in the public domain in the hopes that IP providers will pick up on it as a method to build in seamless interoperability between simulation and formal verification tools, like its own Black Tie tool.

"The dilemma with IP is that it saves you time in design work, but verification time can be compounded because you don't know how to write vectors for something you didn't design yourself," says Dino Caporossi, Verplex's director of marketing. The Open Verification Library (OVL) effort amounts to what Caporossi terms "self-verifying IP."

The library was designed in Verilog and works with any Verilog-based or mixed-language simulator. It's a plug-in supplement to monitoring mechanisms, such as those used in simulation, extending them into detailed error detection and reporting. IP builders can choose the appropriate assertion monitor from the library and place it in (or connect it to) the area of the design where a bug is suspected. If a test vector triggers the assertion monitor during simulation, it will report an error.
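The OVL monitors themselves are written in Verilog, so the C++ sketch below is only an analogue of the idea: a small checker attached to a suspect signal that fires the moment a stimulus violates the asserted property. The one-hot property, the signal names, and the trace are all invented for illustration.

```cpp
// Analogue of an assertion monitor: evaluated every cycle, it reports
// an error when a test vector violates the asserted property.
// The property, names, and trace are hypothetical.
#include <cstdint>
#include <cstdio>

// Property: "grant must always be one-hot or zero."
struct AssertOneHot {
    void check(uint64_t cycle, uint32_t grant) {
        if (grant & (grant - 1))                  // more than one bit set
            std::printf("ASSERTION FIRED @ cycle %llu: grant=0x%x is "
                        "not one-hot\n", (unsigned long long)cycle, grant);
    }
};

int main() {
    AssertOneHot monitor;
    const uint32_t grant_trace[] = {0x1, 0x2, 0x6, 0x8};  // 0x6 is a bug
    for (uint64_t c = 0; c < 4; ++c)
        monitor.check(c, grant_trace[c]);         // checked on every cycle
    return 0;
}
```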

When problems with IP integration arise, they often don't result from SoC designers misunderstanding the functionality of an IP block. Instead, they stem from a misunderstanding of the interface between the block and the rest of the SoC. One simulator vendor that has worked with Verplex on the OVL concept is Co-Design Automation. The company has attempted to address the problems that arise with interfaces in two ways.

For one, Co-Design's Superlog design language includes a feature called Interfaces that permits modeling of an entire interface specification. It also lets users instantiate the interface, both into the system and onto the block of IP. For another, Co-Design has had some success with recoding Verplex's Verilog OVL assertions into Superlog. Between the two approaches, it's hoped that Co-Design will find a more flexible way to implement the OVL assertion checks, with greater programmability and deeper functional exploration.
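Superlog's Interfaces feature is a language construct of its own, but the underlying idea can be suggested in C++: define the interface once as a shared contract, then bind both the system side and the IP block to that single definition so a mismatch surfaces immediately. Everything in this sketch (the bus name, methods, and register) is hypothetical, not Superlog and not any real bus specification.

```cpp
// An interface defined once and shared by both sides of the integration.
#include <cstdint>

struct BusIf {                             // the interface specification
    virtual void write(uint32_t addr, uint32_t data) = 0;
    virtual uint32_t read(uint32_t addr) = 0;
    virtual ~BusIf() = default;
};

struct IpBlock : BusIf {                   // the IP side implements it
    uint32_t reg = 0;
    void write(uint32_t, uint32_t data) override { reg = data; }
    uint32_t read(uint32_t) override { return reg; }
};

void system_side(BusIf& bus) {             // the SoC side sees only the contract
    bus.write(0x0, 42u);                   // a mismatch is caught at
    (void)bus.read(0x0);                   // compile time, not in the lab
}
```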

Recently, Verplex donated its OVL to Accellera, while Co-Design Automation donated its Superlog language to the same design-language standards organization. Verplex will continue working with Co-Design Automation to further seamless interoperability between simulation and formal verification. The companies will work to refine and extend the capabilities of the OVL and Superlog.

The constraints that can be applied and checked during model checking and subsequent formal verification are just some of the many constraint sets that crop up throughout the design process. Startup Atrenta Inc. has introduced its SpyGlass product, an analytical look-ahead engine. The tool enables designers to foresee and collaborate on critical downstream engineering and manufacturing issues very early in the design process, including constraints that might be applied during the modeling and verification stages.

SpyGlass captures and aggregates an enterprise's expert knowledge and corporate requirements for good design practices at all levels. It looks ahead and anticipates the final stages of product development by providing intelligent analysis of the RTL code and applying the constraints in the knowledge base.

Hopefully, the work done today by standards organizations like Accellera will lead to better and cleaner IP models in the future. These in turn will ease the creation of the complex SoCs coming down the road.

Companies That Contributed To This Report
Accellera
(408) 358-9510
www.accellera.org

Adelante Technologies
+32 16 39 14 11
www.adelantetech.com

ARC Cores
(408) 361-7800
www.arccores.com

@HDL Inc.
(408) 441-1317

Atrenta Inc.
(408) 573-1430
www.atrenta.com

AXYS Design Automation Inc.
(949) 341-1900
www.axysdesign.com

Co-Design Automation
(877) 626-3374
www.co-design.com

Cynergy System Design
(512) 338-0165
www.cae-plus.com

Forte Design Systems
(800) 585-4120
www.fortedesignsystems.com

Get2Chip
(408) 501-9600
www.get2chip.com

Magma Design Automation Inc.
(408) 864-2000
www.magma-da.com

Mentor Graphics Corp.
(503) 685-7000
www.mentorg.com

Real Intent Inc.
(408) 982-5444
www.realintent.com

Synopsys Inc.
(877) 321-6063
www.synopsys.com

Verplex Systems
(408) 586-0300
www.verplex.com
