How An Emerging Methodology Better Supports SoC Design

Jan. 11, 2011
Cadence's Steve Brown and Raj Mathur outline the company's EDA360 vision for a SoC design/verification methodology that unifies hardware and software development.

[Article images: SoC development; Increasing system complexity; Verification plan; Virtualization capabilities]

Fueled by massive functional capacity, high-performance and low-power silicon processes, and exploding software application content, the pace and scope of electronics innovation is accelerating. The challenges of forging increasingly complex systems are also growing, confronting traditional approaches with out-of-control verification costs and missed market windows. What’s needed is a system design approach that allows both earlier software development and faster silicon development, along with earlier and more frequent system integration steps.

This methodology must enable a unified hardware/software development and verification environment, and allow the specification, analysis, and verification of constraints such as timing and power in the context of the software and the system. This article highlights recent advances in system design, transaction-level verification, and software development that support such a unified environment. The article discusses the role of standards and explains the productivity gains behind a transaction-level modeling (TLM)-to-GDSII design and verification flow.

Specifically, this article looks at the challenges and emerging methodologies behind what Cadence terms “system realization,” which is defined in the company’s EDA360 vision paper as the development of a complete hardware/software platform that provides all necessary support for end-user applications. System realization recognizes that the value and differentiation of new devices lies increasingly in the application software. The article concludes by noting some practical innovations, available now, that will help design teams build an effective flow.

The economy is motivating consumers to be more selective in their electronic product choices. Manufacturers must satisfy demands to increase application capabilities, provide longer battery operating lifetimes, and hit a narrow consumer shopping season. Electronic system developers are coping with the consequences of exponential growth in software and are facing a need to accelerate all aspects of system development.

Shortening Project Schedules
Short project schedules do not represent a new challenge, but the problem is becoming worse with rising complexity. With more and more software content in chips – whether embedded or application – RTL design and verification are no longer the main cost bottlenecks as a system-on-chip (SoC) goes to tapeout. System development, embedded software development, and verification are rapidly becoming the key cost components of the electronic design process. As the cost rises (Fig. 1), the need to shorten project schedules with methodologies that leverage trusted RTL simulation tools becomes paramount.

Increasing Task Complexities
With software increasingly driving hardware content, verification teams are searching for ways to revise, or preferably augment, their current verification strategies, which are running out of steam (Fig. 2). These strategies lack a consistent verification methodology from ESL to GDSII. They also lack interoperability among the different tools used across the flow, as well as the ability to reuse the same or similar testbench to drive verification at different abstraction levels throughout the flow. Finally, they lack a common dashboard to monitor and capture key metrics from all of the verification tools in the flow.

Growing Importance of Design for Power
Power consumption has a direct impact on product usage, heat, form factor, and economics. Battery life is a key differentiator for almost all mobile devices and can be as important as some of the functionality. Power consumption correlates with generated heat and can make a device costly to cool or too hot to operate. Lower power consumption can also enable miniaturization of space-constrained devices such as hearing aids. And power consumption costs money, whether as wired energy or battery expenses.

The challenge is that power consumption has been an outcome of the design process, one that cannot be measured until late in the project. What is needed is a way to assess the power consumed by the device under different operating conditions, running different software application stacks, before building silicon. Even more valuable is the capability to use power estimates to make architectural choices early in the project, before a significant effort is expended.

An Overview of Methodology Trends
To meet the expectations and requirements of today’s rapidly evolving electronics markets, devices must be designed from the perspective of the user applications, with all components designed to meet the required function, form factor, and performance characteristics. This is a dramatic change from the past, when semiconductor innovations drove device design. This change in the orientation of the design relationship is demanding changes from the design tools and methodologies.

An emphasis is now being placed on the decomposition of the system at each level into sub-components. Designers must refine the requirements for each sub-component and then quickly create and verify them. Next, the sub-components must be reintegrated into a whole system, and the system’s integrated capabilities must be verified. Several necessary innovations have emerged to enable the next wave of design methodologies and tools that automate these processes.

Earlier Software Development and Integration Testing
In the past, it was possible to develop software using early silicon device samples from the manufacturer. Pressure to begin system development earlier has led to widespread use of pre-silicon emulation to validate system integration and support early software development. The significant growth in using FPGA prototype boards for software development is another form of hardware-assisted verification that uses the RTL representation of the design before samples from an ASIC process are available. Still, this stage is possible only after RTL design is complete. In today’s rapidly evolving electronics marketplace, this does not advance software development early enough in the project schedule.

Some system developers today are using virtual prototypes of the system to enable even earlier software development. The idea is to model the device or board with a collection of executable models that are created much earlier than the silicon; these models provide enough accuracy and performance to develop software and perform some verification. Different approaches are being applied, including service-based modeling, proprietary modeling languages and technologies, and standard SystemC.
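
For illustration, the following is a minimal sketch of the kind of executable model a SystemC virtual prototype is assembled from, assuming a standard SystemC/TLM-2.0 library installation. The register block, its map, and its 10-ns access latency are hypothetical; a real peripheral model would add interrupts, side effects, and timing calibrated to the target hardware (socket binding and sc_main are omitted for brevity).

#include <cstdint>
#include <systemc>
#include <tlm>
#include <tlm_utils/simple_target_socket.h>

struct timer_regs : sc_core::sc_module {
    tlm_utils::simple_target_socket<timer_regs> socket;  // memory-mapped target port
    uint32_t regs[4];                                     // e.g., CTRL, LOAD, COUNT, STATUS

    SC_CTOR(timer_regs) : socket("socket") {
        socket.register_b_transport(this, &timer_regs::b_transport);
        for (auto &r : regs) r = 0;
    }

    // Blocking transport: decode the address, read or write a register, and
    // charge an approximate access latency to the initiator's local time.
    void b_transport(tlm::tlm_generic_payload &trans, sc_core::sc_time &delay) {
        const uint64_t idx = trans.get_address() >> 2;
        if (idx >= 4 || trans.get_data_length() != 4) {
            trans.set_response_status(tlm::TLM_ADDRESS_ERROR_RESPONSE);
            return;
        }
        uint32_t *data = reinterpret_cast<uint32_t *>(trans.get_data_ptr());
        if (trans.is_write())     regs[idx] = *data;
        else if (trans.is_read()) *data = regs[idx];
        delay += sc_core::sc_time(10, sc_core::SC_NS);    // assumed register access latency
        trans.set_response_status(tlm::TLM_OK_RESPONSE);
    }
};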

Virtual prototype models are mostly used only for software development, while the creation of RTL remains a separate, duplicate modeling effort. High-level synthesis of C/C++/SystemC is emerging as a way to unify virtual prototype models with the silicon design flow. Most would agree it remains difficult to create such models early enough, economically enough, and with enough performance and accuracy to feed into the RTL-based design flow.

Expanding Virtual Prototyping Opportunities
Even project teams that are successfully using virtual prototypes are spending a great deal of effort to do so. In addition, they are often also utilizing an emulation approach, as well as building FPGA-based prototypes. Each of these systems provides different benefits in software development (Table 1). To fully develop software and hardware in parallel, each of the approaches needs to enable early software development, find bugs earlier in the project, and ensure high-quality system validation.

Mixing of Multiple Abstractions for Verification
Very few design teams can afford to start their designs from scratch. Derivative designs increasingly consist of components at multiple levels of abstraction. Some components may be new and initially defined at the electronic-system level (ESL) of abstraction in a high-level language for modeling efficiency, other components may be available only in RTL form, and still others may be available as gate-level descriptions with some pre-existing qualification. These are all models of the system at different abstraction levels, with varying degrees of model accuracy and pre-qualification. A single verification environment that can accept the different model representations is required for application-driven, system-level verification. Driving the multi-abstraction design under test with automated testbenches that can be applied within a multi-abstraction verification environment will accelerate system-level verification. If a single verification environment isn’t readily available, the next best option is a hybrid of verification engines coupled together cohesively through tool interoperability. The tools can include TLM simulation, RTL simulation, and hardware-assisted verification.
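
To make the testbench-reuse idea concrete, the sketch below (plain C++, with hypothetical class names rather than any particular vendor API) writes the test once against an abstraction-independent access interface; only the adapter changes when the design under test moves from a TLM model to RTL behind a bus-functional transactor.

#include <cstdint>
#include <cstdio>

// Abstraction-independent access to the design under test (DUT).
struct dut_access_if {
    virtual void     write(uint32_t addr, uint32_t data) = 0;
    virtual uint32_t read(uint32_t addr)                 = 0;
    virtual ~dut_access_if() = default;
};

// Adapter for a transaction-level model; a real version would build a TLM-2.0
// generic payload and call b_transport on the model's target socket.
struct tlm_adapter : dut_access_if {
    void     write(uint32_t, uint32_t) override { /* forward as a TLM transaction */ }
    uint32_t read(uint32_t)            override { /* forward as a TLM transaction */ return 0; }
};

// Adapter for the RTL representation; a real version would drive a signal-level
// bus-functional model in a simulator or on a hardware accelerator.
struct rtl_adapter : dut_access_if {
    void     write(uint32_t, uint32_t) override { /* drive pin-level bus cycles */ }
    uint32_t read(uint32_t)            override { /* sample pin-level bus cycles */ return 0; }
};

// The test is written once and never changes when the DUT representation does.
void smoke_test(dut_access_if &dut) {
    dut.write(0x0, 0x1);                                   // enable the block
    std::printf("status = 0x%x\n", dut.read(0x4));         // check a status register
}

int main() {
    tlm_adapter tlm_dut;    // swap in rtl_adapter to rerun the same test against RTL
    smoke_test(tlm_dut);
    return 0;
}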

Standardization of Design and Verification for Increased Reusability
Increasing design reuse presents both opportunities and demands for the definition of standards. RTL design code has been notoriously difficult to integrate into new SoCs. Reusability involves far more than a common structured approach to microarchitecture. As virtual platform methodology becomes more common, the ability to reuse models from different sources becomes an imperative, placing demands on modeling techniques, abstractions, and coding styles. Enabling high-level synthesis flows also places standards requirements on the design methodology. Design modeling standards such as SystemC, TLM1, TLM2, and the SystemC synthesizable subset are being driven by the Open SystemC Initiative (OSCI). Using multiple verification engines with testbench automation requires the Universal Verification Methodology (UVM) and Standard Co-Emulation Modeling Interface (SCE-MI) standards.

Expanding Use of Transaction-Based Methodologies
One of the most important standardization trends is the use of transaction-level modeling as the basic architectural concept for design and verification methodologies. Transaction-level modeling is the core methodology that enables interoperability between virtual prototyping, high-level synthesis, transaction-based acceleration, and the integration into existing RTL design and verification approaches. This transition in methodology is needed to enable the modern technologies required for system design and verification.

The transaction modeling concept is required by virtual prototyping to enable the quick model creation and fast simulation speeds needed for software development. Recent high-level synthesis tools and methodologies enable direct use of those models for rapid creation of RTL designs. And transaction-level modeling is the underlying principle of UVM, enabling a single verification environment to be applied to TLM design simulation, RTL simulation, and RTL acceleration.

Transaction-based acceleration is a technology that speeds simulation by executing the compiled design on specialized hardware. Traditionally, the hardware has been driven with Verilog, VHDL, SystemVerilog, or e-based testbenches, but there is increasing demand for the hardware to be driven directly from SystemC or C/C++ testbenches, enabling software and firmware developers to accelerate verification of their code. The use of hardware-assisted verification is expanding further as the industry embraces UVM and metric-driven verification built on transaction-based acceleration technology.
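
The message-level communication pattern that underlies transaction-based acceleration can be sketched roughly as follows; the link functions are hypothetical stand-ins for a real channel (such as SCE-MI pipes), and the transactor in the accelerator is assumed to expand each message into pin-level bus cycles.

#include <cstdint>
#include <cstdio>

struct bus_msg {                 // one message = one complete bus transaction
    uint8_t  is_write;
    uint32_t addr;
    uint32_t data;
};

// Hypothetical link to the transactor running in the accelerator. A real flow
// would implement these on a standard channel such as SCE-MI pipes; printf
// stubs stand in here so the sketch is self-contained.
static void link_send(const bus_msg &m) {
    std::printf("to hw: %c addr=0x%x data=0x%x\n", m.is_write ? 'W' : 'R', m.addr, m.data);
}
static bus_msg link_recv() { return bus_msg{0, 0, 0}; }

// Workstation-side proxy used by an untimed C/C++ or SystemC testbench.
// Exchanging one message per transaction, rather than wiggling pins every
// clock, is what lets the accelerated design run orders of magnitude faster.
struct bus_proxy {
    void write(uint32_t addr, uint32_t data) { link_send(bus_msg{1, addr, data}); }
    uint32_t read(uint32_t addr) {
        link_send(bus_msg{0, addr, 0});
        return link_recv().data;             // one reply message per read transaction
    }
};

int main() {
    bus_proxy bus;
    bus.write(0x1000, 0xA5);                 // becomes pin-level cycles inside the transactor
    std::printf("read data = 0x%x\n", bus.read(0x1000));
    return 0;
}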

Moving to TLM Abstraction for Design and Verification
The industry is moving to the higher TLM abstraction level to increase productivity in creating new hardware IP. The reduction in lines of code speeds IP creation, reduces the number of bugs introduced, and improves verification performance and turnaround time. With Cadence’s Incisive Enterprise Simulator, SystemC functional verification throughput can be as much as 10 times greater than with RTL, enabling more exhaustive UVM-based verification of the device functionality before committing to RTL.

Designer productivity is increased by the faster creation of SystemC designs and by automated high-level synthesis, such as Cadence’s C-to-Silicon Compiler, to produce the RTL. Describing IP at a higher level of abstraction also increases IP reusability and eases implementation at different process nodes. Design teams can leverage the synthesis tool to create optimal microarchitectures for different system specifications rather than coding RTL by hand each time.
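
As a rough illustration of the coding style involved, the sketch below shows a small clocked SystemC module written in the spirit of the synthesizable subset, leaving scheduling and resource sharing to the high-level synthesis tool. The module, its interface, and its bit widths are purely hypothetical.

#include <systemc.h>

// A small fixed-function block in a synthesizable coding style: all work is
// done in a single clocked thread, with reset behavior declared explicitly.
SC_MODULE(mac4) {
    sc_in<bool>          clk, rst;
    sc_in<sc_uint<16> >  a[4], b[4];
    sc_out<sc_uint<34> > result;
    sc_out<bool>         done;

    void run() {
        // Reset state: executed whenever rst is asserted.
        result.write(0);
        done.write(false);
        wait();
        while (true) {
            sc_uint<34> acc = 0;
            for (int i = 0; i < 4; i++)          // the HLS tool may unroll or share multipliers
                acc += a[i].read() * b[i].read();
            result.write(acc);
            done.write(true);
            wait();                              // the tool is free to reschedule across cycles
            done.write(false);
            wait();
        }
    }

    SC_CTOR(mac4) {
        SC_CTHREAD(run, clk.pos());
        reset_signal_is(rst, true);
    }
};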

A comprehensive methodology guide is available from Cadence to lead engineers and teams through key decisions for modeling, UVM functional-verification environment creation, IP reusability, and synthesizability. The guide is accompanied by learning examples as well as a book, “TLM-driven Design and Verification Methodology.”

The integration of Cadence’s C-to-Silicon Compiler with its Encounter RTL Compiler enables automated closure on a microarchitecture that meets quality-of-results (area, timing, power) constraints. C-to-Silicon Compiler is built with a database that enables efficient ECO implementation and is integrated with the RTL Compiler ECO flow. This enables minimal netlist changes to be introduced even when functionality at the SystemC input is changed. This same database enables Incisive Enterprise RTL simulations to support SystemC source level debugging that is tightly synchronized with the RTL debugging and simulation.

Using a Continuum of System Platforms
Earlier software development is a key industry requirement for parallel hardware/software development. With the design modeled in IEEE 1666 SystemC and the processor executed by a fast instruction-set simulator, the Incisive Enterprise Simulator’s fast SystemC execution enables early software creation, integration with hardware, and functional verification. The Incisive Enterprise Simulator provides a native simulation engine that delivers high-speed execution of the TLM-2.0 virtual-prototype standard, a fully integrated development environment, and non-intrusive interactive debugging and analysis of the virtual prototype. It also supports legacy RTL in virtual platforms, using a single-kernel simulation architecture and unified debugging environment.
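
For background, the sketch below shows the temporal-decoupling pattern from the TLM-2.0 loosely timed coding style that makes such virtual prototypes fast: an initiator, standing in here for an instruction-set simulator, runs ahead of SystemC time and synchronizes only when its local quantum expires. The address loop and the 1-ns per-access cost are purely illustrative.

#include <cstdint>
#include <systemc>
#include <tlm>
#include <tlm_utils/simple_initiator_socket.h>
#include <tlm_utils/tlm_quantumkeeper.h>

// Initiator standing in for an instruction-set simulator. Socket binding and
// sc_main/elaboration are omitted for brevity.
struct iss_stub : sc_core::sc_module {
    tlm_utils::simple_initiator_socket<iss_stub> socket;
    tlm_utils::tlm_quantumkeeper qk;            // tracks how far we run ahead of SystemC time

    SC_CTOR(iss_stub) : socket("socket") {
        SC_THREAD(run);
    }

    void run() {
        qk.reset();
        uint32_t data = 0;
        for (uint64_t addr = 0; addr < 0x100; addr += 4) {   // illustrative access pattern
            tlm::tlm_generic_payload trans;
            trans.set_command(tlm::TLM_READ_COMMAND);
            trans.set_address(addr);
            trans.set_data_ptr(reinterpret_cast<unsigned char *>(&data));
            trans.set_data_length(4);
            trans.set_streaming_width(4);
            trans.set_response_status(tlm::TLM_INCOMPLETE_RESPONSE);

            sc_core::sc_time delay = qk.get_local_time();
            socket->b_transport(trans, delay);               // target adds its latency to 'delay'
            qk.set(delay);
            qk.inc(sc_core::sc_time(1, sc_core::SC_NS));     // assumed cost of the "instruction"
            if (qk.need_sync()) qk.sync();                   // yield to the kernel only at the quantum
        }
    }
};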

For hardware-assisted verification, Cadence’s Incisive Palladium XP Verification Computing Platform fuses simulation, acceleration, and emulation engines into a single platform. On top of these verification engines sit methodologies such as metric-driven verification and low-power design analysis and verification. With these capabilities, software and firmware integration testing is achieved with more accurate, real-world stimulus when running verification in either in-circuit emulation (ICE) mode or a hybrid mode of ICE and transaction-based acceleration (TBA). TBA is an optimized simulation-acceleration mode that accelerates logic simulation by several orders of magnitude, using message-level communication between the testbench components running on a workstation and the rest of the environment running on the Palladium XP platform.

Palladium XP further improves system-level verification productivity by offering fast bring-up, an easy-to-use flow, flexible simulation-like use models, scalable performance, and fast, predictable compilation. With support for UVM, Palladium XP enables substantial reuse across multiple levels of abstraction through an advanced, automated verification environment with constrained-random stimulus, functional coverage, and scoreboard checking.

Optimizing System Debug
System-level debugging is challenging due to the scope of the problem: It encompasses different domains of functionality, different logic representations (hardware and software), and unfamiliar third-party IP. It demands both verification speed and debug visibility, and it spans engineers from multiple disciplines. Cadence’s answer to this problem is a debug environment integrated across its verification engines, one that provides speed, visibility, and unified debugging while presenting information and debug control in ways that are germane to both hardware and software engineers.

“Hot swap” is a key technology that allows simulation runs to “swap” the underlying verification engine. Runs can be easily transferred from a simulator to an accelerator or vice versa. Typically, users start a run in the simulator to propagate the Xs and Zs of a four-state simulation engine, and then hot-swap to a two-state hardware accelerator that runs their verification cycles orders of magnitude faster. In both verification engines, the simulation use models are maintained for easy adoption of hardware acceleration.

Planning and Management of Complete Verification Flow
Verification coverage metrics have traditionally been written for, and collected from, different verification engines, resulting in costly, time-consuming manual integration and assessment of the impact on system-level verification. With metric-driven verification, metrics produced by multiple verification engines can be collected selectively from each engine or simultaneously in a hybrid verification environment—from TLM simulation, RTL simulation, acceleration, and/or emulation.

A verification plan (Fig. 4) organizes the coverage metrics into system and schedule views. The metrics are collected into a database from each verification engine and presented in an integrated dashboard for easy viewing. Pass/failure analysis can be performed holistically at the system level as the metrics are back-annotated into the system-level verification plan. The unification and integration of the different verification engines for metric-driven verification takes the planning and management of the complete verification flow to the next level of end-user productivity.

Unified Power Planning and Verification Flow
Creating IP in a high-level language such as SystemC enables architects to explore different microarchitecture implementations and optimize across the area/timing/power spectrum more flexibly than with RTL. Once RTL is created, the microarchitecture is fixed and system optimization possibilities are limited. In contrast, microarchitectural tradeoffs can be explored rapidly with C-to-Silicon Compiler by adjusting scheduling and resource sharing.

The use of multiple power domains is an increasingly common technique for optimizing system power. But it magnifies the functional complexity of the device and makes functional verification even thornier. The complexity can be tamed by creating a power functional-verification plan with Cadence’s Incisive Enterprise Manager, documenting all power domains and the on/off transitions that must be verified. The verification plan is a high-level, metric-based mechanism to organize and thoroughly specify the functional verification requirements for the project, from the system level down to the firmware (including the SoC level). Incisive Enterprise Manager uses this plan to confirm that tests have been executed to verify all of the system’s power shutoff/turn-on transitions, alongside complete system functional verification.
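
The kind of metric such a plan tracks can be illustrated with the small sketch below, which simply counts which shutoff and turn-on transitions of each power domain the tests have exercised. The domain names and report format are hypothetical; a real flow ties these metrics to the CPF power intent and the managed verification plan.

#include <array>
#include <cstdio>
#include <map>
#include <string>

// Records which power-state transitions each domain has gone through.
struct power_domain_coverage {
    std::map<std::string, std::array<unsigned, 2> > counts; // [0] = turn-on, [1] = shutoff
    std::map<std::string, bool> state;                      // current state: true = powered on

    void record(const std::string &domain, bool powered_on) {
        auto it = state.find(domain);
        if (it != state.end() && it->second != powered_on)
            counts[domain][powered_on ? 0 : 1]++;           // count only real transitions
        state[domain] = powered_on;
    }

    void report() const {
        for (const auto &s : state) {
            auto it = counts.find(s.first);
            unsigned on  = (it != counts.end()) ? it->second[0] : 0;
            unsigned off = (it != counts.end()) ? it->second[1] : 0;
            std::printf("%-10s turn-on:%u shutoff:%u%s\n", s.first.c_str(), on, off,
                        (on == 0 || off == 0) ? "  <-- coverage hole" : "");
        }
    }
};

int main() {
    power_domain_coverage cov;
    cov.record("cpu_dom", true);     // initial power-up: establishes state, not a transition
    cov.record("cpu_dom", false);    // shutoff exercised
    cov.record("cpu_dom", true);     // turn-on exercised
    cov.record("gpu_dom", true);     // gpu_dom is never shut off: a hole in the plan
    cov.report();
    return 0;
}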

Dynamic Power Analysis
To verify that software-driven, system-level designs are optimized for low power, engineers require an automated process for identifying bugs and ensuring that all power-related logic is exercised. The EDA360 methodology for low-power verification and analysis integrates logic design, verification, and implementation technologies with the Common Power Format (CPF), which represents power constraints and intent.

Cadence’s Incisive Palladium Dynamic Power Analysis (DPA) brings a system-level perspective to power budgeting for electronic devices. Using the high-performance Palladium XP engine, DPA lets users run long system-level tests, empowering SoC teams to assess the power consumption of a performance-sensitive function against the requirements for an acceptable end-user experience. DPA helps engineers quickly identify the peak and average power of SoCs running “deep” software cycles with real-world stimuli. Furthermore, with a successive-refinement approach, system designers can make better decisions when selecting IP, an “adequate” package, and cooling requirements, and they can react quickly to changes in the specification or environment. By testing various operational scenarios and performing “what-if” analysis of architecture tradeoffs, designers can make better decisions to save power.
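
To make the budgeting arithmetic concrete, here is a minimal sketch of the general idea behind activity-based power estimation: per-interval switching activity for each block is weighted by a per-block power coefficient, then averaged and peak-detected across the run. The coefficients and activity numbers are made up; an actual flow such as DPA derives them from the design database and the emulated software workload.

#include <algorithm>
#include <cstddef>
#include <cstdio>
#include <vector>

struct block_model {
    const char *name;
    double mw_per_unit_activity;    // assumed power per unit of switching activity
};

int main() {
    std::vector<block_model> blocks = { {"cpu", 4.0}, {"dsp", 2.5}, {"ddr_if", 1.5} };

    // Switching activity per block for each analysis interval (e.g., per 1 ms
    // of emulated execution of the real software workload).
    std::vector<std::vector<double> > activity = {
        {0.9, 0.4, 0.7},    // interval 0
        {0.2, 0.1, 0.3},    // interval 1: mostly idle
        {1.0, 0.9, 0.8},    // interval 2: peak workload
    };

    double peak = 0.0, total = 0.0;
    for (const auto &interval : activity) {
        double p = 0.0;
        for (std::size_t b = 0; b < blocks.size(); ++b)
            p += blocks[b].mw_per_unit_activity * interval[b];
        peak = std::max(peak, p);      // track the worst-case interval
        total += p;
    }
    std::printf("average power: %.2f mW, peak power: %.2f mW\n",
                total / activity.size(), peak);
    return 0;
}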

Although power-shutoff (PSO) techniques can drastically reduce SoC power consumption, their use increases the complexity of power verification. For large designs with multiple power domains, simulation alone may not be adequate for power verification because of performance limitations at the system level. With Palladium XP’s high-performance engine and deep trace capability, power sequences can be verified at the system level with complex trigger conditions for power on/off.

Conclusion
EDA360 System Realization requires a reversal of the historical design process. Instead of a bottom-up, silicon-first methodology, it calls for an application-driven, top-down approach. Not only must the requirements be driven top-down, but the methodologies used to design and verify sub-systems and silicon must each become more efficient and faster, and they must flow more easily into the rest of the system-development process. This is only possible through higher levels of abstraction, standardization, increased automation, and special-purpose verification computing platforms.
