Electronic Design

45th DAC Takes The SoC Methodology Plunge

Come to Anaheim this June to get up to speed on the latest in ESL, physical design, analysis technologies, and verification, with a slant toward hands-on tutorials.

At the inaugural Design Automation Conference in 1964, then known as the SHARE Design Automation Workshop, the fledgling design-automation industry batted around some of the fundamentals of its mission to engineers. Papers carried titles such as “A method for best geometric placement of units on a plane,” and “New horizons in graphic output on the IBM 1403 printer.” Some of that year’s program hinted at yet unformed methodologies.

Now, as the EDA industry heads for the 45th Design Automation Conference (DAC, Anaheim, Calif., June 8-12), the focus is squarely on methodology. “In putting together this year’s program, we relied on two years’ worth of attendee surveys and feedback,” says Limor Fix, 45th DAC General Chair (see “Anatomy Of A Conference Program” at www.electronicdesign.com, Drill Deeper 18940). “People want more methodology-oriented content and more ‘how-to’ with today’s tools. The feedback was that there was too much future-oriented content and not enough hands-on tutorial information.”

So in accepting some 20% of the 639 paper submissions, DAC’s program committee forged a technical conference heavy on the methodological aspects of areas that should interest power users of EDA tools. Domains with the greatest number of paper submissions include system-level design and hardware/software co-design, physical design and manufacturing, low-power design and power analysis, timing analysis and design for manufacturing (DFM), and verification. All are well represented in the program.

“These five broad areas provide a good indication of what the industry is looking for and what academia is doing. It shows where the focus and money is going,” says Fix.

The electronic system level (ESL) features a number of growing areas for designers. One is the design of systems-on-a-chip (SoCs) with multiple processors and the creation of parallelized software to run on them (see “Software Rules The Day In Multicore SoC Design,” Electronic Design, April 24, 2008, p. 38, ED Online 18640).

Tutorials, panels, paper sessions, and invited papers at DAC will present new ideas, state-of-the-art design tools, and novel methodologies to reduce development time, reduce energy consumption, and enhance system performance of these complex designs and embedded parallel software.

Tools for power optimization at ESL have been emerging for a few years now. At DAC, ChipVision Design Systems will unveil two new ones for helping designers meet their power budgets early in the design cycle.

One, dubbed PowerOpt, lets RTL and system designers work interactively with system-level descriptions written in ANSI-C, SystemC, and C++, exploring and visualizing critical tradeoffs in timing, area, and power. It then implements their choices to generate power-optimized, register-transfer-level (RTL) code with up to three times lower power consumption than RTL flows.

ChipVision’s second new technology to appear at DAC is the P-SAM (Power Simulation, Analysis and Modeling) framework. It offers system-level designers and software developers a standards-based application programming interface (API) for source-code instrumentation, as well as comprehensive analysis tools for design descriptions in SystemC or pure C/C++. It also enables early system-level power analysis based on system-level simulation.

Working interactively, PowerOpt and the P-SAM framework perform architectural, software, and power tradeoff analysis at a level not typically possible with other approaches (Fig. 1). Users can investigate various bus topologies, compare power consumption of intellectual-property (IP) blocks, identify hotspots, and explore other system areas to hone their power-management strategies and verify these are met.
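The instrumentation idea behind such frameworks can be sketched in a few lines. The code below is a hypothetical illustration, not ChipVision’s actual P-SAM API: functions are tagged with assumed per-call energy costs, and a log accumulates an early, coarse power estimate as the code runs.

```python
import functools

# Hypothetical sketch of source-code instrumentation for power estimation.
# The decorator and the per-call energy numbers are illustrative inventions,
# not ChipVision's actual P-SAM API.
ENERGY_LOG = []

def instrument(energy_pj):
    """Tag a function with an assumed per-call energy cost in picojoules."""
    def wrap(fn):
        @functools.wraps(fn)
        def inner(*args, **kwargs):
            ENERGY_LOG.append((fn.__name__, energy_pj))
            return fn(*args, **kwargs)
        return inner
    return wrap

@instrument(energy_pj=4.2)   # assumed cost of one filter step
def filter_step(x):
    return x * 2

@instrument(energy_pj=1.1)   # assumed cost of one bus transfer
def bus_transfer(x):
    return x

for sample in range(8):
    bus_transfer(filter_step(sample))

total_pj = sum(e for _, e in ENERGY_LOG)
print(f"calls logged: {len(ENERGY_LOG)}, total energy: {total_pj:.1f} pJ")
```

In a real flow, the per-call costs would come from characterized power models rather than guesses, but the principle is the same: annotate the source, simulate, and read off an estimate long before RTL exists.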

Both PowerOpt and P-SAM are available now. PowerOpt costs $450,000 for a three-year, time-based license. P-SAM pricing varies depending on the SoC’s structure.

Another product that can provide gains in power management at ESL is version 3.4 of Forte Design Systems’ Cynthesizer SystemC synthesis tool. Cynthesizer 3.4 adds integration with a power-optimization tool.

Power-estimation reports generated by the power-optimization tool are included in the results available with the Cynthesizer Workbench, Cynthesizer’s graphical interactive analysis environment. These reports, cross-linked to annotated source-code views, make it easy to understand implementation tradeoffs and make changes for improved quality of results (QoR).

Given the increased interest among designers in FPGAs, it’s only logical that high-level synthesis should make its way to FPGA flows. At DAC, Synfora will show its PICO Extreme FPGA and PICO Express FPGA algorithmic synthesis tools. The PICO Extreme FPGA makes it possible to implement dramatically larger and more complex subsystems using a recursive system-composition methodology. It allows for familiar design styles, reduces runtime, and achieves high QoR in implementing video codecs, wireless modems, or imaging pipelines. PICO Extreme’s recursive system-composition methodology is enabled by tightly coupled accelerator blocks (TCABs) that allow users to designate parts of their algorithm as custom building blocks.

PICO Express FPGA is a flavor of the tool optimized specifically for Xilinx’s 65-nm Virtex-5 and Spartan-3A DSP FPGAs. All products are currently available. PICO Extreme starts at $350,000, while PICO Express FPGA starts at $150,000.

Standards in the ESL realm continue to evolve, and DAC will be the venue for the Open SystemC Initiative’s (OSCI’s) rollout of the final version of TLM 2.0. This standard enables SystemC model interoperability and reuse at the transaction level, providing an essential ESL design framework for architecture analysis, software development, software performance analysis, and hardware verification.
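For readers new to the standard, TLM 2.0’s loosely timed style centers on a blocking transport call (b_transport) that carries a generic payload plus a timing annotation, letting an initiator and target exchange transactions without cycle-level detail. The sketch below is a Python analogue for illustration only; real TLM 2.0 models are written in SystemC/C++, and the names here merely mimic the standard’s generic payload and blocking transport.

```python
# Illustrative Python analogue of TLM 2.0's loosely timed style. Real TLM 2.0
# is SystemC/C++; the class and method names below only mimic the standard's
# generic payload and b_transport, and the 10-ns access delay is invented.
from dataclasses import dataclass

@dataclass
class GenericPayload:
    address: int
    data: bytearray
    command: str = "read"          # "read" or "write"
    response: str = "INCOMPLETE"

class Memory:
    """Target: services transactions and annotates a delay, TLM-style."""
    def __init__(self, size=256):
        self.mem = bytearray(size)

    def b_transport(self, trans, delay_ns):
        if trans.command == "write":
            self.mem[trans.address:trans.address + len(trans.data)] = trans.data
        else:
            trans.data[:] = self.mem[trans.address:trans.address + len(trans.data)]
        trans.response = "OK"
        return delay_ns + 10       # each access adds an assumed 10 ns

target = Memory()
local_time = 0                     # initiator's local time offset
wr = GenericPayload(0x10, bytearray(b"\x2a"), command="write")
local_time = target.b_transport(wr, local_time)
rd = GenericPayload(0x10, bytearray(1))
local_time = target.b_transport(rd, local_time)
print(rd.data[0], local_time)      # value read back and accumulated delay
```

The point of the loosely timed style is visible even in this toy: the initiator keeps running ahead with a local time offset instead of synchronizing with a simulation kernel on every access, which is what makes early software development on such models fast.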


Expected changes to the final version of TLM 2.0 include unified interfaces for loosely timed and approximately timed modeling styles, as well as enhanced support for extended protocol definitions using the generic payload. OSCI is currently developing a TLM-2 language reference manual (LRM) that should be completed by the end of 2008. This OSCI LRM will then be used to drive the IEEE standardization process.

DFM REMAINS A CONCERN

DFM remains a growing niche in EDA. At DAC, tutorials, panels, and paper sessions will discuss the current integration of DFM tools into the design flow and their anticipated impact. There will also be a comprehensive review of variation in manufacturing and the available analysis and optimizations at all levels of the design, starting at the architecture/microarchitecture levels. New techniques in OPC-aware (optical-proximity correction) routing and new ways to consider circuit performance during mask preparation will be presented, too.

Now that SoC design work is heading toward the 45- and 32-nm nodes, OPC and subsequent verification become much more compute-intensive. Unfortunately, the growth in CPU power isn’t keeping up with this trend.

Gauda Inc., a Sunnyvale, Calif.-based startup, will be at DAC with technology that, according to the company, accelerates OPC and optical-proximity verification (OPV) by up to 200 times over competitive technologies. The acceleration is achieved without any specialized hardware or FPGA clusters.

Rather, Gauda’s approach involves a new breed of algorithms using CPUs and GPUs that are traditionally found in gaming systems. With Gauda’s technology and about 10 garden-variety desktop PCs, a large, full-chip 45-nm layout can be processed overnight (Fig. 2). OPV can be accomplished on a single desktop CPU. In stealth mode since 2005, Gauda has now demonstrated that its technology is linearly scalable to several hundred desktop machines. This makes it possible to complete OPC and/or OPV runs in a few hours.
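To see why OPC is so compute-hungry, consider what model-based OPC does: repeatedly simulate the printed image (essentially a convolution of the mask with an optical kernel) and nudge the mask until the print matches the target. The 1-D toy below, with an invented blur kernel and step size, illustrates the iteration; production OPC performs this over billions of 2-D mask edges, which is exactly the data-parallel workload where GPUs pay off.

```python
# Toy 1-D illustration of model-based OPC: the optics are modeled as a blur
# (convolution) plus a resist threshold, and mask values are nudged until the
# simulated print matches the target. Kernel, step size, and iteration count
# are arbitrary assumptions; real OPC works on 2-D layouts with rigorous
# lithography models.
def simulate_print(mask, kernel=(0.25, 0.5, 0.25)):
    """Blur the mask to approximate the aerial image, then threshold it."""
    n, k = len(mask), len(kernel)
    blurred = []
    for i in range(n):
        acc = 0.0
        for j, w in enumerate(kernel):
            idx = i + j - k // 2
            if 0 <= idx < n:
                acc += w * mask[idx]
        blurred.append(acc)
    return [1 if v >= 0.5 else 0 for v in blurred]

def opc(target, iterations=20):
    mask = [float(t) for t in target]
    for _ in range(iterations):
        printed = simulate_print(mask)
        for i, (p, t) in enumerate(zip(printed, target)):
            mask[i] += 0.2 * (t - p)           # push mask where print misses
        # allow values outside [0, 1] as a stand-in for edge biasing
        mask = [min(1.5, max(-0.5, m)) for m in mask]
    return mask

target = [1, 1, 0, 1, 1]           # desired print: two lines, narrow gap
corrected = opc(target)            # uncorrected mask bridges the gap
print(simulate_print(corrected) == target)
```

Note that the uncorrected mask (the target pattern itself) prints the narrow gap closed; the iteration biases the mask in the gap until the simulated print resolves it, which is the essence of proximity correction.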

One potential way to address DFM problems is with layout optimization. To that end, Takumi Technology Corp. will be showing its Enhance-RO layout optimization tool, which addresses both rule- and model-based issues that increasingly impact parametric and catastrophic yields at deep sub-wavelength process nodes. With its built-in cost analysis and dynamic tradeoff analysis, the tool lets users enforce all applicable recommended rules by objectively trading off against cost.

Coping with variation is a critical concern in recent designs. Thus, DAC will offer tutorials and paper sessions on how to take variation into consideration. For example, delay predictions in critical paths often have significant errors, and when silicon is measured, there’s a large variation of path delays. To combat poor timing estimates, new technologies will be presented that use early silicon data to tune and correct the predictions.

Timing-constraint files are getting so large that ensuring the constraints’ validity becomes very time-consuming and error-prone. Blue Pearl Software will bring to DAC its Azure Timing Constraint Validator, a tool that automatically validates timing exception constraints at all stages in the flow, from RTL to final netlist. With it, designers can automatically check the constraints used to direct synthesis, static timing analysis, and place-and-route tools.

Azure Timing Constraint Validator is based on state-space search technology, in contrast to tools that rely on formal verification using combinational analysis techniques. Those techniques can’t validate multi-cycle paths and incorrectly report paths that are sequentially false, the company claims. The tool is available now for a one-year, time-based license price of $95,000.

Testing of deep-submicron SoC circuits at RTL is made substantially more difficult by the proliferation of at-speed faults, which are undetectable with traditional stuck-at testing. At-speed defects need to be detected by generating tests with new fault models.
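The distinction can be made concrete with a toy model: a transition-delay (at-speed) fault only shows up when a value must change between two consecutive clock edges, so detecting it requires a launch/capture vector pair rather than the single static vector that suffices for a stuck-at fault. The gate and delay model below are deliberate simplifications for illustration.

```python
# Toy contrast between stuck-at and at-speed (transition-delay) testing on a
# single AND gate. The delay model is a deliberate simplification: a
# slow-to-rise fault means a rising output misses the capture clock edge.
def and_gate(a, b, slow_to_rise=False, prev_out=0):
    """Evaluate the gate; a faulty rising transition keeps the old value."""
    out = a & b
    if slow_to_rise and out == 1 and prev_out == 0:
        return prev_out            # new value arrives too late to capture
    return out

# Stuck-at-style test: with a single static vector (output already settled),
# the delayed-transition fault is invisible.
assert and_gate(1, 1, slow_to_rise=True, prev_out=1) == and_gate(1, 1)

# At-speed test: launch (0,1) then capture (1,1) as a two-vector pair.
good = and_gate(1, 1, prev_out=and_gate(0, 1))
faulty = and_gate(1, 1, slow_to_rise=True,
                  prev_out=and_gate(0, 1, slow_to_rise=True))
print("good:", good, "faulty:", faulty)   # the vector pair exposes the fault
```

This is why at-speed test generation needs transition fault models and two-pattern tests: coverage metrics computed against stuck-at faults alone say nothing about whether such delay defects will be caught.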

Atrenta’s SpyGlass-DFT DSM, which will be shown at DAC for the first time, is touted as the only tool available for early fault-coverage estimation and analysis for timing closure on functional clocks due to at-speed test (Fig. 3). It generates at-speed test rules to help resolve timing-closure issues upfront at RTL. It also predicts at-speed test coverage early at RTL as it pinpoints and diagnoses low coverage issues.

Today’s large SoCs require physical-design decisions to be based on multidimensional considerations. At DAC, paper sessions will discuss how to optimize voltage partitioning using floorplan and application information. There will also be presentations on an analytical placer that produces better results by taking routability into consideration.

Atoptech will exhibit for the first time at DAC with its Aprisa physical-design suite. In addition to demonstrating Aprisa’s netlist-to-GDSII capabilities, Atoptech will also focus on detailing the tool’s True-timing analysis and low-power benefits. True-timing provides signoff-quality timing during place and route, says Atoptech. As a result, designers needn’t over-optimize the design based on timing estimates or spend significant manual effort to close timing. Aprisa provides modes to match either Synopsys’ PrimeTime-SI or Cadence’s CeltIC timing analyzers. Signal-integrity analysis is fast and multithreaded, and CCS model support is available for even greater accuracy (within 5% of Spice).

DAC will see the launch of Magma Design Automation’s Talus Hydra floorplanner and hierarchical design planner, a tool that’s fully integrated into Magma’s RTL-to-GDSII flow for managing the complexity of achieving timing closure in multimillion-gate designs.


Talus Hydra anchors a full hierarchical methodology that supports bottom-up, block-based flows, top-down black-box flows, and mixed flows with automated floorplanning, partitioning, and time budgeting. It enables early design planning using a netlist consisting of a mix of gates, RTL, macros, black-box models, and GlassBox models. It affords quick feedback on design prototypes for floorplan, design, and timing-constraint refinement.

Ciranova plans to show its Helix tool. It’s an automated analog layout suite that optimizes both circuit and device layout simultaneously, delivering design-rule-correct placement comparable in quality to that produced by an experienced layout designer, claims the company. Using Ciranova Helix, analog and custom designers can explore multiple layout alternatives in minutes, allowing them to get higher-quality designs to market in a fraction of the time needed by conventional methods.

With Helix’s fast runtimes, designers can explore multiple layout alternatives and even extract parasitics early in the design process. Benchmark results include placement of a 154-transistor, phase-locked-loop (PLL) circuit in under four minutes. Ciranova Helix’s primary inputs are a Spice netlist and a Process Design Kit (PDK) containing either Cadence SKILL PCells or Ciranova PyCells, such as those in the Interoperable PCell Library (www.iplnow.com). Helix is a native OpenAccess tool, which integrates seamlessly into OpenAccess-compatible environments such as Cadence’s Virtuoso, Silicon Canvas’ Laker, and Magma’s Titan, as well as DRC tools like Mentor Graphics’ Calibre and Synopsys’ Hercules. Ciranova Helix is available now.

DAC is the place to be this year for low-power designers, with tutorials, paper sessions, and several workshops delving into the how-to’s of designing low-power SoCs. Minimizing SoC power consumption has become a prime design goal, especially for multiprocessor SoCs, where peak temperature limits are of high concern.

Sequence Design is planning to show its PowerArtist tool, which focuses on power reduction at RTL in three key areas: clocks, memory, and datapath. The tool’s analysis engines examine the design’s RTL code, prioritize design hotspots, and deliver power reduction in one of two ways. It can operate in automatic mode or guide the user through manual edits within a graphical user interface (Fig. 4).

PowerArtist can be integrated with all design flows, including synthesis and formal verification. It’s also compatible with OpenAccess databases through its open API. Pricing starts at $220,000 for a one-year time-based license.

An interesting entry in the low-power design arena is Envis, formerly known as Envision Technology. At DAC, Envis will display its range of tools for measuring and reducing power consumption. For the power-estimation side of the equation, Envis’ Kelvin performs automatic power-pattern generation. The tool takes in a netlist and generates vectors to estimate power consumption for that netlist. The estimates are realistic, correlating well with average power.

Interfacing through an API with any of the industry-standard simulators, the tool’s vectors can be used to generate average power estimates as well as detailed power-pattern data. Estimates can be made early in the development cycle, when designers are trying to choose between IP blocks for a given function.

For power reduction, Envis offers up its Chill tool, described as a next-generation approach to clock gating. The tool automatically partitions the circuit into power partitions that can be turned on or off together. It inserts activity-detection circuitry that determines when a partition is inactive and disables clocking to that partition.

Clock gating can be done in combinational or sequential fashion. If manual clock gating has already been imposed on the circuit, the tool won’t override it. Instead, the tool optimizes it.

Working together, the Kelvin and Chill tools form a powerful methodology for estimating and reducing power consumption. After synthesis, a Kelvin run on the netlist establishes a baseline estimate before running Chill. A second Kelvin run estimates the power savings achieved by Chill. Kelvin can be run again after physical design for an even more accurate estimate that includes parasitics. Both Kelvin and Chill are available now. Chill sells for $180,000 for a one-year, time-based license, while Kelvin sells for $50,000.
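The payoff of activity-based clock gating can be sketched with back-of-the-envelope arithmetic in the spirit of an estimate-then-reduce flow: count the clock events delivered to a register bank with and without gating. The activity pattern and register count below are invented for illustration, not measured data from Kelvin or Chill.

```python
# Back-of-the-envelope sketch of activity-based clock gating: count clock
# toggles delivered to a register bank with and without gating. Cycle count,
# register count, and the activity pattern are invented for illustration.
CYCLES, REGS = 1000, 64

# Assume the partition's data actually changes in one cycle out of five.
activity = [(i % 5 == 0) for i in range(CYCLES)]

ungated = CYCLES * REGS            # every register clocks every cycle
gated = sum(activity) * REGS       # clock delivered only when active
saved_pct = 100 * (1 - gated / ungated)
print(f"clock events: {ungated} -> {gated} "
      f"({saved_pct:.0f}% of dynamic clock power saved)")
```

Since the clock tree and register clock pins often dominate an SoC’s dynamic power, suppressing four out of five clock events in an 80%-idle partition translates almost directly into that fraction of clock-power savings, minus the small overhead of the activity-detection logic itself.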

An increasing concern for power designers is the interaction between their ICs, the packages they reside in, and the circuit boards they’re attached to. A systemic approach can reduce design risk and optimize system cost. To that end, Apache Design Solutions will be at DAC with its Sentinel-PI, a chip-package-system co-design tool for power integrity. Sentinel-PI provides modeling, analysis, and optimization for IC, package, and printed-circuit-board (PCB) designers.

Sentinel-PI generates Spice-accurate models of the full-chip power-delivery network, which Apache terms Chip Power Models (CPMs). The models contain parasitics of the nonlinear switching and non-switching devices, as well as decoupling capacitance, loading capacitance, power/ground coupling capacitance, and effective RCs. This latest release of CPM adds inductive effects for higher accuracy.
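A trivial calculation shows why modeling the whole power-delivery network matters: even a purely resistive sketch of the board, package, and on-die grid predicts measurable supply droop at the die. All values below are invented and far simpler than Apache’s Chip Power Models, which also capture the capacitive and inductive effects mentioned above.

```python
# Minimal illustration of power-delivery-network IR drop across the
# chip-package-board stack. All resistances and currents are invented
# round numbers, far simpler than a real chip power model.
VDD = 1.0                                     # volts at the regulator
R_PCB, R_PKG, R_GRID = 0.002, 0.010, 0.020    # ohms per segment, assumed
I_CHIP = 2.0                                  # amps of switching current

droop = I_CHIP * (R_PCB + R_PKG + R_GRID)     # series IR drop to the die
v_die = VDD - droop
print(f"voltage at the die: {v_die:.3f} V ({100 * droop / VDD:.1f}% IR drop)")
```

Even in this crude sketch the die sees several percent less than the nominal supply, which is why co-designing the chip, package, and board network (and adding decoupling where the model shows the droop) beats analyzing each piece in isolation.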

To address the current gaps in functional verification, designers are raising the abstraction level of design entry. At DAC this year, we will learn about advances in functional verification of these abstract SystemC/C++ models. In particular, there will be presentations covering both simulation and formal verification techniques, with special focus on handling concurrency.

Verification above RTL may be appealing, but designers can be stymied by the lack of model availability. Where do models come from? And how long do they take to create? At DAC, Carbon Design Systems will demonstrate a speedier version of its Carbon Model Studio, a tool for automatic generation, validation, and implementation of hardware-accurate software models. The upgrade of Carbon Model Studio includes a key debug improvement that provides increased visibility into complex design constructs. It offers full visibility into named and unnamed generated blocks, VHDL composite types, and multidimensional arrays, including nested arrays and composites.

In addition, Carbon Models have new API calls for accessing design constructs.

With improved model validation, designers can control the model-validation generation process from inside Carbon Model Studio through a new component editor. The Model Validation component’s mixed-language shadow hierarchy matches that of the original design for tighter integration into complex testbenches, assertion languages, and custom validation environments.

Carbon Model Studio, shipping now, is available for Solaris and PC platforms running Linux and Windows. Pricing for the complete model-generation and execution environment is “use-model” dependent and starts at $20,000.
