Cost-Aware Design Methodology

Sept. 1, 2007

Seldom does a design team receive carte blanche—at whatever the cost—to meet performance or power specifications. But given the partitioning and segmentation of many IC design flows, it’s not uncommon for a particular team to optimize for one goal, say performance, without focusing on the implications it could have on the overall cost of the packaged chip.

Even the most conscientious and experienced designers know it isn’t easy to accurately predict the tradeoffs being made while keeping a design within functional specification boundaries. Will the architectural change adversely affect the overall power consumption? Will increasing performance cause the I/O count to increase so much that a ceramic package will be required?

Optimizing designs during implementation starts to feel like a game of whack-a-mole. Sometimes the next mole is so much larger and appears so much later in the design cycle that the whole design project risks getting, well, whacked.

Chip estimation systems have been used for years to accurately predict a chip’s die area, power, leakage, and cost before an IC design project gets the go-ahead (or before quoting a packaged die cost to an external customer). Yet a new methodology of using prediction systems alongside traditional EDA flows promises to deliver cost-awareness in a non-intrusive way.

It’s sort of like enhancing EDA with a fast side game of whack-a-mole. By telling the design team at any stage of the IC design process where the moles are, this methodology lets designers navigate around them without wasting scarce and costly implementation resources and time.

The Role of Chip Estimation

Designers use chip estimation simply to avoid making mistakes. Mistakes are costly, and they’re becoming even more likely given the growing number of IP, technology, process-node, and architectural options.

Current chip-estimation methods span a broad spectrum, from back-of-the-napkin calculations to spreadsheet analysis, sophisticated algorithms, and IP and technology data embedded in full-blown chip-prediction systems. All of these methods are fast, delivering answers in seconds or minutes, but their accuracy varies widely.

The most advanced chip-estimation systems combine macromodels of the data used by design-implementation systems with a user’s high-level design intent to generate comprehensive chip plans in seconds that are accurate to within 95% of silicon. The plans translate the user’s chip description or specification into a prediction of the reality they’ll face once the design is manufactured.

Higher-end chip-estimation systems also deliver comprehensive cost data, acknowledging that achieving the desired functionality is seldom enough and that falling within a certain cost range is what often dictates the go/no-go decision.

Factors contributing to production chip cost should include volume-based package pricing. Silicon wafer pricing and defect density data allow systematic yield analysis, providing “good die” cost, test and assembly costs, and nonrecurring-engineering (NRE) cost data based on process-specific mask cost estimations.
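To make the “good die” cost arithmetic concrete, here is a minimal sketch of a defect-limited die-cost calculation. The Poisson yield model and every number below (wafer cost, diameter, defect density) are illustrative assumptions, not figures from any particular estimation system; production tools use foundry-specific yield models and pricing data.

```python
import math

def dies_per_wafer(wafer_diameter_mm, die_area_mm2):
    """Gross dies per wafer: wafer area over die area, minus an edge-loss term."""
    radius = wafer_diameter_mm / 2
    return int(math.pi * radius**2 / die_area_mm2
               - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2))

def poisson_yield(die_area_mm2, defects_per_cm2):
    """Poisson defect-limited yield: exp(-A * D0)."""
    defects_per_mm2 = defects_per_cm2 / 100.0
    return math.exp(-die_area_mm2 * defects_per_mm2)

def good_die_cost(wafer_cost, wafer_diameter_mm, die_area_mm2, defects_per_cm2):
    """Cost of one good (yielding) die, before test, assembly, and NRE."""
    gross = dies_per_wafer(wafer_diameter_mm, die_area_mm2)
    return wafer_cost / (gross * poisson_yield(die_area_mm2, defects_per_cm2))

# Hypothetical inputs: $3000 300-mm wafer, 50-mm^2 die, 0.5 defects/cm^2.
cost = good_die_cost(3000, 300, 50, 0.5)  # roughly $2.90 per good die
```

For larger dies, the Poisson model becomes pessimistic; Murphy’s and negative-binomial models are common refinements, which is one reason estimation systems let users substitute foundry-reported defect data.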

The economic models that drive cost-calculation engines typically use statistical data reported by the Fabless Semiconductor Association. The more flexible systems let users tailor estimates based on their own experience or foundry-partner data. Some even offer return-on-investment (ROI) analysis, calculating the timeframe and volume over which NRE costs will be amortized, as well as eventual design profitability.
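At its simplest, the ROI analysis described above amounts to amortizing NRE over per-unit margin. A minimal sketch, with wholly hypothetical numbers:

```python
import math

def breakeven_volume(nre_cost, unit_price, unit_cost):
    """Units that must ship before per-unit margin recovers the NRE."""
    margin = unit_price - unit_cost
    if margin <= 0:
        raise ValueError("non-positive margin: NRE is never recovered")
    return math.ceil(nre_cost / margin)

# Hypothetical example: $1M in mask/NRE costs, a $10.00 selling price,
# and a $6.00 packaged-and-tested unit cost leave a $4.00 margin per unit.
volume = breakeven_volume(1_000_000, 10.00, 6.00)  # 250,000 units to break even
```

Real ROI engines layer in volume-based package pricing, price erosion over time, and shipment ramps, but the break-even volume above is the core quantity being computed.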

With advanced systems, estimation results are formatted into charts, reports, and tables describing die area usage, chip bonding, dynamic and static power consumption, yield, and production chip cost. Given the speed of fast automated chip-estimation systems, it’s possible to generate and compare results for different scenarios and then tune the best plan.
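Because each estimate takes only seconds, scenario comparison can be as simple as filtering candidate plans against a constraint and ranking the survivors by cost. A minimal sketch; the scenario names and figures below are invented for illustration, not output from any real estimation system:

```python
# Hypothetical what-if scenarios, as a chip-estimation run might summarize them.
scenarios = {
    "90nm_flip_chip": {"unit_cost": 8.40, "power_mw": 950},
    "90nm_wirebond":  {"unit_cost": 7.20, "power_mw": 1000},
    "130nm_wirebond": {"unit_cost": 6.10, "power_mw": 1400},
}

POWER_BUDGET_MW = 1100  # assumed system power budget

# Keep only scenarios that meet the power budget, then pick the cheapest.
feasible = {name: s for name, s in scenarios.items()
            if s["power_mw"] <= POWER_BUDGET_MW}
best = min(feasible, key=lambda name: feasible[name]["unit_cost"])
# Here the cheapest overall option (130nm_wirebond) busts the power budget,
# so the comparison selects 90nm_wirebond instead.
```

The point is not the toy numbers but the workflow: the cheapest scenario in isolation is not the winner once a second constraint is applied, and fast estimation makes that tradeoff visible before implementation begins.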

Multiple iterations on a selected plan ensure that the optimal solution space is located and that the resulting specification provides the best starting point for the design. The chip plan is then handed to the implementation team as Verilog headers, a bill of materials for the selected IP, constraints, and floorplan data, all exportable in industry-standard formats to guide the plan through Cadence, Magma, Mentor, and Synopsys design flows.

New Use Model for Chip Estimation

Chip-estimation tools currently automate the “feed-forward” from early chip planning into the EDA flow. But everyone knows that plans change. This is the primary reason why chip estimation is now being used alongside design-implementation tools.

What about using the estimators throughout implementation, checking back with a plan that has evolved based on implementation decisions or constraints? What about using them not as a replacement, but as a way to assess the overall impact of decisions about to be made, design walls to overcome, or just changes dictated by new needs for functionality based on market requirements?

Companies are starting to use chip-prediction systems to explore way-down-the-road options after design implementation has commenced. This is much faster and more resource-efficient than using a design implementation team and tools for exploration and estimation.

In an IC design flow, exploration with classic EDA tools can take hours, even days. Offloading this to the chip-estimation system slashes the time to minutes and can deliver significant time-to-market advantages for a project.

This use model requires the original plan—or a new plan—to be manually updated based on the changes made since the specification-to-implementation handoff occurred. But automatically extracting the data from EDA tools to “feed-back” into chip-planning systems is in development. This will make it much easier to keep chip plan and implementation design in sync and greatly simplify the use of chip estimation throughout the design implementation flow.

Enhancing existing EDA flows with chip estimation finally gives logic and back-end designers insight into the impact, including cost, that potential changes will have on the packaged chip. Automating the interaction between implementation and estimation systems will enable more moles to be whacked during the critical stages of a design, when the stakes are highest.
