Design For Manufacturing Sheds The Hype

June 11, 2009
Prematurely touted as the "next big thing" some years ago, DFM has found its proper place—tightly integrated with physical implementation—and is ready for prime time at last.

Four to five years ago, the hype surrounding design-for-manufacturing (DFM) technology for advanced system-on-a-chip (SoC) design was near insufferable. At that time, 90 nm was the state-of-the-art process node and most fabless houses were preparing for a shrink down from the 130-nm node. And without some way of feeding process parameters back into the design side, the likelihood of any chip yielding at 90 nm was slim to none.

This set off a bit of panic among the design community on the one hand and a feeding frenzy among venture capitalists and would-be DFM startups on the other. Indeed, a rather large number of startups emerged in the DFM space. Almost any tool that touched the back end was being called a “DFM” tool for one reason or another, speciously or not. Just as quickly, a backlash from the DFM-have-nots arose, with accusations of “design for marketing” hurled at those who didn’t fit into the more rigorous definitions of what DFM was supposed to be about. Today, with the 65-nm node firmly entrenched and foundries ramping up their 40-nm processes, the DFM picture has changed quite a bit.

The hype isn’t as strident these days, but DFM remains an essential element of SoC/ASIC implementation flows. In fact, it’s more vital than ever and will become more so with the coming process shrinks to 40 nm and below. DFM is even becoming a factor in analog/mixed-signal flows for RFICs (see “The Mixed-Signal Angle On DFM”).

WHY DFM STALLED
The marketplace saw a surge of interest and activity in the DFM arena in 2004-05 (see “The Truth About Design For Manufacturing”). A number of startups appeared, such as Aprio, Blaze DFM, Clear Shape Technologies, Ponte Solutions, and Praesagus, all of which purported to hold the key to achieving acceptable yields at 90 nm and below. But some significant issues still had to be overcome.

For one, there was accounting for process data. “When people went to fabless or so-called ‘fab-lite’ models, they couldn’t do classic DFM,” says Michael Buehler-Garcia, director of DFM products at Mentor Graphics. “At IBM, process engineers could tune the process to handle the on-the-edge parameters of IBM’s own designs as opposed to changing the design to suit the process. But if you’re a small fabless house, it’s not going to be practical for a merchant foundry to adjust their process for your design.”

As a result, the fabless community often tended to grossly over-margin its designs to ensure printability. Designers would be forced to sacrifice area in the process, to say nothing of power and speed. But the inability to obtain foundry process data made efforts at yield optimization a shot in the dark at best.

Another significant roadblock to widespread DFM adoption was the fact that the raft of standalone DFM tools from the startups of 2004-05 were limited in terms of optimizing for yield. “The DFM startups saw the future but they lacked the back-end implementation piece,” says Dave Desharnais, group director of IC digital products at Cadence Design Systems. “The tools could tell you where things were going to be a problem but couldn’t tell the implementation flow how to fix it. So these guys were relegated to abstractly proving their technology to the physicists at these fabless companies.”

MAKING DFM VIABLE
Two things have happened since then to make DFM a viable technology. For one, the foundries, notably Taiwan Semiconductor Manufacturing Co. (TSMC), have figured out a way to disseminate process data so EDA tools can use it without compromising the foundries’ trade secrets. Second, those 2004-05 DFM startups with useful analysis capabilities were subsumed by the EDA industry’s RTL-to-GDSII houses, where their technology could be closely linked with those vendors’ implementation flows.

Since then, TSMC and other foundries have worked with the EDA ecosystem to better share process data so the EDA flow has access to it. “The biggest change in DFM in recent years has been that we’ve found ways to package process data and have it available to the DFM tools,” says Tom Quan, TSMC’s deputy director of design services marketing. “Then the EDA tools can tell the designer about lithography hot spots that need attention, and the latest generation of tools can also automatically fix these hot spots.”

The mechanism by which TSMC now shares process data is called DFM data kits (DDKs). TSMC monitors its lithography processes over time and gathers relevant manufacturing data that impacts yield. The way the data is packaged protects TSMC’s “secret sauce,” but exposes it to the EDA vendors’ flows. “We work with the EDA ecosystem partners to ensure their tool reads this data properly and uses it to analyze a given design in context of the process,” says Quan.

For the 90- and 65-nm nodes, TSMC’s DDKs encapsulate manufacturing data relevant to lithography, chemical-mechanical polishing (CMP), and critical-area analysis (CAA) of random and systematic defects caused by the process itself.
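TSMC does not publish the internal format of its DDKs, but as a purely illustrative sketch, the kind of per-node lithography, CMP, and critical-area data such a kit bundles might be organized along these lines (every name and value below is hypothetical, not TSMC's):

```python
from dataclasses import dataclass, field

@dataclass
class LithoData:
    # Hypothetical: process-window parameters a litho checker might consume
    wavelength_nm: float = 193.0
    min_feature_nm: float = 65.0
    hotspot_patterns: list = field(default_factory=list)  # known problem geometries

@dataclass
class CmpData:
    # Hypothetical: targets used to predict copper "hills" and "valleys"
    target_thickness_nm: float = 300.0
    density_window_um: float = 50.0

@dataclass
class CriticalAreaData:
    # Hypothetical: defect-size distribution for random-defect yield estimates
    defect_sizes_nm: list = field(default_factory=lambda: [45, 65, 90])
    defect_rates_per_cm2: list = field(default_factory=lambda: [0.02, 0.01, 0.005])

@dataclass
class DfmDataKit:
    """Illustrative container for the litho/CMP/CAA data a foundry kit might carry."""
    node_nm: int
    litho: LithoData
    cmp: CmpData
    critical_area: CriticalAreaData

ddk_65 = DfmDataKit(65, LithoData(), CmpData(), CriticalAreaData())
```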

Last year, however, TSMC began looking ahead to the nodes beyond 65 nm. “The DDKs contain all the necessary information about hot spots that the EDA tools need to detect them. But for each tool vendor, we need to ensure that the way their tools detect hot spots is the same way we see it in manufacturing,” says Quan. The result was a new Unified DFM (UDFM) architecture announced last year.

When DFM-aware EDA tools read in the data from TSMC’s DDKs, the tools perform abstraction steps that result in non-convergence with TSMC’s own DFM engines used in manufacturing. Therefore, the UDFM architecture now encapsulates not just the process data, but TSMC’s DFM engine as well.

Thus, the UDFM architecture encompasses a centralized DFM model, data, recipe, rule deck, and engine all in a single package (Fig. 1). Instead of the EDA tools simulating lithography and finding hotspots themselves, with the UDFM architecture, they’ll use an application-programming interface (API) to take advantage of the TSMC engine, which is the same one utilized by the foundry. This solves the non-convergence issue for process nodes below 40 nm. “We’ll be okay at 40 nm with the DDKs,” says Quan.
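Conceptually, the UDFM shift is from every EDA tool running its own lithography abstraction to every tool calling the same foundry-supplied engine through an API. The Python sketch below only illustrates that delegation idea; the real interface is not public, and none of these names come from TSMC or its partners:

```python
from abc import ABC, abstractmethod

class FoundryDfmEngine(ABC):
    """Hypothetical stand-in for a foundry-supplied hotspot engine exposed via an API."""
    @abstractmethod
    def find_hotspots(self, layout_window):
        """Return hotspot locations for one window of layout geometry."""

class RouterWithUdfm:
    """Illustrative: the router delegates detection rather than running its own litho model."""
    def __init__(self, engine: FoundryDfmEngine):
        self.engine = engine  # the same engine the foundry uses, so results converge

    def check_window(self, layout_window):
        return self.engine.find_hotspots(layout_window)
```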

According to John Lee, VP of research and development at Magma Design Automation, the process of working with TSMC is very much an R&D collaboration. “Because the process is quickly evolving, it’s changed a lot from 65 to 45 nm,” says Lee. “It’s been a learning experience on both sides. The good news is the future is bright and we’re converging on a stable and supportable model. The UDFM architecture is a good first step toward that.”

THE TOOL PERSPECTIVE
With the process-data bottleneck out of the way, the issue then becomes how the RTL-to-GDSII flows use that data to correct for process variability in the design cycle. One of the historical issues that has stymied better DFM efforts is the lack of commonality between design teams and the foundries, right down to their understanding of what makes up a chip and the terms they use to describe it.

“Design and manufacturing groups speak very different languages,” says Mentor Graphics’ Michael Buehler-Garcia. Foundries speak of chips in terms of layers because that’s how they’re manufactured. Further, foundries traffic in defects per layer; there are no errors in the fab, but rather defects to be corrected. On the design side, however, layers are not the issue but rather hierarchy and functional blocks.

“The designer doesn’t know or care what layers are involved in making a block work. It’s not their problem,” says Buehler-Garcia. In addition, for designers, it’s all about design errors (bugs) that translate into timing and power issues with reference to the target specification.

In the arena of “classic DFM,” a tool such as Mentor Graphics’ Calibre can serve as the translator between the designers’ world of hierarchies and bugs and the foundry’s world of layers and defects (Fig. 2). “Getting the data isn’t enough,” says Buehler-Garcia. “It has to be converted into what the designer understands so that he gains value in the design process.”
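As a rough picture of that translation step (not how Calibre actually works), mapping a foundry-style per-layer defect report back onto the designer's hierarchy can be thought of as a bounding-box lookup. Everything in this sketch is hypothetical:

```python
def map_defects_to_blocks(defects, blocks):
    """Translate foundry-style (layer, x, y) defects into design-hierarchy block names.

    defects: list of dicts like {"layer": "M3", "x": 10.0, "y": 20.0}
    blocks:  list of dicts like {"name": "cpu/alu", "bbox": (x0, y0, x1, y1)}
    Purely illustrative; a real flow works on the physical database, not dicts.
    """
    report = []
    for d in defects:
        for b in blocks:
            x0, y0, x1, y1 = b["bbox"]
            if x0 <= d["x"] <= x1 and y0 <= d["y"] <= y1:
                report.append((b["name"], d["layer"], (d["x"], d["y"])))
    return report

print(map_defects_to_blocks(
    [{"layer": "M3", "x": 12.0, "y": 25.0}],
    [{"name": "cpu/alu", "bbox": (0, 0, 50, 50)}]))
```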

Translation of terms and their parameters is part of the puzzle. But for DFM to work on a deep level, delivering data back into the design process, there needs to be tight integration between the front- and back-end tools. There’s been an ongoing transition from rules-based DFM to a model-based paradigm, or perhaps a hybrid approach, says Magma’s John Lee. “Much of the DFM work we’ve done recently is based on physical models,” says Lee. “This kind of checking is more accurate than using rules.”

Magma has banked on a unified data model to underpin what Lee terms a “surgical correction methodology” for DFM issues. Further, the DFM methodology must be built into the design system. “It’s not enough for a DFM tool to tell the designer that there’s a bridging hotspot in the design. It has to indicate how to correct it. And if you do make a change, what’s the impact on timing? Did it impact quality of results? Can the tool handle engineering change orders (ECOs)? Standalone analysis tools permit none of the above,” says Lee.

Typically, place-and-route tools make changes in a chip layout on a routing-grid level. Many manufacturing issues can be solved by nudging certain edges or geometries on that grid. By doing so, the potential hot spot is removed with minimal impact on timing. But the key, says Lee, is a unified data model. “It all needs to be part of the same data model so that it can be minimally invasive for surgical fixes,” says Lee. In methodologies based on point tools, ECOs must be done manually at the GDSII level. “It takes too long, it’s totally manual, and you lose data integrity. You can’t retime, reroute, and replace. All the fixes made automatically in the design system are lost.”
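Lee's "surgical correction" amounts to a loop of the following shape: nudge the offending edge by one grid step, re-check the hotspot and the timing incrementally, and revert if the change does more harm than good. The sketch below is a conceptual illustration with made-up callback names, not Magma's implementation:

```python
GRID_NM = 5  # assumed manufacturing-grid step, in nanometers

def surgical_fix(hotspots, nudge_edge, hotspot_gone, timing_ok, undo):
    """Illustrative 'surgical correction' loop; the callbacks are hypothetical stand-ins
    for operations a unified design database would provide."""
    for spot in hotspots:
        nudge_edge(spot, GRID_NM)            # minimal, grid-aligned geometry change
        if hotspot_gone(spot) and timing_ok(spot):
            continue                          # fix accepted inside the design system
        undo(spot)                            # otherwise revert; no manual GDSII ECO needed

# Toy usage: one "hotspot" that the nudge always cures
surgical_fix(["spot0"],
             nudge_edge=lambda s, step: None,
             hotspot_gone=lambda s: True,
             timing_ok=lambda s: True,
             undo=lambda s: None)
```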

Magma’s Quartz DFM is an example of a DFM tool that’s tightly tied to the implementation flow, using foundry-certified analysis engines to detect areas of a chip layout with printability problems. It performs lithography process checking (LPC), critical-area analysis (CAA), and chemical-mechanical polishing (CMP) analysis to uncover problem areas. It then uses pre-defined, user-editable recipes to automatically correct the layout along the manufacturing grid.

Not only Magma, but all of the major RTL-to-GDSII EDA vendors have stressed this sort of DFM integration with the design flow. Synopsys uses that integration to emphasize prevention, says Saleem Haider, senior director of marketing for physical design and DFM. “A few years ago, all of the DFM offerings were in the post-layout surgery realm,” says Haider. “A better approach, we feel, is in-design prevention and correction of manufacturability issues in context of the design constraints.”

PRACTICAL VS. ADVANCED
The DFM approach taken by Synopsys splits the art into two realms: one is practical DFM that’s handled natively in place and route; the other involves advanced techniques that are expected to become critical at the 32-nm node. The former includes items such as timing, power, and critical area; via elimination and duplication; wire widening and shielding; metal fill; lithography hotspot avoidance; design-rule checking (DRC); and soft rules. The latter features items such as lithography checking, CMP simulation, and handling of stress between devices on the silicon itself.
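As a loose illustration only (the option names below are invented, not Synopsys settings), the "practical" half of that split boils down to a set of DFM knobs the place-and-route tool applies natively:

```python
# Hypothetical in-route DFM settings; every key here is illustrative, not a real tool option
practical_dfm = {
    "redundant_vias": True,         # duplicate single vias where space allows
    "via_elimination": True,        # remove unneeded vias on non-critical nets
    "wire_widening": True,          # widen non-critical wires for printability
    "wire_shielding": "clocks",     # shield sensitive nets
    "metal_fill": "timing_aware",   # fill for CMP uniformity without hurting timing
    "litho_hotspot_avoidance": True,
    "soft_rules": "warn",           # recommended rules treated as warnings, not errors
}
```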

“As for the advanced techniques, we are in the mode of monitoring these things to watch how the need for them develops,” says Haider. “They are all still in early phases of development.”

For now, Synopsys’ DFM approach is best exemplified in the recently announced IC Validator tool for physical verification. Centering on the practical DFM issues outlined above, IC Validator directly addresses the trend toward physical verification during the design process.

Rather than an iterative, implement-then-verify-then-implement-again approach, IC Validator is an attempt to bring physical design and verification together, enabling designers to build functional blocks and verify them immediately. “In this fashion, physical verification is done by the physical design team,” says Haider. “There are no handoffs and no iterations required.”

IC Validator derives much of its power and scalability from a new analysis engine that supports multicore processing. “It enables us to divide the chip into chunks amongst multiple CPUs,” says Haider. “It also lets us divide up the design-rule deck.” The result of parceling out the design and/or rule deck to multiple CPUs is near-linear scalability (Fig. 3).
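The scaling approach Haider describes, splitting both the layout and the rule deck across cores, can be sketched in a few lines; the tiling and rule names here are hypothetical stand-ins, not IC Validator internals:

```python
from itertools import product
from multiprocessing import Pool

def check_tile(args):
    """Hypothetical worker: run one rule on one tile of the layout."""
    tile, rule = args
    return f"{rule} on tile {tile}: clean"   # a real checker would return violations

def parallel_drc(tiles, rules, workers=4):
    # Fan (tile, rule) pairs out across CPUs; scaling stays near-linear as long as
    # tiles are balanced and interactions at tile seams are handled separately.
    with Pool(workers) as pool:
        return pool.map(check_tile, list(product(tiles, rules)))

if __name__ == "__main__":
    print(parallel_drc(tiles=[(0, 0), (0, 1)], rules=["min_spacing", "min_width"]))
```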

To facilitate signoff-quality rule checking, Synopsys’ hybrid processing engine operates on polygons as well as edges. “Most advanced rules at 32 nm are captured in terms of edges,” says Haider. “With simple rules, we can capture them as polygon rules and process those very fast.”

To speed the process of physical verification, Synopsys implemented the application-specific Programmable eXtendible Language (PXL), which foundries use to create rule runsets. “PXL makes the task of capturing rules more efficient and the processing of them faster,” says Haider. Use of the language can result in runsets that are two to 10 times smaller than when utilizing TCL scripts.

IC Validator, which is tightly integrated with Synopsys’ IC Compiler design environment, is available as a pushbutton flow from inside IC Compiler. Thus, it performs signoff-qualified, in-design DRC as incrementally as the designer wishes. It also automatically detects and corrects violations. In addition, the tool performs pushbutton metal fill faster and with greater density than if it’s done within place and route or in the signoff tool, says Haider.

BRINGING DFM TO THE COCKPIT
Primarily through acquisitions, Cadence has put together a full suite of DFM products to complement its implementation platform. Its 2007 acquisition of Clear Shape Technologies enabled lithography analysis to be built into the implementation flow in the form of two tools.

The first, called the Cadence Litho Physical Analyzer, marks the rebranding of Clear Shape Technologies’ InShape product. The tool takes stock of the physical geometries in the design and calculates the contours for this geometry when actually printed on silicon. Those contours drive the optimization engine.

The second tool, dubbed the Cadence Litho Electrical Analyzer, covers the electrical side of the analysis. It also extracts contours for the printed geometries and feeds them into timing and power analysis engines.

For CMP analysis, Cadence acquired Praesagus and its CMP Predictor. There are two primary use models for what is now known as the Cadence CMP Predictor: prediction of copper “hills” and “valleys,” and finding hotspots. The tool’s results drive intelligent metal fill, which chooses the optimum shape and amount of metal for a given area. What often passes for an alternative is a “metal-everywhere” approach, which carries a significant downside in the form of increased coupling capacitance.
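The fill tradeoff can be pictured with a toy density calculation: add only enough fill metal to reach a CMP density target, rather than filling every empty area, since added metal near signal wires raises coupling capacitance. The function and numbers below are illustrative only:

```python
def fill_needed(window_area_um2, signal_metal_um2, target_density=0.5):
    """Toy model: how much fill metal a density window needs to hit a CMP target."""
    current = signal_metal_um2 / window_area_um2
    return max(0.0, (target_density - current) * window_area_um2)

# "Metal everywhere" fills the whole empty area; intelligent fill adds only what is
# needed, leaving spacing to signal wires and therefore less added coupling capacitance.
window, signal = 2500.0, 900.0         # um^2, illustrative numbers
print("minimum fill:", fill_needed(window, signal), "um^2")
print("metal everywhere:", window - signal, "um^2")
```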

According to Cadence’s Dave Desharnais, making CMP and lithography analysis part of the design flow has become even more important. At its recent technology symposium, TSMC declared both items mandatory checks on the design side before design data is sent to the foundry.

“That means the design guys can’t call it a foundry problem anymore,” says Desharnais. “This opens up huge opportunities for us,” he says, and presumably for all of the RTL-to-GDSII vendors. It’s also an opportunity for the design community to take responsibility for DFM.
