It was a steamy morning in late July when I drove into Manhattan for a breakfast meeting with Chuck Byers of TSMC and Anna del Rosario of Altera. Ostensibly, the meeting was to discuss TSMC's ongoing manufacturing partnership with Altera.
Because I had an upcoming Technology Report on my plate on the subject of design for manufacturing (DFM), I decided to pick Byers' brain a bit. Chuck's first comment on DFM was to remind me that the "D" stands for design. "DFM is a design problem," said Byers.
This got me thinking: If DFM is a design problem, perhaps I should approach my report from that standpoint. What, if anything, can front-end designers do with today's EDA methodologies to positively influence their designs' manufacturability and yields? That premise guided me in my interviews for this story.
I had my first conversation with Cadence. Upon explaining my angle for this story, Mark Miller, Cadence's VP for DFM business development, said that there are two ways to define DFM. "There's DFM as it is," said Miller, "and there's DFM as it should be."
THE STATE OF DFM
DFM "as it is" consists largely of traditional physical verification, optical-proximity correction (OPC) and reticle-enhancement technologies (RET), and some mask-data preparation. "It's a lot of reactive behaviors after the design has reached what we used to call signoff," says Miller.
Why is DFM necessary in the first place? Mostly because at today's subwavelength geometries, the structures being printed in the fabs are smaller than the wavelength of the light they're printed with. Current 193-nm steppers simply don't have the resolution to accurately render what the designer drew. The shapes of the structures represented in the GDSII are distorted in the lithography process; thus, they often require correction via OPC and/or RET techniques (Fig. 1).
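The resolution limit behind all of this is captured by the standard Rayleigh criterion, half-pitch = k1 x lambda / NA. The sketch below is illustrative only; the numerical-aperture and k1 values are typical assumptions for steppers of this era, not figures from the article. It shows why 90-nm and 65-nm features force the heavy OPC/RET (low-k1) regime.

```python
# Illustrative sketch: the Rayleigh resolution criterion explains why
# 193-nm lithography struggles below 90 nm. NA and k1 values here are
# assumed, era-typical numbers, not figures from the article.

def min_half_pitch(wavelength_nm: float, na: float, k1: float) -> float:
    """Minimum printable half-pitch per the Rayleigh criterion."""
    return k1 * wavelength_nm / na

WAVELENGTH = 193.0   # nm, ArF excimer stepper
NA = 0.85            # assumed numerical aperture

# Without aggressive OPC/RET, k1 stays near 0.5:
plain = min_half_pitch(WAVELENGTH, NA, 0.5)
# Heavy OPC/RET pushes k1 toward ~0.3:
enhanced = min_half_pitch(WAVELENGTH, NA, 0.3)

print(f"half-pitch at k1=0.5: {plain:.0f} nm")     # ~114 nm
print(f"half-pitch at k1=0.3: {enhanced:.0f} nm")  # ~68 nm
```

In other words, 65-nm-class features are only reachable by driving k1 down with the very OPC/RET corrections the article describes.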
What, then, should DFM be? Clearly, if it mostly comprises efforts to compensate for the steppers' inability to resolve the microscopic structures intended by designers, that's not design for manufacturing. For Miller, what DFM should be is "anticipating and compensating for mechanical, electrical, and lithographic process attributes and variations during the design phase."
While intriguing, Miller's comments weren't quite what I was expecting. Neither Miller nor David Thon, Cadence's DFM marketing manager, ever got around to explaining how designers themselves could directly influence yield.
Yet Miller and Thon did point me in an interesting direction, one that proved to be a persistent theme in subsequent discussions. "When you talk to TSMC again, ask them about the Group 2 manufacturing advisory rules and 90-nm and below rule decks that they're building," said Miller.
Foundries such as TSMC have two sets of design-rule checking (DRC) for incoming GDSII. "In the good old days," said Miller, "they just gave you a DRC deck. Now this second set of rules has been added to the puzzle. They're not pass/fail rules, but rules centered around ranges of values."
The implication of the Group 2 rules, said Miller, is that the foundry is pushing the onus for manufacturability back onto the designer. "They're saying, we guarantee we can make you one of these. But because of variations and distortion factors, you may get a range of values for various parameters," said Miller.
Next, I spoke with Sameer Patel, senior director of the Design Implementation Business Unit at Magma Design Automation. He described the differences between catastrophic manufacturability issues (shorts and/or opens) and issues related to parametric (or statistical) variation and reliability. Examples of parametric variation include capacitance or resistance increases that affect timing but don't necessarily cause total failure.
Some defects are caused by random particles: sometimes extra material is deposited, sometimes material is missing. There's little designers can do to minimize these.
Other defects are systematic. These are inherent in the processes that go into chipmaking. Lithography problems, such as the steppers' lack of resolution, fall into this category.
Processes like chemical/mechanical polishing (CMP), used to planarize the wafer, can cause substantial variation in interconnect thickness across the wafer and even the die.
When designers model their interconnects, they usually assume a constant thickness. At the pending 65-nm process node, CMP is apt to cause variations in interconnect thickness, and hence in resistance and capacitance, of up to 40% versus what they modeled.
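A back-of-the-envelope model shows why a 40% thickness swing matters. This sketch is my own illustration, not Magma's math: it uses a simple sheet-resistance model (R = rho x L / (w x t)) and a parallel-plate model of sidewall coupling capacitance, with assumed 65-nm-era dimensions.

```python
# Toy illustration (assumed numbers, not from the article) of how a
# CMP-induced thickness change propagates into interconnect R and C.
RHO_CU = 1.7e-8        # ohm*m, resistivity of copper
EPS = 3.5 * 8.854e-12  # F/m, assumed low-k dielectric permittivity

def wire_r(length_m, width_m, thickness_m):
    """Sheet-resistance model: R = rho * L / (w * t)."""
    return RHO_CU * length_m / (width_m * thickness_m)

def sidewall_c(length_m, thickness_m, spacing_m):
    """Parallel-plate model of coupling cap to a neighboring wire."""
    return EPS * length_m * thickness_m / spacing_m

L, W, S = 100e-6, 100e-9, 100e-9   # 100-um wire, 65-nm-era pitch
t_nom = 200e-9                     # nominal thickness (assumed)
t_thin = t_nom * 0.6               # 40% thinner after CMP dishing

r_shift = wire_r(L, W, t_thin) / wire_r(L, W, t_nom) - 1
c_shift = sidewall_c(L, t_thin, S) / sidewall_c(L, t_nom, S) - 1
print(f"R shift: {r_shift:+.0%}, sidewall-C shift: {c_shift:+.0%}")
# Resistance rises ~67% while coupling capacitance drops 40% -- a
# swing that a constant-thickness extraction model simply cannot see.
```

The opposing movement of R and C is exactly what makes this a parametric, timing-level problem rather than a simple pass/fail one.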
A third classification of defects is the "unknown" type. Generally related to process variability, these effects simply can't be known or determined during the design cycle, making them nearly impossible to model or anticipate. They're usually parametric variations that will affect timing and must be addressed after layout.
So it's basically the systematic variations and, to a lesser extent, the parametric variations that the design engineer has at least some chance of addressing.
According to Magma's Patel, rules will only get you so far. "The recommended rules are so vague and so padded that if one were to try to obey them all, they'd never meet their timing or area constraints," he said. As a result, the recommended rules are only followed wherever possible. At other times, designers ignore them and follow only the required rules. "So there needs to be a better mechanism of controlling layout dependency than rule decks," said Patel.
MODELS, NOT RULES
Just what might that better mechanism be? From Magma's perspective, the way to go is a model-based paradigm. Continued reliance on rules to compensate for process variance is a method bound to break down. For one thing, today's bloated rule sets result in very long run times for DRC tools. For another, the rules are, by definition, generic; they can never be specific to the design in question.
Indeed, Magma's Blast Yield tool makes some progress with what the company calls "lithography-aware routing." This entails lithography simulation that enables tradeoffs between systematic and random yield-loss factors.
Even at Mentor Graphics, the bastion of OPC/RET technology with its Calibre family of post-layout processing tools, the feeling is that model-based technologies are the wave of the future.
"Eventually, OPC will be dominated by models with a little bit of rules," said Joseph Sawicki, VP and GM of Mentor's Design-to-Silicon Division. "First, you identify problems and target them with rules until eventually there's so many problems that you need to have a model."
Mentor is currently beta-testing a technology that performs what Sawicki calls "printability analysis," a form of pre-mask verification.
"One of the reasons DFM became a hot topic wasn't because people were getting 48% yield when they wanted 60%. It was because they got zero," said Sawicki. "This tool, which will be released to the foundries, will be critical to make sure we don't have the zeroes anymore. We've been successful in identifying issues, stopping masks from being produced, and then retargeting the OPC to produce yield."
Another aspect of improving yields, which may or may not qualify as "DFM," is the level of communication between the design and manufacturing sides of the equation.
"If you could look at the lithography impact on your specific design for a specific process, then you could predict what you need to do to avoid those manufacturing problems," said Magma's Patel. "That's the gulf between design and foundry. There has not been enough information, or the right information, flowing back from the foundry to the design side." That comment served as a springboard for my talk with Aprio, a startup that sees communication between the design team and foundry as a huge stumbling block.
"Two of the root causes of communication problems are incompatible methodologies and data models," said Aprio's CEO, Mike Cianfagna. "On the methodology side, designers think 'hierarchical and incremental.' They reuse data and save results for later. On the manufacturing side, mask data is processed flat and sequentially, and nothing is ever reused."
The other area of incompatibility is data models, where designers think in terms of circuit elements and polygons. Manufacturing people think in terms of geometric shapes and process information.
Aprio has announced two key technologies to date. Both target integrated device manufacturers (IDMs) and foundries. One performs incremental optical-proximity correction, bringing a hierarchical and incremental mindset to the manufacturing side. Previous OPC results can be reused, and problem areas in a mask can be cut out, reprocessed for better OPC results, and stitched back in seamlessly.
The data-model incompatibility is addressed by Aprio's Trinity data model, which provides enough information in layout to allow reconstruction of any circuit element or geometric entity. Those entities can then be calibrated and analyzed against any process parameter. As a result, circuit, process, and geometry information is maintained throughout the process from design into manufacturing (Fig. 2).
Thus, Aprio's tools promise to bridge the communication gap between designer and fab, but for now they target the manufacturing side. The company plans to launch design-side tools at the end of this year. Until then, its technology supports only a one-way flow of information, from design to manufacturing.
So I still found myself looking for something, anything, that would directly address the "D" in "DFM."
I finally got a little warmer when I spoke to Atul Sharan, CEO and president of Clear Shape Technologies. Clear Shape sees yield-loss mechanisms falling into the categories of random variations and systematic variations. Reasoning that random variations are more or less a given, Clear Shape is attacking the systematic-variation issue.
"The real problem is systematic variations due to resolution problems and thickness variations due to CMP. These, you should be able to predict, model, and account for," said Sharan.
Clear Shape is thus working toward launching a tool that takes a model of systematic variations for a given circuit, netlist, or layout and predicts the shape variation across the chip for both devices and interconnect. The tool would then translate that variation into electrical properties (e.g., timing, noise, and leakage), letting designers reach timing or signal-integrity closure based on more accurate values.
Clear Shape received the attention and subsequent backing of both Intel and KLA-Tencor in the form of venture capital. But the technology, while promising, is still not in the hands of designers. We'll put this one in the category of "could be real helpful but we'll have to wait and see."
While I'd been largely stymied in my search for design-side DFM to this point, I was getting a tantalizing glimpse of what its future may hold. In the eyes of Srini Raghvendra, senior director of Business Development & Marketing, DFM at Synopsys, DFM is evolving and is nowhere near its final form. DFM's evolution will ultimately mean a shift to the design side.
Raghvendra cited metal fill for CMP as an example. Today, foundries perform metal-fill operations to compensate for the deleterious effects of CMP using a simple rule-based approach. "What we see happening at 90 nm is that metal fill will be timing-driven," said Raghvendra. "If it's timing-driven, it's more appropriate that it's done during design by the designer."
In fact, generally speaking, Raghvendra sees a coming divide in terms of what kinds of DFM are done where and by whom. "If you can boost manufacturability without having to trade off something else in turn, then it can be done in the foundry. If it's done in a way that impacts some constraint, like timing or area or power, then it becomes the domain of the designer."
In Raghvendra, not only had I finally found someone who'd discuss design-side DFM, but I'd found someone who'd get specific (for more design-side DFM advice, see "DFM: What Can Designers Do?" at www.elecdesign.com, Drill Deeper 11125).
"There is DFM that the designer needs to be involved with," he said. "For example, there's wire spreading to make sure your yield improves. Things like incremental fills for planarity, or minimizing the number of vias and doubling them otherwise. We're talking about design that's forgiving of the areas in which manufacturing is coming up short."
Concerning the issue of moving manufacturing and process data up to the design side to facilitate DFM, Raghvendra and Synopsys prefer to embody such information within the extraction and DRC engines in EDA tools. This abstracts the problems away from the designer.
A key player in feeding process data to EDA tools is PDF Solutions, which works with foundries to help them ramp up their processes. It does so via test vehicles that measure known yield-loss mechanisms, identifying the biggest problems at any point in time. Yields are ramped through a process of continually identifying and solving those problems.
PDF also puts manufacturing data to work in tools used by library designers to optimize their cells for a given process technology. That same information is employed by IC implementation tools from Cadence, Magma, and Synopsys, so that intelligent choices can be made between low- and high-yielding cell variants for each of the many instantiations of a given cell in a design.
"Magma's Blast Yield, Synopsys' IC Compiler, and Cadence's Encounter are all now capable of reading cell-yield information and optimizing yield as they do area, timing, and power," said Kevin MacLean, PDF's director of marketing. "We're taking the manufacturable information from a particular process and feeding it into the design flow."
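The core idea of optimizing yield "as they do area, timing, and power" can be sketched with a toy selector. Everything here is hypothetical for illustration: the variant names, delays, and per-instance yield numbers are invented, not PDF's or any vendor's actual data. The point is simply that once cells carry yield numbers, variant choice becomes one more constrained optimization.

```python
# Hypothetical sketch of yield-aware cell selection: for each instance,
# prefer the higher-yielding cell variant as long as the timing budget
# still holds. Names and numbers are invented for illustration.
from typing import NamedTuple, Optional

class CellVariant(NamedTuple):
    name: str
    delay_ps: float
    yield_prob: float  # assumed per-instance survival probability

def pick_variant(variants, slack_ps) -> Optional[CellVariant]:
    """Choose the highest-yield variant whose delay fits the slack."""
    feasible = [v for v in variants if v.delay_ps <= slack_ps]
    return max(feasible, key=lambda v: v.yield_prob) if feasible else None

nand2 = [
    CellVariant("nand2_dense",  delay_ps=40, yield_prob=0.9990),
    CellVariant("nand2_robust", delay_ps=55, yield_prob=0.9999),
]

print(pick_variant(nand2, slack_ps=60).name)  # robust variant fits
print(pick_variant(nand2, slack_ps=45).name)  # must fall back to dense
```

On a timing-critical path the tool falls back to the dense, lower-yield variant; everywhere else it can buy yield for free.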
A conversation with Nitin Deo, director of marketing at Ponte Solutions, centered on another approach to a model-based DFM paradigm. "Any change that has to happen after layout and in GDSII is too late," said Deo. Ponte's technology is a model-based approach that relies on yield analysis during the design stage.
Three applications emerge for the yield-analysis approach. One is during the design of cell libraries and IP. "Right from the start, you ought to be able to analyze libraries just like for timing and power and have different yield numbers available for different cells," said Deo.
Once characterized libraries and IP are used to build a netlist, that netlist is analyzed for factors that will cause yield loss. Designers have two options: swap out problem cells, or change those cells' layouts for improved yields.
The third use is full-chip analysis after detailed routing. This is still the pre-GDSII realm, but a point at which major contributors to yield loss can be ferreted out and put in check.
For Silicon Dimensions, whose Chip2Nite tool endeavors to move physical-design information farther into the front end of the process, DFM is not so much addressing problems early in design as it is avoiding them in the first place.
"The first thing is the wires," said Michael Munsey, Silicon Dimensions' director of marketing. "We always try to minimize wire lengths. At first, we did that for timing reasons. But if you minimize lengths, you have less chance for lithography problems."
Vias are another area in which problems can be avoided, and here too, wire length makes a difference. "Via counts have been rising even faster than die sizes," said Munsey. "The reason is because the wire length is going up. So less wires also means fewer vias, which also means less chance of yield issues."
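Munsey's point, and the via-doubling practice Raghvendra mentioned earlier, follow from simple failure arithmetic: if one via fails with probability p, a doubled via fails only when both do, with probability p squared. The numbers below are assumed for illustration, not from any of the interviews.

```python
# Back-of-the-envelope sketch (numbers assumed, not from the article)
# of why fewer vias -- and doubled vias -- help yield. A single via
# fails with probability p; a doubled via fails only when both do.
p = 1e-7              # assumed single-via failure probability
n_vias = 50_000_000   # assumed via count for a large die

single = (1 - p) ** n_vias        # yield with single vias everywhere
doubled = (1 - p * p) ** n_vias   # yield with every via doubled

print(f"single-via yield: {single:.1%}")
print(f"doubled-via yield: {doubled:.3%}")
```

With these assumed numbers, via-limited yield collapses to under one percent for single vias but is essentially perfect once vias are doubled, and every wire you shorten removes vias from the product entirely.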
A FOUNDRY'S POV
Having finally encountered some healthy discussion of tools and techniques for improving yields, I returned to a final talk with TSMC's Chuck Byers. That conversation was joined by Bill Hara, vice president of engineering and technology at Altera.
"Almost by definition, the foundry 'design-for' rules are guidelines. As such, it's very difficult to follow them," said Hara. "They are in a gray area between what absolutely must be done and what would be good practice."
The answer, say TSMC and Altera, is collaboration. "The foundry and IC maker must work together to decide why these rules were created and what the costs are," said Hara.
Just as important, if not more so, is collaboration between foundries and EDA tool vendors. "Libraries, IP, and process design kits (PDKs) play a role in the overall DFM paradigm," said Byers. "There must be collaboration with the EDA vendors. Our in-house libraries must be aware of the rules and recommendations. We're cooperating with the EDA vendors in aligning PDKs with manufacturing guidelines."
Having looked high and low for the spirit, if not the letter, of true design for manufacturing, I can only conclude that it's a developing area. Rule-driven DFM must, over time, give way to a model-based paradigm. To engineers facing a 90-nm tapeout, I can only advise caution in the face of marketing pitches. Look hard for true DFM. It's out there, at least in nascent form.
NEED MORE INFORMATION?
Clear Shape Technologies
Magma Design Automation