Letters From The Front Line At 28 And 20 nm

Sept. 25, 2012

The 28-nm node is becoming mainstream, and early adopters are having their first go at 20 nm. For those of you back on the “home front” considering moving to either of these nodes, what should you know beforehand? In a nutshell, there’s a lot more physical verification (PV) work and associated processing.

Having worked with many companies that have already moved to these nodes, one pitfall we have observed is insufficient planning for the equipment and infrastructure needed to meet their cycle time objectives. In this article, we’ll discuss why there is a big jump in processing at these nodes, what the industry has done to prepare for it, and what your company should be planning as well.

More PV And Processing

As you will hear from me, and no doubt others, the semiconductor capital equipment industry (wafer scanners in particular) has not delivered the optical resolution improvements needed to stay on the traditional Moore’s law roadmap of shrinking feature sizes. To compensate, EDA has improved effective resolution through geometry-based resolution enhancement techniques (RET) implemented in software, more sophisticated and complex design rule checks (DRC), design for manufacturability (DFM) checks and simulation, double patterning, and so on.

EDA is delivering an increasing percentage of node-over-node resolution improvement through more sophisticated design optimization, which requires vastly increased computing power. Yet the volume of design data is also growing at an accelerating rate, due to design size explosion caused by normal scaling and system-on-chip (SoC) integration. Back at the 130-nm node, PV was simply DRC and layout versus schematic (LVS) comparisons. At the resolution delivered by the stepper/scanners in this era, those checks were sufficient to verify that a design was manufacturable.

Jump forward to 28 nm, and we see that the mandatory signoff requirements from the foundries now include complex equation-based design rules, on-grid/on-pitch checks, more complex LVS with advanced device parameter extraction, near-field solver accuracy for parasitic extraction, lithography simulation with pattern matching, and smart fill insertion and checking (Fig. 1).

1. Since the 130-nm process node, each cycle of IC scaling has required additional checks on the physical design to ensure it can actually be manufactured at an acceptable yield. For the upcoming 20-nm node, the list of foundry-required and foundry-recommended checks has expanded tremendously, putting a huge computational load on design datacenters.
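
To make “equation-based design rules” concrete, here is a minimal sketch contrasting a traditional fixed-value spacing check with an equation-based check, where the required spacing is computed from the widths of the neighboring wires and their parallel run length. The rule form and every constant below are hypothetical illustrations, not actual foundry values.

```python
# Hypothetical contrast between a fixed spacing rule and an equation-based
# spacing rule. All numbers are invented for illustration; real values and
# rule forms come from the foundry design rule manual.

def fixed_spacing_ok(spacing_nm: float) -> bool:
    """Classic rule: metal-to-metal spacing must be at least 50 nm."""
    return spacing_nm >= 50.0

def equation_based_spacing_ok(spacing_nm: float,
                              width_a_nm: float,
                              width_b_nm: float,
                              parallel_run_nm: float) -> bool:
    """Equation-based rule: wide wires with a long parallel run must be
    spaced further apart, so the required spacing is computed from the
    neighboring geometry rather than read from a single constant."""
    widest = max(width_a_nm, width_b_nm)
    required_nm = (50.0
                   + 0.1 * max(0.0, widest - 100.0)
                   + 0.02 * max(0.0, parallel_run_nm - 500.0))
    return spacing_nm >= required_nm

# Two 200-nm-wide wires running in parallel for 1 um, spaced 60 nm apart:
print(fixed_spacing_ok(60.0))                                  # True
print(equation_based_spacing_ok(60.0, 200.0, 200.0, 1000.0))   # False (needs 70 nm)
```

The point is that each such rule replaces a single geometric comparison with a small computation evaluated across neighboring shapes, which is one reason the operation count per check keeps climbing.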

Moving to 20 nm adds voltage-based DRC and double patterning, which in turn impose new requirements on DRC, smart fill, lithography simulation, and parasitic extraction. In addition to the mandatory PV/DFM checks, there are now “recommended” checks that are extremely important to perform if you want to reduce design variability and achieve high reliability and parametric yield.
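
Double patterning, in particular, is at heart a layout decomposition problem: shapes closer together than the single-mask spacing limit must be assigned to different masks, and a loop of such conflicts with an odd number of shapes cannot be legally split across two masks. The sketch below illustrates the idea as simple graph two-coloring; it is a conceptual illustration only, not how any production decomposition engine is implemented.

```python
# Conceptual sketch of double-patterning decomposition as graph two-coloring.
# Nodes are polygons; an edge joins two polygons that sit closer than the
# minimum same-mask spacing, so they must land on different masks.

from collections import deque

def assign_masks(num_polygons: int, conflict_edges: list[tuple[int, int]]):
    """Return a list mapping each polygon to mask 0 or 1, or None if the
    conflict graph contains an odd cycle (an uncolorable layout)."""
    adjacency = [[] for _ in range(num_polygons)]
    for a, b in conflict_edges:
        adjacency[a].append(b)
        adjacency[b].append(a)

    mask = [None] * num_polygons
    for start in range(num_polygons):
        if mask[start] is not None:
            continue
        mask[start] = 0
        queue = deque([start])
        while queue:
            node = queue.popleft()
            for neighbor in adjacency[node]:
                if mask[neighbor] is None:
                    mask[neighbor] = 1 - mask[node]   # opposite mask
                    queue.append(neighbor)
                elif mask[neighbor] == mask[node]:
                    return None                        # odd cycle: DP conflict
    return mask

# Three polygons in a triangle of spacing conflicts cannot be split onto two
# masks; two conflicts in a chain can.
print(assign_masks(3, [(0, 1), (1, 2), (0, 2)]))  # None (coloring conflict)
print(assign_masks(3, [(0, 1), (1, 2)]))          # [0, 1, 0]
```

Finding, reporting, and helping designers resolve these coloring conflicts is part of the new verification workload that 20 nm brings.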

Layered on top of the additional types of PV and analysis now needed, the complexity and processing required for each type has increased over the last 10 years, most dramatically at 40-nm, 28-nm, and 20-nm nodes. Both the number of checks in the design rule manual and the number of discrete operations needed to perform each check are growing at an exponential rate (Fig. 2).

2. The expansion of signoff requirements led to an exponential increase in both the number of design rule checks and the discrete operations needed to perform each check.

For a very long time, the industry has been striving to maintain an exponential growth rate by doubling the number of transistors available in an IC every two years. To complicate matters, at 28/20 nm, the foundries changed their approach to fill to address new process realities. Historically, the objective was to minimize the amount of fill to reduce unwanted parasitic effects.

However, with no or minimal fill, there is considerable variability in planarization, which can impact device stress behavior, parasitic interactions, the effectiveness of RET, and much more. At advanced nodes, the strategy has changed to maximizing the amount of fill added, in order to reduce this design variability. While adding fill can increase parasitic coupling capacitance, the impact can be characterized. Advanced fill strategies greatly reduce the variability issues while minimizing the electrical impact.

The result, though, is an explosion in the number of fill polygons, and therefore the graphic design system (GDS) design file size, which can have significant throughput impact. To mitigate the contribution of fill to file size, the foundries changed from a polygonal-based fill to a cell-based fill strategy—that is, a hierarchical approach to fill. Even with cell-based fill, the number of fill geometries in a design is contributing significantly to data volume expansion.
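
One way to see why the switch to cell-based fill helps file size: instead of writing every fill polygon flat into the GDS file, the fill shapes are defined once in a small fill cell, and the design stores only placements (references) of that cell. The back-of-the-envelope sketch below uses made-up record sizes purely to show the scaling; the byte counts are assumptions, not real GDSII figures.

```python
# Back-of-the-envelope comparison of flat (polygon-based) fill versus
# hierarchical (cell-based) fill. Record sizes are rough assumptions chosen
# only to show the scaling; they are not real GDSII byte counts.

FLAT_BYTES_PER_POLYGON = 40     # assumed size of one boundary record
BYTES_PER_CELL_REFERENCE = 20   # assumed size of one cell placement record
POLYGONS_PER_FILL_CELL = 100    # fill shapes defined once inside the fill cell

def flat_fill_bytes(total_fill_polygons: int) -> int:
    """Every fill polygon is written individually into the design file."""
    return total_fill_polygons * FLAT_BYTES_PER_POLYGON

def cell_based_fill_bytes(total_fill_polygons: int) -> int:
    """Fill polygons are defined once in a fill cell; the design stores only
    references to that cell plus the single cell definition."""
    placements = total_fill_polygons // POLYGONS_PER_FILL_CELL
    cell_definition = POLYGONS_PER_FILL_CELL * FLAT_BYTES_PER_POLYGON
    return cell_definition + placements * BYTES_PER_CELL_REFERENCE

# Suppose a large design carries 500 million fill polygons (a made-up count).
n = 500_000_000
print(f"flat fill:       {flat_fill_bytes(n) / 1e9:.1f} GB")
print(f"cell-based fill: {cell_based_fill_bytes(n) / 1e9:.1f} GB")
```

The savings come from reuse of the cell definition; the tools still have to account for every instantiated fill shape, which is why fill remains a significant contributor to data volume even with the hierarchical approach.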

How Is EDA Reacting?

The sum of all of these exponential growth factors is creating a minor crisis in the IC EDA world. It’s a big problem, and EDA vendors are attacking the problem from many directions, simultaneously.

Each EDA vendor has resources focused on improving core engine efficiency, scalability, and memory usage. Over the last five years, the CPU time required to perform a given set of operations has been reduced by a factor of 1.7 through algorithmic efficiencies. At the same time, multi-CPU scaling and real-time performance (in this case, on a 64-CPU system) have improved by a factor of 6.2, while RAM usage has been reduced by a factor of approximately four.

For many years, the microprocessor suppliers delivered improved performance primarily through clock rate increases. More recently, the gains have come from multi-core and multi-threaded architectures. The EDA suppliers are taking advantage of these capabilities, as evidenced by the greater than six-fold speedup shown in Figure 3. With the extremes in data volume and high-performance computing at 28/20 nm, it is also important to re-evaluate load-leveling and network configurations to ensure they are optimized for the new workloads. Frequently, 28/20-nm jobs stress, and sometimes break, configurations that worked at prior nodes.

3. Performance improvements in EDA tools enable the growth in verification requirements while controlling the operational impact of verification.
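
As a rough illustration of where the multi-CPU scaling in Figure 3 comes from, the toy sketch below fans a set of independent checks out over a pool of worker processes. The rule names and timings are invented, and real PV tools also partition work by layout region and must balance machine load and network/filer traffic, which is why the load-leveling and network review mentioned above matters.

```python
# Toy model of multi-CPU scaling in physical verification: a list of
# independent checks is fanned out over a pool of worker processes.

import time
from multiprocessing import Pool

def run_check(rule_name: str) -> str:
    """Stand-in for one DRC operation; it just sleeps instead of working."""
    time.sleep(0.2)
    return f"{rule_name}: clean"

if __name__ == "__main__":
    rules = [f"RULE_{i:03d}" for i in range(64)]    # hypothetical rule list

    start = time.time()
    for rule in rules:                              # serial baseline
        run_check(rule)
    serial_s = time.time() - start

    start = time.time()
    with Pool(processes=8) as pool:                 # 8 worker processes
        pool.map(run_check, rules)
    parallel_s = time.time() - start

    print(f"serial:   {serial_s:.1f} s")
    print(f"parallel: {parallel_s:.1f} s (~{serial_s / parallel_s:.1f}x speedup)")
```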

Performance improvements in EDA software and hardware cannot be fully leveraged without efficient PV decks from the foundries, but the approach to deck optimization varies among the EDA vendors. Where the foundries use PV tools to help develop their process technology at each new node, they write the decks themselves, and those decks become the golden signoff reference. The EDA vendor then works with the foundry to optimize the golden signoff decks to maximize capacity and performance.

While the foundry does a reasonable job, the tool vendor has the most insight and experience to bring to bear on the problem. A further performance improvement of 30% or more by the vendor is quite common.

On the other hand, the EDA vendors that produce their own decks from scratch can write better-performing decks from the start, but their challenge is matching the checking results of the foundry’s golden signoff decks. Any mismatch can result in delays, false errors, finger pointing, or, worst of all, re-spins.

Call To Action

From a PV complexity and high-performance computing perspective, the move to 28/20 nm is a much bigger jump than the jumps at prior nodes. To make the transition smoothly, your company needs to plan accordingly.

EDA and the hardware computing industries have improved their performance in many practical ways. That said, with the exponential growth in PV checks and design data volume, they cannot possibly keep up without an impact on the required computing resources.

Plan and budget ahead of time for the necessary resources. The “let’s just use what we did at the last node and hope the tools make up the difference” strategy will result in much longer turnaround times. No one can afford to be late to market for something this simple to fix.

With the increased PV complexity at 28/20 nm, we have seen an increase in interaction issues among customer designs, foundry decks, EDA software, and hardware. I strongly encourage you to pipe-clean your entire flow before your first commercial design and to schedule debug time for issues. (They will arise.)

Also, put NDAs in place ahead of time so you can share data with your key partners when troubleshooting issues. And finally, work with your key EDA and foundry partners. They already have experience that can make your preparations for the battle on the front lines of 28/20 nm a lot less harrowing.

About the Author

Michael White

Michael White is Director of Product Marketing for Mentor Graphics’ Calibre Physical Verification products. Before joining Mentor Graphics, he held various product marketing, strategic marketing, and program management roles at Applied Materials, Etec Systems, and the Lockheed Skunk Works. Michael received a BS in System Engineering from Harvey Mudd College. He also holds an MBA/BS in engineering management from the University of Southern California.
