Everyone in the semiconductor industry is aware of one facet of Moore's law: the cost per function decreases every year. This concept allows semiconductor buyers to plan for periodic price reductions and requires semiconductor companies to factor in a learning curve for yield improvement.
As production ramps from first silicon to full volume, product yield becomes a significant contributor to overall company success. In an internal study, a large global integrated device manufacturer found that a 1.7% yield increase produced a $2.38 million increase in earnings before interest and taxes over the lifetime of a single typical product.
It's becoming more difficult to achieve adequate yield, much less work on yield improvement, at advanced process technologies. At 130-nm and larger process nodes, the failures were mostly random defects, and the foundry working on its own could localize and fix the process issues that contributed to the defects.
Now, the latest nanometer processes are exhibiting new failure mechanisms. One class of defects includes the physical or mechanical issues related to the size of the features, such as bridging and resistive opens or shorts. Another class of failures is related to electrical parameters and results in problems such as noise, crosstalk, thermal parameter drift, voltage drop, and device variability.
Problems Facing Production
Foundries use test structures in the scribe areas of the wafer to monitor production efficiency. As processes migrate from development to production, these wafer acceptance test patterns show whether the base parameters are within acceptable ranges. However, in many of the newer processes, it is becoming much more difficult to use these monitors for problem isolation.
Paradoxically, as the industry aggressively drives toward half- and quarter-node process strategies, it must now halve new process ramp times to maintain adequate time-to-volume production. To ease these problems, the contributors to process bring-up—design, test, and manufacturing—need a common data exchange platform and a data flow that enable quick changes in test vectors to address yield hot spots.
These issues come to a head at the process nodes below 90 nm. The latest processes are experiencing an explosion of design rules that now can number more than 4,000. These rule sets make eventual sign-off even more challenging because the design rules come in flavors—required, recommended, and optional—with some of the rules mutually exclusive to others in the sets.
At the early stages of process development, design rules and device model parameters are constantly changing as engineers make adjustments to the process flows. To cover the potential manufacturing issues, the foundries eventually release their design rules and device specifications in a process design kit that makes most of the parameters excessively conservative to ensure reasonable yields for themselves.
As scaling and material engineering continue to work on Moore's law, the resulting process requires changes in both the horizontal and vertical dimensions, and similar changes also must be applied to the gate-threshold voltages. The sub-90-nm process wafers have the threshold voltages for the high-speed variant set so low that the channel never turns completely off, resulting in fairly high leakage currents. In addition, because the gate oxides are so thin, the gates also contribute some tunneling current to the overall leakage.
To make design closure even more difficult, creating reproducible images on the photoresist when the features are smaller than the exposing wavelength requires significant modifications to the layout. These layout changes are generically classified as reticle enhancement technologies and create other problems for designers.
The many layers of interconnect and their close proximity give the layout a large effect on design quality and timing. Now, because of the imaging modifications, there are even rules to define exceptions to other rules and to address the forbidden zones in the mask preparation steps that result from these modifications and the physics of light.
In an ideal world, the final device models would correlate exactly with the silicon, and everything would be stable. In reality, the latest processes suffer from variation at all possible levels: wafer-to-wafer, die-to-die, and transistor-to-transistor. These variations result from pattern sensitivity and manufacturing process complexity leading to atomic-level differences and greater parametric disparities. The interaction of the device parameters with the layout further clouds the picture, causing greater levels of uncertainty.
An IP-Driven Solution for 90 nm and 65 nm
Due to their density, regular structures, and increasing popularity on the die, memories commonly are used by foundries and semiconductor companies for process ramping and yield learning. As a part of the process ramp and yield learning curve for 90 nm, 65 nm, and 55 nm, a new approach to semiconductor manufacturing has been deployed, offering a yield-acceleration flow that enables significant reductions in silicon test, silicon bring-up, and time-to-volume production.
To meet the needs of the many types of memory applications, Virage Logic designs memories with very flexible and comprehensive built-in self-test (BIST). The memory BIST (MBIST) includes test algorithm programmability and the capabilities to execute test at the functional speed of the design and change memory-timing parameters such as the self-timed clocks (Figure 1).
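The programmable test algorithms such a BIST engine executes are typically march tests, which sweep the address space writing and reading known patterns. The sketch below is a software model of this idea, using the well-known March C- algorithm; the function and class names are illustrative assumptions, not Virage Logic's actual BIST microcode.

```python
# Software model of a programmable memory march test (March C-),
# the kind of algorithm a programmable MBIST engine runs in hardware.
# Names and structure are illustrative, not an actual BIST interface.

def march_test(memory, size, elements):
    """Run a march test described as (direction, operations) elements.

    direction: +1 ascending address order, -1 descending
    operations: sequence of ('r', expected_bit) or ('w', bit)
    Returns the list of failing addresses, in detection order.
    """
    failures = []
    for direction, ops in elements:
        addrs = range(size) if direction > 0 else range(size - 1, -1, -1)
        for addr in addrs:
            for op, bit in ops:
                if op == 'w':
                    memory[addr] = bit
                elif memory[addr] != bit:   # read and compare
                    failures.append(addr)
    return failures

# March C- as march elements:
# {up(w0); up(r0,w1); up(r1,w0); down(r0,w1); down(r1,w0); down(r0)}
MARCH_C_MINUS = [
    (+1, [('w', 0)]),
    (+1, [('r', 0), ('w', 1)]),
    (+1, [('r', 1), ('w', 0)]),
    (-1, [('r', 0), ('w', 1)]),
    (-1, [('r', 1), ('w', 0)]),
    (-1, [('r', 0)]),
]

class StuckAt0Memory(list):
    """Memory model with one cell stuck at 0, to show fault detection."""
    def __init__(self, size, stuck_addr):
        super().__init__([0] * size)
        self.stuck = stuck_addr
    def __setitem__(self, addr, bit):
        super().__setitem__(addr, 0 if addr == self.stuck else bit)

print(march_test([0] * 64, 64, MARCH_C_MINUS))            # -> []
print(march_test(StuckAt0Memory(64, 17), 64, MARCH_C_MINUS))  # -> [17, 17]
```

Programmability in the real engine means the march elements themselves can be reloaded after tape-out, which is what allows the vectors to be retuned as hot spots are found.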
Yield-Accelerating Memories
All test chips are designed using an industry-standard semicustom ASIC design flow from the register transfer level (RTL) to the graphic data system (GDS). This flow follows well-defined processes, and each has been optimized to reduce design cycle time.
Traditional practice at the foundries has been to completely separate the various areas of domain expertise. The foundry knows the process and physical issues related to the silicon while the designers know the integrated functionality. This knowledge segmentation resulted from the belief that both parties need to protect their proprietary information and intellectual property.
To provide greater insight into the various yield issues during process bring-up, engineers can rely on new software tools to analyze tester data logs and provide results in a user-friendly format. These tools can automatically search for any error trends and supply guidelines for their correction by analyzing the volumes of data and facilitating cross probing of the design, model, and layout files.
Although the testers and characterization teams generate large volumes of data, not all is relevant to all parties, so the tools have encryption and access restriction capabilities built in. Data is transferred via the ubiquitous Web infrastructure to enable a standards-based data exchange.
A Study in Improved Yield Learning
By deploying a memory-centric silicon verification process that includes the memories, BIST, and other support circuitry as well as the critically important software tools to analyze the volumes of test data, design and process engineers have improved the yield learning curve. Memories become the central design, test, and process evaluation block due to their density, regular structures, and capability to generate various size function blocks on a chip. By reducing the time for test modifications and improving data feedback, chipmakers can significantly reduce the time for production ramp while improving overall yields.
Yield-accelerating IP treats process bring-up as a series of development and test phases. By identifying three separate phases in the evaluation, appropriate resources can be dedicated to the problems.
In Phase 1, quick BIST vectors are executed on multiple dies to identify yield hot spots at the wafer or die level. A hot spot occurs where the defect density per bit cell exceeds the average by a given threshold value. If Phase 1 indicates a wafer-level yield trend, a process issue has been identified, and the foundry needs to isolate and correct the process problems.
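The Phase 1 hot-spot rule can be expressed very compactly. The sketch below flags dies whose fail-bit density exceeds a multiple of the wafer average; the die coordinates, counts, and threshold are made-up illustrations, not data from the study.

```python
# Hedged sketch of the Phase 1 hot-spot rule: flag dies whose
# defect density per bit cell exceeds the wafer-average density
# by a chosen threshold factor. All numbers are illustrative.

def find_hot_spots(fail_bits_per_die, bits_per_die, threshold=2.0):
    """Return die coordinates whose defect density per bit cell is
    more than `threshold` times the wafer-average density."""
    densities = {die: fails / bits_per_die
                 for die, fails in fail_bits_per_die.items()}
    avg = sum(densities.values()) / len(densities)
    return sorted(die for die, d in densities.items() if d > threshold * avg)

# Example wafer map: die (3, 4) has an order of magnitude more failing bits
fail_bits = {(0, 0): 3, (0, 1): 2, (3, 4): 40, (5, 2): 1}
print(find_hot_spots(fail_bits, bits_per_die=1 << 20))   # -> [(3, 4)]
```

A wafer-level trend in this picture would show up as many neighboring dies crossing the threshold together, which is the signature that sends the problem back to the foundry rather than into Phase 2.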
If no wafer level yield trend is identified in Phase 1, further analysis commences by executing additional vectors on the failed dies to collect more detailed information related to the hot spots. The process engineer tries to isolate the problems to the memory IP or to a design issue external to the memory in Phase 2. Non-memory design issues are corrected in the normal design correction flows.
If the problems are identified as memory IP issues, then Phase 3 comes into play. The software performs detailed bitmap and fault isolation.
More intensive test patterns can be generated to completely stress the areas of interest. This flow automatically identifies fault types and provides detailed statistical data of categorized fault occurrences through complementary software tools. This memory-centric manufacturing ramp-up flow is important for early process learning because most of the yield-limiting factors can be addressed and repaired before full-volume manufacturing begins. The capability to identify, evaluate, and address production issues before volume production saves test wafers and reduces time-to-process release.
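The bitmap analysis in Phase 3 amounts to recognizing spatial signatures in the failing-bit map: a full row points toward a word-line defect, a full column toward a bit-line defect, and so on. The classifier below is a simplified assumption about how such bucketing might work, not the actual tool's fault taxonomy.

```python
# Illustrative classifier for Phase 3 bitmap analysis: given the
# (row, col) coordinates of failing bits in one memory instance,
# bucket them into classic fault signatures. The categories and
# thresholds are simplified assumptions for illustration only.
from collections import Counter

def classify_bitmap(fail_coords, row_thresh=4, col_thresh=4):
    """Return a fault-signature label for a set of failing bits."""
    if not fail_coords:
        return "no failure"
    if len(fail_coords) == 1:
        return "single-bit"
    rows = Counter(r for r, _ in fail_coords)
    cols = Counter(c for _, c in fail_coords)
    if max(rows.values()) >= row_thresh:
        return "row failure"       # e.g., word-line defect
    if max(cols.values()) >= col_thresh:
        return "column failure"    # e.g., bit-line defect
    return "cluster/random"

print(classify_bitmap({(7, 3)}))                        # -> single-bit
print(classify_bitmap({(12, c) for c in range(8)}))     # -> row failure
print(classify_bitmap({(r, 5) for r in range(8)}))      # -> column failure
```

Once failures are bucketed this way, the statistics of each category can be tracked across wafers, which is what turns individual bitmaps into the categorized fault-occurrence data the flow reports.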
As a part of the new flow, a secure data communications infrastructure is required to transfer as much data as possible while preserving the barriers and safeguards for proprietary information. In one example, designers shared a BIST database of a 65-nm test chip with a foundry partner so data collection and debug could be performed on packaged devices as well as independently on a wafer tester at the foundry's test facility.
A 65-nm process test chip and its half-node 55-nm version were used as proof of concept for yield learning. The test chip incorporated about 100 memory instances from five different compilers. The memories included the additional circuitry to perform the MBIST functionality and the PLLs for at-speed test.
A full complement of internal capabilities for diagnostic and yield analyses was used, starting with BIST functions and then accessing built-in self-repair (BISR) functions to evaluate the various redundancy schemes. When the memory hot spots were found, internal built-in self-diagnostic (BISD) capabilities were used to assess the details for failures.
The capability to change the vector sets through on-chip test algorithm programmability proved invaluable in creating and applying finely tuned tests to the hot spots. Because much of the testing was at the wafer level, the bottleneck of accessing the embedded memory IP serially through boundary-scan chains for high-speed testing was avoided.
65-nm Design Proved in Record Time
The testing flow for the 65-nm process coordinated and correlated the data from the separate testers at the foundry and at the fabless customer. It required not only the memories and other IP but also a full set of integrated tools and databases. Overall, the new flow enabled both companies to zero in on failures and locate the source in X-Y coordinates or in a Verilog file.
One problem identified through this cooperative process was a metal routing issue that caused interlayer coupling and resulted in a hold issue at high Vdd. This problem was not identified by the DRC checks.
The characterization results were compared between the 65-nm and half-node 55-nm test-chip versions. The half-node version showed about 100-mV Vddmin degradation. After the instances causing the degradation were identified and their locations isolated, the read margin was adjusted on the affected circuitry. This fix improved overall Vddmin by about 120 mV.
As a result of Virage Logic's yield-accelerating Silicon Aware IP, the designs for the 65-nm and half-node processes were implemented in record time while simultaneously helping the foundry partner ramp the process and prepare customers for the coming challenges of nanometer memory design and implementation (Figure 2). In the new yield accelerator flow, the turnaround time for vector generation and data analysis was confirmed not to exceed two days. Silicon debug and product bring-up time also were significantly reduced, and a number of design issues were identified and corrected that helped the foundry bring up the 65-nm process for volume production.
Figure 2. Success With Yield-Accelerating Silicon Aware IP
About the Authors
Yervant Zorian, Ph.D., has been vice president and chief scientist of Virage Logic since joining the company in 2000. Dr. Zorian also serves as the vice president of the IEEE Computer Society for Conferences and Tutorials, chairs the IEEE 1500 standardization working group for embedded core test, and is a Fellow of the IEEE. Previously, Dr. Zorian was a Distinguished Member of the Technical Staff at Lucent Technologies, Bell Laboratories and chief technical advisor to LogicVision. He received an MSc from the University of Southern California and a Ph.D. from McGill University. 510-360-8035, e-mail: [email protected]
Gevorg Torjyan, Ph.D., is the senior embedded test and repair design engineer at Virage Logic. Prior to that, he held positions with ArmenTel JSC, SP LLC, Aybben, and Tellura. Dr. Torjyan earned a Ph.D. in computer science from Yerevan Computer Research and Development Institute, an M.D.E.E. from State Engineering University, and a B.D.E.E. from Yerevan Polytechnic Institute. 510-360-8032, e-mail: [email protected]
Dan Nenni, director of strategic foundry relationships at Virage Logic, is an electronic design automation and semiconductor industry veteran with 23 years of experience. Before joining Virage Logic in 2007, Mr. Nenni held sales and marketing positions at companies such as Data General, Solbourne Computer, Vadem, GateField, Zycad, Avanti, Sagantec, Prolific, and Predictions. He has authored and co-authored numerous articles and papers on physical design optimization and design for manufacture. 510-360-8035, e-mail: [email protected]
Virage Logic, 47100 Bayside Parkway, Fremont, CA 94538