Today's relentless advances in semiconductor technology, which double chip complexity about every 18 to 24 months, have arrived so regularly that we have been able to schedule our use of next-generation technology without any misgivings. System designers can start crafting their solutions well over a year before the process they need is ready for production.
This confidence in the ability of semiconductor fabricators to keep up the pace of process improvements has, for the most part, been right on target. But as critical dimensions drop below 0.13 µm, the fabrication challenges continue to mount, and they may delay the transition to each succeeding process generation. Scaling the mask dimensions to implement the smaller features is only the first of many changes in the complex fabrication flows that will produce future generations of VLSI chips.
Walking through the exhibits at last month's Semicon show in San Francisco and San Jose reminded me about the complexity of the overall fabrication process. More than 2000 companies demonstrated their best efforts to move the industry forward, exhibiting developments that ranged from ultrapure liquids, gases, and other materials used during fabrication to the dust-free carrier chambers used to move the wafers from step to step.
But can we continue to expect the industry to keep its improvements on the same 18- to 24-month timetable? Or will that timetable stretch out, so that each new production process is instead introduced on a 24- to 30-month cycle? This may happen for a number of reasons aside from the current economic slowdown. One is the metrology needed to achieve, and then verify, the required impurity levels. Another is lithography, and the ability to create the fine-featured masks that pattern the wafers.
Many companies in the semiconductor industry are working in concert to achieve the combination of advances necessary to create chips whose critical features are an order of magnitude smaller than today's. But what will you design over the next decade or so that will require a billion transistors on a chip, aside from a memory array? Today's most complex CPUs, even including their large on-chip caches, contain only 30 to 40 million transistors, and a few specialty graphics engines have already surpassed 50 million. Will future chips be large multiprocessor arrays that implement system-level solutions? Should we rethink system architectures to better leverage large amounts of memory?
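As a back-of-the-envelope check, the column's own figures let us estimate how far off that billion-transistor chip might be. This is only illustrative arithmetic; the 40-million starting point and the 18- and 24-month cycle times are the figures quoted above, not a forecast:

```python
import math

current = 40e6   # transistors in today's most complex CPUs (per the column)
target = 1e9     # the hypothetical billion-transistor chip

# Number of complexity doublings needed to get from 40M to 1B transistors.
doublings = math.log2(target / current)

# Elapsed time if each doubling takes 18 months vs. 24 months.
years_fast = doublings * 1.5
years_slow = doublings * 2.0

print(f"{doublings:.1f} doublings: roughly {years_fast:.0f} to {years_slow:.0f} years")
```

At the historical pace, a billion transistors is a question for roughly the end of the decade; if the cycle stretches to 24 to 30 months, it slips further still.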
Moreover, will the design tools, test tools, and the platforms they run on be up to the challenge, or will they "break" or run out of steam as circuit complexities increase? Are we taking for granted that we will always find ways to overcome these challenges, or are we reaching some technology limits?