Even now, system-on-a-chip (SoC) design is a challenging proposition. Meeting performance, power, and area constraints for designs of 20 million gates or more with multiple asynchronous clock domains isn't exactly easy. Obtaining and integrating the large amounts of internally generated and third-party intellectual property (IP) contained in SoCs is no picnic either. But SoC design challenges are growing rapidly. Let's peer into the crystal ball at where ASIC/SoC technology is heading (Fig. 1).
Not long ago, some feared that Moore's Law was losing steam and that advances in silicon fabrication technology with feature sizes below 0.10 µm would be hard to attain. Yet by around 2005, today's 0.13-µm fabrication processes will be maturing while the 0.10-µm fabs just now coming online become the sweet spot for high-end designs. Also, new design starts will take aim at next-generation processes with drawn gate lengths as small as 0.08 µm and even 0.065 µm. Internal clock speeds at those geometries will hit the 2-GHz ballpark.
Although it's anticipated that most semiconductor dies will stay in the 8- to 10-mm/side range, maximum die sizes will grow from today's 15 mm/side to 20 and 21 mm/side, pushing total gate counts to between 300 and 500 million on high-end SoCs with up to 70 million logic gates. Raw logic gate densities in the 0.065-µm processes will reach up to 900 kgates/mm² with memory densities of up to 1.4 Mbits/mm². So if timing closure, signal integrity, and power consumption present problems now, what will it be like then?
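Those projections can be sanity-checked with simple arithmetic. The sketch below uses the density figures quoted above; the 21-mm/side die is illustrative, and it lands squarely in the 300- to 500-million-gate range:

```python
# Back-of-the-envelope check of the projected gate counts.
# Density figures (900 kgates/mm², 1.4 Mbits/mm²) come from the text;
# the specific die side length below is an illustrative assumption.

def raw_gate_capacity(die_side_mm, kgates_per_mm2=900):
    """Total raw logic gates for a square die at a given density."""
    area_mm2 = die_side_mm ** 2
    return area_mm2 * kgates_per_mm2 * 1_000  # kgates -> gates

def memory_capacity_bits(die_side_mm, mbits_per_mm2=1.4):
    """Total embedded memory bits for a square die."""
    return die_side_mm ** 2 * mbits_per_mm2 * 1_000_000

# A 21-mm/side die at 900 kgates/mm² yields roughly 397 million raw
# gates, consistent with the 300- to 500-million-gate projection.
print(f"{raw_gate_capacity(21) / 1e6:.0f} Mgates")
print(f"{memory_capacity_bits(21) / 1e6:.0f} Mbits")
```

Of course, only a fraction of raw capacity becomes usable logic once routing, memory, and IP blocks claim their share, which is why the usable-logic figure above tops out near 70 million gates.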
One might first ask how all of those gates will be filled within shrinking time-to-market windows. A tag-team approach combining more streamlined, user-friendly IP reuse, higher abstraction, and hierarchy is the likely way to boost designers' efficiency.
The IP integration issue is now being at least partially addressed by the increasingly popular platform-based design methodologies founded on CPU cores and standard buses. Today's highly configurable processor platforms could see extension into SoCs in their own right, rendering many of the ASIC-based methodology questions moot. But more likely is a mix of large-block IP with custom design of the logic surrounding it.
On the analog/mixed-signal side, there are similar productivity issues to surmount. The need for analog content on SoCs is growing even now, but designer productivity lags behind. Emerging analog synthesis tools may help fill this gap over the next several years. Such tools will be forced to prove themselves to the ever-doubting analog designers before they see widespread adoption.
As the configurable platforms would indicate, IP blocks are getting larger and gaining more functionality. In that sense, IP integration promises to become a less arduous task.
Today's IP management lacks front-end, chip-level estimation of IP requirements. A critical need exists to integrate the supply chain of the IP library providers and silicon foundries onto the engineer's desktop. Then, information on the IP that fills system requirements must flow cohesively into the implementation, procurement, and floorplanning stages.
On the abstraction front, market forecasts are for sharp growth in what's sometimes called electronic system-level design, or ESL (Fig. 2). Under the ESL umbrella, the short-term goal is to reach above the register-transfer level (RTL) into system-level algorithmic design using hardware description languages (HDLs) and various flavors of C/C++. This approach has already proven fruitful through Accellera's efforts to extend the Verilog language into the algorithmic realm. Verilog extension offers the advantage of being familiar to many designers.
On the C/C++ side, the Open SystemC Initiative is driving the adoption of a C-based modeling platform that leverages the common ground between hardware and software design. Such an approach is expected to better enable hardware and software coverification at the system level before detailed HDL models are available.
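The idea behind such system-level co-verification can be illustrated with a small sketch. This is plain Python standing in for the C/C++ (SystemC-style) modeling the text describes, and every name in it is invented for illustration: an untimed transaction-level model of a hardware FIFO is exercised by "software" before any HDL exists.

```python
# A minimal transaction-level co-verification sketch. Plain Python is
# used here as a stand-in for the C/C++ modeling approach; FifoModel
# and software_driver are illustrative names, not SystemC APIs.

from collections import deque

class FifoModel:
    """Untimed transaction-level model of a hardware FIFO."""
    def __init__(self, depth):
        self.depth = depth
        self.data = deque()

    def push(self, word):
        if len(self.data) == self.depth:
            return False              # hardware would assert 'full'
        self.data.append(word)
        return True

    def pop(self):
        return self.data.popleft() if self.data else None

def software_driver(fifo, payload):
    """'Software' side: stream a payload through the model, draining
    the FIFO whenever it reports full, and collect what comes out."""
    received = []
    for word in payload:
        while not fifo.push(word):
            received.append(fifo.pop())   # drain to make room
    while (word := fifo.pop()) is not None:
        received.append(word)
    return received

# Co-verify the interface contract (in-order delivery) long before
# a detailed RTL model of the FIFO is written.
payload = list(range(16))
assert software_driver(FifoModel(depth=4), payload) == payload
```

The payoff is that interface bugs between the hardware model and the software using it surface at this abstract level, where they're cheap to fix, rather than during RTL simulation.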
Merging Hardware And Software: More ambitious system-level design advocates predict a highly abstract space where hardware and software design truly converge. The Rosetta initiative could be that space. Potentially serving as a bridge to a true system-level design environment, the Rosetta language enables a design methodology of defined and combined models from multiple domains. Software and hardware designers can work in the environments that they find comfortable. Work on Rosetta is ongoing under the auspices of an Accellera technical subcommittee.
Moving up in abstraction can help with quicker system definition and partitioning as well as early functional verification, albeit often without detailed models. But until designers can go from an algorithmic expression directly to a gate-level representation of a design, the movement through RTL and down into physical implementation will remain a time-consuming and complex challenge.
SoCs are simply growing too large for today's tools to handle in one gulp. While some vendors of full-chip IC implementation tools chip away at this problem, it's still necessary to break designs up to prevent overlong design cycles, or at least to impose hierarchy internally within the tool. Hierarchy will become a key driver for EDA in coming years, at least until the gap is bridged to a full-chip-capable flow. Coupled with an increased emphasis on hierarchy will be a drive to bring as much physical information about SoC designs into play as early in the process as possible.
Hierarchy will be brought into play in physical design as well. Floorplanning tools will expand in scope, becoming a vehicle to manage the IP needed on large SoCs. They'll go beyond power estimation to fully encompass the timing-closure aspect of physical design. Routers will also have to intelligently and transparently partition the design into a hierarchical structure, assign layers to each level within that structure, and work out how best to utilize all of the routing resources available to the tool.
The Verification Bottleneck: Even if implementation kinks are worked out, there remains the so-called "verification bottleneck." In the coming years, SoC verification and test will only become more problematic as gate counts swell. Incorporation of design-for-test (DFT) functionality is a certainty. Today, DFT is often inserted during the back-end implementation process. In the future, it may make more sense to incorporate DFT during synthesis as a constraint, just as power, area, and speed are today. Doing so would make for a seamless translation from RTL to a gate-level netlist that proves testable.
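Structurally, the most common form of DFT insertion amounts to scan: stitching the design's flip-flops into a shift chain so internal state becomes controllable and observable from the pins. The sketch below shows the bookkeeping involved; the netlist representation and cell names are invented for illustration.

```python
# A hedged sketch of scan-chain insertion, the most common DFT step.
# The dict-based "netlist" here is an illustrative stand-in for what
# a real synthesis or DFT tool would operate on.

def insert_scan_chain(flop_names, scan_in="SI", scan_out="SO"):
    """Replace each flip-flop with a scan-equivalent cell and chain
    their scan ports: SI -> flop0 -> flop1 -> ... -> SO."""
    chain = []
    prev = scan_in
    for name in flop_names:
        chain.append({
            "cell": name + "_scan",   # scan-equivalent cell
            "scan_in": prev,          # fed by the previous stage
            "scan_out": name + "_q",  # feeds the next stage
        })
        prev = name + "_q"
    return chain, {"scan_out_pin": scan_out, "driven_by": prev}

flops = ["ff_a", "ff_b", "ff_c"]
chain, tail = insert_scan_chain(flops)
# In test mode, state shifts in at SI and out at SO through the chain.
assert chain[0]["scan_in"] == "SI"
assert chain[1]["scan_in"] == chain[0]["scan_out"]
assert tail["driven_by"] == "ff_c_q"
```

Treating this as a synthesis constraint, as the text suggests, means the tool would build the chain while it maps RTL to gates, rather than patching it in afterward during back-end implementation.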
Presently, verification of IP in a standalone fashion is easy enough, but the industry still seeks a way to pull together test information for IP on a system-wide basis. The blocks that were verified independently of each other never seem to want to get along during the integration phase. Some progress has taken place in today's testbench automation languages and tools, but will these techniques serve SoC integrators in the long term? We'll just have to wait and see. The verification process will continue to comprise a mix of techniques, with different members of the verification team using different approaches, all in the interest of gaining visibility into the design.
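The core style behind today's testbench-automation tools is constrained-random stimulus measured against functional coverage. A minimal sketch of the idea follows; the transaction fields, constraint, and coverage bins are all illustrative inventions, not any particular verification language.

```python
# A sketch of constrained-random stimulus plus functional coverage,
# the style behind today's testbench-automation tools. The bus
# transaction, its constraint, and the coverage bins are illustrative.

import random

def random_transaction(rng):
    """Generate one bus transaction under a simple constraint:
    writes may target only the lower half of the address space."""
    kind = rng.choice(["read", "write"])
    if kind == "write":
        addr = rng.randrange(0, 0x8000)    # constrained range
    else:
        addr = rng.randrange(0, 0x10000)   # full range
    return {"kind": kind, "addr": addr}

def run_coverage(num_txns, seed=1):
    """Drive random transactions and record which coverage bins
    (kind, 16K address region) the stream actually hits."""
    rng = random.Random(seed)
    bins = set()
    for _ in range(num_txns):
        t = random_transaction(rng)
        bins.add((t["kind"], t["addr"] // 0x4000))
    return bins

bins = run_coverage(1000)
# The constraint holds: no write ever lands in the upper two regions.
assert all(region < 2 for kind, region in bins if kind == "write")
```

The unresolved system-level question the text raises still stands: this works well per block, but stitching the coverage goals of many independently verified blocks into one chip-wide picture remains the hard part.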
To that end, there's a pressing need for a standardized method of communicating test information. The emerging IEEE P1450.6 Core Test Language could serve as a step toward a backbone for common information transfer among EDA tools and software used to configure automatic test equipment.