The EDA industry's worst-kept secret is that the RTL design flow is broken. Implementing complex systems-on-a-chip (SoCs) in RTL simply takes too long and is too error-prone. Add the fact that simulation of RTL is too slow to effectively cover the entire design, and you have a design process that needs fixing.
With traditional ASIC design flows struggling to accommodate complex SoCs, designers are beginning to turn to electronic-system-level (ESL) methodologies. An ESL design description provides just enough detail to architect a complex SoC while being abstract enough to enable fast simulation (Fig. 1).
While everyone agrees on the need for ESL design, the industry has been slow to put these concepts into practice. Several factors have contributed to this delay. One is the failure of major IP vendors to provide the high-level models that an ESL flow needs to take off. But this is beginning to turn around, as several key technologies come together to make a real ESL flow viable.
Behavioral synthesis has also advanced significantly. This year, the synthesis of efficient hardware engines from algorithmic specs is likely to emerge as the center of gravity, the compelling reason to adopt an ESL methodology. One hallmark of the SoC era is the blurring of the line between the traditionally divergent worlds of hardware design and software design. ESL flows demand programmability for both hardware and software. Where that programmability comes from will be determined by the product definition. But whether it's software, hardware, or a fusion of both, it will deliver benefits like optimization of real-time applications, adaptive self-correction, and wholesale reconfiguration.
As designers confront the increasing software content and algorithmic complexity of chips for wireless and multimedia applications, the trend will be to use mathematical modeling techniques to model and simulate the design, automatically generate the implementation, and then verify that implementation. In the future, more designs will contain multiple processing devices, such as a blend of FPGA and DSP or multiple processors on one chip.
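The model-generate-verify loop described above can be sketched in miniature. This is an illustrative assumption, not a flow from the article: a floating-point FIR filter serves as the "golden" mathematical model, a fixed-point version stands in for the generated implementation, and the two are compared to verify the implementation against the model. All function names and the quantization parameters are hypothetical.

```python
def fir(coeffs, samples):
    """Golden floating-point FIR model of the algorithm."""
    out = []
    for n in range(len(samples)):
        acc = 0.0
        for k, c in enumerate(coeffs):
            if n - k >= 0:
                acc += c * samples[n - k]
        out.append(acc)
    return out

def quantize(x, frac_bits=8):
    """Round to a fixed-point grid, as a hardware datapath would."""
    scale = 1 << frac_bits
    return round(x * scale) / scale

def fir_fixed(coeffs, samples, frac_bits=8):
    """Stand-in 'implementation': quantized coefficients and outputs."""
    qc = [quantize(c, frac_bits) for c in coeffs]
    return [quantize(y, frac_bits) for y in fir(qc, samples)]

coeffs = [0.25, 0.5, 0.25]            # simple low-pass filter
stimulus = [1.0, 0.0, 0.0, 1.0, 1.0]

golden = fir(coeffs, stimulus)
impl = fir_fixed(coeffs, stimulus)

# Verification step: the implementation must track the model
# to within the fixed-point quantization error.
assert all(abs(g - i) < 1 / 128 for g, i in zip(golden, impl))
```

The same pattern scales up: the high-level model is both the executable spec and the reference against which the generated RTL is checked.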
With growing design complexity, though, comes a parallel growth in the functional verification gap. This gap widens because the variety of behaviors an IC exhibits generally grows nonlinearly with design size, whereas other aspects requiring verification, such as timing, grow only linearly.
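The nonlinear-versus-linear contrast can be made concrete with a back-of-the-envelope sketch. The figures below are assumptions for illustration, not data from the article: the reachable-state bound grows exponentially with the number of state bits, while a crude proxy for timing checks grows linearly with gate count.

```python
def behavior_space(state_bits):
    """Upper bound on distinct states for a design with N state bits."""
    return 2 ** state_bits

def timing_paths(gates, fanout=4):
    """Crude linear proxy: timing checks grow with gate count (assumed)."""
    return gates * fanout

# Assumed ratio of 100 gates per state bit, purely for illustration.
for bits in (16, 32, 64):
    print(f"{bits:>3} state bits: up to {behavior_space(bits):.2e} states, "
          f"~{timing_paths(bits * 100):,} timing checks")
```

Even at these toy scales, the state space outruns any linear measure by orders of magnitude, which is why exhaustive simulation stops being feasible.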
One result of the growing difficulty of functional verification is that simulation alone is losing its effectiveness for verifying state-of-the-art IC designs. Expect to see the development of more focused verification methodologies and tools aimed at specific, well-defined, yet difficult functional verification problems.
As for traditional RTL-to-GDSII flows, at 90 nm and below, only comprehensive implementation platforms will achieve design closure. The logical and physical design environments must converge through a comprehensive front-to-back design flow that can evolve with next-generation technologies, models, and libraries. It's mandatory that design platforms deliver a holistic view of the entire planning, placement, and routing process. They also must be driven by a common family of timing, area, power, and test engines and be supported by a single, integrated data infrastructure.