A serious challenge in small-geometry design (0.18 µm and below) has been eliminating the iterations caused by faulty interconnect modeling. There are many sources of interconnect-modeling inaccuracy to control, and as the design crosses abstraction domains, maintaining continuity in the assumptions made is critical to timing closure and to avoiding those iterations.
However, optimization quality at each stage is just as important as timing closure. It does little good to achieve first-pass success on a 200-MHz design if the specification called for 300 MHz. Designers simply can't afford to sacrifice quality to attain predictability. The entire design process must be managed to create a path of continuously diminishing uncertainty, rather than upheaval and iteration along the way. This rule applies to design in general and to interconnect modeling in particular.
Interconnect has become so central to success that it must be considered from the outset of the hardware-design process. One chief failing of early behavioral-synthesis tools was that they ignored the timing and area costs of interconnect at every level: not only all physical and topological effects, but even the interconnect associated with value multiplexing.
Newer architectural-synthesis tools explicitly account for the cost of multiplexing as well as the physical topology of the implementation. Armed with these capabilities, the tools establish the foundational interconnect-modeling assumptions that carry through the rest of the implementation process. They create the framework, in terms of circuit topology and partitioning, on which further detail is added.
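To make the multiplexing cost concrete, here is a minimal sketch of the kind of trade-off an architectural-synthesis tool must weigh when it shares functional units: sharing one adder across several operations saves adder area but adds a k:1 mux whose cost grows with k. The function name and all unit costs below are hypothetical, chosen only for illustration.

```python
# Illustrative cost model (made-up unit areas): sharing operations on
# fewer adders trades adder area against multiplexer area -- the cost
# that early behavioral-synthesis tools ignored entirely.

ADDER_AREA = 400.0          # hypothetical area units per adder
MUX_AREA_PER_INPUT = 15.0   # hypothetical area units per mux input

def shared_datapath_area(n_ops: int, n_adders: int) -> float:
    """Total area when n_ops additions are shared across n_adders adders."""
    k = -(-n_ops // n_adders)  # ops multiplexed onto each adder (ceiling)
    # Each shared adder needs a k:1 mux on its inputs; no mux if k == 1.
    mux_area = n_adders * k * MUX_AREA_PER_INPUT if k > 1 else 0.0
    return n_adders * ADDER_AREA + mux_area

# Sharing 8 additions on 2 adders: 920.0 (800 adder + 120 mux)
# Fully parallel, 8 adders:        3200.0 (no mux cost at all)
```

The point is simply that the mux term is nonzero and topology-dependent, so a tool that drops it from its cost function will misjudge the sharing sweet spot.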
Wire prediction in the RTL and gate-level domains suffers from the simplifying assumptions it relies on. Designers usually employ statistical wire-load models, and the interplay between these models and established RTL-synthesis tools generally produces one of two outcomes. If the model is pessimistic, the netlist produced will be a bloated, power-hungry implementation. If the model is optimistic, timing surprises will cause iteration downstream. Stumbling upon the model that's "just right" is as much a fairy tale as Goldilocks.
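A toy sketch of how a statistical wire-load model works, and why its bias matters: estimated wire capacitance is looked up by fanout, and a scale factor stands in for how pessimistic or optimistic the model is. All table values, the delay factor, and the function name are illustrative assumptions, not real library data.

```python
# Hypothetical statistical wire-load model: capacitance is a lookup on
# fanout (values in femtofarads, invented for illustration).
WIRE_CAP_BY_FANOUT_FF = {1: 5.0, 2: 9.0, 4: 16.0, 8: 28.0}

def estimated_wire_delay_ps(fanout: int, model_scale: float) -> float:
    """Wire delay estimate, assuming an illustrative 0.5 ps/fF drive factor."""
    cap_ff = WIRE_CAP_BY_FANOUT_FF.get(fanout, 28.0) * model_scale
    return cap_ff * 0.5

baseline = estimated_wire_delay_ps(4, 1.0)       # 8.0 ps -- the "true" wire
pessimistic = estimated_wire_delay_ps(4, 1.5)    # 12.0 ps -- synthesis over-buffers
optimistic = estimated_wire_delay_ps(4, 0.6)     # 4.8 ps -- timing surprise later
```

Synthesis sizes gates against the estimate, so the pessimistic model buys margin with area and power, while the optimistic one defers the problem to routing.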
Although any multifanout wire on a chip is really a tree of capacitive segments, most RTL and gate-level models use a simplified lumped-capacitance model rather than the more accurate RC-tree approach. Along the same lines, capacitance values are calculated from wire lengths rather than from extracted values that include coupling and other sources of capacitance.
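The lumped-versus-tree difference can be shown on a three-segment net with two sinks, using the classic Elmore delay for the tree case. The resistances and capacitances below are invented round numbers (ohms and femtofarads, so products are in femtoseconds); the structure, not the values, is the point.

```python
# Toy RC tree: driver -> trunk, which forks into branches to sinks A and B.
# All component values are illustrative.
R_TRUNK, C_TRUNK = 100.0, 10.0   # ohms, fF
R_A, C_A = 200.0, 20.0           # long branch to sink A
R_B, C_B = 50.0, 5.0             # short branch to sink B

# Lumped model: one resistance times all the capacitance -- a single
# number that cannot distinguish the near sink from the far one.
lumped_fs = (R_TRUNK + R_A) * (C_TRUNK + C_A + C_B)          # 10500.0 fs

# Elmore delay per sink: for each segment on the path, multiply its
# resistance by all capacitance downstream of it.
elmore_a_fs = R_TRUNK * (C_TRUNK + C_A + C_B) + R_A * C_A    # 7500.0 fs
elmore_b_fs = R_TRUNK * (C_TRUNK + C_A + C_B) + R_B * C_B    # 3750.0 fs
```

The tree model sees that sink B is twice as fast as sink A; the lumped model reports one pessimistic number for both, which is exactly the kind of systematic error the column describes.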
A new class of tools, loosely called design-planning tools, has been introduced specifically to alleviate timing-closure problems. But these tools introduce new sources of assumptions and modeling discontinuities of their own. Estimated routes, such as Steiner trees, are used for wire estimation, and these estimates often ignore congestion, design rules, and blockages. While claiming to answer the routability question, such tools can give false hope and optimistic capacitance values.
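One common pre-route estimate in this family is half-perimeter wirelength (HPWL), the bounding-box lower bound related to Steiner-tree length. The sketch below shows why such estimates are optimistic: HPWL is computed from pin positions alone, so any detour forced by congestion or a blockage can only make the real route longer. Pin coordinates are invented for illustration.

```python
# Half-perimeter wirelength: width + height of the pins' bounding box.
# It knows nothing about congestion, blockages, or design rules, so it
# is a lower bound on routed length (and hence on wire capacitance).
def hpwl(pins):
    xs = [x for x, _ in pins]
    ys = [y for _, y in pins]
    return (max(xs) - min(xs)) + (max(ys) - min(ys))

net = [(0, 0), (30, 10), (10, 40)]  # illustrative pin locations, microns
estimate = hpwl(net)                 # 70 -- a blockage detour adds to this
```

Capacitance derived from such an estimate inherits the same one-sided error, which is the "false hope" the column warns about.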
Module boundaries may be treated as hard and nonoverlapping during planning, while the actual implementation tools use overlap or bleeding schemes, or vice versa. On big chips, which often involve a hundred or more blocks, the cumulative error can be quite significant.
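A rough arithmetic sketch of why per-block discrepancies matter at chip scale, under the usual statistics assumptions: a systematic bias (every block mismodeled the same way) accumulates linearly with block count, while independent random errors grow only with the square root. The per-block numbers are invented for illustration.

```python
import math

# Illustrative per-block boundary-modeling errors (picoseconds).
n_blocks = 100
per_block_bias_ps = 5.0    # systematic: e.g., overlap always ignored
per_block_sigma_ps = 10.0  # random, independent block-to-block scatter

systematic_total_ps = n_blocks * per_block_bias_ps            # 500.0 ps
random_spread_ps = math.sqrt(n_blocks) * per_block_sigma_ps   # 100.0 ps
```

The linear term is the dangerous one: a bias too small to notice on any single block becomes half a nanosecond across a hundred blocks, which is why the planning and implementation tools need to share the same boundary assumptions.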
As a sanity check, I suggest documenting your implementation trail and the interconnect-modeling assumptions made along the way. Examine the relative size of each error and the implications of each assumption. Happy trails!
Contributed by Steve Carlson, vice president of marketing for Get2Chip, San Jose, Calif.