Watching a demo of electronic system level (ESL) tools from a current revenue leader took me back about 20 years to the first demos of mixed-mode schematic capture capabilities from Daisy, Mentor, and Valid -- DMV to those old enough to remember. I saw the same drag and drop of square blocks with port pins that represented off-the-shelf parts and the same push-down to bring up a text editor to code the functionality of custom logic. There was the same probing of signals from the graphical capture to a simulation waveform and the same difficulty in correlating textual code to simulation vectors.
The demo made me nostalgic, but left me wondering: Have we just come around in one big circle to where system-on-chip (SoC) designers today are no more than the board designers of yesteryear? If we consider the major functions of any of today’s design environments to be capture, stimulate, simulate, verify, debug, and transform, then the answer is yes. As a matter of fact, the demos I have seen seem to be missing the transform function.
This new ESL proposition postulates a two-model environment: one for high-level functional models and one for implementations. Although design capture tools could bridge these worlds by allowing a manual push-down into register transfer level (RTL), the environments have separate semantics, resource leverage, and reuse domains.
As long as this remains so, then ESL is going to have a tough time delivering adequate value to affect the cost profile of semiconductor R&D. After all, designers don’t write RTL code and then hand code a netlist, do they? If they don’t accept two designs -- RTL and netlist -- where one is not derived from another, should they accept that an ESL model would be unique and distinct from the implementation model?
It is incumbent upon EDA vendors to deliver a flow that can take an ESL design and automatically transform it into an RTL implementation, regardless of design type, while meeting all performance specifications.
What’s needed, then?
Under the ESL proposition, high-level models can serve different uses. There are multiple points on the design spectrum where tradeoffs must be made -- speed versus accuracy, and cost and time versus accuracy -- to support the objective at hand.
An objective that is garnering attention of late is "virtual platforms" for software testing. A definition of such a platform might include a fast CPU model (50 to 200 MIPS), accurate modeling of the chip's communication fabric, and reasonably accurate latencies for any co-processors, including accelerators.
With this use model, it is likely that the detailed modeling of the data movers of the chip will govern the ultimate speed at which the design will simulate. This means that tradeoff decisions determine how much time and energy are worth investing to create this model. Additionally, it’s a question of what point on the speed-versus-accuracy tradeoff spectrum is required to meet a specific objective of software testing.
The answer is going to vary from chip to chip, depending upon different target markets, the complexity of the software, whether or not there is an RTOS, and whether there are other programmable processors.
Assume that designers have to be quite accurate in modeling the data movers -- the direct memory-access (DMA) controller, cache controllers, and interconnect. Further, throw in heterogeneous traffic and simultaneous processing by various hardware units, each of which needs to receive and send data to the main memory. The cost of writing a conventional data model for software testing of such a chip would be too high, and the time to write and debug it would be prohibitive.
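To make the speed-versus-accuracy tradeoff concrete, here is a minimal, hypothetical sketch of how a loosely-timed data-mover model trades detail for speed: instead of simulating every bus beat cycle by cycle, a single function call models an entire DMA transfer and annotates an approximate latency. The class name, setup penalty, and bus width are illustrative assumptions, not any vendor's API or any real chip's numbers.

```cpp
#include <cassert>
#include <cstdint>

// Hypothetical loosely-timed DMA model. One host-level call stands in
// for what would be hundreds of simulated clock cycles in RTL, which
// is where the orders-of-magnitude simulation speedup comes from.
struct LooselyTimedDma {
    uint64_t elapsed_cycles = 0;  // approximate simulated time consumed

    // Model a burst transfer of `bytes` over an 8-byte-wide bus.
    // `setup_cycles` lumps together descriptor fetch and arbitration;
    // one cycle per beat approximates the data movement itself.
    void transfer(uint64_t bytes,
                  uint64_t setup_cycles = 20,
                  uint64_t bytes_per_beat = 8) {
        uint64_t beats = (bytes + bytes_per_beat - 1) / bytes_per_beat;
        elapsed_cycles += setup_cycles + beats;
    }
};
```

A 4-KB transfer under these assumed parameters costs 512 beats plus 20 setup cycles of simulated time, yet only one function call of host time; a cycle-accurate model would instead simulate each arbitration, address phase, and data beat individually.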
Years of experience have shown that the RTL code almost always diverges from the model in a two-model environment. That’s due to late changes, different interpretations in implementing the specification, and making detailed decisions in the RTL design that were purposely left out of the specification process.
As you might expect, schematic-capture ESL is not going to cut through this Gordian knot, though there are two possible solutions. One is to take the RTL code and, using tools from EDA vendors, create a higher-level model that accurately reflects that RTL code, but runs much faster than RTL speed.
The other is to use ESL synthesis to write at a higher level of abstraction and automatically generate the RTL code. The untimed behavioral model will run fast, and ESL synthesis compilers have been shown to create RTL code as good as that of a manual effort.
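To illustrate what "writing at a higher level of abstraction" can mean, here is a minimal, hypothetical sketch of an untimed behavioral model: a 4-tap FIR filter written as plain C++, the style of algorithmic code an ESL synthesis tool would transform into RTL. The function and tap values are illustrative, not taken from any specific tool flow.

```cpp
#include <array>
#include <cstdint>

// Untimed behavioral model of a 4-tap FIR filter. There are no clocks,
// registers, or handshakes here; an ESL synthesis tool would infer the
// shift registers, multiply-accumulate datapath, and control logic
// when generating RTL. The tap values are illustrative.
constexpr std::array<int32_t, 4> kTaps = {1, 3, 3, 1};

int32_t fir(std::array<int32_t, 4>& delay_line, int32_t sample) {
    // Shift the delay line and insert the new sample.
    for (std::size_t i = delay_line.size() - 1; i > 0; --i)
        delay_line[i] = delay_line[i - 1];
    delay_line[0] = sample;

    // Multiply-accumulate across the taps.
    int32_t acc = 0;
    for (std::size_t i = 0; i < kTaps.size(); ++i)
        acc += kTaps[i] * delay_line[i];
    return acc;
}
```

The model says only *what* the hardware computes per sample; decisions such as pipelining depth or resource sharing are left to the synthesis tool, which is exactly the division of labor the two-model problem otherwise forces designers to maintain by hand.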
This transformation from a higher level of abstraction to a lower level of detail meets the objective of creating a virtual platform for software testing. It allows the SoC designer to create one in less than half the time it would take to write a conventional model for the same level of accuracy.
ESL is not ESL unless it includes a transformation of the design, eliminates the two-model environment problem, and enables a seamless design spectrum across which to explore different considerations. It also moves the industry light years away from mixed-mode schematic capture. Isn’t it about time?