The debate about design at the electronic system level (ESL) seems to be in full swing again. Some claim there is too much “stuff” in ESL and basically suggest incremental approaches. Others say that today’s approaches aren’t visionary enough and are calling for the “real ESL to please stand up.”
I was just handed video footage of some ESL tools that were presented at the 2001 Design Automation Conference (DAC), now almost eight years ago. At that time, the EDA industry already had high-level synthesis from digital signal processing algorithms to register transfer level (RTL). Model-based design entry and optimization at that level were possible.
I remember preparing a design-flow demo for that same DAC in which one could reassign a function from hardware to software, and all the interfaces were automatically reassembled. The flow linked into RTL, so after synthesis one could see where the automatically assembled interfaces between hardware and software had ended up. So with the basic ESL technologies available, why didn't enough users show up to use them?
DIGGING UP CLUES
Let’s first check whether that perception is actually true. On a recent long overseas flight, I reviewed old market data and checked what happened to the predictions. Indeed, as the cynics point out, the predictions were great but never came true. Charting the data over time shows that the hockey-stick curve showing the ESL market explosion moves out year by year.
However, looking at it “sine ira et studio” (“without anger and zeal”), as my university microelectronics professor always suggested, the picture isn’t quite as catastrophic as it’s often portrayed. According to sales reported to EDAC, the ESL market was at about $200 million in 2007. It had grown from $120 million in 2003, a respectable 14% compound annual growth rate (CAGR). The tool categories accounted for in the EDAC ESL market included ESL design, ESL synthesis, ESL verification, and ESL virtual prototyping.
With respect to CAGR, ESL growth is beaten only by formal verification. At first sight, that’s not too shabby. However, another data point is the speed of growth. According to Gartner Dataquest data presented by Daya Nadamuni at DAC in 2005, EDA tool revenue for RTL-based tools grew from under $50 million in 1989 to about $300 million in 1993. While $200 million in itself isn’t too bad, the growth rules of earlier markets do not seem to apply to ESL!
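As a quick sanity check on those figures (a sketch using only the dollar amounts quoted above), the two compound annual growth rates can be recomputed directly:

```python
def cagr(start, end, years):
    """Compound annual growth rate from a start value to an end value."""
    return (end / start) ** (1 / years) - 1

# ESL per EDAC: $120M in 2003 to $200M in 2007, i.e. over 4 years.
esl = cagr(120, 200, 4)
print(f"ESL CAGR: {esl:.1%}")   # about 13.6%, the ~14% quoted above

# RTL tools per Gartner Dataquest: ~$50M in 1989 to ~$300M in 1993.
rtl = cagr(50, 300, 4)
print(f"RTL CAGR: {rtl:.1%}")   # about 56.5%, roughly four times faster
```

The gap between the two rates is the whole point: ESL is growing steadily, but nowhere near the pace the RTL market set in its first years.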
CAUSE AND EFFECT
So let’s rephrase the earlier question. With ESL basic technologies being available, why are the growth rates for ESL so different from the ones we have seen for the RTL market? There are many reasons.
First, software enters the mix, adding a new set of users with new price expectations. Second, there are profound differences between intellectual property (IP) development, i.e., the creation of the blocks in a design, and IP integration, i.e., the assembly of the different blocks into a system. Third, what qualifies as a system is a matter of perspective: pretty much every “system” becomes a component in a yet bigger system. The list of issues goes on.
But with respect to the original debate on whether incremental approaches or visionary big steps are the way to go, I am certain that an incremental approach is the right choice. Learning from my own mistakes in 10 years of ESL involvement, I am now firmly convinced that adoptability plays a key role. If the next step in the abstraction ladder cannot be reached incrementally, then it is not the next step.
At DAC back in 2001, to enable the benefits of switching functions from hardware to software, we had to ask users to abstract pretty much everything—their hardware, their software, and all the communications between blocks. We asked them to change just about everything that they still do today. This step was simply too big, even for highly desirable benefits.
A presentation on high-level synthesis the other day stated that on average, one line of C code created between 10 and 50 lines of RTL. This is a feasible, incremental step in abstraction. It directly matches the improvements we have seen in the past when transitioning from transistors to gates and later from gates to RTL. It can be digested by users.
However, I also saw a keynote early last year in which a chart about raising the level of abstraction went directly from millions of lines of RTL to hundreds of lines of transaction-level models (TLMs). That jump, three or more orders of magnitude less code, would be unprecedentedly large and is certainly not easily digested by users.
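A back-of-the-envelope comparison makes the mismatch concrete (the 10x-50x figure is from the presentation cited above; the RTL and TLM line counts are only the rough orders of magnitude mentioned in the keynote):

```python
def compression(from_lines, to_lines):
    """How many times less code the higher-level description needs."""
    return from_lines / to_lines

# C-to-RTL high-level synthesis: one C line becomes 10 to 50 RTL lines,
# so the step up in abstraction compresses the code by at most ~50x.
print(compression(50, 1))            # 50.0

# The keynote's RTL-to-TLM chart: millions of RTL lines down to hundreds
# of TLM lines, e.g. 2,000,000 to 200, a factor of 10,000 in one step.
print(compression(2_000_000, 200))   # 10000.0
```

Each previous abstraction step (transistors to gates, gates to RTL, C to RTL) stayed in the tens; the charted RTL-to-TLM step is hundreds of times larger than any of them.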
This brings us back to the difference between IP development and IP integration. TLMs work great for IP integration to create virtual platforms for pre-silicon software development, because the TLM definition focuses on how the different components are connected. As long as each block’s internals simulate fast enough, integrating TLMs via transactions only has to ensure that this simulation speed is maintained after the blocks are assembled.
In contrast, when it comes to automated implementation from high-level models, what matters most is the representation of the block itself at the higher level of abstraction, not its interface to the environment. Different language definitions compete here as design entry: some high-level synthesis solutions use ANSI C or synthesizable subsets of SystemC; others use M, Esterel, or even model-based design entry.
It is important to understand that a model for fast simulation needs different information than a model for implementation. This means that there may not be a “single model” for both, as the respective next steps for implementation and integration of blocks may be different. We’ll see.
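A toy sketch can illustrate why the two model styles carry different information. The class names and the burst-read protocol below are invented for illustration, not taken from any real TLM library: both models return the same data, but the transaction-level one consumes a single simulation event per burst, while the cycle-level one carries the per-clock handshake detail an implementation flow would need:

```python
class TlmMemory:
    """Transaction-level model: one call per burst, no per-cycle detail."""
    def __init__(self, contents):
        self.contents = contents
        self.events = 0  # simulation events consumed

    def burst_read(self, addr, length):
        self.events += 1  # the whole burst is a single transaction
        return self.contents[addr:addr + length]

class CycleMemory:
    """Cycle-level model: steps through the protocol one clock at a time,
    keeping the handshake detail that implementation needs but that only
    slows down a virtual platform."""
    def __init__(self, contents, setup_cycles=2):
        self.contents = contents
        self.setup_cycles = setup_cycles
        self.events = 0

    def burst_read(self, addr, length):
        data = []
        for _ in range(self.setup_cycles):  # address/handshake phases
            self.events += 1
        for i in range(length):             # one data beat per clock
            self.events += 1
            data.append(self.contents[addr + i])
        return data

mem = list(range(16))
tlm, cyc = TlmMemory(mem), CycleMemory(mem)
assert tlm.burst_read(4, 4) == cyc.burst_read(4, 4) == [4, 5, 6, 7]
print(tlm.events, cyc.events)  # 1 vs. 6 events for the same burst
```

Functionally the models agree, but they are not interchangeable: strip the cycle detail and the model is useless for implementation; keep it and the virtual platform is too slow. That is exactly why a “single model” for both may not exist.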
In any case, for ESL tools and methodologies to find adoption, incremental adoptability will remain a key issue and must not be overlooked.