Good Or No Good? An Insider Look At What Works For ESL

Dec. 15, 2006
The architects of today's most advanced system-level SoC design flows weigh in on models, interoperability, and their wish lists for the future.

Electronic-system-level (ESL) design flows are for real. They're in use right now at some of the world's largest systems houses, and with those flows, chips are being taped out and put into production. So obviously, ESL flows can be made to work. But does this mean that everyone is happy with the state of the ESL art? In a word, no. ESL design, which comprises any tools, languages, models, or methodologies that operate at a level of abstraction higher than the register-transfer level (RTL), is slowly taking shape as users continue to shake out the available resources. Standards organizations such as the Open SystemC Initiative and the SPIRIT Consortium are moving toward a set of standards related to intellectual-property (IP) integration.

Alas, ESL design still isn't for the faint of heart. For systems-on-a-chip (SoC) design teams, many bumps lie in the road between a high-level functional design description, a sheaf of IP-block datasheets, and a finished design that comes out of the fab on time and under budget.

ESL design flows and methodologies don't just appear out of thin air, but rather someone is responsible for putting them together and ironing out the wrinkles. In this report, the architects of ESL flows at six large systems houses will spill the beans about their methodologies. They'll discuss what works for them, where the holes exist in their flows, and what they'd like to see happen to make ESL really fly. Take heed, EDA vendors: Consider this an open letter from your ESL constituents.

ESL AT EMULEX Emulex Corp., based in Costa Mesa, Calif., is a maker of enterprise-level storage-area network (SAN) technology, and Terry Doherty, principal engineer, is responsible for Emulex's ESL efforts. Emulex has used an ESL approach on three SAN-controller projects to date, one of them a second-generation device.

Emulex's ESL methodology centers on building SystemC hardware models and using them to explore and refine the algorithms meant for SAN traffic management. "Our cycle-approximate SystemC models are accurate at the interfaces but not necessarily accurate internally," says Doherty.
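The distinction Doherty draws can be sketched in a few lines of plain C++ (not Emulex's actual SystemC code; the class name and latency figures below are invented for illustration). Timing is charged only at the model's interface, while the internals stay purely functional:

```cpp
#include <cstdint>
#include <queue>

// Illustrative sketch only: a cycle-approximate model of a SAN traffic
// queue. Each interface call advances an approximate cycle count; the
// internal queueing logic is untimed, functional code.
class TrafficQueueModel {
public:
    // Hypothetical per-transaction latencies, stated at the interface only.
    static constexpr uint64_t kEnqueueCycles = 2;
    static constexpr uint64_t kDequeueCycles = 3;

    void enqueue(uint32_t frame_id) {
        frames_.push(frame_id);
        cycles_ += kEnqueueCycles;   // interface-accurate timing
    }

    bool dequeue(uint32_t& frame_id) {
        if (frames_.empty()) return false;
        frame_id = frames_.front();  // internals: no cycle-by-cycle detail
        frames_.pop();
        cycles_ += kDequeueCycles;
        return true;
    }

    uint64_t cycles() const { return cycles_; }

private:
    std::queue<uint32_t> frames_;
    uint64_t cycles_ = 0;
};
```

In a real SystemC flow the same idea would be expressed with `sc_time` annotations at the ports, but the tradeoff is identical: fast, approximate timing where it matters for architecture decisions, and no commitment to internal microarchitecture.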

A continuing issue for Doherty and Emulex is the validation of RTL against the SystemC hardware models and vice versa. "Are the assumptions made in the models valid? Does the RTL accurately reflect the model? These things are still very difficult to prove," says Doherty.

Emulex is weighing the merits of two approaches to this dilemma. One is to adopt SystemC assertions. The other is to employ a code-conversion tool that converts RTL to SystemC, then to run the resulting models in its high-level modeling environment (Summit Design's System Architect, Visual Elite, and Vista).

Among the code-conversion tools Emulex has evaluated is Carbon Design Systems' VSP. "Carbon gives us a pretty good speedup, but I'm not 100% convinced it's fast enough to do the job," says Doherty.

Another issue with the code-conversion approach is that any remaining bugs in the RTL will end up in the system-level model. "That's why, in the long run, SystemC assertions may be more promising," says Doherty. "We can insert them into the models and bring them into the application-specific integrated-circuit (ASIC) simulation environment to check against throughout the process."
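The appeal of Doherty's assertion-based approach is that the check travels with the model. A minimal plain-C++ sketch of the idea (invented class and invariant, standing in for a SystemC assertion on an interface protocol):

```cpp
#include <cassert>

// Sketch only: a protocol invariant embedded directly in a model, in the
// spirit of SystemC assertions. Because the check lives inside the model,
// it fires whether the model runs in the system-level environment or is
// dropped into the ASIC simulation environment alongside RTL.
class CreditPort {
public:
    explicit CreditPort(int max_credits)
        : credits_(max_credits), max_(max_credits) {}

    void send() {
        // Invariant: never send without an available credit.
        assert(credits_ > 0 && "protocol violation: send with no credits");
        --credits_;
    }

    void credit_return() {
        ++credits_;
        // Invariant: credits never exceed the configured maximum.
        assert(credits_ <= max_ && "protocol violation: credit overflow");
    }

    int credits() const { return credits_; }

private:
    int credits_;
    int max_;
};
```

An RTL testbench driving the same interface against this model would trip the identical assertions, which is exactly the model-versus-RTL cross-check Doherty describes.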

FREESCALE'S FLOW For most ESL adopters, the technology serves to address the challenges posed by growing design complexity. Such is the case at Freescale Semiconductor's Wireless Modem Products Division, where Ryan Bedwell is system-level design manager.

"The complexity of the systems and software precludes use of a serial flow," says Bedwell. Thus, Freescale's philosophy toward ESL emphasizes concurrency (Fig. 1). Freescale's flow begins with performance models of the hardware blocks for a quick analysis of the architecture in terms of latencies, throughput, queue disciplines, shared resource contention, and power consumption.
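A first-cut performance model of the kind Bedwell describes can be very small. The sketch below (not Freescale's model; the function and parameters are invented) estimates the effect of shared-resource contention: two masters issue requests to one bus, and later requests stall until the bus frees up:

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// Illustrative sketch: a toy performance model of masters contending for
// one shared bus. A request is (start_cycle, length_in_cycles); the bus
// serves requests first-come-first-served, so overlapping requests stall.
struct Request {
    uint64_t start;
    uint64_t length;
};

// Returns the cycle at which the last request completes.
uint64_t simulate_shared_bus(std::vector<Request> reqs) {
    std::sort(reqs.begin(), reqs.end(),
              [](const Request& a, const Request& b) { return a.start < b.start; });
    uint64_t bus_free = 0;  // cycle at which the bus next becomes free
    for (const Request& r : reqs) {
        uint64_t grant = std::max(r.start, bus_free);  // stall while bus busy
        bus_free = grant + r.length;
    }
    return bus_free;
}
```

Even a model this crude answers the architect's early questions: with requests at cycles 0 (length 10) and 2 (length 5), the second master finishes at cycle 15 instead of 7, exposing eight cycles of contention stall before any RTL exists.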

As the design gels, the performance models must be gradually extended into a functional executable specification that serves as a system-level virtual platform. "It's important to push the use of untested software on an untested hardware model, or the 'double-blind bringup,' as early as possible," says Bedwell.

That executable specification is assembled starting with areas that are of concern from an architectural standpoint. Its function is architectural validation as well as early firmware/read-only-memory (ROM) code development, hardware/software co-verification, and RTL co-simulation. "The executable spec is a timed model, maybe cycle-approximate," says Bedwell. (Some refer to cycle-approximate models as Programmers' View with Timing, or PVT.)

In addition to its design-related uses, the executable spec serves as the foundation for a "fast virtual prototype," which is a "productized" version of the executable spec suited for distribution to software developers. The fast virtual prototype has most of the timing information stripped out to increase simulation speed. It also includes a user interface with which software coders can execute their latest builds.
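One common way to get a "timing-stripped" fast prototype from the same source as the timed executable spec is a build-time switch. The sketch below (invented names and cycle costs, not Freescale's code) keeps the functional behavior in both builds and compiles the timing annotations away when `FAST_PROTOTYPE` is defined:

```cpp
#include <cstdint>

// Illustrative sketch: one model source serving both the timed executable
// spec and the fast virtual prototype. Building with -DFAST_PROTOTYPE
// strips the timing annotations, trading accuracy for simulation speed.
#ifndef FAST_PROTOTYPE
#define ANNOTATE(cycles) total_cycles_ += (cycles)
#else
#define ANNOTATE(cycles) ((void)0)  // timing removed; behavior unchanged
#endif

class DmaModel {
public:
    void transfer(uint32_t words) {
        ANNOTATE(4 + words);       // hypothetical setup + per-word cost
        words_moved_ += words;     // functional behavior kept in both builds
    }

    uint64_t words_moved() const { return words_moved_; }
    uint64_t total_cycles() const { return total_cycles_; }

private:
    uint64_t words_moved_ = 0;
    uint64_t total_cycles_ = 0;
};
```

Software developers running the fast build see identical functional results, just without the simulation cost of bookkeeping every cycle.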

Yet another model that can be derived from the executable spec is a detailed performance-analysis model. While the fast virtual prototype satisfies the needs of about 95% of the overall software development effort, the detailed model is used to create and optimize "that 5% of really critical code," as Bedwell puts it.

Bedwell cites the need for architects to embrace the use of high-level models for making decisions. "Architects should see their fledgling ESL flows as a new tool for gaining insight," says Bedwell.

To make life easier, Bedwell also has a wish list he'd like filled. From IP vendors, he'd like to see transaction-level models (TLMs) become a standard part of IP deliverables. "I think untimed and cycle-accurate views are required. I'd also like an approximately timed view, but it's harder for everyone to agree on just what that means," he says.

A key wish-list item for Bedwell is the adoption of standard application-programming interfaces (APIs). "I like SystemC and what's come out of the Open SystemC Initiative (OSCI) and its TLM Working Group. But we need more. We need standards at a higher level of abstraction, and we need much faster development of new interfaces," says Bedwell.

From EDA vendors, Bedwell calls for open APIs. "We do not want to be locked into a specific vendor," says Bedwell. "When we write code, it's got to be movable. We can't work in a single-vendor environment."

Vendors, Bedwell opines, should "find differentiators that work within the context of open standards, fill some of our needs without locking us in. That might not be what they want to hear, but that's what we really need."

NOKIA POINTS TO MODELING For Finnish telecommunications giant Nokia, an ESL flow serves three main goals. "There's more software content in our products than ever, and the software developers can't wait for hardware delivery," says Tommi Mäkeläinen of Nokia's Research Center. Like most ESL users, Nokia also adopted its ESL methodology to reduce overall product development time. Moreover, the company understands the growing need for a reusable platform that enables scaling of functionality.

In Mäkeläinen's view, "modeling is the cornerstone of the whole thing" where successful deployment of ESL is concerned. "It's fair to say that it flies or doesn't fly with the availability, interoperability, and openness of models."

Nokia is an active member of the Open Core Protocol International Partnership (OCP-IP), an organization that promotes standards surrounding automatic generation of bus interfaces for IP. Nokia also is investigating more active involvement in IP-standards efforts with OSCI and/or the SPIRIT Consortium.

A long-term goal for Nokia's modeling efforts is to move to an approach based on the Extensible Markup Language (XML), in which metadata (literally, "data about the data") is the key to the model. This would make the models inherently more portable between tools and flows, as well as more manageable and scalable. "So for a SystemC model, you'd actually have an XML-based description of the model interface, its contents, and other information, and then you could generate the model from that metadata," says Mäkeläinen. "Such a scheme would eliminate the need for different kinds of models for different architectures or compilers."

That direction would tend to explain Nokia's exploration of membership in SPIRIT, whose activities center on establishing standards for IP metadata. For Mäkeläinen, the issues around the lack of standards for IP models are paramount.
"It's better that we (IP consumers) address the issues around the IP itself and let the EDA vendors resolve the tool issues, as opposed to trying to dictate to the vendors what kind of features the tools should have."

It's also critical for Mäkeläinen that models retain their equivalence through various levels of abstraction. Nokia is working to build an architectural-exploration environment in which mixed abstraction will work efficiently.

SAMSUNG SEEKS SPEED As a major global player in consumer products, Samsung Electronics is sorely in need of ESL's overall benefits. Ever-larger SoCs, shrinking market windows, cost pressures, and growing software content have conspired to make ESL a must for the Korean manufacturer. "We need to overcome the rising cost of design and the accelerating falloff in the price of consumer electronics," says Soo-Kwan Eo, senior vice president at Samsung's SoC R&D Center. "To reduce costs, we have changed our design paradigm by moving to higher levels of abstraction."

Samsung adopted SystemC as its primary ESL design language, although it also uses C++ in its flow. Eo likes SystemC, particularly for behavioral synthesis and for performance evaluation. In the latter case, Samsung uses cycle-accurate models to gain higher correlation between TLMs and RTL. The problem there is the duplication of effort required to arrive at those cycle-accurate models.

And, says Eo, the SystemC simulations still aren't fast enough. "I urge EDA vendors to enhance simulation speed. For over 20 years, faster simulation has meant sacrificing accuracy. The tradeoff between accuracy and speed may be true, but we have to move up the slope."

In addition to new technology to boost simulation speed, which he believes must come in the simulation kernels themselves, Eo calls for common modeling guidelines. "The vendors should provide general information about how to model in a way that will increase the tools' performance," says Eo.
"Then more users will adopt ESL methodologies."

Samsung, like other ESL adopters, has benefited from the concept of virtual platforms, which the company refers to as its ViP methodology. "We are adding an architectural-level power estimator to our ViP technology," says Eo. Power estimation will eventually be extended to power optimization.

The company plans to further extend its ViP framework to the embedded-software area. Samsung's roadmap for the technology includes development of a Platform Explorer, Platform Integrator, and Platform Verifier. Together, these elements will form a single ViP environment for rapid exploration of architectural variants (Fig. 2). Before RTL design, Samsung's engineers will be able to use the ViP environment for architecture exploration, performance analysis, embedded-software development, platform integration, and functional verification. Eventually, they'll arrive at a "golden" reference for use throughout the development cycle.

The types of models used by Samsung in its ViP environment depend on a given simulation run's intent. "If we are trying to simulate functionality, it's better to use C++ models," says Eo. "For architectural exploration, where we have to measure actual clocks, it must be done with SystemC models for cycle accuracy."

In addition to simulation improvements, Eo's wish list for upgrades to his ESL flow includes a library of models for use in power estimation and optimization. Tool interoperability is high on Eo's list as well. He's aware of the efforts of the SPIRIT Consortium in the area of tool integration and IP reuse, but he points out that the organization's efforts at this time are in the RTL domain. "We have to work together to accelerate the practical availability of interoperable tools," says Eo. "If we accept the EDA vendors' proprietary approaches, we're forced to use many different tools, making the cost unacceptably high."
Eo's last wish-list item is a unified hardware/software co-design and co-verification environment. Samsung is taking this track with its ViP technology.

STMICRO GOES FORMAL For Pascal Urard, manager of the high-level synthesis group within STMicroelectronics' Central CAD organization, there are two basic styles of ESL methodology. One is model-based design, which involves teams of IP developers and separate teams of IP integrators. Such flows require preexisting libraries of IP blocks, and any change in process technology requires recreation of the low-level hardware. "Unfortunately, there is still no automated way to go from higher-level models to lower-level representations for model-based design," says Urard.

The other style revolves around behavioral synthesis, which begins with high-level models that are synthesized to RTL. Urard has found behavioral synthesis useful in the implementation of new algorithms. "Once the algorithm is synthesizable with some set of constraints, we can explore many different variants automatically," says Urard. Within its flow, STMicroelectronics can handle designs of 300 kgates or larger.

For its model-based flow, STMicroelectronics painstakingly created a way to formally prove functional equivalence between behavioral models and RTL within its Matlab-2-RTL flow (Fig. 3). STMicro has applied this flow to some very complex signal-processing ICs. The flow starts from fixed-point Matlab code of an algorithmic function as well as generic RTL representations of the same functions. The first step is to ensure through simulation that the Matlab and RTL descriptions behave identically. Then the flow moves on to formal proof of equivalence for a given set of parameters.

TI DREAMS BIG "We have a dream that we could simulate extensively at the transaction level," says Loic Le Toumelin, worldwide director of SoC Methodology Process and Tools for the cellular system organization within Texas Instruments.
"We don't do so extensively at this time." However, Texas Instruments does make extensive use of virtual platforms. "Some developers create models that only work with certain tools. For us, this is a critical miss in this domain," says Le Toumelin. "We dream of models that operate in several tool environments." Even when there's some interoperability, explains Le Toumelin, a change in environments can affect model performance. "Linking modeling styles to simulation tools is a huge problem."

Le Toumelin has investigated several behavioral-synthesis options for a top-down approach that implements new algorithms for signal processing or other tasks. "Different tools bring different advantages: Some start from SystemVerilog, which is close to traditional hardware, while others start from ANSI C, which is well adapted for implementation of new algorithms," he continues. "Still others leverage rule-based synthesis, and others provide links to formal verification."

But Le Toumelin still sees major limitations in behavioral synthesis. You need different tools for different design styles (one for control logic, another for algorithmic synthesis). You must partition the architecture into control-logic and algorithm portions. And a behavioral-synthesis approach means starting from many different abstraction levels. For Le Toumelin, behavioral-synthesis flows also lack incremental synthesis to accommodate engineering change orders (ECOs).

There also are problems with links to implementation. "It's very difficult at high levels of abstraction to implement things like power management, design-for-test circuitry, and the like," says Le Toumelin. "These requirements translate into multiple clocks, multiple power-supply domains, and scan-path connections, all of which are added at RTL or the gate level. There's no link between high-level synthesis and our back-end flow."
Le Toumelin's dream is a flow that automatically generates what he calls a "machine-readable spec" for a design. "You'd have a GUI in which you can specify that your platform will have a given processor," he says. "The flow would then automatically generate all views needed for detailed implementation. It would output RTL, XML, SystemC models, and perhaps, someday, English-language documentation."

Putting together such a flow is an extensive and ambitious effort, but one that TI has already moved toward. "We already have some of the pieces," says Le Toumelin. Making all of this happen, of course, is contingent on wide availability of high-quality, reusable IP that comes packaged with all of the pertinent SPIRIT-standard XML metadata. TI has signaled its interest in supporting standards bodies like SPIRIT with technology donations. "We believe we can improve the SPIRIT metadata definition and bring some new contributions to the XML schema, which we plan to do soon."

WHERE TO GO FROM HERE? One concern dominates among these SoC methodology architects: models and standards. Organizations including the SPIRIT Consortium and OSCI are working on standards that, together, should help address those concerns. The SPIRIT Consortium, which recently incorporated as a non-profit organization after beginning as a loose cooperative of EDA vendors and IP providers, is striving to create a set of IP and tool-integration standards to promote IP reuse. One sign of progress is the evolution of SPIRIT's IP-XACT metadata specification from RTL support up into the ESL domain (Fig. 4). OSCI's Transaction-Level Modeling Working Group is readying its TLM 2.0 specification, which will contain modeling guidelines and descriptions that help enable SPIRIT's efforts to describe IP at the transaction level in terms of metadata.
The specification provides data structures, classes, and an API for generic modeling of on-chip buses or network-on-chip transport mechanisms.
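The generic-payload idea can be sketched in plain C++. OSCI's TLM library provides a generic payload class and a blocking-transport interface along these lines; the simplified names below are illustrative stand-ins, not the standard's actual API:

```cpp
#include <cstdint>
#include <map>

// Rough sketch of a generic bus transaction: one payload structure carries
// command, address, and data for any memory-mapped target, so initiator
// and target models need agree only on this one interface.
enum class Command { kRead, kWrite };

struct GenericPayload {
    Command  command;
    uint64_t address;
    uint32_t data;
    bool     ok = false;  // response status, filled in by the target
};

// A trivial memory-mapped target exposing a blocking transport call.
class MemoryTarget {
public:
    void b_transport(GenericPayload& p) {
        if (p.command == Command::kWrite) {
            mem_[p.address] = p.data;
            p.ok = true;
        } else {
            auto it = mem_.find(p.address);
            p.ok = (it != mem_.end());
            if (p.ok) p.data = it->second;
        }
    }

private:
    std::map<uint64_t, uint32_t> mem_;
};
```

The point of standardizing such a payload and transport call, per the article, is interoperability: any initiator model can drive any target model without adapters, which is precisely what SPIRIT-style metadata then describes at the IP-package level.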
