The system-on-a-chip (SoC) era has reached the point where the assembly of such large, complex chips seems almost rote. From a high level, it would appear to be a formulaic process: choose a processor, choose a bus, bring together your memories and various peripherals, and that's about it. But integrating semiconductor intellectual property (IP)—the large functional blocks that make up the major elements of an SoC—can indeed be a daunting task.
When contemplating what steps to take to simplify IP integration, three things come immediately to mind. For one, it certainly helps if IP is packaged and described in a standardized fashion. Then the broadest possible range of EDA tools can readily accept it and transfer those descriptions across the design flow. For another, IP blocks must be able to communicate with each other in the system context. Lastly, the quality and pedigree of IP blocks are critical to design engineers (see "The Benefits Of Quality IP" at ED Online 12299 at www.electronicdesign.com).
In this article, we'll take a look at these aspects of IP integration. There's been movement of late in the standards arena, as well as an upsurge in IP quality. We'll look at the activities of the key organizations attempting to bring about industry consensus on IP standards. New choices abound in terms of tools, methodologies, and architectures for the interconnect, which is the lifeblood of an SoC. We'll also look at IP from the perspective of an implementation flow and the special challenges it can pose (see "IP From The Implementation Perspective" at ED Online 12300).
THE IMPORTANCE OF STANDARDS
In any industry, standards lend consistency to the proceedings. Having everyone speak the same language, and use the same terms and concepts to describe the fundamental building blocks and processes that go into putting a product together, is of inestimable importance.
"Using standards gives you confidence that any IP you buy works for that standard," says Keith Clark, vice president of technical marketing at ARM. "It's at least been validated against something that's known. There's that natural advantage, as well as the fact that you can hope there's more IP available for any given standard."
IP standards have come a long way since the earliest days of the merchant-IP market, when formats were plentiful and disagreements raged on how IP should be packaged. Much of the progress in IP standards has come from three key organizations: the Virtual Socket Interface Alliance (VSIA), the Open Core Protocol-International Partnership (OCP-IP), and "Structure for Packaging, Integrating, and Re-using IP within Tool flows," or SPIRIT.
All three organizations come at the goal of easing IP integration for designers a little bit differently, but all manage to avoid working at cross-purposes. In fact, in many cases, their respective efforts are highly complementary. Taken as a whole, all three groups' efforts aim to build out a complex infrastructure for IP integration that requires a great deal of cooperation between IP providers, EDA tool vendors, foundries, integrated device manufacturers (IDMs), and, most critically, design engineers (Fig. 1).
Since its inception in 1996, VSIA's mission has been to develop the technical standards required to enable the mixing and matching of IP cores from multiple sources. In the words of Gary Delp, VSIA's chief technology officer (and an LSI Logic Distinguished Engineer who's part of the RapidChip team), VSIA strives to "put together a common vocabulary that can be used to discuss, interconnect, and compare IP."
A 2004 reorganization of VSIA restructured the group around what it calls "pillars," or working groups. VSIA's IP Quality Pillar recently announced the release of its QIP Metric v2.0, which can sharply reduce the time typically required to make an IP purchase decision and to integrate the core. The QIP Metric tool is the result of an extensive beta program involving a number of leading EDA and semiconductor companies.
The metric helps IP vendors and consumers communicate on an objective foundation. Besides establishing the basis for measuring a core's characteristics against an industry-approved list of attributes, the metric provides a view of the IP vendor's general approach to IP development. This enables a continuous-improvement mechanism, levels the playing field for vendors, and allows an integrator to evaluate similar cores from competing vendors.
QIP Metric version 2.0 is easier to use and more streamlined than its predecessor. This version also features simpler IP-qualification metrics covering documentation, deliverables, information specific to the IP integrator, and IP development practices. It includes the newly added vendor assessment, too, and the requirements for soft IP were revisited and restructured.
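The flavor of such a metric can be sketched as a weighted, checklist-driven score. The category names below echo those mentioned in the article, but the weights, checks, and scoring scale are invented for illustration; the real QIP Metric defines its own attributes and aggregation.

```python
# Hypothetical sketch of a QIP-style weighted quality score. Category
# names mirror those in the article; weights and the 0-100 scale are
# illustrative assumptions, not the actual VSIA metric.

CATEGORIES = {
    # category: (weight, list of yes/no check results)
    "documentation":         (0.20, [True, True, False, True]),
    "deliverables":          (0.25, [True, True, True, True]),
    "integrator_info":       (0.15, [True, False, True]),
    "development_practices": (0.25, [True, True, False]),
    "vendor_assessment":     (0.15, [True, True]),
}

def qip_score(categories):
    """Aggregate per-category pass ratios into one weighted score (0-100)."""
    total = 0.0
    for weight, checks in categories.values():
        total += weight * (sum(checks) / len(checks))
    return round(100 * total, 1)

print(qip_score(CATEGORIES))  # 81.7
```

The value of this style of scoring is less the number itself than the shared checklist behind it: two vendors scored against the same attribute list become directly comparable.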
"We're working with the Fabless Semiconductor Association on extending the QIP Metric to hard IP," says Delp. The hard-IP metric is expected to see a beta release this summer, after which it will be made publicly available. There's also ongoing work on a quality metric for verification IP and software.
TAG, YOU'RE IT
A standardized set of IP quality metrics that's agreed upon across the IP supply and tool chain would go a long way toward ensuring consistency. The next step, according to Delp, is an IP-deliverables checklist. LSI Logic's internal design teams found two major issues to be a constant hindrance in IP integration.
"You spend a lot of time finding all the deliverables and communicating back and forth on making sure you have the right ones and in the right versions," says Delp. "The other problem is getting a license agreement in place. Those things are part of the context and should be simple."
As part of LSI Logic's engagement with both VSIA and SPIRIT, the company assembled a list of IP deliverables and is working with VSIA and SPIRIT to put them together in electronic databook form. Having the deliverables in an orderly format will help engineers to keep track of what's supposed to be there and whether it's there or not.
VSIA and SPIRIT are cooperating to ensure that commercial IP is described in a standards-compliant way. "Another piece of work for VSIA is IP tagging," says Delp. This is the work of VSIA's IP Protection Pillar.
The soft IP tagging standard takes identification information from the IP source file and provides a process for passing this information through each chip-level design step, including synthesis, timing, placement, wiring, and the other steps leading to the chip's GDSII generation. This information can include the core's identity, vendor ID, and, most importantly, version information. It enables a chip designer to explicitly identify a piece of soft IP even after it's absorbed into the overall "sea of gates" of the chip.
IC designers, semiconductor foundries, VC providers, and EDA tool manufacturers can use the methods in the standard to track identification information throughout each level of the chip-development process. At each level, tracking information is obtained from the previous level and transported to the next level using the appropriate output format. This makes the information independent of the design methodology, design tools, and EDA provider.
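The pass-through behavior the standard requires can be sketched in a few lines. Everything here (the class names, the `run_step` helper, the vendor IDs) is hypothetical; the point is only that each flow step must re-emit the tags it receives in its own output format so identity and version survive to GDSII.

```python
# Illustrative model of soft-IP tag propagation. The VSIA standard
# defines the real formats; this sketch only shows the invariant:
# tags ride through every step of the flow unchanged.

from dataclasses import dataclass

@dataclass(frozen=True)
class IPTag:
    vendor_id: str
    core_name: str
    version: str

def run_step(step_name, tags):
    """Stand-in for a flow step: transform the design, pass tags through."""
    # A real tool would translate each tag into its native output format
    # (e.g., RTL attribute -> netlist property -> GDSII text label).
    return list(tags)  # tags preserved, never dropped

tags = [IPTag("0xA3", "usb_ctrl", "2.1"), IPTag("0x7F", "ddr_phy", "1.0")]
for step in ["synthesis", "timing", "placement", "wiring", "gdsii"]:
    tags = run_step(step, tags)

# After the whole flow, every core is still identifiable by version.
assert any(t.core_name == "usb_ctrl" and t.version == "2.1" for t in tags)
print([t.core_name for t in tags])
```

The fragile part in practice is exactly what Delp describes next: every tool in the chain has to honor the equivalent of `run_step`'s pass-through contract, or the tags vanish mid-flow.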
The main stumbling block with soft tags is that not all design flows produce, protect, and preserve the tags through to the GDSII flow. "One of the activities of VSIA is to encourage the EDA vendors, and to raise designers' sensibilities, so that we as users are all asking for a consistent set of deliverables," says Delp. "Then there's a groundswell for something universal, a common denominator."
MOVED BY THE SPIRIT
SPIRIT is another organization devoted to automating the process of including IP in designs. "SPIRIT is becoming a powerful force, not because of advanced technology, but because of an agreement in the industry about how to interface and describe IP," says Victor Berman, group director of language standards at Cadence Design Systems.
"It's about agreement about how to do delivery and the characteristics that you want to have automatically recognized by your design system. It's also about having design environments that can import a very large cross-section of IP and understand what it can do and then very quickly perform a functional verification," says Berman.
In terms of its progress toward establishing standards that will actualize its vision, "SPIRIT is in an interesting place," says Berman. The big milestone is the release of the 1.2 specification, scheduled for the end of this month (Fig. 2).
"We'll transfer that into an IEEE working group that we've already organized and that I'll be heading. It's called P1685, the SPIRIT IEEE-standard working group," says Berman.
SPIRIT plans to formalize its 1.2 specification, which is basically its RTL specification, as the first real product coming out of SPIRIT. It's hoped that the IEEE standardization will be completed around the end of 2006.
"The exciting thing about it is that by automating that process (of integrating IP), you encourage people to build tools to do this," says Berman. "So I expect in the next year to see all the major EDA companies have SPIRIT-enabled design environments."
Such tools won't replace verification or back-end tools. Rather, they will use the ability to recognize SPIRIT-enabled IP as an organizing principle. "If you're smart," says Berman, "you can start automating the gathering of data you need in a database to get from a high-level ESL design down to GDSII."
It'll be critical for design flows that handle SPIRIT-packaged IP to facilitate a two-way flow of information, says Drew Wingard, chief technology officer at Sonics.
"Information comes with the IP, and the EDA tools can then absorb the information," explains Wingard. "But the tools can't actually output information in SPIRIT form. And that is a big barrier, because it prevents you from hopping between best-in-class tools at each stage in the SPIRIT flow.
"One of the requirements is that SPIRIT-enabled tools have to become parts of what I would call a re-entrant flow, where the tools can use the SPIRIT information, but can also export equivalent SPIRIT information on the way out so that you can choose the best tools at each step in the flow."
That raises the question of how much SPIRIT-enabled IP is available on the market today. "Because the specifications are firming up, we're working on getting our IP SPIRIT-compliant right now," says ARM's Keith Clark. "Our intention is to have all of our IP with SPIRIT views verified as soon as possible. We're on the path to doing that, and we absolutely support it."
CONNECTING THE DOTS
A third aspect of IP standardization centers on OCP-IP's efforts to standardize the core interface between IP blocks. Based largely on a technology donation from Sonics, the Open Core Protocol operates from the premise that it's less important to standardize the IP blocks themselves than it is to isolate, or decouple, them from each other when building the interconnect.
"What we found was a really nice model in the world of networking," says Sonics' Wingard. "If you can isolate the blocks from each other, they can become more independent of each other. When you put the chip together, it's far less likely that you'll need to violate the assumptions that the blocks were designed with."
OCP-IP has gone a long way toward building out a complete infrastructure around the Open Core Protocol. This includes verification IP from Mentor Graphics, English-language compliance checkers, environments for validating OCP implementations, and more. The organization also offers training and technical support for OCP adopters.
One recent example of OCP-IP infrastructure building is JEDA Technologies' donation of an assertion-driven, SystemC-based OCP compliance checker to OCP-IP. The checker, free to all OCP-IP members, is implemented based on the compliance checks released by OCP-IP.
JEDA's OCP checker is constructed using JEDA's NSCa, a native SystemC assertion development, runtime, and debug environment. The checker can be plugged into an existing SystemC modeling or verification framework with minimal effort.
During a simulation, the checker monitors OCP interfaces, checks the protocol compliance, and reports violation conditions on-the-fly. In addition, the assertion summary coverage information can be used to measure a testbench's OCP protocol coverage. Users can download the free checker and a demo version of NSCa at www.jedatechnologies.net.
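A heavily simplified, assertion-driven monitor of this kind might look like the following sketch. The signal names and the single rule checked (a request must be held stable until the slave accepts it) are illustrative stand-ins, not the actual OCP compliance checks or JEDA's NSCa API.

```python
# Toy assertion-driven protocol monitor in the spirit of the SystemC
# OCP checker described above. It samples an OCP-like interface each
# cycle, flags violations on the fly, and tallies coverage.

IDLE = "IDLE"

class ProtocolMonitor:
    def __init__(self):
        self.pending = None          # request awaiting acceptance
        self.violations = []
        self.coverage = {"requests": 0, "accepts": 0}

    def sample(self, cycle, cmd, addr, accept):
        """Called once per clock cycle with the observed interface signals."""
        if self.pending is not None:
            # Rule: request fields must not change before acceptance.
            if (cmd, addr) != self.pending:
                self.violations.append(
                    f"cycle {cycle}: request changed before accept")
            if accept:
                self.coverage["accepts"] += 1
                self.pending = None
        elif cmd != IDLE:
            self.coverage["requests"] += 1
            self.pending = (cmd, addr)
            if accept:               # accepted in the same cycle
                self.coverage["accepts"] += 1
                self.pending = None

mon = ProtocolMonitor()
trace = [  # (cycle, cmd, addr, accept)
    (0, "WR", 0x100, False),
    (1, "WR", 0x100, True),    # held stable, then accepted: OK
    (2, "RD", 0x200, False),
    (3, "RD", 0x204, True),    # address changed before accept: violation
]
for c in trace:
    mon.sample(*c)
print(mon.violations)          # one violation, reported at cycle 3
```

The coverage counters play the same role as the checker's assertion summary: a testbench that never exercises certain request types will show it immediately.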
OTHER SCHEMES EMERGE
For some time, except for proprietary buses such as ARM's AMBA family (including AXI), Sonics and OCP-IP stood largely alone in their efforts to provide a framework for assembling disparate IP blocks. But other technologies have recently begun to emerge.
One example is the Network on Chip (NoC) technology from Arteris, a 2003 startup based in Paris. Arteris' approach to on-chip communication between the IP blocks that make up an SoC involves a packet-based scheme that attaches network interface units (NIUs) to each core (Fig. 3). Thus, much like the Sonics/OCP scheme, the IP blocks are decoupled from each other and need no conversion of any kind to connect to the NoC.
In the NoC architecture, NIUs are connected to the NoC through dual request and response networks. In the example given, the CPU performs a load-store operation to the DRAM controller (Fig. 3, again). Signaling from the CPU is converted by its NIU into the native Arteris protocol and packetized. The request is routed to the DRAM controller's own NIU, which depacketizes it and converts it to the native protocol of the DRAM controller. The controller fulfills the CPU's request and returns a packet to the CPU via the response network.
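The request/response round trip described above can be modeled in miniature. The packet format, class names, and routing below are invented for illustration; the real Arteris protocol and NIU behavior are far more involved.

```python
# Toy model of an NIU-based request/response flow: the CPU's NIU
# packetizes a load, the request network routes it to the DRAM
# controller's NIU, and the reply returns on the response network.

class NIU:
    """Network interface unit: converts core signaling to/from packets."""
    def __init__(self, core_id):
        self.core_id = core_id

    def packetize(self, dest, payload):
        return {"src": self.core_id, "dest": dest, "payload": payload}

    def depacketize(self, packet):
        return packet["payload"]

def route(network, packet):
    """Stand-in for the request or response network: deliver by dest id."""
    return network[packet["dest"]]

cpu_niu = NIU("cpu")
dram_niu = NIU("dram_ctrl")
network = {"cpu": cpu_niu, "dram_ctrl": dram_niu}

# CPU issues a load: its NIU packetizes the request...
req = cpu_niu.packetize("dram_ctrl", {"op": "load", "addr": 0x1000})
# ...the request network routes it to the DRAM controller's NIU...
op = route(network, req).depacketize(req)
# ...the controller services it and replies on the response network.
resp = dram_niu.packetize(req["src"], {"data": 0xCAFE})
result = route(network, resp).depacketize(resp)
print(hex(result["data"]))  # 0xcafe
```

Note that neither "core" ever sees the other's protocol: each talks only to its own NIU, which is exactly the decoupling the packet-based scheme is meant to buy.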
One of the more interesting aspects of the Arteris architecture is its use of a globally asynchronous, locally synchronous (GALS) paradigm. Islands of synchronicity are grouped together, with asynchronous links running between them. Local synchronous clusters are practical because most IP blocks are this size or smaller; they're of manageable size for timing convergence, and they may have unique voltages, clocks, or other local features. Meanwhile, point-to-point wires asynchronously connect the clusters.
The NoC uses a layered architecture with a physical layer, a packet-transport layer, and a transaction layer. The physical layer, which comprises the wires and clocks, makes timing closure possible within synchronous clusters and also handles the GALS communication between them.
To build a NoC, Arteris supplies two EDA tools. One, NoCexplorer, is an NoC architectural tool that lets system architects explore the parameters of the chip before the IP is qualified. Its inputs are a bill of materials for the NoC and the anticipated traffic pattern. With NoCexplorer, users can trade off parameters such as bandwidth, latency, gate count, wire efficiency, power, and quality of service. It can generate multiple design spaces to choose from even before all of the constituent IP blocks are complete.
The IP that makes up the NoC resides in the Arteris IP library. That library comprises only a few classes of IP, including NIUs, GALS links, and switches.
A second EDA tool, NoCcompiler, scales the IP blocks based on the design objectives. It connects the blocks and generates the RTL for the NoC, as well as synthesis scripts for RTL-to-GDSII design flows.
An even more recent startup, Silistix, approaches the creation of on-chip interconnects on a synthesis basis. The company's tool suite, CHAINworks, accepts a description of the target and initiator ports of an SoC design and synthesizes a structural netlist for a "CHAIN interconnect." ("CHAIN" stands for CHip-Area INterconnect.)
The key element of CHAINworks is its ability to produce self-timed interconnects. One tool, CHAINdesigner, takes a description of the connectivity and ports of an SoC design and generates the structure of the CHAIN fabric along with link widths and fine-grained pipeline stages to balance tradeoffs between speed, area, and power.
The tool's graphical user interface enables designers to place network gateways, connect ports to clients, and either manually or automatically create a CHAIN topology. CHAINdesigner also generates a Verilog or SystemC description of the fabric for simulation, simulation testbenches, and constrained CHAIN netlists for input to the suite's second tool, CHAINcompiler.
That tool accepts the constrained CHAIN netlist and components from Silistix's CHAINlibrary to produce a structural netlist for inclusion in the target SoC. The structural netlist is then fed into a conventional logic-synthesis tool such as Design Compiler and mapped to standard cells. CHAINcompiler also creates scripts for static timing analysis and provides hints for placement and routing. According to Silistix, use of CHAIN interconnects results in faster SoC timing closure. No synchronous paths or clocks enter or leave the interconnect, so there are no timing paths between arbitrary design elements. Each endpoint in the design is laid out separately.
The packet-switched network allows replacement of lower-end CPUs or DSPs with faster ones without affecting the remainder of the design. Also, the interconnect offers protocol independence and the ability to mix and match any combination of third-party or internal IP blocks supporting diverse protocols.
Further, the company claims that the interconnect reduces power both at the system and bus levels. At the system level, CHAINworks eliminates issues related to frequency balancing, because any number of unrelated clock domains can be used. This lets each IP block function at its optimal frequency rather than at some artificial derivation of a system clock, thereby consuming less power.
At the bus level, CHAIN fabrics inherently use less power than conventional synchronous buses because of the nature of self-timed circuits. Power consumption is dictated by data traffic load rather than by a clock frequency, providing automatic fine-grained power management.
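A first-order model makes that claim concrete. The numbers and energy units below are invented, not Silistix data; the sketch only illustrates that a clocked bus burns energy every cycle while a self-timed link's energy tracks the number of transfers.

```python
# First-order sketch of traffic-proportional power in a self-timed
# fabric versus a synchronous bus. All quantities are arbitrary units
# chosen for the example.

ENERGY_PER_TOGGLE = 1.0     # energy per wire/clock transition

def clocked_energy(cycles):
    """A synchronous bus: the clock toggles every cycle, busy or idle."""
    return cycles * ENERGY_PER_TOGGLE

def self_timed_energy(transfers):
    """A self-timed link: transitions occur only when data moves."""
    return transfers * ENERGY_PER_TOGGLE

cycles = 1_000_000
transfers = 100_000          # link carries data 10% of the time
print(clocked_energy(cycles) / self_timed_energy(transfers))  # 10.0
```

Under these toy assumptions, a link that is busy 10% of the time spends roughly a tenth of the energy of its always-clocked counterpart, which is the "fine-grained power management" the self-timed approach provides for free.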