As system designers work to bring quality products to market on time, increasing complexity and shrinking market windows create significant obstacles. The rising intricacy in interactions between hardware and software, for example, has made it much more difficult to test and verify larger, more complex ASICs and system-on-a-chip (SoC) designs.
Designers faced with creating these products need tools that allow them to develop and verify hardware and software simultaneously. One viable solution is a hardware/software (HW/SW) codesign methodology that encompasses the concurrent specification, partitioning, implementation, and verification of digital systems as cores and embedded logic.
The first step in HW/SW codesign is at the system level, where the specification begins with the capture, analysis, optimization, and verification of the product's functionality. At this level, the designer also creates a preliminary architecture of the implementation that delineates how the candidate instruction-set processors, logic, and memory are tied together. Then comes partitioning, which refers to the mapping of functions into instruction-set processors (microcontrollers, microprocessors, digital signal processors) and logic blocks.
The implementation of the partitioned system requires taking the core(s) and logic blocks down to silicon, creating the memory map and object code for the software, and establishing the hardware and software interfaces. Verification refers to the concurrent design and verification of hardware and software blocks as they are refined from the system-level specification into an implementation.
At the system level, the functionality of the product is captured independently of its implementation. The focus here is on creating an executable system-level specification, typically using C or C++, in a block-based, domain-specific design tool. In this phase, designers create stable, well-conditioned systems, and optimize their performance to meet minimum performance standards (e.g., DVB, DVD, or GSM) or perceived quality (speech and video).
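An executable specification of this kind is ordinary C code that captures what a block computes, with no commitment to a hardware or software implementation. As a minimal sketch (the block and its names are invented for illustration), consider an FIR filter expressed purely functionally:

```c
#include <stddef.h>

/* Hypothetical system-level block: an FIR filter written as pure
 * functionality. Nothing here presumes whether it will end up as
 * hardwired logic or as code on a DSP core. */
static void fir_block(const double *in, double *out, size_t n,
                      const double *taps, size_t ntaps)
{
    for (size_t i = 0; i < n; i++) {
        double acc = 0.0;
        /* convolve input with filter taps */
        for (size_t k = 0; k < ntaps && k <= i; k++)
            acc += taps[k] * in[i - k];
        out[i] = acc;
    }
}
```

A block like this can be simulated against recorded or synthetic stimuli long before any partitioning decision is made.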
Once the implementation-independent functionality has been captured, designers partition systems into hardware and software and define the HW/SW boundary. At this point, the analog and digital worlds, and their interfaces, are also defined. Often, portions of an old design, such as an algorithm for a motion estimator, a numerical oscillator, or models of interfaces to the analog or digital worlds, are reused.
While design tools help in specification, implementation, and verification, no tool today automatically partitions these systems into hardware and software. Tools do exist, however, that can bring critical information about the complexity of the system-level building blocks back to the HW/SW designers so they can explore possible candidate partitions without having to wait for a hardware platform. This reduces product cycle time and the risk of a HW/SW respin due to poor partitioning. After partitioning, several system-level optimization techniques are employed, such as minimizing word length, using low-complexity number representation systems for hardwired digital functions, or using fixed-point tools to optimize code targeted to instruction-set processors.
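Word-length minimization of the kind mentioned above amounts to quantizing the specification's floating-point values to the shortest fixed-point format that still meets quality targets. A minimal sketch, assuming a 16-bit Q15 target format (the function names are invented for illustration):

```c
#include <stdint.h>

/* Hypothetical word-length optimization step: quantize a double-precision
 * coefficient to Q15 fixed point (1 sign bit, 15 fractional bits), as a
 * system-level tool might when retargeting code to a 16-bit DSP. */
static int16_t to_q15(double x)
{
    double scaled = x * 32768.0;               /* scale by 2^15 */
    if (scaled >  32767.0) scaled =  32767.0;  /* saturate high */
    if (scaled < -32768.0) scaled = -32768.0;  /* saturate low  */
    return (int16_t)scaled;
}

/* Convert back, e.g., to measure quantization error against the
 * floating-point reference. */
static double from_q15(int16_t q)
{
    return (double)q / 32768.0;
}
```

Comparing `from_q15(to_q15(x))` against the original `x` across a test vector gives the designer a quantization-error profile for each candidate word length.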
The partitioning task is one of the riskiest areas of product design. In fact, this is where market winners and losers are created. One viable option to aid the HW/SW coverification process is Synopsys' COSSAP tool, which contains Processor Developer Kits (PDKs) covering over 50 processors from seven vendors, including Texas Instruments, Lucent, Motorola, and the ARM7 controller family from Advanced RISC Machines (ARM). These kits are used for trial HW/SW mappings, profiling, and verifying software running on the vendor's own simulator. They also are used for cosimulating with the hardware in COSSAP C models.
NEC Technologies used COSSAP in this way to design the G8 and G9 GSM phones based on the Lucent Sceptre chipset. The design went from scratch to finished product in about 12 months.
Divide And Conquer
Typically, designers use behavioral, RTL, and datapath synthesis for hardware implementation, and hand-coded assembly and compilers for software implementation. Within the system-level environment, designers get feedback about hardware costs (area, speed, power) and software costs (memory, latency, MOPS) through code-generation, synthesis, and cosimulation capabilities. It is critical that the tools that estimate system complexity be closely tied to the implementation tools so that the estimates are accurate and can be realized. While relative complexity measures are important in terms of "time-to-decision," an unrealizable decision could be fatal to the design.
At the system level, the designer can build a virtual prototype consisting of logic and core blocks. Interfaces between the blocks can be abstracted to achieve fast system-level coverification of the functionality, complete with a model of the real-world environment—like base-stations, cable-TV head-ends, and disk-drive channels. Processor cores are incorporated into this model using instruction-set processor models, and interfaced to the rest of the system using memory-mapped I/O interfaces. That simplifies the problem of debugging the assembler implementation of a function mapped to software.
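In a virtual prototype, the memory-mapped I/O interface can be modeled as an ordinary register structure that both the software driver and the simulated logic block touch. The following sketch is illustrative only: the register layout, bit assignments, and the doubling "hardware" function are invented, and on silicon the same struct would simply sit at a fixed bus address.

```c
#include <stdint.h>

/* Hypothetical memory-mapped register file for a logic block. */
typedef struct {
    volatile uint32_t ctrl;    /* bit 0: start    */
    volatile uint32_t status;  /* bit 0: done     */
    volatile uint32_t data;    /* operand/result  */
} periph_regs;

/* Stand-in for the simulated logic block: doubles the operand. */
static void model_step(periph_regs *r)
{
    if (r->ctrl & 1u) {
        r->data  *= 2u;   /* the "hardware" function */
        r->status |= 1u;  /* raise done              */
        r->ctrl  &= ~1u;  /* clear start             */
    }
}

/* Driver code exactly as software would see it through memory-mapped I/O. */
static uint32_t run_block(periph_regs *r, uint32_t operand)
{
    r->data = operand;
    r->ctrl = 1u;                 /* kick off the block           */
    while (!(r->status & 1u))
        model_step(r);            /* on real HW: a polling loop   */
    return r->data;
}
```

Because the driver only ever touches the register struct, the same source can later run unmodified against the pins-out core model or the physical part.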
Logic blocks are incorporated using behavioral, RTL, or gate-level models, and interfaced to the rest of the system using cosimulation interfaces. The speed of these interfaces depends closely on the simulation paradigm used in the system-level tools, ranging from dataflow to clock-cycle-based.
As the designer moves down in abstraction from the system level, the interface models between the cores and logic become better defined. The virtual prototype now consists of the cores with a "pins-out" view and the embedded logic, with bus and memory activity captured. This marks the beginning of the codevelopment and coverification stages of the design, where functions and processes have been mapped into a defined architecture.
Once the design team has decided how to partition the system into hardware and software, the two resulting teams move in separate directions to accomplish their tasks. The hardware team members take the system specification and, based on their choice of processor(s), begin to implement the hardware for the rest of the design. This part of the design is typically entered in VHDL or Verilog, at either the behavioral or RT level. Then, it is synthesized and verified with event- and/or cycle-based HDL simulators.
The latter part of the design cycle, hardware verification, has been complicated by the advent of million-plus-gate designs. In these complex designs, the verification phase can take more than 50% of the total design time. As a result, designers may now spend more time writing HDL code to test their design than it took to create the design itself.
Software designers also start by working from the system specification. Their design environment, however, begins from a processor-centric view. They typically use an instruction-set simulator (ISS) for the target processor to test their software drivers and applications.
However, software designers do not have an accurate model for anything other than the target processor. So, when they need to test software that interacts with hardware that is not part of the processor (for example, peripherals, hard drives, etc.), they create stub programs in C that emulate a hardware response to the software. While this provides fast performance, it's at the expense of accuracy. The real hardware often responds very differently than the stub code for a particular device. Unfortunately, this inaccuracy usually does not show up until integration time, and can result in either a significant rewrite of software or even worse—a respin of the hardware or ASIC if the software cannot be fixed in the target system.
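A stub of this kind is typically just a C function that returns a canned, optimistic response. The sketch below (with invented names and register values) shows how such a stub can silently hide a timing behavior that only the real hardware exhibits:

```c
#include <stdint.h>

#define DISK_STATUS_READY 0x01u

/* Hypothetical hand-written stub standing in for a disk controller
 * before any hardware model exists. It always reports "ready"
 * immediately; real hardware might return busy for many cycles
 * after a command. */
static uint32_t stub_disk_read_status(void)
{
    return DISK_STATUS_READY;
}

/* Driver code under test: poll until the controller is ready. */
static int wait_for_disk(void)
{
    int polls = 0;
    while (!(stub_disk_read_status() & DISK_STATUS_READY))
        polls++;
    /* Against the stub this is always 0, so a missing-timeout bug
     * in the driver would never be exercised before integration. */
    return polls;
}
```

Replacing the stub with the HDL model of the controller, via a coverification tool, is what exposes such discrepancies before silicon.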
This division between hardware and software designers usually means that each side has something the other wants. The software team has a C test bench that will accurately test the processor and the surrounding system. The hardware team has a virtual implementation of the hardware that the software people can use in place of stub code or a physical prototype for verifying their software drivers and applications. Tools are available today that bridge the gap between the two environments.
Synopsys' Eagle Technology Group, for instance, has developed a set of tools that enables the software and hardware design teams to begin integration of the hardware and software before a physical prototype is available. This allows the software development process to become a parallel task to hardware development.
Siemens Public Communication Networks used this methodology to verify a 5.5-million-gate telecommunications switch system with 24 RISC cores running the software. As part of its methodology, Siemens incorporated the interface between the Eagle tools and Synopsys' Cyclone VHDL cycle-based simulator.
With HW/SW coverification tools, hardware designers can take advantage of the code that software designers develop for verification so they can debug the hardware with real software, and not just an HDL test bench that imitates the actual software.
Software designers benefit because they can now develop software drivers and applications while testing them against the HDL description of the hardware. This process is more accurate than software stub programs developed by hand. All of this can be done without a physical prototype, allowing integration to begin much earlier in the design cycle. If a problem is found, changes can still be made in either the hardware or software because the hardware design is not yet set in silicon.
To bridge the gap between the hardware and software tools, coverification tools provide a model of the processor that coexists in both the hardware and software environments. Because phases of the design require different trade-offs between accuracy and performance, three different models exist.
The first type, a Link-model, provides the fastest possible coverification speed while still allowing full visibility into the software execution process. The second type, an ISS-model, as its name implies, uses the software developers' ISS and marries it to a bus-functional model (BFM) in the HDL code. This solution enables both software and hardware teams to debug the complete design from either perspective.
The final type is a TAP-model, which uses the actual processor to execute the software. It provides unmatched accuracy in a coverification environment. With three models, the designer has the flexibility to choose which one to use at any point during the development/verification process, depending on the current task, whether it's ASIC verification, software development, or system integration.
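The ISS/BFM coupling can be pictured as a loop in which each instruction the ISS retires may emit a bus transaction that the BFM then drives into the HDL side. The toy sketch below is entirely illustrative (the types, the 16-word "HDL-side" memory, and the pre-decoded transaction list are all invented), but it shows the handshake's shape:

```c
#include <stdint.h>

/* One bus transaction produced by an executed instruction. */
typedef enum { BUS_NONE, BUS_READ, BUS_WRITE } bus_op;
typedef struct { bus_op op; uint32_t addr; uint32_t data; } bus_txn;

/* Stand-in for memory on the HDL side, reachable only via the BFM. */
static uint32_t hdl_mem[16];

/* Toy bus-functional model: drive one transaction into the "HDL". */
static uint32_t bfm_execute(bus_txn t)
{
    if (t.op == BUS_WRITE) { hdl_mem[t.addr & 15u] = t.data; return 0; }
    if (t.op == BUS_READ)  { return hdl_mem[t.addr & 15u]; }
    return 0;
}

/* Cosimulation loop: each entry models the bus activity of one
 * ISS-executed instruction; returns the last value read back. */
static uint32_t cosim_run(const bus_txn *prog, int n)
{
    uint32_t last = 0;
    for (int i = 0; i < n; i++)
        last = bfm_execute(prog[i]);
    return last;
}
```

In a real tool the BFM additionally translates each transaction into pin-accurate, cycle-by-cycle waveforms on the processor bus, which is where the accuracy/performance trade-off between the three model types arises.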
A Competitive Advantage
As the marketplace demands for smaller, faster, and cheaper systems escalate, the need for highly integrated systems and systems-on-chips will soar. Without concurrent HW/SW design and verification, tight time-to-market windows will be missed. Project costs will also dramatically increase due to lengthening design cycles, misunderstood specifications, and expensive ASIC respins. Those willing to invest in HW/SW codesign tools and methodologies will have a significant competitive advantage, often seeing their products on the market while the competition is still waiting for the prototype.