New Technology Spurs Performance Optimization

May 1, 2004

During the latter stages of embedded-systems design, engineers may want to analyze the effect of shifting functions from software to hardware. Such analysis helps them achieve greater efficiency and performance. Until now, the what-if scenarios that speculated on shifting functions to hardware required lengthy handcrafted prototyping and repeated simulations. That effort consumed project resources that were nearly impossible to justify against the tight budgets and short schedules needed to deliver manufacturable designs.

To evaluate the wisdom of conversion, designers must weigh the following effects: the clock cycles required to execute the function in hardware; the communication overhead between the hardware and the CPU; the area that the added hardware will occupy; and the size of the expected performance improvement. Technology exists that can provide an accurate assessment of these effects, allowing designers to evaluate whether moving a function from software to hardware is worthwhile before committing to that change (see the figure).
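As a rough illustration of that trade-off, the short C sketch below folds hypothetical cycle counts and bus overhead into a projected per-call speedup. All of the numbers are placeholders chosen for illustration; none come from a specific tool or design.

/* Back-of-envelope estimate of the payoff from moving a function into
 * hardware.  The cycle counts and overhead figures are illustrative
 * placeholders, not output from any particular analysis tool. */
#include <stdio.h>

int main(void)
{
    double sw_cycles_per_call  = 12000.0;  /* function executed on the CPU */
    double hw_cycles_per_call  = 900.0;    /* same function in hardware    */
    double bus_overhead_cycles = 300.0;    /* argument/result transfers    */

    double hw_total = hw_cycles_per_call + bus_overhead_cycles;
    double speedup  = sw_cycles_per_call / hw_total;

    printf("Estimated per-call speedup: %.1fx\n", speedup);
    return 0;
}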

Once the designer's judgment supports exploring this change, the technology automatically converts software functions to hardware, creating adjunct hardware components referred to as "assistant processors." This step is accomplished by converting the software function from standard ANSI C into synthesizable hardware-description-language (HDL) code. Virtually all of the ANSI C constructs and coding techniques used by embedded developers today are supported, so most embedded C can be converted without resorting to hardware-specific C constructs.
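To illustrate the kind of plain ANSI C such a tool would accept, the fixed-point FIR filter below is a made-up candidate function written with only loops, arrays, and integer arithmetic. It is not drawn from any vendor's documentation; it simply shows the style of compute-intensive code a designer might select for conversion.

/* Hypothetical conversion candidate: a fixed-point FIR filter in plain
 * ANSI C, with no hardware-specific C extensions. */
#define NUM_TAPS 16

int fir_filter(const short *samples, const short *coeffs, int n)
{
    long acc = 0;
    int  i;

    for (i = 0; i < NUM_TAPS && i < n; i++)
        acc += (long)samples[i] * coeffs[i];

    return (int)(acc >> 15);   /* scale back to 16-bit range */
}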

After the assistant processor is created, the technology generates a software driver (with the same function signature as the original software) to take advantage of the new hardware function. An industry-standard bus interface, such as AMBA or a simple SRAM interface, is also created. That interface allows the newly created assistant processor to be placed on the main bus, an available local bus, or a co-processor bus.
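A generated driver of this kind can be sketched in a few lines: it keeps the original function signature but hands the work to a memory-mapped assistant processor over the bus. Every register address, offset, and bit definition below is invented for illustration and is not part of any actual generated interface.

/* Sketch of a generated driver: same signature as the original
 * fir_filter(), but the computation is delegated to a memory-mapped
 * assistant processor.  All addresses and bit masks are assumptions. */
#include <stdint.h>

#define ACCEL_BASE      0x40000000u            /* assumed bus address */
#define ACCEL_SRC_ADDR  (*(volatile uint32_t *)(ACCEL_BASE + 0x00))
#define ACCEL_COEF_ADDR (*(volatile uint32_t *)(ACCEL_BASE + 0x04))
#define ACCEL_LENGTH    (*(volatile uint32_t *)(ACCEL_BASE + 0x08))
#define ACCEL_CTRL      (*(volatile uint32_t *)(ACCEL_BASE + 0x0C))
#define ACCEL_STATUS    (*(volatile uint32_t *)(ACCEL_BASE + 0x10))
#define ACCEL_RESULT    (*(volatile uint32_t *)(ACCEL_BASE + 0x14))

#define CTRL_START      0x1u
#define STATUS_DONE     0x1u

int fir_filter(const short *samples, const short *coeffs, int n)
{
    /* Pass argument pointers and length to the hardware block. */
    ACCEL_SRC_ADDR  = (uint32_t)(uintptr_t)samples;
    ACCEL_COEF_ADDR = (uint32_t)(uintptr_t)coeffs;
    ACCEL_LENGTH    = (uint32_t)n;

    /* Start the computation and poll for completion. */
    ACCEL_CTRL = CTRL_START;
    while ((ACCEL_STATUS & STATUS_DONE) == 0)
        ;                                       /* busy-wait on done flag */

    return (int)ACCEL_RESULT;
}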

To verify the functionality of the new hardware, an HDL testbench is then created. This testbench includes stimuli and expected responses. It exercises the bus interfaces and the assistant processor, and the results are compared with function-call data captured in an earlier performance-analysis step. A simple pass/fail verdict reports whether the hardware and software behave equivalently.
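A minimal sketch of that equivalence check, assuming the captured software outputs and the observed hardware outputs are available as plain arrays, might look like the following. The function name, array layout, and comparison granularity are illustrative only.

/* Compare outputs captured from the original software run against the
 * values produced by the hardware, reporting a simple pass/fail. */
#include <stdio.h>

int compare_results(const int *sw_captured, const int *hw_observed, int count)
{
    int i;

    for (i = 0; i < count; i++) {
        if (sw_captured[i] != hw_observed[i]) {
            printf("FAIL: mismatch at call %d (sw=%d, hw=%d)\n",
                   i, sw_captured[i], hw_observed[i]);
            return 0;
        }
    }

    printf("PASS: %d calls matched\n", count);
    return 1;
}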

By managing changes to the functional makeup of the hardware and software elements in embedded designs, this new approach has the potential to unleash dramatic improvements in system performance. It estimates performance improvements without re-simulation, verifies the new hardware functions, and creates both the hardware and software interfaces. Additional features include support for pointers to arrays, compatibility with both ASIC and field-programmable-gate-array (FPGA) libraries, and hardware compilation that executes in minutes. By offloading compute-intensive algorithms from the CPU to dedicated hardware, performance goals can be achieved early in the design process without substantially affecting project cost or tapeout milestones.

The future of performance analysis for embedded systems is dawning brightly. The bar for evaluating and choosing flexible and powerful hardware/software co-verification tools is about to be raised.
