Strategies for EDA tool usage will change course as gate counts, and ultimately costs, rise in programmable logic designs. For designs below 25,000 gates, the basic tools from logic vendors and many standard design tools are available for very little money. This level of design consists mostly of an aggregation of "glue" logic and some smaller hardware accelerators for specialized functions.
Yet it's a different tune for midrange designs between 25,000 and 250,000 gates. In fact, many users are taking the path of the ASIC designers. Design entry exists in some hardware description language (HDL), while the overall methodology calls for successive refinement via synthesis. The average tool cost, though, jumps to over $25,000 per designer as the tool set climbs to over a half-dozen individual tools.
The midrange tool set includes a simulator, synthesis, and "helper" tools, such as static timing analysis, code coverage, and possibly a Lint-type tool and a debugging tool set. Software development is one new function creeping into this range of designs now that embedded processors are available to designers (see the figure).
When system-level designs vault above 250,000 gates, the choices narrow down to either continuing in the ASIC path, or finding some alternative. The ASIC path leads to increasingly complex tool sets that can cost over $100,000 per designer. They will require a support staff of one person for every three to five designers to develop interfaces and scripts for the tools. In this ASIC-type design environment, the design times for the system-on-a-programmable-chip (SoPC) are about the same as the equivalent design size in an ASIC, between six and 12 months.
The alternatives, although mostly much more expensive than the vendor-supplied tools, let designers change levels of abstraction to better visualize the overall design. Some tools can generate an HDL representation of the design directly from higher-level models, allowing the design group to have a reasonable look into the software development at the same time as the hardware.
Ultimately, new EDA tools will be necessary. We'll take a look at some of these tools later on in the article.
Need For Change: Bob Barker, vice president of marketing at Future Design Automation, notes that one issue for FPGA designers is the need to change the design environment and improve user sophistication. ASIC designers have many tens of tools and scripts to invoke the tools in a repeatable sequence. Unless they're migrating from the ASIC world, FPGA designers tend not to have the support and infrastructure in place to help with the complex design environment.
Kent Moffat, product marketing manager at Mentor Graphics, agrees. He adds that in some sense, the change from small to large FPGAs is not only a technical challenge, but also an internal design-culture challenge. The design environment must shift from a small group of people working on a collection of design blocks in the same package to an interacting team. ASIC groups have a large CAD organization to help with scripts and tool flows. FPGA users moving up from smaller FPGAs lack this type of infrastructure.
Another key issue is support, says Axel Tillman, CEO of Novilit. HDLs are complex and as FPGAs get larger, the design task grows exponentially.
Molding To Change: The existing tools and methodologies can't scale and grow to address the changes in the nature of FPGA design. As the chips expand to ASIC size, the methodologies must evolve to either an ASIC style—lots of simulation, vectors, synthesis, etc.—or to a block-based design with 50% to 80% of the design as predesigned function modules.
Even with the use of predesigned function blocks, design complexity is surpassing the tools' capabilities. In simpler times, multiple blocks on an FPGA interacted minimally. The net effect was that of a set of multiple designs within a single piece of silicon. Now the blocks are specifically designed to interact.
The very high level of communications between blocks—protocols and transactions—is time-consuming and complex to verify in the design. Interactions between the various protocols are not simple. What does it take to interface the DSP and CPU while, concurrently, the direct memory access (DMA) and USB demand the bus?
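To make the arbitration problem concrete, the sketch below models a fixed-priority bus arbiter in plain C. The four-master setup, the priority ordering, and the `arbiter_grant` name are illustrative assumptions for this article, not any vendor's actual bus protocol.

```c
/* Hypothetical fixed-priority bus arbiter model. Requester 0 (CPU)
 * has the highest priority and requester 3 (USB) the lowest.
 * `requests` is a bitmask with bit i set when requester i wants the
 * bus. Returns the index of the granted requester, or -1 when the
 * bus is idle. Requesters: 0 = CPU, 1 = DSP, 2 = DMA, 3 = USB. */
static int arbiter_grant(unsigned requests)
{
    for (int i = 0; i < 4; i++)
        if (requests & (1u << i))
            return i;           /* lowest set bit wins the bus */
    return -1;                  /* no requests: bus idle */
}
```

With this scheme, when DMA and USB contend for the bus at the same time (mask 0xC), DMA wins; verifying that such simultaneous-request cases behave as intended across real protocols is exactly the time-consuming part.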
Though beneficial, using an ASIC flow isn't time friendly. Most designs require the designer to spend a lot of time in simulation at the register-transfer level (RTL) with little hope of reaching a well-defined endpoint. One question is how to translate from RTL to something else. If the design starts with a high-level model, this model needs a transform to something else and then to RTL.
The ASIC approach requires detailed structural knowledge of the functional blocks. But system design needs a different set of simulations in C or some other high-level language to check algorithms and structures. Unfortunately, the next step in this ASIC design flow is to manually translate the high-level model to HDL, with the attendant errors and gaps in design.
Fred Stones, vice president of sales at Summit Design, says new technologies enable a conceptual level above the hardware. A layer of C can adequately model most operations of any IC until you need to get to the clock-accurate details. In this way, you can verify in SystemC and migrate toward a mixed C-RTL design for synthesis.
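The migration Stones describes can be sketched in plain C (standing in here for SystemC): the same accumulator is written once as an untimed behavioral model and once as a register updated on each "clock" call, the form a model takes as it moves toward RTL. The function and struct names are hypothetical.

```c
/* Behavioral (untimed) model: the whole computation in one call. */
static int sum_behavioral(const int *data, int n)
{
    int acc = 0;
    for (int i = 0; i < n; i++)
        acc += data[i];
    return acc;
}

/* Clock-accurate refinement of the same function: the accumulator
 * becomes a register updated once per simulated clock edge. */
struct acc_reg { int value; };

static void acc_clock(struct acc_reg *r, int data_in)
{
    r->value += data_in;    /* register update on the clock edge */
}
```

Verifying that the two models agree on the same input stream is the C-level check that lets the clock-accurate version be carried forward into a mixed C-RTL design.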
Methodologies: The critical focus of system-level FPGA design is changing to one of rapid output and performance tweaking. To accomplish this, users need to explore the system architectures and look at the limitations of the devices to be used. A sub-optimal choice, say a microprocessor with a floating-point unit in a system that's not doing any math, makes the rest of the system unwieldy.
Designers must be able to verify the design in an interactive environment. Unfortunately, the HDL simulation environment generates too much data. It also requires a complete description of the input stimulus vectors, leading to incomplete coverage and an unknown endpoint for verification.
To change the design sequences and reduce time to working silicon, breakpoints should be inserted into the source code, and errors backtracked to the smallest cloud of gates that could have originated them. The desire is to get into the lab as quickly as possible, to run the design at full speed, and to get to billions of clocks in just minutes rather than in days or weeks.
Jay Gould, product marketing manager at Xilinx, believes the embedded design environment is especially challenging. In many cases, the development of an embedded system is one where the software group starts its design process with minimal information—some port names and minimal characteristics on an Excel spreadsheet. Due to the lack of coordination, hardware and software integration is difficult and slow. Embedded designers have to address standards while looking at time-to-market. The features and support for the design are addressed after the fact, often after the systems ship.
Tower Of Babel: The increasing speed and size of programmable-logic devices are creating a sea change in design tools and environments. For small designs, tools supplied by the logic vendor are adequate for creating and implementing the designs. But for very large devices, design tools and flows approach those of ASIC designs, with large design teams and design cycles nearing six months.
No common design methodology exists for FPGAs. The main alternatives to HDL design entry are function-specific tools. Some of the new tools start from C and go directly to placed gates. Others, such as MathWorks and special-purpose tools for DSP or communications systems, start from specialized languages and eventually convert those languages to C.
Domain-specific tools like SPW or MathWorks address the issue of very high-level modeling at the algorithmic level. They help to translate the models to a format that can be imported into a hardware design environment. Overwhelmingly, MatLab is in the hands of DSP designers who are trying to work from a higher level of abstraction.
New Classes Of Tools: In today's systems, developers don't know whether a function will end up in an FPGA or in a mixed hardware-software design. The largest FPGAs have over 8 million gates. Some variants include a PowerPC, MIPS, or ARM core, or the user can instantiate a soft core for the necessary processing power. In the past, the development flows were too isolated. Designers built models and refined the hardware down to its final implementation. But this flow can't address optimization needs. Now systems need mixed hardware-software engineers to address the unified system.
As the largest programmable logic devices (PLDs) continue to grow, they quickly lose the advantages of programmable logic—a quick turn of design and short time to a working chip on a board. To address the need for quicker designs, designers need to move to a higher level of language or go to a specialized application-specific language. Special tools and languages for particular functions (communications, datapath, etc.) have been around for a fairly long time but have not been directed toward the average user.
For platform-based design, users require system-level tools. FPGAs are becoming the system chip that's only converted to an ASIC in volume situations. The tools have helped increase the level of abstraction to where architectural exploration is feasible. One way to improve designer productivity is to handle the increased complexity with corresponding levels of abstraction. In some cases, using a language variant like SystemVerilog for early system functional verification can replace the early modeling with SystemC. System-level verification is much faster than gate-level or logic-level verification.
Tools supporting this move are the improved models that can cover the range from behavioral to bus functional to cycle accurate. In addition, the software code to drive IP modules is much more widely available.
Large FPGAs have design tasks that now include software and many other environmental changes. If the software can be ported directly into hardware, the design can progress to a version of reconfigurable computing (see the table).
Tool flows must increase the level of automation to benefit design teams. Tools need to manage versions, maintain documentation, help the team share ideas and changes, and assist in intellectual-property (IP) deployment. As in large software-development projects, FPGA designers need an integrated development environment (IDE) that aids in deployment, reuse, and debugging of their designs. The tools must also help the designer manage and understand all files associated with the design.
If designers can verify at the C level, they can evaluate the partitioning of the blocks and software, says Jeff Jussel, vice president of marketing at Celoxica. FPGAs can now hold an entire system of functions.
The Need To Debug: Designers are moving from the simulation environment to the laboratory spaces as soon as they can. Behavioral simulation and verification now take up to 40% of the design cycle, with another 35% devoted to compilation and timing analysis and the balance spent in the lab. Designers of FPGAs want to change the mix and move into the lab earlier, increasing the lab time from 25% to 40% of the total design cycle.
The large number of internal blocks creates other issues. Because most of the blocks are deeply embedded, their internal states aren't easily available for debug tasks. The lack of internal communications visibility makes the debug task much more challenging. So, design teams need to move toward a hardware-software codesign environment.
Design size is breaking the design tools, says Tim Southgate, vice president of software and tools marketing at Altera. IP use, especially encrypted IP, isn't easy to integrate and debug when the total package has only enough pins for 10% of the ports. Most ports and nodes are internal and inaccessible to the outside world. BGA packages exacerbate the problems of access and are almost impossible to probe for debugging.
Logic-analyzer IP and the ability to scan through internal nodes and get the data out via a JTAG port are essential for debugging. Encrypted HDL needs extra attention to expose enough of the IP to use the rest of the debug tools. The hardware debug functions are difficult because the designer needs to link the gates and the HDL, but the source code may not be available. Co-debug tools are required to synchronize the hardware and software debuggers, since the software compiled into these systems deals far more in algorithms and protocols than previous designs did.
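The scan-based visibility described above can be sketched as a toy C model: internal node values are captured into a shift register, then clocked out one bit at a time through a single JTAG-style data-out pin. This is an illustrative sketch under assumed names, not any vendor's logic-analyzer IP.

```c
/* Toy scan-chain model. Nodes with no package pins of their own are
 * captured into a shift register and serialized out a single pin. */
#define NODES 8

struct scan_chain {
    unsigned char bits[NODES];  /* captured node values */
    int pos;                    /* next bit to shift out */
};

/* Capture: latch a snapshot of the internal nodes into the chain. */
static void scan_capture(struct scan_chain *c,
                         const unsigned char nodes[NODES])
{
    for (int i = 0; i < NODES; i++)
        c->bits[i] = nodes[i] & 1;
    c->pos = 0;
}

/* Shift: one clock tick pushes the next captured bit out the pin. */
static int scan_shift(struct scan_chain *c)
{
    return c->bits[c->pos++];
}
```

Eight nodes cost eight shift clocks on one pin, which is the trade that makes internal visibility possible through a pin-starved BGA package.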
In addition to the extreme difficulties of developing the hardware, design teams must also focus on developing more software. Some of the latest tools can generate an RTL representation of the design as well as a software representation, usually in C or one of its variants. With this capability, design teams can leave the design in its high-level representation until late in the design cycle and make a relatively simple change to either the hardware or software domain.
All of the design converges at RTL, the programmable-logic equivalent of the ASIC RTL handoff/signoff. By moving toward increasing specialization at the tool level through other languages like C, MatLab, and RTL, designers can take advantage of higher-quality optimizations and faster run times. They can also get to a working system that's functionally correct at the algorithm level and relatively easy to transform to an FPGA.
C For Hardware Design: Most versions of C are reasonable for algorithmic and datapath descriptions but not so good for control logic or structures with high parallelism and concurrency. It's hard to explicitly define concurrency and parallelism in C without special constructs, but the special constructs just define another HDL.
Designers need a structured subset of the language for hardware, so C becomes just another HDL. It's a struggle to get to cycle-accurate constructs without using a structured language subset, such as SystemC. One strong objection to C or its variants as a design language is that compiler-generated names and connectivity information make it very difficult for designers to read the design when it's time to debug. The only port names that make sense are those generated by the designer. All of the others may as well have been run through a random-number generator or data-encryption engine.
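The concurrency problem shows up even in a tiny C sketch: two blocks that would run in parallel in hardware must be stepped by hand, computing every block's next state before committing any of it, to mimic an HDL's non-blocking updates. The two-block example and its names are hypothetical.

```c
/* Two "concurrent" hardware blocks simulated in sequential C: a
 * free-running counter and a comparator. All next-state values are
 * computed from the OLD state, then committed together, the way
 * non-blocking assignments behave on a real clock edge. */
struct state { int count; int match; };

static struct state step(struct state s)
{
    struct state next;
    next.count = s.count + 1;       /* block 1: counter            */
    next.match = (s.count == 5);    /* block 2: comparator sees the */
                                    /* pre-edge count, as hardware  */
                                    /* would                        */
    return next;
}
```

The compute-then-commit discipline is exactly the "special construct" the text refers to: impose it systematically on C and you have, in effect, defined another HDL.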
Dan Ganousis, Accelchip's president, says the various versions of "C" are good for general-purpose functions and DSP coding. In fact, a person can do any design in C, but the design won't be very well optimized. Specialized languages like MatLab are much more efficient in developing models of certain types of systems, such as filters through the filter construction set. Even with the specialized languages, developers still need C for the rest of the application code, specialized constructs, and the specific calls to the hardware accelerators.
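The kind of model Ganousis describes, a filter prototyped in a specialized language like MatLab and then carried into C for the rest of the application, might look like the sketch below. The tap count, coefficients, and function name are arbitrary illustrations, not output from any actual tool.

```c
/* A 4-tap FIR filter in plain C. The coefficients here form a simple
 * moving average; a MatLab-style flow would generate optimized
 * coefficients instead of these placeholders. */
#define TAPS 4
static const int coeff[TAPS] = {1, 1, 1, 1};

/* Process one input sample; `history` is the filter's delay line. */
static int fir_step(int history[TAPS], int sample)
{
    for (int i = TAPS - 1; i > 0; i--)  /* shift the delay line */
        history[i] = history[i - 1];
    history[0] = sample;

    int acc = 0;                        /* multiply-accumulate */
    for (int i = 0; i < TAPS; i++)
        acc += coeff[i] * history[i];
    return acc;
}
```

A design like this hand-translates readily to a datapath in hardware, which is why DSP code is the natural first target for C-based flows.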
PLD Vendors To The Rescue? The big FPGA vendors, Altera and Xilinx, have developed new tool suites to address design at the block, IP, and functional levels. These tools not only help designers import IP blocks, but they also create the interconnect and software drivers for all of the blocks.
These proprietary tools can expedite the design process by identifying the blocks to use, the structures, and the interfaces needed to assemble a functional unit, as well as enough code to get the design through the boot phase. The IP blocks supplied by the vendors and their partners are designed to be as parametrizable as possible for the greatest flexibility. Both companies also generate a majority of the interblock interconnect and the source code necessary to begin operating the functions.
One challenge for programmable-logic vendors is to find a way to wean their customers from their preferences for low-priced tools. If a company spends as much money for software development as for a new IC, the costs will run into the $10 million range. If the FPGA company gets 2000 users to buy the tool for as much as $1000 per copy, the $2 million in revenue for the new tools only pays for about one-fifth of the total cost of development.
One reason for the high cost of ASIC tool sets is the ongoing need to fund extensive and expensive R&D for new tool functionality. This disparity in costs and income will continue to be a challenge for PLD vendors who can't afford to subsidize their software development costs forever.
Future system-on-a-chip (SoC) design will require hardware-software profiling and partitioning tools to evaluate the hardware-software mix. Partitioning and synchronization is the difficult part of the design. So far, no tools exist to help in the tasks. Designers will need to evaluate architectures before implementation. As a part of the evaluation, designers will need to map the functions and timing to the implementation platform, then synchronize the various data transfers across the time domains. Such exploration tools would permit the profiling of the software and the associated software calls to the hardware accelerators.
Need More Information?
Future Design Automation Corp
The MathWorks Inc.
Mentor Graphics Corp.
Summit Design Inc.