MUNICH, March 10 — It's always a good news/bad news thing when an EDA conference like DATE winds down. The good news: It's time to go home! The bad news: You must say "auf Wiedersehen" to your friends and acquaintances in the industry and to Munich, which treated us well in all aspects save for meteorology. But hey, it's three short months until we reconvene at DAC. And we'll do it all over again, only better. But before then, would someone please open a good Bavarian restaurant in Anaheim?
Meanwhile, today was indeed the final day of this year's DATE conference and exhibition. When I canvassed a few exhibitors for their impressions of the show, the consensus was that it was definitely better than last year's soiree in Paris, but still not a bang-up event. Some seemed downright disgruntled, vowing not to return. My observation was that some booths (or stands, as the Europeans say) seemed always to be busy, while others were perennially forlorn, empty of even tire kickers. There seemed to be a decent turnout from Germany's technical schools. It's nice to see curiosity about EDA from Europe's future engineers, but to the dismay of EDA vendors, like most students, they're all pretty much broke.
Ordinarily, next year's DATE would be back in Paris, but the event's organizers have pulled out of their Paris commitment, citing both high costs and poor support services in the exhibit hall. The show will return to Munich next year. After that, it's up in the air. Rumors flew of Amsterdam, Nice, and Milan. However, toward the end of the day, a reliable (and highly influential) source told me that Amsterdam was out and that a strong dark-horse candidate for a future DATE venue had emerged: Monte Carlo. Let's hope.
Today's most interesting news items comprised a potpourri from far-flung corners of the EDA universe. Let's start with design for test.
Holding Down Test Volumes
Now that 130-nm design is entering the mainstream, designers are faced with new classes of faults that can't be caught with traditional stuck-at test techniques. Resistive vias and bridging faults demand at-speed and bridging tests. Consequently, the volume of test vectors is exploding by 5X or more, meaning higher costs and longer runtimes. Synopsys is entering the test-compression market with DFT Compiler MAX, a DFT synthesis tool that's invoked with a single Design Compiler script command. The result is a one-pass test synthesis run that compresses test data volume by 10X to 50X with no impact on timing or power, and an area overhead of just 0.5%.
The secret sauce behind DFT Compiler MAX is what Synopsys has dubbed adaptive-scan technology. The tool inserts many short scan chains instead of a few long ones. It's a fully combinational technique that generates an efficient scan architecture, leading to minimum test application time. Routing congestion is also minimized, and test coverage is identical to that generated using traditional scan techniques.
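As a rough back-of-the-envelope illustration of why many short chains beat a few long ones (a sketch with hypothetical numbers, not Synopsys's actual figures): scan shift time is dominated by the longest chain, so splitting the same flops across more chains cuts test application time almost linearly.

```python
# Illustrative arithmetic only: a crude scan test-application-time model.
# Flop counts, chain counts, and pattern counts below are hypothetical.

def scan_test_cycles(num_flops: int, num_chains: int, num_patterns: int) -> int:
    """Shift cycles ~= longest chain length x pattern count."""
    chain_length = -(-num_flops // num_chains)  # ceiling division
    return chain_length * num_patterns

# A design with 100,000 scan flops and 1,000 patterns:
few_long = scan_test_cycles(100_000, 8, 1_000)      # 8 long chains
many_short = scan_test_cycles(100_000, 400, 1_000)  # 400 short chains, fed
                                                    # through on-chip compression logic
print(few_long // many_short)  # prints 50
```

The catch, of course, is pin count: a tester can't drive 400 chains directly, which is why a combinational decompressor between the scan pins and the internal chains is the heart of schemes like this.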
A one-year license for DFT Compiler MAX starts at $120,000. It's available now in limited release with general release planned for September.
A totally different approach to DFT was unveiled at DATE this week by startup DeFacTo Technologies. Based in Valence, France, with research operations in Grenoble and a U.S. office in Palo Alto, DeFacTo aims to bring DFT insertion out of the synthesis loop altogether and into RTL. It will launch its RTL scan-insertion tool at June's Design Automation Conference.
According to DeFacTo's president and CTO, Chouki Aktouf, inserting scan chains in the synthesis flow is too late in the process. "The implementation takes too long and is unpredictable," says Aktouf. "Worse, it doesn't adapt to design reuse. DFT logic is design-dependent and doesn't carry over with process changes."
DeFacTo's tool will insert scan chains in a fashion that is independent of synthesis. "It's a design step, not an implementation step," maintains Aktouf. The technology would cover all DFT methods, including internal and boundary scan, memory BIST, compression, and standards such as IEEE P1500.
Moving DFT above synthesis brings the process closer to the design decisions that can influence how efficient DFT will be, says Aktouf. It'll also make DFT code reusable and technology-independent.
The company's roadmap includes a BIST insertion product later in 2005 and a DFT planning tool in 2006.
Clean Up That RTL
If your ASIC project fits the description "average," you can expect it to require 2.5 spins to achieve good silicon, and of the errors that cause respins, some 70% are functional. Last year, startup Stelar Tools launched HDL Explorer, an "RTL design closure" product: a kind of Swiss Army knife for the RTL designer that helps in myriad ways to clean up RTL and get it ready for a successful synthesis run with fewer iterations. At DATE, Stelar showed an enhanced release of HDL Explorer with some significant new capabilities.
The tool now enables designers to automatically route signals from one point to another in the design hierarchy, eliminating manual signal routing, a major source of errors. It also addresses the oft-arising scenario in which designers must either move modules within the design or break single modules into multiple modules. In the past, this meant manually removing wires, breaking up the modules, moving them around, and creating new wires, pins, and ports; in other words, an extremely error-prone mess. HDL Explorer now automates this process. A designer need only select a module or modules and drag the selection to a new location. Depending on the intent, the tool either encapsulates the group of modules into a mega-module or handles the single module on its own, and then connects the new wires.
Lastly, HDL Explorer now supports Verilog, VHDL, and mixed designs. The new release will be available on Linux and Windows XP platforms in the second quarter. Prices start at $7,900 for a one-year, single-user license.
Clocks Go Formal
You're looking to attain timing closure for your ASIC, so, as usual, you write Synopsys design-constraint (SDC) files to define false and multi-cycle paths. These timing exceptions guide static-timing-analysis and synthesis tools in identifying timing paths that don't need to complete in a single clock cycle. But how do you know you've found them all? And how do you know the ones you thought you found are really correct?
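For readers who haven't written them, timing exceptions in an SDC file take roughly this form (the clock and register names below are hypothetical, purely for illustration):

```tcl
# A path between two asynchronous clock domains that should never be timed:
set_false_path -from [get_clocks clk_core] -to [get_clocks clk_usb]

# A datapath given two clock cycles to settle, with the hold check adjusted to match:
set_multicycle_path 2 -setup -from [get_pins mult_reg*/CP] -to [get_pins sum_reg*/D]
set_multicycle_path 1 -hold  -from [get_pins mult_reg*/CP] -to [get_pins sum_reg*/D]
```

The danger is that each such line silently hides real paths from static timing analysis, so a declaration that doesn't match the RTL's actual behavior can wave a genuine timing bug through to silicon.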
Real Intent thinks it has the answer to this problem in its PureTime tool, a timing-exception prover that brings the company's formal verification technology to bear. PureTime exhaustively proves the correctness of these exceptions to improve productivity and quality.
Just feed PureTime your structural descriptions, RTL, and SDC files. Paths defined as false are proven never to affect the design's outputs. Paths defined as multi-cycle are proven to affect the outputs only within the specified number of clock cycles. When PureTime finds an exception to be incorrect, it provides counterexamples that guide the designer to the exact location, and point in time, where the problem occurs in the design.
PureTime ships in the third quarter with prices starting at $100,000 for a one-year license.
Soft Ware for Soft Errors
One of the crueler tricks played by atomic physics on electronic circuits is soft errors, or transient faults caused by external radiation (mainly cosmic rays) that affect the logic states of ICs and memories. The latest product in iRoC Technologies' Soft Error Design Solution Platform is called TFIT. The tool lets designers analyze the impact of soft-error strikes on their custom designs to help meet reliability targets.
Today's techniques for this kind of analysis use TCAD/3D modeling for Spice-level soft-error analysis, but accurately modeling a single strike at a single angle typically calls for an overnight run, and full analysis of a memory or IP block can take weeks. With iRoC's soft-error models, a single-strike analysis can be done in seconds, and a full analysis in a few days.
Additionally, designers don't need to develop extra tools or scripts to use TFIT, which is fully interoperable with Spice simulators. The result is a true feedback-loop scenario for design changes to repair soft-error susceptibilities.
The TFIT tool will be available in the second quarter. Contact iRoC for pricing.
Looking To The Horizon
Some organizations look a couple of years out in terms of building for the future, while others look farther. The Belgian research center IMEC generally endeavors to peer from three to 10 years out in probing for technologies to support the chips and systems to be built in the future. In doing so, it balances its own infrastructure and leverages a network of partnerships in the commercial and academic worlds.
At DATE, IMEC announced that it has agreed to collaborate with CoWare in developing a design flow for a futuristic, flexible and programmable platform for multimedia and wireless applications. The flow is part of IMEC's MultiMode MultiMedia (M4) program. It'll be used to develop software-defined radio and multi-format multimedia codecs. Together, IMEC and CoWare plan to close the gap between IMEC's proprietary research tools and CoWare's ESL design tools.
A major target for the design technology segment of the M4 program is to develop an integrated digital design flow for multiprocessor-based platforms. Technology is moving toward mobile embedded systems, such as an M4 terminal that would support multiple protocols and modes, including WLAN, 3G/4G, PAN, DVB-H and others. However, designers face huge challenges in mapping complex digital applications onto such platforms. There's currently no integrated flow for platform creation together with application mapping.
CoWare and IMEC seek to build a flow that comprises application mapping and platform implementation. Starting from a behavioral specification of the platform, the application-mapping flow would comprise three major phases. First, high-level, platform-independent optimization would start from a single-thread system spec. Second, there would be transformation of the sequential description into a concurrent, multitask model. And lastly, platform-dependent optimization would result in a set of concurrent tasks, including communication information.
The result of the application mapping is a completely configured, flexible platform architecture. That architecture would be further implemented by CoWare's hardware/software co-design flow.