Let’s just assume that, after adding communication costs and management overhead, the cost of moving work offshore to a low-cost country is half the cost of having engineers in a high-cost location. It follows that, given adequate availability of talent, a productivity improvement of 2x would obviate the need for offshoring.
Unfortunately, the electronics industry has locked itself into the mindset that hardware design takes as long today as it always has. And in the face of the growing complexity of the circuits being designed, even accounting for sourcing off-the-shelf IP, it is going to take more engineers, more time, and a bigger budget to get future chips designed, verified, and fabricated.
There has only ever been one real answer to growing complexity, and that is to raise the level of abstraction. Software went through its successive generations of languages, from assembly to Fortran to C/C++, Java, and SQL. Meanwhile, hardware design has been stuck at the register transfer level (RTL) of design for over 15 years.
Several attempts have been made to raise the level of abstraction. Most of them failed. Conventional wisdom lays those failures at the feet of tools such as Behavioral Compiler. I believe the real problem was an inadequate deconstruction of the challenges of hardware design and, consequently, a misdiagnosis of the root cause.
Using a taxonomy outlined by Martin et al., a hardware program describes concurrency, communication, structure, and resources. Most errors made in describing structure and resources can be quickly identified and corrected. Some stem from inadequate language constructs in Verilog—the lack of structures and unions, for example—though VHDL did not suffer from these inadequacies. Others stem from ambiguous language semantics that give tool developers latitude in implementation, such as blocking versus non-blocking assignments and arbitrary process-evaluation order.
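The blocking/non-blocking distinction is worth making concrete. The Python sketch below (an illustration, not Verilog) models the two semantics for a register swap: blocking assignments execute in order and see each other's updates, while non-blocking assignments all read old values and commit together at the end of the time step.

```python
def blocking_swap(a, b):
    # Blocking (=) semantics: statements execute sequentially, so the
    # second assignment reads the first assignment's new value.
    a = b
    b = a          # reads the NEW a, so the swap is lost
    return a, b

def nonblocking_swap(a, b):
    # Non-blocking (<=) semantics: all right-hand sides are evaluated
    # against old values, then the updates commit simultaneously.
    next_a = b
    next_b = a
    return next_a, next_b

print(blocking_swap(1, 2))     # (2, 2): the two registers merge
print(nonblocking_swap(1, 2))  # (2, 1): a correct register swap
```

The same two lines of source text thus describe two different circuits depending on the assignment operator chosen—exactly the kind of latitude that breeds subtle bugs.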
The real bugs lurk in complex race conditions around shared resources. These are hard to find, consume enormous verification cycles to flush out, and, as complexity grows, the law of unintended consequences becomes a killer. If we could abstract concurrency, especially in circuits dominated by complex datapaths or control logic, we would have a chance of making dramatic improvements in the time, cost, and quality of circuit design.
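To see why such races are so insidious, consider this deterministic Python sketch (illustrative only): two "processes" each perform a read-modify-write increment on a shared counter, and whether an update survives depends entirely on the interleaving.

```python
# Each process samples the shared value into a private register, then
# writes back its increment; the schedule decides the outcome.
def run(interleaving):
    shared = 0
    regs = {}                        # each process's private register
    for proc, op in interleaving:
        if op == "read":
            regs[proc] = shared      # sample the shared resource
        else:
            shared = regs[proc] + 1  # write back the increment
    return shared

safe = [("A", "read"), ("A", "write"), ("B", "read"), ("B", "write")]
racy = [("A", "read"), ("B", "read"), ("A", "write"), ("B", "write")]
print(run(safe))  # 2: both increments land
print(run(racy))  # 1: B's stale read silently discards A's update
```

In hardware the "schedule" is fixed by the synthesized logic, so a bad interleaving is not an occasional glitch but a latent design error waiting for the right stimulus.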
Unrolling the nested loops of a computationally intensive algorithm into a data pipeline is a fairly straightforward process, and companies are beginning to report some success in transforming such sequential programs into hardware. Control logic is a much tougher problem and requires out-of-the-box thinking to abstract upwards.
The solution may lie in a rule-based software technique called term rewriting systems (TRS). The formal semantics that underlie a TRS deliver predictability and transparency. Their biggest benefit may lie not just within a module, but also in composing designs: each module is self-documenting and can assemble itself with other modules, automatically creating the controlling interface logic required for the parts to function smoothly together.
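A minimal Python sketch conveys the flavor of a TRS: each rule is a (guard, action) pair over the current term, and rules fire repeatedly until no guard holds, leaving a normal form. The GCD rules below are a common textbook illustration of the style, not taken from the article.

```python
def run_trs(state, rules):
    # Fire the first applicable rule, repeat until none applies.
    while True:
        for guard, action in rules:
            if guard(state):
                state = action(state)  # one rewrite step
                break
        else:
            return state               # no rule applies: normal form

gcd_rules = [
    # swap: keep the larger value in the second slot
    (lambda s: s[1] != 0 and s[0] > s[1], lambda s: (s[1], s[0])),
    # subtract: Euclid's step expressed as a rewrite
    (lambda s: s[1] != 0 and s[0] <= s[1], lambda s: (s[0], s[1] - s[0])),
]

print(run_trs((12, 8), gcd_rules))  # (4, 0): the GCD lands in slot one
```

Because every state change is a guarded atomic rule, the concurrency question—which rules may fire together without conflicting—becomes something a tool can analyze formally rather than something a designer must reason about by hand.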
Formal semantics around interfaces may also benefit hardware/software co-design and co-verification. By designing a module harness while leaving out the details of implementation, software engineers can get started much earlier with a design structure that is resilient to change and robust to alternate implementations.
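In software terms, such a harness looks like coding against a fixed interface with a swappable implementation behind it. The DmaChannel interface and its names below are hypothetical, invented for illustration.

```python
from abc import ABC, abstractmethod

# The interface is fixed early; implementations can change freely.
class DmaChannel(ABC):
    @abstractmethod
    def start(self, src: int, dst: int, length: int) -> None: ...
    @abstractmethod
    def done(self) -> bool: ...

class StubDmaChannel(DmaChannel):
    """Behavioral placeholder that completes every transfer instantly."""
    def start(self, src, dst, length):
        pass
    def done(self):
        return True

def copy_buffer(dma: DmaChannel, src: int, dst: int, length: int):
    # Driver code written only against the interface; a stub, a cycle
    # model, or co-simulated RTL can be plugged in without changes.
    dma.start(src, dst, length)
    while not dma.done():
        pass

copy_buffer(StubDmaChannel(), 0x1000, 0x2000, 64)  # runs against the stub
```

The driver above never changes when the stub is replaced by a real implementation, which is precisely the resilience the harness approach is after.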
Can electronic system-level (ESL) synthesis deliver the needed 2x improvement in productivity? I think so. Our customers have reported 2x or better on circuits such as DMA controllers, memory controllers, system interconnect, glue logic, peripherals, PHYs, H.264, and 802.11a.
The question then becomes: Are U.S. and European engineers ready and willing to explore new tools and methodologies to improve their productivity and level the playing field? Or will the offshore engineers move more aggressively to adopt these ESL techniques so that they are not only cheaper, but also faster and better?