Engineers are attracted to the flexibility of FPGAs and the ability to make major changes to their designs during testing and even in the field. These attributes are particularly important for emerging markets, where requirements are changing and there are no application-specific standard product (ASSP) solutions.
The performance of modern FPGAs has enabled these devices to target emerging markets, such as 100-Gbit Ethernet (100GbE) switches and Long-Term Evolution (LTE) basestations, that require very high performance. Engineers can reduce risk and development time by using application-specific intellectual property (IP).
FPGAs are well suited to markets that are underserved by ASSPs because the low volumes cannot justify the investment an ASSP requires. FPGAs are also often used on boards to provide glue logic or to bridge between different interconnects; this bridging may address differences between physical interfaces as well as differences between protocols.
Finally, FPGAs are attractive for applications that need parallel digital-signal processing. These include wireless basestations, which need to process multiple data channels in parallel. In such cases, FPGAs can provide better prices and performance than general-purpose DSPs.
Some potential downsides of using an FPGA include a long development cycle and lower gate efficiency than ASICs or ASSPs. The long design cycle results from designing at the gate level rather than using merchant silicon or developing software for a merchant processor. The lower gate efficiency results from a generic layout that must serve a diverse range of applications. Some current trends seek to address these issues in designing with FPGAs. Other trends focus on increasing FPGA performance.
DESIGNING WITH FPGAs
The design process using an FPGA is far more complex than that using ASSPs or processors. To use FPGAs, a system designer must learn ASIC design tools instead of the more common and easier software-development tools required for processors. Relative to using an ASSP, using an FPGA takes longer and requires more resources. Even after the engineer becomes familiar with the tools, compiling the register-transfer-level (RTL) design can take a long time. Leading FPGA vendors offer incremental compilation and partial reconfiguration to help reduce compile times.
By making it easier to use FPGAs, vendors hope to increase the available target market for their devices. Altera and Xilinx have already developed tools that enable designers to go from DSP designs in Matlab to an FPGA design. Previously, this was a manual and error-prone task. Look for FPGA vendors to offer more sophisticated tools in the future.
Another approach to reducing development effort is raising the level of abstraction for the design. This task requires tools that let designers use high-level languages such as C and C++ instead of the current VHDL and Verilog. Although this concept is simple, in practice it has been difficult or impossible to accomplish effectively. Thus, the trend is more toward using a combination of VHDL/Verilog and C++, along with pre-configured IP blocks. Impulse Accelerated is a leading example of a company that provides IP and the tools to enable design translation from C to FPGA.
Another way to reduce the development cycle is through the use of third-party IP or by embedding hard IP in the FPGA. For emerging applications such as 100GbE, third-party IP is critical. Often, this IP is first available on an FPGA before any other silicon. Sarance Technology is a good example of a third party delivering leading IP technology such as 40/100GbE media access controllers (MACs) and the Interlaken interface.
Yet another method of delivering IP is to embed it into the FPGA as a hard core. Embedded IP generally results in better die-area utilization and lower power dissipation, and it does not have licensing fees. If the designer does not use these IP blocks, however, the embedded IP is a waste of die area and power. So, FPGA vendors are careful to embed only IP that has broad applicability. Examples of embedded IP include PCI Express cores, Ethernet MACs, multiply-accumulate units, and DRAM controllers. In the next generation of FPGAs, likely embedded blocks will include USB and CPUs (ARM/MIPS).
By embedding more IP, vendors can create application-specific FPGAs instead of general-purpose FPGAs. Unlike an ASSP, an application-specific FPGA can address multiple markets, is configurable for customer requirements, and eliminates much of the glue logic that may be needed in the system. Although this approach narrows the target market, the resulting FPGA optimizes the performance and power dissipation for the targeted application, addressing one of the major issues of using FPGAs. In 2010 and 2011, look for the smaller FPGA vendors to create application-specific FPGAs.
Process technology, greater performance, faster interfaces, and lower power consumption are the major technology trends. Altera and Xilinx, the leading FPGA vendors, have used process technology to drive density and performance. In 2010, Altera and Xilinx should start shipping large volumes of the 40-nm FPGAs that they sampled in 2009.
Altera and Xilinx are in a race to reach the next process node with their respective foundries. Altera works closely with TSMC, which expects to sample its 28-nm technology in the first half of 2010. Compared to its 40-nm process, TSMC estimates its 28-nm process node will provide twice the density and reduce power dissipation by 30% to 50%. On the basis of TSMC’s plans, Altera could sample an FPGA in 28 nm by the first quarter of 2010.
IBM, Chartered, GlobalFoundries, Infineon, Samsung, and STMicroelectronics are jointly developing 28-nm process technology. These suppliers expect to sample 28-nm technology in the second half of 2010, and they estimate that devices in 28 nm will offer 40% more performance and 20% less power dissipation than similar products in 45 nm. Having lost the process-technology lead by working with UMC, Xilinx will presumably work with IBM and its alliance on 28-nm products.
For the leading process nodes, foundries offer a low-power version as well as a high-performance version. High-end FPGAs typically prioritize performance over low power and are likely to select the high-performance version. Specialized players, such as Lattice, may opt for the low-power version, but they are unlikely to adopt 28 nm until late 2010.
By 2011, FPGA vendors will be talking about 22 nm and its benefits. Volume shipments at that level, though, are unlikely to begin before 2013.
POWER AND PERFORMANCE
One downside of moving to the leading-edge process technology and adding more transistors is greater overall power dissipation. Overall power consists of dynamic power from active logic and static power from idle logic. Although high power dissipation is a concern for both ASICs and FPGAs, the problem is worse for FPGAs because many resources, such as logic and multipliers, may remain unused on an FPGA.
FPGA vendors reduce power dissipation by increasing gate efficiency (using more of the available resources) and by using power-management techniques. FPGA vendors can reduce active power by gating the clocks to resources that are not being used. They will also mix fast and low-power transistors to meet the performance requirements at minimum power.
Static power, however, is a function of the leakage current of each transistor, and more transistors results in greater leakage current. Because foundries offer variants of the process technology that are optimized for either low power or for performance, we expect some vendors to differentiate by using low-power transistors to reduce overall power. In addition, an FPGA can be operated at a lower voltage to reduce power, and the architecture can be designed to consume less power.
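The relationships described above follow the standard first-order CMOS power equations: dynamic power scales with the square of supply voltage, while static power scales linearly with voltage and with transistor count. The sketch below illustrates the arithmetic; the component values (switched capacitance, activity factor, leakage current) are invented for the example and are not vendor data.

```python
# First-order CMOS power model (textbook equations; all component
# values below are hypothetical, chosen only to illustrate scaling).

def dynamic_power(alpha, c_farads, v_volts, f_hz):
    """Switching power: P_dyn = alpha * C * V^2 * f."""
    return alpha * c_farads * v_volts**2 * f_hz

def static_power(n_transistors, i_leak_amps, v_volts):
    """Leakage power: P_static = N * I_leak * V."""
    return n_transistors * i_leak_amps * v_volts

# Hypothetical fabric: 10 nF switched capacitance, 20% activity factor,
# 400 MHz clock, 100M transistors each leaking 10 nA.
for v in (1.2, 1.0):
    p_dyn = dynamic_power(0.20, 10e-9, v, 400e6)
    p_stat = static_power(100e6, 10e-9, v)
    print(f"V={v} V: dynamic={p_dyn:.2f} W, static={p_stat:.2f} W")
```

Dropping the supply from 1.2 V to 1.0 V cuts dynamic power by about 30% (the V-squared term) but static power by only about 17% (the linear term), which is why vendors also turn to low-leakage transistor variants for the static component.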
Another way to reduce power is to increase the clock speed and use fewer gates to perform the same function. Although operating the gates at a faster rate increases power dissipation per gate, overall power is lower because fewer gates are used; with fewer gates, static power is also reduced. Because of these low-power benefits, FPGA devices that have a faster internal interconnect fabric may eventually dominate most FPGA markets. Achronix is leading the way by doubling the operating speed of its internal fabric.
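The fewer-gates-at-a-faster-clock argument can be shown with simple arithmetic. In this sketch (all per-gate power figures are invented for illustration), per-gate dynamic power scales with clock frequency while static power scales with gate count, so halving the gates while doubling the clock leaves dynamic power unchanged but halves leakage.

```python
# Illustrative comparison of two designs with equal throughput.
# Per-gate power coefficients are hypothetical, not measured data.

P_DYN_PER_GATE_PER_MHZ = 1e-6   # watts per gate per MHz (assumed)
P_STATIC_PER_GATE = 2e-4        # watts of leakage per gate (assumed)

def total_power(gates, f_mhz):
    dynamic = gates * f_mhz * P_DYN_PER_GATE_PER_MHZ
    static = gates * P_STATIC_PER_GATE
    return dynamic + static

# Same work two ways: a wide design of 10,000 gates at 300 MHz, or a
# time-multiplexed design of 5,000 gates at 600 MHz.
slow_wide = total_power(10_000, 300)   # 3.0 W dynamic + 2.0 W static
fast_narrow = total_power(5_000, 600)  # 3.0 W dynamic + 1.0 W static
print(slow_wide, fast_narrow)
```

Under these assumptions the faster, narrower design saves a full watt, all of it from the static term, which is the advantage a faster fabric such as Achronix's aims to exploit.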
Emerging applications such as 100G Ethernet and OTU4 are driving the need for transceivers capable of handling around 10 Gbits/s. Altera was the first company to sample an FPGA that embeds 11-Gbit/s transceivers. Achronix also has a 10-Gbit/s transceiver, and Xilinx has announced plans to offer a 10-Gbit/s transceiver in its Virtex family.
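The roughly 10-Gbit/s figure follows directly from the 100GbE lane structure: the 100-Gbit/s MAC rate is expanded by 64b/66b line coding and split across ten serial lanes in the CAUI-10 electrical interface, so each transceiver must run slightly above 10 Gbits/s, with 11-Gbit/s parts providing margin.

```python
# Per-lane serial rate for 100GbE over a 10-lane (CAUI-10) interface.
mac_rate_gbps = 100.0
coded_rate = mac_rate_gbps * 66 / 64   # 64b/66b coding -> 103.125 Gbit/s
per_lane = coded_rate / 10             # 10 lanes -> 10.3125 Gbit/s each
print(per_lane)  # 10.3125
```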
General-purpose I/Os have a wide range of operation, and they support multiple voltage and current requirements to satisfy different standards, such as those for double-data-rate (DDR) memory and for PCI Express. To support such broad specifications, vendors must compromise on performance.
Such compromises have resulted in a lack of support for the newest memory-interface specifications. Currently, FPGAs are limited to DDR3-1066. We expect FPGA vendors to enhance their general-purpose I/Os to support DDR3-1333 and eventually DDR3-1600.
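What is at stake in these speed grades is peak memory bandwidth. The number in a DDR3 grade is the transfer rate in megatransfers per second, so for a standard 64-bit (8-byte) data bus the peak bandwidth is simply that rate times eight, as the sketch below shows.

```python
# Peak theoretical bandwidth of a 64-bit DDR3 interface at the
# transfer rates discussed above (DDR3-nnnn = nnnn megatransfers/s).
BUS_BYTES = 64 // 8  # 64-bit data bus -> 8 bytes per transfer

def peak_bw_mb_s(megatransfers):
    return megatransfers * BUS_BYTES

for mt in (1066, 1333, 1600):
    print(f"DDR3-{mt}: {peak_bw_mb_s(mt)} MB/s")
```

Moving from DDR3-1066 to DDR3-1600 would raise peak bandwidth from about 8.5 GB/s to 12.8 GB/s per interface, which is why I/O support for the faster grades matters.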