FPGAs continue to take on more work, to the point where they can truly be called systems-on-a-chip (SoCs). This new role stems in large part from the 65- and 90-nm process technologies that dominate today and the 45-nm technology on the horizon this year for Xilinx and Altera. As such, FPGAs now offer more than 300,000 lookup tables (LUTs) and include memory and a microprocessor core to boot (if you’ll pardon the pun).
FPGA vendors are expecting growth in wireless and wired communications, especially thanks to triple-play applications (voice, TV, Internet). Signal processing and other bandwidth-intensive applications will also drive growth due to video becoming virtually ubiquitous on mobile devices.
Throw new low-power features and a bevy of new high-speed serial bus IP into the mix, and voilà, you have a true SoC capable of taking on demanding new tasks like video processing. FPGAs will also break ground in new territories, such as high-performance computing, because they tend to be good at massively parallel operations.
In fact, the FPGA’s cousin, the CPLD, could even find its way into new cell-phone designs. The mere thought of using a programmable device in a cell phone surely would have gotten you thrown to the wolves in years past. But small size, low cost, and reasonably low power budgets will allow programmable devices to steal ASSP and ASIC sockets.
Adding High-Speed SERDES
With bandwidth-hungry features like video finding their way into just about everything these days (especially mobile devices), it just makes sense for FPGA vendors to both speed up existing line rates and support additional SERDES protocols.
“SERDES I/O approaches are increasingly driving high-performance interface standards due to reduced pin count, lower power, and higher signal quality,” says Stan Kopec, VP of marketing for Lattice Semiconductor.
“Lattice has taken the unique approach of combining SERDES I/O with a low-cost FPGA fabric to extend the benefits of SERDES to a wider range of cost-conscious applications in the communications, video, and storage arenas,” he adds. “Expect low-cost FPGAs that address applications like SMPTE, SATA, PCI Express (PCIe), and other high-end applications to become commonplace during 2008.”
Today, the vast majority of SERDES lines available on FPGAs run at 2.50 or 3.125 Gbits/s, with aggregate system rates of around 10 to 40 Gbits/s this year and next. By 2010, you can expect systems to achieve 100 Gbits/s and faster. Currently, you can gang up to 16 channels at a line rate of 3.125 Gbits/s (which leaves 2.5 Gbits/s for data after coding overhead) to achieve 40 Gbits/s. Expect individual line rates to reach 6.5 Gbits/s, and then 10 Gbits/s in 2010.
SERDES lines are typically added using IP blocks, though some protocols are supported natively, and vendors like Xilinx provide a programmable SERDES port. The newer protocols currently offered, or coming this year, include:
• SPAUI: Originally developed by Cisco Systems for chip-to-chip interconnect, this protocol is a mix of SPI-4 and XAUI protocols. It promises transmission speeds of 100 Gbits/s and higher by using up to 24 lanes running at 5 and 6.25 Gbits/s.
• PCI Express 2.0 (or PCIe 2.0): This protocol doubles the per-lane signaling rate from 2.5 to 5 Gbits/s, allowing an x32 connector to send/receive data at up to 16 Gbytes/s. PCIe-based devices are backward- and forward-compatible. Version 2.0 also features an improved point-to-point data-transfer protocol.
• Serial RapidIO 2.0: The main improvement raises the physical-layer (PHY) signaling rate from 3.125 Gbaud to 5.00 and 6.25 Gbaud. Other improvements include lane widths up to 16; enhanced flow control for the link layer; and support for managing up to 16 million virtual streams between two endpoints using end-to-end traffic management.
IP, Cores, Power, and Tools
As we approach 500,000 LUTs, managing designs with so much silicon real estate is quickly becoming a daunting task as FPGAs encroach on ASIC and ASSP territories. It’s not surprising for customers to use hundreds of thousands of FPGAs for a given design’s life cycle.
So for production volumes below roughly the 50,000-unit range, ASICs are much less viable, and FPGAs or structured ASICs may be a better bet. FPGAs haven’t exactly excelled at reducing power in the past, but new features in both the tools and the FPGA fabric are helping.
“FPGA systems-on-a-chip are here today. With up to half a million lookup tables in a single FPGA on the horizon, high-level design tools that manage this incredible power are a must,” says Kopec. “FPGA tools must not only translate HDL into logic efficiently, but also manage diverse IP modules, including microprocessors, peripherals, bridges, and DSP engines; verify system-level timing; provide detailed feedback on power consumption; and provide an efficient hardware and software debug environment.”
Most FPGA vendors are also beefing up their tools to help speed the engineering process. Many vendors now provide a logic-analyzer tool that works in conjunction with an IP block.
For example, Lattice offers Reveal, a feature that allows for real-time inspection and control of signals. Altera, meanwhile, offers a feature that lets you lock down portions of your design after a successful compile. This saves considerable compile time if you can lock down a large portion of your design, especially for designs with over 100,000 LUTs. Xilinx offers tools such as ChipScope Pro with transaction-level debugging.
Packaging & Layout
Like silicon dies, packaging technologies are shrinking. Two years ago, FPGA packages typically were ball-grid arrays (BGAs) with a ball pitch of 1.0 mm. Now it’s common to find packages with higher ball counts using a 0.8-mm ball pitch. This reduction in pitch trims 20% from each linear dimension, shrinking the package area by about 36%. Meanwhile, devices with lower pin counts (100 or less) are moving from 0.5- to 0.4-mm pitches.
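The pitch-to-area relationship above follows directly from scaling both dimensions: shrinking the pitch from 1.0 to 0.8 mm scales each side of the ball grid by 0.8, so the area scales by 0.8² = 0.64. A minimal sketch of that arithmetic:

```python
# Sketch of the package-shrink arithmetic in the text: for the same
# ball grid, area scales with the square of the ball pitch.

def area_reduction_pct(old_pitch_mm, new_pitch_mm):
    """Percent reduction in package area when the ball pitch shrinks."""
    linear_scale = new_pitch_mm / old_pitch_mm  # 0.8 -> 20% per side
    return (1 - linear_scale ** 2) * 100

print(area_reduction_pct(1.0, 0.8))  # ~36% smaller, as in the text
```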
These reduced pin pitches are requiring more sophisticated (and more expensive) printed-circuit-board (PCB) layout techniques. For instance, escaping the inner balls becomes more challenging at finer pitches, requiring blind, buried, and back-drilled vias placed directly below the balls. This increases PCB design and manufacturing cost, but saves on board area.