Common Ground: Seeking Pin Assignment Balance in FPGA-Based Boards

Nov. 21, 2011
Integrating today's ever-more complicated FPGAs into their host PCBs has proven to be extraordinarily challenging. With better FPGA design-in tools that communicate in several languages, FPGA design-in can be a faster, less frustrating endeavor.

Figure 1: Component placement ignored (bad layout)
Figure 2: Component placement considered (good layout)
Figure 3: DDR3 and QDR II SRAM, FPGA tool-derived pin assignments
Figure 4: One board in a large ASIC prototyping system
Figure 5: Using FPGA knowledge to guide the PCB pin-swapping process

Predicting that IC density and complexity will continue to increase is kind of like predicting that the sun will come up in the morning; you'd literally have to be living under a rock not to be aware of this. And so it is with FPGAs, which are getting more, not less, complex; whose pin counts are going up, not down; and for which the available silicon seems to grow faster than many engineers can figure out how to use it. The only thing that steadfastly refuses to change is design schedules, which bewilderingly remain as constant as the sun.

Integrating these ever-more complicated devices into their host PCBs has proven to be extraordinarily challenging. FPGA designers, schematic engineers, and PCB designers find themselves locked in a divergent battle, struggling to create device pin assignments that satisfy both the FPGA and the PCB. Traditional tools almost encourage each specialist to "throw the design over the wall." As the design progresses, this tool-mandated "not my job" mentality dooms the team to wasting precious time, often late in the project, iterating between the FPGA and PCB design, searching in vain for common ground in pin assignments.

Regrettably, it's the PCB that usually suffers, with more layers and vias being added to accommodate the FPGA. To make matters worse, this typically manual round-trip process introduces errors that may not be exposed until the first prototype is powered up in the lab.

All's Well That Starts Well

Take an average FPGA-based PCB design containing two or three FPGAs. Most processes let the FPGA designer have all of the pin assignment fun (as in, "It's fun to hit my thumb with a hammer"), using the FPGA tools, scripts, text files, spreadsheets, home-grown utilities, and whatever other tricks are at his/her disposal. Those pin assignments are then passed to the schematic engineer, who gets the fun (see above for the definition of "fun") of creating symbols and wiring up the schematic using precisely the same pins as the FPGA designer, including all of the power pins, which can number in the hundreds. Not to mention the lingering fear lurking in the back of his/her mind that if anything is misconnected, the system won't work. Or even worse, on first power-up the prototype will do nothing more than radiate lots of heat until it melts into a useless pile of expensive FPGAs and fiberglass.

And this doesn't even begin to address the PCB designer, that poor soul at the bottom of the design food chain who just happens to have had all of his/her allotted time chewed up by everyone else, who has little say in the design process, and who is told to "fix it but don't change anything" on a board where the PCB routing was never considered for a fleeting µ-second!

That this situation sets the stage for a lot of hand-wringing, late nights, finger-pointing, heated battles, and endless loops through the flow (usually at the end of the design cycle, when it can least be afforded) is only too predictable. Oh, and don't forget that all FPGA pin assignment changes have to propagate through the entire flow, every time, with flawless accuracy.

This, with a little embellishment, is what happens with many average FPGA-based boards. Just imagine ASIC prototyping designs, which can contain dozens of FPGAs. The problems listed above become practically unmanageable, and it is a testament to the skill of the engineers and PCB designers that any of these prototyping systems are ever made to work correctly in a time frame that ensures the ASIC is taped out before the market renders it obsolete.

Take the following two screenshots (Fig. 1 and Fig. 2), which are admittedly exaggerated but nevertheless highlight what can happen while picking pins if component placement is either ignored (bad) or considered (good):

Assuming both designs satisfy the needs of the FPGA (the large device in the middle), which one would most likely inspire the PCB designer to request an unpleasant chit-chat with engineering? True, only the rat's nests are represented here. But their implication is clear: the tangled mass of connections scattered across the FPGA in Figure 1 is destined to cause a routing nightmare. To understand the roots of this issue, you need look no further than the techniques that are generally deployed, and the data that is normally acknowledged, during the FPGA pin assignment process.

The Heart of the Matter

Regardless of how FPGA I/O pins are selected (automatically, manually, or a combination of the two), if the pin assignments have any hope of working well for the entire system, three factors must be considered, both by the designer and by any tools used in the process (a minimal sketch of these inputs follows the list below):

  • The usage rules of the FPGA. In other words, what are the pins and the underlying silicon capable of, and how does one set of pin assignments affect subsequent pin assignments? (This could be a topic for another article addressing the nonsensical idea that it's possible to efficiently shoehorn a static swap matrix into the PCB-driven FPGA pin-swapping process.)
  • The characteristics of the signals in the design. Not all signals are created equal. A DDR3 DIMM byte lane, for example, typically contains 8 data bits, a mask bit, and a set of differential strobes. Arbitrarily assigning the data and strobes to random locations on the FPGA is almost guaranteed to fail. The FPGA design tools know this. The FPGA designer knows this. And any tool that assists in the pin assignment process has to know this. This is a perfect example of why a PCB designer, swapping and moving signals based on the rules contained in a static swap matrix, is playing with fire.
  • The placement of the components on the board. Of the three constraints, this is the one that has the biggest effect on the PCB routing. To pick pin assignments while ignoring the board-level component placement is like driving in the dark without headlights: after a few crashes, you'll get somewhere but probably nowhere close to where you want to be. This is also the one constraint that FPGA design tools do not natively or electronically comprehend because they focus on the FPGA - one FPGA. That's not to say that an FPGA designer can't successfully guide the FPGA tools based on his/her knowledge of the PCB component locations. But doing that across multiple FPGAs and then managing the pin assignment changes with each modification to the part placement is like a recurring bad dream.
  • An additional factor that is often disregarded but that needs to be addressed is the link between the FPGA and the schematics. Getting the FPGA connectivity captured in the schematic wouldn't be too painful if it were done only once. But that never happens. In fact, the connectivity changes constantly and comes from two main sources: the FPGA designer, optimizing the pinouts for performance reasons, and the PCB designer, trying to reduce layer counts or routing time. As far as the hardware engineer is concerned, the source of the change is irrelevant; the FPGA connectivity still has to be updated accurately and reflected in the schematics.
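
To make those three inputs concrete, here is a minimal, hypothetical sketch in Python (all names are invented and do not come from any particular vendor tool) of the data a placement-aware pin planner has to juggle: the FPGA's usage rules, the signal groups that must stay intact, and the placement of the components at the far end of each net.

```python
from dataclasses import dataclass

@dataclass
class FpgaPin:
    name: str                # package pin, e.g. "AB12"
    bank: int                # I/O bank the pin belongs to
    xy: tuple                # pin location on the package (mm)
    capabilities: frozenset  # e.g. {"SSTL15", "diff", "clock"}

@dataclass
class SignalGroup:
    """Signals that must be treated as a unit, e.g. one DDR3 byte lane:
    eight data bits, a data-mask bit, and a differential strobe pair."""
    name: str
    signals: list
    io_standard: str
    same_bank: bool = True   # keep the whole group in one I/O bank

@dataclass
class PlacedComponent:
    refdes: str              # e.g. "J1" for a DDR3 DIMM connector
    xy: tuple                # component location on the board (mm)

# 1. FPGA usage rules: what each pin (and its underlying silicon) can do.
pins = [
    FpgaPin("AB12", bank=34, xy=(1.0, 2.0), capabilities=frozenset({"SSTL15"})),
    FpgaPin("AB13", bank=34, xy=(1.0, 3.0), capabilities=frozenset({"SSTL15", "diff"})),
]

# 2. Signal characteristics: groups that cannot be scattered at random.
byte_lane_0 = SignalGroup(
    name="DDR3 byte lane 0",
    signals=[f"DQ{i}" for i in range(8)] + ["DM0", "DQS0_P", "DQS0_N"],
    io_standard="SSTL15",
)

# 3. Board placement: where the other end of each connection lives.
dimm = PlacedComponent(refdes="J1", xy=(55.0, 20.0))
```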

Since the classic FPGA vendor tools already deal with the first two constraints mentioned above, it's the third one that any new tools or techniques need to focus on if the goal really is system-optimized pin assignments that work well for the FPGA and the board.

Look At It From My Perspective

Before jumping to the conclusion that the easy answer is just to force FPGA designers to think outside the FPGA box, look at the problem from their perspective. Tasked with writing, simulating, and synthesizing the RTL; deliberating over the timing (both inside and outside the FPGA); selecting the proper I/O standards; assessing routing and clocking resources; sizing the I/O buffers; floorplanning the device; and generating the programming files, FPGA designers have their own set of headaches to contend with. Throwing PCB routing into the mix does not help.

And for some sections of the design, FPGA designers may not have much say in the matter: for the FPGA to work, they could be forced (or at least implored) to use pin assignments derived algorithmically by the FPGA tools. These tools can assist the designer in interfacing the FPGA to DDR3s, QDR II+ SRAMs, RLDRAMs, etc. (Fig. 3), and in producing soft IP cores. Testing an FPGA designer's patience then entails nothing more than proposing modifications to these hard-fought, "golden" pin assignments.

There are a multitude of other pin allocation decisions that can elicit thoughts of "What are they thinking?!" from those not intimately familiar with the FPGA's requirements, but that make perfect sense to FPGA designers. Some devices, for example, contain hard cores carefully constructed to communicate with specific types of external logic, such as DSPs or Ethernet PHYs, and the silicon that implements that logic prefers certain I/O pins. So when FPGA designers rigidly insist that pin assignments are fixed, many times they're absolutely right.

So what happens when all of this makes its way to the board? This is where the pin assignment rubber meets the PCB routing road. It's at this point that the team is given its first real glimpse of how early design-cycle choices will actually affect the physical system. Signal integrity, mechanical and thermal qualities, manufacturing cost, packaging, reliability, power delivery and consumption (all the things whose effects and tradeoffs have been carefully evaluated throughout the design process) now begin to come together, and their interactions play out.

High-speed components like DDR3 have not only stringent FPGA integration requirements but rigorous PCB requirements as well: distances between parts, maximum allowed signal length, number of layers and vias, crosstalk, etc. For each of the 240 nets on every DDR3 DIMM (and, of course, for every other part), the PCB designer has to group the nets into bundles, merge those bundles onto their appropriate layers, and then flow-plan and route them around any impeding obstructions, all with an eye toward meeting multiple, often conflicting, design objectives. Poorly chosen FPGA pin locations can turn this process into a veritable nightmare, resulting in round after round of impassioned team-wide negotiations that inevitably push the project's timeline right to its unforgiving edge. Worse, it can add layers to the PCB, which drives up end-product costs.

Now What?

Any tools that claim to improve this situation have to be conversant in multiple design-space languages. They also have to be able to integrate, or at least easily communicate, with a range of other products, databases, and file formats. And as much as engineers enjoy deriding the EDA industry, EDA companies have developed new tools and flows to help design teams cope with integrating FPGAs onto PCBs.

Each of these products approaches the problem in unique ways, but the bottom line is that all of them exploit component placement as a constraint in establishing FPGA pin assignments that complement the entire design, not just the FPGA. And all of them have been used effectively by engineers across the globe to reduce design cycles and increase product quality. However, first-generation tools have some limitations. They may require that PCB placement be done first, which implies that un-optimized (manual) pin assignment is also done first. Or, FPGA pin assignment rules may be missing in the PCB layout space, so that pin assignment still has to be done manually.

A more advanced approach would assemble the FPGA rules, signaling, and part placement data (the three constraints listed above) into one location and then, through a "connectivity intent definition" mechanism, utilize I/O synthesis technology to automatically select the best FPGA pins across the entire design. Because the tool asks the designer to supply the connectivity intent instead of the actual pin assignments, the designer enjoys the luxury of knowing that no matter where the components are placed or oriented, the actual connectivity can be re-established by simply re-running synthesis. (Moving or rotating components implies a change only to the actual connectivity, not the connectivity intent.)
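
As a toy illustration of how placement-driven I/O synthesis could work (this is not Cadence's algorithm; every name below is hypothetical, and the FPGA usage rules and signal groups are ignored for brevity), a synthesis pass might simply assign each signal in the connectivity intent to the still-free pin whose board location sits closest, in Manhattan terms, to the placed component it must reach. Moving or flipping a component then changes only the placement input; re-running the pass re-derives a consistent set of pins.

```python
def manhattan(a, b):
    """Point-to-point Manhattan estimate of a connection's length."""
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def synthesize_pins(intent, free_pins, placement, fpga_origin):
    """Toy placement-aware pin selection.

    intent     : list of (signal, target_refdes) pairs -- what must connect,
                 not which pins to use
    free_pins  : dict pin_name -> (x, y) offset of the pin on the FPGA package
    placement  : dict refdes -> (x, y) board location of each component
    fpga_origin: (x, y) board location of the FPGA itself
    """
    assignment = {}
    for signal, target in intent:
        target_xy = placement[target]
        # Pick the still-free pin whose board location is nearest the target.
        best = min(free_pins,
                   key=lambda p: manhattan((fpga_origin[0] + free_pins[p][0],
                                            fpga_origin[1] + free_pins[p][1]),
                                           target_xy))
        assignment[signal] = best
        del free_pins[best]          # a pin can only be used once
    return assignment

# Connectivity intent: which signals must reach which placed component.
intent = [("DQ0", "J1"), ("DQ1", "J1"), ("CTRL0", "U9")]
pins = {"AB12": (1.0, 2.0), "AB13": (1.0, 3.0), "C4": (-8.0, 0.5)}
placement = {"J1": (55.0, 20.0), "U9": (-30.0, 10.0)}

print(synthesize_pins(intent, dict(pins), placement, fpga_origin=(0.0, 0.0)))
# If J1 is later moved or flipped, only `placement` changes; re-running
# synthesize_pins() re-derives a consistent set of pin assignments.
```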

Cadence Allegro FPGA System Planner takes just such a "connectivity intent" approach. One customer was able to take full advantage of this capability when, at the last minute, it was discovered that an FPGA had to be flipped to the opposite side of the board. Using the "connectivity intent with synthesis" functionality, days of rework were trimmed to a few hours. Another customer produced a 48-FPGA ASIC prototyping system in half the time of their previous 15-FPGA single-board system (Fig. 4).

Wait Just a Minute

However, as with any product, FPGA pin planning tools are not a panacea. They will not, nor are they intended to, replace the FPGA vendor's tools. They can use, and ideally improve, pin assignments selected by the vendor tools, but in some designs, or for a host of other reasons, engineers may want to use existing FPGA tool-defined I/O locations. Again, because the FPGA tools cannot spell "PCB," these pin assignments are often not optimal, at least for the board.

In these circumstances the EDA tools, just like a machete or a screwdriver or a chainsaw, have to be used as they were intended to be: in the right place and at the right time in the design process. As such, any sufficiently advanced FPGA-PCB planning tool must be able to optimize, or guide the engineer in optimizing, FPGA tool-defined pin locations within an FPGA-constrained boundary, using algorithms that understand and adhere to the original, vendor-driven pin assignment rules.

There are other restrictions that these tools suffer from as well: they all view the PCB as a two-dimensional object and the routing as a point-to-point Manhattan connection, neither of which is true. (How drastically these assumptions affect I/O assignment quality is directly related to the PCB topology.) But perceiving the PCB to be two-dimensional is certainly a lot better than not perceiving the PCB at all. Worse, some of these products, in some cases, view the FPGA as a fixed-pin device, which spawns a rash of problems downstream when swapping pins in the PCB.

An FPGA is a programmable device. By its very design, an FPGA's pins are dynamic: how one is used immediately and directly affects the others. This cannot be determined or orchestrated through a simple mechanism like a static swap matrix, which has none of this intelligence. Created too coarsely, a swap matrix will allow PCB designers to screw up the FPGA drastically. Too fine, and they will be hamstrung, unable to swap anything of any significance.

One way to get beyond this would be to instill FPGA expertise into the PCB tool, perhaps through an underlying "engine," to lead the PCB designer in the right direction by suggesting FPGA-compliant pin-swap candidates. Bringing FPGA awareness to the PCB designer's desktop would help ensure that they are continually steered toward a more predictable (and routable) solution (Fig. 5). So it's not a single tool, but how well the tool flow understands FPGA-based designs, that ultimately determines the team's fate and their ability to realize the completed system.
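
Here is a minimal sketch of what such an FPGA-aware "engine" might check before offering a swap candidate to the PCB designer. The rules and data layout are hypothetical and far simpler than any real device's constraints; the point is only that the suggestions come from rules, not from a static matrix.

```python
def legal_swap_candidates(current_pin, pin_db, netlist):
    """Suggest pins that the signal on `current_pin` could move to without
    breaking the FPGA.

    pin_db : dict pin -> {"bank": int, "iostd": str, "diff_pair": str or None}
    netlist: dict pin -> signal currently assigned to it (None if unused)
    """
    here = pin_db[current_pin]
    candidates = []
    for pin, props in pin_db.items():
        if pin == current_pin:
            continue
        # Rule 1: stay in the same I/O bank (its voltage is fixed by other signals).
        if props["bank"] != here["bank"]:
            continue
        # Rule 2: the destination must be configured for the same I/O standard.
        if props["iostd"] != here["iostd"]:
            continue
        # Rule 3: don't break up a differential pair.
        if here["diff_pair"] is not None and props["diff_pair"] is None:
            continue
        # Rule 4: keep it simple -- only offer currently unused pins.
        if netlist.get(pin) is None:
            candidates.append(pin)
    return candidates

pin_db = {
    "A1": {"bank": 34, "iostd": "SSTL15", "diff_pair": None},
    "A2": {"bank": 34, "iostd": "SSTL15", "diff_pair": None},
    "B7": {"bank": 16, "iostd": "LVCMOS25", "diff_pair": None},
}
netlist = {"A1": "DQ3", "A2": None, "B7": None}
print(legal_swap_candidates("A1", pin_db, netlist))   # -> ['A2']
```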

Summary

With better FPGA design-in tools that communicate in several languages (FPGA, schematic, and PCB), FPGA design-in can be a faster, less frustrating endeavor. But to properly solve the problems touched on in this article, these tools are going to have to take responsibility for their long-standing inability to adequately accommodate the PCB up front. Tools (and processes) that do not will continue to subject the entire team to a series of time-consuming, exasperating iterations, forcing the team to choose between the lesser of two evils: slip the schedule to allow for more iterations, or increase PCB layer counts (and potentially decrease system performance) to account for inferior FPGA pin assignments.
