
FPGA Boards Stoke Storage, AI Speeds

FPGA boards are being used to implement everything from computational storage to compression and machine learning.

The flexibility of field-programmable gate arrays (FPGAs) allows them to be used for applications ranging from network packet processing to signal processing, where custom ICs can’t keep up with changes or where programmability is useful. FPGAs have moved into the mainstream and the cloud. Xilinx’s Alveo and Intel’s Programmable Acceleration Card (PAC) lines target this space, and cloud service providers like Amazon, Microsoft, and Baidu deploy these cards to provide their customers with an FPGA compute platform.

These platforms provide a standard, OS-based interface that allows developers to take advantage of preconfigured FPGA services to accelerate a wide variety of applications. They can accelerate database processing or analyze data streams like network packets. Such boards typically contain an FPGA and a lot of DRAM.

ScaleFlux’s CSD 2000 (Fig. 1) adds up to 8 TB of 3D NAND flash memory. The FPGA on this computational storage platform can be used for a number of useful chores like transparent data compression, which essentially doubles the effective capacity of the flash memory to 16 TB. This can cut costs, but that’s not the only task for an FPGA.
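The idea behind transparent compression can be shown in miniature. The sketch below is an illustrative Python model using zlib, not the CSD 2000’s actual firmware — the real drive does this in FPGA hardware, invisibly to the host. Data is compressed on write and decompressed on read, so compressible data occupies a fraction of its logical size:

```python
import zlib

class CompressedStore:
    """Toy model of a transparently compressing block store (illustrative only)."""

    def __init__(self):
        self._blocks = {}  # stand-in for physical flash

    def write(self, lba: int, data: bytes) -> None:
        # Compress before committing to "flash"; the host still sees
        # the full logical block size.
        self._blocks[lba] = zlib.compress(data)

    def read(self, lba: int) -> bytes:
        # Decompress on the way back out; the round trip is lossless.
        return zlib.decompress(self._blocks[lba])

    def physical_bytes(self) -> int:
        return sum(len(b) for b in self._blocks.values())

store = CompressedStore()
block = b"database row " * 300   # structured data tends to compress well
store.write(0, block)
assert store.read(0) == block    # transparent to the reader
print(f"compression ratio: {len(block) / store.physical_bytes():.1f}x")
```

A 2:1 ratio on typical database pages is what turns 8 TB of raw NAND into 16 TB of usable capacity; the win depends entirely on how compressible the workload’s data is.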


1. ScaleFlux’s CSD 2000 series can double raw capacity and accelerate database operations.

Database acceleration is another job that the FPGA can easily handle, significantly increasing the number of queries per second. The FPGA’s performance is monitored to keep power and heat within the board slot’s limits. The CSD 2000 has a four-lane (x4) PCI Express (PCIe) interface.
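That power-and-thermal monitoring amounts to a simple feedback loop: measure board power, and back off the accelerator clock when the slot budget is exceeded. The sketch below is a hypothetical illustration — the function name, thresholds, and clock range are invented, and ScaleFlux’s actual management firmware is not public:

```python
def throttle_step(power_w: float, clock_mhz: int,
                  power_limit_w: float = 25.0,
                  min_mhz: int = 100, max_mhz: int = 500,
                  step_mhz: int = 25) -> int:
    """One iteration of a power-capping loop (illustrative values only).

    If measured board power exceeds the slot limit, reduce the FPGA
    clock by one step; otherwise ramp it back up toward maximum.
    """
    if power_w > power_limit_w:
        return max(min_mhz, clock_mhz - step_mhz)
    return min(max_mhz, clock_mhz + step_mhz)

# Over budget at full clock: back off one step.
print(throttle_step(power_w=30.0, clock_mhz=500))  # 475
# Plenty of headroom: ramp back up.
print(throttle_step(power_w=10.0, clock_mhz=475))  # 500
```

Real boards close this loop in firmware against telemetry from on-board sensors, but the trade-off is the same: throughput is capped by what the slot can deliver and dissipate.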

Bittware’s 250-E1S (Fig. 2) puts a Xilinx UltraScale+ Kintex FPGA into an EDSFF E1.S form factor. This short ruler package is also a PCIe-based, NVMe accelerator. The Computational Storage Processor (CSP) is designed to run Eideticom’s NoLoad CSP to provide database acceleration for platforms like MySQL, RocksDB, and Hadoop. The software can run on other FPGA platforms in addition to the 250-E1S.


2. Bittware’s 250-E1S, which fits into an EDSFF E1.S slot, contains DRAM and a Xilinx UltraScale+ Kintex FPGA to provide NVMe-based acceleration.

The Bittware 250-HMS board adds Samsung zNAND flash memory along with DRAM and MRAM. This full-height, half-length PCIe card is comparable to the CSD 2000.

The other place where FPGA boards are becoming more common is in the network. Achronix’s VectorPath S7t-VG6 Accelerator card (Fig. 3) sports 400-Gb Ethernet QSFP-DD and 100-Gb Ethernet QSFP56 interfaces. This translates to an aggregate bandwidth of 4 Tb/s. At the heart is a 7-nm Speedster 7t AC7t1500 FPGA with 692K 6-input LUTs and a 20-Tb/s, two-dimensional network-on-chip (NoC).


3. Achronix VectorPath S7t-VG6 delivers over 80 TOPS.

This variety of FPGA form factors is putting customizable logic into almost every application space. Custom IP can still be deployed on these platforms, but these days the IP is more likely to be standardized, accelerating support for things like popular databases. So don’t dismiss FPGA-based solutions just because FPGAs have a reputation for being hard to program. It’s more likely that someone has already assembled the solution you need in a package that fits your requirements and is also easy to use.
