
FPGAs and eFPGAs Accelerate ML Inference at the Edge

May 18, 2021
With ML models still evolving rapidly, there is a need for flexible hardware architectures that can accelerate changing models. Learn why FPGAs, which combine high performance with flexibility, are an ideal solution for edge-inference applications.

Many industries are rapidly adopting machine learning (ML) to gain insights from the ever-increasing data generated by billions of connected devices. This, combined with the demand for low latency, drives a growing push to move inferencing hardware closer to where the data is created. This white paper describes why FPGA-based hardware accelerators are needed to eliminate network dependencies, significantly increase performance, and reduce the latency of ML applications.
