Femtosense
Packing Large AI into Small Embedded Systems
June 24, 2025
What you’ll learn:
- What Femtosense’s SPU-001 AI accelerator is.
- How sparsity and small-footprint accelerators reduce space and power requirements.
Not every microcontroller can handle artificial-intelligence and machine-learning (AI/ML) chores. Simplifying the models is one way to squeeze algorithms into a more compact embedded compute engine. Another is to pair the microcontroller with an AI accelerator like Femtosense’s SPU-001 Sparse Processing Unit (SPU) and take advantage of sparsity in AI/ML models (see figure).
In this episode, I get the facts from Sam Fok, CEO of Femtosense, about AI/ML on the edge, the company’s dual-sparsity design, and how the small, low-power SPU-001 can augment a host processor.
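To see why sparsity matters for edge inference, consider the matrix-vector multiply at the heart of a neural-network layer: when most weights are zero, storing and multiplying only the nonzeros cuts both memory and compute roughly in proportion to the sparsity. The sketch below is purely illustrative and does not represent Femtosense’s actual hardware or data formats.

```python
# Illustrative sketch: weight sparsity reduces work in a matrix-vector
# multiply (the core operation of neural-network inference). Only
# nonzero weights are stored and multiplied, so cost scales with the
# number of nonzeros rather than the full matrix size.

def dense_matvec(W, x):
    """Dense multiply: every element costs a multiply-accumulate (MAC)."""
    return [sum(w * xi for w, xi in zip(row, x)) for row in W]

def sparse_matvec(rows, x):
    """Sparse multiply over per-row (column_index, value) pairs."""
    return [sum(v * x[j] for j, v in row) for row in rows]

# A 4x4 weight matrix with 75% of entries zero.
W = [
    [0.0, 2.0, 0.0, 0.0],
    [0.0, 0.0, 0.0, 1.5],
    [3.0, 0.0, 0.0, 0.0],
    [0.0, 0.0, 0.5, 0.0],
]
# Sparse form: keep only nonzeros as (column, value) pairs.
rows = [[(j, v) for j, v in enumerate(row) if v != 0.0] for row in W]

x = [1.0, 2.0, 3.0, 4.0]
assert dense_matvec(W, x) == sparse_matvec(rows, x)

dense_macs = sum(len(row) for row in W)      # 16 MACs
sparse_macs = sum(len(row) for row in rows)  # 4 MACs
print(f"MACs: dense={dense_macs}, sparse={sparse_macs}")
```

A "dual-sparsity" design goes further by also skipping computation when the activations (the entries of `x`) are zero, which is common after ReLU-style nonlinearities.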
>>Check out these TechXchanges for more podcasts, and similar articles and videos