There is a vast opportunity to propagate artificial intelligence (AI) throughout the network edge, across the automotive, industrial, consumer electronics, and other Edge markets. Edge applications need powerful compute and ultra-efficient AI acceleration to run AI and machine learning (ML) workloads on a bare minimum of power, and it all has to be done at a price that makes sense for an Edge product. AI technology had to be engineered to fit at the Edge, and now it has been.
Synaptics is satisfying the requirements of Edge AI systems with AI-native processors that run AI/ML workloads far better than general-purpose devices, with greater power-efficiency and cost-efficiency. Equally important, Synaptics has been a leading champion of the development of sophisticated design tools and a burgeoning open-source design environment, two more essential prerequisites for a thriving Edge AI market.
But beyond satisfying the requirements necessary to bring AI to the network edge, it’s our scalable silicon architecture that makes it possible to adapt to the growing needs of Edge AI. By combining open-source software and innovative silicon architecture, developers, users, and customers can tap into the full potential of artificial intelligence at the Edge.
The difference between cloud and Edge
AI has long been accessible from the network edge. People invoke AI-enabled voice assistants through their smart speakers. When people talk to their TV remote controls to search for shows, it’s AI that makes it possible. People can query AI chat tools on their smartphones. In each case, though, the request is forwarded to a remote data center, where the AI processing is actually conducted.
It’d be simpler to analyze data where it is both collected and used. Data centers incur service costs, network usage costs, and consume network bandwidth. Local AI alleviates, or eliminates, all of that. The same goes for latency from cloud processing. When signals must travel through the cloud to a data center and back, there is inevitably a lag between query and response. There is also an inherent security risk with every transfer of data — keeping data local serves the interest of data privacy.
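The latency argument can be made concrete with a back-of-the-envelope comparison of a cloud round trip versus purely local inference. All figures below are illustrative assumptions, not measurements of any particular device or network:

```python
# Back-of-the-envelope comparison of cloud vs. local inference latency.
# All figures are illustrative assumptions, not measured values.

uplink_ms = 25            # device -> data center network transit
cloud_inference_ms = 10   # fast data-center silicon
downlink_ms = 25          # data center -> device
cloud_total = uplink_ms + cloud_inference_ms + downlink_ms

local_inference_ms = 30   # slower Edge silicon, but no network hops
local_total = local_inference_ms

print(cloud_total, local_total)  # 60 30
```

Even when Edge silicon is slower at raw inference than a data center, removing the two network legs can cut end-to-end response time, and the gap only widens on congested or high-latency links.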
OEMs who build products that operate at the network edge should be aware that it is not only technologically feasible to embed local intelligence in Edge products, but that doing so is practical, affordable, and easier than ever before. This includes using AI to support multi-modal input. Reliable, accurate, and safe machine vision, for example, is becoming a highly attractive addition to an expanding range of Edge products.
Edge AI processing done right
Synaptics takes a holistic approach to the challenge of producing effective Edge AI, innovating in both hardware and software, while also taking into consideration tools and support.
Synaptics starts with AI-native processing. Standard processors are designed to be competent at a very broad range of tasks, but AI relies heavily on a subset of very specific, often highly iterative operations. Standard processors become adequate for AI when paired with coprocessor chips optimized to perform AI workloads. This is far from the most efficient approach, however. Synaptics instead integrates AI-optimized circuitry directly into our controllers – in other words, we bake AI into the core of our processors. Being AI-native enables Synaptics processors to run AI workloads far more effectively and power-efficiently.
Minimizing power consumption is always important for Edge AI products, but it can be especially critical for battery-operated devices. We have our own design techniques to minimize the amount of power our processors require, but we also make use of AI ourselves to make sure our processors operate more power-efficiently once deployed. Many Edge AI systems need to operate at full power only on occasion. AI-based contextual awareness and ultra-efficient wake-word detection ensure minimal energy usage for any electronic system.
A smart doorbell, for example, need only be active when it detects nearby human activity. We use AI to analyze input data to determine if and when our processors should emerge from sleep mode, and whether or not to spin up fully. An AI-enabled doorbell that detects movements should wake up enough to determine what the moving object is. If the object is a bird or a car, the device should go back to sleep; if it’s a human, then, and only then, should it initiate the image capture function and the wireless module to alert the homeowner.
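The doorbell’s tiered wake-up policy can be sketched as a simple state decision. Everything here is a hypothetical illustration: the names, thresholds, and the placeholder classifier stand in for an actual on-device model running on the NPU:

```python
# Illustrative sketch of tiered wake-up logic for an AI-enabled doorbell.
# All names are hypothetical; a real device would run a small on-device
# model on the NPU in place of the placeholder classifier below.

from enum import Enum, auto

class Power(Enum):
    SLEEP = auto()   # motion sensing only
    FULL = auto()    # image capture + wireless alert to homeowner

def classify(frame):
    """Placeholder for a lightweight on-device model.

    Returns a label such as "person", "bird", or "car"."""
    return frame.get("label", "unknown")

def next_state(motion_detected, frame=None):
    if not motion_detected:
        return Power.SLEEP
    # Motion detected: wake just enough to identify the moving object.
    label = classify(frame or {})
    # Only a person justifies spinning up fully; anything else goes
    # back to sleep without touching the camera or the radio.
    return Power.FULL if label == "person" else Power.SLEEP

print(next_state(True, {"label": "bird"}))    # Power.SLEEP
print(next_state(True, {"label": "person"}))  # Power.FULL
```

The key design point is that the expensive subsystems (camera pipeline, wireless module) are gated behind the cheap classification step, so most motion events never cost full-power operation.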
Different AI-enabled features and functions require different combinations of processing power, AI acceleration capability, and on-board memory. Edge products frequently also require wireless connectivity, and need to support a wide range of peripherals. Synaptics has built a portfolio of AI-native processors with different balances of compute resources, wireless connectivity options, and other useful features, so that OEMs will have options available sized for their specific products and tailored for the AI-enabled features and functions they want to run.
Different AI-enabled features and functions also require different AI models. Modern chat tools are based on large language models (LLMs) that are enormous, largely because they must be broad enough and sophisticated enough to field queries regarding every aspect of searchable knowledge.
The features and functions that AI would enable in any given Edge AI product are certain to be very narrow in comparison, which means the models can be much smaller and far more manageable. Synaptics has developed deep expertise in identifying the most useful existing AI models for a given task or, if an appropriate model is lacking, in advising on the best way to devise a new model suitable for the task.
Scalable Silicon Architecture: The framework to grow and evolve
The ability to adapt is essential in the rapidly changing Edge AI market. Those that deploy Edge AI on fixed-function silicon are out of luck if requirements change in six months. That’s why Synaptics is spearheading a new class of Edge AI silicon. Our scalable silicon architecture can be seen in product lines designed to solve the endemic fragmentation of IoT devices, delivering more intelligence at every power level.
Designed for scalability and Edge AI future-proofing, this silicon features innovative hardware and software architecture to power the Neural Processing Unit (NPU) subsystems within our SoC product lines. Transformers are foundational for LLMs and critical for multimodal AI interactions such as natural language processing, so hardware support for accelerating them is essential for state-of-the-art models, alongside support for traditional Convolutional Neural Network (CNN)-based image, video, and audio models. Both are featured in our scalable silicon architecture.
Localized processing based on standard core compute can efficiently run newer or unsupported operators, without offloading to host CPUs. Allowing customers to leverage their silicon investments for longer periods, this low-latency, high-performance approach safeguards against rapidly shifting Edge AI processing paradigms.
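The fallback path described above can be pictured as a per-operator dispatch decision. This is a hypothetical sketch, not Synaptics’ actual runtime: the supported-operator set and the function names are invented for illustration:

```python
# Hypothetical sketch of operator dispatch in an Edge AI runtime:
# operators the NPU accelerates run there; newer or unsupported
# operators fall back to the general-purpose cores on the same SoC,
# avoiding a round trip to a separate host CPU.

NPU_SUPPORTED = {"conv2d", "matmul", "softmax", "attention"}

def dispatch(op_name):
    """Place an operator on the NPU if accelerated, else on core compute."""
    return "npu" if op_name in NPU_SUPPORTED else "core-compute"

# A model graph containing one operator the NPU does not yet support.
graph = ["conv2d", "new_custom_op", "softmax"]
placement = {op: dispatch(op) for op in graph}
print(placement)
```

Because the fallback stays on-chip, a model that uses operators introduced after the silicon shipped can still run with low latency, which is the future-proofing argument made above.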
When it comes to memory, we offer a combination of configurable internal memory and external system memory that can store compiled models, instructions, and descriptors, as well as input/output data and any intermediate data states.
Leading the way in open-source Edge AI
To accelerate market growth in Edge AI, Synaptics is working alongside Google as its first strategic silicon partner. Our Astra™ SL2610 line of AI-native IoT processors, featuring our Torq™ NPU subsystem, is the industry’s first production implementation of Google's open-source Coral NPU ML core. The NPU's design is transformer-capable and supports dynamic operators, enabling developers to build future-ready Edge AI systems for consumer and industrial IoT. Synaptics is the first to adapt Coral for scalable silicon.
This partnership also supports our commitment to a unified developer experience. The Synaptics Torq™ Edge AI platform is built on an open-source compiler and runtime based on IREE/MLIR. This collaboration is a significant step toward building a shared, open standard for intelligent, context-aware devices.
New SoC product lines and the AI-native processors of the Synaptics Astra series deliver significant competitive advantages in overall performance-per-watt for Edge AI IoT workloads.
Edge AI moving forward
All of the elements necessary to energize the Edge AI market are now in place. Processors that combine standard logic, DSP, and NPU provide a powerful platform for embedded Edge applications and a variety of AI/ML workloads, including multimodal sensing. Models sized for Edge applications exist. Open-source technologies and tools are available. Add scalable silicon architecture, which lets Edge AI adapt and evolve, and the opportunity for AI/ML in everything from the internet of things (IoT) and the industrial IoT (IIoT) to smart buildings, smart cities, and smart homes is clear.

