
Developing Neuromorphic Devices for TinyML

Nov. 30, 2022
The current generation of neural networks only loosely reflects the actual operation of the brain, which has led neuroscientists to research networks that more closely resemble how the brain operates.

What you’ll learn:

  • A look at the latest generation of neural networks called spike neural networks (SNNs), their operation, and the hardware necessary to run those algorithms.
  • The variety of advantages SNNs have over conventional artificial neural networks.
  • How the 3.0 generation of neural networks will play a huge role in the upcoming era of TinyML, along with a variety of use cases/industries those devices can target.

Neural networks (NNs) have been inspired by the brain. However, using neuroscience terminology (neurons and synapses) to describe neural networks has long been a point of contention for neuroscientists, since the current generation of NNs is poles apart from how the brain operates.

Despite the inspiration, the general structure, neural computations, and learning techniques of the current second generation of neural networks differ vastly from those of the brain. This disconnect troubled neuroscientists enough that they started working on a third generation of networks that more closely resembles the brain, called spiking neural networks (SNNs), along with hardware capable of executing them—namely, neuromorphic architecture.

Spiking Neural Networks

SNNs, a type of artificial neural network (ANN), are more closely inspired by the brain than their second-generation counterpart. A key difference is that SNNs are spatiotemporal NNs, i.e., they consider timing in their operation. SNNs operate on discrete spikes determined by a differential equation representing various biological processes (Fig. 1).

The critical process is firing: once enough spikes arrive at a neuron within a specific time window, its membrane potential reaches the "firing" threshold and the neuron emits a spike of its own. Analogously, the brain consists of roughly 86 billion computational units called neurons that receive input from other neurons via dendrites. Once the inputs exceed a certain threshold, the neuron fires and sends an electric pulse via a synapse; the synaptic weight controls the strength of the pulse passed to the next neuron.
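The threshold-and-fire behavior described above can be sketched with a leaky integrate-and-fire (LIF) model, one of the most common spiking-neuron abstractions. The parameter values below (time constant, threshold, step size) are illustrative choices, not values from any particular neuromorphic device:

```python
# Minimal leaky integrate-and-fire (LIF) neuron sketch.
# Parameters are illustrative, not taken from real hardware.

def simulate_lif(input_current, tau=10.0, v_thresh=1.0, v_reset=0.0, dt=1.0):
    """Integrate dV/dt = -V/tau + I; emit a spike when V crosses v_thresh."""
    v = 0.0
    spikes = []
    for t, i_in in enumerate(input_current):
        v += dt * (-v / tau + i_in)   # Euler step of the membrane equation
        if v >= v_thresh:             # firing threshold reached
            spikes.append(t)          # record the spike time
            v = v_reset               # reset the membrane potential
    return spikes

# A constant input drives periodic firing; the spike *times* carry
# the information, not a continuous activation value.
print(simulate_lif([0.15] * 50))
```

With no input, the membrane potential simply leaks toward zero and the neuron never fires, which is the source of the event-driven efficiency discussed later in the article.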

Unlike other artificial neural networks, SNN neurons fire asynchronously across the layers of the network, with spikes arriving at different times. In traditional NNs, information propagates through the layers in lockstep, dictated by the system's clock.

The spatiotemporal property of SNNs, along with the discontinuous nature of spikes, means that the models can be more sparsely distributed. Neurons connect only to relevant neurons and use time as a variable, allowing information to be encoded more densely than with the traditional binary encoding of ANNs (Fig. 2). This makes SNNs both more powerful computationally and more efficient.

The asynchronous behavior of SNNs, along with the need to execute differential equations, is computationally demanding on traditional hardware, so a new architecture had to be developed. This led to the creation of neuromorphic architecture.

Neuromorphic Architecture

Neuromorphic architecture, a non-von-Neumann architecture inspired by the brain, is composed of neurons and synapses. In neuromorphic computers, the processing and storing of data occur in the same region. This alleviates the von Neumann bottleneck, which limits the maximum throughput achievable with traditional architectures due to the need to move data between memory and processing units at relatively slow rates. In addition, neuromorphic architecture natively supports SNNs and accepts spikes as inputs, allowing information to be encoded in the spikes' time of arrival, magnitude, and shape.

Thus, the key features of neuromorphic devices include inherent scalability, event-driven computation, and stochasticity: neuron firing can have a degree of randomness. Neuromorphic architecture is also attractive for its ultra-low-power operation, typically consuming orders of magnitude less power than traditional computing systems.

Neuromorphic Market Forecast

Technologically, neuromorphic devices could play a big role in the coming age of edge and endpoint AI. According to a report by Sheer Analytics & Insights, the worldwide market for neuromorphic computing will reach $780 million by 2028, growing at a compound annual growth rate (CAGR) of 50.3%.1 Mordor Intelligence, on the other hand, expects the market to reach $366 million by 2026, with a CAGR of 47.4%.2 Additional market research reports projecting similar numbers can be found online.

While the forecast numbers aren’t entirely consistent with each other, one thing is clear: The demand for neuromorphic devices is expected to drastically increase in the coming years. Market research companies expect various industries, such as industrial, automotive, mobile, and medical, to adopt neuromorphic devices for a range of applications.

Neuromorphic TinyML

TinyML (tiny machine learning) is all about executing ML and NNs on tightly memory- and processor-constrained devices such as microcontrollers (MCUs). As a result, it's a natural step to incorporate a neuromorphic core for TinyML use cases, due to several distinct advantages.

Neuromorphic devices are event-based processors operating on non-zero events. Event-based convolution and dot products are significantly less computationally expensive since zeroes aren’t processed.

Event-based convolution performance improves further as the number of zeroes in the filter channels or kernels grows. This, along with activation functions such as ReLU outputting zero for all negative inputs, gives event-based processors their inherent activation sparsity, thus reducing effective multiply-accumulate (MAC) requirements.
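The zero-skipping idea can be sketched as an event-driven dot product, where only non-zero inputs (events) trigger multiply-accumulate work. The event-list format below is an illustrative assumption, not how any particular device represents spikes:

```python
# Illustrative sketch of event-driven accumulation: only non-zero
# (spiking) inputs cost a multiply-accumulate, so sparsity directly
# reduces the MAC count.

def event_driven_dot(events, weights):
    """events: list of (index, value) pairs for non-zero inputs only."""
    acc = 0.0
    macs = 0
    for idx, val in events:       # zeroes never appear, so they cost nothing
        acc += val * weights[idx]
        macs += 1
    return acc, macs

weights = [0.5, -0.2, 0.1, 0.7]
dense_input = [0.0, 1.0, 0.0, 1.0]   # 50% sparse activation
events = [(i, v) for i, v in enumerate(dense_input) if v != 0.0]
print(event_driven_dot(events, weights))   # 2 MACs instead of 4
```

A dense dot product would always perform one MAC per weight; here the work scales with the number of events, which is why sparser activations translate directly into lower compute and power.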

Furthermore, because neuromorphic devices process spikes, more aggressive quantization can be used, such as 1-, 2-, and 4-bit quantization, versus the conventional 8-bit quantization of ANNs. Moreover, because SNNs are implemented directly in hardware, neuromorphic devices (such as Akida from Brainchip) have the unique capability of on-edge learning.

That's not possible with conventional devices, which merely simulate a neural network on a von Neumann architecture, making on-edge learning computationally expensive, with large memory overheads in a TinyML system's budget. In addition, 8-bit integers don't provide enough range to train a model accurately, so training with 8 bits isn't currently feasible on traditional architectures.

For traditional architectures, a few on-edge learning implementations with machine-learning algorithms (autoencoders, decision trees) have reached a production stage for simple real-time analytics use cases, whereas NNs are still under research.
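The low-bit quantization mentioned above can be sketched as a symmetric uniform quantizer mapping float weights to 4-bit signed integers (codes in -8..7). The scheme is illustrative only and doesn't represent any particular vendor's toolchain:

```python
# Hedged sketch of low-bit weight quantization: symmetric uniform
# mapping of float weights to 4-bit signed integer codes. Real
# neuromorphic toolchains use their own quantization schemes.

def quantize(weights, bits=4):
    """Return integer codes plus the scale needed to dequantize."""
    q_max = 2 ** (bits - 1) - 1                    # 7 for 4-bit signed
    scale = max(abs(w) for w in weights) / q_max   # fit the largest weight
    codes = [round(w / scale) for w in weights]
    return codes, scale

def dequantize(codes, scale):
    """Recover approximate float weights from the integer codes."""
    return [c * scale for c in codes]

w = [0.70, -0.30, 0.10, -0.02]
codes, scale = quantize(w, bits=4)
print(codes)                     # each weight now fits in 4 bits, not 32
print(dequantize(codes, scale))  # reconstruction within half a step
```

Dropping from 8-bit to 4-bit codes halves parameter memory again, which is where the lower parameter-memory usage in the summary below comes from.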

To summarize, the advantages of using neuromorphic devices and SNNs at the endpoint include:

  • Ultra-low power consumption (millijoule to microjoule per inference)
  • Lower MAC requirements as compared to conventional NNs
  • Lower parameter memory usage as compared to conventional NNs
  • On-edge learning capabilities

Neuromorphic TinyML Use Cases

Microcontrollers with neuromorphic cores can excel in use cases throughout the industry (Fig. 3), thanks to their distinct capability of on-edge learning. Examples include:

  • In anomaly-detection applications for existing industrial equipment, using the cloud to train a model is inefficient. Adding an endpoint AI device on the motor and training on the edge would allow for ease of scalability, as equipment aging tends to differ from machine to machine even if they’re the same model.
  • In robotics, the joints of robotic arms tend to wear down over time, falling out of tune and no longer operating as needed. Re-tuning the controller on the edge without human intervention eliminates the need to call in a professional, reducing downtime and saving time and money.
  • In face-recognition applications, a user would normally have to add their face to the dataset and retrain the model in the cloud. With on-edge learning, the neuromorphic device can instead learn to identify the end user from just a few snapshots of their face. Thus, the user's data stays secure on the device, and the experience is more seamless. This can be employed in cars, where different users have different preferences for seat position, climate control, etc.
  • In keyword-spotting applications, extra words can be added on the edge for the device to recognize. This can be used in biometric applications, where a person adds a "secret word" that's kept secure on the device.

The balance of ultra-low power consumption and enhanced performance makes neuromorphic endpoint devices suitable for prolonged battery-powered applications, executing algorithms that aren't possible on other, computationally constrained low-power devices (Fig. 4). They also can replace higher-end devices that offer similar processing power but are too power-hungry. Use cases include:

  • Smartwatches that monitor and process the data at the endpoint, sending only relevant information to the cloud.
  • Smart camera sensors that detect people to trigger a logical command, such as opening a door automatically when a person approaches, whereas current technology relies on simple proximity sensors.
  • Areas with no connectivity or charging capabilities, such as forests (smart animal tracking) or undersea pipes (monitoring for potential cracks using real-time vibration, vision, and sound data).
  • For infrastructure monitoring use cases, where a neuromorphic MCU can be used to continuously monitor movements, vibrations, and structural changes in bridges (via images) to identify potential failures.

On this front, Renesas has acknowledged the vast potential of neuromorphic devices and SNNs. The company licensed a neuromorphic core from Brainchip,3,4 the world’s first commercial producer of neuromorphic IP.


References

1. "Neuromorphic Computing Market – Industry Analysis, Size, Share, Growth, Trends, and Forecast, 2020-2028," Sheer Analytics & Insights. https://www.sheeranalyticsandinsights.com/market-report-research/neuromorphic-computing-market-21/.

2. "Neuromorphic Chip Market Growth, Forecast (2022-27), Industry Trends," Mordor Intelligence.

3. "BrainChip's Akida Set for Spaceflight via NASA as Renesas Electronics America Signs First IP Agreement."

4. "ARM Battles RISC-V at Renesas," eeNews Europe.

