SNN News: BrainChip Unveils Akida Architecture

Armed with approximately 1.2 million neurons and 10 billion synapses, the Akida NSoC spiking-neural-network chip takes on training and inference tasks.

BrainChip has revealed the architecture for its Akida Neuromorphic System-on-Chip (NSoC), which is based on spiking-neural-network (SNN) technology. The self-contained processing system, designed for embedded applications, could also act as a coprocessor. Multiple chips can be combined to handle larger SNNs as well as multiple SNNs.

Spiking neural networks are an alternative to the convolutional neural networks (CNNs) that have become very popular in recent years (Fig. 1). SNNs differ from CNNs in how they operate, how they are trained, and how they perform inference. At a high level, though, they are similar: a trained inference engine identifies inputs and produces outputs that reflect the characteristics configured in the network model. Outputs are probabilities, but a high probability is a good indication of a correct identification.

1. Spiking neural networks offer an alternative to convolutional neural networks that have become very popular.

CNNs and SNNs have an input layer and an output layer, with one or more hidden layers in between. A CNN accepts inputs, and the data flows through the network, modified by the weights associated with the neurons in each layer. The weights are determined by training the model, which can take considerable time and require a large number of samples.
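
To make that layered data flow concrete, here is a minimal sketch of a fully connected forward pass in Python. The layer sizes and weights are purely illustrative (random, untrained), not anything specific to Akida; the point is only how data passes through weighted layers and emerges as class probabilities.

```python
import numpy as np

def forward(x, weights):
    """Pass an input vector through successive layers.

    Each layer multiplies by its weight matrix and applies a ReLU
    nonlinearity; the final layer's output is softmaxed into
    class probabilities.
    """
    for w in weights[:-1]:
        x = np.maximum(0.0, w @ x)          # hidden layers: weighted sum + ReLU
    logits = weights[-1] @ x                # output layer
    e = np.exp(logits - logits.max())       # numerically stable softmax
    return e / e.sum()                      # probabilities summing to 1

rng = np.random.default_rng(0)
layers = [rng.standard_normal((16, 8)),     # 8 input features -> 16 units
          rng.standard_normal((16, 16)),    # one hidden layer
          rng.standard_normal((10, 16))]    # 16 units -> 10 output classes
probs = forward(rng.standard_normal(8), layers)
print(probs.sum())                          # probabilities sum to 1.0
```

In a trained network the weight matrices would come from the training process rather than a random generator, but the inference-time data flow is the same.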

SNNs translate data into streams of spikes that likewise flow through the neural network. These are discrete events rather than the CNN's arrays of values; differential equations define how the spikes behave over time. One requirement of an SNN is translating input data into spike streams, and these data-to-spike converters can be implemented in hardware or software.
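
One common data-to-spike scheme is rate coding, where an input's magnitude sets how often a neuron fires. The sketch below is generic rate coding for illustration, not necessarily what Akida's hardware converters implement:

```python
import numpy as np

def rate_encode(values, n_steps, rng):
    """Convert an array of intensities in [0, 1] into binary spike trains.

    At each time step, a neuron fires with probability equal to its
    input intensity, so stronger inputs produce denser spike streams.
    """
    values = np.clip(values, 0.0, 1.0)
    return (rng.random((n_steps,) + values.shape) < values).astype(np.uint8)

rng = np.random.default_rng(42)
pixels = np.array([0.0, 0.25, 0.9])         # three example pixel intensities
spikes = rate_encode(pixels, n_steps=1000, rng=rng)
print(spikes.mean(axis=0))                  # observed firing rates track the inputs
```

The downstream network then sees only these discrete events, which is what makes sparse, event-driven processing possible.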

SNNs also require training, but the overhead is significantly lower, allowing for in-field training that would be impractical with a CNN. SNNs also demand less computational muscle than CNNs, which helps reduce power consumption and performance requirements.

Closer Look at the NSoC

BrainChip’s Akida NSoC (Fig. 2) includes a conventional processor, enabling the system to be used as a standalone device. The processor handles peripherals and communications, while the rest of the chip is dedicated to SNN support. Communication interfaces include PCI Express, USB 3.0, Ethernet, CAN, and serial ports. The Akida NSoC provides approximately 1.2 million neurons and 10 billion synapses, and the chip can handle training as well as inference chores.

2. BrainChip’s Akida NSoC is a self-contained chip with a conventional processor plus the Akida neuron fabric.

The system supports a range of sensor inputs, including analog, digital, audio, pixel, and dynamic-vision sensors (DVS). The latter is a camera that sends only the changes that occur in a frame rather than the entire frame, which is handy for identifying objects and gestures. The NSoC’s DMA engine feeds these inputs to the data-to-spike converters (DSCs), which in turn deliver streams of spikes to the SNN models in the neural-network fabric. A number of common DSCs are built into the hardware, including support for graphical pixel data, audio data, and DVS data.
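
A toy model shows why a DVS stream is so sparse: only pixels whose brightness changes beyond a threshold generate events, each tagged with a polarity. The frame size and threshold here are illustrative, not taken from any DVS datasheet:

```python
import numpy as np

def dvs_events(prev, curr, threshold=0.1):
    """Return (row, col, polarity) events where brightness changed.

    Polarity is +1 for brightening, -1 for dimming; unchanged pixels
    produce no events at all, which is what keeps the data rate low.
    """
    diff = curr - prev
    rows, cols = np.nonzero(np.abs(diff) > threshold)
    return [(int(r), int(c), 1 if diff[r, c] > 0 else -1)
            for r, c in zip(rows, cols)]

prev = np.zeros((4, 4))                     # previous frame: all dark
curr = np.zeros((4, 4))
curr[1, 2] = 0.5                            # exactly one pixel brightened
print(dvs_events(prev, curr))               # a single +1 event at (1, 2)
```

Sixteen pixels produce one event instead of sixteen values, and the events are already spike-like, which is why DVS input pairs naturally with an SNN.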

A high-speed serial chip-to-chip interface can link up to 1,024 chips in a larger network. Data and spikes flow through these connections, so each chip only needs about half a dozen serial links, and no additional switches are required. A unified address space spans the multichip complex.
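
A unified address space implies that a global neuron ID resolves to a particular chip and a local neuron on that chip. The sketch below assumes a simple even split of neurons across chips; BrainChip has not published the actual routing scheme, so the constants and the divmod mapping are illustrative only:

```python
NEURONS_PER_CHIP = 1_200_000    # roughly Akida's stated per-chip neuron count
MAX_CHIPS = 1024                # stated chip-to-chip link limit

def resolve(global_id):
    """Map a global neuron address to a (chip index, local neuron index) pair."""
    chip, local = divmod(global_id, NEURONS_PER_CHIP)
    if chip >= MAX_CHIPS:
        raise ValueError("address beyond a 1,024-chip complex")
    return chip, local

print(resolve(0))               # first neuron on the first chip
print(resolve(2_500_000))       # an address that lands on the third chip
```

Whatever the real encoding, the benefit is the same: software addresses neurons in one flat space, and the serial links deliver spikes to the right chip without an external switch fabric.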

Though pricing and final configurations haven’t been set, the cost per chip is expected to be on the order of $10 in quantity. Likewise, the power requirements are on the order of a conventional SoC’s, allowing SNNs to operate at the edge in an industrial Internet of Things (IIoT) node or as standalone systems.

BrainChip focused on a range of design goals when creating its architecture. The fixed neuron model allows for a compact memory footprint, on the order of 6 MB, as well as programmable training and firing thresholds. The neural processor cores, which have been optimized to perform convolutions, are fully connected via the fabric, and a global spike bus connects all of the cores.
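
A programmable firing threshold is easiest to see in a simple leaky integrate-and-fire neuron: the neuron accumulates weighted input spikes, fires when its potential crosses the threshold, and resets. This is a generic textbook LIF sketch, not BrainChip's actual neuron equations; the weight, leak, and thresholds are made-up values.

```python
def lif_run(spikes_in, weight, threshold, leak=0.9):
    """Leaky integrate-and-fire neuron over a binary input spike train.

    The membrane potential leaks each step, integrates weighted input
    spikes, and emits an output spike (then resets) on crossing the
    programmable threshold.
    """
    v, out = 0.0, []
    for s in spikes_in:
        v = v * leak + weight * s           # leak, then integrate the spike
        if v >= threshold:                  # programmable firing threshold
            out.append(1)
            v = 0.0                         # reset after firing
        else:
            out.append(0)
    return out

train = [1, 1, 1, 0, 1, 1, 1, 1]
print(lif_run(train, weight=0.5, threshold=1.2))  # high threshold: fires rarely
print(lif_run(train, weight=0.5, threshold=0.4))  # low threshold: fires often
```

Raising or lowering the threshold changes how selective the neuron is without changing the neuron model itself, which is what makes a fixed-model, programmable-threshold design compact.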

3. The Akida NSoC is moving up the ladder in terms of complexity with 1.2 million neurons and 10 billion synapses in a multichip system.

Neural networks have come a long way, but there’s still lots of room for improvement (Fig. 3). Nevertheless, significant applications have become practical because of them. Neuromorphic computing using SNNs targets applications like vision systems, cybersecurity, and even financial systems. Multiple chips are needed to hit the 1.2-billion-neuron mark, but a single chip is sufficient for many SNN chores, such as video processing.

The chips aren’t available yet; however, testing has been done via simulation. It shows that the chip should fare very well with common datasets like CIFAR-10, a standard benchmark for neural-network hardware and software (Fig. 4). The chip can process the CIFAR-10 model at 6 kframe/s/W.
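
The 6 kframe/s/W number is an efficiency metric, so raw throughput depends on the power budget. A quick sanity check of the arithmetic; the 2-W budget below is an assumed example, since BrainChip has not published a final power figure:

```python
def frames_per_second(kframes_per_s_per_w, watts):
    """Convert an efficiency figure (kframe/s/W) and a power budget
    into raw throughput in frames per second."""
    return kframes_per_s_per_w * 1000 * watts

# Assumed 2-W budget for illustration only.
print(frames_per_second(6, watts=2.0))      # 12000.0 frames/s
```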

4. The Akida does very well with the popular CIFAR-10 dataset, which involves identifying 10 common object classes; the chip uses less power and is significantly less costly than the alternatives.

The chip is supported by BrainChip’s Akida Development Environment, a Python-based platform that can be used to generate models for software platforms like the Akida Execution Engine as well as for the Akida SoC. The system handles both supervised and unsupervised learning.
