
11 Myths About Analog Compute

Nov. 17, 2021
In the beginning there was analog. Then digital computing appeared. But analog never went away.

What you'll learn:

  • How digital compute compares with analog compute
  • What strides have been made to make analog a better alternative to digital
  • How deep neural networks come into play

In 1974, Theodore Nelson, the inventor of hypertext, wrote in his book “Computer Lib/Dream Machines” that “analog computers are so unimportant compared to digital computers that we will polish them off in a couple of paragraphs.” This popular attitude toward analog computing hasn’t shifted much in the decades since, despite the incredible advances made in analog computing technology.

The computational speed and power efficiency of analog compared to digital have been promising for a long time. The problem is that developing analog systems has traditionally been beset by a number of hurdles, including the size and cost of analog processors. The explosion of the IoT and the growth of AI applications have rekindled interest in developing new approaches to analog computing to solve some of the challenges associated with increasingly complex workloads.

Edge AI applications need to be low-cost, small-form-factor devices with low latency, high performance, and low power (see figure). It might surprise many people that analog offers a very compelling answer to these challenges. Recent advances in analog technology, combined with the use of non-volatile memory such as flash, have eliminated the traditional hurdles.

What follows are 11 common myths associated with analog computing.

1. Digital compute is better than analog compute.

Digital computing solutions have ushered in the Information Age and transformed what once were room-sized computers into incredibly powerful machines that fit in the palm of our hands. It’s fair to say that for a long time, digital computing solutions were superior to analog solutions for most applications. However, times have changed, and when we look at the needs of the future—one where every device will be equipped with powerful AI at the edge—it’s clear that digital compute won’t be able to keep up. With analog compute, algorithms that typically require a large, power-hungry GPU can run on a small, low-power, cost-effective chip that can be integrated into any device.

2. Moore’s Law will continue scaling.

Today, only a few manufacturers can follow the Moore’s Law trend—down from dozens in the 1990s—because it’s simply cost-prohibitive. Process-node improvements have slowed while manufacturing costs have risen dramatically. Simply put, it’s no longer business as usual with Moore’s Law scaling; new approaches are needed for the next generation of AI processing.

3. Analog systems are too complex to design.

Modern electronic-design-automation (EDA) tools have come a long way in enabling high-speed, high-fidelity simulation of analog circuits. In addition, the ability of analog circuits to automatically calibrate and compensate for errors has progressed by leaps and bounds. This calibration technology allows designers to build analog compute systems modularly, without worrying about how other parts of the system affect the analog circuits.

4. Analog compute is mainly a research effort.

In the 1950s and 1960s, analog computers started to become obsolete for commercial applications, although analog computing was still used in research studies and certain industrial and military applications. Of course, a lot has changed since then. Companies like Mythic are taking analog processors to production, proving that analog is not only viable for commercial applications, but also offers an optimized solution for the computing challenges of AI today and in the future.

5. Analog systems aren’t capable of high performance.

Analog circuits can be incredibly fast, since they don’t rely on signals propagating through chains of digital logic gates or on digital values being pulled out of memory banks. By steering tiny electrical currents through flash-memory arrays, an analog processor can perform massively parallel matrix operations in less than a microsecond.

Such performance makes analog systems ideal for compute-intensive workloads like video-analytics applications that use object detection, classification, and depth estimation. These capabilities are extremely useful for industrial machine vision, autonomous drones, surveillance cameras, and network video recorder (NVR) applications.
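To make the current-steering idea concrete, here’s a minimal Python sketch, using hypothetical conductance and voltage values, of how a flash array performs a matrix-vector product in one analog step: weights are stored as cell conductances, inputs are applied as voltages, and each bit line sums the resulting currents.

```python
import numpy as np

# Hypothetical 4x3 weight matrix stored as programmed cell conductances (siemens).
# Real arrays hold hundreds of rows and columns, all operating simultaneously.
G = np.array([[1.0e-6, 2.0e-6, 0.5e-6],
              [3.0e-6, 1.0e-6, 2.0e-6],
              [0.5e-6, 4.0e-6, 1.0e-6],
              [2.0e-6, 2.0e-6, 3.0e-6]])

# Input activations applied as voltages (volts).
V = np.array([0.2, 0.5, 0.1])

# Ohm's law: each cell passes a current I = G * V (a multiplication).
# Kirchhoff's current law: currents sharing a bit line add up (an accumulation).
# The entire matrix-vector product therefore settles in a single analog step.
I = G @ V

print(I)  # bit-line currents in amps: roughly [1.25e-06 1.3e-06 2.2e-06 1.7e-06]
```

The same physics applies whether the matrix has a dozen rows or tens of thousands, which is where the sub-microsecond, massively parallel behavior comes from.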

6. Analog is power-hungry.

One under-the-radar problem is that digital systems are forced to store neural networks in DRAM, which is an expensive, inconvenient, and power-hungry approach. DRAM consumes lots of power both during active use and during idle periods, so system architects spend a great deal of time and effort to maximize the utilization of the processors.

Another issue with digital systems is that they’re extremely precise, which comes at a huge cost in performance and power, especially when it comes to neural networks. Just think about a system having to read trillions of weights out of a large stack of 3D non-volatile memory just to compute an AI algorithm on demand.

In practice, AI doesn’t need that level of precision. In fact, some analog processors, such as Mythic’s Analog Matrix Processor, which perform analog compute inside of very dense non-volatile memory, are already up to 10X more energy-efficient than digital systems (with the potential to be 100X to 1000X more energy-efficient for certain use cases). They’re also much faster and can pack 8X more information into the memory. One big advantage of analog being more energy-efficient is that it can support extremely high processing densities without the need for advanced cooling or power-supply infrastructure, which is particularly important for industrial and enterprise applications.
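As a rough illustration of that point, the sketch below (with hypothetical weights and activations) compares a full-precision matrix-vector product against one computed with weights quantized to 8 bits, the kind of reduced precision that analog inference relies on; the outputs stay close relative to their overall spread.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(64, 1000)).astype(np.float32)  # hypothetical layer weights
x = rng.normal(size=1000).astype(np.float32)        # hypothetical input activations

# Quantize the weights to 8-bit integers using a single symmetric scale factor.
scale = np.abs(W).max() / 127
W_q = np.round(W / scale).astype(np.int8)

full   = W @ x                                 # full-precision outputs
approx = (W_q.astype(np.float32) @ x) * scale  # outputs computed from 8-bit weights

# The worst-case deviation is a small fraction of the outputs' natural spread,
# which is why inference tolerates reduced precision so well.
print(np.max(np.abs(full - approx)) / np.std(full))
```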

7. Analog chips are expensive to design and manufacture.

There has long been a perception that analog is much more expensive to design and manufacture than digital systems. However, the truth is that it’s becoming increasingly difficult for digital systems to keep up with the increasing costs of manufacturing and mask-set prices, which can reach beyond $100 million for the 1- to 3-nm range. These costs must be amortized, making it harder to achieve improvements in functionality per dollar. For digital systems to keep up with the growing computing demands of the AI industry, everything on the chip would need to realize massive performance, cost, and power improvements.

Analog systems offer a host of performance and power advantages while also being incredibly cost-efficient. That’s because analog compute can achieve high performance and incredible memory density on older process nodes. These nodes have significantly lower mask-set and wafer prices, are mature and stable, and offer far greater manufacturing capacity than bleeding-edge nodes.

8. Analog systems—like digital systems—must store neural networks in DRAM.

One of the most important aspects of hardware is how much memory can be packed into a processor per square millimeter, and how much power that memory draws. For digital systems, the mainstream memories—SRAM and DRAM—tend to consume too much power, take up too much chip area, and aren’t improving fast enough to drive the gains needed for today’s AI era.

Analog systems have the advantage of being able to use non-volatile memory (NVM), which offers impressive densities and solves the power leakage problem. Some analog systems employ flash memory, one of the most common types of NVM, since it has incredible density, is tiny compared to hard-disk drives, and can retain information with no power applied. With analog compute-in-memory, the arithmetic is performed inside NVM cells by manipulating and combining small electrical currents, which happens across the entire memory bank in a fast and low-power manner.
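One wrinkle is that a flash cell’s conductance can’t be negative, so compute-in-memory designs commonly map each signed weight onto a pair of cells and subtract the two bit-line currents. The sketch below (hypothetical values, not any vendor’s specific scheme) shows the idea.

```python
import numpy as np

w = np.array([0.8, -0.3, 0.5, -0.9])   # hypothetical signed weights for one output
x = np.array([0.2, 0.7, 0.1, 0.4])     # input activations applied as voltages

# Store the positive and negative parts of each weight on separate cells,
# both programmed as non-negative conductances.
g_pos = np.maximum(w, 0)
g_neg = np.maximum(-w, 0)

# Each group of cells sums its currents on a shared bit line;
# subtracting the two bit-line currents recovers the signed dot product.
i_pos = g_pos @ x
i_neg = g_neg @ x
result = i_pos - i_neg

print(result, w @ x)   # both give the same signed result (-0.36)
```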

9. Analog can’t run complex deep neural networks.

Conventional digital processing systems support complex deep neural networks (DNNs). The problem is that these platforms take up considerable silicon real estate, require DRAM, and consume lots of energy, which is why many AI applications offload most of the deep-learning work to remote cloud servers. For systems that require real-time processing for DNNs, the data must be processed locally.

When analog compute is combined with flash technology, processors can run multiple large, complex DNNs on-chip. This eliminates the need for DRAM chips and enables incredibly dense weight storage inside a single-chip accelerator. Processors can further maximize inference performance by having many of the compute-in-memory elements operate in parallel. With the growing demand for real-time processing, this type of on-chip execution of complex DNN models will become increasingly critical.
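A simple way to picture that parallelism: a layer’s weight matrix is carved up across several compute-in-memory tiles, each of which holds its slice permanently in non-volatile memory and produces its share of the outputs at the same time. The sketch below (hypothetical sizes and tile count) shows the partitioning.

```python
import numpy as np

rng = np.random.default_rng(1)
W = rng.normal(size=(512, 1024))   # hypothetical layer weights
x = rng.normal(size=1024)          # input activations

# Split the weight matrix across several compute-in-memory tiles. Each tile
# keeps its slice resident in non-volatile memory, so no weights are ever
# fetched from DRAM during inference.
num_tiles = 4
tiles = np.array_split(W, num_tiles, axis=0)

# Each tile computes its partial result independently; on hardware the tiles
# run in parallel and the outputs are simply concatenated.
partials = [tile @ x for tile in tiles]
y = np.concatenate(partials)

assert np.allclose(y, W @ x)       # identical to the monolithic product
```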

10. Analog systems aren’t as compact as digital systems.

It’s true that analog systems have traditionally been far too big. However, new approaches make it possible to design incredibly compact systems. One reason is the high density of flash: by combining analog compute with flash memory, a single flash transistor can serve as a storage medium, a multiplier, and an adder (accumulator) circuit.
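A tiny sketch (hypothetical numbers) of those three roles played by a single column of cells:

```python
# Storage: each weight is held as a programmed conductance (siemens).
g = [2e-6, 1e-6, 3e-6]

# Inputs arrive as voltages (volts).
v = [0.3, 0.6, 0.1]

# Multiply: Ohm's law in each cell, I = G * V.
currents = [gi * vi for gi, vi in zip(g, v)]

# Accumulate: the currents add on the shared bit line.
column_current = sum(currents)

print(column_current)   # about 1.5e-06 amps -- a dot product with no separate MAC circuit
```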

11. Analog systems aren’t resilient to changing environmental conditions.

One strength of digital is its wide tolerance for changing environmental conditions, such as temperature swings and fluctuating supply voltages. In analog systems of the past, tiny variations in voltage could introduce errors into the processed signal.

However, modern approaches give analog the same resilience to varying environmental conditions, and deliver it at scale. Most modern analog circuits are software-controlled and use a bevy of compensation and calibration techniques. As a result, they can be manufactured in modern digital processes that sometimes exhibit a high degree of variation. These techniques also compensate for changing temperatures and voltages, enabling the modern high-speed analog circuits that power critical functions in all of our electronic devices.
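As a simplified example of the kind of software-controlled compensation described above (not any particular vendor’s method), a two-point calibration can estimate an analog block’s gain and offset drift against known reference inputs and correct subsequent readings:

```python
def two_point_calibration(measure, ref_low, ref_high):
    """Estimate gain and offset from two known reference inputs and return
    a function that corrects subsequent readings (simplified illustration)."""
    m_low, m_high = measure(ref_low), measure(ref_high)
    gain = (m_high - m_low) / (ref_high - ref_low)
    offset = m_low - gain * ref_low
    return lambda reading: (reading - offset) / gain

# Hypothetical analog block whose gain and offset drift with temperature.
def drifting_measure(x, gain=1.04, offset=0.02):
    return gain * x + offset

correct = two_point_calibration(drifting_measure, 0.0, 1.0)
print(correct(drifting_measure(0.5)))   # ~0.5 after correction, despite the drift
```

Run periodically, this kind of routine keeps the analog path accurate as temperature and supply conditions change.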
