11 Myths About Artificial Intelligence and the Edge


March 6, 2019
AI at the edge is growing rapidly. Don’t let these misguided notions get in your way.


1. AI is science fiction.

True, AI began as a sci-fi fantasy popularized by visionary writers, but AI is here and now. There are many current applications, depending on how you define artificial intelligence; once a complex AI problem has been solved, though, the solution quickly seems obvious and therefore less “intelligent.” In the U.S., one of the first examples of AI at the edge was the handwriting recognition used to read checks.

The smart home is full of AI at the edge, with devices that learn behavior patterns: ovens that pre-heat when you leave work; thermostats that save money by not heating an empty house; and lights that learn preferences based on the activities taking place in a room.

2. AI must be edge-device-driven or cloud-based.

Not so fast. It turns out there are a lot of cool hybrid implementations that combine the two approaches. Whether an AI implementation runs at the edge or in the cloud is often governed by considerations of bandwidth, processing cost, privacy, and regulation. Take, for example, front-door security monitoring. Streaming a 24/7 live feed from a camera to the cloud is wasteful and expensive when nothing is happening. But if significant activity is detected by edge-based AI, cloud services can then be activated to identify the caller or determine the action required.
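
As a rough illustration of that hybrid pattern, the sketch below gates a cloud identification call behind a local detector. The function names, endpoint URL, and camera interface are hypothetical placeholders, not any particular vendor's API.

```python
# Hypothetical sketch of a hybrid edge/cloud doorbell camera. The names
# (run_local_detector, CLOUD_ENDPOINT, camera.read) are placeholders; a real
# system would use the camera vendor's own SDK and cloud service.
import time
import requests  # assumes the cloud service exposes a simple HTTP API

CLOUD_ENDPOINT = "https://example.com/identify"  # placeholder URL

def run_local_detector(frame) -> bool:
    """Stand-in for the on-device AI model; returns True when a person is seen."""
    return False  # replace with a call to the edge inference engine

def monitor(camera):
    while True:
        frame = camera.read()  # raw image bytes from the camera
        # Every frame is screened locally; nothing is streamed off the device...
        if run_local_detector(frame):
            # ...until something interesting happens. Only then is the cloud
            # asked to do the heavier job of identifying the visitor.
            requests.post(CLOUD_ENDPOINT, files={"image": frame})
        time.sleep(0.1)  # a few frames per second is plenty for a doorbell
```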

3. AI at the edge must be fast.

No doubt, this is sometimes true. For example, if AI is being used to control an autonomous vehicle, it must be super-fast. At 55 mph, a vehicle travels over 80 feet in one second, so the AI must refresh in tens of milliseconds; the latency of a cloud-based system isn’t acceptable. But as mentioned above, there are plenty of reasons to process at the edge, and in many cases a few seconds of latency is more than adequate. System speed requirements are set by the application, not by where the AI runs.
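
The arithmetic behind that 80-foot figure is easy to check. This minimal calculation (not tied to any particular system) shows how far a vehicle travels for a few representative latencies.

```python
# Distance covered during one AI "refresh" interval at highway speed.
speed_mph = 55
speed_ft_per_s = speed_mph * 5280 / 3600   # ~80.7 ft/s
for latency_ms in (10, 100, 1000):
    distance_ft = speed_ft_per_s * latency_ms / 1000
    print(f"{latency_ms:>5} ms of latency -> {distance_ft:.1f} ft traveled")
```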

4. Humans will always beat the best AI.

Much as our egos would love us to believe this, it’s not true. Humans are fantastically adaptable, quick, and intuitive learners. But in some cases, such as identifying tumors on a scan, AI-based systems have proven to be more reliable. It’s been over 20 years since Deep Blue beat the then-world chess champion Garry Kasparov. And very recently, AI researchers in Singapore managed to teach industrial robots to do a task that’s beyond many of us: assembling flat-pack furniture.

5. AI is a threat to our privacy.

Actually, many view AI as a shield for our privacy, because it avoids the need for humans to interact with sensitive data and images. Advocates of edge AI are excited that this technology can avoid the needless streaming of sensitive audio and image data to the cloud. Over time, this may also make it easier to comply with new privacy regulations such as Europe’s GDPR (General Data Protection Regulation).

6. High-speed hardware and cloud technologies are required by all AI systems.

False. AI training systems definitely require very fast, data-center-based processing using the latest GPUs or other acceleration hardware. But AI inferencing systems can be deployed on lower-cost hardware at the edge, often without cloud connectivity, for reasons of bandwidth, cost, privacy, and regulation.
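
To make that training/inference split concrete, here is one possible flow, sketched with TensorFlow Lite: define and train a network with full TensorFlow, then export a compact, quantized artifact for a modest edge device. The tiny Keras model is a stand-in for a real trained network, and TFLite is only one example of such an export path, not something the article prescribes.

```python
# Sketch: heavy lifting in the data center, lean artifact for the edge.
import tensorflow as tf

# Placeholder model; a real design would be trained on GPUs/accelerators here.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(224, 224, 3)),
    tf.keras.layers.Conv2D(8, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(2, activation="softmax"),
])

# Convert to a quantized TensorFlow Lite flatbuffer for edge deployment.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # post-training quantization
tflite_model = converter.convert()

with open("edge_model.tflite", "wb") as f:
    f.write(tflite_model)
```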

7. AI at the edge is expensive.

Certainly, expensive is relative, but embedded vision systems that implement AI are now being developed using cost-effective FPGA chips suitable for production volumes in the millions. It’s possible to implement meaningful AI functions for less than half of the cost of a cup of coffee in a major city.

8. AI is too power-hungry to deploy at the edge.

There are two reasons for this misconception. First, people conflate training and inferencing. There’s no way around it: with current approaches, training is very compute-intensive, with power requirements that make it difficult, if not impossible, to implement at the edge. Second, many early implementations of AI used CPUs and GPUs with limited parallelism, which had to run at high clock speeds (and hence at high power) to achieve acceptable performance for many applications.

However, massively parallel implementations like those possible in ASICs or FPGAs provide power levels that are well-suited for edge applications. Recently, FPGA implementations of functions such as face detection and key-word detection have been demonstrated with power consumption below 1 mW.

9. Systems using AI at the edge are complex to design.

This was true five years ago, when researchers started using convolutional neural networks (CNNs) for image processing; implementing AI was not for the faint of heart. Today, however, tools such as TensorFlow and Caffe make it easy to design and train networks, even more so because many researchers have released example networks that can be used as the starting point for a design. This has been complemented by a number of embedded hardware suppliers providing compilers that let developers implement networks on hardware suitable for edge applications. It’s now possible to go from concept to implementation within one or two weeks.
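
For a sense of how low the barrier has become, here is a minimal TensorFlow/Keras sketch of the "design and train" step described above. The dataset (MNIST) and hyperparameters are placeholders, and the final mapping onto edge hardware would be done by a vendor's compiler rather than by this snippet.

```python
# Minimal sketch of defining and training a small CNN with TensorFlow/Keras.
import tensorflow as tf

# MNIST stands in for whatever the real application data would be.
(x_train, y_train), _ = tf.keras.datasets.mnist.load_data()
x_train = x_train[..., None] / 255.0  # add a channel dimension, scale to [0, 1]

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(28, 28, 1)),
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x_train, y_train, epochs=1, batch_size=64)
# An embedded-hardware vendor's compiler would then map the trained network
# onto the target edge device (e.g., an FPGA).
```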

10. Edge computing can be a server.

It’s true that, for many people, edge computing means an industrialized server sitting in their factory processing data. This approach definitely has many advantages relative to processing data in the cloud. However, moving processing all the way to the sensor further reduces data traffic, minimizes the requirements for upstream servers, and reduces latency.

11. Edge AI operates on high-resolution images.

Some newcomers to the field assume that AI algorithms require high-resolution images for good performance. However, this usually isn’t the case. Many of the latest AI algorithms work on images that are 224 × 224 or 448 × 448 pixels in size, and many practitioners have demonstrated useful capabilities at much lower resolutions. For example, one company recently demonstrated face-detection systems designed using 32 × 32-pixel images.
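
As a quick illustration of how little pixel data those resolutions actually involve, the sketch below downsamples a frame with the Pillow library; the input file name is a placeholder.

```python
# Down-sample an image to the resolutions edge AI models commonly use.
from PIL import Image

frame = Image.open("camera_frame.jpg")  # placeholder file name
for size in (224, 32):
    small = frame.resize((size, size))
    pixels = small.size[0] * small.size[1]
    print(f"{small.size[0]} x {small.size[1]}: {pixels:,} pixels "
          f"({pixels * 3:,} bytes of raw RGB data)")
```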

Gordon Hands is Director of Product Marketing at Lattice Semiconductor.


About the Author

Gordon Hands | Director of Product Marketing

Gordon Hands manages product marketing at Lattice Semiconductor. For over 20 years, he has held a number of marketing positions at Lattice. In these roles, he managed the definition and execution of Lattice’s strategy to enter the mobile consumer space and has been involved in the definition and launch of many of Lattice’s low-power, low-cost FPGAs and CPLDs. Mr. Hands holds a bachelor’s degree in engineering from the University of Birmingham, England, and an MBA from Arizona State University.

