
Advancing Intelligence at the Edge with AI Vision Processors

June 5, 2023
Sponsored by Texas Instruments. A neural network has an extensive set of parameters that are trained using a set of input images, so that the network "learns" the rules it will use to perform tasks like object detection or facial recognition on future images.


This year is giving every indication of becoming a watershed period in the development of AI-based vision processing. If things happen as expected, the results could be as big as, or bigger than, consumer PCs in the 1970s, the web in the 1990s, and smartphones in the 2000s. The artificial-intelligence (AI) vision market is expected to be valued at $17.2 billion in 2023, growing at a CAGR of 21.5% from 2023 to 2028 (Source: MarketsandMarkets).

The question is not whether it will happen, but how we want to do it. How do we want to develop vision-based AI for collision avoidance, hazard detection, route planning, and warehouse and factory efficiency, to name just a few use cases?

We know a surveillance camera can be smarter with edge-AI functionality. By smarter, we mean able to identify objects and respond accordingly in real time.

Traditional vision analytics uses predefined rules to solve tasks such as object detection, facial recognition, or red-eye detection. Deep learning employs neural networks to process the images. A neural network has a set of parameters that are trained using input images so that the network "learns" the rules, which are then applied to perform tasks like object detection or facial recognition on future images.
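The distinction can be made concrete with a toy sketch (plain Python, not TI code): instead of hand-coding a detection rule, a tiny perceptron learns its weight and bias parameters from labeled examples, here two-pixel "brightness patches" standing in for images. All names and data are illustrative.

```python
# Toy perceptron: the classification rule is *learned* from labeled
# examples rather than hand-coded, mirroring how a deep network's
# parameters are trained on input images.

def train(samples, labels, epochs=20, lr=0.1):
    """Learn weights and bias from (input, label) pairs."""
    w = [0.0] * len(samples[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = y - pred                     # simple perceptron update
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

def predict(w, b, x):
    """Apply the learned rule to a new input."""
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

# "Bright" patches are class 1, "dark" patches class 0.
X = [[0.9, 0.8], [0.8, 0.9], [0.1, 0.2], [0.2, 0.1]]
y = [1, 1, 0, 0]
w, b = train(X, y)
```

A real vision network has millions of parameters rather than three, but the training-then-inference split is the same: the learned rule is then applied to future images.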

AI at the edge happens when AI algorithms are processed on local devices instead of in the cloud and where deep neural networks (DNNs) are the main algorithm component. Edge AI applications require high-speed and low-power processing, along with advanced integration unique to the application and its tasks.
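Why local processing matters can be seen in a simple latency budget. The numbers below are illustrative assumptions, not TI figures: a 30-fps camera allows roughly 33 ms per frame, and a cloud round trip alone can exceed that, while on-device inference fits comfortably inside it.

```python
# Illustrative real-time budget (all timings are assumptions for
# illustration): at 30 fps there are ~33.3 ms between frames.

FRAME_BUDGET_MS = 1000 / 30  # frame period at 30 fps

def fits_realtime(inference_ms, transport_ms=0.0):
    """True if inference plus any network transport fits one frame period."""
    return inference_ms + transport_ms <= FRAME_BUDGET_MS

edge_ok = fits_realtime(inference_ms=12)                     # local accelerator
cloud_ok = fits_realtime(inference_ms=12, transport_ms=80)   # WAN round trip
```

With these assumed timings the edge path keeps up with the camera and the cloud path does not, which is the case for running DNN inference on the device itself.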

TI’s vision processors make it possible to execute facial recognition, object detection, pose estimation, and other AI features in real time using the same software. With scalable performance for up to 12 cameras, you can build smart security cameras, autonomous mobile robots, and everything in between (Fig. 1).
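One common way to serve several camera streams with one inference engine is to interleave their frames round-robin into a shared queue. The sketch below is a hypothetical illustration in plain Python, not TI's scheduler; all names are invented.

```python
from collections import deque

# Hypothetical sketch: interleave frames from several camera streams
# round-robin, so a single shared accelerator serves them all in turn.

def interleave(streams):
    """Yield (camera_id, frame) round-robin across all streams."""
    pending = deque((cam, iter(frames)) for cam, frames in streams.items())
    while pending:
        cam, it = pending.popleft()
        try:
            yield cam, next(it)
            pending.append((cam, it))   # stream still has frames: requeue
        except StopIteration:
            pass                        # stream exhausted: drop it

streams = {f"cam{i}": [f"frame{i}_{n}" for n in range(2)] for i in range(3)}
order = list(interleave(streams))
```

Each camera gets an equal share of the engine, which is the scheduling behavior you want when one device scales from one camera up to 12.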

The scalable and efficient family of vision processors enables higher system performance with hardware accelerators and faster development with hardware-agnostic programming for vision and multimedia analytics.

Deep-Learning Accelerator

TI’s AM6xA family pairs Arm Cortex-A MPUs with dedicated hardware accelerators that offload computationally intensive tasks such as deep-learning inference, imaging, vision, video, and graphics processing. Deep-learning computations are handled by the Matrix Multiply Accelerator (MMA), which, together with TI's C7x digital signal processor, performs efficient tensor, vector, and scalar processing. The accelerator is self-contained and doesn’t depend on the host Arm CPU.
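The core operation such an accelerator speeds up is the multiply-accumulate (MAC) at the heart of matrix multiplication. A plain-Python sketch of what the hardware computes (this is the mathematical operation only, not how the MMA is programmed):

```python
# What a matrix-multiply accelerator computes, in plain Python: each
# output element C[i][j] is a chain of multiply-accumulate (MAC)
# operations. Hardware like the MMA performs many MACs per clock cycle.

def matmul(A, B):
    rows, inner, cols = len(A), len(B), len(B[0])
    C = [[0] * cols for _ in range(rows)]
    for i in range(rows):
        for j in range(cols):
            acc = 0
            for k in range(inner):
                acc += A[i][k] * B[k][j]   # one MAC operation
            C[i][j] = acc
    return C

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
C = matmul(A, B)   # [[19, 22], [43, 50]]
```

A deep-learning layer is largely a stack of such matrix products, which is why a dedicated MAC array pays off for inference.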

Each edge AI device in the processor family, such as the AM62A, AM68A, etc. (the trailing A denotes the AI-accelerated series), has a different version of the C7xMMA deep-learning accelerator.

For instance, the AM68A (up to eight cameras) and AM69A (up to 12 cameras) use a 256-bit variant of the C7xMMA that can compute 1,024 MAC operations per cycle, for a maximum of 2 TOPS. Training a deep-learning model typically requires a very high teraoperations-per-second processing engine, but most edge AI inference applications require performance in the range of 2 to 8 TOPS.
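The TOPS figure follows from simple arithmetic: each MAC counts as two operations (a multiply and an add), so 1,024 MACs per cycle at a 1-GHz clock gives about 2 TOPS. The 1-GHz clock is an assumption for illustration; actual clock rates vary by device.

```python
# TOPS from MACs per cycle: each MAC = 2 ops (multiply + accumulate).
# The 1-GHz clock rate is an illustrative assumption, not a TI spec.

def tops(macs_per_cycle, clock_hz):
    """Peak tera-operations per second for a MAC array."""
    return macs_per_cycle * 2 * clock_hz / 1e12

peak = tops(macs_per_cycle=1024, clock_hz=1.0e9)   # ~2 TOPS
```

Halving the MAC-array width or the clock halves the peak, which is how a single accelerator architecture scales across the family.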

The AM62A supports one or two cameras and can extend to four. It's designed to operate at 2 to 3 W in a form factor small enough for power-efficient, battery-operated applications, and it can handle up to 5-Mpixel cameras. That's more than enough for indoor uses ranging from video doorbells to smart retail applications (Fig. 2).

The AM6xA edge AI software architecture lets developers write applications entirely in Python or C++. There’s no need to learn a special language to take advantage of the performance and energy efficiency of the deep-learning accelerator.

The SK-AM62A-LP starter kit (SK) evaluation module (EVM) is built around the AM62A AI vision processor, which includes an image signal processor (ISP) supporting up to 5 Mpixels at 60 fps, a 2-TOPS AI accelerator, a quad-core 64-bit Arm Cortex-A53 microprocessor, a single-core Arm Cortex-R5F, and H.264/H.265 video encode/decode. Similarly, the SK-AM68 starter kit/EVM is based on the AM68x vision SoC.

Easier Design Using Package Videos and ModelZoo

TI's vision AI processors, with accelerated deep learning, vision and video processing, purpose-built system integration, and advanced component integration, enable commercially viable edge AI systems optimized for performance, power, size, weight, and system cost. Their heterogeneous architecture and scalable AI execution also simplify the design of efficient edge AI systems.

Package videos are supplied along with the camera options discussed in this article; choose one to view the performance of the various models.

In addition, TI continues to extend its “ModelZoo” to support the latest AI models on its embedded processors. ModelZoo is a large collection of pre-compiled models trained on industry-standard datasets and optimized for inference speed and low power consumption. TI's accompanying runtime libraries can be used both for compiling deep-learning models and for deploying them to TI’s edge AI SoCs.
