My daughter competed in the Intel International Science and Engineering Fair for three years using custom-built robots that incorporated a Parallax BASIC Stamp and Javelin Stamp along with a VGA camera that did basic object recognition at an 8-by-8 resolution. The reduced resolution was necessary just to achieve a few frames per second of processing. The last project earned a second-place award. This was less than 20 years ago.
Contrast that platform with the JetBot (Fig. 1), an open-source project built on NVIDIA’s new Jetson Nano, a DIMM-style module with the functionality of the Jetson TX1. The Jetson Nano costs only $129 in 1,000-unit quantities, yet it can process up to eight 1080p video streams while running multiple neural networks that perform object recognition in real time. Performance in this space has increased by more than a factor of 1,000 in a little over a decade.
1. The JetBot open-source project takes advantage of NVIDIA’s Jetson Nano.
The thing is, the Jetson Nano simply continues the artificial-intelligence/machine-learning (ML) trend that’s essentially based on deep neural networks (DNNs). Platforms like the new BeagleBone AI (Fig. 2) take advantage of Texas Instruments’ AM5729, with its C66x DSP cores and embedded vision engine (EVE), to handle ML applications. Though it targets a different class of applications, it can still process video streams in real time.
2. Texas Instruments’ AM5729, with its C66x DSP cores and embedded vision engine, helps the BeagleBone AI platform tackle machine-learning chores.
Image processing is a computationally demanding ML application, but it’s far from the only AI application these days. Renesas’ RX66T microcontroller is supported by the company’s Failure Detection e-AI Solution. It provides ML-based motor-control support for up to four motors, allowing developers to add preventive-maintenance capabilities while reducing the amount of information sent to the cloud by preprocessing data locally.
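The idea of preprocessing locally and uploading only what matters can be shown with a minimal sketch. This is not Renesas’ actual e-AI implementation; the function names, the RMS feature, and the 1.5x threshold are all hypothetical choices for illustration, assuming the goal is to flag motor-current windows that deviate from a healthy baseline:

```python
import math

def rms(samples):
    """Root-mean-square of a window of motor-current samples."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def should_upload(window, baseline_rms, threshold=1.5):
    """Flag a window for cloud upload only when its RMS deviates
    from the healthy baseline by more than the threshold factor."""
    return rms(window) > baseline_rms * threshold

# Healthy motor: current samples near the baseline amplitude.
healthy = [math.sin(0.1 * i) for i in range(100)]
baseline = rms(healthy)

# Degraded motor: amplitude has doubled (e.g., bearing wear).
faulty = [2.0 * math.sin(0.1 * i) for i in range(100)]

print(should_upload(healthy, baseline))  # False: nothing sent
print(should_upload(faulty, baseline))   # True: anomaly reported
```

Only the anomalous windows (or their summary features) would cross the network, which is the bandwidth savings the local approach buys.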
Microcontrollers are gaining ML support as well. For example, STMicroelectronics’ STM32Cube.AI helps developers convert trained neural networks into optimized code for a range of STM32 platforms based on Arm’s 32-bit Cortex-M family.
These examples are merely the tip of the iceberg when it comes to ML support. AI certainly isn’t needed for every application. However, a growing number can benefit from the technology, even applications where space and power are limited. The key to success isn’t simply hardware acceleration; it’s the extensive software support that surrounds today’s AI tools. That software support is what makes these solutions, and others like them, stand out.