
NXP’s eIQ Brings Inference to the Edge

Oct. 16, 2018
Inference arrives at the edge: The eIQ machine-learning framework and development tools facilitate the transfer of AI from the cloud to devices on the edge.

This article is part of TechXchange: AI on the Edge

NXP delivers a wide range of processing solutions, from the compact Kinetis and LPC microcontrollers to high-performance SoCs like its i.MX and Layerscape application processors. What most developers may not know is that machine-learning (ML) applications can run on all of these. Of course, developers need the associated software and development tools to make them work, which is where NXP’s new eIQ framework and development tools come into play (see figure).

“Having long recognized that processing at the edge node is really the driver for customer adoption of machine learning,” says Geoff Lees, senior vice president and GM of Microcontrollers, “we created scalable ML solutions and eIQ tools, to make transferring artificial-intelligence capabilities from the Cloud to the Edge even more accessible and easy to use.”

NXP’s eIQ framework and development tools bring machine-learning applications to its family of microcontrollers and application processors.

NXP’s eIQ is designed to bring ML to every NXP developer, including those using stock hardware without ML-specific acceleration. The solution takes advantage of existing on-chip hardware that can accelerate ML applications but is also useful for other chores such as graphics processing or real-time system control. This means using hardware such as Arm NEON SIMD units, GPUs, and DSPs in addition to CPUs. Of course, your mileage may vary because ML tends to be compute-heavy. Still, even a microcontroller can run ML applications that have been suitably scaled to match the system’s resources.

The company is looking to deliver many ML-based applications and services as black boxes, such as vision-, voice-, and sensor-processing applications where deep neural networks (DNNs) and convolutional neural networks (CNNs) perform inference for tasks like facial recognition, speech recognition, and anomaly detection. The eIQ framework is designed to work with hardware abstraction layers like OpenCL, OpenVX, and the Arm Compute Library, as well as inference engines like Arm NN (neural network), Android NN, GLOW, and OpenCV.
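At its core, the work an inference engine such as Arm NN or GLOW dispatches to CPUs, GPUs, and DSPs is matrix arithmetic. As a rough illustration (a toy sketch in plain Python, not NXP's or any engine's API), a fully connected neural-network layer is a set of dot products followed by a nonlinearity:

```python
# Toy sketch of a fully connected neural-network layer, the kind of
# computation an inference engine hands off to an accelerator.
# Pure Python for clarity; real engines use optimized vector kernels.

def dense_layer(weights, bias, x):
    """Compute y = relu(W @ x + b) for a single layer."""
    out = []
    for row, b in zip(weights, bias):
        acc = b
        for w, v in zip(row, x):
            acc += w * v          # multiply-accumulate: the core ML op
        out.append(max(0.0, acc)) # ReLU activation
    return out

# Example: a 2-output layer over a 3-element input vector
W = [[0.5, -0.2, 0.1],
     [0.3,  0.8, -0.5]]
b = [0.1, -0.2]
print(dense_layer(W, b, [1.0, 2.0, 3.0]))
```

Stacking many such layers, with convolutions in place of the dense multiplies, is what turns this arithmetic into the DNNs and CNNs the article describes.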

The system will handle model conversion for platforms such as TensorFlow Lite, Caffe2, and PyTorch. It will also address classical ML algorithms, including the support vector machine (SVM) and random forest.
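Classical models like these are often small enough to run comfortably on a microcontroller. The sketch below (a toy stdlib-only illustration, not eIQ code) shows the idea behind a random forest: an ensemble of simple decision stumps, each trained on a bootstrap sample, voting on a 1-D feature:

```python
import random

# Toy random-forest-style classifier: an ensemble of decision stumps,
# each fitted to a bootstrap resample of labeled 1-D points.
# Illustrative of classical ML on small devices; not eIQ code.

def train_stump(samples):
    """Pick the threshold that best separates the two labels."""
    best_thr, best_acc = 0.0, -1.0
    for thr, _ in samples:
        acc = sum((x > thr) == label for x, label in samples) / len(samples)
        if acc > best_acc:
            best_thr, best_acc = thr, acc
    return best_thr

def train_forest(data, n_trees=15, seed=0):
    rng = random.Random(seed)
    # Each "tree" sees its own bootstrap sample of the training data
    return [train_stump(rng.choices(data, k=len(data))) for _ in range(n_trees)]

def predict(forest, x):
    votes = sum(x > thr for thr in forest)
    return votes * 2 > len(forest)   # majority vote across the ensemble

# Toy data: values at or below 5 are class False, above 5 class True
data = [(x, x > 5) for x in range(10)]
forest = train_forest(data)
print(predict(forest, 8.0), predict(forest, 1.0))
```

A real deployment would train a full random forest or SVM offline and ship only the fitted model parameters to the device, which is the workflow eIQ's model-conversion step supports.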

NXP is moving toward more ML-specific hardware while trying to support the wide variety of existing and new ML models. For example, its latest LPC5500 Cortex-M33 systems incorporate a MAC co-processor. The co-processor can accelerate ML and DSP functions including convolution, correlation, matrix operations, transfer functions, and filtering. It delivers 10 times the performance of the Cortex-M33 core alone for these types of operations.
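The multiply-accumulate (MAC) operation such a co-processor speeds up is the inner loop of filtering and convolution. A plain-Python FIR filter makes the pattern concrete (an illustration of the math, not the LPC5500 programming interface):

```python
# FIR filtering is a chain of multiply-accumulate (MAC) operations:
# each output sample is a dot product of the filter taps with the most
# recent input samples. This inner loop is exactly the workload a MAC
# co-processor offloads from the CPU core.

def fir_filter(taps, samples):
    out = []
    for i in range(len(samples)):
        acc = 0.0
        for j, tap in enumerate(taps):
            if i - j >= 0:
                acc += tap * samples[i - j]   # one MAC per tap
        out.append(acc)
    return out

# 3-tap moving-average filter smoothing a step input
taps = [1/3, 1/3, 1/3]
print(fir_filter(taps, [0, 0, 3, 3, 3]))
```

The same dot-product structure underlies convolution and correlation, which is why a single MAC unit accelerates all of the functions listed above.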

Read more articles on this topic at the TechXchange: AI on the Edge

