
AI Chipsets: Key Players in the Cloud-to-Edge Shift

Jan. 12, 2021
As the world becomes more connected, advanced technologies like AI are tightening their hold on the industrial sector. In turn, AI chipsets are emerging as solutions for large-scale data processing.

What you’ll learn:

  • AI chipsets and their significance in the modern industrial landscape.
  • Why edge AI chips are surpassing cloud-based solutions as the preferred option for AI processing.
  • The latest developments in the AI chipset industry.

 

Artificial intelligence (AI) is driving rapid transformation across the industrial landscape. With companies growing more data-driven, the demand for AI grows in parallel. Speech recognition, recommendation systems, medical imaging, and better supply-chain management are just a few of the ways in which AI technology has empowered organizations with the tools, algorithms, and computing power to execute their work efficiently.

The success of these modern AI technologies, however, hinges on computation at extremely large scales. To address this computation challenge, computer chips packed with numerous transistors and customized to perform calculations specific to AI systems are gaining ground as the preferred solution. These specialized, cutting-edge AI chipsets are now an integral part of cost-effective AI implementation at scale.

AI chipsets, also known as AI accelerators, are essentially specialized computer chips designed to achieve the high speeds and efficiencies necessary for large-scale AI-specific calculations.

With regard to product type, the AI chipset market is broadly categorized into graphics processing units (GPUs), field-programmable gate arrays (FPGAs), and application-specific integrated circuits (ASICs) specialized for AI applications. While simpler AI tasks can be carried out by general-purpose chips such as CPUs, those chips have become less prevalent in recent years as more advanced chip technologies emerge.

Different AI-optimized chips are used for different tasks. For instance, GPUs show the most potential in the initial development and refinement of AI algorithms, a process known as “training.” FPGAs are used commonly in applying “trained” AI algorithms to data inputs in real-world applications, through a process known as “inference.” Meanwhile, ASICs demonstrate strong potential in both training and inference applications.
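To make the training/inference distinction concrete, below is a minimal sketch in plain Python/NumPy, using made-up data and a toy logistic-regression model (no particular chipset's toolchain is assumed). The iterative loop is the compute-heavy "training" phase that GPU-class hardware accelerates; the single forward pass at the end is the lightweight "inference" step that FPGA- and ASIC-based inference chips target.

```python
import numpy as np

# --- Training: iteratively fit a tiny logistic-regression model. ---
# Many passes over the data make this the compute-heavy phase that is
# typically run on GPUs or other training-oriented accelerators.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))                            # synthetic features
y = (X @ np.array([1.5, -2.0, 0.5]) > 0).astype(float)   # synthetic labels

w = np.zeros(3)
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w)))    # predicted probabilities
    w -= 0.1 * (X.T @ (p - y)) / len(y)   # gradient-descent step

# --- Inference: apply the trained weights to one new input. ---
# A single forward pass is the lighter workload that inference-oriented
# chips (FPGAs, edge ASICs) are designed to accelerate.
x_new = np.array([0.2, -1.0, 0.3])
prob = 1.0 / (1.0 + np.exp(-(x_new @ w)))
print(f"Predicted probability: {prob:.3f}")
```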

Edge AI Chips Take Over from Their Cloud Counterparts

Until recently, most AI computation was carried out in data centers, on telecom edge processors, or on enterprise core appliances, rather than locally on devices, owing to the highly processor-intensive nature of AI workloads. Centralizing workloads in the cloud was traditionally preferred for its scalability and flexibility.

In the modern AI paradigm, however, the industry is undergoing a significant shift, led by edge AI chipsets. One of the most prominent trends in chip technology, edge AI chips run AI processing "on the edge," i.e., locally on the device rather than in the cloud.

Such chip technologies have distinct advantages. By executing AI processing tasks locally on a device, rather than on a remote server, edge AI chips can significantly increase the speed and privacy of the process.

Given their burgeoning popularity, edge AI chipsets are rapidly making their way into a multitude of applications, from consumer devices such as tablets, smartphones, speakers, and wearable devices, to more complex technologies such as cameras, sensors, robots, and other IoT-enabled devices.

These AI-optimized chipsets are designed to be smaller and more economical than their conventional counterparts, while consuming less power and generating considerably less heat, making them much simpler to integrate into handheld as well as non-consumer devices. By enabling the local execution of processor-intensive AI computations, edge AI chipsets can considerably reduce or eliminate the need to send large data volumes to remote locations. This, in turn, can deliver myriad benefits in terms of speed, usability, privacy, and security of data.
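As a rough illustration of the speed argument, the sketch below contrasts the two paths. The function names and the 50-ms figure are hypothetical, and the network round trip is simulated with a sleep; the point is simply that edge inference removes the network hop from the critical path and keeps raw data on the device.

```python
import time

SENSOR_READING = [0.12, 0.85, 0.33]   # stand-in for raw device data

def run_model(data):
    # Stand-in for an accelerated model; here, just a trivial score.
    return sum(data) / len(data)

def cloud_inference(data):
    # Raw data crosses the network in both directions.
    time.sleep(0.05)                  # simulated 50-ms round trip
    return run_model(data)            # server-side compute (same stand-in)

def edge_inference(data):
    # Data never leaves the device: lower latency, better privacy.
    return run_model(data)

for name, fn in [("cloud", cloud_inference), ("edge", edge_inference)]:
    start = time.perf_counter()
    result = fn(SENSOR_READING)
    elapsed = time.perf_counter() - start
    print(f"{name}: result={result:.3f}, latency={elapsed * 1000:.1f} ms")
```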

While edge AI chips show significant promise in consumer device applications such as smart speakers, smartphones, and tablets, among others, they’re also gaining rapid traction in other, more complex devices.

For instance, Arm’s chips, such as the Cortex-M55 processor and the Ethos-U55 neural processing unit, are designed to go beyond traditional phones and tablets into more complex use cases like IoT devices. For example, 360-deg. cameras in walking sticks that identify obstacles, and advanced train sensors that detect problems locally to mitigate delays, are among the major use cases on the horizon for the company’s AI-optimized chipsets.

Outlook: Startup Investments, Novel Technologies Help Foster Growth

As the demand for leading-edge AI chipset technologies continues to surge, many well-heeled startups are beginning to emerge as strong competitors to market leaders such as Nvidia. A notable example of this is Hailo, an Israel-based startup that unveiled a funding round worth $60 million in March 2020, geared toward the rollout of its deep learning chip.

Intel Capital, the investment and venture-capital arm of chipmaker Intel, also invested over $132 million across 11 startups in the automation, AI, and chip-design domains, including Hypersonix, Spectrum Materials, Axonne, ProPlus Electronics, Anodot, MemVerge, Astera Labs, Lilt, KFBIO, Retrace, and Xsight Labs.

Meanwhile, established technology companies are also making significant strides in developing sophisticated AI-optimized chipsets. For example, Apple is paving its way into the AI future by integrating its A11 and A12 “Bionic” chips, which incorporate the company’s Neural Engine, into its newer iPad and iPhone models. The A12 is claimed to be almost 15% faster than its predecessor while drawing only 50% of the power.

Sony, on the other hand, is focusing on a hybrid technology: an image sensor with a built-in AI processing system. Given the rapid convergence of computation and imaging, the technology is touted to deliver broad application scope by providing a single electronic assembly that can perform substantial processing on photos before they’re sent to the GPU, main logic board, or cloud.
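As a hedged sketch of the idea behind such sensor-level processing (a toy brightness heuristic stands in for the sensor's neural network; this is not Sony's actual pipeline), the assembly below analyzes a frame where it's captured and forwards a few bytes of metadata rather than the full image:

```python
import json
import numpy as np

# Toy 8-bit grayscale frame (480 x 640), standing in for a captured image.
rng = np.random.default_rng(1)
frame = rng.integers(0, 256, size=(480, 640), dtype=np.uint8)

def on_sensor_detect(frame):
    # Toy stand-in for the sensor's built-in AI: flag bright regions.
    # A real on-sensor network would emit labels or bounding boxes.
    bright_fraction = float((frame > 200).mean())
    return {"object_detected": bright_fraction > 0.1,
            "bright_fraction": round(bright_fraction, 4)}

metadata = on_sensor_detect(frame)
payload = json.dumps(metadata).encode()

print(f"Raw frame:     {frame.nbytes} bytes")   # ~300 kB per frame
print(f"Metadata only: {len(payload)} bytes")   # a few dozen bytes
```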

