(Image courtesy of Thinkstock).

Wading into the Battle over Embedded Brains

LeapMind is trying to compress deep neural networks so that they can fit inside custom chips embedded in everything from cameras that interpret what they see to headphones that translate what they hear into other languages.

And the company, based in Tokyo, Japan, recently raised $10 million from investors including Intel Capital to give it a fighting chance. It plans to use the funds to further improve its technology platform, called Juiz, which shrinks the algorithms used in deep learning to as little as one five-hundredth of their original size without sacrificing accuracy.

These algorithms are typically trained in the cloud on vast amounts of data, such as internet search terms or human speech. After training, they usually keep running in data centers in a process called inferencing. Compressed, they can instead run inside embedded devices without hemorrhaging battery life and without communicating with the cloud.
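One common way to shrink a trained network for embedded inference is post-training quantization, which stores weights at lower precision. The sketch below is purely illustrative and is not LeapMind's actual Juiz method; it assumes NumPy and uses hypothetical helper names to show how float32 weights can be mapped to int8, cutting storage fourfold with only a small approximation error:

```python
# Illustrative post-training weight quantization (hypothetical, not Juiz).
import numpy as np

def quantize_int8(weights: np.ndarray) -> tuple[np.ndarray, float]:
    """Map float32 weights to int8 values plus a single scale factor."""
    scale = np.max(np.abs(weights)) / 127.0
    q = np.round(weights / scale).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover approximate float32 weights for inference."""
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.normal(size=(256, 256)).astype(np.float32)  # a toy weight matrix
q, scale = quantize_int8(w)

print(w.nbytes // q.nbytes)  # 4x smaller: float32 -> int8
print(float(np.max(np.abs(w - dequantize(q, scale)))))  # small rounding error
```

Real deployments combine tricks like this with pruning and retraining to push compression well past 4x while holding accuracy steady.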

LeapMind, which was founded in 2012, is trying to move inferencing into edge devices. That would improve privacy and eliminate the latency of round trips to cloud servers, which are usually owned by Google, Amazon, and other corporations. LeapMind maps these algorithms onto FPGAs, which consume less power than embedded CPUs or GPUs.

LeapMind said that the silicon can be used to prototype a custom chip – called an application-specific integrated circuit or ASIC – that runs even more efficiently. In the future, the company wants to start selling chips based on its own hardware architecture. For now, it plans to expand its sales team and hire engineering staff.

The funding round was led by Intel, which is also betting on strange silicon to sidestep the slowing pace of improvement in traditional chips. Intel’s Movidius unit sells Myriad, a line of chips used in cameras that can, for instance, detect cancerous moles on a person’s skin. Intel is also planning to sell a server chip to compete with Nvidia’s accelerator chips for deep learning.

Technology companies like Apple, Google, Huawei, and Microsoft are also pouring billions of dollars into custom chips that handle deep learning tasks in smartphones and augmented reality glasses. And start-ups are taking advantage of the slowing pace of Moore’s Law to make a convincing case that deep learning chips can be used in embedded devices.

They include Reduced Energy Microsystems, which has stripped the clocks from its processors so that they can run at lower voltages, and which was the first chipmaker accepted into the start-up incubator Y Combinator. With $7.7 million in venture capital funding, Mythic is aiming to sell chips that merge computer logic and memory to cut power consumption.

Horizon Robotics recently raised $100 million from Intel Capital and others to architect chips for everything from autonomous cars to smart speakers. The firm was founded by Baidu’s former head of deep learning research Kai Yu, and it has partnered with Bosch and others while putting the finishing touches on its brain processing unit (BPU).

New chips from ThinCI – funded by Intel’s former chief product officer Dadi Perlmutter and venture capitalist Dado Banatao – can be installed in a network of security cameras to track packages or identify criminals. Tenstorrent, founded by former Nvidia and AMD engineers, is creating a chip architecture for servers that can also be scaled down for sensors.
