
A Closer Look at Machine Learning Chip Maker Mythic

April 16, 2018

Mythic was founded at the University of Michigan in 2012 under the name Isocline, and before it set its sights on the machine learning chip market in 2016, it was trying to build embedded chips that would let surveillance drones run software modeled on the human brain. Part of the company’s funding came from the Department of Defense.

But after relaunching two years ago, Mythic refocused on embedded applications such as autonomous cars and security cameras. Now the company is only a few months from sampling chips based on an ambitious architecture that uses analog computing inside flash memory cells to accelerate machine learning tasks like facial recognition.

Helping it over the finish line is $40 million raised last month from new and existing investors, including SoftBank Ventures, Draper Fisher Jurvetson and Lux Capital. Other investors financing Mythic’s bid for volume production in early 2019 were Lockheed Martin and Andy Bechtolsheim, co-founder of Sun Microsystems and Arista Networks.

Unlike Nvidia, which has dominated the server chips used in training, Mythic is focused on embedded inference. Training can mean churning through millions of photographs to teach a machine learning model to identify a certain object, like a cat. Inference then uses that trained model to interpret never-before-seen images.
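
The difference is easiest to see in code. The sketch below is a minimal, generic illustration using scikit-learn, nothing to do with Mythic's stack: fit is the heavy, data-hungry training step that lives in a server farm, while predict is the lightweight inference step an embedded chip must run on each new input.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X_train = rng.normal(size=(10_000, 64))             # stand-in for image features
y_train = (X_train[:, 0] > 0).astype(int)           # stand-in for "cat / not cat" labels

model = LogisticRegression().fit(X_train, y_train)  # training: heavy, done offline

x_new = rng.normal(size=(1, 64))                    # a never-before-seen input
print(model.predict(x_new))                         # inference: light, done in the field
```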

“We always thought that inference was a more relevant problem than training,” said Dave Fick, Mythic’s co-founder and chief technology officer, in an interview at the company’s co-headquarters in Austin, Texas. “The capabilities of the inference platform dictate your ability to deploy algorithms in the field. You could just double the size of your server farm, but that doesn’t really affect anything happening out in the field.”

Fick told Electronic Design that the processor will be about a hundred times more efficient than Nvidia’s Titan Xp GPU. That performance could enable security cameras to run facial recognition without consulting the cloud, improving privacy and conserving network bandwidth. It could also pay off with lower latency for autonomous cars or industrial controls.

The company, whose other headquarters is in Redwood City, California, performs processing inside blocks of flash memory, running machine learning faster and more efficiently than other chips. That in-memory processing lets it eliminate the power consumed by traditional chips pumping information in and out of external memory. “Moving data costs energy,” said Fick.
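
That claim can be put in rough numbers. By commonly cited estimates from the computer architecture literature (Mark Horowitz’s ISSCC 2014 keynote, 45-nm figures), a 32-bit floating-point multiply costs roughly 3.7 picojoules, while fetching a 32-bit word from off-chip DRAM costs on the order of 640 picojoules: moving an operand can consume more than a hundred times the energy of computing with it.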

The memory bottleneck has loomed over chip companies as the cost benefits of etching smaller and smaller transistors onto silicon have faded. While many companies cram chips with expensive caches where common instructions are stored, others have moved memory as close to processing as possible – a strategy used by IBM's TrueNorth. Still others are trying to compute inside blocks of DRAM.

Mythic keeps movement to a minimum by running inference algorithms in the same flash memory cells that hold the millions of weights hammered into the machine learning model. These weights determine the strength of the association between, for instance, the features of an object in a photograph and the word used to describe it.

Inference is “actually a lot simpler than training,” Fick said. The algorithms are basically long chains of matrix multiplications, in which an input is multiplied by a weight stored in the network. The product is added to the result of another input multiplied by yet another weight, and so on for thousands of operations, each of which costs a little power.
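
Fick’s description maps directly to code. Here is a minimal sketch in plain Python, illustrative only; real implementations use optimized matrix libraries or, in Mythic’s case, analog hardware:

```python
# One output neuron's inference: thousands of multiply-accumulate operations.
# weights are the stored model parameters; inputs are the incoming activations.
def neuron_output(inputs, weights):
    acc = 0.0
    for x, w in zip(inputs, weights):
        acc += x * w          # one multiply-accumulate; each costs a little power
    return acc

# A full layer repeats this per output neuron: a matrix-vector multiplication.
def layer_output(inputs, weight_matrix):
    return [neuron_output(inputs, row) for row in weight_matrix]
```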

The company dismissed the idea of using a multicore architecture to perform that arithmetic digitally; the challenge there would have been mapping algorithms onto all those processing cores. “We thought Intel and Nvidia would be able to create something like that relatively easily, so we wanted to do something different,” Fick said.

Mythic opted for analog computing. Using analog currents instead of digital signals lets it save the power normally wasted by constantly switching transistor voltages. Instead, all the memory cells in Mythic’s chips drive simultaneously, which not only shortens the travel distance for data into the processor but also handles addition automatically: currents from many cells sum on a shared wire, so accumulation costs essentially no extra power.

Mythic stitches together blocks of memory into a parallel processor. Each block uses digital circuits to convert instructions into analog current. An analog-to-digital converter at the edge of each memory block spits out the results. The digital circuits are reconfigurable and prevent analog currents from cascading into other parts of the processor.
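
That pipeline can be mimicked numerically. The sketch below is a hedged, purely mathematical model of an analog in-flash matrix-vector multiply, not Mythic’s circuit: weights live as cell conductances, a DAC turns digital inputs into row voltages, each cell contributes a current per Ohm’s law, column currents sum by Kirchhoff’s current law, and an ADC digitizes the totals. All array sizes and scaling choices are invented for illustration.

```python
import numpy as np

# Toy numerical model of an analog in-flash matrix-vector multiply (illustrative).
G = np.random.uniform(0.0, 1.0, size=(256, 64))   # weights stored as cell conductances
x_digital = np.random.randint(0, 256, size=256)   # 8-bit digital input activations

v = x_digital / 255.0                  # DAC: digital codes -> row voltages
i_cells = v[:, None] * G               # Ohm's law: each cell sources I = V * G
i_cols = i_cells.sum(axis=0)           # Kirchhoff: column currents sum "for free"
y_digital = np.round(255 * i_cols / i_cols.max()).astype(int)  # ADC: digitize totals
```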

These standalone chips are based on 40-nanometer embedded flash memory fabricated by Fujitsu. Mythic is staying away from more advanced manufacturing to keep costs low. Last month, the company taped out the processor structure and now it is finishing up the digital architecture, which helps determine the types of neural networks the chip can accelerate – a wider range than competing chips, Fick said.

Whether Mythic will succeed should crystallize next year, as customers start playing with the new 8-bit processor. The company has raised $55.7 million in venture capital funding over the last two years and plans to expand its 35-person team. “In the embedded space, it is going to be a year after you get your chip out that you start seeing revenue,” Fick said.

While Fick helms hardware development in Austin, Mike Henry, Mythic’s chief executive, oversees a software team in Redwood City. The company, like its rivals, is working on software tools to ease embedded engineers into programming Mythic’s custom chips. “We are taking TensorFlow and compiling it directly to the chip,” Fick said.
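
Mythic has not published details of that compiler, so the sketch below shows only the generic shape of the last step any 8-bit deployment flow performs: quantizing a trained floating-point weight tensor down to signed 8-bit codes plus a scale factor. The function name and the per-tensor scaling scheme are assumptions for illustration, not Mythic’s toolchain.

```python
import numpy as np

def quantize_int8(w_float):
    """Hypothetical helper: map float weights to signed 8-bit codes plus a
    per-tensor scale, the lossy step any 8-bit deployment flow must perform."""
    scale = np.abs(w_float).max() / 127.0
    w_int8 = np.clip(np.round(w_float / scale), -127, 127).astype(np.int8)
    return w_int8, scale

w = np.random.normal(size=(64, 64)).astype(np.float32)   # a trained layer's weights
w_q, s = quantize_int8(w)
print(np.abs(w - w_q.astype(np.float32) * s).max())      # worst-case quantization error
```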

These kinds of software tools could level the playing field in an increasingly crowded market. “Until someone comes out with a serious intelligence processing unit, Nvidia is sitting pretty,” Fick told Electronic Design. “But once someone comes out with a competitive product, Nvidia’s sales are going to plummet. Not a whole lot is making them sticky in that market.”

Dethroning Nvidia is not exactly Mythic’s goal. The company plans to start supplying chips that can be installed in networks of security cameras before moving into the broader artificial intelligence space. Mythic could face challenges from startups such as Reduced Energy Microsystems and Horizon Robotics, which both target embedded inferencing.

Other companies cast longer shadows. Following its acquisition of Movidius, Intel is crafting chips that can handle the machine learning algorithms replacing traditional computer vision, which was Movidius's original focus. Qualcomm is sampling two new Arm-based processors that can be used in security cameras that track packages or transmit faces they have identified.

“Arm is definitely our biggest competitor,” Fick said. He added that while Mythic is not competing directly with Arm’s IP, the company will have to contend with “good enough” solutions from companies like Qualcomm and NXP Semiconductors that pay to license Arm cores. If potential customers can get enough out of established silicon for machine learning, they won’t need Mythic’s more specialized solution, he said.

The machine learning hardware market is still relatively new, so even Arm, whose designs dominate Internet of Things devices, has targeted embedded inference. The company recently unveiled an object detection processor, a machine learning accelerator and related software tools for security cameras and smartphones. Overseeing the technology, called Project Trillium, is the IP Products Group led by Rene Haas, who also happens to be on Mythic’s board.

Mythic may ultimately also clash with Nvidia, which has designed an open source architecture for inference based on its Xavier autonomous car processor. Last month, Nvidia said that the new architecture, NVDLA, would be available as part of Arm’s machine learning products, putting it in competition with the early players in embedded inference: Ceva, Cadence and Synopsys.

“[Nvidia's] strategy in creating the open-source project is to foster more-widespread adoption of neural-network inference engines,” Mike Demler, an analyst at semiconductor research firm The Linley Group, said in a newsletter last month. “It expects to thereby benefit from greater demand for its expensive GPU-based training platforms.”

Fick said that Mythic plans to release a server card with the same computing power as $30,000 worth of graphics chips. One reason is that it wants to get around the slower pace of the embedded market. Before using Mythic’s chips in actual products, embedded developers will put them through relentless and time-consuming qualification. “They are very risk averse,” Fick said.

The company’s PCIe-based card could be priced higher and let Mythic make money faster. But it could face stiffer competition than in the embedded market, particularly from startups such as Graphcore and Wave Computing. Meanwhile, cloud computing giants like Google and Microsoft have also waded into custom chips for inferencing.

In Fick's mind, Mythic’s analog computers have the upper hand. “You can build a system that is about 10 times more efficient [than a GPU] because of specialization, but we take that another 10 times further using analog compute,” he said. “Versus the Groqs and the Graphcores of the world, we are 10 times more efficient.”

But as more companies enter the machine learning market, Mythic’s margin for error could narrow. “If you look historically at the semiconductor industry, there are usually two players in each market,” Fick told Electronic Design. The assumption is that the inferencing and training markets will each have two winners.

Mythic, he said, is focused on being one of them.
