(Image courtesy of Graphcore).

Graphcore Prepares Machine Learning Silicon for This Year

July 27, 2017
Graphcore recently closed a $30 million funding round from investors including artificial intelligence researchers Demis Hassabis, Zoubin Ghahramani, and Greg Brockman.

Graphcore, a start-up making chips that not only train machine learning algorithms on vast amounts of raw data but also apply them to new problems, is bracing for a fight to power software that can classify images or trade stocks.

No one knows whether these applications will be handled by more traditional CPUs, GPUs, and FPGAs, or custom silicon from Graphcore and other start-ups. But the market for customized chips with specific machine learning features is growing crowded with giants like Nvidia and Intel, as well as the likes of Google and Microsoft.

Graphcore, based in Bristol, U.K., announced last week that it had raised $30 million in a funding round led by investment firm Atomico and from investors that included researchers from Alphabet, Uber, and OpenAI. The company plans to release its first processor later this year and increase production for data center and cloud customers in 2018.

The company was founded in 2015 by chief executive Nigel Toon, a former vice president of Altera and founder of Icera Semiconductor, and chief technology officer Simon Knowles, a former head of microprocessor development at STMicroelectronics and also one of Icera’s founders.

For the last year, Graphcore has been sharing details of its intelligence processing unit – what it calls the IPU – which can be used to cycle through reams of training data and then make inferences without explicit programming. The design excels at handling computational graphs, which can represent correlations and other relationships in data.
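A computational graph of the kind described here is simply a set of operations with explicit data dependencies, evaluated in dependency order. The following is a minimal illustrative sketch in Python — it says nothing about Graphcore's actual graph representation, and the `Node`/`evaluate` names are invented for this example:

```python
# Illustrative sketch only (not Graphcore's format): a computational
# graph as nodes with explicit dependencies, evaluated recursively
# with memoization so shared sub-graphs are computed once.

class Node:
    def __init__(self, op, inputs=()):
        self.op = op          # callable that combines input values
        self.inputs = inputs  # upstream Node dependencies

def evaluate(node, cache=None):
    """Evaluate a node, caching results of shared sub-graphs."""
    if cache is None:
        cache = {}
    if node not in cache:
        args = [evaluate(n, cache) for n in node.inputs]
        cache[node] = node.op(*args)
    return cache[node]

# Graph for (x * w) + b, a building block of a neural network layer.
x = Node(lambda: 3.0)
w = Node(lambda: 2.0)
b = Node(lambda: 1.0)
mul = Node(lambda a, c: a * c, (x, w))
add = Node(lambda a, c: a + c, (mul, b))

print(evaluate(add))
```

Because every dependency is explicit, independent branches of such a graph can in principle be dispatched to different cores in parallel — the property the article says Graphcore's design exploits.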

These graphs map naturally onto Graphcore’s parallel processors, which will plug into the PCIe buses of standard servers. The first chip, code-named Colossus, will split workloads among more than 1,000 cores. The number-crunching chip uses mixed-precision floating point (16- and 32-bit) to run machine learning software faster and more efficiently than chips that compute everything at higher precision.
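The trade-off behind mixed precision can be shown with NumPy — an illustration only, not a description of how Colossus schedules arithmetic. Storing values in 16-bit floats halves memory traffic, but naively accumulating in 16 bits loses small increments once the running sum grows, so sums are typically carried in 32 bits:

```python
import numpy as np

# Illustrative only: 10,000 small weights stored compactly as float16.
weights = np.ones(10000, dtype=np.float16) * np.float16(1e-3)

# Naive float16 accumulation: once the sum is large relative to the
# float16 spacing, each 0.001 increment rounds away and the sum stalls.
half_sum = np.float16(0)
for wgt in weights:
    half_sum = np.float16(half_sum + wgt)

# Accumulating in float32 keeps the increments and lands near the
# true total of ~10.
full_sum = np.float32(0)
for wgt in weights:
    full_sum += np.float32(wgt)

print(half_sum, full_sum)  # half_sum stalls well short of ~10
```

This is the standard motivation for 16-bit storage with 32-bit accumulation in deep learning hardware: most of the bandwidth and energy savings of 16-bit data, without the accuracy collapse of pure 16-bit sums.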

The company will also provide software tools to efficiently compile graphs for Graphcore’s chips, which will first be manufactured with TSMC's 16-nanometer process. The Poplar tools can translate applications written in machine learning frameworks like TensorFlow and Caffe2 into a form that can be exploited by IPUs.

Graphcore is not straying far from the industry’s imperative to reduce memory traffic. It aims to cut the latency and power spent constantly retrieving data from external memory. Graphcore keeps the entire machine learning model in memory distributed across its thousands of cores, using no external memory and sidestepping latency that has dogged engineers for years.

That appears to be sending good vibes to investors, who have long hesitated to back semiconductor start-ups strapped with high development costs. But artificial intelligence researchers are also sniffing around Graphcore’s chips, which could cut the time to generate machine learning models from weeks and months to hours and days.

The investors from the latest round of funding include Demis Hassabis, one of the founders of DeepMind, whose AlphaGo software mastered the arcane board game of Go. Others throwing weight behind the start-up include Zoubin Ghahramani, Uber’s chief scientist, as well as Greg Brockman and Ilya Sutskever, founders of the OpenAI laboratory.

“Deep neural networks have allowed us to make massive progress over the last few years, but there are also many other machine learning approaches that could help us achieve radical leaps forward in machine intelligence,” Uber’s Ghahramani said in a statement. “Current hardware is holding us back from exploring these different approaches.”

Graphcore is on a collision course with lots of chip suppliers. Wave Computing claims that its unique chips hit a sweet spot between ASICs and FPGAs for data centers. Others include stealthy companies like Cerebras Systems and Groq, founded by Jonathan Ross, who helped invent the tensor processing unit (TPU) that Google recently updated for both training and inferencing.  

This year, Intel plans to release its Lake Crest chip built using expertise it acquired last year from start-up Nervana Systems. Advanced Micro Devices is prepping a new graphics chip for the data center, while Xilinx is throwing weight behind FPGAs for inferencing. In the cross hairs is Nvidia, whose graphics chips have given it a big head start in deep learning accelerators.

Nvidia wired hundreds of specialized tensor cores into its latest Volta accelerator, which can perform 120 trillion operations per second on deep learning workloads. The company has also built a daunting lead in software tools for its graphics chips, supporting almost every framework for processing neural networks used in deep learning.

The question is whether Graphcore can compete with larger rivals like Nvidia and new start-ups pouring hundreds of millions of dollars into unique computer chips. Another question is whether it can survive at all given that chip companies are signing blank checks to acquire artificial intelligence expertise.

Graphcore’s edge comes from its tailored architecture without the baggage of Nvidia’s chips, which were invented for rendering video games. Experts say that mapping chips to specific algorithms is necessary for powering up programs that identify diseases, understand speech, and prevent car accidents.
