Krishna Rangasayee, Xilinx’s former executive vice president of global sales, has taken the chief operating officer job at Groq, a secretive semiconductor start-up with roots in the engineering cabal behind Google’s machine learning chip, the tensor processing unit.
Rangasayee announced the job change on his LinkedIn page. At Xilinx for 18 years, he served as senior vice president of global sales before being promoted to executive vice president in April as part of a succession plan for chief executive Moshe Gavrielov.
Rangasayee was in charge of sales when both Amazon and Baidu announced that they would install Xilinx’s products – field-programmable gate arrays (FPGAs) – in data centers rented out to their cloud customers. Xilinx is betting that FPGAs can accelerate software that judges the similarity of two photos, for instance, or tailors search results based on a person’s internet history.
Groq is targeting customized chips for handling these tasks. It was founded by Douglas Wightman – a former engineer at Google X – and Jonathan Ross, who helped invent the tensor processing unit (TPU), which underpins Google’s machine learning software for image recognition and other tasks. It was also at the heart of the AlphaGo program, which has defeated the world's top players in the board game Go.
Ross, who worked at Google until last September, says on his LinkedIn page that he started the TPU as a 20% project, a policy that lets engineers devote a fifth of their time to side projects that could benefit Google. Ross is also named as an inventor on four patents underlying the specialized silicon.
Groq, which still has no website, secured $10.3 million in funding last year from venture capitalist Chamath Palihapitiya. The funding, which the start-up revealed in filings with the Securities and Exchange Commission, was first reported by CNBC. The company’s name may not even be set in stone.
Rangasayee declined to comment for this article because the company is still in stealth. He informed Xilinx earlier this month that he would be leaving to pursue other “professional opportunities.” His resignation went into effect on August 18, according to an SEC filing.
Neither Ross nor Wightman replied to requests for comment via LinkedIn.
Groq is plotting a new processor, but little else is known about it. It could take years for Groq to release a final product – though Google’s engineering team finished the first generation of the TPU in around 14 months. The company has hired eight of the first 10 members of the TPU team, CNBC reported.
Google’s chief executive Sundar Pichai announced the first TPU a year ago, and it sent ripples through the chip industry. It instantly lent credence to Graphcore, Wave Computing, and other companies building application-specific integrated circuits (ASICs) for machine learning. And it further fanned the funeral pyre of Moore’s Law, which has guided the industry for decades.
The announcement also reflected the industry’s heightened creative energy. For years, Nvidia’s graphics chips have been the gold standard for machine learning, crunching the repetitive math required to train software and then allowing it to make inferences based on what it has learned. For now, Nvidia’s chips are considered the fastest at training and inferencing.
These graphics chips, however, were first invented for rendering video games. Nvidia’s rivals argue that they have too much baggage to run complex software that learns to identify diseases and prevent car accidents. Specialized silicon could train models and apply them faster and more efficiently than traditional chips, experts say.
The first-generation TPU handled only inferencing, but Google claimed that it was around seven years – or three processor generations – ahead of general-purpose architectures at machine learning. In May, Google unveiled its second TPU, which stands out for handling both training and inferencing.
In April, the company published a paper with 75 authors – including Ross and Google's Norm Jouppi – that detailed the TPU's architecture. The paper claimed the TPU was 15 to 30 times faster on inferencing tasks and 30 to 80 times more energy efficient per trillion operations than Nvidia’s GPUs and Intel’s CPUs.
Internet companies are among the biggest buyers of computer chips for data centers. And the financial threat posed by them making their own has lit fires under traditional chipmakers. This year, Intel tested its new Lake Crest chip, which uses custom silicon it acquired last year from Nervana Systems. Naveen Rao, Nervana's chief, now leads Intel’s artificial intelligence unit.
Xilinx has released software tools to make programming easier for engineers using FPGAs as machine learning accelerators. In July, it reported first-quarter revenue of $615 million, half of which came from advanced products sold for data centers and other applications. Revenue from that business rose 33% from last year’s first quarter.
These strategy shifts can take a toll. Nvidia poured around $3 billion into its latest graphics chip for machine learning, which contains specialized cores to run deep neural networks. The foundry it used, TSMC, squeezed the 21 billion transistors onto the largest die possible – the so-called reticle limit – with its photolithography tools.
Groq has a tough road ahead. It will require lots of financial firepower to compete with rivals pouring billions into the development of new chips. Groq’s other priorities could include software that enables its processors to read code written in common machine learning frameworks, like TensorFlow and Caffe.
Correction August 28th, 2017: This article mistakenly said that Douglas Wightman, Groq's chief executive and one of its founders, was named as an author on Google's TPU paper released in April. He was not. Jonathan Ross, Norm Jouppi, and 73 others were.