(Image courtesy of Google.)

Groq Outlines Potential Power of Artificial Intelligence Chip

Nov. 15, 2017
The secretive start-up rooted in Google made its plans a little clearer and shared what its future chips will be capable of.

Groq, the secretive semiconductor start-up founded by veterans of Google’s effort to create an artificial intelligence chip for data centers, said on its website that it would share details about its first product next year. It is unclear whether the company plans to start shipping next year as well.

The website claims that the processor will run 400 trillion operations per second, more than twice the throughput of Google’s latest tensor processing unit, which delivers 180 trillion operations per second for training deep learning software. The chip will perform eight trillion operations per watt, the website said.
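
Taken at face value, those two claims pin down a third number: a chip running 400 trillion operations per second at eight trillion operations per watt would draw roughly 50 W. A minimal back-of-the-envelope sketch, assuming both figures describe the same peak workload:

```python
# Back-of-the-envelope check of the figures Groq published (a sketch;
# it assumes both numbers describe the same peak-throughput workload).
groq_ops_per_sec = 400e12   # claimed: 400 trillion operations per second
groq_ops_per_watt = 8e12    # claimed: 8 trillion operations per watt
tpu_ops_per_sec = 180e12    # Google's latest TPU, per the article

implied_power_watts = groq_ops_per_sec / groq_ops_per_watt
throughput_vs_tpu = groq_ops_per_sec / tpu_ops_per_sec

print(f"Implied power envelope: {implied_power_watts:.0f} W")  # 50 W
print(f"Throughput vs. TPU:     {throughput_vs_tpu:.1f}x")     # 2.2x
```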

Groq is tapping into a creative revival in the semiconductor industry to make custom chips for machine learning. Like others, it is attempting to unseat Nvidia, whose graphics chips are currently the gold standard for running the intense calculations required to train machine learning software and then draw inferences from new data.

The start-up, funded with $10.3 million from venture capitalist Chamath Palihapitiya, is staffed with eight of the first ten members of the team that created Google’s TPU, including Groq’s founder Jonathan Ross. It also recently hired Xilinx’s vice president of sales, Krishna Rangasayee, as chief operating officer.

It would be an accomplishment in itself for Groq to release its first product less than two years after it was founded. But the company’s chip designers met tight deadlines at Google as well. They taped out the first machine learning chip in only 14 months. The second generation came out a year later in time for Google’s I/O conference.

Groq is not only battling Nvidia for the hearts and minds of data scientists. It is also fighting Google, which offers the tensor processing unit over the cloud, and Intel, which plans to release a processor before the end of the year that delivers 55 trillion operations per second for training neural networks, the algorithms behind deep learning, a subclass of machine learning.

Every chip company has painted a giant target on Nvidia, which dominates the market for machine learning hardware. On Monday, Nvidia said that most major server manufacturers and cloud computing firms were using graphics chips based on its new Volta architecture. However, it did not acknowledge Google as one of its customers.

Nvidia created Volta to handle machine learning software faster and more efficiently than its previous designs. Like its rivals, the company built it to take advantage of lower-precision numbers, which require less computing power and memory, in algorithms that diagnose skin cancer, for instance, or train self-driving cars. Inside the chips are hundreds of specialized tensor cores that together can perform 120 trillion operations per second.
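
As an illustration of that trade-off, the sketch below uses NumPy on the CPU (an assumption for demonstration only; Volta’s tensor cores do the equivalent math in dedicated hardware) to show how halving numeric precision halves the memory a model’s weights occupy:

```python
import numpy as np

# Halving the width of each number halves the memory (and memory
# traffic), which is often the bottleneck in training and inference.
weights_fp32 = np.random.rand(1024, 1024).astype(np.float32)
weights_fp16 = weights_fp32.astype(np.float16)

print(weights_fp32.nbytes)  # 4,194,304 bytes at 32-bit precision
print(weights_fp16.nbytes)  # 2,097,152 bytes at 16-bit precision

# The cost is reduced range and precision, which many neural-network
# workloads tolerate well.
x = np.random.rand(1024).astype(np.float16)
y = weights_fp16 @ x  # matrix-vector product in half precision
```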

The changes can be extremely costly. Nvidia’s chief executive Jensen Huang said that the company poured $3 billion into the Volta architecture, while Intel is rumored to have acquired neural network chipmaker Nervana Systems for $400 million. And rival chipmakers are raising hundreds of millions of dollars to stay within striking distance.

Founded in 2010, Wave Computing has poured around $60 million into its coarse-grained reconfigurable array architecture, which acts like a hybrid of programmable chips called FPGAs and custom ones called ASICs. The founders of Cerebras Systems have raised $112 million, giving it a valuation of around $860 million, according to a report by Forbes.

Graphcore just raised another $50 million from venture capital firm Sequoia Capital. Last month, the chipmaker shared preliminary benchmarks claiming that its intelligence processing unit, or IPU, handles training and inference tasks 10 to 100 times faster than Nvidia’s previous-generation Pascal architecture.

