
Cerebras Systems Raises $250 Million in Funding for Colossal AI Chips

Nov. 15, 2021
The funding brings its total amount raised to $720 million to date, giving the company a valuation of about $4 billion. That is up from its $2.4 billion valuation following a funding round in November 2019.

Cerebras Systems, a startup that has developed the world's largest chip for artificial intelligence, raised an additional $250 million in venture capital as it aims to ramp up hiring and attract new customers. 

The funds, which bring its total amount raised to $720 million to date, give it more financial firepower to take on Nvidia's and Intel's early lead in AI chips for training and inference in the data center. Cerebras said the Series F round values it at around $4 billion, up from its $2.4 billion valuation after its Series E in 2019.

The Silicon Valley startup is on its second generation of chips, the WSE-2, which spans an entire silicon wafer and integrates 2.6 trillion transistors on TSMC's 7-nm node. Depending on the workload, Cerebras said the WSE-2 can run up to 1,000 times faster than competing chips from Nvidia and Intel at a fraction of the power. It also uses data-center space more efficiently, Cerebras said.

Cerebras is not selling the WSE-2 directly to server manufacturers, because cooling and connecting such a colossal slab of silicon is a challenge. Instead, the startup developed a server platform, called the CS-2, with a WSE-2 inside.

The investment will help fund Cerebras's global expansion and deploy its CS-2 system to new customers, CEO and founder Andrew Feldman said. Following the funding round, Cerebras said it plans to increase its headcount from 400 to 600 by the end of next year, with an emphasis on hiring engineers to fuel its hardware and software development. Part of the funds will also cover production costs for its products.

It recently expanded its footprint outside the US with new offices in Tokyo, Japan, and Toronto, Canada.

Cerebras is one of many startups with ambitions in AI silicon, alongside Graphcore, SambaNova, and Groq. But it stands out for the unique architecture it designed for the WSE-2, short for Wafer Scale Engine 2.

Traditionally, tens or hundreds of chips are patterned onto a silicon wafer, which is then diced into separate processors. Cerebras takes a very different approach: It keeps everything together on the same 300-mm silicon wafer instead of slicing it into smaller chips. Any incomplete dies clipped by the curved edges of the wafer are trimmed away, leaving a die area of 46,225 mm².
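For scale, a quick back-of-the-envelope calculation, using only the figures above and assuming the square die Cerebras has described, shows how much of the wafer that 46,225-mm² area consumes (a sketch, not an official spec):

```python
import math

# Figures from the article; the square die shape is an assumption.
WAFER_DIAMETER_MM = 300    # standard 300-mm silicon wafer
DIE_AREA_MM2 = 46_225      # die area quoted by Cerebras

die_side_mm = math.sqrt(DIE_AREA_MM2)                    # side of a square die
wafer_area_mm2 = math.pi * (WAFER_DIAMETER_MM / 2) ** 2  # full wafer area

print(f"Die dimensions: {die_side_mm:.0f} mm x {die_side_mm:.0f} mm")
print(f"Share of wafer area used: {DIE_AREA_MM2 / wafer_area_mm2:.0%}")
# -> Die dimensions: 215 mm x 215 mm
# -> Share of wafer area used: 65%
```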

Cerebras said it worked closely with the chip’s manufacturer, TSMC, to resolve many of the manufacturing challenges with its unique architecture, such as connectivity, cooling, power delivery, packaging, and yields.

The trillions of transistors inside the WSE-2 are arranged into 850,000 cores, up from 400,000 in its previous generation, the WSE-1, and more than 100 times the cores in the most advanced graphics chip from Nvidia. According to Cerebras, the chip's cores are specifically designed to run the operations at the heart of neural networks, the fundamental building block of machine learning.

The WSE-2 also incorporates 40 GB of high-speed SRAM spread evenly across the surface of the wafer. That is roughly 1,000 times the on-chip memory of Nvidia's flagship Ampere GPU, which has 40 MB.
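Those ratios are easy to sanity-check. The short sketch below uses only numbers quoted in this article (core counts for both WSE generations, plus the two on-chip memory figures):

```python
# Generation-over-generation and competitive ratios, from figures in the article.
wse2_cores, wse1_cores = 850_000, 400_000
wse2_sram_gb = 40      # WSE-2 on-wafer SRAM
gpu_sram_mb = 40       # on-chip memory of Nvidia's flagship Ampere GPU

print(f"Core-count growth vs. WSE-1: {wse2_cores / wse1_cores:.1f}x")
print(f"On-chip memory vs. Ampere GPU: {wse2_sram_gb * 1000 / gpu_sram_mb:.0f}x")
# -> Core-count growth vs. WSE-1: 2.1x
# -> On-chip memory vs. Ampere GPU: 1000x
```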

The Cerebras chips are used by supercomputing sites such as Argonne National Laboratory and Lawrence Livermore National Laboratory, which use them to try to understand the origins of the universe and develop better battery chemistries. Other customers include GlaxoSmithKline and AstraZeneca, which use the WSE chips to make faster predictions about potential drugs. Chip-making gear giant Tokyo Electron is another buyer.

To match the performance of the Cerebras chip, customers would have to use up to hundreds of GPUs, which must share data and coordinate with one another over wires and cables, introducing delays into computations.

But with the WSE-2, data is not forced to travel between different servers, only from one group of cores to another on the wafer. The shorter distances reduce delays that can degrade the chip's performance and power efficiency. The cores are bundled together with a proprietary interconnect scheme that moves data between them at 220 Pb/s. That allows the WSE-2 to execute AI workloads faster and more efficiently.

Cerebras partnered with TSMC to roll out a unique interconnect technology so that the cores clustered in each pseudo-chip on the wafer can communicate quickly and power-efficiently.

The hundreds of thousands of AI-focused cores are fed by a dozen 100-Gb/s Ethernet ports, for a total of 1.2 Tb/s of I/O bandwidth.
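The aggregate figure follows directly from the port configuration; a one-line check:

```python
# Aggregate I/O from the CS-2's Ethernet ports, per the figures above.
ports, port_speed_gbps = 12, 100              # twelve 100-Gb/s Ethernet ports
print(f"Total I/O: {ports * port_speed_gbps / 1000} Tb/s")   # -> Total I/O: 1.2 Tb/s
```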

The WSE-2 also stands out for its 40 GB of SRAM. Large machine-learning models are often stored in separate pools of memory because the chip running the workload cannot accommodate all of the data. The data must travel off the chip to separate memory banks to be processed, which hurts performance. But the WSE-2 has enough memory to keep the data being processed by a machine-learning model on a single chip.

With all the communications and memory on the same slab of silicon, data can travel unimpeded. The WSE-2 feeds data from memory to the cores with a memory bandwidth of around 20 PB/s, thousands of times faster than Nvidia's GPUs and Intel's CPUs, and more than double that of the WSE-1. The large amount of memory on the WSE-2 keeps data close to the AI cores, so memory bandwidth is no longer a bottleneck.
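To put the "thousands of times" claim in perspective, the sketch below compares the quoted 20 PB/s against a contemporary data-center GPU. The ~2-TB/s figure is an assumption on our part, based on the publicly listed HBM bandwidth of Nvidia's A100 (roughly 1.6 to 2 TB/s depending on the variant), not a number from Cerebras:

```python
# Rough memory-bandwidth comparison; the GPU figure is an assumption (see above).
wse2_mem_bw_pbs = 20     # on-wafer memory bandwidth quoted by Cerebras, in PB/s
gpu_hbm_bw_tbs = 2       # assumed HBM bandwidth of a flagship GPU, in TB/s

ratio = wse2_mem_bw_pbs * 1000 / gpu_hbm_bw_tbs
print(f"WSE-2 memory-bandwidth advantage: ~{ratio:,.0f}x")   # -> ~10,000x
```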

Cerebras will use the funds to further invest in its silicon and hardware as well as the software libraries and other tools that make them useful. It is also investing in innovations that bolster performance at the system level, such as its MemoryX technology for linking larger pools of memory to the WSE-2 chip and SwarmX, a high-performance interconnect fabric that lets it weave up to 192 of its chips together to train huge neural networks.
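Using only the figures in this article, a maximal SwarmX cluster adds up quickly (a back-of-the-envelope tally, not a configuration Cerebras has detailed):

```python
# Headline totals for a 192-system SwarmX cluster, from figures in the article.
cs2_systems = 192            # maximum systems SwarmX can weave together
cores_per_wse2 = 850_000     # AI cores per WSE-2
sram_per_wse2_gb = 40        # on-wafer SRAM per WSE-2, in GB

print(f"Total AI cores: {cs2_systems * cores_per_wse2:,}")
print(f"Total on-wafer SRAM: {cs2_systems * sram_per_wse2_gb / 1000:.2f} TB")
# -> Total AI cores: 163,200,000
# -> Total on-wafer SRAM: 7.68 TB
```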

Part of the company’s strategy over the long term is to expand out of the supercomputer business and land slots in colossal cloud data centers run by the likes of Amazon Web Services (AWS), Google, and Microsoft.

In September, Cerebras said cloud provider Cirrascale will offer the first cloud service powered by its CS-2 system.

The round announced last week was led by Alpha Wave Ventures and the Abu Dhabi Growth Fund. 
