NVIDIA to Supply Millions of GPUs and CPUs to Meta in New AI Deal

The AI chip giant said the collaboration will produce the first "large-scale Grace-only deployment" of CPUs.
Feb. 18, 2026
3 min read

Meta Platforms is betting big on NVIDIA silicon as it builds the backbone for a new generation of AI data centers. 

NVIDIA said it has signed a "multiyear, multigenerational" deal with Meta to supply its current and next-generation AI chips, including CPUs that compete directly with processors from Intel and AMD.

Under the new deal, Meta will build hyperscale data centers optimized for both training and inference in support of the company’s long-term AI infrastructure roadmap. The partnership will see Meta deploy “millions” of NVIDIA's current Blackwell and future Rubin GPUs, along with large-scale installations of NVIDIA's in-house Grace CPUs and the integration of its Ethernet switches.

“No one deploys AI at Meta’s scale — integrating frontier research with industrial-scale infrastructure to power the world’s largest personalization and recommendation systems for billions of users,” said NVIDIA CEO Jensen Huang in a statement. “Through deep codesign across CPUs, GPUs, networking, and software, we are bringing the full NVIDIA platform to Meta’s researchers and engineers as they build the foundation for the next AI frontier.”

Meta is also working to deploy NVIDIA's Grace CPUs, which have been cutting into the dominance of Intel and AMD. Meta said Grace delivers significant performance-per-watt improvements in its data centers, largely because it is based on the Arm architecture. According to NVIDIA, the new collaboration with Meta will produce the first "large-scale Grace-only deployment" of CPUs.

The new effort will also be supported by codesign and software optimization investments in CPU ecosystem libraries to improve performance per watt with every new generation, said NVIDIA.

The companies are also working to integrate NVIDIA's future Vera CPUs into Meta’s hyperscale data centers, with the potential for large-scale deployment in 2027, further extending Meta’s focus on energy-efficient AI computing. The companies said the effort will also bolster the broader software ecosystem surrounding Arm CPUs, which are increasingly widespread in hyperscale data centers.

Under the partnership, Meta plans to adopt systems based on the GB300 Grace Blackwell superchip, which connects a pair of high-performance Blackwell GPUs and a Grace CPU with NVIDIA's NVLink interconnect. Meta intends to expand its use of NVIDIA's Ethernet networking hardware within its infrastructure to provide AI-scale networking. The goal is to deliver low-latency connectivity while maximizing processor utilization and improving power efficiency.

Beyond the hardware side of the deal, engineering teams from both companies are engaged in deep codesign efforts to optimize and accelerate state-of-the-art AI models across Meta’s core workloads. The work combines NVIDIA's full-stack platform with Meta’s large-scale production workloads to wring out more performance and efficiency for new AI capabilities.

About the Author

James Morra

Senior Editor

James Morra is the senior editor for Electronic Design, covering the semiconductor industry and new technology trends, with a focus on power electronics and power management. He also reports on the business behind electrical engineering, including the electronics supply chain. He joined Electronic Design in 2015 and is based in Chicago, Illinois.
