Meta Unveils Helios Open Rack for AI Infrastructure

Open standards are setting the bar for AI data center infrastructure.
Oct. 21, 2025
2 min read

What you'll learn:

  • Insight into Meta’s Open Rack Wide design and why it sets a new standard for open AI data center infrastructure.
  • How AMD’s Helios rack takes advantage of the MI400 Series GPUs for increased AI performance.
  • How open standards are key for scalability and efficiency.

During the Open Compute Project (OCP) Global Summit in San Jose, Meta unveiled specifications for an open rack designed for artificial-intelligence (AI) systems. Called Open Rack Wide (ORW), the design is based on open standards and features Helios, AMD's advanced rack-scale reference system. ORW aims to boost scalability and efficiency in large-scale AI data centers.

Meta's ORW design serves as the framework for AMD's Helios AI rack. Fueled by AMD's next-gen Instinct MI400 Series GPUs, the system demonstrates how open standards support powerful AI performance across data centers.

The Helios AI rack is built to handle demanding AI and high-performance computing (HPC) workloads. Powered by AMD's CDNA architecture, each MI450 Series GPU provides up to 432 GB of HBM4 memory and 19.6 TB/s of memory bandwidth.

At full scale, a Helios rack with 72 MI450 Series GPUs provides up to 1.4 exaFLOPS FP8 and 2.9 exaFLOPS FP4 performance, with 1.4-PB/s aggregate bandwidth — sufficient for trillion-parameter AI models. It also offers up to 260 TB/s of scale-up interconnect bandwidth and 43 TB/s of Ethernet-based scale-out bandwidth, delivering fast communication across GPUs, nodes, and racks.
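As a back-of-the-envelope check, the rack-level figures follow from multiplying the per-GPU specifications by the 72-GPU count (a simplification that assumes linear scaling and ignores interconnect overhead):

```python
# Rack totals derived from the per-GPU figures above.
# Assumes linear scaling across all 72 GPUs (a simplification).

GPUS_PER_RACK = 72
HBM4_PER_GPU_GB = 432       # GB of HBM4 per MI450 Series GPU
MEM_BW_PER_GPU_TBS = 19.6   # TB/s memory bandwidth per GPU

total_hbm4_tb = GPUS_PER_RACK * HBM4_PER_GPU_GB / 1000       # ~31.1 TB
aggregate_bw_pbs = GPUS_PER_RACK * MEM_BW_PER_GPU_TBS / 1000 # ~1.41 PB/s

print(f"Total HBM4 capacity: ~{total_hbm4_tb:.1f} TB")
print(f"Aggregate memory bandwidth: ~{aggregate_bw_pbs:.1f} PB/s")
```

The ~1.4 PB/s result matches the aggregate bandwidth cited above, confirming the rack figure is the simple sum of per-GPU memory bandwidths.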

This is AMD's first rack-scale system able to meet the interoperability, power, and cooling requirements of data centers scaling up for AI. By following open standards, Helios enables hyperscalers and enterprises to deploy scalable AI systems without locking into proprietary designs.

Meta's ORW specification sets a new standard for interoperable AI infrastructure. Helios implements that standard, providing ODMs, OEMs, and enterprises with a rack-scale system that supports trillion-parameter AI models and exascale-class HPC workloads.


About the Author

Cabe Atwell

Technology Editor, Electronic Design

Engineer, Machinist, Maker, Writer. A graduate Electrical Engineer actively plying his expertise in the industry and at his company, Gunhead. When not designing/building, he creates a steady torrent of projects and content in the media world. Many of his projects and articles are online at element14 & SolidSmack, industry-focused work at EETimes & EDN, and offbeat articles at Make Magazine. Currently, you can find him hosting webinars and contributing to Electronic Design and Machine Design.

Cabe is an electrical engineer, design consultant, and author with 25 years’ experience. His most recent book is “Essential 555 IC: Design, Configure, and Create Clever Circuits.”

Cabe writes the Engineering on Friday blog on Electronic Design. 
