At the Open Compute Project (OCP) Global Summit in San Jose, Meta unveiled specifications for an open rack designed for artificial-intelligence (AI) systems. Called Open Rack Wide (ORW), the design is based on open standards, and AMD's Helios rack-scale reference system is among the first to adopt it. ORW aims to boost scalability and efficiency in large-scale AI data centers.
Meta's ORW design serves as the framework for AMD's Helios AI rack. Fueled by AMD's next-gen Instinct MI400 Series GPUs, the system demonstrates how open standards support powerful AI performance across data centers.
The Helios AI rack is built to handle demanding AI and high-performance computing (HPC) workloads. Built on AMD's CDNA architecture, each Instinct MI450 Series GPU (part of the MI400 family) provides up to 432 GB of HBM4 memory and 19.6 TB/s of memory bandwidth.
At full scale, a Helios rack with 72 MI450 Series GPUs provides up to 1.4 exaFLOPS of FP8 and 2.9 exaFLOPS of FP4 performance, with 1.4 PB/s of aggregate memory bandwidth — sufficient for trillion-parameter AI models. It also offers up to 260 TB/s of scale-up interconnect bandwidth and 43 TB/s of Ethernet-based scale-out bandwidth, delivering fast communication across GPUs, nodes, and racks.
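The rack-level figures follow from simple multiplication of the per-GPU numbers quoted above. A minimal sketch, assuming the aggregates scale linearly with GPU count (a simplification — delivered bandwidth in practice depends on topology and workload):

```python
# Rack-level aggregates for a 72-GPU Helios rack, derived from the
# per-GPU figures in the article. Linear scaling is an assumption.

GPUS_PER_RACK = 72
HBM4_PER_GPU_GB = 432        # up to 432 GB HBM4 per MI450 Series GPU
MEM_BW_PER_GPU_TBS = 19.6    # up to 19.6 TB/s memory bandwidth per GPU

total_hbm_tb = GPUS_PER_RACK * HBM4_PER_GPU_GB / 1000
total_mem_bw_pbs = GPUS_PER_RACK * MEM_BW_PER_GPU_TBS / 1000

print(f"Total HBM4 per rack: {total_hbm_tb:.1f} TB")            # ~31.1 TB
print(f"Aggregate memory bandwidth: {total_mem_bw_pbs:.2f} PB/s")  # ~1.41 PB/s
```

The ~1.41 PB/s result matches the 1.4 PB/s aggregate bandwidth cited for the rack, confirming that figure is the sum of per-GPU memory bandwidth.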
This is AMD's first rack-scale system designed to meet the interoperability, power, and cooling requirements of AI data centers at scale. By following open standards, Helios enables hyperscalers and enterprises to deploy scalable AI systems without being locked into proprietary designs.
Meta's ORW specification sets a new standard for interoperable AI infrastructure. Helios implements that standard, providing ODMs, OEMs, and enterprises with a rack-scale system that supports trillion-parameter AI models and exascale-class HPC workloads.