Navin Shenoy, Intel executive vice president and general manager of the Data Center Group, displayed a wafer containing Intel Xeon processors during the keynote at Intel’s data-centric product announcement. The latest crop of Xeons was one of many new products and technologies revealed at the event, which also included new networking and storage announcements (Fig. 1).
1. Intel’s new Xeon processors incorporate network and AI acceleration, as well as support from Optane DC (foreground).
The Xeon Cascade Lake family spans from the top-end Xeon Platinum 9200 processor, with 56 cores and 12 DDR4 memory channels, down to the 8-core Xeon D-1600 system-on-chip (SoC) designed for edge computing and compact servers. The top-end part has a whopping 400-W TDP, but the 32-core Xeon Platinum 9221 is only 250 W. The latest chips target the cloud, allowing up to eight sockets to be connected in a glueless configuration. In addition, network-optimized Xeon SKUs target network-functions-virtualization (NFV) infrastructure.
A dual-core D-1600 SoC requires only 27 W (Fig. 2). This family incorporates 10G Ethernet, USB, SATA, and PCI Express (PCIe) support. Incorporating two DDR4 memory controllers with ECC support, the SoC targets embedded systems, base stations, and network devices such as firewalls.
2. The Xeon D-1600 integrates peripherals like 10G Ethernet, SATA, and USB on-chip.
The Xeon systems have been augmented to handle networking and artificial-intelligence (AI) chores with Intel Deep Learning Boost (Intel DL Boost) technology. The AVX-512 Vector Neural Network Instructions (VNNI) are designed to accelerate machine-learning (ML) applications that employ deep neural networks. Intel intends to challenge GPGPUs like those from NVIDIA when it comes to inference jobs that have smaller batch sizes.
Intel DL Boost is optimized to accelerate AI inference workloads like image recognition, object detection, and image segmentation within data-center, enterprise, and intelligent edge-computing environments. Intel’s OpenVINO framework can take advantage of DL Boost in addition to other Intel platforms like Movidius and Nervana, as well as conventional CPUs. The framework supports models developed using popular platforms like TensorFlow, PyTorch, Caffe, MXNet, and PaddlePaddle.
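The speedup VNNI targets comes from performing 8-bit integer multiply-accumulates with 32-bit accumulation in a single instruction. A minimal NumPy sketch of the arithmetic such quantized-inference kernels perform (the function name and scaling scheme here are illustrative, not Intel’s API):

```python
import numpy as np

def int8_dot_product(a_int8, b_int8):
    """Quantized dot product in the style of a VNNI kernel:
    8-bit inputs, widened and accumulated into 32-bit integers
    so that long sums of products cannot overflow."""
    return np.sum(a_int8.astype(np.int32) * b_int8.astype(np.int32))

# Quantize float activations and weights to int8 with simple per-tensor scales.
rng = np.random.default_rng(0)
x = rng.standard_normal(64).astype(np.float32)
w = rng.standard_normal(64).astype(np.float32)
sx, sw = np.abs(x).max() / 127, np.abs(w).max() / 127
xq = np.round(x / sx).astype(np.int8)
wq = np.round(w / sw).astype(np.int8)

# Accumulate in integer arithmetic, then dequantize once at the end.
approx = int8_dot_product(xq, wq) * sx * sw
exact = float(np.dot(x, w))
print(approx, exact)  # quantization error stays small
```

Doing the bulk of the work in 8-bit integers is why inference, unlike training, maps well onto these instructions: the narrow datatypes quadruple the multiply throughput per vector register while a single dequantization step restores the floating-point result.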
The new chips provide additional hardware-based protections against Spectre and Meltdown attacks. They also have a unique Enhanced Privacy ID (EPID) that can be used by Intel’s Secure Device Onboard (SDO) for deployment of IoT devices.
Another key data-centric announcement was Optane DC persistent memory. The new DIMMs sit directly on the memory channel, providing persistent memory, support for which is now part of major operating systems like Linux and Windows Server. Optane DC delivers higher-capacity, non-volatile memory with faster boot-up, retaining its data when a system is powered down.
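Operating systems typically expose persistent-memory DIMMs to applications as files on a direct-access (DAX) filesystem that are mapped into the address space and updated with ordinary loads and stores. A hedged sketch of that programming model, using a regular temporary file so it runs anywhere (a real deployment would map a file on a DAX mount backed by the DIMMs, and the path here is hypothetical):

```python
import mmap
import os
import tempfile

# Stand-in for a file on a DAX filesystem backed by persistent-memory
# DIMMs; an ordinary temp file keeps this sketch runnable anywhere.
path = os.path.join(tempfile.gettempdir(), "pmem_demo.bin")
SIZE = 4096

# Create and size the backing file.
with open(path, "wb") as f:
    f.truncate(SIZE)

# Map it and update the contents in place with store semantics.
with open(path, "r+b") as f:
    mm = mmap.mmap(f.fileno(), SIZE)
    mm[0:5] = b"hello"
    # On real persistent memory, cache-line flush instructions (or msync)
    # make the stores durable; flush() is the portable equivalent here.
    mm.flush()
    mm.close()

# The data survives the mapping -- and, on persistent memory, a power cycle.
with open(path, "rb") as f:
    data = f.read(5)
print(data)

os.remove(path)
```

The point of the model is that no read/write system calls sit between the application and its data; durability costs only a flush, which is what makes the DIMM form factor faster than block-device storage.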
Optane DC is available in a number of form factors, such as SSDs, in addition to the memory-channel-based DIMMs (Fig. 3).
3. Intel’s Optane DC comes in a variety of form factors from SSDs (shown) to DIMMs.
Intel will also be delivering flash memory in the removable, compact NVMe “ruler” form factor (Fig. 4). The storage devices employ 32-layer QLC NAND flash. The rulers are available in both short and full-length EDSFF form factors. Intel flash memory also comes in M.2, U.2, and PCIe add-in-card form factors.
4. The ruler form factor provides removable, dense QLC NAND flash storage for servers.
On another front, the company introduced the Ethernet 800 Series adapter (Fig. 5), which supports speeds up to 100 Gb/s. The series introduces Application Device Queues (ADQ) technology, which is designed to increase application response-time predictability while reducing application latency and improving throughput.
5. The Intel Ethernet 800 Series adapter supports the new Application Device Queues (ADQ) technology.
Finally, Intel included news about its 10-nm Agilex FPGA, which we covered earlier. The Agilex F-Series will be available first, with DDR4 and 56-Gb/s transceiver support and an optional quad-core Arm Cortex-A53 SoC variant (Fig. 6). The I-Series and M-Series will arrive later with DDR5 and Compute Express Link (CXL) support. CXL, which will be found in future Xeon processors, provides a low-latency, peer-to-peer, cache-coherent environment for hardware accelerators like FPGAs.
6. Agilex 10-nm FPGAs will kick off with the F-Series, which has DDR4 and 56-Gb/s transceiver support.
Overall, it’s an impressive array of new products designed to support data-centric computing, including ML applications from the cloud to the edge.