
Sparsity Augments AI Acceleration on Nvidia’s A100 GPU

May 20, 2020
The A100 platform pushes the limits of machine learning from the edge to the enterprise.

The big news at this year’s virtual GPU Technology Conference (GTC) was Nvidia’s release of the A100 (Fig. 1). The A100 with its 54.2 billion transistors can get a bit toasty with a max thermal design power (TDP) of 400 W, and a large array of modules can be connected using the built-in NVLink connections. Each module supports 600 GB/s of NVLink bandwidth in addition to a 64-GB/s PCI Express (PCIe) Gen 4 interface. The PCIe interface supports SR-IOV.

The A100 is based on the company’s new Ampere architecture, which provides a significant performance boost compared to the earlier Volta (V100) and Turing architectures. The module includes 40 GB of HBM2 memory with a memory bandwidth of 1,555 GB/s. There’s also a 40-MB L2 cache, almost seven times the size of the V100’s.

The A100 incorporates seven GPU processing clusters (GPCs), each containing up to eight texture processing clusters (TPCs) and up to 16 streaming multiprocessors (SMs), for a total of 108 SMs (Fig. 2). Ten 512-bit memory controllers support the five HBM2 memory stacks. The SMs support all of the chip’s data types. A new shared-memory-based barrier unit provides asynchronous barriers that work with the new asynchronous copy instructions. The system supports 32 threads/warp and 64 warps/SM.
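
Those warp and SM figures map directly onto the CUDA programming model. As a minimal sketch (my own illustration, not Nvidia sample code), the kernel below leans on the fixed 32-thread warp width, using warp shuffle instructions to sum values within each warp:

// Warp-level sum reduction: each warp of 32 threads reduces its own
// 32 values without touching shared memory.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void warpSum(const float *in, float *out) {
    unsigned lane = threadIdx.x % 32;          // this thread's lane in its warp
    float v = in[threadIdx.x];
    // Butterfly reduction across the 32-thread warp via shuffles.
    for (int offset = 16; offset > 0; offset /= 2)
        v += __shfl_down_sync(0xFFFFFFFF, v, offset);
    if (lane == 0)                             // lane 0 holds the warp's total
        out[threadIdx.x / 32] = v;
}

int main() {
    const int N = 64;                          // two warps' worth of data
    float h_in[N], h_out[2];
    for (int i = 0; i < N; ++i) h_in[i] = 1.0f;

    float *d_in, *d_out;
    cudaMalloc(&d_in, N * sizeof(float));
    cudaMalloc(&d_out, 2 * sizeof(float));
    cudaMemcpy(d_in, h_in, N * sizeof(float), cudaMemcpyHostToDevice);

    warpSum<<<1, N>>>(d_in, d_out);            // one 64-thread block = 2 warps
    cudaMemcpy(h_out, d_out, 2 * sizeof(float), cudaMemcpyDeviceToHost);
    printf("warp sums: %g %g\n", h_out[0], h_out[1]);  // expect 32 and 32

    cudaFree(d_in);
    cudaFree(d_out);
    return 0;
}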

Typically, multiple A100s are tied together via NVLink, enabling very large models to be run across an array of chips. A new feature, Multi-Instance GPU (MIG), allows the opposite: It splits the GPU’s resources into dedicated, protected islands of computation. Up to seven instances can be defined, each running its own CUDA applications. CUDA 11 is Nvidia’s latest programming environment.
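
Because each MIG instance appears to software as its own GPU, applications generally need no MIG-specific code. This brief sketch (mine, with error handling omitted) shows how a program can size up whatever device it has been handed, whether that’s a whole A100 or a single MIG slice:

// Enumerate the visible CUDA devices and report their resources. A MIG
// instance exposes only its own share of SMs and memory here.
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int count = 0;
    cudaGetDeviceCount(&count);
    for (int i = 0; i < count; ++i) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, i);
        printf("device %d: %s, %d SMs, %.1f GB\n", i, prop.name,
               prop.multiProcessorCount,
               prop.totalGlobalMem / (1024.0 * 1024.0 * 1024.0));
    }
    return 0;
}

Administrators carve up the GPU with nvidia-smi’s MIG commands, and a given process is steered to an instance through the CUDA_VISIBLE_DEVICES environment variable.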

Each MIG instance has separate and isolated paths through the entire memory system. Other resources like the on-chip crossbar ports, L2 cache banks, memory controllers, and DRAM address buses are also allocated to these logical islands. This provides predictable throughput and latency. L2 cache allocation and DRAM utilization will not be affected by the operation of other instances. Error and fault isolation are maintained within each instance.

The A100 supports a range of numeric formats, including Tensor Float 32 (TF32) (Fig. 3). TF32 keeps FP32’s 8-bit exponent, and therefore its dynamic range, but trims the mantissa to the 10 bits used by FP16, which lets the tensor cores implement matrix calculations more efficiently. The system also supports FP16 and BFLOAT16 along with a host of integer formats. All of these are available and optimized for machine-learning (ML) model acceleration.
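
The format change is easy to visualize in code. The host-side snippet below is a rough emulation of mine, not how the tensor cores actually operate (and it ignores NaN/infinity handling): it rounds an FP32 value to TF32 precision by discarding the 13 low mantissa bits that TF32 gives up.

// Emulate rounding an FP32 value to TF32 precision: same 8-bit exponent,
// but only 10 of FP32's 23 mantissa bits survive.
#include <cstdio>
#include <cstdint>
#include <cstring>

float toTF32(float x) {
    uint32_t bits;
    std::memcpy(&bits, &x, sizeof bits);   // view the float's raw bits
    bits += 0x1000;                        // round: add half of the dropped ULP
    bits &= 0xFFFFE000u;                   // clear the 13 low mantissa bits
    float y;
    std::memcpy(&y, &bits, sizeof y);
    return y;
}

int main() {
    float x = 1.2345678f;
    printf("FP32: %.9g  TF32: %.9g\n", x, toTF32(x));
    return 0;
}

Because the exponent field is untouched, the value keeps FP32’s dynamic range while carrying roughly FP16-level precision.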

Peak performance for 64-bit floating point is 9.7 teraflops (TFLOPS). At the other end of the spectrum is the INT4 performance of 1,248 TOPS. These numbers are impressive, but Nvidia’s platform also implements sparsity optimization.

ML and artificial-intelligence (AI) applications are a prime target for the A100. Neural-network models can typically be pruned so that many of their weights are zero, turning dense matrix math into sparse matrix operations. The A100’s sparsity support lets these operations be performed more quickly: When a matrix meets the fine-grained structured-sparsity constraint, the tensor cores skip the zeros and double their throughput. For Tensor Float 32, the standard 156 TFLOPS doubles to 312 TFLOPS with sparsity support. FP16 and BFLOAT16 benefit in similar fashion, as do the integer formats.
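
The constraint behind that doubling is what Nvidia calls fine-grained structured sparsity: in every group of four values in the compressed matrix, two must be zero. This sketch (my own illustration of the pattern, not Nvidia’s pruning tooling) shows the sort of offline pruning that produces a compliant weight layout:

// Prune a weight array to the 2:4 structured-sparsity pattern by zeroing
// the two smallest-magnitude weights in each group of four.
#include <cstdio>
#include <cmath>

void pruneTwoOfFour(float *w, int n) {     // n must be a multiple of 4
    for (int g = 0; g < n; g += 4) {
        for (int drop = 0; drop < 2; ++drop) {
            int smallest = -1;
            for (int i = g; i < g + 4; ++i)
                if (w[i] != 0.0f &&
                    (smallest < 0 || fabsf(w[i]) < fabsf(w[smallest])))
                    smallest = i;
            if (smallest >= 0)
                w[smallest] = 0.0f;        // drop this weight
        }
    }
}

int main() {
    float w[8] = {0.9f, -0.1f, 0.4f, 0.05f, -0.7f, 0.2f, 0.01f, 0.6f};
    pruneTwoOfFour(w, 8);
    for (float v : w) printf("%g ", v);    // two zeros per group of four
    printf("\n");
    return 0;
}

Nvidia’s suggested workflow is to prune a trained model to this pattern and then fine-tune it, so the zeroed weights cost little or no accuracy.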

The A100 is being used in a number of form factors, such as the DGX A100 (Fig. 4), which delivers 5 petaFLOPS of AI performance in a single system. The DGX A100 consists of eight A100 modules, six NVSwitches with 4.8 TB/s of bandwidth, nine Mellanox ConnectX-6 200-Gb/s interface cards, dual 64-core AMD CPUs, and 15 TB of Gen 4 NVMe SSDs with a peak bandwidth of 25 GB/s. It’s based on the HGX A100 motherboard.

The HGX A100 motherboard hosts eight A100 modules connected by a new, faster NVSwitch matrix (Fig. 5). The platform also has PCI Express switches that support 200-Gb Ethernet NICs like the Mellanox ConnectX-6 as well as NVMe storage. A pair of high-performance CPUs can be connected to the PCIe switches. A four-A100 version is also available. Both support GPUDirect Storage, which lets the GPUs work with the NVMe storage, bypassing the CPU.
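
For a sense of what GPUDirect Storage looks like to a developer, here’s a rough sketch built on the cuFile API. I’m writing the calls from memory, so treat the exact signatures, the O_DIRECT requirement, and the /data/model.bin path as assumptions to verify against Nvidia’s documentation:

// Read a file straight from NVMe into GPU memory via GPUDirect Storage,
// skipping the usual bounce buffer in host RAM. Error handling omitted.
#define _GNU_SOURCE                        // for O_DIRECT on Linux
#include <fcntl.h>
#include <unistd.h>
#include <cstdio>
#include <cuda_runtime.h>
#include <cufile.h>

int main() {
    cuFileDriverOpen();                    // bring up the GDS driver

    int fd = open("/data/model.bin", O_RDONLY | O_DIRECT);  // hypothetical file
    CUfileDescr_t descr = {};
    descr.handle.fd = fd;
    descr.type = CU_FILE_HANDLE_TYPE_OPAQUE_FD;
    CUfileHandle_t fh;
    cuFileHandleRegister(&fh, &descr);

    const size_t size = 1 << 20;           // read 1 MB
    void *devPtr;
    cudaMalloc(&devPtr, size);
    // DMA directly from storage into GPU memory; the CPU never copies data.
    ssize_t n = cuFileRead(fh, devPtr, size, 0 /*file offset*/, 0 /*dev offset*/);
    printf("read %zd bytes into GPU memory\n", n);

    cuFileHandleDeregister(fh);
    close(fd);
    cudaFree(devPtr);
    cuFileDriverClose();
    return 0;
}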

Nvidia continues to deliver orders-of-magnitude performance gains every couple of years. That alone would be impressive, but it’s really the software infrastructure the company has built around its hardware that makes it work. This starts with CUDA and includes everything from the Isaac SDK for robotics, to cuDNN, to its TensorFlow support. Even the new MIG support is integrated with Kubernetes container management.

About the Author

William G. Wong | Senior Content Director - Electronic Design and Microwaves & RF

I am Editor of Electronic Design focusing on embedded, software, and systems. As Senior Content Director, I also manage Microwaves & RF and I work with a great team of editors to provide engineers, programmers, developers and technical managers with interesting and useful articles and videos on a regular basis. Check out our free newsletters to see the latest content.

You can send press releases for new products for possible coverage on the website. I am also interested in receiving contributed articles for publishing on our website. Use our template and send to me along with a signed release form. 

Check out my blog, AltEmbedded, on Electronic Design, as well as my latest articles on this site, which are listed below.

I earned a Bachelor of Electrical Engineering at the Georgia Institute of Technology and a Master’s in Computer Science from Rutgers University. I still do a bit of programming using everything from C and C++ to Rust and Ada/SPARK, plus PHP for Drupal websites, and I have posted a few Drupal modules.

I still get hands-on with software and electronic hardware. Some of this can be found in our Kit Close-Up video series. You can also see me in many of our TechXchange Talk videos. I am interested in a range of projects, from robotics to artificial intelligence.
