
NVIDIA Unveils New Technologies to Accelerate Everything

June 14, 2024
From rendering humans to simplified robot development, new generative AI technologies look to revolutionize industries from gaming to data centers.

What you’ll learn:

  • Insight into NVIDIA’s latest AI advances.
  • The company’s hardware and software to power that venture.
  • Applications for these new and updated technologies.


In a keynote address delivered at Computex 2024, Jensen Huang, CEO of NVIDIA, detailed the tech company’s roadmap for generative AI technologies, accelerated platforms, AI-driven PCs and consumer devices, robotics, and the deployment of AI-powered factories.

Generative AI allows users to quickly generate content based on different inputs to produce everything from text and 3D models to images and animations. Neural networks are used to identify patterns and structures within existing data to create new content. NVIDIA has capitalized on generative AI to create its next-gen platforms.

“Generative AI is reshaping industries and opening new opportunities for innovation and growth,” stated Huang in his keynote address. “Today, we’re at the cusp of a major shift in computing; the intersection of AI and accelerated computing is set to redefine the future. The future of computing is accelerated. With our innovations in AI and accelerated computing, we’re pushing the boundaries of what’s possible and driving the next wave of technological advancement.”

Blackwell Platform for AI-Powered Factories, Data Centers

One of the highlights of his keynote address centered on the company’s Blackwell platform (Fig. 1), which allows organizations to build AI-powered factories and data centers to drive the next wave of generative AI breakthroughs.

The platform packs eight B200 Tensor Core GPUs that can be combined with the company’s Grace CPUs to create a new generation of DGX SuperPOD computers. Such computers would be capable of delivering up to 11.5 exaFLOPS (quintillion floating-point operations per second) of AI computing using the new Blackwell architecture. What’s more, the Blackwell cluster will consume 15 kW in its current configuration and can use massive heatsinks to cool the chips or take advantage of water cooling for lower temperatures.

NVIDIA also provided some new details on Blackwell’s successor, code-named Rubin, which is expected to be released in 2026. The platform is rumored to include the company’s Arm-based Vera CPU and will take advantage of HBM4, the sixth generation of high-bandwidth memory, with 12- and 16-layer DRAM stacks. The Rubin R100 GPUs will leverage a 4X reticle design and TSMC’s CoWoS-L packaging technology on the N3 process node. The chips are slated to hit production in late 2025.

A few key features of TSMC’s CoWoS-L include:

  • Local silicon interconnect (LSI) chips for high-routing-density die-to-die interconnects through multiple layers of submicron Cu lines. The chips will also support a variety of connection architectures (SoC to SoC, SoC to chiplet, SoC to HBM, etc.).
  • A molding-based interposer with wide-pitch RDL layers on both the front and back sides, plus through-interposer vias (TIVs) for signal and power delivery with low loss of high-frequency signals in high-speed transmissions.
  • The ability to integrate additional elements, such as a standalone integrated passive device (IPD), on the bottom of the SoC die to support its signal communication with better power/signal integrity.

NIM Microservices Simplify AI Deployment

Huang also introduced NVIDIA’s Inference Microservices (NIM), which allows businesses to build and deploy custom AI applications on their own platforms while retaining full control of their intellectual property. Designed around the company’s CUDA platform, the catalog of cloud-based microservices helps shorten time-to-market and simplify the deployment of generative AI models anywhere, including the cloud, data centers, and GPU-accelerated workstations.

Developers can deploy NIM (Fig. 2) across myriad infrastructures, including workstations, the cloud, and data centers. Users can also test the latest generative AI models via NVIDIA-managed cloud APIs, or self-host the models by downloading NIM and rapidly deploying them with Kubernetes on major cloud providers or on-premises for production, which reduces development time, design complexity, and costs.
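Because NIM microservices expose an industry-standard, OpenAI-compatible REST interface, a self-hosted deployment can be queried with an ordinary HTTP client. The sketch below only constructs the request payload; the endpoint URL and model name are illustrative assumptions, not official defaults.

```python
import json

def build_nim_chat_request(model: str, prompt: str,
                           max_tokens: int = 256,
                           temperature: float = 0.2) -> dict:
    """Build an OpenAI-style chat-completion payload for a NIM endpoint."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
        "temperature": temperature,
    }

# A self-hosted NIM container typically listens on a local port; this URL
# and model name are placeholders for illustration only.
url = "http://localhost:8000/v1/chat/completions"
payload = build_nim_chat_request("meta/llama3-8b-instruct",
                                 "Summarize NVIDIA's Computex 2024 keynote.")
print(json.dumps(payload, indent=2))
# To send the request: requests.post(url, json=payload, timeout=30)
```

The same payload works whether the model runs on a workstation, in a data center, or behind NVIDIA’s managed cloud APIs, which is the portability NIM is aiming for.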

ACE Microservice Targets Lifelike Digital Human Creation

Also highlighted in the keynote was the Avatar Cloud Engine (ACE), a microservice designed to simplify creating, animating, and operating lifelike digital humans across customer service, telehealth, gaming, and entertainment. Imagine a non-playable character (NPC) in a game that can respond to the player’s voice in an almost lifelike fashion.

ACE leverages NVIDIA’s Omniverse Audio2Face and Riva ASR for AI-powered animation and speech, generating facial expressions on NPCs that match voice and speech tracks. The microservice is already slated for use in S.T.A.L.K.E.R. 2: Heart of Chornobyl from GSC Game World and the indie sci-fi title Fort Solis from Fallen Leaf.

The ACE platform (Fig. 3) will support deployment across 100 million RTX AI PCs with a simplified integration process that makes use of the company’s AI Inference Manager SDK. Firms including AWW Inc., Inventec, Perfect World Games, and ServiceNow are already utilizing the microservice to create next-gen digital avatars. NVIDIA’s art team also used the platform to showcase its abilities at Computex 2024, creating a digital version of Huang generated from video and text inputs.

MGX Speeds AI Integration

An update to the MGX platform is designed to facilitate AI integration in next-gen manufacturing plants and data centers. It uses a modular reference architecture to quickly and cost-effectively build more than 100 server variations for a wide range of applications.

Companies begin with a basic system architecture and then can select the ideal CPU, GPU, and DPU. Once the selection is made, they’re able to optimize the design for the specific workload requirements. According to NVIDIA, a single MGX platform can handle multiple tasks, such as AI training and 5G, and even accommodate future hardware generations without issue.

MGX supports different modular chassis formats, including 1U, 2U, and 4U (air- or liquid-cooled); NVIDIA’s complete GPU portfolio, including the latest H100, L40, and L4 GPUs; CPUs; and networking hardware such as the company’s BlueField-3 DPU and ConnectX-7 network adapters. MGX is also supported by NVIDIA’s software stack, including AI Enterprise, which features more than 100 frameworks, pre-trained models, and development tools.
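The pick-and-optimize flow described above can be sketched as a simple configuration model. The component menus and validation rule below are illustrative assumptions for the sake of the example, not NVIDIA’s actual MGX specification.

```python
from dataclasses import dataclass

# Illustrative component menus only -- not an official MGX parts list.
CHASSIS = {"1U-air", "2U-air", "4U-liquid"}
GPUS = {"H100", "L40", "L4"}
DPUS = {"BlueField-3", "ConnectX-7"}

@dataclass(frozen=True)
class MGXConfig:
    """One server variation built from the modular reference architecture."""
    chassis: str
    cpu: str
    gpu: str
    dpu: str

    def validate(self) -> bool:
        """Check the selection against the (hypothetical) component menus."""
        return (self.chassis in CHASSIS
                and self.gpu in GPUS
                and self.dpu in DPUS)

# Start from a base architecture, then pick parts for the target workload.
training_node = MGXConfig(chassis="4U-liquid", cpu="Grace",
                          gpu="H100", dpu="BlueField-3")
print(training_node, training_node.validate())
```

Supporting a future hardware generation then amounts to extending the menus rather than redesigning the system, which mirrors the modularity pitch behind MGX.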

Isaac 3.0 Updates Further Simplify ROS Application Dev

During his keynote, Huang also touched on the latest update to NVIDIA’s Isaac AI robot development platform, which is designed to simplify the building and testing of AI-based robotic applications for ROS developers. The platform offers accelerated libraries, application frameworks, and AI models to help drive the development of AI robots, such as autonomous mobile robots (AMRs), arms and manipulators, and humanoids.

Isaac 3.0 includes a number of improvements and new packages to help streamline the robot development process, spanning AI perception, image and LiDAR processing, navigation, and more. The cuMotion package for MoveIt 2 provides hardware-accelerated motion planning with collision avoidance around obstacles, while the new FoundationPose deep neural network (DNN) enables pose estimation and tracking of previously unseen objects from a 3D model. The release also offers multi-camera visual odometry and nvblox for robust visual tracking, along with a host of other improvements to make robot development more efficient and streamlined.
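At a high level, such a perception-to-planning loop pairs a pose estimator with a collision-aware motion planner. The stubs below are plain-Python stand-ins written to illustrate that data flow; they are not the Isaac ROS APIs, and the geometry is deliberately trivial.

```python
from dataclasses import dataclass

@dataclass
class Pose:
    """Simplified object pose: position (x, y, z) plus yaw, arbitrary units."""
    x: float
    y: float
    z: float
    yaw: float

def estimate_pose(detected_center: tuple) -> Pose:
    """Stand-in for a FoundationPose-style estimator. In the real pipeline
    this would infer the pose of an unseen object from camera frames and
    its 3D model; here we just wrap a detected 3D point."""
    x, y, z = detected_center
    return Pose(x, y, z, yaw=0.0)

def plan_motion(target: Pose, obstacles: list, clearance: float = 0.1) -> list:
    """Stand-in for a cuMotion-style planner: returns a straight-line path
    to the target, rejecting targets that violate obstacle clearance."""
    for obs in obstacles:
        dist = ((target.x - obs.x) ** 2 + (target.y - obs.y) ** 2
                + (target.z - obs.z) ** 2) ** 0.5
        if dist < clearance:
            raise ValueError("target violates obstacle clearance")
    # Four evenly spaced waypoints from the origin to the target.
    return [Pose(target.x * t, target.y * t, target.z * t, 0.0)
            for t in (0.25, 0.5, 0.75, 1.0)]

grasp_target = estimate_pose((0.4, 0.2, 0.1))
path = plan_motion(grasp_target, obstacles=[Pose(1.0, 1.0, 0.5, 0.0)])
print(len(path), path[-1])
```

In the actual Isaac ROS stack, the estimator and planner would run as GPU-accelerated ROS 2 nodes exchanging messages, but the perceive-then-plan structure is the same.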

The Isaac platform has already gained widespread adoption among robotics companies, including BYD Electronics, Intrinsic (an Alphabet company), Siemens, and Teradyne Robotics. These companies are using the platform’s AI models and simulation tools to develop next-gen AI-powered autonomous machines. The ultimate goal is to boost production efficiency and safety in factories, warehouses, and distribution centers by assisting with repetitive or precision tasks.

Pushing AI's Potential

The Computex 2024 keynote address highlighted NVIDIA’s efforts to bring AI to the forefront of new technologies and applications, with Huang emphasizing generative AI’s transformative potential across a wide range of industries. With platforms like Blackwell for AI-powered factories, NIM for custom AI applications, ACE for lifelike digital humans, Isaac 3.0 for robot development, and MGX for modular AI integration, NVIDIA is pushing the boundaries of AI’s potential.

From transforming industrial automation to enhancing content creation, the company continues to innovate and redefine the future of computing and opens up new horizons for technology advances.

Read more articles in the TechXchange: Generating AI.

About the Author

Cabe Atwell | Technology Editor, Electronic Design

Cabe is a Technology Editor for Electronic Design. 

Engineer, Machinist, Maker, Writer. A graduate Electrical Engineer actively plying his expertise in the industry and at his company, Gunhead. When not designing/building, he creates a steady torrent of projects and content in the media world. Many of his projects and articles are online at element14 & SolidSmack, industry-focused work at EETimes & EDN, and offbeat articles at Make Magazine. Currently, you can find him hosting webinars and contributing to Electronic Design and Machine Design.

Cabe is an electrical engineer, design consultant, and author with 25 years’ experience. His most recent book is “Essential 555 IC: Design, Configure, and Create Clever Circuits.”

Cabe writes the Engineering on Friday blog on Electronic Design. 
