Nvidia is trying to expand its graphics processors into warehouse, logistics and manufacturing robots with its newest Jetson board, which can handle machine learning tasks like real-time sensor processing, localization, perception and path planning. The board could serve as an early brain for autonomous machines, running inference algorithms and potentially boosting demand for its more expensive training chips in servers.
The new hardware is based on Nvidia's Xavier chipset, which is also packaged inside the company's latest system for autonomous driving applications. The board, Jetson Xavier, delivers more processing power than the previous Jetson TX2, a smaller, lower-power module aimed at inference tasks with tighter power budgets, such as security cameras and crop-monitoring systems.
When Nvidia announced the new board at the Computex conference in Taipei, Taiwan, the company's chief executive Jensen Huang called it a major overhaul of the Jetson platform. The company reportedly spent the last five years on the design, with around 8,000 engineers touching the project. Early access to the module starts in August, with general availability in October.
Nvidia's Xavier system-on-chip contains nine billion transistors and can handle 30 trillion operations per second. The Jetson board supports several operating modes at 10 watts, 15 watts and 30 watts; even the highest mode draws about half the power of a typical incandescent light bulb. The company said the new Jetson is an order of magnitude more energy efficient than its previous credit card-sized model, which operates down to 7.5 watts.
Xavier is built around a single graphics processor based on Nvidia's Volta architecture, which contains custom tensor cores for running neural networks, the basic building blocks of machine learning software. It also contains separate processors for image, video and vision applications, as well as an eight-core, 64-bit ARM CPU that sends instructions to accelerators based on Nvidia's NVDLA inference architecture.
The accelerators are supported by Nvidia's TensorRT software, which converts neural networks trained at full precision in data centers into the low-precision formats suited to running inference on embedded devices. The chipset is paired with 16 gigabytes of memory and 32 gigabytes of storage. The board, which will cost $1,300, also supports up to 16 simultaneous cameras along with a range of other I/O hookups.
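The core idea behind that low-precision conversion can be illustrated in a few lines. The sketch below is not the TensorRT API; it simply shows the general principle of symmetric int8 quantization, in which full-precision weights are mapped onto an 8-bit range via a per-tensor scale factor, trading a small amount of accuracy for far cheaper arithmetic:

```python
import numpy as np

def quantize_int8(weights: np.ndarray):
    """Map float32 weights onto the int8 range with a symmetric
    per-tensor scale, the basic idea behind low-precision inference."""
    scale = np.abs(weights).max() / 127.0
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover approximate float32 values to measure the accuracy loss."""
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.normal(size=1000).astype(np.float32)
q, scale = quantize_int8(w)
error = np.abs(w - dequantize(q, scale)).max()
print(f"max quantization error: {error:.4f}")
```

Production converters go further, calibrating scales from sample activations rather than weights alone, but the compression step itself is this simple.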
The company also introduced a set of software tools for programming the new board, which will be sold through Nvidia's Tegra business unit, a group that grew 33 percent over the last year with revenues of $442 million in the first quarter. The software, Isaac, is meant to help robotics engineers train, verify and deploy algorithms. The toolset includes Isaac Sim, a virtual environment in which robots can be trained and subjected to hardware-in-the-loop tests.
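The train-in-simulation workflow that Isaac Sim supports follows a familiar loop: a simulated environment produces observations, a control policy produces actions, and the episode ends when the robot reaches its goal. The toy sketch below uses hypothetical names (`SimEnv`, `policy`), not the Isaac API, to show the shape of that loop; in a hardware-in-the-loop setup, the same policy code would instead drive the physical controller against simulated sensors:

```python
import random

class SimEnv:
    """Toy one-dimensional stand-in for a simulated robot environment
    (hypothetical; not the Isaac Sim API). The robot starts at a random
    position and must drive itself into a goal region around zero."""
    def __init__(self, seed: int = 0):
        self.rng = random.Random(seed)
        self.pos = 0.0

    def reset(self) -> float:
        self.pos = self.rng.uniform(-1.0, 1.0)
        return self.pos

    def step(self, action: float):
        self.pos += action
        reward = -abs(self.pos)          # closer to the goal is better
        done = abs(self.pos) < 0.05      # inside the goal region
        return self.pos, reward, done

def policy(obs: float) -> float:
    """Trivial proportional controller standing in for a trained network."""
    return -0.5 * obs

env = SimEnv()
obs = env.reset()
for t in range(50):
    obs, reward, done = env.step(policy(obs))
    if done:
        break
print(f"reached goal region in {t + 1} steps")
```

The appeal of simulation is that this loop can run thousands of times faster than real time and crash without consequence, before the verified policy is deployed to the physical robot.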