
Nvidia: Counting on AI to Win a Place in Self-Driving Cars

The company’s artificial intelligence technology is inching self-driving automobiles closer to practicality.

In putting together an Olympic-quality relay team to bring autonomous cars to market, Nvidia has positioned itself well to compete for the gold medal. Last week the company announced that Toyota Motor Corp. would use Nvidia artificial intelligence (AI) technology to develop self-driving vehicle systems planned for the next few years. This follows previously announced tie-ins with German carmakers Audi and Mercedes.

In January, Audi said it would use Nvidia’s Drive PX AI car computing platform to help it put autonomous vehicles on the road starting in 2020. Meanwhile, Mercedes has announced that it is co-developing with Nvidia an autonomous vehicle product to come to market within the next 12 months.

At the same time Bosch and Nvidia are working on an autopilot platform for mass-market vehicles. To build the systems, the two companies will use Nvidia’s Drive PX platform based on its forthcoming Xavier SoC (more on this shortly).

Drive PX fuses incoming data from a car’s cameras, radar, and ultrasonic sensors using AI to help the car understand and react to its environment.
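The fusion idea can be sketched in a few lines. The following is an illustrative toy only, not Nvidia's actual Drive PX pipeline: it combines distance estimates from two hypothetical sensors by inverse-variance weighting, so the less noisy sensor dominates the fused result.

```python
# Illustrative sketch of sensor fusion: combine (value, variance) pairs
# into one estimate via inverse-variance weighting. The sensors and
# noise figures are hypothetical, not Drive PX internals.

def fuse_estimates(readings):
    """Fuse a list of (distance_m, variance) readings.

    A lower variance means a more trusted sensor, so it gets a
    proportionally larger weight in the combined estimate.
    """
    weights = [1.0 / var for _, var in readings]
    total = sum(weights)
    value = sum(w * v for w, (v, _) in zip(weights, readings)) / total
    variance = 1.0 / total  # fused estimate is tighter than either input
    return value, variance

# Radar reports 10.2 m (noisier); ultrasonic reports 9.8 m (more precise).
fused, var = fuse_estimates([(10.2, 0.5), (9.8, 0.1)])
print(round(fused, 2))  # -> 9.87, pulled toward the lower-variance sensor
```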

The latest Nvidia computer platform designed to crunch AI algorithms is designated Drive PX 2. All Tesla Motors vehicles manufactured from mid-October 2016 include a Drive PX 2, which will be used for neural net processing to enable “Enhanced Autopilot” and (eventually) full self-driving functionality.

Drive PX 2 scales from a palm-sized, single-processor configuration operating at 10W for Auto Cruise capabilities to an AI supercomputer capable of autonomous driving (“AutoChauffeur” mode, in Nvidia parlance). The latter is a multi-chip configuration with two Tegra Parker processors and two discrete Pascal GPUs delivering 24 trillion deep learning operations per second.

To better understand how a self-driving car learns to become an accomplished driver, a bit of background is in order. If artificial intelligence can be looked at as human intelligence mimicked by machines, then machine learning can be thought of as developing the algorithms needed to analyze data, learn from it, and then reach a conclusion or make a prediction about something.

To achieve the level of machine learning needed for autonomous cars, a technique called deep learning is implemented, usually by running massive amounts of data through a system in order to train it.
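The "run massive amounts of data through a system to train it" loop can be shown at toy scale. This sketch is purely illustrative: real deep-learning systems fit millions of parameters on GPUs, while this fits a single weight by gradient descent on a made-up dataset.

```python
# Toy version of the training loop described above: repeatedly show the
# model examples, measure its error, and nudge a parameter to reduce it.

def train(samples, lr=0.1, epochs=100):
    """Learn w such that prediction = w * x, minimizing squared error."""
    w = 0.0
    for _ in range(epochs):
        for x, target in samples:
            pred = w * x
            grad = 2 * (pred - target) * x  # derivative of (pred - target)^2
            w -= lr * grad                  # gradient-descent update
    return w

# "Training data": inputs paired with desired outputs (here y = 3x).
data = [(1.0, 3.0), (2.0, 6.0), (3.0, 9.0)]
w = train(data)
print(round(w, 3))  # -> 3.0, the weight that explains the data
```

Deep learning applies this same loop to networks with millions of weights, which is why the data-center training hardware described next matters.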

Nvidia employs its DGX-1 integrated system for deep learning. DGX-1 features eight Tesla P100 GPU accelerators interconnected in a hybrid cube-mesh network. Together with Intel Xeon CPUs and four 100Gb InfiniBand network interface cards, DGX-1 is said to reduce neural network training in the data center from months to just days.

Nvidia also reports it has trained a convolutional neural network (CNN) to map raw pixels from a single front-facing camera directly into steering commands. The CNN goes beyond pattern recognition: it learns the entire process needed to operate an automobile, using data from multiple cameras and sensors to understand in real time what’s happening around the vehicle, locate itself on a map, and plan a safe path forward.
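The building block that lets a CNN work on raw pixels is the convolution: a small kernel slides over the image and produces a feature map. The sketch below is illustrative only, with a made-up image and a hand-picked kernel; a network like the one described learns thousands of such kernels from driving data.

```python
# Minimal 2-D convolution (no padding, stride 1) in pure Python,
# showing how a conv layer turns raw pixels into a feature map.

def conv2d(image, kernel):
    """Slide `kernel` over `image`, summing elementwise products."""
    ih, iw = len(image), len(image[0])
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for r in range(ih - kh + 1):
        row = []
        for c in range(iw - kw + 1):
            acc = 0.0
            for i in range(kh):
                for j in range(kw):
                    acc += image[r + i][c + j] * kernel[i][j]
            row.append(acc)
        out.append(row)
    return out

# A hand-picked vertical-edge kernel applied to a 4x4 "image"
# whose right half is bright.
image = [[0, 0, 1, 1]] * 4
edge_kernel = [[-1, 1],
               [-1, 1]]
features = conv2d(image, edge_kernel)
print(features[0])  # -> [0.0, 2.0, 0.0]: strong response at the edge
```

In an end-to-end steering network, many stacked layers of learned kernels feed fully connected layers that finally emit a steering angle.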


Xavier boasts 7 billion transistors and will be made using a 16nm FinFET process.

Self-driving vehicles will only work as well as the level at which they understand how to drive. This will require a massive amount of computing power to interpret all the data coming from the various sensor systems involved. To enable Level 4 autonomous capabilities (vehicles designed to perform all safety-critical driving functions and monitor roadway conditions for an entire trip), Nvidia will offer its Xavier SoC.

The single-chip board is said to possess a similar level of processing power to the Drive PX 2 in AutoChauffeur configuration. Xavier features eight general-purpose in-house-designed custom ARMv8-A cores, a GPU based on the Volta architecture with 512 stream processors, and hardware-based encoders/decoders for video streams with up to 7,680 × 4,320 resolution.

From an AI performance point of view, Xavier is expected to deliver 30 deep learning tera-ops (a metric for measuring 8-bit integer operations), 25% more than the 24 tera-ops of the Drive PX 2 while consuming only 20W of power. Xavier samples will be available in the fourth quarter of 2017.

Nvidia will not be alone in the race to provide technology for self-driving cars. In the past week Intel, one of Nvidia’s principal challengers, joined the Partnership on AI (with Amazon, Apple, DeepMind, Google, Facebook, IBM, and Microsoft). In addition, the existing team of Intel, BMW, and Mobileye announced they would bring Delphi onboard as a development partner and system integrator for their autonomous driving platform.

So with apologies to our friends at the Indianapolis 500, taking place on Sunday of Memorial Day weekend, the most famous words in self-driving may soon be:

“Gentlemen (and ladies): start your computers.”
