High-definition maps, like this one created by Here, are a core technology for self-driving cars. (Image courtesy of Here).

Mentor Graphics Bets on Raw Sensor Data for Autonomous Driving

April 4, 2017
A new platform from Mentor Graphics pumps unfiltered sensor data to a central processing unit, where sensor fusion and driving decisions take place.

As a major supplier of electronic design automation tools, Mentor Graphics is in the business of helping engineers manage complex systems. That explains why its automotive division, which sells operating systems for dashboard displays and other tools, wants to iron out the complexity of self-driving cars.

It will try to do that with DRS360, a new system the company unveiled this week at the SAE World Congress in Detroit. The platform aims to help automakers build everything from fully autonomous cars to advanced safety features like lane-departure warnings and adaptive cruise control.

But unlike many other platforms for self-driving cars, its defining characteristic is not an ultrafast computer chip for machine learning. Instead, the system is based on an architecture that fuses raw sensor data inside a central processing unit, and the car makes driving decisions based on that centralized data.

In advanced driver-assistance systems, the microcontrollers inside camera, radar, and other sensors filter through raw data. These sensor modules send some of that data on to separate modules, which enable safety applications like blind-spot warnings or cross-traffic alerts. But this distributed system is not ideal for self-driving cars, because useful sensor information is at risk of being lost in translation.
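
To make the contrast concrete, here is a minimal Python sketch of that distributed pattern. The names and threshold are hypothetical, not taken from any production ADAS stack; the point is that low-confidence returns are discarded inside the sensor module before anything reaches the rest of the car.

    from dataclasses import dataclass
    from typing import List

    @dataclass
    class DetectedObject:
        kind: str            # e.g. "vehicle", "pedestrian"
        range_m: float       # distance to the object, meters
        bearing_deg: float   # angle relative to the sensor

    def radar_module(raw_sweep: List[float], threshold: float = 0.8) -> List[DetectedObject]:
        """Reduce a raw radar sweep to a short list of confident detections.

        Everything below the threshold never leaves the module, which is the
        kind of filtering the article says risks losing useful information.
        """
        return [DetectedObject("vehicle", range_m=10.0 * i, bearing_deg=0.0)
                for i, strength in enumerate(raw_sweep)
                if strength >= threshold]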

“If we think about [fully] autonomous cars that don’t have steering wheels or brake pedals, this architecture won’t work,” Glen Perry, vice president of Mentor’s embedded systems division, said in an interview last week. It heaps cost, complexity, and latency onto the system, he added.

The new platform uses specialized sensors to pump unfiltered data to a central processing unit. High-speed sensor links, such as LVDS or FPD-Link III, transport the raw data to the central processor, where algorithms fuse it into a detailed view of the car's environment, filling in the blind spots of each sensor in the vehicle.
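
A rough Python sketch of that centralized model is below. The RawFrame type and the fusion-window logic are illustrative assumptions rather than Mentor's actual interfaces; what matters is that every sensor's unfiltered payload reaches the central processor before any filtering or decision-making happens.

    from dataclasses import dataclass
    from typing import List

    @dataclass
    class RawFrame:
        sensor_id: str      # e.g. "front_camera", "roof_lidar", "front_radar"
        timestamp_us: int   # capture time, microseconds
        payload: bytes      # unfiltered sensor output, no per-sensor preprocessing

    def fuse_frames(frames: List[RawFrame], window_us: int = 50_000) -> List[List[RawFrame]]:
        """Group raw frames whose capture times fall within one fusion window.

        In a distributed design each sensor would already have reduced its
        payload to an object list; here the central processor sees everything.
        """
        frames = sorted(frames, key=lambda f: f.timestamp_us)
        groups, current = [], []
        for f in frames:
            if current and f.timestamp_us - current[0].timestamp_us > window_us:
                groups.append(current)
                current = []
            current.append(f)
        if current:
            groups.append(current)
        return groups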

This resembles traditional sensor fusion, but the raw data give the car a clearer view of the road. Raw data also let the sensors fact-check each other's readings more accurately, providing what is known in the automotive industry as "redundancy." And it is more efficient, said Perry.
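
As a simple illustration of that cross-checking (the half-meter tolerance here is an arbitrary value chosen for the example, not an industry figure), the central processor can compare independent range estimates from two sensors and flag disagreement instead of blindly trusting either one:

    def cross_check(radar_range_m: float, lidar_range_m: float,
                    tolerance_m: float = 0.5) -> bool:
        """Return True if two independent raw readings corroborate each other."""
        return abs(radar_range_m - lidar_range_m) <= tolerance_m

    # Radar reports 42.1 m to the lead vehicle and lidar reports 41.8 m,
    # so the two readings agree and the fused estimate can be trusted.
    assert cross_check(42.1, 41.8)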

This translates into fewer hardware interfaces and lower latency between the sensors and the central processing unit. The new architecture also cuts down on power consumption, cost, and overall complexity. In addition, the company says that machine learning programs run faster and more efficiently on raw data, resulting in a power envelope around 100 watts.

The benefits are not without trade-offs. It is extremely difficult to write algorithms that combine the three-dimensional models captured by lidar, the coordinates from radar, camera images, and other types of raw sensor data. The central processor also must be powerful enough to crunch those data, which could get expensive.
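
One small piece of that difficulty can be made concrete: before a lidar return and a camera pixel can be fused, the lidar point must be projected into the image. The sketch below uses a standard pinhole-camera model with made-up calibration values; none of it comes from the DRS360 platform itself.

    import numpy as np

    # Camera intrinsics (focal lengths and principal point), illustrative values.
    K = np.array([[1000.0,    0.0, 640.0],
                  [   0.0, 1000.0, 360.0],
                  [   0.0,    0.0,   1.0]])

    # Extrinsics: rotation and translation from the lidar frame to the camera frame.
    R = np.eye(3)
    t = np.array([0.0, -0.2, 0.1])

    def lidar_point_to_pixel(p_lidar: np.ndarray) -> np.ndarray:
        """Map a 3-D lidar point (meters) to pixel coordinates (u, v)."""
        p_cam = R @ p_lidar + t   # move the point into the camera frame
        uvw = K @ p_cam           # project through the intrinsic matrix
        return uvw[:2] / uvw[2]   # normalize by depth

    # A point 10 m ahead and 2 m to the side lands inside a 1280x720 image.
    print(lidar_point_to_pixel(np.array([2.0, 0.0, 10.0])))   # ~[838, 340]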

Lidar, a type of laser scanning technology, is one of the many sensors that Mentor's new autonomous driving platform supports. (Image courtesy of Here).

Mentor’s view that sensor fusion holds the key to automated driving might seem counterintuitive in the context of new advances in machine learning. But the company believes that DRS360 provides a pathway to driverless cars, making it easier for automakers to personalize code for their vehicles.

In this way, Mentor is aiming to give automakers more freedom to develop cars that stand out from competitors. They can slip their own algorithms into the DRS360 development board, which supports more than 15 raw-data sensors. That offers more customization than closed-off systems from the likes of Google, Mobileye, and Nvidia.

The company underlined that idea in the DRS360 hardware, said Ian Riches, director for automotive electronics at research firm Strategy Analytics. It uses an FPGA to handle the sensor fusion algorithms, but customers can attach either an x86 or ARM chip to the decision-making side of the board.

But the system’s flexibility might also alienate lower-end automakers. It might provide more freedom than small engineering teams are looking for, Riches said. “With freedom comes a big blanket empty space that you need to fill,” he added.

Nvidia, for instance, is no longer just making graphics chips but also the autonomous driving software that runs on them. Its software can make driving decisions and create high-definition maps of lane markings and street signs – both critical types of data for self-driving cars.

Nvidia is tuning its graphics chips for "end-to-end" deep learning, in which software teaches itself how to drive by watching the road through a front-facing camera. This approach requires little human training but potentially makes it more difficult to understand why programs make decisions on the road.
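
A toy PyTorch sketch makes the idea concrete. It is loosely shaped like the network Nvidia described in its published end-to-end driving research, but the layer sizes here are illustrative and this is not Nvidia's production code: raw pixels go in, a single steering angle comes out, with no hand-written driving rules in between.

    import torch
    import torch.nn as nn

    class EndToEndSteering(nn.Module):
        """Map a raw front-camera frame directly to one steering angle."""
        def __init__(self):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(3, 24, kernel_size=5, stride=2), nn.ReLU(),
                nn.Conv2d(24, 36, kernel_size=5, stride=2), nn.ReLU(),
                nn.Conv2d(36, 48, kernel_size=5, stride=2), nn.ReLU(),
                nn.Conv2d(48, 64, kernel_size=3), nn.ReLU(),
            )
            self.head = nn.Sequential(
                nn.Flatten(),
                nn.LazyLinear(100), nn.ReLU(),
                nn.Linear(100, 1),   # single steering-angle output
            )

        def forward(self, frames):   # frames: (batch, 3, height, width)
            return self.head(self.features(frames))

    # Training pairs each recorded frame with the human driver's steering
    # angle, so the network learns the mapping by imitation.
    model = EndToEndSteering()
    angle = model(torch.randn(1, 3, 66, 200))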

Others guard the data streaming through cars even more closely. Part of Intel’s reasoning for its $15.3 billion acquisition of Mobileye last month was that Mobileye holds exclusive rights to the driving data gathered by its vision sensors. Mobileye uses those data to teach automated driving software and create high-definition maps, which Intel can sell to automakers.

But locking all that information in a black box limits how much automakers can customize their vehicles, Perry said. “You can’t let the customer work with the algorithms, you can’t integrate additional sensors into the platform, you can’t access the raw data or the data that it sees,” he added.
