
We Need to Do Our Homework on How to Test Mass-Produced Autonomous Vehicles

June 7, 2017
Autonomous vehicles present challenges to automakers, drivers, and the federal government, as well as to the associated software and hardware designers. Therefore, mass-produced autonomous vehicles must be rigorously tested.

If predictions are correct, we can expect to see commercial autonomous vehicles on our streets within five to 10 years, or less. A small number of prototype vehicles from Google and its spinoff, Waymo, are already on the road. As of June 2016, Google had road-tested its fleet of vehicles in autonomous mode a total of 1,725,911 miles. Based on Google’s own accident reports, its test cars had been involved in 14 collisions, and human drivers were at fault in 13 of them. Not until 2016 did the car’s own software cause a crash.

Road tests are not normally used on mass-produced conventional vehicles. A few of these vehicles are put through well-defined, standardized crash tests using dummies and sensors. Some cars undergo rollover tests performed with a few well-defined steering maneuvers. The results are easily measured to the satisfaction of vehicle buyers, government regulators, and insurance companies. Manufacturers usually run a series of short checks on each car before it is shipped.

Mass-produced autonomous vehicles are a different story; manufacturers can run crash tests and rollover tests, but a road test isn’t practical or cost-effective for every car that comes off the production line. Instead, autonomous vehicles will require extensive test procedures to ensure safe and reliable operation. For best results, these test procedures should be developed in parallel with the design stage so they’ll be ready when production starts.

The reason comprehensive testing is needed is that autonomous vehicles are essentially computer systems built around a network of diverse sensors. The sensors may come from several different companies, each with its own packaging and software, which complicates testing. Testing autonomous vehicles is therefore by no means “a walk in the park.” Rigorous testing will be needed to ensure mass-produced autonomous vehicles are safe and reliable; people will be skeptical of their performance, so they had better work properly.

Proper autonomous vehicle operation will depend on both hardware and software testing. Ideally, hardware testing should uncover problems before they are found in the field. With conventional vehicles, however, the user often ends up performing the final test by notifying the manufacturer when a hardware problem surfaces; the product recalls issued for virtually every conventional vehicle make that clear, and you can envision the same situation with an autonomous vehicle. Initial software testing must also be thorough enough to eliminate “glitches.” Yet no matter how thorough that testing is, software updates will always be necessary, just as they are for your PC. Because an autonomous vehicle can include different types of software for its various sensors, conflicts between them may also require updates. Software updates are certain to be an important part of an autonomous vehicle’s lifetime.

Because testing of mass-produced autonomous vehicles must cover both hardware and software, it can take considerable time and effort. There will always be arguments between hardware and software people over who is responsible for a specific problem, and it is even worse with an autonomous vehicle because of the number of different technologies and software packages involved. Testing can therefore be frustrating and time-consuming.

Mass-produced automated vehicles present a totally different situation. To ensure they will operate safely, the vehicles must be tested against different driving scenarios. For example, they will have to be able to avoid other cars, bicycles, and pedestrians. What happens when the vehicle sees a red or green traffic light? What if a bus swerves in front of the car? Because of the multitude of possible driving situations, testing an autonomous vehicle requires monitoring many functions.
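
As a simple illustration of scenario-based checks, the sketch below pairs a handful of driving situations with expected responses. The planner stub, scenario fields, and expected actions are hypothetical placeholders, not any manufacturer’s actual interface.

```python
# Minimal sketch of a scenario test matrix for an automated-driving stack.
# The planner stub and expected responses are hypothetical placeholders.

def plan_action(scenario: dict) -> str:
    """Hypothetical stand-in for the vehicle's decision logic."""
    if scenario.get("obstacle") in {"pedestrian", "bicycle", "vehicle"}:
        return "brake"
    if scenario.get("traffic_light") == "red":
        return "stop"
    return "proceed"

# Each entry pairs a driving situation with the response the test expects.
SCENARIOS = [
    ({"obstacle": "pedestrian"}, "brake"),
    ({"obstacle": "bicycle"}, "brake"),
    ({"traffic_light": "red"}, "stop"),
    ({"traffic_light": "green"}, "proceed"),
    ({"obstacle": "vehicle", "note": "bus swerves into lane"}, "brake"),
]

def run_scenario_suite():
    failures = [(s, expected, plan_action(s))
                for s, expected in SCENARIOS if plan_action(s) != expected]
    print(f"{len(SCENARIOS) - len(failures)}/{len(SCENARIOS)} scenarios passed")
    return failures

if __name__ == "__main__":
    run_scenario_suite()
```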

There is another complication. There will probably be at least 10 commercially available autonomous vehicle models, each with its own computer and sensors in a unique configuration. It may be possible to define a standard test format into which you enter the specifics of a particular vehicle; otherwise, test systems might have to be unique to each vehicle.
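
One way a standard test format could accept vehicle-specific details is sketched below. The field names and checklist entries are illustrative assumptions, not an industry schema.

```python
# Sketch of a common test plan driven by a vehicle-specific description.
# All field names and checklist items are illustrative assumptions.

from dataclasses import dataclass, field

@dataclass
class VehicleUnderTest:
    model: str
    compute_platform: str
    sensors: list = field(default_factory=list)   # e.g., ["lidar", "radar", "camera"]
    software_version: str = "unknown"

def build_test_plan(vehicle: VehicleUnderTest) -> list:
    """Return a common checklist, extended with sensor-specific checks."""
    plan = ["crash test", "rollover test", "scenario simulation"]
    plan += [f"{s} calibration and fault-injection check" for s in vehicle.sensors]
    return plan

if __name__ == "__main__":
    car = VehicleUnderTest("Hypothetical Model X1", "vendor ECU",
                           sensors=["lidar", "radar", "camera"],
                           software_version="0.9.3")
    for step in build_test_plan(car):
        print(step)
```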

The various functions performed by autonomous vehicles caught the attention of the federal government, which decided to get involved and establish policies for self-driving vehicles. On Sept. 20, 2016, the U.S. Department of Transportation issued a federal policy for automated vehicles, laying a path for the safe testing and deployment of new auto technologies. The policy sets a proactive approach to providing safety assurance and facilitating innovation through four key parts, summarized below:

·      15-Point Safety Assessment – The Vehicle Performance Guidance for Automated Vehicles for manufacturers, developers, and other organizations includes a 15-point “Safety Assessment” for the safe design, development, testing, and deployment of automated vehicles.

·      Model State Policy – This policy section presents a clear distinction between federal and state responsibilities for regulation of highly automated vehicles, and suggests recommended policy areas for states to consider, with the goal of generating a consistent national framework for the testing and deployment of highly automated vehicles.

·      NHTSA’s Current Regulatory Tools – This discussion outlines the current regulatory tools of NHTSA (the National Highway Traffic Safety Administration) that can be used to ensure the safe development of new technologies, such as interpreting current rules to allow for greater flexibility in design and providing limited exemptions to allow testing of nontraditional vehicle designs in a more timely fashion.

·      Modern Regulatory Tools – This section identifies new regulatory tools and statutory authorities that policymakers may consider in the future to aid the safe and efficient deployment of new lifesaving technologies.

The primary focus of the policy is on highly automated vehicles, or those in which the vehicle can take full control of the driving task in at least some circumstances. Portions of the policy also apply to lower levels of automation, including some of the driver-assistance systems already being deployed by automakers today.

Evaluating Autonomous Vehicles

Federal policies can define the characteristics of self-driving cars and establish safety requirements, but they do not dictate how these vehicles will be tested and evaluated. Evaluation procedures that can measure the safety and reliability of driverless cars must go far beyond existing safety tests. To get an accurate assessment from field tests, such cars would have to be driven millions or even billions of miles to arrive at an acceptable level of certainty – a time-consuming process that would cost tens of millions of dollars.

The University of Michigan (UM) is doing its “homework” to develop accelerated testing for autonomous vehicles. Researchers affiliated with UM’s Connected and Automated Vehicle Center have developed an accelerated evaluation process that filters out the many miles of uneventful driving and concentrates on the potentially dangerous situations where an automated vehicle needs to respond, creating a faster, less expensive testing program. This approach can reduce the amount of testing needed by a factor of 300 to 100,000, so an automated vehicle driven for 1,000 test miles can yield the equivalent of 300,000 to 100 million miles of real-world driving. While more research and development is needed to perfect the technique, the accelerated evaluation procedure offers a groundbreaking solution for the safe and efficient testing that is crucial to deploying mass-produced automated vehicles.

“Even the most advanced and largest-scale efforts to test automated vehicles today fall woefully short of what is needed to thoroughly test these robotic cars,” said Huei Peng, director of Mcity and the Roger L. McCarthy Professor of Mechanical Engineering at UM.

In essence, the new accelerated evaluation process breaks down difficult real-world driving situations into components that can be tested or simulated repeatedly, exposing automated vehicles to a condensed set of the most challenging driving situations. In this way, just 1,000 miles of testing can yield the equivalent of 300,000 to 100 million miles of real-world driving.
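
The arithmetic behind that claim is straightforward, as the short sketch below shows. It simply multiplies test mileage by the acceleration factors quoted in the article; the factor actually achieved depends on the scenario being tested and is not something this sketch can predict.

```python
# The acceleration factors below are the range quoted in the article; the
# equivalent-mileage figures follow directly from them.

test_miles = 1_000
factor_low, factor_high = 300, 100_000   # range cited for the UM approach

equiv_low = test_miles * factor_low      # 300,000 real-world miles
equiv_high = test_miles * factor_high    # 100,000,000 real-world miles

print(f"{test_miles:,} accelerated test miles ~ "
      f"{equiv_low:,} to {equiv_high:,} real-world miles")
```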

While 100 million miles may sound like overkill, it's not nearly enough data to certify the safety of a driverless vehicle. That’s because the difficult scenarios researchers need to zero in on are rare: a crash that results in a fatality occurs only about once in every 100 million miles of driving.

Yet for consumers to accept driverless vehicles, the researchers say tests will need to prove with 80 percent confidence that they're 90 percent safer than human drivers. To get to that confidence level, test vehicles would need to be driven in simulated or real-world settings for 11 billion miles. But it would take nearly a decade of round-the-clock testing to reach just 2 million miles in typical urban conditions.
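
A rough back-of-the-envelope calculation shows why. In the sketch below, the round-the-clock average urban speed of about 23 mph is an assumption made here for illustration; the mileage figures come from the article.

```python
# Mileage figures (11 billion miles, ~2 million miles per decade) come from
# the article; the 23-mph continuous urban average is an assumption.

HOURS_PER_DECADE = 24 * 365 * 10
avg_urban_speed_mph = 23                      # assumed round-the-clock average

miles_per_vehicle_decade = avg_urban_speed_mph * HOURS_PER_DECADE
print(f"One vehicle, driven nonstop for a decade: "
      f"~{miles_per_vehicle_decade / 1e6:.1f} million miles")   # ~2.0 million

target_miles = 11e9                           # miles needed for the cited confidence level
print(f"Reaching 11 billion miles would take "
      f"~{target_miles / miles_per_vehicle_decade:,.0f} vehicle-decades")
```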

Beyond that, fully automated, driverless vehicles will require a very different type of validation than the dummies on crash sleds used for today’s cars. Even the questions researchers have to ask are more complicated. Instead of asking, “What happens in a crash?” they'll need to measure how well the vehicles can prevent one from happening.

“Test methods for traditionally driven cars are something like having a doctor take a patient's blood pressure or heart rate, while testing for automated vehicles is more like giving someone an IQ test,” said Ding Zhao, assistant research scientist in the UM Department of Mechanical Engineering.

To develop the four-step accelerated approach, the UM researchers analyzed data from 25.2 million miles of real-world driving collected by two UM Transportation Research Institute projects—Safety Pilot Model Deployment and Integrated Vehicle-Based Safety Systems. Together they involved nearly 3,000 vehicles and volunteers over the course of two years. From this data, the researchers:

·      Identified events that could contain "meaningful interactions" between an automated vehicle and one driven by a human, and created a simulation that replaced all the uneventful miles with these meaningful interactions.

·      Programmed their simulation to consider human drivers the major threat to automated vehicles and placed human drivers randomly throughout.

·      Conducted mathematical tests to assess the risk and probability of certain outcomes, including crashes, injuries, and near-misses.

·      Interpreted the accelerated test results, using a technique called "importance sampling" to learn how the automated vehicle would perform, statistically, in everyday driving situations; a toy sketch of the idea follows this list.
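
To make “importance sampling” concrete, here is a minimal, self-contained sketch of the statistical idea. It is not the UM researchers’ implementation: the braking-threat distribution, the 0.8-g crash criterion, and all parameter values are assumptions chosen only to show how oversampling dangerous behavior and then re-weighting recovers an everyday-driving crash probability.

```python
# Toy importance-sampling estimate of a rare crash probability. NOT the UM
# model; every distribution and threshold here is an illustrative assumption.
import math
import random

random.seed(0)

NATURAL_MEAN_G = 0.10   # assumed mean lead-vehicle deceleration in everyday driving
TEST_MEAN_G = 0.60      # assumed mean deceleration in the accelerated test

def exp_pdf(x, mean):
    """Exponential probability density with the given mean."""
    return math.exp(-x / mean) / mean

def is_crash(decel_g):
    """Assume the AV cannot handle a lead-vehicle braking event above 0.8 g."""
    return decel_g > 0.8

n = 100_000
weighted_crashes = 0.0
for _ in range(n):
    decel = random.expovariate(1 / TEST_MEAN_G)   # draw from the aggressive test distribution
    if is_crash(decel):
        # Re-weight each dangerous event by how likely it is in everyday driving.
        weighted_crashes += exp_pdf(decel, NATURAL_MEAN_G) / exp_pdf(decel, TEST_MEAN_G)

print(f"Importance-sampling estimate: {weighted_crashes / n:.2e}")
print(f"Analytic everyday value:      {math.exp(-0.8 / NATURAL_MEAN_G):.2e}")
```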

The accelerated analysis was applied to the two most common situations resulting in serious crashes. The first was an automated vehicle following a human-driven one, where adjustments constantly must be made for movements of the lead vehicle, as well as for speed, road and weather conditions, and other rapidly changing factors. The second involved a human-driven car cutting in front of the automated car, which was being followed, in turn, by another human-driven vehicle. Three metrics – crash, injury, and conflict rates – were calculated, along with the likelihood that one or more passengers in the automated vehicle would suffer moderate to fatal injuries. The accuracy of the evaluation was determined by conducting and then comparing accelerated and real-world simulations.
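
To illustrate how such scenario metrics might be tallied, here is a deliberately simplified cut-in sketch. The constant-deceleration kinematics, the near-miss threshold, and the injury proxy are all assumptions for illustration only and do not reflect the UM models or data.

```python
# Crude cut-in scenario: a car cuts in ahead of the AV and brakes hard.
# All parameters and thresholds are illustrative assumptions.
import random

random.seed(1)

def classify_cut_in(gap_m, speed_mps, lead_decel_mps2,
                    av_reaction_s=0.5, av_decel_mps2=6.0):
    """Classify one encounter using a constant-deceleration stopping model."""
    # Distance the AV closes on the cutting-in car while both brake to a stop.
    closing = (speed_mps * av_reaction_s
               + speed_mps ** 2 / (2 * av_decel_mps2)
               - speed_mps ** 2 / (2 * lead_decel_mps2))
    excess = closing - gap_m
    if excess >= 10.0:           # crude proxy for an injury-level impact (assumed)
        return "injury crash"
    if excess >= 0.0:
        return "crash"
    if closing >= 0.7 * gap_m:   # near-miss threshold (assumed)
        return "conflict"
    return "safe"

runs = 10_000
counts = {"injury crash": 0, "crash": 0, "conflict": 0, "safe": 0}
for _ in range(runs):
    outcome = classify_cut_in(
        gap_m=random.uniform(5, 30),                # cut-in gap, meters
        speed_mps=random.uniform(10, 25),           # shared initial speed, m/s
        lead_decel_mps2=random.uniform(2.0, 9.0))   # how hard the cutting-in car brakes
    counts[outcome] += 1

for outcome, n in counts.items():
    print(f"{outcome}: {n / runs:.1%}")
```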

References

  1. U.S. Department of Transportation, Federal Automated Vehicles Policy, www.transportation.gov/AV
  2. Huei Peng and Ding Zhao, From the Lab to the Street: Solving the Challenge of Accelerating Automated Vehicle Testing, University of Michigan.
About the Author

Sam Davis | Editor-in-Chief, Power Electronics

Sam Davis was the editor-in-chief of Power Electronics Technology, the magazine and website that is now part of Electronic Design. He has 18 years of experience in electronic engineering design and management, six years in public relations, and 25 years as a trade press editor. He holds a BSEE from Case Western Reserve University and did graduate work at the same school and at UCLA. Sam was the editor of PCIM, the predecessor to Power Electronics Technology, from 1984 to 2004. His engineering experience includes circuit and system design for Litton Systems, Bunker-Ramo, Rocketdyne, and Clevite Corporation. Design tasks included analog circuits, display systems, power supplies, underwater ordnance systems, and test systems. He also served as a program manager for a Litton Systems Navy program.

Sam is the author of Computer Data Displays, a book published by Prentice-Hall in the U.S. and Japan in 1969. He is also a recipient of the Jesse Neal Award for trade press editorial excellence, and has one patent for naval ship construction that simplifies electronic system integration.

