Independent Thinking: Why the “Black Box” is Needed for Autonomous Vehicle Deployments

Jan. 20, 2018
While many of the technologies needed to deliver self-driving vehicles are in use or in demonstration, the regulatory issues are not yet resolved.

Realization of autonomous vehicles (AVs) relies on artificial intelligence (AI) coupled with machine learning (ML). The decisions an AI system makes from instant to instant depend on its perception of the local environment: objects in “view,” road conditions, the condition of the vehicle, the load it is carrying, local rules of the road, and the experience accumulated since being put into service. The variables are so numerous that even the same vehicle may not behave the same way when traversing from A to B, depending upon the conditions. In other words, autonomous cars will have a personality of sorts.

How does one evaluate AVs, then? Would you trust the manufacturer, who has a vested interest in getting to market and avoiding liability? Moreover, in driving, many variables can affect success (getting to your destination). The objective may not be just “getting there.” In real life, it is often “getting there at a specific time.”

Data input from multiple sensors replaces our five senses to inform us of what is going on around the vehicle.

How Vehicles Are Tested

The time objective adds parameters that vary with the travel itself. Since the vehicle is making the pathway, segment-speed, obstruction-avoidance, collision-avoidance, and travel-time decisions, among others, testing an AV is not just about braking performance or how a vehicle crushes in a collision. One must evaluate how the vehicle responds to its environment under pressure from its objectives and the desired driving style of the chief occupant. Testing a vehicle’s decision-making capability and the appropriateness of its decisions is a completely new world. It should be a necessary practice to ensure vehicle performance and safety in the extremely wide range of conditions that AVs will be subjected to.

At present, Event Data Recorder (EDR) regulations only require data to be collected for a short time (less than a second) around an impact severe enough to deploy the airbag system. However, I can think of many driving incidents in my own life that never deployed the airbag. Those incidents were caused by human error. In the future, the cause may instead be an AI decision error (although the hope is that there will be far fewer of them).
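
To make the contrast concrete, here is a minimal sketch (in Python, purely for illustration) of the kind of pre-impact circular buffer today’s EDR requirement amounts to: a short rolling window of a few channels that is frozen only when a trigger such as airbag deployment fires. The Sample fields and the one-second window are assumptions for the example, not the regulated channel list.

from collections import deque
from dataclasses import dataclass

@dataclass
class Sample:
    t: float          # timestamp, seconds
    speed_mps: float  # vehicle speed
    brake: bool       # brake pedal applied
    throttle: float   # throttle position, 0..1

class EventDataRecorder:
    def __init__(self, sample_rate_hz: int = 100, pre_window_s: float = 1.0):
        # Circular buffer that only ever holds the last pre_window_s of samples.
        self._buf = deque(maxlen=int(sample_rate_hz * pre_window_s))

    def record(self, sample: Sample) -> None:
        self._buf.append(sample)          # older samples fall off automatically

    def freeze(self) -> list[Sample]:
        # Called on a trigger (e.g., airbag deployment): snapshot the window.
        return list(self._buf)

Anything that happens more than the buffered second before the trigger, or that never fires the trigger at all, is simply gone.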

Since humans may be passive and not even paying attention, it will likely be important to include an independent witness of events to understand the circumstances of an incident. If the AV has human controls (steering wheel, brake, accelerator), who or what had control at the time of the accident? The witness may also need to independently detect lesser incidents and keep a record for future analysis.

An independent witness compiling vehicle sensor information, video, and control data would be valuable to evaluate whether the AI/ML system is properly interpreting sensor data. Additionally, the independent witness could help determine whether the decisions made are at least within the acceptable range defined by expert drivers.
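
As a sketch of what such an independent witness might log, the illustrative record below bundles raw sensor references, the AI system’s interpretation, and the control inputs under one timestamp. The field names are assumptions made for illustration, not an industry format.

from dataclasses import dataclass, field

@dataclass
class WitnessRecord:
    timestamp_ns: int                  # one common time base for every channel
    camera_frame_ids: list[int]        # references to the raw video frames
    radar_targets: list[dict]          # raw radar detections as received
    lidar_points_ref: str              # pointer to the raw point-cloud blob
    ai_objects: list[dict]             # objects as classified by the AI stack
    ai_decision: str                   # e.g., "brake", "steer_left", "hold"
    control_authority: str             # "ai", "human", or "blended"
    human_inputs: dict = field(default_factory=dict)  # wheel angle, pedals

Keeping both the raw inputs and the AI’s interpretation in the same record is what lets an evaluator judge whether the system read the scene correctly.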

Vehicle makers need to differentiate their control systems. The result may be a myriad of artificial-intelligence and machine-learning behaviors, or personalities.

Even with semi-autonomy, how is control handed off between the AI/ML driver and the human? Do both see the same danger and select the same corrective action? Which bears the liability if the corrective action is insufficiently effective? How do you know who (AI/ML or human) has control at the instant of the incident?
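
One piece of that answer is simply logging every transfer of control authority with a timestamp and a reason, so an investigator can reconstruct who held authority at the instant of the incident. The sketch below is an illustrative Python outline under that assumption, not any manufacturer’s implementation.

import time
from enum import Enum

class Authority(Enum):
    AI = "ai"
    HUMAN = "human"
    BLENDED = "blended"

class HandoffLog:
    def __init__(self):
        self._events: list[tuple[float, Authority, str]] = []
        self._current = Authority.AI

    def transfer(self, to: Authority, reason: str) -> None:
        # Record every transition with a timestamp and its trigger, e.g.
        # "human grabbed wheel", "AI requested takeover", "sensor fault".
        self._events.append((time.monotonic(), to, reason))
        self._current = to

    def authority_at(self, t: float) -> Authority:
        # Walk the transition history to find who had control at time t.
        holder = Authority.AI
        for ts, to, _ in self._events:
            if ts > t:
                break
            holder = to
        return holder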

Scenarios of this type have been investigated by the University of Leeds in the UK. In situations such as this, who bears the liability for injury and property damage? Did the human take control away from the AI processor? Did the AI processor override human input? Or was the ultimate response to a situation a combination of both? If control was handed back to the human, were they paying attention and given enough time to respond?

Sensors for Modern Testing

The sensors that replace your eyes and ears to tell you what is going on around you and what your car is doing are many. They range from an array of cameras, radars, sonar, laser imagers, and accelerometers to the onboard GPS and even the clock. The data coming from this collection of sensors must be fused together to form a real-time equivalent of what you see and understand in any instant of a driving situation. Doing so keeps the AI processor very busy.
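
A small part of that fusion work is just aligning asynchronous sensor streams on a common clock before anything is interpreted. The sketch below illustrates the idea; the sensor names and tolerance values are assumptions made for the example.

from bisect import bisect_left

def nearest(samples, t, tol_s=0.02):
    """Return the measurement whose timestamp is closest to t, or None if no
    sample falls within tol_s (fusion must then treat the channel as stale)."""
    if not samples:
        return None
    times = [ts for ts, _ in samples]
    i = bisect_left(times, t)
    candidates = [j for j in (i - 1, i) if 0 <= j < len(samples)]
    best = min(candidates, key=lambda j: abs(times[j] - t))
    return samples[best][1] if abs(times[best] - t) <= tol_s else None

def build_fusion_frame(t, camera, radar, lidar, imu):
    # Each argument is a time-sorted list of (timestamp, measurement) pairs.
    return {
        "t": t,
        "camera": nearest(camera, t),
        "radar": nearest(radar, t),
        "lidar": nearest(lidar, t),
        "imu": nearest(imu, t, tol_s=0.005),  # IMU runs faster; tighter tolerance
    }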

What happens if a sensor malfunctions? What backup is there should one or more sensors fail and the AI/ML system be partially blinded? What happens if a sensor is displaced (whether functioning or not), so that information needed by the AI processor is flawed or missing altogether?
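
One plausible answer, sketched below purely for illustration, is a sensor-health monitor that flags stale or faulted channels and maps the result to a driving mode (nominal, degraded, or a minimal-risk stop). The timeout, channel names, and policy are assumptions, not a standardized scheme.

import time

class SensorHealth:
    def __init__(self, timeout_s: float = 0.2):
        self.timeout_s = timeout_s
        self.last_seen: dict[str, float] = {}
        self.faulted: set[str] = set()

    def heartbeat(self, sensor: str) -> None:
        self.last_seen[sensor] = time.monotonic()

    def mark_fault(self, sensor: str) -> None:
        # e.g., self-test failure or readings outside physical plausibility
        self.faulted.add(sensor)

    def unhealthy(self) -> set[str]:
        now = time.monotonic()
        stale = {s for s, t in self.last_seen.items() if now - t > self.timeout_s}
        return stale | self.faulted

def choose_mode(unhealthy: set[str]) -> str:
    # Policy is purely illustrative: lose redundancy -> slow down; lose a
    # critical channel -> request a minimal-risk stop.
    if {"camera_front", "radar_front"} <= unhealthy:
        return "minimal_risk_maneuver"   # pull over / controlled stop
    if unhealthy:
        return "degraded"                # reduce speed, widen following gap
    return "nominal"

Crucially, the independent witness would also need to log these health states, since a flawed input stream is itself evidence in an investigation.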

What happens during a collision when sensors may progressively be damaged or destroyed while the AI processor is trying to navigate the vehicle to safety? Can it still construct and properly classify objects and predict trajectories? If it determines that the probability of failure is high, what does it do? If there are no human controls, does it just stop? How does it find a safe place? In an accident, how could you know what went wrong without an independent black box that collects the raw, unprocessed sensor information being sent to the AI/ML system?

All these questions suggest that the nature of vehicle testing is due for a huge change. We can also expect that all of the AI/ML systems will be different as vehicle manufacturers try to find their competitive edge and satisfy the demographics of their target marketplace.

If there is an incident, how and when, if ever, is control handed off between the AI/ML driver and the human?

Onboard Sensing Technology

This all builds a case for an onboard independent witness that can not only provide evidence in an accident investigation, but can also lead to the development of new criteria to improve AI processors, including the experience and skill loaded into the ML of new vehicles.

It also suggests that the instrumentation at vehicle test ranges may not be adequate to evaluate AVs, test changes and new modes, and certify a design as ready for release. Instead, tools that capture and record data synchronized with video should be required, so that each picture taken by the cameras, each sound bite, each radar image, each action taken by the AI processor, each response by the vehicle systems (brakes, steering, engine), and each input by the chief occupant is recorded.
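
As an illustration of the kind of tooling that implies, the sketch below logs every channel, from camera frames to driver inputs, against a single master clock as line-delimited JSON so the streams can be replayed together. The record layout and channel names are assumptions for the example, not ITS’s or any vendor’s format.

import json, time

class SynchronizedLogger:
    def __init__(self, path: str):
        self._f = open(path, "a", buffering=1)  # line-buffered JSON lines

    def log(self, channel: str, payload: dict) -> None:
        record = {
            "t_ns": time.monotonic_ns(),  # one master clock for all channels
            "channel": channel,           # "camera0", "radar", "ai_decision",
                                          # "brake_cmd", "driver_input", ...
            "payload": payload,
        }
        self._f.write(json.dumps(record) + "\n")

    def close(self) -> None:
        self._f.close()

A single logger instance shared by every subsystem under test, e.g. logger.log("camera0", {"frame_id": 18421}) at the moment a frame is captured, is what makes the later frame-by-frame reconstruction possible.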

Such information is particularly important in fault analysis. We can evaluate the decisions made by the AI processor and how the chief occupant’s interaction contributed to the incident. Armed with this information, all stakeholders can learn the cause of an incident and plot a path to prevent it from happening again.

At test ranges, data/video collection would serve as a tool to independently evaluate whether the data from all of the sensors and cameras were correctly integrated to form an “image” of the scene similar to, or even better than, the one a human would have assembled. This comparison should be made through the eyes of the sensors themselves, so that expert evaluators can see what information and imagery was presented to the AI processor to create its interpretation of its physical environment.

In real-world environments, this data/video should be easily accessible to first responders and accident investigators. It will be essential for them to collect all relevant information about the AV’s behavior and its influence on the resulting event.

Autonomous cars are different from any other vehicle. They are decision makers. How we test them, introduce them to the marketplace, investigate accidents, and manage maintenance will be very different. An independent black box will be necessary to improve decision-making and to collect the data needed to understand the causes and consequences of incidents.

Paul Hightower is CEO of Instrumentation Technology Systems (ITS), the market-leading supplier of HD-SDI video-data fusion products. While bringing traditional text and graphics overlay and accurate timestamping capability to the HD-SDI engineering test market, the company has pioneered the use of metadata to collect image-relevant data in real time.
