Professor says law is ready for first-generation robotic vehicles

March 25, 2018

The tragic death of a pedestrian hit by an Uber autonomous vehicle last week in Arizona raises technical and legal concerns. On the technical side, investigators will need to determine why the vehicle failed to detect the pedestrian and brake. On the legal side, it will be necessary to apportion liability.

Law professor Ryan Calo, writing in Slate, notes that last week’s death was the second involving a vehicle with autonomous features—the first occurred in 2016 when a Tesla Model S with its Autopilot feature engaged collided with a truck, killing the Tesla’s occupant.

“But there is a big difference between these two stories,” writes Calo. “The Tesla driver had made a decision to engage Autopilot and arguably assumed the risk of an accident. The pedestrian who recently died in Arizona took on no such obligation. This distinction has given rise to a great deal of recent commentary about self-driving vehicles and liability, with some speculating that Uber’s accident could delay wider deployment of the technology.”

Indeed, technical issues or public opinion may delay Uber’s program or autonomous vehicle programs in general. But Calo doesn’t see the legal issues as representing a particular challenge. “In its centuries of grappling with new technologies,” he writes, “…the common law has seen tougher problems than these and managed to fashion roughly sensible remedies.” He expects Uber to settle its case with the pedestrian’s family. “If not, a court will sort it out,” he adds.

But autonomous vehicles could eventually present more difficult challenges, he continues. One oft-cited candidate, the trolley problem, is not among them; he dismisses it as unrealistic. He explains, “The thought experiment invites us to imagine a robot so poor at driving that, unlike you or anyone you know, the car finds itself in a situation that it must kill someone. At the same time, the robot is so sophisticated that it can somehow instantaneously weigh the relative moral considerations of killing a child versus three elderly people in real time.”

New legal challenges, he says, will result when technology “…presents a genuinely novel conundrum that existing legal categories failed to anticipate.”

He takes a stab at suggesting such a conundrum: an autonomous hybrid vehicle “…determines that it performs more efficiently overall if it begins the day with a full battery. One night, the owners forget to plug the car in to charge. Accordingly, the car decides to run the gas engine overnight in the garage—killing everyone in the household.”

Plaintiffs in such a case would need to show that defendants—designers of the vehicle—“…could foresee at least the category of harm that transpired. The legal term is ‘proximate causation.’” Defendants, he continues, would understand that a driverless car could get into an accident. “But they did not in their wildest nightmares imagine it would kill people through carbon monoxide poisoning.”

He says, “We are already seeing examples of emergent behavior in the wild”—citing as examples a Twitter bot that threatened a fashion show in Amsterdam with violence, a chatbot that began to deny the Holocaust within hours of operation, and the flash crash of 2010, in which high-speed trading algorithms precipitated a 10% drop in the Dow Jones within minutes.

He concludes by noting, “The first generation of mainstream robotics, including fully autonomous vehicles, does not present a genuinely difficult puzzle for law in this law professor’s view. The next well may. In the interim, I hope the law and technology community will be hard at work grappling with the legal uncertainty that technical uncertainty understandably begets.”

Uber will certainly be dealing with the technical problems that led to last week’s fatal crash. And according to Daisuke Wakabayashi in The New York Times, even in the months before the crash Uber’s robotic vehicle project was not living up to expectations.

“Waymo…said that in tests on roads in California last year, its cars went an average of nearly 5,600 miles before the driver had to take control from the computer to steer out of trouble,” Wakabayashi writes. “As of March, Uber was struggling to meet its target of 13 miles per ‘intervention,’ according to 100 pages of company documents obtained by The New York Times and two people familiar with the company’s operations in the Phoenix area but not permitted to speak publicly about it.”

Kevin Drum at Mother Jones attributes Uber’s autonomous vehicle problems to “…a corporate culture built around being a bull in a china shop, pushing the limits of expansion as far as they could and focusing obsessively on doing everything at light speed, before authorities could stop them or competitors could catch up.” He adds, “This kind of culture might be OK for, say, Facebook, which doesn’t kill people if there are glitches in its apps. But if you’re launching a satellite into space or putting a driverless car on the road, there’s a whole different development and testing ethos involved.”

About the Author

Rick Nelson | Contributing Editor

Rick is currently Contributing Technical Editor. He was Executive Editor for EE from 2011 to 2018. Previously he served on the staffs of several publications, including EDN and Vision Systems Design, and has received awards for signed editorials from the American Society of Business Publication Editors. He began his career as a design engineer at General Electric and Litton Industries and earned a BSEE degree from Penn State.
