Beware autonomous machines making decisions

Autonomous and semiautonomous machines are reaching the threshold of practicality on a variety of fronts. Amazon, for example, has announced a well-publicized plan to deliver packages by aerial drone. The remaining challenges seem more legal and legislative than technical.

On another front, in response to the White House’s SmartAmerica Challenge, National Instruments and several industrial and university partners have demonstrated a semiautonomous Smart Emergency Response System that gives first responders and other emergency personnel the information they need to locate and assist victims in disaster situations.

Perhaps most visibly, autonomous vehicles are gaining traction on the road to commercial viability. Intel is paving the way for autonomous vehicles and connected cars with a family of hardware and software products called Intel In-Vehicle Solutions, and it has announced that it is channeling some of its Intel Capital Connected Car Fund to ZMP, a Japanese developer of an autonomous driving platform and sensor-laden connected vehicles.

And autonomous vehicle pioneer Google is talking with automakers about bringing its self-driving technology to market within a six-year time frame and is considering designing its own vehicles or making its technology available to automakers. The company announced in May that it is exploring what fully self-driving vehicles would look like by building some prototypes. “They won’t have a steering wheel, accelerator pedal, or brake pedal… because they don’t need them,” noted a May 27 Google blog post. “Our software and sensors do all the work.”

Less well known is a proposal by Nina Mahmoudian, a researcher at Michigan Technological University, for a next generation of autonomous underwater vehicles that have a sense of what they are looking for—such as a missing airliner. “We want to make a smarter vehicle, one that can search on its own and make decisions on its own,” she said.

If machines are to make decisions autonomously, they’ll need some guidance. To that end, researchers from Tufts University, Brown University, and Rensselaer Polytechnic Institute are teaming with the U.S. Navy in a project (funded by the Office of Naval Research) to explore technology that would pave the way for developing robots capable of making moral decisions.

The scientists will explore the challenges of infusing autonomous robots with a sense of right and wrong, and of the consequences of both. They will attempt to isolate elements of human moral competence and develop frameworks for modeling human-level moral reasoning that can be implemented in computer architectures.

Patrick Lin, Ph.D., director of the Ethics + Emerging Sciences Group at California Polytechnic State University, has focused on the morality of autonomous-vehicle crash algorithms. Writing May 6 in Wired, he suggested how fraught machine decisions can be. A human driver, whose conscious bandwidth is only about 50 b/s, will act instinctively when faced with an inevitable crash. An autonomous vehicle in the same circumstances, however, will have ample time to decide, for example, whether to collide with a motorcyclist wearing a helmet or one without. The helmeted cyclist is more likely to survive the crash, but targeting helmeted riders would, in effect, penalize them for taking a responsible precaution, creating a perverse incentive.
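To see why a naive harm-minimizing rule produces the perverse incentive Lin describes, consider the minimal sketch below. The function names, data structure, and survival figures are all illustrative assumptions for this column, not taken from any real vehicle’s software.

```python
# Hypothetical sketch: a naive crash-decision rule that minimizes
# expected harm. All names and numbers here are illustrative
# assumptions, not drawn from any real autonomous-vehicle system.

def expected_fatality_risk(target):
    """Assumed probability that a collision kills the rider."""
    # Illustrative figures: a helmet substantially improves survival odds.
    return 0.3 if target["helmeted"] else 0.8

def choose_target(targets):
    """Pick the collision target with the lowest expected fatality risk."""
    return min(targets, key=expected_fatality_risk)

riders = [
    {"name": "helmeted rider", "helmeted": True},
    {"name": "unhelmeted rider", "helmeted": False},
]

# The harm-minimizing choice is the helmeted rider: the person who took
# the responsible precaution bears the cost, which is exactly the
# perverse incentive Lin warns about.
print(choose_target(riders)["name"])  # -> helmeted rider
```

The point of the sketch is not that any vendor codes this rule, but that even a reasonable-sounding objective, minimizing expected harm, can encode a moral judgment nobody intended.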

Lin called for discussion of ethics and autonomous cars to help raise public and industry awareness of the issues involved. That’s good advice, but the conversation will need to move beyond autonomous vehicles, because more and more machines will be making decisions on their own. The Navy-funded project is a step in the right direction.

Rick Nelson
Executive Editor
Visit my blog: bit.ly/N8rmKm
