Researchers try to teach robots to be good

Can you teach a robot to be good? Researchers from Tufts University, Brown University, and Rensselaer Polytechnic Institute are teaming with the U.S. Navy in a project to explore technology that would pave the way for developing robots capable of making moral decisions. If successful, the program might allay concerns expressed recently by Stephen Hawking and colleagues about the potential dangers of artificial intelligence.

And on a related note, Patrick Lin, PhD, director of the Ethics + Emerging Sciences Group at California Polytechnic State University, takes on the morality of autonomous-vehicle crash algorithms. Writing in Wired, he asks whether an autonomous vehicle facing an inevitable accident should choose to hit a motorcyclist wearing a helmet, who would be more likely to survive, or one not wearing a helmet, who is acting irresponsibly.

The project involving Tufts, Brown, and RPI is funded by the Office of Naval Research and coordinated under the Multidisciplinary University Research Initiative. The scientists will explore the challenges of infusing autonomous robots with a sense of right, wrong, and the consequences of both.

“Moral competence can be roughly thought about as the ability to learn, reason with, act upon, and talk about the laws and societal conventions on which humans tend to agree,” said principal investigator Matthias Scheutz, professor of computer science at Tufts School of Engineering and director of the Human-Robot Interaction Laboratory (HRI Lab) at Tufts, as reported in Newswise. “The question is whether machines—or any other artificial system, for that matter—can emulate and exercise these abilities.”

Scheutz described a battlefield scenario in which a robot must decide which of two injured soldiers to assist—for example, whether to carry one soldier to urgently needed treatment or to stop when it encounters another injured person along the way.
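To make concrete why such decisions are hard to hard-code, here is a minimal sketch of what a fixed-rule version of that triage choice might look like. This is purely illustrative—it is not the project's software, and every field name and weight in it is an assumption:

```python
# Hypothetical sketch only: a fixed-rule triage policy for the
# battlefield scenario above. Not the researchers' code; the fields
# and the urgency formula are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Casualty:
    severity: float              # 0.0 (minor) to 1.0 (life-threatening)
    minutes_to_treatment: float  # estimated transport time if helped now

def choose_casualty(a: Casualty, b: Casualty) -> Casualty:
    """Help whichever casualty a simple urgency score ranks first."""
    def urgency(c: Casualty) -> float:
        # Severe injuries and short transport times both raise urgency.
        return c.severity / (1.0 + c.minutes_to_treatment)
    return a if urgency(a) >= urgency(b) else b
```

The point of the sketch is its brittleness: any fixed scoring rule will mishandle situations its authors never anticipated, which is exactly the limitation Bringsjord describes below.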

Researchers will attempt to isolate the elements of human moral competence and develop frameworks for modeling human-level moral reasoning in forms that can be implemented in computer architectures.

Selmer Bringsjord, head of the Cognitive Science Department at RPI, said, “We’re talking about robots designed to be autonomous; hence the main purpose of building them in the first place is that you don’t have to tell them what to do. When an unforeseen situation arises, a capacity for deeper, on-board reasoning must be in place, because no finite rule set created ahead of time by humans can anticipate every possible scenario.”

Scheutz added, “If we can computationally model aspects of moral cognition in machines, we may be able to equip robots with the tools for better navigating real-world dilemmas.”

Lin's Wired article suggests just how difficult the moral problems robots may have to solve can be. He approaches the question through the morality of autonomous-vehicle crash algorithms, asking whether an autonomous (and presumably unoccupied) vehicle faced with the certainty of a crash should swerve left and hit a Volvo SUV or swerve right and hit a Mini Cooper. Physics favors the former, since the larger, sturdier vehicle is better able to absorb the impact, but an algorithm implementing that approach would turn SUVs into targeted vehicles through no fault of their owners, drivers, or passengers.

He presents yet another scenario, in which an autonomous vehicle facing an inevitable accident must crash into either a motorcycle whose rider is wearing a helmet or one whose rider isn't. Crash optimization suggests the former, since the helmeted rider is more likely to survive the collision, but that would penalize helmet-wearing riders for acting responsibly. Further, should the details of an algorithm that targets helmeted riders become widely known, otherwise responsible riders might choose to forgo helmets.
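A toy sketch makes the perverse incentive explicit. This is not any real vehicle's software; the harm model and the fatality probabilities below are invented for illustration:

```python
# Hypothetical sketch of the naive "crash optimization" logic Lin
# critiques: choose the collision target with the lowest expected harm.
# The fatality probabilities are invented for illustration.

def pick_target(options: dict) -> str:
    """options maps each possible target to an assumed fatality
    probability; the optimizer picks the lowest-harm choice."""
    return min(options, key=options.get)

# Because a helmeted rider is likelier to survive, minimizing expected
# harm systematically "targets" the rider who acted responsibly:
print(pick_target({"helmeted rider": 0.3, "unhelmeted rider": 0.8}))
# prints: helmeted rider
```

The same logic singles out the Volvo in the earlier scenario: whichever option minimizes expected harm also targets whoever made the safer choice.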

Lin suggests a couple of possibilities. One is to program the autonomous vehicle, when an accident is inevitable, to choose its collision target at random. Another is to withhold information—for example, not letting the crash algorithm know the make and model of the SUV and the Mini Cooper, or whether a motorcyclist is wearing a helmet. Both approaches have problems.
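Sketched in the same illustrative spirit (again, not a real autonomous-vehicle API; the function names and attribute keys are assumptions), Lin's two proposals might look like this:

```python
# Hypothetical sketches of Lin's two proposals; everything here is
# illustrative, not production crash-avoidance code.
import random

def pick_target_randomly(targets: list) -> str:
    """Proposal 1: when a crash is unavoidable, choose at random,
    so no class of road user is systematically singled out."""
    return random.choice(targets)

def mask_attributes(target: dict) -> dict:
    """Proposal 2: strip the attributes (make, model, helmet use)
    that a crash optimizer could otherwise discriminate on."""
    hidden = {"make", "model", "helmet"}
    return {key: value for key, value in target.items() if key not in hidden}
```

Even at this scale the trade-offs show: randomization gives up crashes that could have been made less harmful, and masking helps only if the planner can truly be kept from inferring the hidden attributes from its sensors.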

Lin concludes his article by noting, “Ethics and expectations are challenges common to all automotive manufacturers and tier-one suppliers who want to play in this emerging field…. As the first step toward solving these challenges, creating an open discussion about ethics and autonomous cars can help raise public and industry awareness of the issues, defusing outrage (and therefore large lawsuits) when bad luck or fate crashes into us.”
