Professor criticizes empathy machines; humans respond to bot’s plea for life
You’ve heard of artificial intelligence, but how about artificial intimacy? Sherry Turkle, a professor in the program in Science, Technology, and Society at MIT, is critical of the idea. Writing in The New York Times, she recalls a conversation from years ago with a 16-year-old girl who was considering someday having a computer companion, mainly because she found people so disappointing.
“This girl had grown up in the time of Siri, a conversational object presented as an empathy machine—a thing that could understand her,” writes Turkle. “And so it seemed natural to her that other machines would expand the range of conversation.” However, Turkle writes, “These robots can perform empathy in a conversation about your friend, your mother, your child or your lover, but they have no experience of any of these relationships.”
Turkle continues, “In our manufacturing and marketing of these machines, we encourage children to develop an emotional tie that is sure to lead to an empathic dead end.” She has previously been critical of social robots for offering “…the illusion of companionship without the demands of friendship, the illusion of connection without the reciprocity of a mutual relationship,” especially for children.
Kevin Drum at Mother Jones offers a different take. “There’s nothing magic about human emotion,” he writes in response to Turkle’s article. “It’s all neurons and cortisol and dopamine and so forth, just like everything else in the human brain. If we feel like it, we can program analogues of human neurochemistry into an artificial intelligence and then send it out into the world to have all sorts of emotional experiences.”
Drum continues, “We wouldn’t bother with this if we were building a robot to assemble cars, but we would if we were building a robot to take care of a child.”
Drum references an experiment in which a computer elicited empathy. As described in The Verge, 89 volunteers were recruited to complete mundane tasks, such as organizing a weekly schedule, with the help of Nao, a small humanoid robot. The real point of the experiment came at the end, when the experimenters asked the volunteers to switch the robot off. For 43 of them, Nao protested: “No! Please do not switch me off!” Thirteen of those 43 refused to turn Nao off, and the remaining 30 took twice as long to do so as volunteers who heard no plea.
Drum comments that Nao’s simulated fearfulness was real enough that many of the volunteers refused to turn off the bot, and he expects robots will eventually develop simulated empathy at least as good as the real kind. After all, he notes, people simulate empathy all the time; actors and used-car salesmen, for example.
“In fact, here’s my prediction: artificial intelligence will eventually do everything better than humans do it,” Drum writes. “That includes the development and expression of emotions.”
But Turkle disagrees. “Technology challenges us to look at our human values,” she writes. “We can try to use technology to cure Parkinson’s or Alzheimer’s, which would be a blessing, but that blessing is not a reason to move from artificial brain enhancement to artificial intimacy.”
She concludes, “We program machines to appear more empathic. Being human today is about the struggle to remain genuinely empathic ourselves. To remember why it matters, to remember what we cherish.”