
Maybe it is time to fear the robots

Dec. 13, 2014

Is artificial intelligence a threat, as Stephen Hawking and others have warned? Gary Marcus, a professor of psychology and neuroscience at New York University and CEO of startup Geometric Intelligence, advises against panicking. Computers today, he writes in the Wall Street Journal, can balance checkbooks and land airplanes, but they lack a teenager’s ability to learn to play a new video game.

Maybe so, but some are pretty adept at old video games. As Tom Simonite points out in MIT Technology Review, researchers at DeepMind (now part of Google) demonstrated software last December that mastered the classic Atari games Pong, Breakout, and Enduro. The software employed deep learning (data processing through simulated neurons) and reinforcement learning (based on the work of animal psychologists including B.F. Skinner) to learn the games by trial and error.
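The trial-and-error idea behind reinforcement learning can be sketched in a few lines of code. The toy corridor environment, reward values, and hyperparameters below are illustrative assumptions, not DeepMind's actual setup; DeepMind paired this style of learning with a deep neural network in place of the simple lookup table used here.

```python
import random

# Tabular Q-learning sketch: an agent learns by trial and error to walk
# right down a short corridor to reach a reward. This illustrates only
# the reinforcement-learning half of DeepMind's approach; the deep-learning
# half would replace the Q table with a neural network reading game pixels.

N_STATES = 5          # corridor cells 0..4; the reward sits at cell 4
ACTIONS = [-1, +1]    # step left or step right
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1  # learning rate, discount, exploration

def train(episodes=500, seed=0):
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
    for _ in range(episodes):
        s = 0
        while s != N_STATES - 1:
            # Epsilon-greedy: mostly exploit what was learned, sometimes explore
            if rng.random() < EPSILON:
                a = rng.choice(ACTIONS)
            else:
                a = max(ACTIONS, key=lambda act: q[(s, act)])
            s2 = min(max(s + a, 0), N_STATES - 1)
            r = 1.0 if s2 == N_STATES - 1 else 0.0
            # Q-learning update: nudge the estimate toward reward plus
            # discounted value of the best follow-on action
            best_next = max(q[(s2, b)] for b in ACTIONS)
            q[(s, a)] += ALPHA * (r + GAMMA * best_next - q[(s, a)])
            s = s2
    return q

if __name__ == "__main__":
    q = train()
    # After training, the greedy policy steps right (+1) from every state
    policy = [max(ACTIONS, key=lambda act: q[(s, act)]) for s in range(N_STATES - 1)]
    print(policy)
```

Nothing tells the agent that "right" is correct; it discovers this purely from the reward signal, which is the same principle the Atari-playing software scaled up to raw screen pixels.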

Simonite quotes Stuart Russell, a professor and artificial intelligence specialist at University of California, Berkeley, as saying of the demonstration, “People were a bit shocked because they didn’t expect that we would be able to do that at this stage of the technology. I think it gave a lot of people pause.”

Google’s immediate plans for DeepMind technology seem innocuous—refine YouTube recommendations and improve mobile voice search. But DeepMind founder Demis Hassabis envisions much more for the technology, such as generating and testing hypotheses about disease in the lab. Simonite quotes Hassabis as saying, “One reason we don’t have more robots doing more helpful things is that they’re usually preprogrammed. They’re very bad at dealing with the unexpected or learning new things.”

Let’s hope when robots learn new things they are good things. Simonite notes that to that end, Hassabis is setting up an ethics board within Google to cope with the potential downsides of AI.

Marcus, writing in the Journal, still insists, “‘Superintelligent’ machines won’t be arriving soon.” Nevertheless, he cautions that computers needn’t be superintelligent to cause damage—a stock-market flash crash, for example.

Clearly, Marcus supports continued AI research. But he adds, “The real problem isn’t that world domination automatically follows from sufficiently increased machine intelligence; it is that we have absolutely no way, so far, of predicting or regulating what comes next.”

About the Author

Rick Nelson | Contributing Editor

Rick is currently Contributing Technical Editor. He was Executive Editor for EE from 2011 to 2018. Previously he served on several publications, including EDN and Vision Systems Design, and has received awards for signed editorials from the American Society of Business Publication Editors. He began his career as a design engineer at General Electric and Litton Industries and earned a BSEE degree from Penn State.
