The program, known as AlphaGo, defeated European champion Fan Hui in five straight games of Go, one of the hardest tests for artificial intelligence research. (Image courtesy of Thinkstock)

Defeating Top Go Player, Artificial Intelligence Narrows Learning Gap with Humans

Feb. 4, 2016
In a major AI breakthrough, a computer program has defeated a top player of Go, an ancient board game that requires a devilish mix of strategy and instinct.

Despite our murky understanding of how humans form concepts, researchers from DeepMind—an artificial intelligence company under Google parent Alphabet—have developed a program that appears to bring human and machine learning closer than ever before. In a major breakthrough, the program defeated a top player of Go, an ancient board game that demands a mix of strategy and instinct that computers have found extremely difficult to master.

In a competition held last fall, the program, called AlphaGo, defeated European champion Fan Hui in five straight games on a full-size board. The event marked the first time that a computer had beaten a professional under those conditions; other programs had previously defeated professionals only on smaller, unofficial boards or with other handicaps. The results were published in the journal Nature last week.

Beating humans at board games has long served as a benchmark for AI research. AlphaGo has widely been compared to IBM’s Deep Blue program, which famously defeated chess champion Garry Kasparov in 1997. Deep Blue used what is known as brute-force processing to defeat Kasparov, combing through vast numbers of possible moves and the outcomes of those moves.
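
To see what brute-force lookahead means in code, here is a minimal Python sketch of minimax search. A toy game (a pile of counters; each player takes one or two; whoever takes the last counter wins) stands in for chess purely to keep the example self-contained, and the sketch illustrates the idea rather than Deep Blue's actual program:

# Minimal sketch of brute-force minimax lookahead: examine every legal move
# and every reply, all the way to the end of the game. The toy game (a pile
# of counters; take 1 or 2; whoever takes the last counter wins) stands in
# for chess purely to keep the example self-contained.

def minimax(pile, maximizing):
    """Return the game's value under perfect play, for the maximizing side."""
    if pile == 0:
        return -1 if maximizing else 1   # the previous mover took the last counter
    moves = [t for t in (1, 2) if t <= pile]
    values = [minimax(pile - t, not maximizing) for t in moves]
    return max(values) if maximizing else min(values)

print(minimax(10, True))   # 1: exhaustive search; infeasible at chess or Go scale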

Although brute-force processing was an effective approach to chess, it has not worked for Go. The game, which is thought to have originated in China more than 2,500 years ago, involves placing black and white stones on a grid, with each player vying to control more than half the board.

Go is considered one of the hardest tests for AI because there are far more possible moves than in chess. “Another way of viewing the complexity of Go is that the number of possible configurations on the board is more than the number of atoms in the universe,” says Demis Hassabis, chief executive of DeepMind, in a YouTube video.
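
That comparison is easy to sanity-check in a few lines of Python. Each of the board's 19 × 19 = 361 points can be empty, black, or white, so 3^361 is a rough (over)count of the configurations, while about 10^80 is the commonly cited estimate for atoms in the observable universe:

# Rough sanity check of the configurations-versus-atoms comparison above.
# 3**361 overcounts (not every configuration is a legal position), but it
# sets the scale.
configurations = 3 ** 361
atoms_in_universe = 10 ** 80                 # common order-of-magnitude estimate

print(len(str(configurations)) - 1)          # 172, i.e. about 10**172
print(configurations > atoms_in_universe)    # True, by ~92 orders of magnitude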

For that reason, the researchers set out to create a program that could narrow its search area, recognizing patterns and making the kind of intuitive (and almost instinctual) judgments that define how humans approach the game.

European Go champion Fan Hui shakes hands with Demis Hassabis, CEO of DeepMind, an artificial intelligence company under Google parent Alphabet. Hassabis says that the AlphaGo breakthrough could lead to more general AI programs. (Image courtesy of DeepMind, YouTube)

Within AlphaGo, the research team combined two different neural networks—programs built from millions of interconnected nodes that attempt to process data the way a human brain does. These networks employ so-called “deep learning” to help the program form abstract concepts by studying huge amounts of data.
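
As a rough picture of how the two networks divide the labor, one (the “policy” network) proposes promising moves and the other (the “value” network) judges who is ahead. The Python sketch below, using the numpy library, is purely illustrative: the shapes, names, and random single-layer “networks” are stand-ins, not DeepMind's architecture.

import numpy as np

# Toy sketch of the two network roles described above. Everything here is
# illustrative: AlphaGo's real policy and value networks were deep
# convolutional networks trained on data, not single random layers.
BOARD_POINTS = 19 * 19                       # 361 points on a full board

rng = np.random.default_rng(0)
policy_weights = rng.normal(size=(BOARD_POINTS, BOARD_POINTS))
value_weights = rng.normal(size=BOARD_POINTS)

def policy_network(board):
    """Map a board (flattened 361-vector) to one probability per point."""
    logits = policy_weights @ board
    exp = np.exp(logits - logits.max())      # softmax over all 361 points
    return exp / exp.sum()

def value_network(board):
    """Map a board to a single number in (-1, 1): who seems to be winning."""
    return float(np.tanh(value_weights @ board))

# -1 = white stone, 0 = empty, 1 = black stone at each point
board = rng.choice([-1.0, 0.0, 1.0], size=BOARD_POINTS)
print(policy_network(board).argmax())        # most promising point to play
print(value_network(board))                  # estimated value of the position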

AlphaGo's first neural network analyzed around 30 million positions from human games, learning to predict which move an expert would make next. The second part of the program tested what it had learned from watching humans, playing thousands of games against itself. Through this self-play, the program taught itself to evaluate board positions and even developed new strategies on its own. While the first part of AlphaGo’s education was supervised by humans, this second part, a form of reinforcement learning, required no human guidance.
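
The self-play idea can be sketched in miniature: play against yourself, then nudge your evaluation of each position you visited toward the way the game actually ended. The toy tabular learner below, which reuses the take-one-or-two counters game from the earlier sketch, is a drastic simplification of AlphaGo's reinforcement learning, not its actual method:

import random

# Drastically simplified sketch of learning from self-play: play full games
# against yourself, then pull the value estimate of every visited position
# toward the actual outcome. (Tabular values on a toy counters game stand
# in for AlphaGo's deep networks.)
PILE, LEARNING_RATE = 10, 0.1
value = {p: 0.0 for p in range(PILE + 1)}   # value for the player to move at pile p

for _ in range(5000):
    pile, history = PILE, []
    while pile > 0:
        moves = [t for t in (1, 2) if t <= pile]
        if random.random() < 0.1:           # explore occasionally
            take = random.choice(moves)
        else:                               # otherwise leave the opponent the
            take = min(moves, key=lambda t: value[pile - t])  # worst position
        history.append(pile)
        pile -= take
    outcome = 1.0                           # the last mover won the game
    for p in reversed(history):             # walk back, alternating winners
        value[p] += LEARNING_RATE * (outcome - value[p])
        outcome = -outcome

print({p: round(v, 2) for p in sorted(value)})  # piles 3, 6, 9 trend toward -1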

Unlike the brute-force approach, this style of machine learning allowed the research team to restrict the depth of AlphaGo’s tree search. The program looked only about 20 moves ahead, ensuring that it did not crumble under the weight of so many possibilities.
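
In code, the two savings look like this: the policy caps the breadth of the search (only the most promising moves are explored) and the value function caps its depth (positions are judged rather than played out to the end). The sketch below illustrates the principle on the same toy counters game, with hand-written stand-ins for the two networks; AlphaGo's real search was a far more sophisticated Monte Carlo tree search:

# Sketch of how the two networks tame the search (an illustration of the
# principle, not AlphaGo's actual Monte Carlo tree search). The policy
# prunes the breadth of the search; the value function caps its depth.
# Hand-written stand-ins on the toy counters game keep the example runnable.

def toy_policy(pile, moves):
    """Pretend policy: prefer moves leaving the opponent a multiple of 3.
    (A real policy network would learn such preferences from data.)"""
    return {m: 1.0 if (pile - m) % 3 == 0 else 0.1 for m in moves}

def toy_value(pile):
    """Pretend judgment of the position for the player to move."""
    return -1.0 if pile % 3 == 0 else 1.0

def search(pile, depth=20, breadth=1):
    if pile == 0:
        return -1.0                    # previous player took the last counter
    if depth == 0:
        return toy_value(pile)         # depth cap: judge instead of searching on
    moves = [t for t in (1, 2) if t <= pile]
    scores = toy_policy(pile, moves)
    best = sorted(moves, key=scores.get, reverse=True)[:breadth]  # breadth cap
    return max(-search(pile - t, depth - 1, breadth) for t in best)  # negamax

print(search(10))   # 1.0: the player to move can force a win from 10 counters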

European Go champion Fan Hui talks with DeepMind researchers about one of his games with AlphaGo. (Image courtesy of DeepMind, YouTube)

The researchers compared the new approach to the way that humans learn from experience and make creative choices with new knowledge. “The search process itself is not based on brute force,” says David Silver, a researcher at DeepMind. “It’s based on something more akin to imagination.”

Using this hybrid algorithm, AlphaGo won 99.8% of the games it played against other Go programs. Next came the defeat of Fan Hui, who is ranked 633rd in the world. In a blog post last week, Alphabet said that AlphaGo was next scheduled to play against Lee Sedol, the fifth-ranked player in the world, in March.

Although there has been significant progress in the study of neural networks, AlphaGo’s success was somewhat surprising to other researchers. Rémi Coulom, who has spent nearly a decade developing a Go-playing algorithm, said in an interview with Wired magazine last year that he thought it would take another 10 years for researchers to design a program that could defeat professionals.

Coulom’s prediction came last December, at which point AlphaGo had already won its first match against a professional, but before the results were published. Around the same time, a research team within Facebook published new advances in their own Go-playing program, and multiple companies in Silicon Valley had begun to pour money into artificial intelligence research. In November, for instance, Toyota established an AI research laboratory, pledging $1 billion in funding.

IBM's Watson program defeated Ken Jennings and Brad Rutter, the two most successful Jeopardy! champions, in 2011. The program is now being used to help physicians diagnose and treat patients, and DeepMind researchers say AlphaGo could be employed in similar applications in the future. (Image courtesy of IBM)

While it might not be the general AI that researchers have chased for decades, AlphaGo’s ability to recognize patterns could have far-reaching applications. DeepMind’s Silver says that the program could be used to scan through healthcare data, make diagnoses, and find new strategies for treatment. In a YouTube video published by Nature, Silver also said that it could be used to enhance Google services, such as a new personal assistant for smartphones and tablets. This contrasts with IBM’s Deep Blue, which was designed specifically for chess.

Neural networks are also being investigated for image recognition and human-computer interaction. John Giannandrea, head of machine learning at Google, who has focused in recent years on self-driving cars, has said that language understanding and summarization is the “holy grail” of AI research.

The Defense Advanced Research Projects Agency (DARPA), for instance, is sponsoring research into computer programs that extract ideas not only from words, but also from a person’s tone, facial expression, and gestures. The agency is also relying on games to test computer intelligence. One of the most ambitious goals is collaborative storytelling, in which computers and humans take turns adding to a story.

“This is a parlor game for humans, but a tremendous challenge for computers,” said Dr. Paul Cohen, a DARPA program manager who has written several books on AI. The point of the research is to make computers more like our partners and assistants, he says—not just tools to be prodded by a few clicks.

Doing so requires making computers more sensitive to the complexities of life. For Hassabis, games are the most natural way to do this. “Most games are fun and were designed because they’re microcosms of some aspect of life,” he says, “and they’re maybe slightly constrained or simplified in some way, but that makes them the perfect challenge as a stepping stone toward building general AI.”

