First went checkers, then fell chess. Now, a computer program has defeated the world’s top player in the ancient East Asian board game of Go — a major milestone for artificial intelligence that brings to a close the era of board games as benchmarks in computing.
At the Four Seasons Hotel in Seoul, Google DeepMind’s AlphaGo capped a 3-0 week on Saturday against Lee Sedol, a giant of the game. Lee and AlphaGo were to play again Sunday and Tuesday, but with AlphaGo having already clinched victory in the five-game match, the results are in and history has been made. It was a feat that experts had thought was still years away.
At the postgame news conference, Lee sat bolt upright, with a slight tone of resignation in his voice as he tried to explain his failure to get the better of the computer program that has taken from him bragging rights to global Go supremacy.
“I have to express my apologies,” Lee said, his voice quivering slightly. He seemed just as sad as after his previous two losses earlier in the week, but this time not so surprised.
“I misjudged the capabilities of AlphaGo and felt powerless.”
AlphaGo and Lee have been facing off in Seoul as a way of testing the computer program against a top-ranked player in a game that until now had been too complex for programs to master. Most observers had expected Lee to win the match, and AlphaGo’s success so far has surprised even its designers.
Seated next to Lee at the news conference, Google DeepMind co-founder Demis Hassabis said he was “stunned and speechless.” He complimented Lee on having put up a strong fight, saying, “AlphaGo can compute tens of thousands of positions per second. What’s incredible is that Lee Sedol can compete with that just with the power of his mind.”
Clad in a black blazer and sky blue shirt, Lee had a demeanor during the game that was as fidgety and nervous as during the previous two contests. He spent much of the game twiddling his fingers, leaning forward and back, while staring intently at the board.
All three games have been closely contested throughout, with AlphaGo pulling away toward the end. Lee continued his aggressive play on Saturday, while AlphaGo had clever counters to all his moves. “He went down swinging,” Chris Garlock, one of the live commentators, said of Lee’s performance Saturday.
The objective of Go is to use stones to acquire more territory on the board than one’s opponent. The game has been far more challenging for computer algorithms to master than chess or checkers because of its complexity and the intuitive nature of strong play. There are vastly more possible moves in Go than in those games, in which computer programs have long been able to beat the sharpest human players.
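To put “vastly more possible moves” in perspective, a rough back-of-the-envelope comparison can be computed directly. The branching factors and game lengths below are widely cited approximations, not exact figures:

```python
import math

# Rough, widely cited estimates: average legal moves per turn (b)
# and typical game length in moves (d). The full game tree then has
# on the order of b^d possible playouts.
games = {"checkers": (2.8, 70), "chess": (35, 80), "Go": (250, 150)}

for name, (b, d) in games.items():
    # Compare orders of magnitude: log10(b^d) = d * log10(b)
    print(f"{name}: roughly 10^{d * math.log10(b):.0f} possible games")
```

By this estimate, Go’s game tree is hundreds of orders of magnitude larger than chess’s, which is why exhaustive search was never an option.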
Having won the first three of the match’s five games, AlphaGo has clinched the series, but play will continue in accordance with Go convention. Saturday’s victory secured AlphaGo the $1 million in prize money, which Google DeepMind says it will donate to charities.
The game was watched by tens of thousands of spectators in South Korea and around the world on English- and Korean-language live feeds streamed on YouTube. Although Go is not a very popular game outside East Asia, it is played by many in South Korea, China and Japan, and the games have been keenly watched and dissected by millions.
A small group of South Korean Go professionals came to the decisive third contest for an up-close look at the evolution of their game. Ko Ju-yeon, 26, was impressed by AlphaGo’s strength and skill. “It knows everything and can play creatively,” she said.
“All but the very best Go players craft their style by imitating top players. AlphaGo seems to have totally original moves it creates itself,” she added.
Google co-founder Sergey Brin praised AlphaGo’s style of play after the match.
“When you watch really great Go players play, it is like a thing of beauty,” said Brin, whose company acquired DeepMind in 2014. “So I am very excited that we have been able to instill that kind of beauty in our computers.”
AlphaGo’s success is “a really big deal” not just because it beat a world champion player at what might be the world’s most complex board game, but because of how it did it, said Guy Suter, founder of AI communications start-up Notion.
In the early days of AI, when researchers programmed computers to play checkers or chess, the algorithms searched through as many sequences of possible moves as computing power allowed, then chose the move that led to the best projected outcome.
This is how Deep Blue, the computer that IBM and Carnegie Mellon University scientists developed to play chess, defeated Garry Kasparov in a milestone 1997 match — the first in which a computer beat a human world champion.
Through brute force computing power, the 1997 version of Deep Blue could evaluate 200 million chess positions per second, twice as fast as a 1996 version that lost to Kasparov. In some situations, it could plan as many as 20 moves ahead.
Such computing worked for simpler games, but with Go, there are so many possibilities that it isn’t practical even for a computer, Suter said.
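The exhaustive approach those early programs used can be sketched in a few lines. This is an illustrative toy, not Deep Blue’s actual code; real programs added pruning and elaborate position evaluation, but the backbone is plain minimax search over a game tree:

```python
def minimax(node, maximizing):
    """node is either a terminal score (int) or a list of child subtrees."""
    if isinstance(node, int):  # leaf: the final outcome of one line of play
        return node
    scores = [minimax(child, not maximizing) for child in node]
    # Our turn: pick the best score; opponent's turn: assume the worst for us.
    return max(scores) if maximizing else min(scores)

# A tiny hypothetical tree: two candidate moves, each with two replies.
tree = [[3, 5], [2, 9]]
print(minimax(tree, True))  # → 3: the left move guarantees at least 3
```

Every position in the tree is visited, which is exactly what becomes hopeless when the tree has on the order of 10^360 branches.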
Since Kasparov’s defeat, Go has stood as the new benchmark in board games, and one that couldn’t be reached without neural networks: software that learns patterns from data rather than following exhaustively hand-coded rules. This is the kind of AI powering AlphaGo, which learned by studying the moves of top human players, then trained against itself to get even better.
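As a drastic simplification of that two-stage idea — imitate human play, then improve — suppose the “policy” is just a table of move frequencies. AlphaGo’s real policy is a deep neural network trained on millions of positions, and its self-play stage uses reinforcement learning; the positions and moves below are made-up placeholders, but the shape of the pipeline is the same:

```python
from collections import Counter, defaultdict

policy = defaultdict(Counter)  # position -> how often each move was chosen

# Stage 1 (imitation): tally the moves humans played in each position.
# "corner", "approach", "pincer" are hypothetical labels for illustration.
human_games = [("corner", "approach"), ("corner", "pincer"), ("corner", "approach")]
for position, move in human_games:
    policy[position][move] += 1

def best_move(position):
    """Play the most frequently observed move for this position."""
    return policy[position].most_common(1)[0][0]

# Stage 2 (self-improvement) would pit copies of this policy against each
# other and boost the counts of moves that led to wins.
print(best_move("corner"))  # → approach
```

The learned-table stand-in captures why such a system can surprise its designers: its play comes from data and self-play, not from rules anyone wrote down.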
And while AlphaGo can only play Go, which may not seem useful to anyone who isn’t a board game enthusiast, Suter said its breakthrough will have implications for the AI world as a whole.
“There’s now a model we can take away from this and apply to so many different things,” Suter said.
If a computer can train itself to be the best Go player by watching the best human players, what’s to stop it from teaching itself how to be the best at anything else? Perhaps it could teach itself to recognize faces by looking at lots of different images, or how to drive a car more safely than humans.
“At the end of the day, winning a game of Go isn’t much different than safely driving a car across the country,” Suter said. “There are limitless possibilities of actions to take along the way, and a clearly defined failure state.”
Artificial intelligence experts believe computers are now ready to take on more than board games. Some are putting AI through the wringer with two-player no-limit Texas Hold ’Em poker to see how a computer program fares when it plays against an opponent whose cards it can’t see. Others, like Oren Etzioni at the Allen Institute for Artificial Intelligence, are administering standardized tests like the SAT to see if algorithms can understand and answer less predictable questions.
Impressive as AI has become, Suter said it is still mostly developed for narrow purposes.
“This isn’t general purpose intelligence — it’s not sentient,” he said, acknowledging that some people have taken the news of AlphaGo’s win with trepidation. “The software bot couldn’t just wake up one morning and decide it wants to learn how to use firearms. It’s not that kind of level of intelligence.”
Still, AlphaGo’s win is a milestone for AI researchers, said Murray Campbell, a research scientist at IBM who worked on Deep Blue, the first AI to beat a human chess champion.
“In a way, it’s the end of an era,” Campbell said. “They’ve shown that board games are more or less done and it’s time to move on.”
Lien reported from Los Angeles and special correspondent Borowiec from Seoul.