The match between Google DeepMind’s AlphaGo and the South Korean Go master Lee Se-dol was viewed as an important test of how far research into artificial intelligence has come in its quest to create machines smarter than humans.

SEOUL, South Korea — Computer, one. Human, zero.

A Google computer program stunned one of the world’s top players Wednesday in a round of Go, believed to be the most complex board game ever created.

The match — between Google DeepMind’s AlphaGo and the South Korean Go master Lee Se-dol — was viewed as an important test of how far research into artificial intelligence, or AI, has come in its quest to create machines smarter than humans.

“I am very surprised because I have never thought I would lose,” Lee said at a news conference in Seoul. “I didn’t know that AlphaGo would play such a perfect Go.”

Lee acknowledged defeat after 3½ hours of play.

Demis Hassabis, founder and chief executive of Google’s AI team DeepMind, the creator of AlphaGo, called the program’s victory a “historic moment.”

The match, the first of five scheduled through Tuesday, took place at a Seoul hotel amid intense news media attention. Hundreds of reporters — many from China, Japan and South Korea, where Go has been played for centuries — were covering it. Tens of thousands of people watched the contest live on YouTube.

Go is a two-player game of strategy said to have originated in China 3,000 years ago. Players compete to win more territory by placing black and white “stones” on the intersections of a grid of 19 lines by 19 lines.

The play is more complex than chess, with far more possible moves and sequences of play, and it demands superlative instinct and evaluation skills. Because of that, many researchers believed that mastery of the game by a computer was still a decade away.
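As a rough sense of that scale, here is a back-of-the-envelope sketch in Python using commonly cited approximations (general-knowledge figures, not numbers reported from the match): a 19-by-19 board has 361 intersections, each empty, black or white, and a typical Go position offers roughly 250 legal moves versus about 35 in chess.

import math

# Back-of-the-envelope comparison of Go and chess complexity.
# Branching-factor and game-length figures are commonly cited approximations.
intersections = 19 * 19                    # 361 points on a Go board
position_upper_bound = 3 ** intersections  # each point: empty, black, or white
print(f"intersections: {intersections}")
print(f"upper bound on positions: about 10^{int(math.log10(position_upper_bound))}")

go_branching, go_length = 250, 150         # typical legal moves per turn, moves per game
chess_branching, chess_length = 35, 80
print(f"Go game-tree size:    roughly {go_branching}^{go_length}")
print(f"chess game-tree size: roughly {chess_branching}^{chess_length}")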

Before the match, Lee said he could win 5-0 or 4-1, predicting that computing power alone could not win a Go match. Victory takes “human intuition,” something AlphaGo has not yet mastered, he said.

But after reading more about the program, he became less upbeat, saying AlphaGo appeared able to imitate human intuition to a certain degree and predicting that AI would eventually surpass humans in Go.

AlphaGo posed a unique challenge for Lee. In a human-versus-human Go match, which typically lasts several hours, the players “feel” each other out and evaluate each other’s style and psychology, he said.

“This time, it’s like playing the game alone,” Lee said on the eve of the match. “There are mistakes humans make because they are humans. If that happens to me, I can lose a match.”

To researchers who have been using games as platforms for testing AI, Go has remained the great challenge since the IBM-developed supercomputer Deep Blue beat world chess champion Garry Kasparov in 1997.

“Really, the only game left after chess is Go,” Hassabis said Wednesday.

AlphaGo made news when it routed the three-time European Go champion Fan Hui in October, 5-0.

But Lee, 33, is one of the world’s most accomplished professional Go players, with 18 international titles under his belt. He has called the European champion’s level in Go “near the top among amateurs.”

AlphaGo has become much stronger since its matches with Fan, its developers said. It challenged Lee because it was ready to take on someone “iconic,” “a legend of the game,” Hassabis said. Google has offered Lee $1 million if he wins the best-of-five series.

Hassabis said AlphaGo does not try to consider all the possible moves in a match, as a traditional AI machine such as Deep Blue does. Rather, it narrows its options based on what it has learned from the millions of matches it has played against itself and from 100,000 Go games available online.
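For readers curious what “narrowing its options” looks like in practice, here is a minimal, hypothetical Python sketch, not DeepMind’s code: the policy_scores function stands in for AlphaGo’s learned policy network (simulated here with random numbers), and the search keeps only the handful of moves that network rates most promising instead of examining all 361 intersections.

import random

BOARD_SIZE = 19  # a Go board has 19 x 19 intersections

def legal_moves(board):
    # Every empty intersection (ko and suicide rules omitted for brevity).
    return [(r, c) for r in range(BOARD_SIZE) for c in range(BOARD_SIZE)
            if board[r][c] == 0]

def policy_scores(moves):
    # Stand-in for the learned policy network: assigns each candidate move
    # a probability of being played. Here it is just random noise.
    raw = [random.random() for _ in moves]
    total = sum(raw)
    return {move: score / total for move, score in zip(moves, raw)}

def narrowed_candidates(board, top_k=10):
    # Instead of brute-forcing every move, Deep Blue-style, keep only the
    # top_k moves the policy considers promising before any deeper search.
    moves = legal_moves(board)
    scores = policy_scores(moves)
    return sorted(moves, key=scores.get, reverse=True)[:top_k]

if __name__ == "__main__":
    empty_board = [[0] * BOARD_SIZE for _ in range(BOARD_SIZE)]
    print(len(legal_moves(empty_board)), "legal moves on an empty board")
    print("search narrowed to:", narrowed_candidates(empty_board))

The real program also searches ahead and evaluates the resulting board positions; the sketch only illustrates the pruning idea Hassabis contrasts with Deep Blue’s exhaustive approach.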

Hassabis said a central advantage of AlphaGo was that “it will never get tired, and it will not get intimidated either.”

Kim Sung-ryong, a South Korean Go master who provided commentary during Wednesday’s match, said AlphaGo had made a clear mistake early on, but that unlike most human players, it did not lose its “cool.”

“It didn’t play Go as a human does,” he said. “It was a Go match with human emotional elements carved out.”

Lee said he knew he had lost the match after AlphaGo made a move so unexpected and unconventional that he thought “it was impossible to make such a move.”

Lee said he now thought his chances for victory in the five-match series were 50-50.

Some computer scientists said Wednesday that they had expected the outcome.

“I’m not surprised at all,” said Fei-Fei Li, a Stanford University computer scientist who is director of the Stanford Artificial Intelligence Laboratory. “How come we are not surprised that a car runs faster than the fastest human?”

On Tuesday, before the match began, Oren Etzioni, director of the Allen Institute for Artificial Intelligence, a nonprofit research organization in Seattle, conducted a survey of leading members of the Association for the Advancement of Artificial Intelligence.

Of 55 scientists, 69 percent believed the program would win, and 31 percent believed Lee would be victorious. Moreover, 60 percent believed that the achievement could be considered a milestone toward building human-level AI software.

That question remains one of the most hotly debated within the field of AI. Machines have had increasing success in the past five years at narrow humanlike capabilities, such as understanding speech and vision.

However, the goal of “strong AI” — defined as a machine with an intellectual capability equal to that of a human — remains elusive.