AlphaGo
In this handout image provided by Google, South Korean professional Go player Lee Se-Dol (right) puts his first stone against Google's artificial intelligence program, AlphaGo, during the Google DeepMind Challenge Match in Seoul, South Korea, March 10, 2016. Google via Getty Images

AlphaGo, a computer program developed by researchers at Google’s DeepMind division, has won its third straight game of Go against 18-time world champion Lee Sedol. With Saturday’s victory, the program has taken an unassailable lead in the five-game series against its human competitor, marking a watershed moment in the field of machine learning and artificial intelligence (AI) research.

“Folks, you saw history made here today,” Chris Garlock, one of the match’s English-language commentators, reportedly said when Lee finally conceded defeat in a game that lasted just over four hours.

With its three-games-to-none victory, Google will receive $1 million in prize money. However, the remaining two games in the series, scheduled for Sunday and Tuesday, will still be played out.

Go is believed to have been invented in China nearly 2,500 years ago. Players take turns placing black and white stones on the intersections of a grid, 19 by 19 lines in standard play. When one player’s stones completely surround an opponent’s stones, the surrounded stones are captured. The goal of the game is to control more of the board than your opponent.
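For readers curious how capture works mechanically, here is a minimal Python sketch under an assumed board representation (a dictionary mapping grid points to colors): a connected group of stones is captured the moment it has no adjacent empty points, known as liberties.

```python
# Minimal sketch of Go's capture rule (illustrative, not a full engine).
# `board` is an assumed representation: a dict mapping (row, col) to
# 'B' or 'W'; points absent from the dict are empty.

def has_liberties(board, start, size=19):
    color = board[start]
    seen, stack = {start}, [start]
    while stack:
        r, c = stack.pop()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < size and 0 <= nc < size:
                neighbor = board.get((nr, nc))
                if neighbor is None:
                    return True  # an empty adjacent point is a liberty
                if neighbor == color and (nr, nc) not in seen:
                    seen.add((nr, nc))
                    stack.append((nr, nc))
    return False  # no liberties anywhere in the group: it is captured

# A white stone in the corner with both of its liberties taken by black:
board = {(0, 0): 'W', (0, 1): 'B', (1, 0): 'B'}
print(has_liberties(board, (0, 0)))  # False, so the stone is captured
```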

While the rules of the game are simpler than those of chess, the overall complexity is much higher, making it, in the words of DeepMind’s co-founder Demis Hassabis, a much more “intuitive” game.

“Go is a game of profound complexity,” Hassabis recently wrote in a blog post. “This complexity is what makes Go hard for computers to play, and therefore an irresistible challenge to AI researchers, who use games as a testing ground to invent smart, flexible algorithms that can tackle problems, sometimes in ways similar to humans.”

As DeepMind’s AI researchers explain, while chess offers some 20 possible choices per move, Go offers about 200. As a result, there are more possible board configurations in Go than there are atoms in the observable universe.
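A rough back-of-the-envelope calculation makes the gap concrete. The branching factors come from the article; the typical game lengths (roughly 80 moves for chess and 150 for Go) and the 10^80 atom count are common order-of-magnitude estimates, not figures from the source:

```python
# Back-of-the-envelope game-tree sizes (orders of magnitude only).
chess_tree = 20 ** 80   # ~20 choices per move, ~80 moves per game
go_tree = 200 ** 150    # ~200 choices per move, ~150 moves per game

# Digits minus one gives the power of ten.
print(f"chess game tree ~ 10^{len(str(chess_tree)) - 1}")  # ~10^104
print(f"go game tree    ~ 10^{len(str(go_tree)) - 1}")     # ~10^345
print("atoms in the observable universe ~ 10^80")
```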

AlphaGo relies on two deep neural networks: a “policy network,” trained on millions of positions from games between expert human players to suggest promising moves, and a “value network,” which evaluates board positions to estimate the likelihood of winning. The two networks work in concert with a Monte Carlo tree search that explores only the most promising lines of play.
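As an illustration only, and not DeepMind’s implementation, the division of labor might look like this: the policy network proposes a short list of plausible moves, and the value network scores the position each one leads to. The names policy_net, value_net and board.play are hypothetical stand-ins for trained models and a game-state API.

```python
# Illustrative sketch of a policy network and a value network cooperating.
# policy_net, value_net, and board.play are hypothetical stand-ins; the
# real AlphaGo embeds both networks inside a Monte Carlo tree search.

def select_move(board, policy_net, value_net, top_k=5):
    # Policy network: a probability for each legal move, mimicking
    # how strong human players tend to respond to this position.
    move_probs = policy_net(board)  # dict of {move: probability}
    candidates = sorted(move_probs, key=move_probs.get, reverse=True)[:top_k]

    # Value network: estimated win probability of the position reached
    # by playing each candidate move.
    def winning_chance(move):
        return value_net(board.play(move))

    return max(candidates, key=winning_chance)
```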

Ultimately, researchers at DeepMind aim to create a machine with artificial general intelligence, which would be far broader in scope than single-purpose AIs like IBM’s chess-playing Deep Blue, and which could be applied to robotics, smartphone assistants, healthcare and climate modeling.