In 1997, IBM’s supercomputer “Deep Blue” famously defeated reigning world chess champion Garry Kasparov in a heavily publicized match that marked a milestone for artificial intelligence, though Deep Blue relied largely on brute-force search rather than machine learning. Learning systems, meanwhile, had already mastered backgammon with TD-Gammon in the early 1990s, and in the two decades since Deep Blue, computers have learned to play several Atari video games, such as Breakout, Pinball, Space Invaders and Pong.
Now, in another milestone in machine learning and pattern recognition, a computer program developed by researchers at the Alphabet-owned Google DeepMind division defeated the three-time European champion of the ancient Chinese game Go — which has long been considered the most challenging game for a computer to master.
The team that developed the computer program, named AlphaGo, detailed the findings in an article published in the journal Nature.
Go is believed to have been invented in China nearly 2,500 years ago. It’s played by placing black and white stones on the intersections of a 19-by-19 grid. When a player completely surrounds an opponent’s stone or group of stones, those pieces are captured. The goal of the game is to control more of the board than your opponent.
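The capture rule above is simple to state: a connected group of stones is captured when it has no adjacent empty points (called liberties) left. Here is a minimal sketch of that check, assuming a simple board encoding of a dict mapping (row, col) to 'B' or 'W', with empty points absent; the helper names are illustrative, not from any Go library.

```python
# Sketch of Go's capture rule on an assumed board encoding:
# board is a dict {(row, col): 'B' or 'W'}; empty points are simply absent.

def group_and_liberties(board, point, size=19):
    """Flood-fill the connected group containing `point`;
    return (group, liberties), where liberties are adjacent empty points."""
    color = board[point]
    group, liberties, frontier = {point}, set(), [point]
    while frontier:
        r, c = frontier.pop()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if not (0 <= nr < size and 0 <= nc < size):
                continue  # off the board
            neighbor = (nr, nc)
            if neighbor in group:
                continue
            if neighbor not in board:
                liberties.add(neighbor)        # empty point: a liberty
            elif board[neighbor] == color:
                group.add(neighbor)            # same color: extend the group
                frontier.append(neighbor)
    return group, liberties

def is_captured(board, point, size=19):
    """A group with zero liberties is captured."""
    _, liberties = group_and_liberties(board, point, size)
    return not liberties

# A lone white stone surrounded by black on all four sides is captured:
board = {(3, 3): 'W', (2, 3): 'B', (4, 3): 'B', (3, 2): 'B', (3, 4): 'B'}
print(is_captured(board, (3, 3)))  # True
print(is_captured(board, (2, 3)))  # False: the black stones still have liberties
```

This is only the capture mechanic; real scoring and the full rules (ko, territory counting) are considerably more involved.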
While the rules of the game are simpler than those of chess, the overall complexity is much higher, making it, in the words of DeepMind co-founder Demis Hassabis, a much more “intuitive” game.
“Go is a game of profound complexity,” AI researcher Hassabis wrote in a blog post Wednesday. “This complexity is what makes Go hard for computers to play, and therefore an irresistible challenge to AI researchers, who use games as a testing ground to invent smart, flexible algorithms that can tackle problems, sometimes in ways similar to humans.”
As DeepMind’s AI researchers explain, while chess offers some 20 possible choices per move, Go offers about 200. That means the game has more possible board positions than there are atoms in the observable universe.
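The scale of that comparison is easy to verify with rough arithmetic. Using the branching factors cited above (about 20 moves per turn in chess, about 200 in Go) and ballpark game lengths of 80 and 150 moves, which are assumptions for illustration, a quick sketch:

```python
import math

# Branching factors from the article; game lengths are ballpark assumptions.
CHESS_BRANCHING, CHESS_MOVES = 20, 80
GO_BRANCHING, GO_MOVES = 200, 150
ATOMS_IN_OBSERVABLE_UNIVERSE = 10 ** 80  # common order-of-magnitude estimate

chess_tree = CHESS_BRANCHING ** CHESS_MOVES  # roughly 10^104 game sequences
go_tree = GO_BRANCHING ** GO_MOVES           # roughly 10^345 game sequences

print(f"chess game tree ~ 10^{int(math.log10(chess_tree))}")
print(f"go game tree    ~ 10^{int(math.log10(go_tree))}")
print(go_tree > ATOMS_IN_OBSERVABLE_UNIVERSE)  # True, by hundreds of orders of magnitude
```

Even with generous rounding, the Go game tree dwarfs both the chess tree and the atom count, which is why exhaustive search of the kind Deep Blue used cannot work for Go.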
As a result, computer scientists have been trying to crack the game for years. Coincidentally, just a day before the DeepMind team announced its breakthrough, Facebook founder Mark Zuckerberg wrote in a post that his company’s AI team was “getting close” to achieving the same thing.
Tested against rival Go-playing programs, Google’s system won 499 out of 500 matches. And last October, when AlphaGo was pitted against Fan Hui — Europe’s top player — the program won all five games.
“The most significant aspect of all this for us is that AlphaGo isn’t just an ‘expert’ system built with hand-crafted rules; instead it uses general machine learning techniques to figure out for itself how to win at Go,” Hassabis wrote in the blog post. “While games are the perfect platform for developing and testing AI algorithms quickly and efficiently, ultimately we want to apply these techniques to important real-world problems.”
Since the methods AlphaGo used to master Go are general-purpose, the company hopes eventually to apply them to fields like healthcare, complex disease analysis and climate modeling.
For now, though, the researchers at DeepMind are preparing for another test — pitting AlphaGo against the world’s top Go player, Lee Sedol, in Seoul in March.