For the first time, a computer has beaten a human professional at the game of Go — an ancient board game that has long been viewed as one of the greatest challenges for Artificial Intelligence.
Google DeepMind’s GPU-accelerated AlphaGo program beat Fan Hui, the European Go champion, five times out of five in tournament conditions.
Demis Hassabis, who oversees DeepMind, said in a recent article that DeepMind’s deep learning system runs well on a single computer equipped with a handful of GPU accelerators, but that for the match against Fan Hui the researchers used a larger network of machines spanning about 170 GPUs. This distributed setup both trained the system and played the actual games, drawing on the results of that training.
The team confirmed they will use the same setup when they take on the Go world champion in South Korea.
Rémi Coulom, the French researcher behind what was previously the world’s top artificially intelligent Go player, has spent the past decade trying to build a system capable of beating the world’s best humans, and now he believes such a system has arrived. “I’m busy buying some GPUs,” he says.