Games that machines now play best

Had you asked any serious chess player on 5 December 2017 what the strongest commercially available chess software was, most likely you would have heard names like Houdini, Komodo or Stockfish. The correct answer happened to be Stockfish, but all three programs certainly play chess better than any human, including current world champion Magnus Carlsen.

On 6 December that all changed. DeepMind, a British company now owned by Google that specialises in artificial intelligence, published a paper detailing the explosive entrance of a new champion in the computer chess arena. According to DeepMind, its AlphaZero neural network was taught only the rules of chess, then allowed to play against itself for a mere four hours. With that, AlphaZero had learned enough to obliterate Stockfish. In a 100-game match, AlphaZero scored 28 wins and 72 draws, a staggering achievement even for advanced AI.

Traditional chess engines have long depended on massive opening theory ‘books’ and endgame ‘tablebases’ that the software consults at appropriate points during a game. Middlegame decisions are made using a search tree: the engine looks ahead through millions of candidate moves, numerically evaluating and ranking them. The criteria an engine uses to decide its best move in a given position are programmed into the software by humans.
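As a concrete illustration, the Python sketch below shows the skeleton of that kind of fixed-depth search. It is a minimal sketch only: `legal_moves`, `apply` and `evaluate` are hypothetical placeholders standing in for the move generator and the human-written evaluation function a real engine supplies, and production engines layer many refinements (alpha-beta pruning, move ordering, transposition tables) on top.

```python
# Minimal fixed-depth tree search (negamax form). `legal_moves`,
# `apply` and `evaluate` are hypothetical hooks a real engine supplies;
# `evaluate` embodies the human-coded criteria mentioned above.

def negamax(position, depth):
    """Score `position` from the side to move's point of view,
    looking `depth` plies ahead."""
    moves = legal_moves(position)
    if depth == 0 or not moves:
        return evaluate(position)  # leaf node: apply the hand-coded scoring
    # Each reply is scored from the opponent's view, hence the negation.
    return max(-negamax(apply(position, m), depth - 1) for m in moves)

def best_move(position, depth=4):
    """Rank candidate moves by their search score and return the best."""
    return max(legal_moves(position),
               key=lambda m: -negamax(apply(position, m), depth - 1))
```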

AlphaZero used neither opening databases nor endgame tables, and nothing about the game was pre-programmed. It simply ‘taught’ itself chess. In a few hours, playing through (presumably) millions of games against itself, the AI remembered its successes as well as its failures, continuously updating its knowledge of the game.
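The sketch below is a toy illustration of that self-play loop, not AlphaZero’s actual method: AlphaZero pairs a deep neural network with Monte Carlo tree search, whereas this version just keeps a lookup table of position values. The game hooks (`initial_position`, `legal_moves`, `apply`, `is_over`, `result`) are hypothetical placeholders for any two-player game.

```python
import random

# Toy tabular self-play learner. The game hooks are hypothetical;
# AlphaZero itself replaces this lookup table with a deep neural
# network guided by Monte Carlo tree search.

values = {}  # position -> learned estimate for the player who just moved

def play_one_game(exploration=0.1, learning_rate=0.05):
    position = initial_position()
    history = []
    while not is_over(position):
        moves = legal_moves(position)
        if random.random() < exploration:
            move = random.choice(moves)        # occasionally try something new
        else:                                  # otherwise exploit what's known
            move = max(moves,
                       key=lambda m: values.get(apply(position, m), 0.0))
        position = apply(position, move)
        history.append(position)
    # Propagate the result back through the game: successes raise the
    # value of the positions that led to them, failures lower it.
    score = result(position)  # +1 if the player who made the last move won
    for pos in reversed(history):
        old = values.get(pos, 0.0)
        values[pos] = old + learning_rate * (score - old)
        score = -score        # players alternate, so flip the perspective
```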

While DeepMind hasn’t released enough information to fully calculate AlphaZero’s chess-playing strength, it appears to be vastly superior to anything carbon-based. Chess prowess is measured using the Elo rating system. A beginner who has just learned the rules might have an Elo rating of 400 to 700. A player with a few months’ experience could play at about 1,000. An expert player is rated 1,800 to 2,000. Grandmasters are rated 2,500 and higher, with the top players in the world at 2,700 to 2,800. The best rating ever achieved by a human is in the 2,880 range. Stockfish was estimated to be in the 3,300 range, routinely trouncing all human opposition. AlphaZero, when finally assessed properly, could well be in the 4,000 range.
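For readers who want to sanity-check those figures, the Elo system’s expected-score formula is simple enough to compute directly. The short Python sketch below plugs in the ratings quoted above, which are estimates rather than official figures.

```python
# Elo expected-score formula: a 400-point rating gap corresponds to
# roughly 10-to-1 odds in the stronger player's favour.

def expected_score(rating_a, rating_b):
    """Expected score for player A (win = 1, draw = 0.5, loss = 0)."""
    return 1 / (1 + 10 ** ((rating_b - rating_a) / 400))

# The ratings below are this article's estimates, not official figures.
print(expected_score(3300, 2880))  # Stockfish vs the best-ever human: ~0.92
print(expected_score(4000, 3300))  # a 4,000-rated AlphaZero vs Stockfish: ~0.98
```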

Chess isn’t the first ancient strategy game DeepMind has turned upside down. In 2016 its AlphaGo program defeated the reigning world Go champion, Lee Sedol. AI experts had previously predicted a program capable of beating a 9-dan (the highest possible ranking) Go professional was at least a decade away.

When supremacy at Go was wrested away from human beings, the game joined an ever-growing list of strategy games now played better by computers.

In the chess world, Garry Kasparov famously lost a match under normal chess time controls to IBM’s Deep Blue in 1997. Backgammon software was playing at or near world-champion level as far back as the late 1980s. Checkers, or 8×8 draughts, fell to the machines in 1995 when the University of Alberta’s Chinook program defeated top human player Don Lafferty. Chinook’s developers went on to ‘solve’ checkers in 2007, proving the game always ends in a draw with perfect play from both sides.

In 2017, a poker-playing program specialising in heads-up no-limit hold ’em, called Libratus, soundly defeated a team of four world-class hold ’em experts during a multi-day tournament in which more than 120,000 hands were dealt.

A slightly simpler version of the game, heads-up limit hold ’em, had been essentially solved two years earlier, again by researchers at the University of Alberta.

Other solved board games include Connect Four, in which the first player can always force a win. Othello is technically not yet solved, but perfect play by both sides will almost certainly result in a draw.

Chess and Go, owing to their complexity, are not expected to be fully solved for years to come. The prediction for chess is a draw with perfect play, although some experts claim a win for white (with its first-move advantage) may be inevitable. Go is still too complex for any meaningful guess as to its solved state.

At least we humans still have table tennis, right? Well, we did. 

At the 2018 Consumer Electronics Show in Las Vegas, Japanese technology company Omron unveiled Forpheus, a table-tennis robot using advanced cameras and artificial intelligence to track and return any ball hit its way. By interpreting body language, Forpheus could even predict when its opponent intended to ‘smash’ the ball back over the net. I heard no reports of it losing a single game. 
