AI gaming program smashes all comers, human and not

Not content with beating us at complex games, computers now seem intent on belittling us.

Last year Cosmos reported on the development of AlphaGo Zero, a program that can take us to the cleaners at the fiendishly difficult game of Go without any human training data (hence the “zero” bit). It works out the strategy itself.

AlphaGo Zero showed us its potential with a 100-to-nought rout of its predecessor, which had benefited from human input, and not surprisingly this was hailed as a milestone in artificial intelligence research. It’s worth reading Cathal O’Connell’s original report to see why.

Now, however, the developers, Britain’s DeepMind, have upped the ante again with the release of AlphaZero (note that this time the “Go” bit is missing), which applies the same self-taught approach to a range of games.

It was able to learn Go, chess and shogi (Japanese chess) simply by playing against itself repeatedly until each was mastered and, according to developer David Silver, within a few hours it could beat state-of-the-art AI programs each specialised in just one of those games.
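For readers curious what “playing against itself” amounts to, the sketch below gives the rough shape of a self-play reinforcement learning loop. It is a toy illustration only, not DeepMind’s code: the names (GameState, choose_move, update_network) and the tiny numbers are invented stand-ins, and the real system replaces them with a deep neural network, a serious search procedure and millions of games.

```python
# Toy sketch of the self-play idea; all names and numbers are hypothetical.
import random

class GameState:
    """Minimal stand-in for a two-player game position."""
    def __init__(self):
        self.moves_played = 0
    def legal_moves(self):
        return ["a", "b", "c"]            # placeholder moves
    def play(self, move):
        self.moves_played += 1
    def finished(self):
        return self.moves_played >= 10    # toy termination rule
    def result(self):
        return random.choice([1, 0, -1])  # win / draw / loss for player one

def choose_move(state, network_strength):
    """Stand-in for a network-guided search; here it just picks at random."""
    return random.choice(state.legal_moves())

def update_network(network_strength, game_record, outcome):
    """Stand-in for the reinforcement-learning update after each game."""
    return network_strength + 0.001 * outcome

network_strength = 0.0
for game in range(1000):                  # the real system plays vastly more
    state, record = GameState(), []
    while not state.finished():
        move = choose_move(state, network_strength)
        record.append(move)
        state.play(move)
    network_strength = update_network(network_strength, record, state.result())
```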

Writing in a paper published in the journal Science, Silver and colleagues rather casually suggest that they “generalised” the AlphaGo Zero approach to create a single algorithm that “can achieve superhuman performance” in many challenging games.

“AlphaZero replaces the handcrafted knowledge and domain-specific augmentations used in traditional game-playing programs with deep neural networks, a general-purpose reinforcement learning algorithm, and a general-purpose tree search algorithm,” they report.

The new program uses the same network architecture as its predecessor but has to account for some significant differences between the games. Unlike Go, for example, chess and shogi can end in draws, and their rules are position-dependent.

AlphaZero also cannot assume symmetry, because chess and shogi aren’t symmetric: pawns only move forward, for example, and castling works differently on the kingside and queenside.

The developers seem to have taken this in their stride, however. 

They report that AlphaZero actually searches significantly fewer positions per second than the game-specific programs it thrashes in competition, often compensating “by using its deep neural network to focus much more selectively on the most promising variations”.
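For the technically inclined, that “focus selectively” idea can be illustrated with the kind of selection rule used in this family of programs, in which each candidate move is scored by combining its average value so far with the prior probability the network assigns it, so moves the network rates highly get explored first. The sketch below is a simplified illustration under that assumption, not DeepMind’s implementation; the move names, prior values and the puct_score helper are invented for the example.

```python
# Illustrative network-guided move selection; the data below is invented.
import math

def puct_score(child_value, child_visits, prior, parent_visits, c_puct=1.5):
    """Combine the search's running value estimate with the network's prior.
    High-prior moves get explored early, even before they have many visits."""
    exploration = c_puct * prior * math.sqrt(parent_visits) / (1 + child_visits)
    return child_value + exploration

# Three candidate moves, each with a prior from a hypothetical policy network.
candidates = {
    "Nf3": {"value": 0.10, "visits": 40, "prior": 0.55},
    "h4":  {"value": 0.02, "visits": 3,  "prior": 0.05},
    "d4":  {"value": 0.08, "visits": 30, "prior": 0.35},
}
parent_visits = sum(c["visits"] for c in candidates.values())

best = max(candidates, key=lambda m: puct_score(
    candidates[m]["value"], candidates[m]["visits"],
    candidates[m]["prior"], parent_visits))
print("search expands:", best)   # the high-prior move wins the tie-break
```

In effect the network prunes the search before it starts, which is why the program can examine far fewer positions per second than a brute-force engine and still come out ahead.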

Which is rather a human thing to do, and it raises the question of how and where we might be able to maintain some advantage.

It has been noted that while chess, shogi and Go are highly complex, they share characteristics that make them easier for AI systems to get to grips with: notably, they are “perfect information” games, in which everything needed to make a decision is visible to both players.

It’s time to give them something really difficult to play. 
