Game-playing DeepMind AI can beat top humans at chess, Go and poker

Shall we play a game? (Image: mccool/Alamy)

A single artificial intelligence can beat human players in chess, Go, poker and other games that require a variety of strategies to win. The AI, called Student of Games, was created by Google DeepMind, which says it is a step towards an artificial general intelligence capable of carrying out any task with superhuman performance.

Martin Schmid, who worked on the AI at DeepMind but is now at a start-up called EquiLibre Technologies, says that the Student of Games (SoG) model can trace its lineage back to two projects. One was DeepStack, an AI created by a team including Schmid at the University of Alberta in Canada, which was the first to beat professional human players at poker. The other was DeepMind's AlphaZero, which has beaten the best human players at games like chess and Go.

The difference between those two models is that one focused on imperfect-knowledge games, those in which players don't know the state of all other players, such as their hands in poker, while the other focused on perfect-knowledge games like chess, in which both players can see the position of all pieces at all times. The two require fundamentally different approaches. DeepMind hired the whole DeepStack team with the aim of building a model that could generalise across both types of game, which led to the creation of SoG.

Schmid says that SoG begins as a blueprint for how to learn games and then improve at them through practice. This starter model can then be set loose on different games, teaching itself how to play against another version of itself, learning new strategies and gradually becoming more capable. But while DeepMind's previous AlphaZero could only adapt to perfect-knowledge games, SoG can adapt to both perfect and imperfect-knowledge games, making it far more generalisable.
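The article doesn't include any code, but the loop Schmid describes, in which a model plays against a copy of itself and improves from the outcomes, can be illustrated with a toy example. The Python sketch below is purely hypothetical (single-pile Nim, a simple value table and an incremental update rule are illustrative simplifications, not Student of Games itself); it shows an agent learning a game with no instruction beyond the results of its own self-play.

# A toy, self-contained illustration of self-play learning, not DeepMind's algorithm.
# One agent plays both sides of single-pile Nim (take 1-3 stones, taking the last
# stone wins) and nudges the value of each move toward the outcomes it produces.
import random
from collections import defaultdict

PILE = 10                      # stones at the start of every game
values = defaultdict(float)    # estimated value of each (stones_left, move) pair

def choose(stones, explore=0.1):
    moves = [m for m in (1, 2, 3) if m <= stones]
    if random.random() < explore:
        return random.choice(moves)          # occasionally explore a random move
    return max(moves, key=lambda m: values[(stones, m)])

def self_play_episode():
    stones, player = PILE, 0
    history = {0: [], 1: []}                 # moves made by each copy of the agent
    while stones > 0:
        move = choose(stones)
        history[player].append((stones, move))
        stones -= move
        if stones == 0:
            winner = player
        player = 1 - player
    # reward the winning side's moves and penalise the losing side's
    for p in (0, 1):
        reward = 1.0 if p == winner else -1.0
        for key in history[p]:
            values[key] += 0.1 * (reward - values[key])

for _ in range(20000):
    self_play_episode()

# After enough self-play the agent usually favours moves that leave the opponent
# a multiple of 4 stones, the known winning strategy for this game.
print(max((1, 2, 3), key=lambda m: values[(PILE, m)]))   # usually prints 2 (10 -> 8)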

The researchers tested SoG on chess, Go, Texas hold'em poker and a board game called Scotland Yard, as well as Leduc hold'em poker and a custom-made version of Scotland Yard with a different board, and found that it could beat several existing AI models and human players. Schmid says it should be able to learn to play other games as well. "There's many games that you can just throw at it and it would be really, really good at it."

This wide-ranging ability comes at a slight cost in performance compared with DeepMind's more specialised algorithms, but SoG can nonetheless easily beat even the best human players at most games it learns. Schmid says that SoG learns to play against itself in order to improve at games, but also to explore the range of possible scenarios from the current state of a game, even if it is playing an imperfect-knowledge one.

"When you're in a game like poker, it's so much harder to figure out: how the hell am I going to search [for the best strategic next move in a game] if I don't know what cards the opponent holds?" says Schmid. "So there was some set of ideas coming from AlphaZero, and some set of ideas coming from DeepStack, into this big, big mix of ideas, which is Student of Games."

Michael Rovatsos at the University of Edinburgh, UK, who wasn't involved in the research, says that, while impressive, there is still a very long way to go before an AI can be thought of as generally intelligent, because games are settings in which all rules and behaviours are clearly defined, unlike the real world.

"The important thing to highlight here is that it's a controlled, self-contained, artificial environment where what everything means, and what the outcome of every action is, is crystal clear," he says. "The problem is a toy problem because, while it may be very complicated, it's not real."
