The blueprints for Libratus – the poker AI bot that crushed professional players in a Texas hold ’em tournament earlier this year – were published on Monday in a research paper.
The software's victory over humans sparked a lot of headlines as it demonstrated a computer mastering an imperfect information game. Unlike chess or Go where players can see all the board pieces at all times, poker players have to come up with a strategy based more on probabilities since they do not know their opponent’s cards.
Libratus emerged as the clear victor after playing more than 120,000 hands in a heads-up no-limit Texas hold ’em poker tournament back in February. The machine crushed its meatbag opponents at a rate of 14.7 big blinds per 100 hands, drawing in $1,776,250 in prize money.
Now, a paper published in Science reveals how Libratus was programmed. The approach taken by its creators Noam Brown, a PhD student, and Tuomas Sandholm, a professor of computer science, both at Carnegie Mellon University in the US, employed three algorithms.
“Our game-theoretic approach features application independent techniques: an algorithm for computing a blueprint for the overall strategy, an algorithm that fleshes out the details of the strategy for subgames that are reached during play, and a self-improver algorithm that fixes potential weaknesses that opponents have identified in the blueprint strategy,” the pair's paper stated.
The first algorithm, counterfactual regret minimization, was briefly discussed after the competition. It modeled a simplified, abstracted version of Texas hold 'em using a precomputed decision tree containing about 10¹³ nodes – much smaller than the roughly 10¹⁶¹ nodes needed to cover all possible unique decisions in the full no-limit game – and gradually learned to pick the best moves from the tree by playing simulated match after simulated match.
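To get a feel for the technique, here is a toy sketch of regret matching – the update rule at the core of counterfactual regret minimization – learning to exploit a fixed rock-paper-scissors strategy. This is an illustration, not Libratus's code: the real system applies this kind of update at every decision point of the poker tree across billions of self-play iterations.

```python
# Toy sketch of regret matching, the update rule at the heart of
# counterfactual regret minimization (CFR). Illustrative only, not
# Libratus's actual code.
ACTIONS = 3  # 0 = rock, 1 = paper, 2 = scissors

def payoff(a, b):
    """+1 if action a beats b, -1 if it loses, 0 on a tie."""
    return 0 if a == b else (1 if (a - b) % 3 == 1 else -1)

def strategy_from_regrets(regrets):
    """Regret matching: mix actions in proportion to positive regret."""
    positive = [max(r, 0.0) for r in regrets]
    total = sum(positive)
    return [p / total for p in positive] if total > 0 else [1.0 / ACTIONS] * ACTIONS

def train(opponent, iterations=1000):
    """Accumulate regrets against a fixed opponent mix; return the
    average strategy, which is what converges in CFR."""
    regrets = [0.0] * ACTIONS
    strategy_sum = [0.0] * ACTIONS
    for _ in range(iterations):
        strategy = strategy_from_regrets(regrets)
        for a in range(ACTIONS):
            strategy_sum[a] += strategy[a]
        # Expected payoff of each pure action, and of the current mix
        action_ev = [sum(opponent[b] * payoff(a, b) for b in range(ACTIONS))
                     for a in range(ACTIONS)]
        strategy_ev = sum(strategy[a] * action_ev[a] for a in range(ACTIONS))
        # "Regret" for a = how much better always playing a would have been
        for a in range(ACTIONS):
            regrets[a] += action_ev[a] - strategy_ev
    total = sum(strategy_sum)
    return [s / total for s in strategy_sum]

# An opponent who over-plays rock gets punished: the average strategy
# converges to (almost) always playing paper.
avg = train([0.4, 0.3, 0.3])
print(avg)
```

Against an adapting opponent rather than a fixed one, the same bookkeeping run by both players in self-play drives their average strategies toward an equilibrium – which is how the blueprint was computed.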
Similar hands were grouped together, Brown explained this week: “Intuitively, there is little difference between a King-high flush and a Queen-high flush. Treating those hands as identical reduces the complexity of the game and thus makes it computationally easier.” Also, betting, say, $100 or $101 is basically the same, so again, the betting decisions could be simplified.
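Those two simplifications – bucketing similar hands and snapping bet sizes to a coarse grid – can be sketched in a few lines. This is an illustrative toy, not Libratus's actual abstraction code; the bucket counts and bet grid here are made up for the example.

```python
# Illustrative sketch of the two abstractions described above, not
# Libratus's real code: hand bucketing and bet-size discretization.

def hand_bucket(hand_strength: float, buckets: int = 10) -> int:
    """Map a hand-strength estimate in [0, 1] to one of `buckets`
    groups, so a King-high and a Queen-high flush can share one
    strategy entry instead of two."""
    return min(int(hand_strength * buckets), buckets - 1)

def bet_bucket(bet: int, grid=(100, 200, 400, 800, 1600)) -> int:
    """Snap an arbitrary bet size to the nearest size in a small grid,
    so a $100 bet and a $101 bet are treated as the same decision."""
    return min(grid, key=lambda g: abs(g - bet))

print(hand_bucket(0.93), hand_bucket(0.95))  # similar hands, same bucket: 9 9
print(bet_bucket(101), bet_bucket(100))      # both snap to 100
```

Each bucket collapses many distinct game states into one node of the decision tree, which is what shrinks the game to a size that can be solved in advance.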
So, essentially, Libratus started off with a fairly simple weighted decision tree from which to select its moves depending on its hole cards and those on the board.
Next, to elevate the software to a superhuman level, its designers had it switch to a more advanced strategy in the later betting rounds of a hand. Once play reached that point, a more detailed, fine-grained abstraction of Texas hold 'em would be produced in real time to best play out the hand. This technique was dubbed “nested subgame solving.”
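The control flow of that idea looks roughly like the sketch below. Every name here is a hypothetical stand-in rather than Libratus's real API, and the re-solving step – which in the real system runs an equilibrium-finding computation over a fine-grained abstraction rooted at the current state – is stubbed out with a placeholder.

```python
# Control-flow sketch of nested subgame solving. All names are
# illustrative stand-ins, not Libratus's actual API.

BLUEPRINT = {  # coarse precomputed strategy: betting round -> action mix
    "preflop": {"fold": 0.10, "call": 0.50, "raise": 0.40},
    "flop":    {"fold": 0.20, "call": 0.50, "raise": 0.30},
}

def resolve_subgame(round_name, pot):
    """Stand-in for real-time re-solving. In the real system this
    solves a finer-grained subgame rooted at the current state; here
    it just returns a placeholder refined action mix."""
    return {"fold": 0.15, "call": 0.45, "raise": 0.40}

def strategy_for(round_name, pot):
    if round_name in ("turn", "river"):
        # Later streets: build and solve a detailed subgame in real time
        return resolve_subgame(round_name, pot)
    # Earlier streets: fall back on the precomputed blueprint
    return BLUEPRINT[round_name]

print(strategy_for("flop", 1000))   # coarse blueprint strategy
print(strategy_for("river", 1000))  # freshly re-solved strategy
```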
Dong Kim, one of Libratus’ opponents, previously said the competition was “extremely tough as the AI keeps getting better.” This is where the third algorithm came in: Libratus wasn't simply trained offline once and left to make decisions in real time during hands – it had a “self-improver” module to refine its decision-making processes.
It used machine learning to fill in missing branches of the overall "blueprint" decision model based on its opponents' moves. “In principle, one could conduct all such computations in advance, but the game tree is way too large for that to be feasible,” the paper stated.
By watching how its human rivals played, Libratus fleshed out the relatively simple "blueprint" decision tree with extra nodes to help it win hands against those opponents. It would analyze the frequency of its opponents' bet sizes, and update itself overnight, improving throughout the competition.
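The bookkeeping behind that overnight step can be sketched as follows. This is a hedged illustration of the idea, not the actual Libratus code: tally the bet sizes opponents actually used, then flag the most frequent sizes the blueprint doesn't already cover as candidates for new branches to solve overnight.

```python
# Sketch of the self-improver's bookkeeping, not Libratus's real code:
# find the opponent bet sizes the blueprint misses most often.
from collections import Counter

BLUEPRINT_SIZES = {100, 200, 400, 800}  # bet sizes the blueprint covers

def off_blueprint_candidates(observed_bets, top_n=2):
    """Return the most frequent opponent bet sizes that fall outside
    the blueprint - candidates for new branches solved overnight."""
    misses = Counter(b for b in observed_bets if b not in BLUEPRINT_SIZES)
    return [size for size, _ in misses.most_common(top_n)]

day_one = [150, 100, 150, 300, 150, 200, 300, 400]
print(off_blueprint_candidates(day_one))  # [150, 300]
```

Solving a handful of the most frequently hit gaps each night, rather than the whole game tree up front, is what made the otherwise infeasible computation practical.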
Libratus is computationally expensive, and was powered by the Bridges system, a high-performance computer at the Pittsburgh Supercomputing Center. It could achieve, at maximum, 1.35 PFLOPS – or more than a quadrillion floating-point math calculations per second. Libratus burned through approximately 19 million core hours of computing throughout the tournament.
"The techniques that we developed are largely domain independent and can thus be applied to other strategic imperfect-information interactions, including non-recreational applications," the paper concluded.
This is a high-level overview of the system, of course, and the paper goes into some more detail. The code, however, will not be released publicly as the technology behind Libratus has been exclusively licensed to Strategic Machine, a startup founded by Sandholm in March this year.
At the Neural Information Processing Systems conference (NIPS) this year, during a demonstration of Libratus, Sandholm told The Register that the AI could be used for calculating strategic decisions in the real world, such as in finance and information security.
Sandholm said it could be deployed to help organizations thwart hackers exploiting zero-day vulnerabilities, where bugs in software are unknown to the folks trying to defend against such attacks. Meanwhile, Brown and Sandholm's research on nested subgame solving [PDF] won best paper at NIPS 2017. ®