Policy Based Inference in Trick-Taking Card Games

Trick-taking card games feature a large amount of private information that is only slowly revealed over a long sequence of actions. This makes the number of histories exponential in the length of the action sequence and produces extremely large information sets, so these games quickly become too large to solve exactly. To cope with this, many algorithms employ inference: estimating the probability of each state within an information set. In this paper, we present a Policy Based Inference (PI) algorithm that uses player modelling to infer the probability of being in a given state. We perform experiments in the German trick-taking card game Skat, showing that this method substantially improves inference compared to previous work and increases the performance of the state-of-the-art Skat AI system Kermit when incorporated into its determinized search algorithm.
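
The core idea can be illustrated as a simple Bayesian update: under an assumed policy model of the other players, each candidate state in the information set is weighted by the probability that the model assigns to the actions observed so far. The following is a minimal Python sketch of that update, not the paper's implementation; the function and argument names (e.g. policy_prob) are hypothetical.

    def policy_based_inference(candidate_states, observed_actions, policy_prob, prior=None):
        """Sketch of Bayes-rule inference over an information set: weight each
        candidate state (a possible hidden card distribution) by how likely a
        learned policy model was to produce the opponents' observed actions."""
        posterior = {}
        for state in candidate_states:
            # Start from a prior over states; uniform if none is supplied.
            weight = prior[state] if prior is not None else 1.0
            # Multiply in the policy model's probability of each observed action,
            # evaluated as if `state` were the true hidden world.
            for player, action in observed_actions:
                weight *= policy_prob(player, action, state)
            posterior[state] = weight
        total = sum(posterior.values())
        if total == 0:
            # Degenerate case: fall back to a uniform distribution.
            return {s: 1.0 / len(posterior) for s in posterior}
        return {s: w / total for s, w in posterior.items()}

    # Possible use inside a determinized search: sample worlds in proportion to
    # the inferred posterior rather than uniformly, e.g.
    #   import random
    #   worlds = random.choices(list(posterior), weights=posterior.values(), k=10)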
