Policy Based Inference in Trick-Taking Card Games
Nathan R. Sturtevant | Michael Buro | Douglas Rebstock | Christopher Solinas
[1] Matthew L. Ginsberg, et al. GIB: Imperfect Information in a Computationally Challenging Game, 2001, J. Artif. Intell. Res..
[2] Sam Devlin, et al. Emulating Human Play in a Leading Mobile Card Game, 2019, IEEE Transactions on Games.
[3] Peter I. Cowling, et al. Information Set Monte Carlo Tree Search, 2012, IEEE Transactions on Computational Intelligence and AI in Games.
[4] Mark Richards, et al. Opponent Modeling in Scrabble, 2007, IJCAI.
[5] Nathan R. Sturtevant, et al. Robust game play against unknown opponents, 2006, AAMAS '06.
[6] Michael Buro, et al. Recursive Monte Carlo search for imperfect information games, 2013, 2013 IEEE Conference on Computational Intelligence in Games (CIG).
[7] Patrick Russell, et al. Jack, 2015, The Medical Journal of Australia.
[8] Ian Frank, et al. Search in Games with Incomplete Information: A Case Study Using Bridge Card Play, 1998, Artif. Intell..
[9] Nathan R. Sturtevant, et al. Understanding the Success of Perfect Information Monte Carlo Sampling in Game Tree Search, 2010, AAAI.
[10] Kevin Waugh, et al. DeepStack: Expert-level artificial intelligence in heads-up no-limit poker, 2017, Science.
[11] Michael Buro, et al. Learning Policies from Human Data for Skat, 2019, 2019 IEEE Conference on Games (CoG).
[12] Michael Buro, et al. Improving Search with Supervised Learning in Trick-Based Card Games, 2019, AAAI.
[13] Noam Brown, et al. Superhuman AI for heads-up no-limit poker: Libratus beats top professionals, 2018, Science.