暂无分享,去创建一个
[1] H. Kuhn. 9. A SIMPLIFIED TWO-PERSON POKER , 1951 .
[2] Wojciech M. Czarnecki,et al. Grandmaster level in StarCraft II using multi-agent reinforcement learning , 2019, Nature.
[3] Noam Brown,et al. Superhuman AI for multiplayer poker , 2019, Science.
[4] Noam Brown,et al. Superhuman AI for heads-up no-limit poker: Libratus beats top professionals , 2018, Science.
[5] Alec Radford,et al. Proximal Policy Optimization Algorithms , 2017, ArXiv.
[6] Demis Hassabis,et al. A general reinforcement learning algorithm that masters chess, shogi, and Go through self-play , 2018, Science.
[7] Michael H. Bowling,et al. Regret Minimization in Games with Incomplete Information , 2007, NIPS.
[8] Simon M. Lucas,et al. A Survey of Monte Carlo Tree Search Methods , 2012, IEEE Transactions on Computational Intelligence and AI in Games.
[9] T. Tideman,et al. Independence of clones as a criterion for voting rules , 1987 .
[10] David Silver,et al. Fictitious Self-Play in Extensive-Form Games , 2015, ICML.
[11] Nathan R. Sturtevant,et al. A parameterized family of equilibrium profiles for three-player kuhn poker , 2013, AAMAS.