On the Parallelization of Monte-Carlo Planning

We present parallelizations, both with and without shared memory, of Bandit-Based Monte-Carlo Planning algorithms, applied to the game of Go. The resulting program won the first non-blitz game against a professional human player in 9x9 Go.
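
The abstract gives no code, so the following is only a minimal illustrative sketch of the shared-nothing side of the idea: several independent bandit-based searchers run in separate processes, and their root statistics are merged before the final move choice. To keep it self-contained it replaces the Go-playing searcher with plain UCB1 over a few candidate "moves" with hidden win rates; every name, constant, and the toy reward model are assumptions made for illustration, not the authors' implementation.

import math
import random
from multiprocessing import Pool

NUM_MOVES = 4
HIDDEN_MEANS = [0.2, 0.5, 0.7, 0.4]  # toy stand-in for the playout win rate of each candidate move

def ucb1_search(args):
    """One independent searcher: plain UCB1 over the candidate moves (no shared memory)."""
    seed, budget = args
    rng = random.Random(seed)
    counts = [0] * NUM_MOVES
    wins = [0.0] * NUM_MOVES
    for t in range(budget):
        if t < NUM_MOVES:
            move = t  # try every move once before applying the UCB1 rule
        else:
            move = max(
                range(NUM_MOVES),
                key=lambda m: wins[m] / counts[m]
                + math.sqrt(2.0 * math.log(t) / counts[m]),
            )
        reward = 1.0 if rng.random() < HIDDEN_MEANS[move] else 0.0  # one Monte-Carlo playout
        counts[move] += 1
        wins[move] += reward
    return counts, wins

def parallel_plan(n_workers=4, budget_per_worker=5000):
    """Run independent searches in separate processes and merge the root statistics."""
    with Pool(n_workers) as pool:
        results = pool.map(ucb1_search, [(seed, budget_per_worker) for seed in range(n_workers)])
    total_counts = [sum(c[m] for c, _ in results) for m in range(NUM_MOVES)]
    # Choose the move with the most aggregated visits, a common and robust final-move rule.
    best = max(range(NUM_MOVES), key=lambda m: total_counts[m])
    return best, total_counts

if __name__ == "__main__":
    best, counts = parallel_plan()
    print("chosen move:", best, "aggregated visit counts:", counts)

A shared-memory variant would instead have threads update one common statistics table (with appropriate synchronization) rather than merging per-process results at the end; the sketch above deliberately shows only the simpler, communication-free scheme.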
