Backpropagation Modification in Monte-Carlo Game Tree Search

The UCT algorithm, proposed by Kocsis et al.[3], applies the multi-armed bandit framework to tree-structured search spaces and has achieved remarkable success in several challenging domains[2]. In UCT, Monte-Carlo simulations are performed under the guidance of the UCB1 formula, and their results are averaged to evaluate a given action. We observe that, as more simulations are performed, the later ones usually yield more accurate results, partly because later simulations search to deeper levels of the tree and partly because more earlier results are available to direct them. This paper presents a new method that improves the performance of the UCT algorithm by increasing the feedback weight of later simulations during backpropagation. Experimental results on the classical game of Go show that our approach significantly improves the quality of Monte-Carlo evaluation when exponential weighting models are used.
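The core idea can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function names and the weight parameter `gamma` are hypothetical, and we assume a simple exponential weighting scheme in which the i-th simulation result receives weight gamma**i (gamma > 1), so later playouts contribute more to a node's value estimate than a plain average would allow.

```python
def plain_average(results):
    """Standard UCT backup: every simulation result counts equally."""
    return sum(results) / len(results)

def exponential_average(results, gamma=1.05):
    """Weighted backup (sketch): the i-th result gets weight gamma**i,
    so later (typically more accurate) simulations contribute more."""
    weights = [gamma ** i for i in range(len(results))]
    total = sum(w * r for w, r in zip(weights, results))
    return total / sum(weights)

# Example: early noisy results followed by later, better-informed ones.
results = [0.2, 0.2, 0.8, 0.8, 0.8]
# The exponentially weighted estimate is pulled toward the later results,
# exceeding the plain average of 0.56.
```

With gamma close to 1 the weighted backup degenerates to the ordinary average, so the parameter controls how strongly the estimate trusts later simulations.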