Bandit-based methods for tree search have recently gained popularity when applied to huge trees, e.g. in the game of Go [6]. Their efficient exploration of the tree enables them to return a good value rapidly and to improve precision as more time is provided. The UCT algorithm [8], a tree search method based on Upper Confidence Bounds (UCB) [2], is believed to adapt locally to the effective smoothness of the tree. However, we show that UCT is "over-optimistic" in some sense, leading to a worst-case regret that may be very poor. We propose alternative bandit algorithms for tree search. First, we analyze a modification of UCT that uses a confidence sequence scaling exponentially in the horizon depth. We then consider Flat-UCB, performed directly on the leaves, and provide a finite regret bound that holds with high probability. Next, we introduce and analyze a Bandit Algorithm for Smooth Trees (BAST), which exploits the actual smoothness of the rewards to perform, with high confidence, efficient "cuts" of sub-optimal branches. Finally, we present an incremental tree expansion that applies when the full tree is too big (possibly infinite) to be represented entirely, and we show that, with high probability, only the optimal branches are developed indefinitely. We illustrate these methods on the global optimization of a continuous function given noisy evaluations.
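As a concrete point of reference for the bandit machinery discussed above, here is a minimal sketch of the standard UCB1 selection rule from the finite-time analysis of Auer et al., which UCT applies at every node of the search tree. This is not the modified confidence sequence or BAST algorithm proposed in the paper; the class name, the two-arm usage example, and the arm means are illustrative assumptions.

```python
import math
import random

class UCB1:
    """Standard UCB1 bandit: pick the arm maximizing
    empirical mean + sqrt(2 ln t / n_arm), where t is the
    total number of pulls and n_arm the pulls of that arm."""

    def __init__(self, n_arms):
        self.counts = [0] * n_arms   # pulls per arm
        self.sums = [0.0] * n_arms   # cumulative reward per arm

    def select(self):
        # Play every arm once before using the confidence bound.
        for arm, n in enumerate(self.counts):
            if n == 0:
                return arm
        t = sum(self.counts)
        return max(
            range(len(self.counts)),
            key=lambda a: self.sums[a] / self.counts[a]
                          + math.sqrt(2.0 * math.log(t) / self.counts[a]),
        )

    def update(self, arm, reward):
        self.counts[arm] += 1
        self.sums[arm] += reward

# Hypothetical usage: two Bernoulli arms with means 0.4 and 0.6.
bandit = UCB1(2)
for _ in range(1000):
    arm = bandit.select()
    bandit.update(arm, random.random() < (0.4, 0.6)[arm])
```

UCT applies this rule recursively along a path from the root to a leaf, treating each node's children as arms; the paper's analysis concerns how the width of the confidence term should grow with the depth of the subtree below a node.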
[1] Rémi Coulom. Efficient Selectivity and Backup Operators in Monte-Carlo Tree Search. Computers and Games, 2006.
[2] Peter Auer et al. Improved Rates for the Stochastic Continuum-Armed Bandit Problem. COLT, 2007.
[3] Sylvain Gelly et al. Modification of UCT with Patterns in Monte-Carlo Go. 2006.
[4] Luc Devroye et al. A Probabilistic Theory of Pattern Recognition. Stochastic Modelling and Applied Probability, 1996.
[5] Levente Kocsis and Csaba Szepesvári. Bandit Based Monte-Carlo Planning. ECML, 2006.
[6] Jean-Yves Audibert et al. Variance estimates and exploration function in multi-armed bandit. 2008.
[7] Peter Auer et al. Finite-time Analysis of the Multiarmed Bandit Problem. Machine Learning, 2002.