Combining expert advice in reactive environments

“Experts algorithms” constitute a methodology for choosing actions repeatedly, when the rewards depend both on the choice of action and on the unknown current state of the environment. An experts algorithm has access to a set of strategies (“experts”), each of which may recommend an action to choose. The algorithm learns how to combine the recommendations of individual experts so that, in the long run, for any fixed sequence of states of the environment, it does as well as the best expert would have done against the same sequence. This methodology may not be suitable for situations where the evolution of the environment's state depends on past chosen actions, as is usually the case, for example, in a repeated non-zero-sum game.

A general exploration-exploitation experts method is presented, along with a suitable definition of an expert's value. The definition is shown to be adequate in that it both captures the impact of an expert's actions on the environment and is learnable. The new experts method differs substantially from previously proposed experts algorithms: it represents a shift from the paradigms of regret minimization and myopic optimization to consideration of the long-term effect of a player's actions on the environment. The importance of this shift is demonstrated by the fact that the algorithm is capable of inducing cooperation in the repeated Prisoner's Dilemma, whereas previous experts algorithms converge to suboptimal non-cooperative play. The method is shown to asymptotically perform as well as the best available expert. Several variants are analyzed with respect to the exploration-exploitation tradeoff, including explore-then-exploit, polynomially vanishing exploration, constant-frequency exploration, and constant-size exploration phases. Complexity and performance bounds are proven.
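
To make the long-run evaluation idea concrete, the following is a minimal sketch of an explore-then-exploit experts scheme for the repeated Prisoner's Dilemma against a Tit-for-Tat opponent. It is not the paper's exact algorithm; the expert names, payoff values, and phase lengths are illustrative assumptions. The key point it illustrates is that each expert is followed for a long phase, so its estimated value reflects the play it induces in the reactive environment rather than its myopic one-round reward.

```python
"""Sketch of an explore-then-exploit experts scheme in the repeated
Prisoner's Dilemma against a Tit-for-Tat opponent (illustrative only)."""

# Row player's payoffs: (my_move, opponent_move) -> reward.
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def tit_for_tat_opponent(history):
    """Reactive opponent: cooperate first, then mirror our previous move."""
    return "C" if not history else history[-1][0]

# Experts: each maps the joint history to a recommended move.
EXPERTS = {
    "always_defect": lambda history: "D",
    "tit_for_tat": lambda history: "C" if not history else history[-1][1],
}

def run_phase(expert, phase_len):
    """Follow a single expert for an entire phase and return its average
    reward.  Because the expert controls play for the whole phase, the
    estimate captures the long-run behavior it induces in the opponent."""
    history, total = [], 0.0
    for _ in range(phase_len):
        my_move = expert(history)
        opp_move = tit_for_tat_opponent(history)
        total += PAYOFF[(my_move, opp_move)]
        history.append((my_move, opp_move))
    return total / phase_len

def explore_then_exploit(phase_len=1000, exploit_rounds=10000):
    # Exploration: estimate each expert's long-run value with one long phase.
    values = {name: run_phase(fn, phase_len) for name, fn in EXPERTS.items()}
    best = max(values, key=values.get)
    # Exploitation: commit to the best-looking expert from here on.
    return best, values, run_phase(EXPERTS[best], exploit_rounds)

if __name__ == "__main__":
    best, values, exploit_avg = explore_then_exploit()
    print("estimated values:", values)
    print("chosen expert:", best, "long-run average reward:", exploit_avg)
```

In this sketch the defecting expert earns roughly 1 per round (the opponent retaliates), while the cooperative expert earns 3, so the exploit phase settles on cooperation. A myopic or regret-minimizing evaluation, which scores defection as the better one-shot response, would instead drive play toward the suboptimal non-cooperative outcome described above.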
