On the Identification and Mitigation of Weaknesses in the Knowledge Gradient Policy for Multi-Armed Bandits

The Knowledge Gradient (KG) policy was originally proposed for offline ranking and selection problems but has recently been adapted for online decision making in general and multi-armed bandit problems (MABs) in particular. We study its use in a class of exponential family MABs and identify weaknesses, including a propensity to take actions which are dominated with respect to both exploitation and exploration. We propose variants of KG which avoid such errors. These new policies include an index heuristic which deploys a KG approach to develop an approximation to the Gittins index. A numerical study shows this policy to perform well over a range of MABs, including those for which index policies are not optimal. While KG does not take dominated actions when rewards are Gaussian, it fails to be index consistent and, when arms are correlated, does not appear to enjoy a performance advantage over competitor policies sufficient to compensate for its greater computational demands.
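To make the policy under discussion concrete, the following is a minimal sketch of the standard online KG rule for the Bernoulli case, assuming independent Beta posteriors on each arm and the usual score of posterior mean plus remaining horizon times the one-step KG value. The function name `kg_scores` and the Beta-posterior parameterisation are illustrative choices, not taken from the paper.

```python
def kg_scores(alphas, betas, horizon):
    """Online Knowledge Gradient scores for Bernoulli arms.

    Each arm i has an independent Beta(alphas[i], betas[i]) posterior.
    The online KG score is: posterior mean + horizon * KG value, where
    the KG value is the expected one-step improvement in the best
    posterior mean from pulling that arm once.
    """
    means = [a / (a + b) for a, b in zip(alphas, betas)]
    scores = []
    for i, (a, b) in enumerate(zip(alphas, betas)):
        mu = means[i]
        best_other = max(m for j, m in enumerate(means) if j != i)
        up = (a + 1) / (a + b + 1)   # posterior mean after a success
        down = a / (a + b + 1)       # posterior mean after a failure
        # Expected best posterior mean after one pull of arm i:
        exp_best = mu * max(up, best_other) + (1 - mu) * max(down, best_other)
        nu = exp_best - max(mu, best_other)  # one-step KG value (>= 0)
        scores.append(mu + horizon * nu)
    return scores

# With two identical uniform priors the scores coincide, and each exceeds
# the greedy mean of 0.5 because the KG value rewards exploration.
print(kg_scores([1, 1], [1, 1], 10))
```

The dominated-action weakness identified in the paper arises because, for some posterior configurations, this one-step look-ahead can assign zero (or negligible) KG value to an arm whose pull would in fact be preferable on both exploitation and exploration grounds.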
