Simple Regret Minimization for Contextual Bandits

There are two variants of the classical multi-armed bandit (MAB) problem that have received considerable attention from machine learning researchers in recent years: contextual bandits and simple regret minimization. Contextual bandits are a subclass of MABs in which, at every time step, the learner has access to side information that is predictive of the best arm. Simple regret minimization assumes that the learner incurs regret only after a pure exploration phase. In this work, we study simple regret minimization for contextual bandits. Motivated by applications where the learner has separate training and autonomous modes, we assume that the learner experiences a pure exploration phase, in which feedback is received after every action but no regret is incurred, followed by a pure exploitation phase, in which regret is incurred but no feedback is received. We present the Contextual-Gap algorithm and establish performance guarantees on the simple regret, i.e., the regret incurred during the pure exploitation phase. Our experiments examine a novel application to adaptive sensor selection for magnetic field estimation in interplanetary spacecraft, and demonstrate considerable improvement over algorithms designed to minimize cumulative regret.
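The two-phase protocol described above can be illustrated with a minimal sketch. This is not the Contextual-Gap algorithm itself; it is a simplified stand-in on a hypothetical linear-reward environment, using per-arm ridge regression during exploration (pulling the most uncertain arm for the observed context) and greedy action selection during exploitation, where simple regret is measured.

```python
import numpy as np

rng = np.random.default_rng(0)
d, K, T_explore, T_exploit = 5, 4, 2000, 500

# Hypothetical linear environment: reward of arm a in context x is x @ theta[a] + noise.
theta = rng.normal(size=(K, d))

# Per-arm regularized least-squares statistics for the exploration phase.
A = np.stack([np.eye(d) for _ in range(K)])   # regularized Gram matrices
b = np.zeros((K, d))

for t in range(T_explore):
    x = rng.normal(size=d)
    # Simplified uncertainty-directed sampling (a stand-in for Contextual-Gap):
    # pull the arm whose reward estimate is most uncertain at this context.
    widths = [x @ np.linalg.solve(A[a], x) for a in range(K)]
    a = int(np.argmax(widths))
    r = x @ theta[a] + 0.1 * rng.normal()     # feedback observed; no regret counted
    A[a] += np.outer(x, x)
    b[a] += r * x

theta_hat = np.array([np.linalg.solve(A[a], b[a]) for a in range(K)])

# Pure exploitation: act greedily on the learned estimates; no further feedback.
simple_regret = 0.0
for t in range(T_exploit):
    x = rng.normal(size=d)
    a = int(np.argmax(theta_hat @ x))
    simple_regret += (theta @ x).max() - theta[a] @ x

print(f"average simple regret: {simple_regret / T_exploit:.4f}")
```

The key structural point mirrored here is that reward feedback is used only in the first loop; the second loop evaluates, but never updates, the learned estimates.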
