Thompson Sampling on Symmetric Alpha-Stable Bandits

Thompson Sampling provides an efficient technique for introducing prior knowledge into the multi-armed bandit problem, and delivers remarkable empirical performance. In this paper, we revisit the Thompson Sampling algorithm under rewards drawn from symmetric $\alpha$-stable distributions, a class of heavy-tailed probability distributions used in finance and economics to model quantities such as stock prices and human behavior. We present an efficient framework for posterior inference, which leads to two algorithms for Thompson Sampling in this setting. We prove finite-time regret bounds for both algorithms, and demonstrate through a series of experiments the strong performance of Thompson Sampling under heavy-tailed rewards. With our results, we provide an exposition of symmetric $\alpha$-stable distributions in sequential decision-making, and enable sequential Bayesian inference in applications from finance and complex systems that operate on heavy-tailed features.
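To make the setting concrete, below is a minimal sketch of Thompson Sampling on arms whose rewards are symmetric $\alpha$-stable draws. This is not the paper's posterior-inference framework: as a simple stand-in, each arm keeps a Gaussian posterior over its location parameter, updated with a crudely truncated reward so the conjugate update stays well-behaved despite the heavy tails. The arm means, horizon, clipping range, and the use of SciPy's `levy_stable` sampler are all illustrative assumptions.

```python
import numpy as np
from scipy.stats import levy_stable  # symmetric stable rewards via beta=0

rng = np.random.default_rng(0)

def thompson_stable(mus, alpha=1.8, sigma=1.0, horizon=2000):
    """Thompson Sampling with symmetric alpha-stable rewards.

    Each arm's location parameter gets a Gaussian posterior N(m, s2),
    updated as if observations had unit Gaussian noise -- a simplifying
    assumption, not the paper's exact inference scheme.
    """
    k = len(mus)
    m = np.zeros(k)           # posterior means
    s2 = np.full(k, 100.0)    # posterior variances (wide prior)
    pulls = np.zeros(k, dtype=int)
    for _ in range(horizon):
        # sample one plausible mean per arm from its posterior; play the best
        theta = rng.normal(m, np.sqrt(s2))
        a = int(np.argmax(theta))
        # heavy-tailed reward: symmetric alpha-stable (skewness beta = 0)
        r = levy_stable.rvs(alpha, 0.0, loc=mus[a], scale=sigma,
                            random_state=rng)
        r = float(np.clip(r, -50.0, 50.0))  # crude truncation of extreme tails
        # conjugate Gaussian update assuming unit observation noise
        s2_new = 1.0 / (1.0 / s2[a] + 1.0)
        m[a] = s2_new * (m[a] / s2[a] + r)
        s2[a] = s2_new
        pulls[a] += 1
    return pulls

pulls = thompson_stable(mus=[0.0, 1.0])
```

Despite individual rewards having infinite variance for $\alpha < 2$, the sampler concentrates its pulls on the better arm once the posteriors separate; the truncation step is one naive way to keep the Gaussian updates stable, whereas the paper develops proper posterior inference for this reward class.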
