Regret Analysis of Bandit Problems with Causal Background Knowledge

We study how to learn optimal interventions sequentially when causal information is available in the form of a causal graph with its associated conditional distributions. Causal modeling is useful in real-world problems such as online advertising, where complex causal mechanisms underlie the relationship between interventions and outcomes. We propose two algorithms, causal upper confidence bound (C-UCB) and causal Thompson Sampling (C-TS), that enjoy improved cumulative regret bounds compared with algorithms that do not use causal information. We thus resolve an open problem posed by \cite{lattimore2016causal}. Further, we extend C-UCB and C-TS to the linear bandit setting and propose the causal linear UCB (CL-UCB) and causal linear TS (CL-TS) algorithms. These algorithms enjoy a cumulative regret bound that scales only with the feature dimension. Our experiments show the benefit of using causal information. For example, we observe that even after a few hundred iterations, the regret of the causal algorithms is a factor of three smaller than that of the standard algorithms. We also show that under certain causal structures, our algorithms scale better than the standard bandit algorithms as the number of interventions increases.
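To make the causal-bandit setting concrete, the sketch below is a minimal Python illustration under simplifying assumptions (not the paper's exact C-UCB algorithm): each intervention do(X = x) only shifts the distribution of a single observed binary post-action variable Z, the reward Y depends on Z alone, and the conditional means E[Y | Z = z] are given by the causal model. Under these assumptions, a UCB-style learner only needs confidence bounds on P(Z = 1 | do(X = x)), so every pull shares information through the known P(Y | Z).

```python
import numpy as np

# Illustrative C-UCB-style loop under simplifying assumptions (not the paper's
# exact algorithm): each intervention do(X = x) only changes the chance that a
# binary post-action variable Z equals 1, the reward Y depends on Z alone, and
# the conditional means E[Y | Z = z] are given by the causal model. Because
# E[Y | Z] is known, the learner only needs confidence bounds on
# P(Z = 1 | do(X = x)), which every pull of arm x helps estimate.

rng = np.random.default_rng(0)

n_arms, horizon = 5, 2000
p_z = rng.uniform(0.1, 0.9, size=n_arms)   # true P(Z = 1 | do(X = x)), unknown to the learner
m0, m1 = 0.2, 0.8                          # known E[Y | Z = 0] and E[Y | Z = 1]
true_means = m0 + p_z * (m1 - m0)          # E[Y | do(X = x)] implied by the causal model

counts = np.zeros(n_arms)                  # pulls per arm
z_sums = np.zeros(n_arms)                  # observed Z's per arm
regret = 0.0

for t in range(1, horizon + 1):
    if t <= n_arms:
        arm = t - 1                        # pull each arm once to initialize
    else:
        p_hat = z_sums / counts
        bonus = np.sqrt(2.0 * np.log(t) / counts)   # standard UCB exploration bonus on P(Z = 1 | do(x))
        ucb_p = np.minimum(p_hat + bonus, 1.0)
        ucb = m0 + ucb_p * (m1 - m0)       # optimistic estimate of E[Y | do(X = x)] (valid since m1 > m0)
        arm = int(np.argmax(ucb))

    z = rng.random() < p_z[arm]            # observe the post-intervention variable Z
    counts[arm] += 1
    z_sums[arm] += z                       # the realized reward itself is not needed for the estimates,
                                           # because E[Y | Z] is already known
    regret += true_means.max() - true_means[arm]

print(f"cumulative regret over {horizon} rounds: {regret:.2f}")
```

In contrast, a standard UCB learner would estimate the mean reward of each intervention from scratch; exploiting the known conditional distribution of the reward given its causal parents is what lets the causal algorithms reduce regret.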

[1] Jason L. Loeppky et al. Improving Online Marketing Experiments with Drifting Multi-armed Bandits. ICEIS, 2015.

[2] Alexandros G. Dimakis et al. Identifying Best Interventions through Online Importance Sampling. ICML, 2017.

[3] Neil T. Heffernan et al. AXIS: Generating Explanations at Scale with Learnersourcing and Machine Learning. L@S, 2016.

[4] Statistical Science: A Review Journal of the Institute of Mathematical Statistics, 1986.

[5] Daniel A. Braun et al. Generalized Thompson Sampling for Sequential Decision-Making and Causal Inference. Complex Adapt. Syst. Model., 2013.

[6] Shipra Agrawal et al. Analysis of Thompson Sampling for the Multi-armed Bandit Problem. COLT, 2011.

[7] Peter Auer et al. Finite-time Analysis of the Multiarmed Bandit Problem. Machine Learning, 2002.

[8] Tor Lattimore et al. Causal Bandits: Learning Good Interventions via Causal Inference. NIPS, 2016.

[9] Nir Friedman et al. Probabilistic Graphical Models: Principles and Techniques. 2009.

[10] Elias Bareinboim et al. Structural Causal Bandits: Where to Intervene? NeurIPS, 2018.

[11] Lihong Li et al. An Empirical Evaluation of Thompson Sampling. NIPS, 2011.

[12] John N. Tsitsiklis et al. A Structured Multiarmed Bandit Problem and the Greedy Policy. IEEE Transactions on Automatic Control, 2008.

[13] J. Pearl. Causality: Models, Reasoning and Inference. 2000.

[14] Nicolò Cesa-Bianchi et al. Combinatorial Bandits. COLT, 2012.

[15] Ambuj Tewari et al. From Ads to Interventions: Contextual Bandits in Mobile Health. Mobile Health - Sensors, Analytic Methods, and Applications, 2017.

[16] Frederick Eberhardt et al. Experiment Selection for Causal Discovery. J. Mach. Learn. Res., 2013.

[17] Emma Brunskill et al. Online Learning for Causal Bandits. 2017.