Multiplayer Bandit Learning, from Competition to Cooperation

The stochastic multi-armed bandit model captures the tradeoff between exploration and exploitation. We study the effects of competition and cooperation on this tradeoff. Suppose there are $k$ arms and two players, Alice and Bob. In every round, each player pulls an arm, receives the resulting reward, and observes the choice of the other player but not their reward. Alice's utility is $\Gamma_A + \lambda \Gamma_B$ (and similarly for Bob), where $\Gamma_A$ is Alice's total reward and $\lambda \in [-1, 1]$ is a cooperation parameter. At $\lambda = -1$ the players are competing in a zero-sum game; at $\lambda = 1$ they are fully cooperating; and at $\lambda = 0$ they are neutral: each player's utility is their own reward. The model is related to the economics literature on strategic experimentation, where players usually observe each other's rewards. With discount factor $\beta$, the Gittins index reduces the one-player problem to a comparison between a risky arm with prior $\mu$ and a predictable arm with success probability $p$. The value of $p$ at which the player is indifferent between the arms is the Gittins index $g = g(\mu, \beta) > m$, where $m$ is the mean of the risky arm. We show that competing players explore less than a single player: there is $p^* \in (m, g)$ such that for all $p > p^*$, the players stay at the predictable arm. However, the players are not myopic: they still explore for some $p > m$. On the other hand, cooperating players explore more than a single player. We also show that neutral players learn from each other, receiving strictly higher total rewards than they would playing alone, for all $p \in (p^*, g)$, where $p^*$ is the threshold from the competing case. Finally, we show that competing and neutral players eventually settle on the same arm in every Nash equilibrium, while this can fail for cooperating players.
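The indifference point described above can be made concrete with a minimal numerical sketch. Assuming a Beta–Bernoulli risky arm (a standard instantiation; the prior $\mu$ here is $\mathrm{Beta}(a_0, b_0)$) and a discounted horizon truncated at a depth where $\beta^d$ is negligible, the Gittins index is the value of $p$ where the optimal value of playing against a safe arm paying $p$ forever equals the safe value $p/(1-\beta)$. The function names and the truncation scheme below are illustrative, not from the paper:

```python
BETA = 0.9    # discount factor beta
DEPTH = 80    # truncation horizon; BETA**DEPTH is negligible


def value_vs_safe(p, a0, b0, depth=DEPTH):
    """Optimal discounted value when a Beta(a0, b0) Bernoulli arm competes
    with a safe arm paying p per round, by backward induction on the
    posterior (s successes, f failures observed so far)."""
    safe = p / (1 - BETA)
    # At the truncation depth, approximate the tail by retiring to the safe arm.
    V = [safe] * (depth + 1)          # V[s] for states with s successes at depth d
    for d in range(depth - 1, -1, -1):
        newV = []
        for s in range(d + 1):
            a, b = a0 + s, b0 + (d - s)
            m = a / (a + b)           # posterior mean of the risky arm
            pull = m * (1 + BETA * V[s + 1]) + (1 - m) * BETA * V[s]
            newV.append(max(safe, pull))
        V = newV
    return V[0]


def gittins(a0, b0, tol=1e-4):
    """Binary search for the indifference point g: for p < g the risky arm
    is worth strictly more than safe play, for p >= g it is not."""
    lo, hi = a0 / (a0 + b0), 1.0      # the index lies in (m, 1)
    while hi - lo > tol:
        p = (lo + hi) / 2
        if value_vs_safe(p, a0, b0) > p / (1 - BETA) + 1e-9:
            lo = p                    # still worth exploring: g is above p
        else:
            hi = p                    # safe arm already optimal: g is below p
    return (lo + hi) / 2
```

For a uniform prior ($\mathrm{Beta}(1,1)$, so $m = 1/2$) and $\beta = 0.9$, this recovers the abstract's claim that $g > m$: the search settles strictly above $1/2$, reflecting the option value of experimentation.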
