Mark Sellke | Sébastien Bubeck | Thomas Budzinski
[1] Andreas Krause,et al. Multi-Player Bandits: The Adversarial Case , 2019, J. Mach. Learn. Res..
[2] Vianney Perchet,et al. SIC-MMAB: Synchronisation Involves Communication in Multiplayer Multi-Armed Bandits , 2018, NeurIPS.
[3] Yuval Peres,et al. Non-Stochastic Multi-Player Multi-Armed Bandits: Optimal Rate With Collision Information, Sublinear Without , 2020, COLT.
[4] Shie Mannor,et al. Concurrent Bandits and Cognitive Radio Networks , 2014, ECML/PKDD.
[5] Hai Jiang,et al. Medium access in cognitive radio networks: A competitive multi-armed bandit framework , 2008, 2008 42nd Asilomar Conference on Signals, Systems and Computers.
[6] Ananthram Swami,et al. Distributed Algorithms for Learning and Cognitive Medium Access with Logarithmic Regret , 2010, IEEE Journal on Selected Areas in Communications.
[7] Jacques Palicot,et al. Multi-Armed Bandit Learning in IoT Networks: Learning Helps Even in Non-stationary Settings , 2017, CrownCom.
[8] Ohad Shamir,et al. Multi-player bandits: a musical chairs approach , 2016, ICML.
[9] Qing Zhao,et al. Distributed Learning in Multi-Armed Bandit With Multiple Players , 2009, IEEE Transactions on Signal Processing.
[10] Sébastien Bubeck,et al. Coordination without communication: optimal regret in two players multi-armed bandits , 2020, COLT.
[11] Gábor Lugosi,et al. Multiplayer bandits without observing collision information , 2018, Math. Oper. Res..