Incentivizing Exploration with Selective Data Disclosure
Nicole Immorlica | Zhiwei Steven Wu | Aleksandrs Slivkins | Jieming Mao