Reinhard Heckel | Max Simchowitz | Kannan Ramchandran | Martin J. Wainwright
[1] Xi Chen, et al. Optimal PAC Multiple Arm Identification with Applications to Crowdsourcing, 2014, ICML.
[2] Kannan Ramchandran, et al. A Case for Ordinal Peer-evaluation in MOOCs, 2013.
[3] L. Thurstone. A law of comparative judgment, 1994.
[4] Thorsten Joachims, et al. The K-armed Dueling Bandits Problem, 2012, COLT.
[5] César A. Hidalgo, et al. The Collaborative Image of The City: Mapping the Inequality of Urban Perception, 2013, PLoS ONE.
[6] P.-C.-F. Daunou, et al. Mémoire sur les élections au scrutin, 1803.
[7] Eyke Hüllermeier, et al. Online Rank Elicitation for Plackett-Luce: A Dueling Bandits Approach, 2015, NIPS.
[8] Harry Joe, et al. Majorization, entropy and paired comparisons, 1988.
[9] Jian Li, et al. Nearly Instance Optimal Sample Complexity Bounds for Top-k Arm Selection, 2017, AISTATS.
[10] Aurélien Garivier, et al. On the Complexity of Best-Arm Identification in Multi-Armed Bandit Models, 2014, J. Mach. Learn. Res.
[11] Brian Eriksson, et al. Learning to Top-K Search using Pairwise Comparisons, 2013, AISTATS.
[12] Matthew J. Salganik, et al. Wiki surveys: Open and quantifiable social data collection, 2012.
[13] Martin J. Wainwright, et al. Estimation from Pairwise Comparisons: Sharp Minimax Bounds with Topology Dependence, 2015, J. Mach. Learn. Res.
[14] Sébastien Bubeck, et al. Multiple Identifications in Multi-Armed Bandits, 2012, ICML.
[15] Robert D. Nowak, et al. Sparse Dueling Bandits, 2015, AISTATS.
[16] Thorsten Joachims, et al. Beat the Mean Bandit, 2011, ICML.
[17] Max Simchowitz, et al. The Simulator: Understanding Adaptive Sampling in the Moderate-Confidence Regime, 2017, COLT.
[18] Charu C. Aggarwal, et al. Recommender Systems: The Textbook, 2016.
[19] Martin J. Wainwright, et al. Simple, Robust and Optimal Ranking from Pairwise Comparisons, 2015, J. Mach. Learn. Res.
[20] A. Tversky, et al. Substitutability and similarity in binary choices, 1969.
[21] Bruce E. Hajek, et al. Minimax-optimal Inference from Partial Rankings, 2014, NIPS.
[22] A. Culyer. Thurstone’s Law of Comparative Judgment, 2014.
[23] Devavrat Shah, et al. Iterative ranking from pair-wise comparisons, 2012, NIPS.
[24] Nihar B. Shah, et al. Active ranking from pairwise comparisons and when parametric assumptions do not help, 2016, The Annals of Statistics.
[25] Shie Mannor, et al. Action Elimination and Stopping Conditions for the Multi-Armed Bandit and Reinforcement Learning Problems, 2006, J. Mach. Learn. Res.
[26] R. Luce, et al. Individual Choice Behavior: A Theoretical Analysis, 1960.
[27] Eyke Hüllermeier, et al. Top-k Selection based on Adaptive Sampling of Noisy Preferences, 2013, ICML.
[28] Ambuj Tewari, et al. PAC Subset Selection in Stochastic Multi-armed Bandits, 2012, ICML.
[29] Zhenghao Chen, et al. Tuned Models of Peer Assessment in MOOCs, 2013, EDM.
[30] H. Landau. On dominance relations and the structure of animal societies: III. The condition for a score structure, 1953.
[31] Martin J. Wainwright, et al. Stochastically Transitive Models for Pairwise Comparisons: Statistical and Computational Issues, 2015, IEEE Transactions on Information Theory.
[32] Raphaël Féraud, et al. Generic Exploration and K-armed Voting Bandits, 2013, ICML.
[33] R. A. Bradley, et al. Rank Analysis of Incomplete Block Designs: The Method of Paired Comparisons, 1952.
[34] Matthew Malloy, et al. lil' UCB: An Optimal Exploration Algorithm for Multi-Armed Bandits, 2013, COLT.
[35] Nir Ailon, et al. Active Learning Ranking from Pairwise Preferences with Almost Optimal Query Complexity, 2011, NIPS.
[36] D. Hunter. MM algorithms for generalized Bradley-Terry models, 2003.
[37] R. Duncan Luce. Individual Choice Behavior: A Theoretical Analysis, 1979.