Making better use of the crowd
[1] A. P. Dawid,et al. Maximum Likelihood Estimation of Observer Error‐Rates Using the EM Algorithm , 1979 .
[2] Robin Hanson,et al. Combinatorial Information Market Design , 2003, Inf. Syst. Frontiers.
[3] Brendan T. O'Connor,et al. Cheap and Fast – But is it Good? Evaluating Non-Expert Annotations for Natural Language Tasks , 2008, EMNLP.
[4] Laura A. Dabbish,et al. Designing games with a purpose , 2008, CACM.
[5] Lance Fortnow,et al. Complexity of combinatorial market makers , 2008, EC '08.
[6] Panagiotis G. Ipeirotis,et al. Get another label? improving data quality and data mining using multiple, noisy labelers , 2008, KDD.
[7] Chris Callison-Burch,et al. Fast, Cheap, and Creative: Evaluating Translation Quality Using Amazon’s Mechanical Turk , 2009, EMNLP.
[8] Duncan J. Watts,et al. Financial incentives and the "performance of crowds" , 2009, HCOMP '09.
[9] David G. Rand,et al. The online laboratory: conducting experiments in a real labor market , 2010, ArXiv.
[10] Michael S. Bernstein,et al. Soylent: a word processor with a crowd inside , 2010, UIST.
[11] Panagiotis G. Ipeirotis,et al. Running Experiments on Amazon Mechanical Turk , 2010, Judgment and Decision Making.
[12] Pietro Perona,et al. The Multidimensional Wisdom of Crowds , 2010, NIPS.
[13] Gerardo Hermosillo,et al. Learning From Crowds , 2010, J. Mach. Learn. Res..
[14] R. Preston McAfee,et al. Who moderates the moderators?: crowdsourcing abuse detection in user-generated content , 2011, EC '11.
[15] Daniel G. Goldstein,et al. Honesty in an Online Labor Market , 2011, Human Computation.
[16] Aniket Kittur,et al. An Assessment of Intrinsic and Extrinsic Motivation on Task Performance in Crowdsourcing Markets , 2011, ICWSM.
[17] Adam Tauman Kalai,et al. Adaptively Learning the Crowd Kernel , 2011, ICML.
[18] Christopher G. Harris. You're Hired! An Examination of Crowdsourcing Incentive Models in Human Resource Tasks , 2011 .
[19] Aaron D. Shaw,et al. Designing incentives for inexpert human raters , 2011, CSCW.
[20] Aniket Kittur,et al. CrowdForge: crowdsourcing complex work , 2011, UIST.
[21] Devi Parikh. Human-Debugging of Machines , 2011 .
[22] Michael D. Buhrmester,et al. Amazon's Mechanical Turk , 2011, Perspectives on psychological science : a journal of the Association for Psychological Science.
[23] Devavrat Shah,et al. Iterative Learning for Reliable Crowdsourcing Systems , 2011, NIPS.
[24] Jian Peng,et al. Variational Inference for Crowdsourcing , 2012, NIPS.
[25] Dana Chandler,et al. Breaking Monotony with Meaning: Motivation in Crowdsourcing Markets , 2012, ArXiv.
[26] Walter S. Lasecki,et al. A readability evaluation of real-time crowd captions in the classroom , 2012, ASSETS '12.
[27] Omar Alonso,et al. Implementing crowdsourcing-based relevance experimentation: an industrial perspective , 2013, Information Retrieval.
[28] James Hays,et al. SUN attribute database: Discovering, annotating, and recognizing scene attributes , 2012, 2012 IEEE Conference on Computer Vision and Pattern Recognition.
[29] John C. Platt,et al. Learning from the Wisdom of Crowds by Minimax Entropy , 2012, NIPS.
[30] Walter S. Lasecki,et al. Online quality control for real-time crowd captioning , 2012, ASSETS '12.
[31] Walter S. Lasecki,et al. Real-time captioning by groups of non-experts , 2012, UIST.
[32] Siddharth Suri,et al. Conducting behavioral research on Amazon’s Mechanical Turk , 2010, Behavior research methods.
[33] Walter S. Lasecki,et al. Warping time for more effective real-time crowdsourcing , 2013, CHI.
[34] Yu-An Sun,et al. The Effects of Performance-Contingent Financial Incentives in Online Labor Markets , 2013, AAAI.
[35] Sanja Fidler,et al. Analyzing Semantic Segmentation Using Hybrid Human-Machine CRFs , 2013, 2013 IEEE Conference on Computer Vision and Pattern Recognition.
[36] R. Preston McAfee,et al. The cost of annoying ads , 2013, WWW '13.
[37] Daniel Gildea,et al. Text Alignment for Real-Time Crowd Captioning , 2013, NAACL.
[38] Chen Xu,et al. The SUN Attribute Database: Beyond Categories for Deeper Scene Understanding , 2014, International Journal of Computer Vision.
[39] Michael S. Bernstein,et al. Mechanical Turk is Not Anonymous , 2013 .
[40] Chien-Ju Ho,et al. Adaptive Task Assignment for Crowdsourced Classification , 2013, ICML.
[41] Jennifer Wortman Vaughan,et al. Efficient Market Making via Convex Optimization, and a Connection to Online Learning , 2013, TEAC.
[42] Lydia B. Chilton,et al. Cobi: a community-informed conference scheduling tool , 2013, UIST.
[43] Quentin Pleple,et al. Interactive Topic Modeling , 2013 .
[44] Lydia B. Chilton,et al. Community Clustering: Leveraging an Academic Crowd to Form Coherent Conference Sessions , 2013, HCOMP.
[45] Yu-An Sun,et al. Monetary Interventions in Crowdsourcing Task Switching , 2014, HCOMP.
[46] Jaime Teevan,et al. Selfsourcing personal tasks , 2014, CHI Extended Abstracts.
[47] Jacki O'Neill,et al. Turk-Life in India , 2014, GROUP.
[48] Lydia B. Chilton,et al. Frenzy: collaborative data organization for creating conference sessions , 2014, CHI.
[49] R. Preston McAfee,et al. The Economic and Cognitive Costs of Annoying Display Advertisements , 2014 .
[50] Xi Chen,et al. Spectral Methods Meet EM: A Provably Optimal Algorithm for Crowdsourcing , 2014, J. Mach. Learn. Res..
[51] Devavrat Shah,et al. Budget-Optimal Task Allocation for Reliable Crowdsourcing Systems , 2011, Oper. Res..
[52] Michael S. Bernstein,et al. We Are Dynamo: Overcoming Stalling and Friction in Collective Action for Crowd Workers , 2015, CHI.
[53] Leib Litman,et al. The relationship between motivation, monetary compensation, and data quality among US- and India-based workers on Mechanical Turk , 2014, Behavior Research Methods.
[54] Adam Tauman Kalai,et al. Crowdsourcing Feature Discovery via Adaptively Chosen Comparisons , 2015, HCOMP.
[55] Aleksandrs Slivkins,et al. Incentivizing High Quality Crowdwork , 2015 .
[56] Djellel Eddine Difallah,et al. The Dynamics of Micro-Task Crowdsourcing: The Case of Amazon MTurk , 2015, WWW.
[57] Ben R. Newell,et al. The average laboratory samples a population of 7,300 Amazon Mechanical Turk workers , 2015, Judgment and Decision Making.
[58] Matthew Lease,et al. Beyond Mechanical Turk: An Analysis of Paid Crowd Work Platforms , 2015 .
[59] Elena Paslaru Bontas Simperl,et al. Improving Paid Microtasks through Gamification and Adaptive Furtherance Incentives , 2015, WWW.
[60] Krzysztof Z. Gajos,et al. Curiosity Killed the Cat, but Makes Crowdwork Better , 2016, CHI.
[61] Alessandro Acquisti,et al. Beyond the Turk: An Empirical Comparison of Alternative Platforms for Crowdsourcing Online Behavioral Research , 2016 .
[62] Blase Ur,et al. Do Users' Perceptions of Password Security Match Reality? , 2016, CHI.
[63] Ece Kamar,et al. Directions in Hybrid Intelligence: Complementing AI Systems with Human Intelligence , 2016, IJCAI.
[64] Ashish Khetan,et al. Achieving budget-optimality with adaptive schemes in crowdsourcing , 2016, NIPS.
[65] Li Fei-Fei,et al. Crowdsourcing in Computer Vision , 2016, Found. Trends Comput. Graph. Vis..
[66] Long-run cooperation , 2016 .
[67] Daniel G. Goldstein,et al. Improving Comprehension of Numbers in the News , 2016, CHI.
[68] Mary L. Gray,et al. The Crowd is a Collaborative Network , 2016, CSCW.
[69] Jaime Teevan,et al. Supporting Collaborative Writing with Microtasks , 2016, CHI.
[70] Sanja Fidler,et al. Human-Machine CRFs for Identifying Bottlenecks in Scene Understanding , 2016, IEEE Transactions on Pattern Analysis and Machine Intelligence.
[71] Tim Roughgarden,et al. Mathematical foundations for social computing , 2016, Commun. ACM.
[72] Ming Yin,et al. The Communication Network Within the Crowd , 2016, WWW.
[73] Michael S. Bernstein,et al. Mechanical Novel: Crowdsourcing Complex Work through Reflection and Revision , 2016, CSCW.
[74] Lionel P. Robert,et al. When Does More Money Work? Examining the Role of Perceived Fairness in Pay on the Performance Quality of Crowdworkers , 2017, ICWSM.
[75] Joseph Goodman,et al. Crowdsourcing Consumer Research , 2017 .
[76] Eric Horvitz,et al. On Human Intellect and Machine Failures: Troubleshooting Integrative Machine Learning Systems , 2016, AAAI.
[77] Jaime Teevan,et al. Communicating Context to the Crowd for Complex Writing Tasks , 2017, CSCW.
[78] Jesse Chandler,et al. Lie for a Dime , 2017 .
[79] Joel Huber,et al. Character Misrepresentation by Amazon Turk Workers: Assessment and Solutions , 2018 .