Incentives to Counter Bias in Human Computation
Boi Faltings | Bao Duy Tran | Pearl Pu | Radu Jurca
[1] Ryan P. Adams, et al. Trick or treat: putting peer prediction to the test, 2014.
[2] Jacki O'Neill, et al. Turk-Life in India, 2014, GROUP.
[3] D. Prelec. A Bayesian Truth Serum for Subjective Data, 2004, Science.
[4] Pascal Van Hentenryck, et al. Crowdsourcing contest dilemma, 2014, Journal of the Royal Society Interface.
[5] Boi Faltings, et al. Incentive Mechanisms for Community Sensing, 2014, IEEE Transactions on Computers.
[6] M. Six Silberman, et al. Turkopticon: interrupting worker invisibility in Amazon Mechanical Turk, 2013, CHI.
[7] Jeffrey S. Rosenschein, et al. Robust mechanisms for information elicitation, 2006, AAMAS '06.
[8] Boi Faltings, et al. Swissnoise: Online Polls with Game-Theoretic Incentives, 2014, AAAI.
[9] Michael S. Bernstein, et al. Boomerang: Rebounding the Consequences of Reputation Feedback on Crowdsourcing Platforms, 2016, UIST.
[10] E. Horvitz, et al. Incentives and Truthful Reporting in Consensus-centric Crowdsourcing, 2012.
[11] Mary L. Gray, et al. The Crowd is a Collaborative Network, 2016, CSCW.
[12] David C. Parkes, et al. Designing incentives for online question and answer forums, 2009, EC '09.
[13] Michael S. Bernstein, et al. EmailValet: managing email overload through private, accountable crowdsourcing, 2013, CSCW.
[14] Panagiotis G. Ipeirotis, et al. Quality management on Amazon Mechanical Turk, 2010, HCOMP '10.
[15] David C. Parkes, et al. Peer prediction without a common prior, 2012, EC '12.
[16] Anirban Dasgupta, et al. Crowdsourced judgement elicitation with endogenous proficiency, 2013, WWW.
[17] A. Tversky, et al. Judgment under Uncertainty: Heuristics and Biases, 1974, Science.
[18] Eric Horvitz, et al. Incentives for truthful reporting in crowdsourcing, 2012, AAMAS.
[19] Yuval Peres, et al. Approval Voting and Incentives in Crowdsourcing, 2015, ICML.
[20] Boi Faltings, et al. A Robust Bayesian Truth Serum for Non-Binary Signals, 2013, AAAI.
[21] Paul Resnick, et al. Eliciting Informative Feedback: The Peer-Prediction Method, 2005, Manag. Sci.
[22] Laura A. Dabbish, et al. Working with Machines: The Impact of Algorithmic and Data-Driven Management on Human Workers, 2015, CHI.
[23] Boi Faltings, et al. Mechanisms for Making Crowds Truthful, 2014, J. Artif. Intell. Res.
[24] Wai-Tat Fu, et al. Don't hide in the crowd!: increasing social transparency between peer workers improves crowdsourcing outcomes, 2013, CHI.
[25] Boi Faltings, et al. Incentives for Effort in Crowdsourcing Using the Peer Truth Serum, 2016, ACM Trans. Intell. Syst. Technol.
[26] Michael S. Bernstein, et al. We Are Dynamo: Overcoming Stalling and Friction in Collective Action for Crowd Workers, 2015, CHI.
[27] Alice M. Brawley, et al. Work experiences on MTurk: Job satisfaction, turnover, and information sharing, 2016, Comput. Hum. Behav.
[28] Joemon M. Jose, et al. A Game-Theory Approach for Effective Crowdsource-Based Relevance Assessment, 2016, ACM Trans. Intell. Syst. Technol.
[29] D. Helbing, et al. How social influence can undermine the wisdom of crowd effect, 2011, Proceedings of the National Academy of Sciences.
[30] Nihar B. Shah, et al. Double or Nothing: Multiplicative Incentive Mechanisms for Crowdsourcing, 2014, J. Mach. Learn. Res.
[31] Yoav Shoham, et al. Eliciting truthful answers to multiple-choice questions, 2009, EC '09.
[32] Laura A. Dabbish, et al. Labeling images with a computer game, 2004, AAAI Spring Symposium: Knowledge Collection from Volunteer Contributors.
[33] Carlos Guestrin, et al. The Wisdom of Multiple Guesses, 2015, EC.
[34] David C. Parkes, et al. A Robust Bayesian Truth Serum for Small Populations, 2012, AAAI.
[35] Aaron D. Shaw, et al. Designing incentives for inexpert human raters, 2011, CSCW.
[36] L. J. Savage. Elicitation of Personal Probabilities and Expectations, 1971.
[37] Boi Faltings, et al. An incentive compatible reputation mechanism, 2003, IEEE International Conference on E-Commerce (CEC 2003).
[38] Dan Cosley, et al. Taking a HIT: Designing around Rejection, Mistrust, Risk, and Workers' Experiences in Amazon Mechanical Turk, 2016, CHI.
[39] Sriram Vishwanath, et al. Improving Impact Sourcing via Efficient Global Service Delivery, 2015.
[40] Wai-Tat Fu, et al. Enhancing reliability using peer consistency evaluation in human computation, 2013, CSCW '13.
[41] Christopher G. Harris. You're Hired! An Examination of Crowdsourcing Incentive Models in Human Resource Tasks, 2011.
[42] Gjergji Kasneci, et al. Crowd IQ: aggregating opinions to boost performance, 2012, AAMAS.
[43] Boi Faltings, et al. Incentives for Answering Hypothetical Questions, 2011.
[44] Panagiotis G. Ipeirotis. Demographics of Mechanical Turk, 2010.
[45] Boi Faltings, et al. Incentives for Truthful Information Elicitation of Continuous Signals, 2014, AAAI.
[46] Nicholas R. Jennings, et al. Mechanism design for the truthful elicitation of costly probabilistic estimates in distributed information systems, 2011, Artif. Intell.
[47] Jacki O'Neill, et al. Being a turker, 2014, CSCW.
[48] David G. Rand, et al. The online laboratory: conducting experiments in a real labor market, 2010, ArXiv.