Quality Expectation-Variance Tradeoffs in Crowdsourcing Contests

We examine designs for crowdsourcing contests, in which participants compete for rewards given to superior solutions of a task. We theoretically analyze tradeoffs between the expectation and variance of the principal's utility (i.e., the quality of the best solution), and empirically test our theoretical predictions using a controlled experiment on Amazon Mechanical Turk. Our evaluation method is itself crowdsourcing-based, relying on the peer-prediction mechanism. Our theoretical analysis characterizes an expectation-variance tradeoff of the principal's utility in such contests through a Pareto-efficient frontier. In particular, we show that the simple contest with 2 authors and the 2-pair contest have good theoretical properties. Moreover, our empirical results show that the 2-pair contest is the superior design among all designs tested, achieving the highest expectation and lowest variance of the principal's utility.
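The central quantity in the abstract, the expectation and variance of the principal's utility when utility is the quality of the best submission, can be illustrated with a minimal Monte Carlo sketch. This is only an illustration under a strong simplifying assumption that is not from the paper: submission qualities are i.i.d. Uniform(0, 1) draws, ignoring the strategic effort choices that drive the actual tradeoff between contest designs. The function name `simulate` and the number of trials are illustrative choices.

```python
import random
import statistics

def simulate(num_authors, trials=100_000, seed=0):
    """Monte Carlo estimate of the expectation and variance of the
    principal's utility, modeled here as the maximum of `num_authors`
    i.i.d. Uniform(0, 1) quality draws per contest."""
    rng = random.Random(seed)
    best = [max(rng.random() for _ in range(num_authors))
            for _ in range(trials)]
    return statistics.mean(best), statistics.variance(best)

if __name__ == "__main__":
    for n in (2, 4):
        mean, var = simulate(n)
        print(f"{n} authors: E[best quality] ≈ {mean:.3f}, "
              f"Var[best quality] ≈ {var:.4f}")
```

Under this i.i.d. assumption the closed forms are E[max] = n/(n+1) and Var[max] = n/((n+1)²(n+2)), so the simulation can be sanity-checked analytically; the paper's actual analysis differs in that equilibrium effort, and hence the quality distribution itself, varies across contest designs.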
