Crowds in two seconds: enabling realtime crowd-powered interfaces

Interactive systems must respond to user input within seconds. Therefore, to create realtime crowd-powered interfaces, we need to dramatically lower crowd latency. In this paper, we introduce the use of synchronous crowds for on-demand, realtime crowdsourcing. With synchronous crowds, systems can dynamically adapt tasks by leveraging the fact that workers are present at the same time. We develop techniques that recruit synchronous crowds in two seconds and use them to execute complex search tasks in ten seconds. The first technique, the retainer model, pays workers a small wage to wait and to respond quickly when asked. We offer empirically derived guidelines for a retainer system that is low-cost and produces on-demand crowds in two seconds. Our second technique, rapid refinement, observes early signs of agreement in synchronous crowds and dynamically narrows the search space to focus on promising directions. This approach produces results that are, on average, more reliable and faster to arrive than those of the fastest crowd member working alone. To explore the benefits and limitations of these techniques for interaction, we present three applications: Adrenaline, a crowd-powered camera in which workers quickly filter a short video down to the best single moment for a photo; and Puppeteer and A|B, which examine creative generation tasks, communication with workers, and low-latency voting.
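The rapid refinement idea described above can be illustrated with a minimal sketch. This is not the paper's implementation; the function names, the agreement threshold, and the simple windowing policy for narrowing the search region are all assumptions made for illustration. The sketch models synchronous workers as a callback that reports which candidate a worker currently favors; once enough workers converge on the same candidate, the search region is narrowed around it and the process repeats.

```python
from collections import Counter


def rapid_refinement(candidates, worker_pick, agreement=3, max_rounds=10):
    """Illustrative sketch: narrow a search space by watching for early
    agreement among synchronous workers, then recursing into the
    agreed-upon region.

    candidates  -- ordered list of items (e.g. video segments)
    worker_pick -- callable(region) -> item; one worker's current favorite
    agreement   -- number of matching votes treated as emerging consensus
    """
    region = list(candidates)
    for _ in range(max_rounds):
        if len(region) == 1:
            return region[0]
        votes = Counter()
        # Poll synchronous workers until an early consensus emerges.
        while True:
            votes[worker_pick(region)] += 1
            leader, count = votes.most_common(1)[0]
            if count >= agreement:
                break
        # Narrow the region to a window around the emerging consensus
        # (a quarter of the current region on each side of the leader).
        mid = region.index(leader)
        lo = max(0, mid - len(region) // 4)
        hi = min(len(region), mid + len(region) // 4 + 1)
        region = region[lo:hi]
    # Fall back to the center of whatever region remains.
    return region[len(region) // 2]
```

With workers who reliably favor one segment, the region collapses onto it within a few rounds; with noisier workers, the agreement threshold trades latency against the risk of narrowing toward the wrong region.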
