Extending Workers' Attention Span Through Dummy Events

This paper studies a new paradigm for improving the attention span of workers in tasks that rely heavily on the worker's attention to the occurrence of rare events. Such tasks are highly common, ranging from crime monitoring to controlling complex autonomous machines, and many of them are ideal for crowdsourcing. The underlying idea in our approach is to dynamically augment the task with dummy (artificial) events at various times throughout the task, rewarding the worker upon identifying and reporting them. This serves as an alternative to the traditional approach of relying exclusively on rewarding the worker for successfully identifying the event of interest itself. We propose three methods for timing the dummy events throughout the task. Two of these methods are static, scheduling the dummy events either at random or uniformly throughout the task. The third method is dynamic: it uses the identification (or misidentification) of dummy events as a signal of the worker's attention to the task, adjusting the rate at which dummy events are generated accordingly. We use extensive experimentation to compare these methods both with the traditional approach of inducing attention through rewarding the identification of the event of interest and with one another. The analysis of the results indicates that with the use of dummy events a substantially more favorable tradeoff between the probability of detecting the event of interest and the expected expense can be achieved, and that among the three proposed methods, the one that schedules dummy events on the fly is by far the best.
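To make the three timing methods concrete, the sketch below shows one possible implementation of each scheduler. It is a minimal illustration under assumed details: all names and parameters (`task_duration`, `n_dummies`, `base_rate`, the doubling/averaging update rule) are our own illustrative choices, not taken from the paper, which only states that the dynamic method adjusts the dummy-event rate based on (mis)identification.

```python
import random


def random_schedule(task_duration, n_dummies):
    """Static method 1: place dummy events at uniformly random times."""
    return sorted(random.uniform(0, task_duration) for _ in range(n_dummies))


def uniform_schedule(task_duration, n_dummies):
    """Static method 2: space dummy events evenly across the task."""
    step = task_duration / (n_dummies + 1)
    return [step * (i + 1) for i in range(n_dummies)]


class DynamicScheduler:
    """Dynamic method: adapt the dummy-event rate to observed attention.

    A missed dummy event suggests the worker's attention is drifting, so
    the rate is increased; a detected one suggests attention is adequate,
    so the rate decays back toward the base rate. The specific update
    rule here is an assumption for illustration only.
    """

    def __init__(self, base_rate, min_rate, max_rate):
        self.base_rate = base_rate
        self.min_rate = min_rate
        self.max_rate = max_rate
        self.rate = base_rate  # expected dummy events per unit time

    def next_interval(self):
        """Sample the waiting time to the next dummy event (Poisson process)."""
        return random.expovariate(self.rate)

    def report(self, detected):
        """Update the generation rate after each dummy-event outcome."""
        if detected:
            # Attention seems fine: relax halfway back toward the base rate.
            self.rate = max(self.min_rate, 0.5 * (self.rate + self.base_rate))
        else:
            # Missed dummy: probe more often to re-engage the worker.
            self.rate = min(self.max_rate, self.rate * 2.0)
```

In this sketch the dynamic scheduler doubles its probing rate after a miss and decays it geometrically after a hit, mirroring the signal-driven adjustment described above; the paper's actual update rule may differ.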
