Crowdsourcing Technology to Support Academic Research

Current crowdsourcing platforms typically concentrate on simple microtasks and therefore serve academic research poorly, as such research often requires more complex, time-consuming studies. This has led to the development of specialised software tools that support academic studies on these platforms. However, because such tools are only loosely coupled with the crowdsourcing site, they have limited access to the platform's features. In addition, their specialised nature means that technical knowledge is needed to operate them. There is therefore great potential to enrich the features of crowdsourcing platforms from an academic perspective. In this chapter we discuss possibilities for the practical improvement of academic crowdsourced studies through the adaptation of technological solutions.
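To make the "loose coupling" concrete, consider a common pattern (not taken from the chapter, but illustrative of it): on Amazon Mechanical Turk, an externally hosted study is published as an ExternalQuestion, so the experiment runs entirely on the researcher's own server and the platform merely embeds it in an iframe. The minimal sketch below uses the boto3 MTurk client against the requester sandbox; the study URL, reward, and durations are hypothetical placeholders.

```python
# Minimal sketch: posting an externally hosted academic study to MTurk.
# Because the study lives on the researcher's server and MTurk only
# displays it in an iframe, the study software has no direct access to
# most platform features, illustrating the loose coupling noted above.
import boto3

STUDY_URL = "https://example.org/my-study"  # hypothetical, researcher-hosted

# ExternalQuestion XML: MTurk renders the remote page inside an iframe.
question_xml = f"""
<ExternalQuestion xmlns="http://mechanicalturk.amazonaws.com/AWSMechanicalTurkDataSchemas/2006-07-14/ExternalQuestion.xsd">
  <ExternalURL>{STUDY_URL}</ExternalURL>
  <FrameHeight>600</FrameHeight>
</ExternalQuestion>"""

mturk = boto3.client(
    "mturk",
    region_name="us-east-1",
    # Sandbox endpoint; omit endpoint_url for the live marketplace.
    endpoint_url="https://mturk-requester-sandbox.us-east-1.amazonaws.com",
)

hit = mturk.create_hit(
    Title="Academic study (approx. 20 minutes)",
    Description="A research study hosted outside the platform.",
    Reward="2.00",                     # USD, passed as a string per the API
    MaxAssignments=50,                 # number of participants sought
    LifetimeInSeconds=7 * 24 * 3600,   # listing visible for one week
    AssignmentDurationInSeconds=3600,  # one hour per worker
    Question=question_xml,
)
print(hit["HIT"]["HITId"])
```

Under this pattern, everything beyond recruitment and payment (consent flow, stimulus delivery, data capture, quality checks) must be reimplemented in the external study software, which is precisely the gap in platform support that the chapter examines.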
