Amazon Mechanical Turk: A Research Tool for Organizations and Information Systems Scholars

Amazon Mechanical Turk (AMT), a system for crowdsourcing work, has been used in many academic fields to support research and could be similarly useful for information systems research. This paper briefly describes the functioning of the AMT system and presents a simple typology of research data collected using AMT. For each kind of data, it discusses potential threats to reliability and validity and possible ways to address those threats. The paper concludes with a brief discussion of possible applications of AMT to research on organizations and information systems.