Volunteer Science

Experimental research in traditional laboratories comes at significant logistical and financial cost while drawing data from demographically narrow populations. The growth of online research methods has given social psychologists effective means to collect large-scale survey data cheaply and quickly. The same advancement, however, has not occurred for social psychologists who rely on experimentation as their primary method of data collection. The aim of this article is to provide an overview of one online laboratory for conducting experiments, Volunteer Science, and to report the results of six studies that test canonical behaviors commonly captured in social psychological experiments. Our results show that the online laboratory is capable of performing a variety of studies with large numbers of diverse volunteers. We advocate for the online laboratory as a valid and cost-effective way to perform social psychological experiments with large, diverse subject pools.
