Planet Four: Craters—Optimizing task workflow to improve volunteer engagement and crater counting performance

Virtual citizen science platforms allow nonscientists to take part in scientific research across a range of disciplines, including planetary science. What is required of the volunteer can vary considerably in terms of task type, variety, judgment required, and autonomy, even when the overall goal is unchanged. Through analysis of our live Zooniverse Planet Four: Craters citizen science platform, we investigated the effects of task workflow design factors, including volunteer autonomy, task variety, task type, and required judgment, on volunteer engagement and crater-marking performance. Website analytics showed that volunteers using the Full interface (the most autonomy and variety) were more likely to return to the platform, although the amount of time spent per visit was unaffected by the interface used. Analysis of performance, however, suggested that how this time was used did differ. The interface involving the least complex task produced the greatest amount of data and the fastest rate of collection, but it also produced more false positives relative to the expert. Agreement, both between participants and with the expert judgment, improved significantly with the Stepped interface for crater position and with the Ramped (Mark) interface for diameter, the two interfaces that measure each metric directly through a specific, delineated task. The implication for planetary scientists considering the citizen science route is that there is a balancing act to perform, weighing the importance of volunteer engagement against scientists' data needs and the resources that can be committed to data validation.
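The abstract compares volunteer markings against an expert catalogue in terms of false positives and agreement on position and diameter. The sketch below illustrates one way such a comparison could be computed; it is an assumption for clarity only, since the paper does not describe its aggregation code. The Crater class, the greedy matching rule, and the 0.5-diameter positional tolerance are illustrative choices, not taken from the study.

```python
# Illustrative sketch only: the matching rule, tolerance value, and data
# structures are assumptions, not the authors' actual analysis pipeline.
from dataclasses import dataclass


@dataclass
class Crater:
    x: float          # image x-coordinate of crater centre (pixels)
    y: float          # image y-coordinate of crater centre (pixels)
    diameter: float   # marked diameter (pixels)


def match_to_expert(volunteer: list[Crater], expert: list[Crater],
                    position_tol: float = 0.5) -> dict:
    """Greedily match volunteer marks to an expert crater catalogue.

    A volunteer mark counts as a hit if its centre lies within
    position_tol * (expert diameter) of an expert crater centre;
    the threshold here is an assumed value for illustration.
    """
    unmatched_expert = list(expert)
    hits, false_positives, diameter_ratios = 0, 0, []

    for mark in volunteer:
        best, best_dist = None, float("inf")
        for crater in unmatched_expert:
            dist = ((mark.x - crater.x) ** 2 + (mark.y - crater.y) ** 2) ** 0.5
            if dist <= position_tol * crater.diameter and dist < best_dist:
                best, best_dist = crater, dist
        if best is not None:
            hits += 1
            diameter_ratios.append(mark.diameter / best.diameter)
            unmatched_expert.remove(best)  # one volunteer mark per expert crater
        else:
            false_positives += 1  # mark with no nearby expert crater

    return {
        "hits": hits,
        "false_positives": false_positives,
        "missed": len(unmatched_expert),
        "mean_diameter_ratio": (sum(diameter_ratios) / len(diameter_ratios)
                                if diameter_ratios else None),
    }
```

Under these assumptions, the false-positive count and the mean diameter ratio give simple per-interface summaries of the kind of agreement the abstract refers to; any real comparison would of course use the study's own matching criteria.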
