CrowdRev: A platform for Crowd-based Screening of Literature Reviews

In this paper and demo we present CrowdRev, a crowd- and crowd+AI-based system that supports the screening phase of literature reviews, achieving the same quality as author classification at a fraction of the cost and near-instantly. CrowdRev makes it easy for authors to leverage the crowd and ensures that no money is wasted even in the face of difficult papers or criteria: if the system detects that a task is too hard for the crowd, it simply gives up (for that paper, for that criterion, or altogether) rather than keep spending, and it never compromises on quality.
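To make the give-up behavior concrete, the sketch below shows one way such a stopping policy could look for a single (paper, criterion) pair. It is a minimal illustration only, not the actual CrowdRev algorithm: the confidence threshold, the per-item vote budget, and the Naive Bayes aggregation with a single assumed worker accuracy are all assumptions made for this example.

```python
from collections import Counter

# Illustrative sketch only: the real CrowdRev policy and aggregation model are
# not specified in this abstract; the constants below are assumptions.

CONFIDENCE_THRESHOLD = 0.9   # assumed: minimum posterior needed to accept a decision
MAX_VOTES_PER_ITEM = 5       # assumed: crowd-vote budget per (paper, criterion) pair

def posterior_exclude(votes, accuracy=0.8):
    """Naive Bayes posterior that the paper violates the criterion,
    assuming a uniform prior and one shared worker-accuracy estimate."""
    counts = Counter(votes)
    p_ex, p_in = 0.5, 0.5
    for label, n in counts.items():
        if label == "exclude":
            p_ex *= accuracy ** n
            p_in *= (1 - accuracy) ** n
        else:
            p_ex *= (1 - accuracy) ** n
            p_in *= accuracy ** n
    return p_ex / (p_ex + p_in)

def screen_item(collect_vote):
    """Collect crowd votes for one (paper, criterion) pair until the decision
    is confident enough, or give up once the vote budget is exhausted."""
    votes = []
    while len(votes) < MAX_VOTES_PER_ITEM:
        votes.append(collect_vote())            # returns "exclude" or "include"
        p = posterior_exclude(votes)
        if p >= CONFIDENCE_THRESHOLD:
            return "exclude"
        if 1 - p >= CONFIDENCE_THRESHOLD:
            return "include"
    return "give_up"                            # too hard: hand back to the authors
```

Under these assumed numbers, two agreeing votes suffice to decide an easy pair, while an item that still looks ambiguous after five votes is returned as give_up and routed back to the authors instead of consuming more budget.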
