Crowdsourced dataset to study the generation and impact of text highlighting in classification tasks

Objectives

Text classification is a recurring goal in machine learning projects and a typical task on crowdsourcing platforms. Hybrid approaches that combine crowdsourcing and machine learning work better than either in isolation and help reduce crowdsourcing costs. One way to mix crowd and machine efforts is to have algorithms highlight passages from texts and feed these to the crowd for classification. In this paper, we present a dataset for studying text highlighting generation and its impact on document classification.

Data description

The dataset was created through two series of experiments. In the first, we asked workers to (i) classify documents according to a relevance question and to highlight the parts of the text that supported their decision; in the second, we asked them to (ii) assess document relevance supported by text highlighting of varying quality (six human-generated and six machine-generated highlighting conditions). The dataset features documents from two application domains (systematic literature reviews and product reviews), three document sizes, and three relevance questions of different levels of difficulty. We expect this dataset of 27,711 individual judgments from 1,851 workers to benefit not only this specific problem domain but also the larger class of classification problems where crowdsourced datasets with individual judgments are scarce.
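To illustrate how the individual judgments might be explored, the sketch below groups judgments per highlighting condition and derives a per-document majority label. This is a minimal, hypothetical example: the file name (judgments.csv) and column names (worker_id, document_id, condition, judgment) are placeholders, not the dataset's actual schema.

```python
# Minimal sketch of exploring the crowd-judgment data.
# NOTE: the file name and all column names below are hypothetical placeholders;
# the published dataset may use different identifiers.
import pandas as pd

# Load the individual judgments (one row per worker decision).
judgments = pd.read_csv("judgments.csv")

# Count judgments and distinct workers per highlighting condition.
summary = judgments.groupby("condition").agg(
    n_judgments=("judgment", "size"),
    n_workers=("worker_id", "nunique"),
)
print(summary)

# Aggregate a per-document label by majority vote within each condition.
majority = (
    judgments.groupby(["condition", "document_id"])["judgment"]
    .agg(lambda votes: votes.mode().iloc[0])
    .rename("majority_label")
    .reset_index()
)
print(majority.head())
```

Keeping judgments at the individual level (rather than pre-aggregated) is what allows comparisons such as worker accuracy across the twelve highlighting conditions.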
