Exposing ambiguities in a relation-extraction gold standard with crowdsourcing

Semantic relation extraction is one of the frontiers of biomedical natural language processing research, and gold-standard corpora are key tools for advancing it. Generating these standards is challenging because expert time is costly and agreement between annotators is difficult to establish. We implemented and evaluated a microtask crowdsourcing approach for producing a gold standard for extracting drug-disease relations. The aggregated crowd judgment agreed with expert annotations from a pre-existing corpus on 43 of the 60 sentences tested, and the levels of crowd agreement varied in a manner similar to the levels of agreement among the original expert annotators. This work reinforces the power of crowdsourcing in the process of assembling gold standards for relation extraction. Further, it highlights the importance of exposing the levels of agreement between human annotators, expert or crowd, in gold-standard corpora, as these are reproducible signals indicating ambiguities in the data or in the annotation guidelines.
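The abstract does not spell out the aggregation rule used, but one simple way to picture the comparison it describes is majority voting over worker judgments, with the majority fraction serving as a per-sentence crowd-agreement signal. The following is a minimal Python sketch under that assumption; the `treats`/`no_relation` label set and the toy data are hypothetical, not taken from the paper:

```python
from collections import Counter

def aggregate_crowd_labels(judgments):
    """Majority-vote aggregation: return the most common label and the
    fraction of workers who chose it (a per-sentence agreement score)."""
    counts = Counter(judgments)
    label, votes = counts.most_common(1)[0]
    return label, votes / len(judgments)

def compare_to_experts(crowd_judgments, expert_labels):
    """Count sentences where the aggregated crowd label matches the
    expert label, and collect per-sentence agreement scores."""
    matches, agreements = 0, []
    for sent_id, expert_label in expert_labels.items():
        crowd_label, agreement = aggregate_crowd_labels(crowd_judgments[sent_id])
        agreements.append(agreement)
        if crowd_label == expert_label:
            matches += 1
    return matches, len(expert_labels), agreements

# Hypothetical toy data: five worker votes per sentence on whether it
# asserts a drug-treats-disease relation.
crowd = {
    "s1": ["treats", "treats", "no_relation", "treats", "treats"],
    "s2": ["no_relation", "treats", "no_relation", "no_relation", "treats"],
}
experts = {"s1": "treats", "s2": "no_relation"}

matched, total, scores = compare_to_experts(crowd, experts)
print(f"Crowd agreed with experts on {matched} of {total} sentences")
print("Per-sentence crowd agreement:", scores)
```

On this view, a low majority fraction for a sentence is exactly the kind of reproducible disagreement signal the abstract argues should be exposed: it flags sentences whose relation status is ambiguous in the data or underspecified by the annotation guidelines.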
