Research on Quality Control in Crowdsourcing for Annotation

Crowdsourcing is a distributed business model that has rapidly gained popularity in recent years. Crowdsourcing for annotation is an effective way to obtain a large number of labels at low cost and in a short time. However, quality control remains a challenge, since spammers on crowdsourcing platforms may submit unreliable results. To improve labeling accuracy in crowdsourcing, in this paper we learn each annotator's reliability and actively decide who should annotate a given instance based on that reliability. We adopt the Greedy Forecaster prediction model to infer the annotation of a given instance from the multiple labels submitted by annotators. Experimental results and comparisons with baseline methods show that our approach achieves higher annotation accuracy in different crowdsourcing scenarios, demonstrating that our method provides effective quality control.
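
As an illustrative sketch only, the Python code below shows one common way to realize the pipeline the abstract describes: per-annotator reliability estimates are maintained, the most reliable annotators are actively selected for each new instance, and their labels are aggregated by a reliability-weighted vote. This is not a reproduction of the paper's Greedy Forecaster; all function names, parameters, and the moving-average reliability update are hypothetical stand-ins.

```python
import random
from collections import defaultdict

def select_annotators(reliability, k=3):
    """Actively pick the k annotators currently estimated to be most reliable."""
    return sorted(reliability, key=reliability.get, reverse=True)[:k]

def weighted_vote(labels, reliability):
    """Aggregate submitted labels by a reliability-weighted vote.

    labels: dict mapping annotator id -> submitted label
    reliability: dict mapping annotator id -> estimated accuracy in [0, 1]
    """
    scores = defaultdict(float)
    for annotator, label in labels.items():
        scores[label] += reliability.get(annotator, 0.5)  # 0.5 = uninformed prior
    return max(scores, key=scores.get)

def update_reliability(reliability, labels, aggregated, lr=0.1):
    """Move each annotator's reliability toward their agreement with the
    aggregated label, using a simple exponential moving average."""
    for annotator, label in labels.items():
        agree = 1.0 if label == aggregated else 0.0
        reliability[annotator] = (1 - lr) * reliability[annotator] + lr * agree

# Example round: select annotators, collect labels, aggregate, update estimates.
reliability = {"a1": 0.5, "a2": 0.5, "a3": 0.5, "a4": 0.5}
chosen = select_annotators(reliability, k=3)
labels = {a: random.choice(["pos", "neg"]) for a in chosen}  # stand-in for real answers
final = weighted_vote(labels, reliability)
update_reliability(reliability, labels, final)
```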