Though crowdsourcing holds great promise, many practitioners struggle with framing tasks and determining which members of the crowd to recruit in order to obtain reliable output. Expert knowledge is often desirable but, given the time and cost constraints of a problem, may not be available; in such cases it is beneficial to augment whatever expert input is available with input from members of the general population. We believe that reduced reliance on experts will, in some settings, yield acceptable performance while lowering cost and latency. In this work, we show that by incorporating non-expert responses we can approach the performance of an expert group on an image labeling task while reducing our reliance on experts.
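The abstract does not specify how expert and non-expert responses are combined. As a minimal sketch of one plausible scheme, a weighted majority vote could blend the two pools; the function name `aggregate_labels`, the tuple format, and the specific weights below are assumptions for illustration, not the authors' method:

```python
from collections import Counter, defaultdict

def aggregate_labels(responses, expert_weight=3.0, nonexpert_weight=1.0):
    """Weighted majority vote over per-image label responses.

    responses: iterable of (image_id, label, is_expert) tuples.
    Returns a dict mapping image_id -> consensus label.
    """
    votes = defaultdict(Counter)
    for image_id, label, is_expert in responses:
        # Hypothetical weighting: expert votes count more than non-expert votes.
        weight = expert_weight if is_expert else nonexpert_weight
        votes[image_id][label] += weight
    # Pick the highest-weighted label for each image.
    return {img: counts.most_common(1)[0][0] for img, counts in votes.items()}

# Example: one expert vote (weight 3.0) outweighs two non-expert votes (2.0 total).
responses = [
    ("img1", "cat", True),
    ("img1", "dog", False),
    ("img1", "dog", False),
]
print(aggregate_labels(responses))  # {'img1': 'cat'}
```

Tuning the weight ratio would control how far reliance shifts from experts toward the general population, which is the trade-off the abstract describes.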