Crowdsourcing for linguistic data typically aims to replicate expert annotations using simplified tasks. But an alternative goal — one that is especially relevant for research in the domains of language meaning and use — is to tap into people's rich experience as everyday users of language. Research in these areas has the potential to tell us a great deal about how language works, but designing annotation frameworks for crowdsourcing of this kind poses special challenges. In this paper we define and exemplify two approaches to linguistic data collection corresponding to these differing goals (model-driven and user-driven) and discuss some hybrid cases in which they overlap. We also describe some design principles and resolution techniques helpful for eliciting linguistic wisdom from the crowd.