Linguistic Wisdom from the Crowd

Crowdsourcing for linguistic data typically aims to replicate expert annotations using simplified tasks. But an alternative goal — one that is especially relevant for research in the domains of language meaning and use — is to tap into people's rich experience as everyday users of language. Research in these areas has the potential to tell us a great deal about how language works, but designing annotation frameworks for crowdsourcing of this kind poses special challenges. In this paper we define and exemplify two approaches to linguistic data collection corresponding to these differing goals (model-driven and user-driven) and discuss some hybrid cases in which they overlap. We also describe some design principles and resolution techniques helpful for eliciting linguistic wisdom from the crowd.
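
To make the notion of a resolution technique concrete, here is a minimal, hypothetical sketch (not taken from the paper; the function resolve_labels and the consensus_threshold parameter are illustrative assumptions) of one way crowd annotations might be resolved: items with high agreement receive a single adjudicated label, while low-agreement items are kept as label distributions, since in user-driven collection disagreement can reflect genuine variation in usage rather than annotator error.

```python
from collections import Counter

def resolve_labels(annotations, consensus_threshold=0.7):
    """Resolve per-item crowd annotations (illustrative sketch).

    annotations: dict mapping item_id -> list of labels from workers.
    Items whose majority label reaches the (hypothetical) consensus
    threshold get a single resolved label; the rest are kept as full
    label distributions rather than being forced to one answer.
    """
    resolved, contested = {}, {}
    for item_id, labels in annotations.items():
        counts = Counter(labels)
        top_label, top_count = counts.most_common(1)[0]
        if top_count / len(labels) >= consensus_threshold:
            resolved[item_id] = top_label
        else:
            # Preserve the distribution: in a user-driven setting,
            # disagreement may be signal, not noise.
            contested[item_id] = {lab: n / len(labels)
                                  for lab, n in counts.items()}
    return resolved, contested

# Example: three workers judge the sentiment of two sentences.
votes = {
    "s1": ["positive", "positive", "positive"],
    "s2": ["positive", "negative", "neutral"],
}
resolved, contested = resolve_labels(votes)
print(resolved)   # {'s1': 'positive'}
print(contested)  # {'s2': {'positive': 0.33.., 'negative': 0.33.., 'neutral': 0.33..}}
```

Keeping the distribution for contested items, instead of always imposing a majority label, is one way a resolution step can serve the user-driven goal of capturing variation in everyday language use.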