Language identification (LID), the task of determining the natural language of a given text, is an essential first step in most NLP pipelines. While LID is generally a solved problem for documents of sufficient length in languages with ample training data, the proliferation of microblogs and other social media has made it increasingly common to encounter use cases that *don’t* satisfy these conditions. In these settings, the fundamental difficulty is the scarcity, and the cost of gathering, of labeled data: unlike in some annotation tasks, no single “expert” can quickly and reliably identify more than a handful of languages. This raises a natural question: can we gain useful information when annotators are only able to *rule out* languages for a given document, rather than supply a positive label? And what are the optimal choices for gathering and representing such *negative evidence* as a model is trained? In this paper, we demonstrate that using negative evidence can improve the performance of a simple neural LID model. The improvement is sensitive to the policies for how the evidence is represented in the loss function and for which annotators to employ given the instance and the model state. We consider simple policies and report experimental results that indicate the optimal choices for this task. We conclude with a discussion of future work to determine whether and how these results generalize to other classification tasks.
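The abstract does not specify how negative evidence enters the loss. One common way to use a rule-out annotation (a hypothetical sketch, not the paper's actual formulation) is a complementary-label objective: minimize the probability mass the model assigns to the ruled-out languages, i.e. `-log(1 - P(ruled-out))`:

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax over class logits.
    e = np.exp(logits - logits.max())
    return e / e.sum()

def negative_evidence_loss(logits, ruled_out):
    """Loss for an instance whose true language is unknown, but where an
    annotator has ruled out the classes in `ruled_out` (a set of indices).
    Minimizing -log(1 - P(ruled-out)) pushes probability mass away from
    the ruled-out languages without committing to a positive label."""
    p = softmax(logits)
    mass = p[list(ruled_out)].sum()
    # Clip for numerical stability when nearly all mass is ruled out.
    return -np.log(np.clip(1.0 - mass, 1e-12, 1.0))
```

A model that already assigns little mass to the ruled-out classes incurs near-zero loss; ruling out the model's top prediction yields a large loss, so the gradient signal concentrates exactly where the negative evidence is informative.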