In computer vision, virtually every state-of-the-art deep learning system is trained with data augmentation. In text classification, however, data augmentation is less widely practiced because it must be performed before training and risks introducing label noise. We augment the IMDB movie review dataset with examples generated by two families of techniques: the random token perturbations introduced by Wei and Zou [2019] and backtranslation, that is, translating the text to a second language and then back to English. In low-resource settings, backtranslation yields significant improvements on top of the state-of-the-art ULMFiT model. A ULMFiT model pretrained on WikiText-103 and then fine-tuned on only 50 IMDB examples plus 500 synthetic examples generated by backtranslation achieves 80.6% accuracy, an 8.1% improvement over the augmentation-free baseline, at a cost of only 9 minutes of additional training time. Random token perturbations yield no improvement while incurring an equivalent computational cost. The benefit of training with backtranslated examples decreases as the size of the available training set grows; on the full dataset, neither augmentation technique improves on ULMFiT's state-of-the-art performance. We address this by using backtranslation as a form of test-time augmentation and by ensembling ULMFiT with other models, achieving small improvements.
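To make the two augmentation families concrete, the Python sketch below implements (a) an EDA-style random token perturbation (random swap and random deletion; the rates and function names are illustrative, not the settings used in the experiments) and (b) round-trip backtranslation through French. The Helsinki-NLP MarianMT checkpoints from the Hugging Face transformers library are an assumption for illustration; the abstract does not name the translation system used.

    import random
    from transformers import MarianMTModel, MarianTokenizer

    def random_perturb(text, p_swap=0.1, p_delete=0.1):
        # EDA-style perturbation: randomly swap token positions, then randomly
        # drop tokens. The rates are illustrative, not the paper's settings.
        tokens = text.split()
        for i in range(len(tokens)):
            if random.random() < p_swap:
                j = random.randrange(len(tokens))
                tokens[i], tokens[j] = tokens[j], tokens[i]
        kept = [t for t in tokens if random.random() >= p_delete]
        return " ".join(kept) if kept else text

    # English -> French -> English round trip with open MarianMT checkpoints
    # (an assumed choice of translation model and pivot language).
    EN_FR, FR_EN = "Helsinki-NLP/opus-mt-en-fr", "Helsinki-NLP/opus-mt-fr-en"
    tok_ef = MarianTokenizer.from_pretrained(EN_FR)
    mt_ef = MarianMTModel.from_pretrained(EN_FR)
    tok_fe = MarianTokenizer.from_pretrained(FR_EN)
    mt_fe = MarianMTModel.from_pretrained(FR_EN)

    def _translate(texts, tok, model):
        batch = tok(texts, return_tensors="pt", padding=True, truncation=True)
        return tok.batch_decode(model.generate(**batch), skip_special_tokens=True)

    def backtranslate(texts):
        # One synthetic paraphrase per input review.
        return _translate(_translate(texts, tok_ef, mt_ef), tok_fe, mt_fe)

The same backtranslate function can also serve as test-time augmentation: classify each review together with its backtranslated copy and average the predicted probabilities.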
[1] Jacob Devlin et al., BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding, NAACL 2019.
[2] Andrew L. Maas et al., Learning Word Vectors for Sentiment Analysis, ACL 2011.
[3] Takeru Miyato et al., Virtual Adversarial Training for Semi-Supervised Text Classification, arXiv 2016.
[4] Jeremy Howard and Sebastian Ruder, Universal Language Model Fine-tuning for Text Classification, ACL 2018.
[5] Stephen Merity et al., Pointer Sentinel Mixture Models, ICLR 2017.
[6] Sergey Edunov et al., Understanding Back-Translation at Scale, EMNLP 2018.
[7] Motoki Sato et al., Interpretable Adversarial Perturbation in Input Embedding Space for Text, IJCAI 2018.
[8] Stephen Merity et al., Regularizing and Optimizing LSTM Language Models, ICLR 2018.
[9] Jason Wei and Kai Zou, EDA: Easy Data Augmentation Techniques for Boosting Performance on Text Classification Tasks, EMNLP 2019.
[10] Rico Sennrich et al., Improving Neural Machine Translation Models with Monolingual Data, ACL 2016.