Combined Method Based on Source Text and Representation for Text Enhancement

Text classification is a fundamental and important task in natural language processing (NLP). Existing text classification models are powerful, but training such a model requires a large amount of labeled data, which is often unavailable in practice. Data scarcity falls into two main categories: cold start and low resource. Text enhancement (data augmentation) methods are commonly used to address this problem. In this paper, we combine source text enhancement with representation enhancement to improve the enhancement effect. Five sets of experiments verify that our method is effective across different data sets and different classifiers. The experimental results show that accuracy improves and the generalization ability of the classifier is strengthened to some extent. We also find that neither the enhancement factor nor the size of the training set is positively correlated with the enhancement effect; the enhancement factor therefore needs to be selected according to the characteristics of the data.
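
The abstract describes combining augmentation at the source-text level with augmentation at the representation level, without specifying the exact operations. Below is a minimal illustrative sketch of that two-level idea, assuming an EDA-style random-swap operation for the source text and a mixup-style interpolation of embeddings for the representation; both are common stand-ins, not necessarily the paper's own operations.

```python
# Hypothetical sketch: combine source-text augmentation (EDA-style
# random swap) with representation augmentation (mixup-style
# interpolation). The paper's exact operations are not given in the
# abstract; these are illustrative assumptions.
import random
import numpy as np

def random_swap(tokens, n_swaps=1):
    """Source-text augmentation: swap two randomly chosen tokens."""
    tokens = tokens[:]
    for _ in range(n_swaps):
        if len(tokens) < 2:
            break
        i, j = random.sample(range(len(tokens)), 2)
        tokens[i], tokens[j] = tokens[j], tokens[i]
    return tokens

def mixup(x1, x2, y1, y2, alpha=0.2):
    """Representation augmentation: interpolate two embedded examples
    and their one-hot labels with a Beta-distributed mixing weight."""
    lam = np.random.beta(alpha, alpha)
    return lam * x1 + (1 - lam) * x2, lam * y1 + (1 - lam) * y2

# Usage: augment at the text level first, then embed the texts
# (embedding step omitted here) and interpolate the representations.
sentence = "labeled data is often scarce in practice".split()
print(" ".join(random_swap(sentence, n_swaps=2)))

x1, x2 = np.random.rand(128), np.random.rand(128)  # stand-in embeddings
y1, y2 = np.eye(2)[0], np.eye(2)[1]                # one-hot labels
x_mix, y_mix = mixup(x1, x2, y1, y2)
```

In such a pipeline, the enhancement factor would control how many augmented copies are generated per original example, which is the parameter the abstract suggests must be tuned to the characteristics of the data set.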
