Virtual Data Augmentation: A Robust and General Framework for Fine-tuning Pre-trained Models
[1] Chris Brockett, et al. Automatically Constructing a Corpus of Sentential Paraphrases, 2005, IJCNLP.
[2] Kannan Ramchandran, et al. Rademacher Complexity for Adversarially Robust Generalization, 2018, ICML.
[3] Jianfeng Gao, et al. SMART: Robust and Efficient Fine-Tuning for Pre-trained Natural Language Models through Principled Regularized Optimization, 2019, ACL.
[4] Wanxiang Che, et al. Generating Natural Language Adversarial Examples through Probability Weighted Word Saliency, 2019, ACL.
[5] Dejing Dou, et al. HotFlip: White-Box Adversarial Examples for Text Classification, 2017, ACL.
[6] Samy Bengio, et al. Adversarial Examples in the Physical World, 2016, ICLR.
[7] Omer Levy, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach, 2019, ArXiv.
[8] Mani B. Srivastava, et al. Generating Natural Language Adversarial Examples, 2018, EMNLP.
[9] Kai Zou, et al. EDA: Easy Data Augmentation Techniques for Boosting Performance on Text Classification Tasks, 2019, EMNLP.
[10] R. Thomas McCoy, et al. Syntactic Data Augmentation Increases Robustness to Inference Heuristics, 2020, ACL.
[11] Jian Zhang, et al. SQuAD: 100,000+ Questions for Machine Comprehension of Text, 2016, EMNLP.
[12] Michael I. Jordan, et al. Greedy Attack and Gumbel Attack: Generating Adversarial Examples for Discrete Data, 2018, JMLR.
[13] Yijia Liu, et al. Sequence-to-Sequence Data Augmentation for Dialogue Language Understanding, 2018, COLING.
[14] Ji-Rong Wen, et al. S3-Rec: Self-Supervised Learning for Sequential Recommendation with Mutual Information Maximization, 2020, CIKM.
[15] Graham Neubig, et al. On Evaluation of Adversarial Perturbations for Sequence-to-Sequence Models, 2019, NAACL.
[16] Carlos Guestrin, et al. Semantically Equivalent Adversarial Rules for Debugging NLP Models, 2018, ACL.
[17] Shin Ishii, et al. Distributional Smoothing with Virtual Adversarial Training, 2016, ICLR.
[18] Yu Cheng, et al. FreeLB: Enhanced Adversarial Training for Natural Language Understanding, 2020, ICLR.
[19] Ting Wang, et al. TextBugger: Generating Adversarial Text Against Real-world Applications, 2018, NDSS.
[20] Liqun Chen, et al. Contextualized Perturbation for Textual Adversarial Attack, 2020, NAACL.
[21] Pushmeet Kohli, et al. Adversarial Robustness through Local Linearization, 2019, NeurIPS.
[22] Andrew M. Dai, et al. Adversarial Training Methods for Semi-Supervised Text Classification, 2016, ICLR.
[23] Quoc V. Le, et al. Unsupervised Data Augmentation for Consistency Training, 2019, NeurIPS.
[24] Lemao Liu, et al. Understanding Data Augmentation in Neural Machine Translation: Two Perspectives towards Generalization, 2019, EMNLP.
[25] Maosong Sun, et al. Better Robustness by More Coverage: Adversarial and Mixup Data Augmentation for Robust Finetuning, 2020, Findings of ACL.
[26] Yanjun Qi, et al. Black-Box Generation of Adversarial Text Sequences to Evade Deep Learning Classifiers, 2018, IEEE Security and Privacy Workshops (SPW).
[27] Ming-Wei Chang, et al. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding, 2019, NAACL.
[28] Linyang Li, et al. TAVAT: Token-Aware Virtual Adversarial Training for Language Understanding, 2020.
[29] Shujie Liu, et al. Unsupervised Context Rewriting for Open Domain Conversation, 2019, EMNLP.
[30] Sosuke Kobayashi, et al. Contextual Augmentation: Data Augmentation by Words with Paradigmatic Relations, 2018, NAACL.
[31] Aleksander Madry, et al. Adversarially Robust Generalization Requires More Data, 2018, NeurIPS.
[32] Jonathon Shlens, et al. Explaining and Harnessing Adversarial Examples, 2014, ICLR.
[33] Omer Levy, et al. GLUE: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding, 2018, BlackboxNLP@EMNLP.
[34] Xipeng Qiu, et al. BERT-ATTACK: Adversarial Attack against BERT Using BERT, 2020, EMNLP.
[35] Percy Liang, et al. Transforming Question Answering Datasets into Natural Language Inference Datasets, 2018, ArXiv.
[36] Yang Song, et al. Improving the Robustness of Deep Neural Networks via Stability Training, 2016, CVPR.
[37] Christof Monz, et al. Data Augmentation for Low-Resource Neural Machine Translation, 2017, ACL.
[38] Percy Liang, et al. Adversarial Examples for Evaluating Reading Comprehension Systems, 2017, EMNLP.
[39] Lei Li, et al. Generating Fluent Adversarial Examples for Natural Languages, 2019, ACL.
[40] Shin Ishii, et al. Virtual Adversarial Training: A Regularization Method for Supervised and Semi-Supervised Learning, 2017, IEEE TPAMI.
[41] Lingjuan Lyu, et al. Model Extraction and Adversarial Transferability, Your BERT is Vulnerable!, 2021, NAACL.
[42] Peter Szolovits, et al. Is BERT Really Robust? A Strong Baseline for Natural Language Attack on Text Classification and Entailment, 2020, AAAI.
[43] Wayne Xin Zhao, et al. Towards Topic-Guided Conversational Recommender System, 2020, COLING.
[44] Mohit Bansal, et al. Robust Machine Comprehension Models via Adversarial Training, 2018, NAACL.
[45] Dongyan Zhao, et al. Insufficient Data Can Also Rock! Learning to Converse Using Smaller Data with Augmentation, 2019, AAAI.
[46] Xinyue Liu, et al. SeqVAT: Virtual Adversarial Training for Semi-Supervised Sequence Labeling, 2020, ACL.
[47] Diyi Yang, et al. That’s So Annoying!!!: A Lexical and Frame-Semantic Embedding Based Data Augmentation Approach to Automatic Categorization of Annoying Behaviors Using #petpeeve Tweets, 2015, EMNLP.
[48] Qian Chen, et al. T3: Tree-Autoencoder Constrained Adversarial Text Generation for Targeted Attack, 2020, EMNLP.
[49] Bo Pang, et al. Seeing Stars: Exploiting Class Relationships for Sentiment Categorization with Respect to Rating Scales, 2005, ACL.
[50] Xiang Zhang, et al. Character-level Convolutional Networks for Text Classification, 2015, NIPS.
[51] Aditi Raghunathan, et al. Robust Encodings: A Framework for Combating Adversarial Typos, 2020, ACL.