Poison Attacks against Text Datasets with Conditional Adversarially Regularized Autoencoder
Alvin Chan | Yi Tay | Yew-Soon Ong | Aston Zhang
[1] Ananthram Swami, et al. Crafting adversarial input sequences for recurrent neural networks, 2016, MILCOM 2016 - IEEE Military Communications Conference.
[2] Samuel R. Bowman, et al. A Broad-Coverage Challenge Corpus for Sentence Understanding through Inference, 2017, NAACL.
[3] Percy Liang, et al. Certified Defenses for Data Poisoning Attacks, 2017, NIPS.
[4] Yiming Yang, et al. XLNet: Generalized Autoregressive Pretraining for Language Understanding, 2019, NeurIPS.
[5] Alexander M. Rush, et al. Adversarially Regularized Autoencoders, 2017, ICML.
[6] Brendan Dolan-Gavitt, et al. BadNets: Identifying Vulnerabilities in the Machine Learning Model Supply Chain, 2017, arXiv.
[7] Max Welling, et al. Semi-supervised Learning with Deep Generative Models, 2014, NIPS.
[8] Ankur Taly, et al. Did the Model Understand the Question?, 2018, ACL.
[9] Graham Neubig, et al. Weight Poisoning Attacks on Pretrained Models, 2020, ACL.
[10] Simon Osindero, et al. Conditional Generative Adversarial Nets, 2014, arXiv.
[11] Wen-Chuan Lee, et al. Trojaning Attack on Neural Networks, 2018, NDSS.
[12] Omer Levy, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach, 2019, arXiv.
[13] Mani B. Srivastava, et al. Generating Natural Language Adversarial Examples, 2018, EMNLP.
[14] Yang Liu, et al. Metamorphic Relation Based Adversarial Attacks on Differentiable Neural Computer, 2018, arXiv.
[15] Yew-Soon Ong, et al. Poison as a Cure: Detecting & Neutralizing Variable-Sized Backdoor Attacks in Deep Neural Networks, 2019, arXiv.
[16] Blaine Nelson, et al. Exploiting Machine Learning to Subvert Your Spam Filter, 2008, LEET.
[17] Dawn Song, et al. Targeted Backdoor Attacks on Deep Learning Systems Using Data Poisoning, 2017, arXiv.
[18] Claudia Eckert, et al. Support vector machines under adversarial label contamination, 2015, Neurocomputing.
[19] Carlos Guestrin, et al. Semantically Equivalent Adversarial Rules for Debugging NLP Models, 2018, ACL.
[20] Percy Liang, et al. Understanding Black-box Predictions via Influence Functions, 2017, ICML.
[21] Blaine Nelson, et al. Poisoning Attacks against Support Vector Machines, 2012, ICML.
[22] Ming-Wei Chang, et al. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding, 2019, NAACL.
[23] Tudor Dumitras, et al. Poison Frogs! Targeted Clean-Label Poisoning Attacks on Neural Networks, 2018, NeurIPS.
[24] Percy Liang, et al. Adversarial Examples for Evaluating Reading Comprehension Systems, 2017, EMNLP.
[25] Ilya Sutskever, et al. Language Models are Unsupervised Multitask Learners, 2019.
[26] Joan Bruna, et al. Intriguing properties of neural networks, 2013, ICLR.
[27] Benny Pinkas, et al. Turning Your Weakness Into a Strength: Watermarking Deep Neural Networks by Backdooring, 2018, USENIX Security Symposium.
[28] Eric P. Xing, et al. Toward Controlled Generation of Text, 2017, ICML.
[29] Dejing Dou, et al. HotFlip: White-Box Adversarial Examples for NLP, 2017, arXiv.
[30] Xiaojin Zhu, et al. The Security of Latent Dirichlet Allocation, 2015, AISTATS.
[31] Jung-Woo Ha, et al. StarGAN: Unified Generative Adversarial Networks for Multi-Domain Image-to-Image Translation, 2018, IEEE/CVF Conference on Computer Vision and Pattern Recognition.
[32] David A. Wagner, et al. Obfuscated Gradients Give a False Sense of Security: Circumventing Defenses to Adversarial Examples, 2018, ICML.
[33] Christopher Potts, et al. A large annotated corpus for learning natural language inference, 2015, EMNLP.
[34] Sameer Singh, et al. Generating Natural Adversarial Examples, 2017, ICLR.
[35] Christopher D. Manning, et al. An extended model of natural logic, 2009, IWCS.
[36] Jun-Yan Zhu, et al. Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial Networks, 2017, ICCV.
[37] Luke S. Zettlemoyer, et al. Adversarial Example Generation with Syntactically Controlled Paraphrase Networks, 2018, NAACL.