Better constraints of imperceptibility, better adversarial examples in the text
Jianpeng Ke | Aoshuang Ye | Lina Wang | Wenqi Wang | Run Wang
[1] Cho-Jui Hsieh, et al. On the Robustness of Self-Attentive Models, 2019, ACL.
[2] Hui Liu, et al. Joint Character-Level Word Embedding and Adversarial Stability Training to Defend Adversarial Text, 2020, AAAI.
[3] K. S. Rao, et al. A novel approach to unsupervised pattern discovery in speech using Convolutional Neural Network, 2022, Comput. Speech Lang.
[4] Dejing Dou, et al. HotFlip: White-Box Adversarial Examples for Text Classification, 2017, ACL.
[5] Hiroyuki Shindo, et al. Interpretable Adversarial Perturbation in Input Embedding Space for Text, 2018, IJCAI.
[6] Mani B. Srivastava, et al. Generating Natural Language Adversarial Examples, 2018, EMNLP.
[7] Zhifei Zhang, et al. Feature Importance-aware Transferable Adversarial Attacks, 2021, ArXiv.
[8] David A. Wagner, et al. Towards Evaluating the Robustness of Neural Networks, 2016, 2017 IEEE Symposium on Security and Privacy (SP).
[9] Luke S. Zettlemoyer, et al. Adversarial Example Generation with Syntactically Controlled Paraphrase Networks, 2018, NAACL.
[10] Michael I. Jordan, et al. Greedy Attack and Gumbel Attack: Generating Adversarial Examples for Discrete Data, 2018, J. Mach. Learn. Res.
[11] Vikram Pudi, et al. Generating Natural Language Attacks in a Hard Label Black Box Setting, 2020, ArXiv.
[12] Peter Szolovits, et al. Is BERT Really Robust? A Strong Baseline for Natural Language Attack on Text Classification and Entailment, 2020, AAAI.
[13] Zhiyuan Liu, et al. Word-level Textual Adversarial Attacking as Combinatorial Optimization, 2019, ACL.
[14] Shi Feng, et al. Pathologies of Neural Models Make Interpretations Difficult, 2018, EMNLP.
[15] Leonidas J. Guibas, et al. A metric for distributions with applications to image databases, 1998, Sixth International Conference on Computer Vision (IEEE Cat. No.98CH36271).
[16] Prashanth Vijayaraghavan, et al. Generating Black-Box Adversarial Examples for Text Classifiers Using a Deep Reinforced Model, 2019, ECML/PKDD.
[17] Ananthram Swami, et al. Crafting adversarial input sequences for recurrent neural networks, 2016, MILCOM 2016 - 2016 IEEE Military Communications Conference.
[18] Percy Liang, et al. Adversarial Examples for Evaluating Reading Comprehension Systems, 2017, EMNLP.
[19] Shiguang Shan, et al. Meta Gradient Adversarial Attack, 2021, 2021 IEEE/CVF International Conference on Computer Vision (ICCV).
[20] Z. Anwar, et al. CyberPulse++: A machine learning-based security framework for detecting link flooding attacks in software defined networks, 2021, Int. J. Intell. Syst.
[21] M. Ali Babar, et al. ReinforceBug: A Framework to Generate Adversarial Textual Examples, 2021, Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies.
[22] Ali Farhadi, et al. Bidirectional Attention Flow for Machine Comprehension, 2016, ICLR.
[23] Baoyuan Wang, et al. CRFace: Confidence Ranker for Model-Agnostic Face Detection Refinement, 2021, 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
[24] Xirong Li, et al. Deep Text Classification Can be Fooled, 2017, IJCAI.
[25] Bhuwan Dhingra, et al. Combating Adversarial Misspellings with Robust Word Recognition, 2019, ACL.
[26] Simon Burton, et al. Structuring Validation Targets of a Machine Learning Function Applied to Automated Driving, 2018, SAFECOMP.
[27] Wanxiang Che, et al. Generating Natural Language Adversarial Examples through Probability Weighted Word Saliency, 2019, ACL.
[28] Hwee Tou Ng, et al. Improving the Robustness of Question Answering Systems to Question Paraphrasing, 2019, ACL.
[29] Xinyu Dai, et al. A Reinforced Generation of Adversarial Samples for Neural Machine Translation, 2019, ArXiv.
[30] Ting Wang, et al. TextBugger: Generating Adversarial Text Against Real-world Applications, 2018, NDSS.
[31] Moustapha Cissé, et al. Houdini: Fooling Deep Structured Visual and Speech Recognition Models with Adversarial Examples, 2017, NIPS.
[32] Matteo Pagliardini, et al. Unsupervised Learning of Sentence Embeddings Using Compositional n-Gram Features, 2017, NAACL.
[33] Qian Chen, et al. T3: Tree-Autoencoder Constrained Adversarial Text Generation for Targeted Attack, 2020, EMNLP.
[34] Simon See, et al. ACT: an Attentive Convolutional Transformer for Efficient Text Classification, 2021, AAAI.
[35] CNN-based intelligent safety surveillance in green IoT applications, 2021, China Communications.
[36] Aleksander Madry, et al. Towards Deep Learning Models Resistant to Adversarial Attacks, 2017, ICLR.
[37] Roger Wattenhofer, et al. A Geometry-Inspired Attack for Generating Natural Language Adversarial Examples, 2020, COLING.
[38] Shruti Tople, et al. To Transfer or Not to Transfer: Misclassification Attacks Against Transfer Learned Text Classifiers, 2020, ArXiv.
[39] Yanjun Qi, et al. Black-Box Generation of Adversarial Text Sequences to Evade Deep Learning Classifiers, 2018, 2018 IEEE Security and Privacy Workshops (SPW).
[40] Jonathan Berant, et al. White-to-Black: Efficient Distillation of Black-Box Adversarial Attacks, 2019, NAACL.
[41] Kouichi Sakurai, et al. One Pixel Attack for Fooling Deep Neural Networks, 2017, IEEE Transactions on Evolutionary Computation.