Feature-Indistinguishable Attack to Circumvent Trapdoor-Enabled Defense
Xiaojing Ma | Bin B. Zhu | Shengshan Hu | Hai Jin | Chaoxiang He