Local Gradients Smoothing: Defense Against Localized Adversarial Attacks
Muzammal Naseer | Salman Hameed Khan | Fatih Murat Porikli