Defend Against Adversarial Samples by Using Perceptual Hash
Shiyu Li | Dengpan Ye | Mei Yuan | Yueyun Shang | Liqiang Wang | Shunzhi Jiang | Changrui Liu