Restricted Evasion Attack: Generation of Restricted-Area Adversarial Example