Random Untargeted Adversarial Example on Deep Neural Network
Hyun Kwon | Yongchul Kim | Hyunsoo Yoon | Daeseon Choi