Li Chen | Michael E. Kounavis | Duen Horng Chau | Shang-Tse Chen | Nilaksh Das | Fred Hohman | Madhuri Shanbhogue
[1] Ananthram Swami, et al. Distillation as a Defense to Adversarial Perturbations Against Deep Neural Networks, 2015, 2016 IEEE Symposium on Security and Privacy (SP).
[2] John J. Hopfield, et al. Dense Associative Memory Is Robust to Adversarial Inputs, 2017, Neural Computation.
[3] Joan Bruna, et al. Intriguing properties of neural networks, 2013, ICLR.
[4] Ming-Yu Liu, et al. Tactics of Adversarial Attack on Deep Reinforcement Learning Agents, 2017, IJCAI.
[5] Prateek Mittal, et al. Dimensionality Reduction as a Defense against Evasion Attacks on Machine Learning Classifiers, 2017, ArXiv.
[6] Sandy H. Huang, et al. Adversarial Attacks on Neural Network Policies, 2017, ICLR.
[7] Johannes Stallkamp, et al. The German Traffic Sign Recognition Benchmark: A multi-class classification competition, 2011, The 2011 International Joint Conference on Neural Networks.
[8] Jonathon Shlens, et al. Explaining and Harnessing Adversarial Examples, 2014, ICLR.
[9] Alex Krizhevsky, et al. Learning Multiple Layers of Features from Tiny Images, 2009.
[10] Seyed-Mohsen Moosavi-Dezfooli, et al. Universal Adversarial Perturbations, 2016, 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[11] Ananthram Swami, et al. The Limitations of Deep Learning in Adversarial Settings, 2015, 2016 IEEE European Symposium on Security and Privacy (EuroS&P).
[12] Ying Tan, et al. Generating Adversarial Malware Examples for Black-Box Attacks Based on GAN, 2017, DMBD.
[13] Qi Zhao, et al. Foveation-based Mechanisms Alleviate Adversarial Examples, 2015, ArXiv.
[14] Ryan R. Curtin, et al. Detecting Adversarial Samples from Artifacts, 2017, ArXiv.
[15] Geoffrey E. Hinton, et al. ImageNet classification with deep convolutional neural networks, 2012, Commun. ACM.
[16] Daniel Cullina, et al. Enhancing robustness of machine learning systems via data transformations, 2017, 2018 52nd Annual Conference on Information Sciences and Systems (CISS).
[17] Luca Rigazio, et al. Towards Deep Neural Network Architectures Robust to Adversarial Examples, 2014, ICLR.
[18] Jan Hendrik Metzen, et al. On Detecting Adversarial Perturbations, 2017, ICLR.
[19] Ananthram Swami, et al. Practical Black-Box Attacks against Machine Learning, 2016, AsiaCCS.
[20] Zoubin Ghahramani, et al. A study of the effect of JPG compression on adversarial images, 2016, ArXiv.
[21] Ananthram Swami, et al. Crafting adversarial input sequences for recurrent neural networks, 2016, MILCOM 2016 - 2016 IEEE Military Communications Conference.
[22] Patrick D. McDaniel, et al. Adversarial Perturbations Against Deep Neural Networks for Malware Classification, 2016, ArXiv.
[23] Sergey Ioffe, et al. Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift, 2015, ICML.
[24] Samy Bengio, et al. Adversarial examples in the physical world, 2016, ICLR.
[25] Seyed-Mohsen Moosavi-Dezfooli, et al. DeepFool: A Simple and Accurate Method to Fool Deep Neural Networks, 2015, 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).