Martin Wattenberg | Samuel S. Schoenholz | Ian J. Goodfellow | Luke Metz | Fartash Faghri | Justin Gilmer | Maithra Raghu