[1] Alan L. Yuille, et al. Mitigating adversarial effects through randomization, 2017, ICLR.
[2] Patrick D. McDaniel, et al. Adversarial Perturbations Against Deep Neural Networks for Malware Classification, 2016, ArXiv.
[3] Clark W. Barrett, et al. Provably Minimally-Distorted Adversarial Examples, 2017.
[4] Li Fei-Fei, et al. ImageNet: A large-scale hierarchical image database, 2009, CVPR.
[5] Aleksander Madry, et al. On Evaluating Adversarial Robustness, 2019, ArXiv.
[6] David H. C. Du. Proceedings - IEEE Symposium on Security and Privacy: Message from the General Chair, 2009, S&P 2009.
[7] David A. Wagner, et al. Audio Adversarial Examples: Targeted Attacks on Speech-to-Text, 2018, 2018 IEEE Security and Privacy Workshops (SPW).
[8] Aleksander Madry, et al. Prior Convictions: Black-Box Adversarial Attacks with Bandits and Priors, 2018, ICLR.
[9] Omar Fawzi, et al. Robustness of classifiers to uniform $\ell_p$ and Gaussian noise, 2018, AISTATS.
[10] Ananthram Swami, et al. Distillation as a Defense to Adversarial Perturbations Against Deep Neural Networks, 2015, 2016 IEEE Symposium on Security and Privacy (SP).
[11] Aleksander Madry, et al. Towards Deep Learning Models Resistant to Adversarial Attacks, 2017, ICLR.
[12] Nina Narodytska, et al. Simple Black-Box Adversarial Perturbations for Deep Networks, 2016, ArXiv.
[13] Lucy Rosenbloom. arXiv, 2019, The Charleston Advisor.
[14] Joan Bruna, et al. Intriguing properties of neural networks, 2013, ICLR.
[15] Samy Bengio, et al. Adversarial examples in the physical world, 2016, ICLR.
[16] Seyed-Mohsen Moosavi-Dezfooli, et al. Universal Adversarial Perturbations, 2016, 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[17] Jonathon Shlens, et al. Explaining and Harnessing Adversarial Examples, 2014, ICLR.