Giovanni S. Alberti | Rima Alaifari | Tandri Gauksson
[1] Christian Szegedy, et al. Intriguing properties of neural networks, 2013, ICLR.
[2] Andras Rozsa, et al. Adversarial Diversity and Hard Positive Generation, 2016, 2016 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW).
[3] Alexey Kurakin, et al. Adversarial examples in the physical world, 2016, ICLR.
[4] Alhussein Fawzi, et al. Manitest: Are classifiers really invariant?, 2015, BMVC.
[5] Aditi Raghunathan, et al. Certified Defenses against Adversarial Examples, 2018, ICLR.
[6] Eric Wong, et al. Provable defenses against adversarial examples via the convex outer adversarial polytope, 2017, ICML.
[7] Aleksander Madry, et al. Towards Deep Learning Models Resistant to Adversarial Attacks, 2017, ICLR.
[8] Christian Szegedy, et al. Rethinking the Inception Architecture for Computer Vision, 2015, 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[9] Anish Athalye, et al. Obfuscated Gradients Give a False Sense of Security: Circumventing Defenses to Adversarial Examples, 2018, ICML.
[10] Ian J. Goodfellow, et al. Explaining and Harnessing Adversarial Examples, 2014, ICLR.
[11] Mahmood Sharif, et al. Accessorize to a Crime: Real and Stealthy Attacks on State-of-the-Art Face Recognition, 2016, CCS.
[12] Max Jaderberg, et al. Spatial Transformer Networks, 2015, NIPS.
[13] Anish Athalye, et al. Synthesizing Robust Adversarial Examples, 2017, ICML.
[14] Nicholas Carlini, et al. Towards Evaluating the Robustness of Neural Networks, 2016, 2017 IEEE Symposium on Security and Privacy (SP).
[15] Kaiming He, et al. Deep Residual Learning for Image Recognition, 2015, 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[16] Seyed-Mohsen Moosavi-Dezfooli, et al. DeepFool: A Simple and Accurate Method to Fool Deep Neural Networks, 2015, 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[17] Tandri Gauksson, et al. Adversarial perturbations and deformations for convolutional neural networks, 2017.
[18] Alexey Kurakin, et al. Adversarial Machine Learning at Scale, 2016, ICLR.
[19] Nicholas Carlini, et al. Adversarial Examples Are Not Easily Detected: Bypassing Ten Detection Methods, 2017, AISec@CCS.
[20] Logan Engstrom, et al. Exploring the Landscape of Spatial Robustness, 2017, ICML.
[21] Seyed-Mohsen Moosavi-Dezfooli, et al. Universal Adversarial Perturbations, 2016, 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[22] Olga Russakovsky, et al. ImageNet Large Scale Visual Recognition Challenge, 2014, International Journal of Computer Vision.
[23] Rima Alaifari. Adversarial deformations for deep neural networks, 2018.
[24] Logan Engstrom, et al. A Rotation and a Translation Suffice: Fooling CNNs with Simple Transformations, 2017, ArXiv.
[25] Tom B. Brown, et al. Adversarial Patch, 2017, ArXiv.
[26] Harini Kannan, et al. Adversarial Logit Pairing, 2018, NIPS.
[27] Naveed Akhtar, et al. Threat of Adversarial Attacks on Deep Learning in Computer Vision: A Survey, 2018, IEEE Access.
[28] Florian Tramèr, et al. Ensemble Adversarial Training: Attacks and Defenses, 2017, ICLR.
[29] Chaowei Xiao, et al. Spatially Transformed Adversarial Examples, 2018, ICLR.
[30] Nicolas Papernot, et al. Practical Black-Box Attacks against Machine Learning, 2016, AsiaCCS.