On the Suitability of Lp-Norms for Creating and Preventing Adversarial Examples
[1] Friedrich Riesz. Untersuchungen über Systeme integrierbarer Funktionen, 1910.
[2] Mark B. Gardner, et al. Standardizing Auditory Tests, 1950.
[3] A. Tversky. Features of Similarity, 1977.
[4] Donald P. Greenberg, et al. Spatiotemporal sensitivity and visual attention for efficient rendering of dynamic environments, 2005, ACM Transactions on Graphics.
[5] A. Bovik, et al. A universal image quality index, 2002, IEEE Signal Processing Letters.
[6] Eero P. Simoncelli, et al. Image quality assessment: from error visibility to structural similarity, 2004, IEEE Transactions on Image Processing.
[7] Yann LeCun, et al. The MNIST database of handwritten digits, 2005.
[8] Alex Krizhevsky, et al. Learning Multiple Layers of Features from Tiny Images, 2009.
[9] Fabio Roli, et al. Evasion Attacks against Machine Learning at Test Time, 2013, ECML/PKDD.
[10] Joan Bruna, et al. Intriguing properties of neural networks, 2013, ICLR.
[11] Jason Yosinski, et al. Deep neural networks are easily fooled: High confidence predictions for unrecognizable images, 2014, 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[12] Jonathon Shlens, et al. Explaining and Harnessing Adversarial Examples, 2014, ICLR.
[13] Terrance E. Boult, et al. Adversarial Diversity and Hard Positive Generation, 2016, 2016 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW).
[14] Ian J. Goodfellow, et al. Technical Report on the CleverHans v2.1.0 Adversarial Examples Library, 2016.
[15] Ananthram Swami, et al. Distillation as a Defense to Adversarial Perturbations Against Deep Neural Networks, 2015, 2016 IEEE Symposium on Security and Privacy (SP).
[16] J. Doug Tygar, et al. Evasion and Hardening of Tree Ensemble Classifiers, 2015, ICML.
[17] Lujo Bauer, et al. Accessorize to a Crime: Real and Stealthy Attacks on State-of-the-Art Face Recognition, 2016, CCS.
[18] Ananthram Swami, et al. The Limitations of Deep Learning in Adversarial Settings, 2015, 2016 IEEE European Symposium on Security and Privacy (EuroS&P).
[19] Patrick D. McDaniel, et al. CleverHans v0.1: An Adversarial Machine Learning Library, 2016, ArXiv.
[20] Patrick D. McDaniel, et al. On the (Statistical) Detection of Adversarial Examples, 2017, ArXiv.
[21] Atul Prakash, et al. Robust Physical-World Attacks on Machine Learning Models, 2017, ArXiv.
[22] Ryan R. Curtin, et al. Detecting Adversarial Samples from Artifacts, 2017, ArXiv.
[23] Jan Hendrik Metzen, et al. On Detecting Adversarial Perturbations, 2017, ICLR.
[24] Hao Chen, et al. MagNet: A Two-Pronged Defense against Adversarial Examples, 2017, CCS.
[25] Somesh Jha, et al. Objective Metrics and Gradient Descent Algorithms for Adversarial Examples in Machine Learning, 2017, ACSAC.
[26] Aleksander Madry, et al. A Rotation and a Translation Suffice: Fooling CNNs with Simple Transformations, 2017, ArXiv.
[27] Dawn Song, et al. Robust Physical-World Attacks on Deep Learning Models, 2017, ArXiv.
[28] David A. Wagner, et al. Towards Evaluating the Robustness of Neural Networks, 2016, 2017 IEEE Symposium on Security and Privacy (SP).
[29] Samy Bengio, et al. Adversarial Machine Learning at Scale, 2016, ICLR.
[30] Wen-Chuan Lee, et al. Trojaning Attack on Neural Networks, 2018, NDSS.
[31] Seyed-Mohsen Moosavi-Dezfooli, et al. Geometric Robustness of Deep Networks: Analysis and Improvement, 2017, 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition.
[32] J. Zico Kolter, et al. Provable defenses against adversarial examples via the convex outer adversarial polytope, 2017, ICML.
[33] Aleksander Madry, et al. Towards Deep Learning Models Resistant to Adversarial Attacks, 2017, ICLR.
[34] Prateek Mittal, et al. DARTS: Deceiving Autonomous Cars with Toxic Signs, 2018, ArXiv.
[35] Yanjun Qi, et al. Feature Squeezing: Detecting Adversarial Examples in Deep Neural Networks, 2017, NDSS.
[36] Jascha Sohl-Dickstein, et al. Adversarial Examples that Fool both Human and Computer Vision, 2018, ArXiv.
[37] Mingyan Liu, et al. Spatially Transformed Adversarial Examples, 2018, ICLR.
[38] Aleksander Madry, et al. Exploring the Landscape of Spatial Robustness, 2017, ICML.