With Friends Like These, Who Needs Adversaries?
[1] Lewis D. Griffin, et al. A Boundary Tilting Perspective on the Phenomenon of Adversarial Examples, 2016, ArXiv.
[2] Pascal Frossard, et al. Measuring the effect of nuisance variables on classifiers, 2016, BMVC.
[3] Kenneth O. Stanley, et al. Compositional Pattern Producing Networks: A Novel Abstraction of Development, 2007.
[4] H. Piaggio. Differential Geometry of Curves and Surfaces, 1952, Nature.
[5] Seyed-Mohsen Moosavi-Dezfooli, et al. The Robustness of Deep Networks: A Geometrical Perspective, 2017, IEEE Signal Processing Magazine.
[6] Bill Triggs, et al. Histograms of oriented gradients for human detection, 2005, IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'05).
[7] Yoshua Bengio, et al. Measuring the tendency of CNNs to Learn Surface Statistical Regularities, 2017, ArXiv.
[8] Joan Bruna, et al. Intriguing properties of neural networks, 2013, ICLR.
[9] Beilun Wang, et al. A Theoretical Framework for Robustness of (Deep) Classifiers against Adversarial Examples, 2016, ICLR.
[10] David J. Fleet, et al. Adversarial Manipulation of Deep Representations, 2015, ICLR.
[11] Lujo Bauer, et al. Accessorize to a Crime: Real and Stealthy Attacks on State-of-the-Art Face Recognition, 2016, CCS.
[12] Aleksander Madry, et al. Towards Deep Learning Models Resistant to Adversarial Attacks, 2017, ICLR.
[13] Seyed-Mohsen Moosavi-Dezfooli, et al. Universal Adversarial Perturbations, 2016, 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[14] Michael S. Bernstein, et al. ImageNet Large Scale Visual Recognition Challenge, 2014, International Journal of Computer Vision.
[15] Stefano Soatto, et al. Robustness of Classifiers to Universal Perturbations: A Geometric Perspective, 2018, ICLR.
[16] Qiang Chen, et al. Network In Network, 2013, ICLR.
[17] Stefano Soatto, et al. An Empirical Evaluation of Current Convolutional Architectures’ Ability to Manage Nuisance Location and Scale Variability, 2015, 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[18] Jonathon Shlens, et al. Explaining and Harnessing Adversarial Examples, 2014, ICLR.
[19] A. Maharaj. Improving the adversarial robustness of ConvNets by reduction of input dimensionality, 2016.
[20] Alan L. Yuille, et al. Mitigating adversarial effects through randomization, 2017, ICLR.
[21] C. Billovits. Hitting Depth: Investigating Robustness to Adversarial Examples in Deep Convolutional Neural Networks, 2016.
[22] Jan Hendrik Metzen, et al. On Detecting Adversarial Perturbations, 2017, ICLR.
[23] Seyed-Mohsen Moosavi-Dezfooli, et al. DeepFool: A Simple and Accurate Method to Fool Deep Neural Networks, 2015, 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[24] Seyed-Mohsen Moosavi-Dezfooli, et al. Robustness of classifiers: from adversarial to random noise, 2016, NIPS.
[25] Naftali Tishby, et al. Opening the Black Box of Deep Neural Networks via Information, 2017, ArXiv.
[26] Samy Bengio, et al. Adversarial examples in the physical world, 2016, ICLR.
[27] Andrew Zisserman, et al. Deep Inside Convolutional Networks: Visualising Image Classification Models and Saliency Maps, 2013, ICLR.
[28] Qiyang Zhao, et al. Suppressing the Unusual: towards Robust CNNs using Symmetric Activation Functions, 2016, ArXiv.
[29] Hod Lipson, et al. Understanding Neural Networks Through Deep Visualization, 2015, ArXiv.
[30] Jason Yosinski, et al. Deep neural networks are easily fooled: High confidence predictions for unrecognizable images, 2014, 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[31] David A. Forsyth, et al. SafetyNet: Detecting and Rejecting Adversarial Examples Robustly, 2017, IEEE International Conference on Computer Vision (ICCV).
[32] Beilun Wang, et al. DeepCloak: Masking Deep Neural Network Models for Robustness Against Adversarial Samples, 2017, ICLR.
[33] Hongyi Zhang, et al. mixup: Beyond Empirical Risk Minimization, 2017, ICLR.
[34] Pascal Vincent, et al. Visualizing Higher-Layer Features of a Deep Network, 2009.
[35] Li Chen, et al. Keeping the Bad Guys Out: Protecting and Vaccinating Deep Learning with JPEG Compression, 2017, ArXiv.
[36] Pascal Frossard, et al. Classification regions of deep neural networks, 2017, ArXiv.
[37] Pascal Frossard, et al. Analysis of universal adversarial perturbations, 2017, ArXiv.
[38] Andrea Vedaldi, et al. Visualizing Deep Convolutional Neural Networks Using Natural Pre-images, 2015, International Journal of Computer Vision.