[1] Alan L. Yuille, et al. Feature Denoising for Improving Adversarial Robustness, 2018, 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
[2] Zhi Zhang, et al. Bag of Tricks for Image Classification with Convolutional Neural Networks, 2018, 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
[3] Qilong Wang, et al. Global Second-Order Pooling Convolutional Networks, 2018, 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
[4] J. Zico Kolter, et al. Provable defenses against adversarial examples via the convex outer adversarial polytope, 2017, ICML.
[5] T. Lindeberg. Scale-Space Theory: A Basic Tool for Analysing Structures at Different Scales, 1994.
[6] Samy Bengio, et al. Adversarial Machine Learning at Scale, 2016, ICLR.
[7] Kilian Q. Weinberger, et al. Densely Connected Convolutional Networks, 2016, 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[8] R. A. Young. The Gaussian derivative model for spatial vision: I. Retinal mechanisms, 1988, Spatial Vision.
[9] Aleksander Madry, et al. Towards Deep Learning Models Resistant to Adversarial Attacks, 2017, ICLR.
[10] Samy Bengio, et al. Adversarial examples in the physical world, 2016, ICLR.
[11] Hossein Mobahi, et al. Large Margin Deep Networks for Classification, 2018, NeurIPS.
[12] Jonathon Shlens, et al. Explaining and Harnessing Adversarial Examples, 2014, ICLR.
[13] J. Zico Kolter, et al. Certified Adversarial Robustness via Randomized Smoothing, 2019, ICML.
[14] Surya Ganguli, et al. Biologically inspired protection of deep networks from adversarial attacks, 2017, ArXiv.
[15] Ajmal Mian, et al. Threat of Adversarial Attacks on Deep Learning in Computer Vision: A Survey, 2018, IEEE Access.
[16] Dan Boneh, et al. Ensemble Adversarial Training: Attacks and Defenses, 2017, ICLR.
[17] Jinfeng Yi, et al. Is Robustness the Cost of Accuracy? A Comprehensive Study on the Robustness of 18 Deep Image Classification Models, 2018, ECCV.
[18] Michael I. Jordan, et al. Theoretically Principled Trade-off between Robustness and Accuracy, 2019, ICML.
[19] Christopher Hunt, et al. Notes on the OpenSURF Library, 2009.
[20] Ananthram Swami, et al. Distillation as a Defense to Adversarial Perturbations Against Deep Neural Networks, 2015, 2016 IEEE Symposium on Security and Privacy (SP).
[21] Leonidas J. Guibas, et al. PeerNets: Exploiting Peer Wisdom Against Adversarial Attacks, 2018, ICLR.
[22] Yan Ke, et al. PCA-SIFT: A More Distinctive Representation for Local Image Descriptors, 2004, Proceedings of the 2004 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR 2004).
[23] Hongjing Lu, et al. Deep convolutional networks do not classify based on global object shape, 2018, PLoS Comput. Biol.
[24] Aleksander Madry, et al. Adversarial Examples Are Not Bugs, They Are Features, 2019, NeurIPS.
[25] Yoshua Bengio, et al. Maxout Networks, 2013, ICML.
[26] Matthias Bethge, et al. ImageNet-trained CNNs are biased towards texture; increasing shape bias improves accuracy and robustness, 2018, ICLR.
[27] Xiaolin Hu, et al. Defense Against Adversarial Attacks Using High-Level Representation Guided Denoiser, 2017, 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition.
[28] Jinfeng Yi, et al. Evaluating the Robustness of Neural Networks: An Extreme Value Theory Approach, 2018, ICLR.
[29] Geoffrey E. Hinton, et al. ImageNet classification with deep convolutional neural networks, 2012, Commun. ACM.
[30] David G. Lowe. Distinctive Image Features from Scale-Invariant Keypoints, 2004, International Journal of Computer Vision.
[31] Joan Bruna, et al. Intriguing properties of neural networks, 2013, ICLR.
[32] Gaute T. Einevoll, et al. Extended difference-of-Gaussians model incorporating cortical feedback for relay cells in the lateral geniculate nucleus of cat, 2011, Cognitive Neurodynamics.
[33] David A. Wagner, et al. Towards Evaluating the Robustness of Neural Networks, 2016, 2017 IEEE Symposium on Security and Privacy (SP).
[34] Xiaochun Cao, et al. ComDefend: An Efficient Image Compression Model to Defend Adversarial Examples, 2018, 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
[35] Jian Sun, et al. Deep Residual Learning for Image Recognition, 2015, 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[36] John J. Hopfield, et al. Dense Associative Memory Is Robust to Adversarial Inputs, 2017, Neural Computation.
[37] Sander Stuijk, et al. Near-Memory Computing: Past, Present, and Future, 2019, Microprocessors and Microsystems.
[38] Jun Zhu, et al. Boosting Adversarial Attacks with Momentum, 2017, 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition.
[39] Holger Winnemöller, et al. XDoG: advanced image stylization with eXtended Difference-of-Gaussians, 2011, NPAR '11.
[40] Yang Song, et al. PixelDefend: Leveraging Generative Models to Understand and Defend against Adversarial Examples, 2017, ICLR.
[41] Bo Chen, et al. MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications, 2017, ArXiv.
[42] Moustapha Cissé, et al. Countering Adversarial Images using Input Transformations, 2018, ICLR.
[43] Seyed-Mohsen Moosavi-Dezfooli, et al. DeepFool: A Simple and Accurate Method to Fool Deep Neural Networks, 2015, 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[44] Enhua Wu, et al. Squeeze-and-Excitation Networks, 2017, IEEE Transactions on Pattern Analysis and Machine Intelligence.
[45] Dina Katabi, et al. ME-Net: Towards Effective Adversarial Robustness with Matrix Estimation, 2019, ICML.
[46] Mathieu Salzmann, et al. Statistically Motivated Second Order Pooling, 2018, ECCV.