Assessing Threat of Adversarial Examples on Deep Neural Networks
[1] David J. Fleet, et al. Adversarial Manipulation of Deep Representations, 2015, ICLR.
[2] Jürgen Schmidhuber, et al. Multi-column deep neural networks for image classification, 2012, IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[3] Ananthram Swami, et al. Practical Black-Box Attacks against Machine Learning, 2016, AsiaCCS.
[4] Joan Bruna, et al. Intriguing properties of neural networks, 2013, ICLR.
[5] Ananthram Swami, et al. The Limitations of Deep Learning in Adversarial Settings, 2015, IEEE European Symposium on Security and Privacy (EuroS&P), 2016.
[6] Michael S. Bernstein, et al. ImageNet Large Scale Visual Recognition Challenge, 2014, International Journal of Computer Vision.
[7] Alex Zelinsky, et al. Learning OpenCV: Computer Vision with the OpenCV Library (Bradski, G.R. et al.; 2008) [On the Shelf], 2009, IEEE Robotics & Automation Magazine.
[8] Harris Drucker, et al. Learning algorithms for classification: A comparison on handwritten digit recognition, 1995.
[9] Jonathon Shlens, et al. Explaining and Harnessing Adversarial Examples, 2014, ICLR.
[10] Dumitru Erhan, et al. Going deeper with convolutions, 2014, IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2015.
[11] Jason Yosinski, et al. Deep neural networks are easily fooled: High confidence predictions for unrecognizable images, 2014, IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2015.
[12] Shijian Lu, et al. Thresholding of badly illuminated document images through photometric correction, 2007, DocEng '07.
[13] Geoffrey E. Hinton, et al. ImageNet classification with deep convolutional neural networks, 2012, Communications of the ACM.
[14] Trevor Darrell, et al. Caffe: Convolutional Architecture for Fast Feature Embedding, 2014, ACM Multimedia.
[15] Alex Krizhevsky, et al. Learning Multiple Layers of Features from Tiny Images, 2009.
[16] William A. Barrett, et al. A recursive Otsu thresholding method for scanned document binarization, 2011, IEEE Workshop on Applications of Computer Vision (WACV).
[17] Ananthram Swami, et al. Practical Black-Box Attacks against Deep Learning Systems using Adversarial Examples, 2016, arXiv.
[18] Ananthram Swami, et al. Distillation as a Defense to Adversarial Perturbations Against Deep Neural Networks, 2015, IEEE Symposium on Security and Privacy (SP), 2016.
[19] Terrance E. Boult, et al. Adversarial Diversity and Hard Positive Generation, 2016, IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW).
[20] N. Otsu. A threshold selection method from gray level histograms, 1979.
[21] Li Fei-Fei, et al. ImageNet: A large-scale hierarchical image database, 2009, CVPR.
[22] David A. Wagner, et al. Defensive Distillation is Not Robust to Adversarial Examples, 2016, arXiv.
[23] Qi Zhao, et al. Foveation-based Mechanisms Alleviate Adversarial Examples, 2015, arXiv.
[24] Yann LeCun, et al. The MNIST database of handwritten digits, 2005.
[25] Terrance E. Boult, et al. Are facial attributes adversarially robust?, 2016, 23rd International Conference on Pattern Recognition (ICPR).