Adversarial Perturbations of Deep Neural Networks
[1] Ananthram Swami, et al. Practical Black-Box Attacks against Deep Learning Systems using Adversarial Examples, 2016, arXiv.
[2] Ole Winther, et al. Autoencoding beyond pixels using a learned similarity metric, 2015, ICML.
[3] Sergey Ioffe, et al. Rethinking the Inception Architecture for Computer Vision, 2015, 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[4] Ananthram Swami, et al. The Limitations of Deep Learning in Adversarial Settings, 2015, 2016 IEEE European Symposium on Security and Privacy (EuroS&P).
[5] Soumith Chintala, et al. Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks, 2015, ICLR.
[6] Jost Tobias Springenberg, et al. Unsupervised and Semi-supervised Learning with Categorical Generative Adversarial Networks, 2015, ICLR.
[7] David J. Fleet, et al. Adversarial Manipulation of Deep Representations, 2015, ICLR.
[8] Ananthram Swami, et al. Distillation as a Defense to Adversarial Perturbations Against Deep Neural Networks, 2015, 2016 IEEE Symposium on Security and Privacy (SP).
[9] Matthias Bethge, et al. A note on the evaluation of generative models, 2015, ICLR.
[10] Gabriel Kreiman, et al. Unsupervised Learning of Visual Structure using Predictive Generative Networks, 2015, arXiv.
[11] Oriol Vinyals, et al. Towards Principled Unsupervised Learning, 2015, arXiv.
[12] Shin Ishii, et al. Distributional Smoothing with Virtual Adversarial Training, 2015, ICLR 2016.
[13] Jason Yosinski, et al. Deep neural networks are easily fooled: High confidence predictions for unrecognizable images, 2014, 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[14] Ian J. Goodfellow, et al. On distinguishability criteria for estimating generative models, 2014, ICLR.
[15] Dumitru Erhan, et al. Going deeper with convolutions, 2014, 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[16] Simon Osindero, et al. Conditional Generative Adversarial Nets, 2014, arXiv.
[17] Daan Wierstra, et al. Stochastic Backpropagation and Approximate Inference in Deep Generative Models, 2014, ICML.
[18] Joan Bruna, et al. Intriguing properties of neural networks, 2013, ICLR.
[19] Nitish Srivastava, et al. Dropout: a simple way to prevent neural networks from overfitting, 2014, J. Mach. Learn. Res..
[20] Jeffrey Dean, et al. Efficient Estimation of Word Representations in Vector Space, 2013, ICLR.
[21] Andrew L. Maas. Rectifier Nonlinearities Improve Neural Network Acoustic Models, 2013.
[22] Geoffrey E. Hinton, et al. ImageNet classification with deep convolutional neural networks, 2012, Commun. ACM.
[23] Aapo Hyvärinen, et al. Noise-contrastive estimation: A new estimation principle for unnormalized statistical models, 2010, AISTATS.
[24] Yann LeCun, et al. What is the best multi-stage architecture for object recognition?, 2009, 2009 IEEE 12th International Conference on Computer Vision.
[25] J. Lubar, et al. EEG Coherence Effects of Audio-Visual Stimulation (AVS) at Dominant and Twice Dominant Alpha Frequency, 2005.
[26] R. J. Williams, et al. Simple Statistical Gradient-Following Algorithms for Connectionist Reinforcement Learning, 2004, Machine Learning.
[27] Yoshua Bengio, et al. Gradient-based learning applied to document recognition, 1998, Proc. IEEE.
[28] J. O. Robinson. The Psychology of Visual Illusion, 1972.