Reuben Feinman | Ryan R. Curtin | Saurabh Shintre | Andrew B. Gardner
[1] Lawrence D. Jackel, et al. Backpropagation Applied to Handwritten Zip Code Recognition, 1989, Neural Computation.
[2] M. C. Jones, et al. A Brief Survey of Bandwidth Selection for Density Estimation, 1996.
[3] Christopher K. I. Williams, et al. Gaussian Processes for Machine Learning (Adaptive Computation and Machine Learning), 2005.
[4] Michel Verleysen, et al. Nonlinear Dimensionality Reduction, 2007, Springer.
[5] Yoshua Bengio, et al. Better Mixing via Deep Representations, 2012, ICML.
[6] Nitish Srivastava, et al. Dropout: A Simple Way to Prevent Neural Networks from Overfitting, 2014, J. Mach. Learn. Res.
[7] Joan Bruna, et al. Intriguing Properties of Neural Networks, 2013, ICLR.
[8] Matt J. Kusner, et al. Deep Manifold Traversal: Changing Labels with Convolutional Features, 2015, arXiv.
[9] Jonathon Shlens, et al. Explaining and Harnessing Adversarial Examples, 2014, ICLR.
[10] Zoubin Ghahramani, et al. A Theoretically Grounded Application of Dropout in Recurrent Neural Networks, 2015, NIPS.
[11] Ian J. Goodfellow, et al. Technical Report on the CleverHans v2.1.0 Adversarial Examples Library, 2016.
[12] Guigang Zhang, et al. Deep Learning, 2016, Int. J. Semantic Comput.
[13] Seyed-Mohsen Moosavi-Dezfooli, et al. Robustness of Classifiers: From Adversarial to Random Noise, 2016, NIPS.
[14] Ananthram Swami, et al. Distillation as a Defense to Adversarial Perturbations Against Deep Neural Networks, 2015, 2016 IEEE Symposium on Security and Privacy (SP).
[15] Ananthram Swami, et al. The Limitations of Deep Learning in Adversarial Settings, 2015, 2016 IEEE European Symposium on Security and Privacy (EuroS&P).
[16] Lewis D. Griffin, et al. A Boundary Tilting Persepective on the Phenomenon of Adversarial Examples, 2016, arXiv.
[17] Patrick D. McDaniel, et al. CleverHans v0.1: An Adversarial Machine Learning Library, 2016, arXiv.
[18] Zoubin Ghahramani, et al. Dropout as a Bayesian Approximation: Representing Model Uncertainty in Deep Learning, 2015, ICML.
[19] Jan Hendrik Metzen, et al. On Detecting Adversarial Perturbations, 2017, ICLR.
[20] Samy Bengio, et al. Adversarial Examples in the Physical World, 2016, ICLR.
[21] David A. Wagner, et al. Towards Evaluating the Robustness of Neural Networks, 2016, 2017 IEEE Symposium on Security and Privacy (SP).