Sparsity-based Defense Against Adversarial Attacks on Linear Classifiers
Upamanyu Madhow | Ramtin Pedarsani | Soorya Gopalakrishnan | Zhinus Marzi
[1] Seyed-Mohsen Moosavi-Dezfooli, et al. DeepFool: A Simple and Accurate Method to Fool Deep Neural Networks, IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016.
[2] Li Chen, et al. Keeping the Bad Guys Out: Protecting and Vaccinating Deep Learning with JPEG Compression, arXiv, 2017.
[3] Jonathon Shlens, et al. Explaining and Harnessing Adversarial Examples, ICLR, 2015.
[4] Yoshua Bengio, et al. Gradient-based learning applied to document recognition, Proceedings of the IEEE, 1998.
[5] Daniel Cullina, et al. Enhancing robustness of machine learning systems via data transformations, 52nd Annual Conference on Information Sciences and Systems (CISS), 2018.
[6] I. Daubechies, et al. Biorthogonal bases of compactly supported wavelets, Communications on Pure and Applied Mathematics, 1992.
[7] Pascal Frossard, et al. Classification regions of deep neural networks, arXiv, 2017.
[8] Surya Ganguli, et al. Exponential expressivity in deep neural networks through transient chaos, NIPS, 2016.
[9] Seyed-Mohsen Moosavi-Dezfooli, et al. The Robustness of Deep Networks: A Geometrical Perspective, IEEE Signal Processing Magazine, 2017.
[10] Brendan J. Frey, et al. k-Sparse Autoencoders, ICLR, 2014.
[11] Prateek Mittal, et al. Dimensionality Reduction as a Defense against Evasion Attacks on Machine Learning Classifiers, arXiv, 2017.
[12] Joan Bruna, et al. Intriguing properties of neural networks, ICLR, 2014.