Perceptually Constrained Adversarial Attacks