暂无分享,去创建一个
[1] Tong Zhang,et al. NATTACK: Learning the Distributions of Adversarial Examples for an Improved Black-Box Attack on Deep Neural Networks , 2019, ICML.
[2] Bernt Schiele,et al. Disentangling Adversarial Robustness and Generalization , 2018, 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
[3] Lewis D. Griffin,et al. A Boundary Tilting Persepective on the Phenomenon of Adversarial Examples , 2016, ArXiv.
[4] Xiaosen Wang,et al. Enhancing the Transferability of Adversarial Attacks through Variance Tuning , 2021, 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
[5] Abhishek Das,et al. Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization , 2016, 2017 IEEE International Conference on Computer Vision (ICCV).
[6] Hang Su,et al. Defense Against Adversarial Attacks via Controlling Gradient Leaking on Embedded Manifolds , 2020, ECCV.
[7] Hang Su,et al. Benchmarking Adversarial Robustness on Image Classification , 2020, 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
[8] Andrew Zisserman,et al. Very Deep Convolutional Networks for Large-Scale Image Recognition , 2014, ICLR.
[9] Qiang Xu,et al. Towards Imperceptible and Robust Adversarial Example Attacks against Neural Networks , 2018, AAAI.
[10] Yang Song,et al. Constructing Unrestricted Adversarial Examples with Generative Models , 2018, NeurIPS.
[11] Samy Bengio,et al. Adversarial Machine Learning at Scale , 2016, ICLR.
[12] Dawn Xiaodong Song,et al. Delving into Transferable Adversarial Examples and Black-box Attacks , 2016, ICLR.
[13] Jian Sun,et al. Deep Residual Learning for Image Recognition , 2015, 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[14] Mark Sandler,et al. MobileNetV2: Inverted Residuals and Linear Bottlenecks , 2018, 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition.
[15] J. Zico Kolter,et al. Fast is better than free: Revisiting adversarial training , 2020, ICLR.
[16] Seyed-Mohsen Moosavi-Dezfooli,et al. SparseFool: A Few Pixels Make a Big Difference , 2018, 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
[17] Jimmy Ba,et al. Adam: A Method for Stochastic Optimization , 2014, ICLR.
[18] Alex Krizhevsky,et al. Learning Multiple Layers of Features from Tiny Images , 2009 .
[19] Quoc V. Le,et al. Adversarial Examples Improve Image Recognition , 2020, 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
[20] Xingjun Ma,et al. Unlearnable Examples: Making Personal Data Unexploitable , 2021, ArXiv.
[21] Mohammed Bennamoun,et al. Attack to Explain Deep Representation , 2020, 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
[22] Yao Zhao,et al. Adversarial Attacks and Defences Competition , 2018, ArXiv.
[23] David A. Wagner,et al. Towards Evaluating the Robustness of Neural Networks , 2016, 2017 IEEE Symposium on Security and Privacy (SP).
[24] Florian Kerschbaum,et al. Deep Neural Network Fingerprinting by Conferrable Adversarial Examples , 2019, ICLR.
[25] Jun-Ho Choi,et al. Just One Moment: Structural Vulnerability of Deep Action Recognition against One Frame Attack , 2020, 2021 IEEE/CVF International Conference on Computer Vision (ICCV).
[26] Martha Larson,et al. Towards Large Yet Imperceptible Adversarial Image Perturbations With Perceptual Color Distance , 2020, 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
[27] Matthias Hein,et al. Reliable evaluation of adversarial robustness with an ensemble of diverse parameter-free attacks , 2020, ICML.
[28] Alan L. Yuille,et al. Adversarial Examples for Semantic Segmentation and Object Detection , 2017, 2017 IEEE International Conference on Computer Vision (ICCV).
[29] Aleksander Madry,et al. Towards Deep Learning Models Resistant to Adversarial Attacks , 2017, ICLR.
[30] Cho-Jui Hsieh,et al. Evaluating Robustness of Deep Image Super-Resolution Against Adversarial Attacks , 2019, 2019 IEEE/CVF International Conference on Computer Vision (ICCV).
[31] Seyed-Mohsen Moosavi-Dezfooli,et al. Universal Adversarial Perturbations , 2016, 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[32] Jonathon Shlens,et al. Explaining and Harnessing Adversarial Examples , 2014, ICLR.
[33] Ananthram Swami,et al. Practical Black-Box Attacks against Machine Learning , 2016, AsiaCCS.
[34] Jinfeng Yi,et al. Is Robustness the Cost of Accuracy? - A Comprehensive Study on the Robustness of 18 Deep Image Classification Models , 2018, ECCV.
[35] Baoyuan Wu,et al. Targeted Attack against Deep Neural Networks via Flipping Limited Weight Bits , 2021, ICLR.
[36] Andrea Cavallaro,et al. ColorFool: Semantic Adversarial Colorization , 2020, 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
[37] Joan Bruna,et al. Intriguing properties of neural networks , 2013, ICLR.
[38] Dan Boneh,et al. Ensemble Adversarial Training: Attacks and Defenses , 2017, ICLR.