Adv-watermark: A Novel Watermark Perturbation for Adversarial Examples
Xiaochun Cao | Xiaoguang Han | Xingxing Wei | Xiaojun Jia
[1] Forrest N. Iandola, et al. SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <1MB model size, 2016, ArXiv.
[2] Sergey Ioffe, et al. Rethinking the Inception Architecture for Computer Vision, 2015, 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[3] Seyed-Mohsen Moosavi-Dezfooli, et al. Universal Adversarial Perturbations, 2016, 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[4] Ananthram Swami, et al. The Limitations of Deep Learning in Adversarial Settings, 2015, 2016 IEEE European Symposium on Security and Privacy (EuroS&P).
[5] Michael S. Bernstein, et al. ImageNet Large Scale Visual Recognition Challenge, 2014, International Journal of Computer Vision.
[6] Xiaochun Cao, et al. Efficient Adversarial Attacks for Visual Object Tracking, 2020, ECCV.
[7] Matthias Bethge, et al. Foolbox v0.8.0: A Python toolbox to benchmark the robustness of machine learning models, 2017, ArXiv.
[8] Wen-Hsiang Tsai, et al. Generic Lossless Visible Watermarking—A New Approach, 2010, IEEE Transactions on Image Processing.
[9] Jack C. Lee, et al. Toward on-line, worldwide access to Vatican Library materials, 1996, IBM J. Res. Dev.
[10] Andrew Zisserman, et al. Very Deep Convolutional Networks for Large-Scale Image Recognition, 2014, ICLR.
[11] Aleksander Madry, et al. Exploring the Landscape of Spatial Robustness, 2017, ICML.
[12] Mohan S. Kankanhalli, et al. Adaptive visible watermarking of images, 1999, Proceedings IEEE International Conference on Multimedia Computing and Systems.
[13] Dumitru Erhan, et al. Going deeper with convolutions, 2014, 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[14] Yoshua Bengio, et al. Generative Adversarial Nets, 2014, NIPS.
[15] Matthias Bethge, et al. Decision-Based Adversarial Attacks: Reliable Attacks Against Black-Box Machine Learning Models, 2017, ICLR.
[16] Abhishek Das, et al. Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization, 2016, 2017 IEEE International Conference on Computer Vision (ICCV).
[17] Matthias Bethge, et al. Towards the first adversarially robust neural network model on MNIST, 2018, ICLR.
[18] Seyed-Mohsen Moosavi-Dezfooli, et al. DeepFool: A Simple and Accurate Method to Fool Deep Neural Networks, 2015, 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[19] Xiaolin Hu, et al. Defense Against Adversarial Attacks Using High-Level Representation Guided Denoiser, 2017, 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition.
[20] Geoffrey E. Hinton, et al. ImageNet classification with deep convolutional neural networks, 2012, Commun. ACM.
[21] David A. Wagner, et al. Towards Evaluating the Robustness of Neural Networks, 2016, 2017 IEEE Symposium on Security and Privacy (SP).
[22] Shengcai Liao, et al. Learning Face Representation from Scratch, 2014, ArXiv.
[23] Xiaochun Cao, et al. ComDefend: An Efficient Image Compression Model to Defend Adversarial Examples, 2018, 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
[24] Jian Sun, et al. Deep Residual Learning for Image Recognition, 2015, 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[25] Ananthram Swami, et al. Distillation as a Defense to Adversarial Perturbations Against Deep Neural Networks, 2015, 2016 IEEE Symposium on Security and Privacy (SP).
[26] Minerva M. Yeung, et al. Effective and ineffective digital watermarks, 1997, Proceedings of International Conference on Image Processing.
[27] Yongjian Hu, et al. An algorithm for removable visible watermarking, 2006, IEEE Transactions on Circuits and Systems for Video Technology.
[28] Li Chen, et al. Keeping the Bad Guys Out: Protecting and Vaccinating Deep Learning with JPEG Compression, 2017, ArXiv.
[29] Honglak Lee, et al. SemanticAdv: Generating Adversarial Examples via Attribute-conditional Image Editing, 2019, ECCV.
[30] W. Brendel, et al. Foolbox: A Python toolbox to benchmark the robustness of machine learning models, 2017.
[31] J. Doye, et al. Global Optimization by Basin-Hopping and the Lowest Energy Structures of Lennard-Jones Clusters Containing up to 110 Atoms, 1997, cond-mat/9803344.
[32] Frank Hartung, et al. Multimedia watermarking techniques, 1999, Proc. IEEE.
[33] Kouichi Sakurai, et al. One Pixel Attack for Fooling Deep Neural Networks, 2017, IEEE Transactions on Evolutionary Computation.
[34] Biao-Bing Huang, et al. A contrast-sensitive visible watermarking scheme, 2006, IEEE Multimedia.
[35] Ares Lagae, et al. A Survey of Procedural Noise Functions, 2010, Comput. Graph. Forum.
[36] Samy Bengio, et al. Adversarial examples in the physical world, 2016, ICLR.
[37] Yu Qiao, et al. Joint Face Detection and Alignment Using Multitask Cascaded Convolutional Networks, 2016, IEEE Signal Processing Letters.
[38] Bo Shen, et al. DCT domain alpha blending, 1998, Proceedings 1998 International Conference on Image Processing, ICIP98 (Cat. No.98CB36269).
[39] Aleksander Madry, et al. Towards Deep Learning Models Resistant to Adversarial Attacks, 2017, ICLR.
[40] Jonathon Shlens, et al. Explaining and Harnessing Adversarial Examples, 2014, ICLR.