Pixel-domain adversarial examples against CNN-based manipulation detectors
[1] Joan Bruna, et al. Intriguing properties of neural networks, 2013, ICLR.
[2] Jonathon Shlens, et al. Explaining and Harnessing Adversarial Examples, 2014, ICLR.
[3] Matthias Bethge, et al. Foolbox v0.8.0: A Python toolbox to benchmark the robustness of machine learning models, 2017, ArXiv.
[4] Ananthram Swami, et al. The Limitations of Deep Learning in Adversarial Settings, 2016, IEEE European Symposium on Security and Privacy (EuroS&P).
[5] Belhassen Bayar, et al. A Deep Learning Approach to Universal Image Manipulation Detection Using a New Convolutional Layer, 2016, IH&MMSec.
[6] Mauro Barni, et al. CNN-Based Detection of Generic Contrast Adjustment with JPEG Post-Processing, 2018, IEEE International Conference on Image Processing (ICIP).
[7] Yao Zhao, et al. A gradient-based pixel-domain attack against SVM detection of global image manipulations, 2017, IEEE Workshop on Information Forensics and Security (WIFS).
[8] Rainer Böhme, et al. Counter-Forensics: Attacking Image Forensics, 2013.
[9] Luisa Verdoliva, et al. On the vulnerability of deep learning to adversarial attacks for camera model identification, 2018, Signal Processing: Image Communication.