Adversarial examples in remote sensing
Wojciech Czaja | Christopher R. Ratto | I-Jeng Wang | Michael J. Pekala | Neil Fendley
[1] Xiao Xiang Zhu, et al. Deep Learning in Remote Sensing: A Comprehensive Review and List of Resources, 2017, IEEE Geoscience and Remote Sensing Magazine.
[2] David A. Wagner, et al. Towards Evaluating the Robustness of Neural Networks, 2016, 2017 IEEE Symposium on Security and Privacy (SP).
[3] Geoffrey E. Hinton, et al. Learning to Detect Roads in High-Resolution Aerial Images, 2010, ECCV.
[4] Ananthram Swami, et al. Practical Black-Box Attacks against Deep Learning Systems using Adversarial Examples, 2016, ArXiv.
[5] Logan Engstrom, et al. Synthesizing Robust Adversarial Examples, 2017, ICML.
[6] Ananthram Swami, et al. The Limitations of Deep Learning in Adversarial Settings, 2015, 2016 IEEE European Symposium on Security and Privacy (EuroS&P).
[7] Martín Abadi, et al. Adversarial Patch, 2017, ArXiv.
[8] Seyed-Mohsen Moosavi-Dezfooli, et al. The Robustness of Deep Networks: A Geometrical Perspective, 2017, IEEE Signal Processing Magazine.
[9] Gordon Christie, et al. Functional Map of the World, 2017, 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition.
[10] David A. Forsyth, et al. NO Need to Worry about Adversarial Examples in Object Detection in Autonomous Vehicles, 2017, ArXiv.
[11] Martin Wattenberg, et al. Adversarial Spheres, 2018, ICLR.
[12] Stéphane Mallat, et al. Invariant Scattering Convolution Networks, 2012, IEEE Transactions on Pattern Analysis and Machine Intelligence.
[13] Jonathon Shlens, et al. Explaining and Harnessing Adversarial Examples, 2014, ICLR.
[14] David Wagner, et al. Adversarial Examples Are Not Easily Detected: Bypassing Ten Detection Methods, 2017, AISec@CCS.
[15] David A. Wagner, et al. Obfuscated Gradients Give a False Sense of Security: Circumventing Defenses to Adversarial Examples, 2018, ICML.
[16] Duen Horng Chau, et al. ShapeShifter: Robust Physical Adversarial Attack on Faster R-CNN Object Detector, 2018, ECML/PKDD.
[17] Dawn Song, et al. Robust Physical-World Attacks on Deep Learning Models, 2017, ArXiv.
[18] Seyed-Mohsen Moosavi-Dezfooli, et al. Robustness of classifiers: from adversarial to random noise, 2016, NIPS.
[19] Ananthram Swami, et al. Practical Black-Box Attacks against Machine Learning, 2016, AsiaCCS.
[20] Samy Bengio, et al. Adversarial examples in the physical world, 2016, ICLR.
[21] Nicholas Carlini, et al. On the Robustness of the CVPR 2018 White-Box Adversarial Example Defenses, 2018, ArXiv.
[22] Joan Bruna, et al. Intriguing properties of neural networks, 2013, ICLR.