Adversarial perturbation in remote sensing image recognition
Teng Huang | Arthur Sandor Voundi Koe | Shanshan Ai
[1] Andrew Zisserman, et al. Very Deep Convolutional Networks for Large-Scale Image Recognition, 2014, ICLR.
[2] Vitaly Shmatikov, et al. Membership Inference Attacks Against Machine Learning Models, 2016, 2017 IEEE Symposium on Security and Privacy (SP).
[3] Christopher D. Manning, et al. Effective Approaches to Attention-based Neural Machine Translation, 2015, EMNLP.
[4] Terrance E. Boult, et al. Towards Open Set Deep Networks, 2015, 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[5] Hao Chen, et al. MagNet: A Two-Pronged Defense against Adversarial Examples, 2017, CCS.
[6] Kevin Gimpel, et al. A Baseline for Detecting Misclassified and Out-of-Distribution Examples in Neural Networks, 2016, ICLR.
[7] Ananthram Swami, et al. The Limitations of Deep Learning in Adversarial Settings, 2015, 2016 IEEE European Symposium on Security and Privacy (EuroS&P).
[8] Masashi Sugiyama, et al. Lipschitz-Margin Training: Scalable Certification of Perturbation Invariance for Deep Neural Networks, 2018, NeurIPS.
[9] Lawrence D. Jackel, et al. Backpropagation Applied to Handwritten Zip Code Recognition, 1989, Neural Computation.
[10] Xiaoqiang Lu, et al. Remote Sensing Image Scene Classification: Benchmark and State of the Art, 2017, Proceedings of the IEEE.
[11] Fan Zhang, et al. Stealing Machine Learning Models via Prediction APIs, 2016, USENIX Security Symposium.
[12] Qi Zhao, et al. Foveation-based Mechanisms Alleviate Adversarial Examples, 2015, ArXiv.
[13] Sergey Ioffe, et al. Rethinking the Inception Architecture for Computer Vision, 2015, 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[14] Jure Leskovec, et al. node2vec: Scalable Feature Learning for Networks, 2016, KDD.
[15] Ankur Srivastava, et al. Mitigating Reverse Engineering Attacks on Deep Neural Networks, 2019, 2019 IEEE Computer Society Annual Symposium on VLSI (ISVLSI).
[16] Fabio Roli, et al. Security Evaluation of Pattern Classifiers under Attack, 2014, IEEE Transactions on Knowledge and Data Engineering.
[17] Arunesh Sinha, et al. A Learning and Masking Approach to Secure Learning, 2017, GameSec.
[18] Atul Prakash, et al. Robust Physical-World Attacks on Machine Learning Models, 2017, ArXiv.
[19] Dumitru Erhan, et al. Going deeper with convolutions, 2014, 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[20] Yoshua Bengio, et al. Generative Adversarial Nets, 2014, NIPS.
[21] Simant Dube, et al. High Dimensional Spaces, Deep Learning and Adversarial Examples, 2018, ArXiv.
[22] Gordon Christie, et al. Functional Map of the World, 2017, 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition.
[23] Yoshua Bengio, et al. Learning Deep Architectures for AI, 2007, Found. Trends Mach. Learn.
[24] Inderjit S. Dhillon, et al. The Limitations of Adversarial Training and the Blind-Spot Attack, 2019, ICLR.
[25] Patrick D. McDaniel, et al. Extending Defensive Distillation, 2017, ArXiv.
[26] Seyed-Mohsen Moosavi-Dezfooli, et al. Universal Adversarial Perturbations, 2016, 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[27] Jianxin Wu, et al. Improving CNN linear layers with power mean non-linearity, 2019, Pattern Recognit.
[28] Manfred Morari, et al. Efficient and Accurate Estimation of Lipschitz Constants for Deep Neural Networks, 2019, NeurIPS.
[29] Ryan R. Curtin, et al. Detecting Adversarial Samples from Artifacts, 2017, ArXiv.
[30] Jonathon Shlens, et al. Explaining and Harnessing Adversarial Examples, 2014, ICLR.
[31] Ananthram Swami, et al. Practical Black-Box Attacks against Machine Learning, 2016, AsiaCCS.
[32] Aleksander Madry, et al. Adversarial Examples Are Not Bugs, They Are Features, 2019, NeurIPS.
[33] Andrew Zisserman, et al. Return of the Devil in the Details: Delving Deep into Convolutional Nets, 2014, BMVC.
[34] Geoffrey E. Hinton, et al. ImageNet classification with deep convolutional neural networks, 2012, Commun. ACM.
[35] David Wagner, et al. Adversarial Examples Are Not Easily Detected: Bypassing Ten Detection Methods, 2017, AISec@CCS.
[36] Martín Abadi, et al. Adversarial Patch, 2017, ArXiv.
[37] Kurt Hornik, et al. Multilayer feedforward networks are universal approximators, 1989, Neural Networks.
[38] Kouichi Sakurai, et al. One Pixel Attack for Fooling Deep Neural Networks, 2017, IEEE Transactions on Evolutionary Computation.
[39] Preetum Nakkiran, et al. A Discussion of 'Adversarial Examples Are Not Bugs, They Are Features': Adversarial Examples are Just Bugs, Too, 2019, Distill.
[40] Erfu Yang, et al. A Novel Semi-Supervised Convolutional Neural Network Method for Synthetic Aperture Radar Image Recognition, 2019, Cognitive Computation.
[41] Shawn D. Newsam, et al. Bag-of-visual-words and spatial extensions for land-use classification, 2010, GIS '10.
[42] Patrick D. McDaniel, et al. On the Effectiveness of Defensive Distillation, 2016, ArXiv.
[43] Mark Sandler, et al. MobileNetV2: Inverted Residuals and Linear Bottlenecks, 2018, 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition.
[44] Jascha Sohl-Dickstein, et al. Adversarial Examples that Fool both Computer Vision and Time-Limited Humans, 2018, NeurIPS.
[45] Fabio Roli, et al. Wild Patterns: Ten Years After the Rise of Adversarial Machine Learning, 2017, Pattern Recognit.
[46] Junaid Qadir, et al. Black-box Adversarial Machine Learning Attack on Network Traffic Classification, 2019, 2019 15th International Wireless Communications & Mobile Computing Conference (IWCMC).
[47] Jin Li, et al. A Hybrid Cloud Approach for Secure Authorized Deduplication, 2015, IEEE Transactions on Parallel and Distributed Systems.
[48] Seyed-Mohsen Moosavi-Dezfooli, et al. DeepFool: A Simple and Accurate Method to Fool Deep Neural Networks, 2015, 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[49] Andrew Slavin Ross, et al. Improving the Adversarial Robustness and Interpretability of Deep Neural Networks by Regularizing their Input Gradients, 2017, AAAI.
[50] Ya Li, et al. Adversarial attacks on deep-learning-based SAR image target recognition, 2020, J. Netw. Comput. Appl.
[51] George Kesidis, et al. When Not to Classify: Anomaly Detection of Attacks (ADA) on DNN Classifiers at Test Time, 2017, Neural Computation.
[52] Fabio Roli, et al. Yes, Machine Learning Can Be More Secure! A Case Study on Android Malware Detection, 2017, IEEE Transactions on Dependable and Secure Computing.
[53] Joan Bruna, et al. Intriguing properties of neural networks, 2013, ICLR.
[54] Jan Hendrik Metzen, et al. On Detecting Adversarial Perturbations, 2017, ICLR.
[55] Xin Li, et al. Adversarial Examples Detection in Deep Networks with Convolutional Filter Statistics, 2016, 2017 IEEE International Conference on Computer Vision (ICCV).
[56] Beilun Wang, et al. DeepCloak: Masking Deep Neural Network Models for Robustness Against Adversarial Samples, 2017, ICLR.
[57] Lewis D. Griffin, et al. A Boundary Tilting Perspective on the Phenomenon of Adversarial Examples, 2016, ArXiv.
[58] Haifeng Li, et al. Adversarial Example in Remote Sensing Image Recognition, 2019, ArXiv.
[59] Yufeng Li, et al. A Backdoor Attack Against LSTM-Based Text Classification Systems, 2019, IEEE Access.
[60] Luca Rigazio, et al. Towards Deep Neural Network Architectures Robust to Adversarial Examples, 2014, ICLR.
[61] Qiang Chen, et al. Network In Network, 2013, ICLR.
[62] Hongyang Yan, et al. Sensitive and Energetic IoT Access Control for Managing Cloud Electronic Health Records, 2019, IEEE Access.
[63] Ajmal Mian, et al. Threat of Adversarial Attacks on Deep Learning in Computer Vision: A Survey, 2018, IEEE Access.
[64] Erfu Yang, et al. A New Algorithm for SAR Image Target Recognition Based on an Improved Deep Convolutional Neural Network, 2018, Cognitive Computation.
[65] Thomas Brox, et al. Striving for Simplicity: The All Convolutional Net, 2014, ICLR.
[66] Samy Bengio, et al. Adversarial examples in the physical world, 2016, ICLR.
[67] Kevin Gimpel, et al. Early Methods for Detecting Adversarial Images, 2016, ICLR.
[68] Moustapha Cissé, et al. Parseval Networks: Improving Robustness to Adversarial Examples, 2017, ICML.
[69] Dacheng Tao, et al. Adversarial Examples for Hamming Space Search, 2020, IEEE Transactions on Cybernetics.
[70] George Kesidis, et al. Adversarial Learning Targeting Deep Neural Network Classification: A Comprehensive Review of Defenses Against Attacks, 2020, Proceedings of the IEEE.
[71] Martin Wattenberg, et al. Adversarial Spheres, 2018, ICLR.
[72] Logan Engstrom, et al. Synthesizing Robust Adversarial Examples, 2017, ICML.
[73] David A. Forsyth, et al. NO Need to Worry about Adversarial Examples in Object Detection in Autonomous Vehicles, 2017, ArXiv.
[74] Swami Sankaranarayanan, et al. Regularizing deep networks using efficient layerwise adversarial training, 2017, AAAI.
[75] Nataliia Kussul, et al. Deep Learning Classification of Land Cover and Crop Types Using Remote Sensing Data, 2017, IEEE Geoscience and Remote Sensing Letters.
[76] Li Fei-Fei, et al. ImageNet: A large-scale hierarchical image database, 2009, CVPR.
[77] Aleksander Madry, et al. On Evaluating Adversarial Robustness, 2019, ArXiv.
[78] Haifeng Li, et al. RSI-CB: A Large Scale Remote Sensing Image Classification Benchmark via Crowdsource Data, 2017, ArXiv.
[79] Jian Sun, et al. Deep Residual Learning for Image Recognition, 2015, 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[80] Pascal Frossard, et al. Fundamental limits on adversarial robustness, 2015, ICML.
[81] Wojciech Czaja, et al. Adversarial examples in remote sensing, 2018, SIGSPATIAL/GIS.
[82] Yuan Yu, et al. TensorFlow: A system for large-scale machine learning, 2016, OSDI.
[83] Maria Sukhonos, et al. Using Azure Machine Learning Studio with Python Scripts for Induction Motors Optimization Web-Deploy Project, 2019, 2019 IEEE International Scientific-Practical Conference Problems of Infocommunications, Science and Technology (PIC S&T).
[84] Patrick D. McDaniel, et al. On the (Statistical) Detection of Adversarial Examples, 2017, ArXiv.
[85] Yanli Wang, et al. Object Detection in High Resolution Remote Sensing Imagery Based on Convolutional Neural Networks With Suitable Object Scale Features, 2020, IEEE Transactions on Geoscience and Remote Sensing.
[86] Blaine Nelson, et al. Can machine learning be secure?, 2006, ASIACCS '06.
[87] Ananthram Swami, et al. Distillation as a Defense to Adversarial Perturbations Against Deep Neural Networks, 2015, 2016 IEEE Symposium on Security and Privacy (SP).
[88] Wenbo Guo, et al. Adversary Resistant Deep Neural Networks with an Application to Malware Detection, 2016, KDD.
[89] David A. Wagner, et al. Towards Evaluating the Robustness of Neural Networks, 2016, 2017 IEEE Symposium on Security and Privacy (SP).