Stealthy and Efficient Adversarial Attacks against Deep Reinforcement Learning
Lei Ma | Yang Liu | Xiaofei Xie | Tianwei Zhang | Yan Zheng | Jianwen Sun | Kangjie Chen
[2] Dawn Song, et al. Robust Physical-World Attacks on Deep Learning Models, 2017, ArXiv.
[3] Trevor Darrell, et al. End-to-end training of deep visuomotor policies, 2016.
[4] Lei Ma, et al. DeepHunter: a coverage-guided fuzz testing framework for deep neural networks, 2019, ISSTA.
[5] Demis Hassabis, et al. Mastering the game of Go with deep neural networks and tree search, 2016, Nature.
[6] Shane Legg, et al. Human-level control through deep reinforcement learning, 2015, Nature.
[7] Jianye Hao, et al. Diverse Behavior Is What Game AI Needs: Generating Varied Human-Like Playing Styles Using Evolutionary Multi-Objective Deep Reinforcement Learning, 2019, ArXiv.
[8] Honglak Lee, et al. Action-Conditional Video Prediction using Deep Networks in Atari Games, 2015, NIPS.
[9] Etienne Perot, et al. Deep Reinforcement Learning framework for Autonomous Driving, 2017, Autonomous Vehicles and Machines.
[10] Ananthram Swami, et al. The Limitations of Deep Learning in Adversarial Settings, 2016 IEEE European Symposium on Security and Privacy (EuroS&P).
[11] Alec Radford, et al. Proximal Policy Optimization Algorithms, 2017, ArXiv.
[12] Dumitru Erhan, et al. Going deeper with convolutions, 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[14] Lei Ma, et al. Wuji: Automatic Online Combat Game Testing Using Evolutionary Deep Reinforcement Learning, 2019 34th IEEE/ACM International Conference on Automated Software Engineering (ASE).
[15] Girish Chowdhary, et al. Robust Deep Reinforcement Learning with Adversarial Attacks, 2017, AAMAS.
[16] Mani B. Srivastava, et al. Generating Natural Language Adversarial Examples, 2018, EMNLP.
[18] Atul Prakash, et al. Robust Physical-World Attacks on Machine Learning Models, 2017, ArXiv.
[19] Atul Prakash, et al. Robust Physical-World Attacks on Deep Learning Visual Classification, 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition.
[20] Matthieu Geist, et al. Targeted Attacks on Deep Reinforcement Learning Agents through Adversarial Observations, 2019, ArXiv.
[21] Arslan Munir, et al. Vulnerability of Deep Reinforcement Learning to Policy Induction Attacks, 2017, MLDM.
[23] Haijun Wang, et al. DiffChaser: Detecting Disagreements for Deep Neural Networks, 2019, IJCAI.
[24] Alex Graves, et al. Asynchronous Methods for Deep Reinforcement Learning, 2016, ICML.
[25] Yuval Tassa, et al. Continuous control with deep reinforcement learning, 2015, ICLR.
[26] Sergey Levine, et al. End-to-End Training of Deep Visuomotor Policies, 2015, J. Mach. Learn. Res.
[27] Dawn Xiaodong Song, et al. Delving into adversarial attacks on deep policies, 2017, ICLR.
[28] Jianjun Zhao, et al. DeepStellar: model-based quantitative analysis of stateful deep learning systems, 2019, ESEC/SIGSOFT FSE.
[29] Samy Bengio, et al. Adversarial examples in the physical world, 2016, ICLR.
[30] David A. Wagner, et al. Towards Evaluating the Robustness of Neural Networks, 2017 IEEE Symposium on Security and Privacy (SP).
[31] Joan Bruna, et al. Intriguing properties of neural networks, 2013, ICLR.
[32] Arslan Munir, et al. Whatever Does Not Kill Deep Reinforcement Learning, Makes It Stronger, 2017, ArXiv.
[33] Alexandre Proutière, et al. Optimal Attacks on Reinforcement Learning Policies, 2019, ArXiv.
[34] Matthieu Geist, et al. CopyCAT: Taking Control of Neural Policies with Constant Attacks, 2020, AAMAS.
[35] Yan Zheng, et al. Weighted Double Deep Multiagent Reinforcement Learning in Stochastic Cooperative Environments, 2018, PRICAI.
[36] Mani B. Srivastava, et al. Did you hear that? Adversarial Examples Against Automatic Speech Recognition, 2018, ArXiv.
[37] Jonathon Shlens, et al. Explaining and Harnessing Adversarial Examples, 2014, ICLR.
[38] Jianjun Zhao, et al. An Empirical Study Towards Characterizing Deep Learning Development and Deployment Across Different Frameworks and Platforms, 2019 34th IEEE/ACM International Conference on Automated Software Engineering (ASE).
[39] David A. Wagner, et al. Audio Adversarial Examples: Targeted Attacks on Speech-to-Text, 2018 IEEE Security and Privacy Workshops (SPW).
[40] Sandy H. Huang, et al. Adversarial Attacks on Neural Network Policies, 2017, ICLR.
[41] Seong Joon Oh, et al. Sequential Attacks on Agents for Long-Term Adversarial Goals, 2018, ArXiv.
[42] Lei Ma, et al. DeepGauge: Multi-Granularity Testing Criteria for Deep Learning Systems, 2018 33rd IEEE/ACM International Conference on Automated Software Engineering (ASE).
[43] Ming-Yu Liu, et al. Tactics of Adversarial Attack on Deep Reinforcement Learning Agents, 2017, IJCAI.
[44] Yan Zheng, et al. A Deep Bayesian Policy Reuse Approach Against Non-Stationary Agents, 2018, NeurIPS.
[45] Lujo Bauer, et al. Accessorize to a Crime: Real and Stealthy Attacks on State-of-the-Art Face Recognition, 2016, CCS.