Adversarial Reinforcement Learning under Partial Observability in Autonomous Computer Network Defence
Christopher Leckie, Tansu Alpcan, Tamas Abraham, Yi Han, Sarah Erfani, David Hubczenko, Paul Montague, Olivier De Vel, Benjamin I. P. Rubinstein