Christopher Leckie | Tansu Alpcan | Benjamin I. P. Rubinstein | Sarah M. Erfani | Tamas Abraham | Olivier Y. de Vel | Yi Han | David Hubczenko | Paul Montague
[1] Ming-Yu Liu, et al. Tactics of Adversarial Attack on Deep Reinforcement Learning Agents, 2017, IJCAI.
[2] Zhitao Gong, et al. Adversarial and Clean Data Are Not Twins, 2017, aiDM@SIGMOD.
[3] Jason Yosinski, et al. Deep neural networks are easily fooled: High confidence predictions for unrecognizable images, 2014, 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[4] Arslan Munir, et al. Vulnerability of Deep Reinforcement Learning to Policy Induction Attacks, 2017, MLDM.
[5] Jan Hendrik Metzen, et al. On Detecting Adversarial Perturbations, 2017, ICLR.
[6] Claudia Eckert, et al. Adversarial Label Flips Attack on Support Vector Machines, 2012, ECAI.
[7] Ananthram Swami, et al. Distillation as a Defense to Adversarial Perturbations Against Deep Neural Networks, 2015, 2016 IEEE Symposium on Security and Privacy (SP).
[8] Daniel Cullina, et al. Enhancing robustness of machine learning systems via data transformations, 2017, 2018 52nd Annual Conference on Information Sciences and Systems (CISS).
[9] Blaine Nelson, et al. Adversarial machine learning, 2011, AISec '11.
[10] Seyed-Mohsen Moosavi-Dezfooli, et al. DeepFool: A Simple and Accurate Method to Fool Deep Neural Networks, 2015, 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[11] Ananthram Swami, et al. Practical Black-Box Attacks against Deep Learning Systems using Adversarial Examples, 2016, ArXiv.
[12] Laurent Orseau, et al. Reinforcement Learning with a Corrupted Reward Channel, 2017, IJCAI.
[13] David A. Wagner, et al. Towards Evaluating the Robustness of Neural Networks, 2016, 2017 IEEE Symposium on Security and Privacy (SP).
[14] Jan Medved, et al. OpenDaylight: Towards a Model-Driven SDN Controller architecture, 2014, Proceedings of the IEEE International Symposium on a World of Wireless, Mobile and Multimedia Networks 2014.
[15] Xin Li, et al. Adversarial Examples Detection in Deep Networks with Convolutional Filter Statistics, 2016, 2017 IEEE International Conference on Computer Vision (ICCV).
[16] Abhinav Gupta, et al. Robust Adversarial Reinforcement Learning, 2017, ICML.
[17] Beilun Wang, et al. A Theoretical Framework for Robustness of (Deep) Classifiers against Adversarial Examples, 2016, ICLR 2017.
[18] Luc Beaudoin. Autonomic computer network defence using risk states and reinforcement learning, 2009.
[19] David A. Wagner, et al. Obfuscated Gradients Give a False Sense of Security: Circumventing Defenses to Adversarial Examples, 2018, ICML.
[20] Jonathon Shlens, et al. Explaining and Harnessing Adversarial Examples, 2014, ICLR.
[21] Ricky Laishram, et al. Curie: A method for protecting SVM Classifier from Poisoning Attack, 2016, ArXiv.
[22] Blaine Nelson, et al. The security of machine learning, 2010, Machine Learning.
[23] Mohsen Guizani, et al. Reinforcement learning for resource provisioning in the vehicular cloud, 2016, IEEE Wireless Communications.
[24] Ling Huang, et al. Query Strategies for Evading Convex-Inducing Classifiers, 2010, J. Mach. Learn. Res.
[25] Daoyi Dong, et al. A novel incremental learning scheme for reinforcement learning in dynamic environments, 2016, 2016 12th World Congress on Intelligent Control and Automation (WCICA).
[26] Mohsen Guizani, et al. Software-Defined Networking for RSU Clouds in Support of the Internet of Vehicles, 2015, IEEE Internet of Things Journal.
[27] James Newsome, et al. Paragraph: Thwarting Signature Learning by Training Maliciously, 2006, RAID.
[28] Christian Gagné, et al. Robustness to Adversarial Examples through an Ensemble of Specialists, 2017, ICLR.
[29] Sandy H. Huang, et al. Adversarial Attacks on Neural Network Policies, 2017, ICLR.
[30] Radha Poovendran, et al. Semantic Adversarial Examples, 2018, 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW).
[31] Sailik Sengupta, et al. Securing Deep Neural Nets against Adversarial Attacks with Moving Target Defense, 2017, ArXiv.
[32] Tansu Alpcan, et al. Network Security, 2010.
[33] Daniel M. Kane, et al. Robust Estimators in High Dimensions without the Computational Intractability, 2016, 2016 IEEE 57th Annual Symposium on Foundations of Computer Science (FOCS).
[34] Fabio Roli, et al. Security Evaluation of Support Vector Machines in Adversarial Environments, 2014, ArXiv.
[35] Yevgeniy Vorobeychik, et al. Feature Cross-Substitution in Adversarial Classification, 2014, NIPS.
[36] Blaine Nelson, et al. Poisoning Attacks against Support Vector Machines, 2012, ICML.
[37] Yanjun Qi, et al. Feature Squeezing: Detecting Adversarial Examples in Deep Neural Networks, 2017, NDSS.
[38] Enda Barrett, et al. A reinforcement learning approach for the scheduling of live migration from under utilised hosts, 2016, Memetic Computing.
[39] Ryan R. Curtin, et al. Detecting Adversarial Samples from Artifacts, 2017, ArXiv.
[40] David A. Wagner, et al. Defensive Distillation is Not Robust to Adversarial Examples, 2016, ArXiv.
[41] Ananthram Swami, et al. Practical Black-Box Attacks against Machine Learning, 2016, AsiaCCS.
[42] Dan Boneh, et al. Ensemble Adversarial Training: Attacks and Defenses, 2017, ICLR.
[43] Blaine Nelson, et al. Exploiting Machine Learning to Subvert Your Spam Filter, 2008, LEET.
[44] Patrick P. K. Chan, et al. Adversarial Feature Selection Against Evasion Attacks, 2016, IEEE Transactions on Cybernetics.
[45] Ian F. Akyildiz, et al. QoS-Aware Adaptive Routing in Multi-layer Hierarchical Software Defined Networks: A Reinforcement Learning Approach, 2016, 2016 IEEE International Conference on Services Computing (SCC).
[46] Sailik Sengupta, et al. MTDeep: Boosting the Security of Deep Neural Nets Against Adversarial Attacks with Moving Target Defense, 2017, AAAI Workshops.
[47] Aloysius K. Mok, et al. Advanced Allergy Attacks: Does a Corpus Really Help?, 2007, RAID.
[48] Xiaoli Chu, et al. Energy-Efficient Monitoring in Software Defined Wireless Sensor Networks Using Reinforcement Learning: A Prototype, 2015, Int. J. Distributed Sens. Networks.
[49] Percy Liang, et al. Certified Defenses for Data Poisoning Attacks, 2017, NIPS.
[50] Radha Poovendran, et al. Blocking Transferability of Adversarial Examples in Black-Box Learning Systems, 2017, ArXiv.
[51] Dawn Xiaodong Song, et al. Adversarial Example Defenses: Ensembles of Weak Defenses are not Strong, 2017, ArXiv.
[52] Srikanth Kandula, et al. Resource Management with Deep Reinforcement Learning, 2016, HotNets.
[53] Hado van Hasselt, et al. Double Q-learning, 2010, NIPS.
[54] Brent Lagesse, et al. Analysis of Causative Attacks against SVMs Learning from Data Streams, 2017, IWSPA@CODASPY.
[55] Seyed-Mohsen Moosavi-Dezfooli, et al. Universal Adversarial Perturbations, 2016, 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[56] Ananthram Swami, et al. The Limitations of Deep Learning in Adversarial Settings, 2015, 2016 IEEE European Symposium on Security and Privacy (EuroS&P).
[57] Yang Song, et al. Improving the Robustness of Deep Neural Networks via Stability Training, 2016, 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[58] Alex Graves, et al. Playing Atari with Deep Reinforcement Learning, 2013, ArXiv.
[59] Patrick D. McDaniel, et al. Transferability in Machine Learning: from Phenomena to Black-Box Attacks using Adversarial Samples, 2016, ArXiv.
[60] Prateek Mittal, et al. Dimensionality Reduction as a Defense against Evasion Attacks on Machine Learning Classifiers, 2017, ArXiv.
[61] Yang Wang, et al. Service Migrations in the Cloud for Mobile Accesses: A Reinforcement Learning Approach, 2017, 2017 International Conference on Networking, Architecture, and Storage (NAS).
[62] Stefan Savage, et al. Inferring Internet denial-of-service activity, 2001, TOCS.
[63] Benjamin I. P. Rubinstein, et al. Adequacy of the Gradient-Descent Method for Classifier Evasion Attacks, 2017, AAAI Workshops.
[64] Li Chen, et al. Keeping the Bad Guys Out: Protecting and Vaccinating Deep Learning with JPEG Compression, 2017, ArXiv.
[65] Richard S. Sutton, et al. Introduction to Reinforcement Learning, 1998.
[66] Tom Schaul, et al. Prioritized Experience Replay, 2015, ICLR.
[67] David Wagner, et al. Adversarial Examples Are Not Easily Detected: Bypassing Ten Detection Methods, 2017, AISec@CCS.
[68] Alex Graves, et al. Asynchronous Methods for Deep Reinforcement Learning, 2016, ICML.
[69] Choong Seon Hong, et al. Congestion prevention mechanism based on Q-leaning for efficient routing in SDN, 2016, 2016 International Conference on Information Networking (ICOIN).
[70] Percy Liang, et al. Understanding Black-box Predictions via Influence Functions, 2017, ICML.
[71] Yevgeniy Vorobeychik, et al. Data Poisoning Attacks on Factorization-Based Collaborative Filtering, 2016, NIPS.
[72] Xiaojin Zhu, et al. Using Machine Teaching to Identify Optimal Training-Set Attacks on Machine Learners, 2015, AAAI.
[73] Joan Bruna, et al. Intriguing properties of neural networks, 2013, ICLR.
[74] Ling Huang, et al. ANTIDOTE: understanding and defending against poisoning of anomaly detectors, 2009, IMC '09.
[75] David Silver, et al. Deep Reinforcement Learning with Double Q-Learning, 2015, AAAI.
[76] Jean C. Walrand, et al. Knowledge-Defined Networking: Network modelling through machine learning and inference, 2016.