Robustifying Reinforcement Learning Agents via Action Space Adversarial Training

Machine learning (ML)-enabled cyber-physical systems (CPS) are becoming prevalent across various sectors of modern society, such as transportation, industry, and power grids. Recent studies in deep reinforcement learning (DRL) have demonstrated its benefits in a wide variety of data-driven decision and control applications. As reliance on ML-enabled systems grows, it is imperative to study the performance of these systems under malicious state and actuator attacks. Traditional control systems employ resilient/fault-tolerant controllers that counter these attacks by correcting the system based on observed errors. However, in some applications, a resilient controller may not be sufficient to avoid a catastrophic failure. In such scenarios, a robust approach, in which the system is inherently robust (by design) to adversarial attacks, is more useful. While robust control has a long history of development, robust ML is an emerging research area that has already demonstrated its relevance and urgency. However, the majority of robust ML research has focused on perception tasks rather than decision and control tasks, although the ML (specifically RL) models used for control applications are equally vulnerable to adversarial attacks. In this paper, we show that a well-performing DRL agent that is initially susceptible to action space perturbations (e.g., actuator attacks) can be robustified against similar perturbations through adversarial training.
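
To make the setting concrete, the following is a minimal sketch of action-space adversarial training, assuming a continuous-control OpenAI Gym environment with the classic (pre-0.26) step API; the bounded uniform-noise attack and the random policy stub are illustrative placeholders, not the paper's exact attack model or PPO learner.

```python
"""Minimal sketch of action-space adversarial training for a DRL agent.

Assumptions (not from the paper): the "Pendulum-v1" Gym environment,
the classic Gym API (reset() -> obs, step() -> 4-tuple), and a
placeholder policy/update step standing in for a PPO/TRPO learner.
"""
import numpy as np
import gym


def attack(action, epsilon, low, high, rng):
    # Bounded actuator attack: here, uniform noise within an epsilon ball,
    # clipped to the valid action range. A stronger adversary would instead
    # optimize the perturbation to minimize the agent's return.
    return np.clip(action + rng.uniform(-epsilon, epsilon, action.shape), low, high)


def adversarial_training(env, policy, episodes=10, epsilon=0.2, seed=0):
    rng = np.random.default_rng(seed)
    low, high = env.action_space.low, env.action_space.high
    for _ in range(episodes):
        obs = env.reset()
        done, transitions = False, []
        while not done:
            action = policy(obs)
            # Key idea: the executed action is the *perturbed* one, and the
            # agent trains on that experience, so it learns a policy that
            # compensates for actuator attacks.
            executed = attack(action, epsilon, low, high, rng)
            next_obs, reward, done, _ = env.step(executed)
            transitions.append((obs, executed, reward, next_obs))
            obs = next_obs
        # Placeholder: a PPO/TRPO policy update on `transitions` goes here.
    return policy


if __name__ == "__main__":
    env = gym.make("Pendulum-v1")

    def policy(obs):
        # Stand-in policy; a real agent would map obs to an action.
        return env.action_space.sample()

    adversarial_training(env, policy)
```

The essential design choice is that the agent's experience is collected under perturbed actions, so the learned policy compensates for actuator attacks rather than assuming its commands are executed faithfully.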
