Learn With Imagination: Safe Set Guided State-wise Constrained Policy Optimization

Deep reinforcement learning (RL) excels at a variety of control tasks, yet the absence of safety guarantees hampers its real-world applicability. In particular, exploration during learning usually results in safety violations, since the RL agent learns from those mistakes. Safe control techniques, on the other hand, ensure persistent safety satisfaction but demand strong priors on the system dynamics, which are usually hard to obtain in practice. To address these problems, we present Safe Set Guided State-wise Constrained Policy Optimization (S-3PO), a pioneering algorithm that generates state-wise safe optimal policies with zero training violations, i.e., learning without mistakes. S-3PO first employs a safety-oriented monitor with black-box dynamics to ensure safe exploration. It then imposes a unique cost that drives the RL agent to converge to optimal behaviors within the safety constraints. S-3PO outperforms existing methods on high-dimensional robotics tasks, managing state-wise constraints with zero training violations. This innovation marks a significant stride towards the real-world deployment of safe RL.
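The two-stage idea in the abstract (a safety monitor that filters every proposed action, plus a cost derived from the monitor's corrections) can be illustrated with a minimal sketch. This is not the paper's algorithm: the environment, the `safety_monitor` projection, and the correction-magnitude cost below are all simplified, hypothetical stand-ins for a 1-D agent that must stay inside the safe set |x| <= 1.

```python
# Hypothetical 1-D illustration of a safe-set-guided monitor.
# The agent proposes an action a; the monitor (a black box to the
# learner) projects it so the next state stays in the safe set
# [-limit, limit]. The size of the applied correction serves as the
# learning cost: it is zero exactly when the proposal was already safe,
# so minimizing it drives the policy toward inherently safe actions.

def safety_monitor(x, a, limit=1.0):
    """Return the closest action to `a` that keeps x + a in [-limit, limit]."""
    x_next = max(-limit, min(limit, x + a))
    return x_next - x

def step_with_monitor(x, a_proposed):
    """One environment step under the monitor.

    Returns the next state and the correction cost. Safety is enforced
    at every step (zero violations), while the cost signal teaches the
    policy to stop relying on the monitor.
    """
    a_safe = safety_monitor(x, a_proposed)
    cost = abs(a_proposed - a_safe)  # zero iff the proposal was already safe
    return x + a_safe, cost
```

For example, from x = 0.8 a proposed action of 0.5 would overshoot to 1.3; the monitor truncates it so the agent lands exactly on the boundary at 1.0 and the policy is charged the 0.3 that was clipped off. A safe proposal passes through with zero cost.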
