Safe Policy Improvement with an Estimated Baseline Policy

Previous work has shown the unreliability of existing algorithms in the batch Reinforcement Learning setting, and proposed the theoretically-grounded Safe Policy Improvement with Baseline Bootstrapping (SPIBB) fix: reproduce the baseline policy in the uncertain state-action pairs, in order to control the variance of the trained policy's performance. However, in many real-world applications such as dialogue systems, pharmaceutical tests, or crop management, data is collected under human supervision and the baseline remains unknown. In this paper, we apply SPIBB algorithms with a baseline estimate built from the data. We formally show safe policy improvement guarantees over the true baseline even without direct access to it. Our empirical experiments on finite- and continuous-state tasks support the theoretical findings: they show little loss of performance compared with SPIBB when the baseline policy is given and, more importantly, drastically and significantly outperform competing algorithms both in safe policy improvement and in average performance.
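To make the mechanism concrete, the sketch below illustrates one way the approach described above can be instantiated on a finite MDP: the baseline is estimated by maximum likelihood from state-action counts in the dataset, and a SPIBB-style greedy improvement step reproduces that estimated baseline on state-action pairs observed fewer than a threshold number of times, reallocating the remaining probability mass to the best well-observed action. Function names, the dataset format, and the count threshold `n_wedge` are illustrative assumptions, not the paper's reference implementation.

```python
import numpy as np

def estimate_baseline(D, n_states, n_actions, smoothing=1e-6):
    """Maximum-likelihood estimate of the (unknown) baseline policy from counts.
    D is assumed to be a list of (state, action, reward, next_state) transitions."""
    counts = np.zeros((n_states, n_actions))
    for s, a, _, _ in D:          # rewards / next states are not needed here
        counts[s, a] += 1
    probs = counts + smoothing    # avoid all-zero rows for unvisited states
    return probs / probs.sum(axis=1, keepdims=True), counts

def spibb_greedy_step(Q, pi_b_hat, counts, n_wedge):
    """SPIBB-style improvement step against the *estimated* baseline:
    copy pi_b_hat on uncertain (low-count) state-action pairs, and give the
    remaining probability mass to the best sufficiently-observed action."""
    n_states, n_actions = Q.shape
    pi = np.zeros_like(pi_b_hat)
    for s in range(n_states):
        bootstrapped = counts[s] < n_wedge            # uncertain pairs for state s
        pi[s, bootstrapped] = pi_b_hat[s, bootstrapped]
        free_mass = 1.0 - pi[s, bootstrapped].sum()   # mass we may reallocate
        safe_actions = np.where(~bootstrapped)[0]
        if len(safe_actions) > 0:
            best = safe_actions[np.argmax(Q[s, safe_actions])]
            pi[s, best] += free_mass
        else:
            pi[s] = pi_b_hat[s]                       # no well-observed action: keep the baseline
    return pi
```

In the full algorithm, a step of this kind is interleaved with policy evaluation on the maximum-likelihood MDP built from the same dataset and repeated until convergence; the point of the paper is that the safety guarantees survive when `pi_b_hat` is estimated from the data rather than given.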
