Robust Policy Optimization with Baseline Guarantees

Our goal is to compute a policy that is guaranteed to improve the return over a baseline policy even when the available MDP model is inaccurate. The inaccurate model may be constructed, for example, by system identification techniques when the true model is inaccessible. When the modeling error is large, the standard solution of the constructed model carries no performance guarantees with respect to the true model. In this paper, we develop algorithms that provide such performance guarantees and exhibit a trade-off between their computational complexity and their degree of conservatism. Our novel model-based safe policy search algorithms leverage recent advances in robust optimization. Furthermore, we illustrate the effectiveness of these algorithms on a numerical example.
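As a sketch of the kind of guarantee at stake (the symbols $\rho$, $\Xi$, $\Pi$, and $\pi_B$ are our notation, not taken from the text), such a safe policy search can be framed as maximizing the worst-case improvement over the baseline across all models consistent with the estimation error:

\[
\pi^{\star} \in \operatorname*{arg\,max}_{\pi \in \Pi} \; \min_{\xi \in \Xi} \; \bigl( \rho(\pi, \xi) - \rho(\pi_B, \xi) \bigr),
\]

where $\rho(\pi, \xi)$ denotes the return of policy $\pi$ under model $\xi$, $\pi_B$ is the baseline policy, and $\Xi$ is an uncertainty set of models assumed to contain the true model. Under that assumption, any $\pi$ achieving a nonnegative objective value is guaranteed not to degrade performance relative to $\pi_B$ on the true model.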