A learning-based approach to control synthesis of Markov decision processes for linear temporal logic specifications

We propose a method to synthesize a control policy for a Markov decision process (MDP) such that the resulting traces of the MDP satisfy a linear temporal logic (LTL) property. We construct a product MDP that incorporates a deterministic Rabin automaton generated from the desired LTL property. The reward function of the product MDP is defined from the acceptance condition of the Rabin automaton. This construction allows us to apply techniques from learning theory to the problem of synthesis for LTL specifications even when the transition probabilities are not known a priori. We prove that our method is guaranteed to find a controller that satisfies the LTL property with probability one if such a policy exists, and our experiments suggest that it produces reasonable control strategies even when the LTL property cannot be satisfied with probability one.
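To make the construction concrete, the following is a minimal sketch, not the authors' implementation: it builds a product of a labeled MDP with a deterministic Rabin automaton (DRA) and defines a surrogate reward from a single Rabin acceptance pair. All names (ProductMDP, step_distribution, rabin_pairs, the toy "eventually p" automaton) are illustrative assumptions. A model-free learner such as Q-learning could then treat step_distribution as a black-box simulator, which is what lets the transition probabilities remain unknown a priori.

```python
# Illustrative sketch only: product MDP with a DRA and a reward derived
# from one Rabin pair (Fin, Inf). Names and structure are assumptions,
# not the paper's API.
from dataclasses import dataclass
from typing import Callable, Dict, Hashable, Set, Tuple

State = Hashable          # MDP state
Action = Hashable
DRAState = Hashable
Label = frozenset         # atomic propositions holding at an MDP state

@dataclass
class ProductMDP:
    states: Set[Tuple[State, DRAState]]
    actions: Set[Action]
    # underlying MDP transition probabilities: mdp_trans[s][a][s'] = prob
    mdp_trans: Dict[State, Dict[Action, Dict[State, float]]]
    labeling: Callable[[State], Label]                 # L : S -> 2^AP
    dra_delta: Callable[[DRAState, Label], DRAState]   # deterministic DRA step
    rabin_pairs: Tuple[Tuple[Set[DRAState], Set[DRAState]], ...]  # (Fin_i, Inf_i)

    def step_distribution(self, s: State, q: DRAState, a: Action):
        """Distribution over product successors (s', q') from (s, q) under a."""
        out: Dict[Tuple[State, DRAState], float] = {}
        for s_next, p in self.mdp_trans[s][a].items():
            q_next = self.dra_delta(q, self.labeling(s_next))
            out[(s_next, q_next)] = out.get((s_next, q_next), 0.0) + p
        return out

    def reward(self, s: State, q: DRAState, pair_index: int = 0) -> float:
        """Surrogate reward for one Rabin pair: reward visits to Inf,
        penalize visits to Fin, otherwise zero."""
        fin, inf = self.rabin_pairs[pair_index]
        if q in inf:
            return 1.0
        if q in fin:
            return -1.0
        return 0.0

if __name__ == "__main__":
    # Toy example: 2-state MDP and a DRA for "eventually p" with states
    # q0 (waiting) and q1 (accepting sink); purely illustrative.
    P = {"s0": {"a": {"s0": 0.5, "s1": 0.5}}, "s1": {"a": {"s1": 1.0}}}
    label = lambda s: frozenset({"p"}) if s == "s1" else frozenset()
    delta = lambda q, lab: "q1" if (q == "q1" or "p" in lab) else "q0"
    prod = ProductMDP(
        states={(s, q) for s in P for q in ("q0", "q1")},
        actions={"a"},
        mdp_trans=P,
        labeling=label,
        dra_delta=delta,
        rabin_pairs=((set(), {"q1"}),),   # Fin = {}, Inf = {q1}
    )
    print(prod.step_distribution("s0", "q0", "a"))
    print(prod.reward("s1", "q1"))
```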
