Inference Strategies for Solving Semi-Markov Decision Processes

Semi-Markov decision processes are used to formulate many control problems and also play a key role in hierarchical reinforcement learning. In this chapter we show how to translate the decision-making problem into a form that can instead be solved by inference and learning techniques. In particular, we establish a formal connection between planning in semi-Markov decision processes and inference in probabilistic graphical models, and then build on this connection to develop an expectation maximization (EM) algorithm for policy optimization in these models.

Introduction

Researchers in machine learning have long attempted to join the fields of inference and learning with that of decision making. Influence diagrams, for example, explicitly cast the decision-making process as inference in a graphical model (see e.g. Cooper, 1988; Shachter, 1988). However, while these methods are a straightforward application of inference techniques, they only apply to finite-horizon problems and only learn non-stationary policies. For goal-directed decision problems, more general techniques such as that of Attias (2003) exist for finding the maximum a posteriori action sequence. (This technique was later extended by Verma & Rao (2006) to compute the most probable explanation.) It is crucial to note, however, that these approaches are not optimal in an expected-reward sense; instead, they can be interpreted as maximizing the probability of reaching the goal. While it is well known in the optimal control literature that there exists a fundamental duality between inference and control for the special case of linear-quadratic Gaussian models (Kalman, 1960), this result does not hold in general. Extending these ideas to more general models has been attempted by locally approximating the optimal solution (see e.g. Toussaint, 2009; Todorov & Li, 2005).

A key step in realizing general inference-based approaches while still maintaining optimality with respect to expected rewards was first taken by Dayan & Hinton (1997) for immediate-reward decision problems. In particular, that work proposes an expectation maximization (EM) approach which works by optimizing a lower bound on the expected reward. This technique was later formalized by Toussaint & Storkey (2006), who extended it to the infinite-horizon case (see also Toussaint et al., 2006). This line of research has since enjoyed substantial success in the field of robotics (Peters & Schaal, 2007; Kober & Peters, 2008; Vijayakumar et al., 2009), where empirical evidence indicates that these methods can often outperform traditional stochastic planning and control methods as well as more recent policy gradient schemes.

The focus of this chapter is twofold: to act as an introduction to the “planning as inference” methodology and to show how to extend these techniques to semi-Markov decision processes (SMDPs). SMDPs extend the MDP formalism by generalizing the notion of time: in particular, the time intervals between state transitions are allowed to vary stochastically. This allows us to handle tradeoffs between actions not only based on their expected rewards, but also based on the amount of time each action takes to perform. SMDPs are interesting problems in their own right, with applications to call admission control and queueing systems (see e.g. Singh et al., 2007; Das et al., 1999).
This formalism also serves as a natural platform in robotics for building complex motions from sequences of smaller motion “templates”, as evidenced by Neumann et al. (2009). Finally, SMDPs are a crucial building block for hierarchical reinforcement learning methods (see e.g. Ghavamzadeh & Mahadevan, 2007; Sutton et al., 1998; Dietterich, 2000). While this chapter serves as an introductory text to the paradigm of inference and learning and its application to SMDPs, we hope that future work in this area will leverage advances in structured inference techniques for hierarchical tasks of this nature.

The first section of this work describes the basic mixture-of-MDPs model that we build on, while the second section shows how to extend it to the SMDP formalism. We then describe an EM algorithm for solving these problems. Finally, in the last section we apply this approach to a small SMDP example.

A mixture of finite-time MDPs

Following the notation of Hoffman, de Freitas, et al. (2009), an MDP can be succinctly described via the following components:

• an initial state model $p(x_0)$,
• a state transition model $p(x_{n+1} \mid x_n, u_n)$,
• an immediate reward model $r(x_n, u_n)$,
• and finally a stochastic policy $\pi_\theta(u_n \mid x_n)$.

In this model, $n = 0, 1, 2, \dots$ is a discrete-time index, $\{x_n\}$ is the state process, and $\{u_n\}$ is the action process. The model assumes a randomized policy, but one can also easily adopt a deterministic policy $\pi_\theta(u \mid x) = \delta_{\phi_\theta(x)}(u)$, where $\delta$ denotes the Dirac delta function and $\phi_\theta$ is a deterministic mapping from states to actions. (By the same reasoning we can also encode knowledge of the initial state using a Dirac mass.) We will assume that the policy parameters are real-valued, i.e. $\theta \in \mathbb{R}^d$. With the model defined, our objective is to maximize the expected future reward with respect to the policy parameters $\theta$:
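In the notation above, and following the standard planning-as-inference setup of Toussaint & Storkey (2006) and Hoffman, de Freitas, et al. (2009), this objective can be written as the expected discounted reward; the mixture-of-finite-time-MDPs view that gives this section its name arises by treating the horizon $K$ as a random variable with a geometric prior. (This is a hedged reconstruction of the objective in standard form, not a verbatim quotation of the chapter.)

$$
J(\theta) \;=\; \mathbb{E}_{p_\theta}\left[ \sum_{n=0}^{\infty} \gamma^n \, r(x_n, u_n) \right]
\;\propto\; \sum_{K=0}^{\infty} p(K)\, \mathbb{E}_{p_\theta}\left[ r(x_K, u_K) \right],
\qquad p(K) = (1-\gamma)\,\gamma^K,
$$

where $\gamma \in (0,1)$ is a discount factor and $p_\theta$ denotes the distribution over trajectories induced by the initial state model, the transition model, and the policy $\pi_\theta$. Each term $\mathbb{E}_{p_\theta}[r(x_K, u_K)]$ corresponds to a finite-time MDP of length $K$ whose reward is collected only at its final step; it is this decomposition that the EM algorithm described later exploits.

To make the four model components concrete, the following minimal Python sketch estimates $J(\theta)$ by Monte Carlo simulation of truncated trajectories. It is purely illustrative: the function names (init_fn, trans_fn, reward_fn, policy_fn), the truncation horizon, and the default parameter values are assumptions, not part of the chapter.

import numpy as np

# Illustrative sketch only: estimate J(theta) for a generic MDP specified by
# the four components in the text. All names and defaults are assumptions.

def rollout(init_fn, trans_fn, reward_fn, policy_fn, theta,
            gamma=0.95, horizon=200, rng=None):
    """Sample one trajectory and return its (truncated) discounted return."""
    rng = np.random.default_rng() if rng is None else rng
    x = init_fn(rng)                       # x_0 ~ p(x_0)
    ret, discount = 0.0, 1.0
    for _ in range(horizon):               # finite horizon approximates n -> infinity
        u = policy_fn(x, theta, rng)       # u_n ~ pi_theta(u_n | x_n)
        ret += discount * reward_fn(x, u)  # accumulate gamma^n r(x_n, u_n)
        x = trans_fn(x, u, rng)            # x_{n+1} ~ p(x_{n+1} | x_n, u_n)
        discount *= gamma
    return ret

def expected_reward(init_fn, trans_fn, reward_fn, policy_fn, theta,
                    n_samples=1000, **kwargs):
    """Monte Carlo estimate of J(theta) = E[sum_n gamma^n r(x_n, u_n)]."""
    return float(np.mean([
        rollout(init_fn, trans_fn, reward_fn, policy_fn, theta, **kwargs)
        for _ in range(n_samples)
    ]))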

References

[1] Nando de Freitas et al. New inference strategies for solving Markov Decision Processes using reversible jump MCMC. UAI, 2009.

[2] Marc Toussaint et al. Creating Brain-Like Intelligence: From Principles to Complex Intelligent Systems. 2009.

[3] Marc Toussaint et al. Probabilistic inference for solving discrete and continuous state Markov Decision Processes. ICML, 2006.

[4] Nando de Freitas et al. Bayesian Policy Learning with Trans-Dimensional MCMC. NIPS, 2007.

[5] Marc Toussaint et al. Probabilistic inference for solving (PO)MDPs. 2006.

[6] Thomas G. Dietterich. Hierarchical Reinforcement Learning with the MAXQ Value Function Decomposition. J. Artif. Intell. Res., 2000.

[7] Radford M. Neal. Pattern Recognition and Machine Learning. Technometrics, 2007.

[8] Ross D. Shachter. Probabilistic Inference and Influence Diagrams. Oper. Res., 1988.

[9] U. Rieder et al. Markov Decision Processes. 2010.

[10] Doina Precup et al. Between MDPs and Semi-MDPs: A Framework for Temporal Abstraction in Reinforcement Learning. Artif. Intell., 1999.

[11] Rajesh P. N. Rao et al. Planning and Acting in Uncertain Environments using Probabilistic Inference. IEEE/RSJ International Conference on Intelligent Robots and Systems, 2006.

[12] Marc Toussaint et al. Robot trajectory optimization using approximate inference. ICML, 2009.

[13] S. Mahadevan et al. Solving Semi-Markov Decision Problems Using Average Reward Reinforcement Learning. 1999.

[14] G. McLachlan et al. The EM Algorithm and Extensions. 1996.

[15] Stefan Schaal et al. Reinforcement Learning for Operational Space Control. IEEE International Conference on Robotics and Automation, 2007.

[16] S. Vijayakumar et al. Planning and Moving in Dynamic Environments: A Statistical Machine Learning Approach. 2008.

[17] E. Todorov et al. A generalized iterative LQG method for locally-optimal feedback control of constrained nonlinear stochastic systems. American Control Conference, 2005.

[18] Arnaud Doucet et al. A policy gradient method for semi-Markov decision processes with application to call admission control. Eur. J. Oper. Res., 2007.

[19] Marc Toussaint et al. Model-free reinforcement learning as mixture learning. ICML, 2009.

[20] Geoffrey E. Hinton et al. Using Expectation-Maximization for Reinforcement Learning. Neural Computation, 1997.

[21] D. Rubin et al. Maximum likelihood from incomplete data via the EM algorithm (with discussion). 1977.

[22] Nando de Freitas et al. Diagnosis by a waiter and a Mars explorer. Proceedings of the IEEE, 2004.

[23] Peter L. Bartlett et al. Infinite-Horizon Policy-Gradient Estimation. J. Artif. Intell. Res., 2001.

[24] Jan Peters et al. Policy Search for Motor Primitives in Robotics. NIPS, 2008.

[25] Simon Haykin et al. Special Issue on Sequential State Estimation. Proceedings of the IEEE, 2004.

[26] Nando de Freitas et al. An Expectation Maximization Algorithm for Continuous Markov Decision Processes with Arbitrary Reward. AISTATS, 2009.

[27] Jan Peters et al. Learning complex motions by sequencing simpler motion templates. ICML, 2009.

[28] Gregory F. Cooper. A Method for Using Belief Networks as Influence Diagrams. UAI, 1988.

[29] Joshua B. Tenenbaum et al. Church: a language for generative models. UAI, 2008.

[30] Martin L. Puterman. Markov Decision Processes: Discrete Stochastic Dynamic Programming. 1994.

[31] Hagai Attias. Planning by Probabilistic Inference. AISTATS, 2003.

[32] Sridhar Mahadevan et al. Hierarchical Average Reward Reinforcement Learning. J. Mach. Learn. Res., 2007.

[33] R. E. Kalman. A New Approach to Linear Filtering and Prediction Problems. 1960; reprinted in T. Başar (ed.), 2001.