Path integral control and bounded rationality

Path integral methods [1], [2], [3] have recently been shown to be applicable to a very general class of optimal control problems. Here we examine the path integral formalism from a decision-theoretic point of view, since an optimal controller can always be regarded as an instance of a perfectly rational decision-maker that chooses its actions so as to maximize its expected utility [4]. The problem with perfect rationality, however, is that finding optimal actions is often very difficult, because it incurs prohibitive computational resource costs that perfect rationality does not take into account. In contrast, a bounded rational decision-maker has only limited resources and therefore needs to strike a compromise between the desired utility and the required resource costs [5]. In particular, we suggest an information-theoretic measure of resource costs that can be derived axiomatically [6]. As a consequence, we obtain a variational principle for choice probabilities that trades off maximizing a given utility criterion against the resource costs incurred by deviating from initially given default choice probabilities. The resulting bounded rational policies are in general probabilistic. We show that the solutions found by the path integral formalism are such bounded rational policies. Furthermore, we show that the same formalism generalizes to discrete control problems, leading to linearly solvable bounded rational control policies in the case of Markov systems. Importantly, Bellman's optimality principle is not presupposed by this variational principle; rather, it can be derived as a limit case. This suggests that the information-theoretic formalization of bounded rationality might serve as a general principle in control design that unifies a number of recently reported approximate optimal control methods, in both the continuous and the discrete domain.
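
To make the trade-off described above concrete, the following is a minimal sketch of this kind of variational principle; the notation (U for the utility, q for the default choice probabilities, \beta for a resource parameter) is chosen here for exposition and need not match the paper's. The bounded rational choice probabilities p maximize the free-energy functional

  F[p] \;=\; \sum_x p(x)\,U(x) \;-\; \frac{1}{\beta}\,\mathrm{KL}(p\,\|\,q),
  \qquad \mathrm{KL}(p\,\|\,q) = \sum_x p(x)\,\log\frac{p(x)}{q(x)},

whose maximizer has the closed form

  p^*(x) \;=\; \frac{q(x)\,e^{\beta U(x)}}{\sum_{x'} q(x')\,e^{\beta U(x')}}.

In the limit \beta \to \infty the distribution p^* concentrates on the maximum-utility choice, recovering perfect rationality as a limit case, while for \beta \to 0 it falls back to the default policy q.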

[1] J. von Neumann and O. Morgenstern. Theory of Games and Economic Behavior. 1944.

[2] H. B. Callen. Thermodynamics and an Introduction to Thermostatistics. 1988.

[3] M. Tribus and E. C. McIrvine. Energy and Information. Scientific American, 1971.

[4] W. H. Fleming. Exit probabilities and optimal stochastic control. 1977.

[5] H. A. Simon. Models of Bounded Rationality: Empirically Grounded Economic Reason. 1997.

[6] H. B. Callen. Thermodynamics and an Introduction to Thermostatistics, 2nd Edition. 1985.

[7] D. M. Kreps. Notes on the Theory of Choice. 1988.

[8] S. J. Russell and P. Norvig. Artificial Intelligence: A Modern Approach. 1995.

[9] S. J. Russell. Rationality and Intelligence. IJCAI, 1995.

[10] R. P. Feynman. Feynman Lectures on Computation (A. J. G. Hey and R. W. Allen, eds.). 1996.

[11] H. J. Kappen. Linear theory for control of nonlinear stochastic systems. Physical Review Letters, 2005.

[12] P. A. Ortega and D. A. Braun. A conversion between utility and information. AGI, 2010.

[13] E. Todorov. Efficient computation of optimal actions. Proceedings of the National Academy of Sciences, 2009.

[14] P. C. Fishburn. The Foundations of Expected Utility. 1982.

[15] P. A. Ortega and D. A. Braun. A Minimum Relative Entropy Principle for Learning and Acting. Journal of Artificial Intelligence Research, 2010.

[16] E. Theodorou, J. Buchli, and S. Schaal. A Generalized Path Integral Control Approach to Reinforcement Learning. Journal of Machine Learning Research, 2010.

[17] J. Buchli et al. Variable Impedance Control - A Reinforcement Learning Approach. Robotics: Science and Systems, 2010.

[18] P. A. Ortega. A Unified Framework for Resource-Bounded Autonomous Agents Interacting with Unknown Environments. PhD thesis, University of Cambridge, 2011.

[19] H. J. Kappen, V. Gómez, and M. Opper. Optimal control as a graphical model inference problem. Machine Learning, 2012.