Model-based Utility Functions

Abstract

Orseau and Ring, as well as Dewey, have recently described problems, including self-delusion, with the behavior of agents that use various definitions of utility functions, where the utility function is defined in terms of the agent's history of interactions with its environment. This paper argues, via two examples, that these behavior problems can be avoided by formulating the utility function in two steps: 1) inferring a model of the environment from interactions, and 2) computing utility as a function of the environment model. Basing a utility function on a model that the agent must learn implies that the utility function must initially be expressed in terms of specifications to be matched to structures in the learned model. These specifications constitute prior assumptions about the environment, so this approach will not work with arbitrary environments. But the approach should work for agents designed by humans to act in the physical world. The paper also addresses the issue of self-modifying agents and shows that, if provided with the possibility to modify their utility functions, agents will not choose to do so, under some usual assumptions.
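To make the two-step formulation concrete, here is a minimal sketch in Python. All names and the toy model are illustrative assumptions, not the paper's formal notation; the point it shows is that utility is computed from an inferred environment model rather than directly from the interaction history, which is the property that blocks self-delusion.

    # Minimal sketch of a model-based utility computation; all names
    # here are illustrative assumptions, not the paper's formalism.

    def infer_model(history):
        # Step 1: infer an environment model from the interaction
        # history. This stands in for any model-learning procedure,
        # e.g. fitting a (PO)MDP or a dynamic Bayesian network; here
        # it is just a single estimated parameter.
        if not history:
            return {"state_estimate": 0.0}
        return {"state_estimate": sum(history) / len(history)}

    def utility(model):
        # Step 2: utility is a function of the inferred model, written
        # as a specification matched against model structure rather
        # than against raw observations. Here: prefer an estimated
        # environment state near a target value of 0.5.
        return -abs(model["state_estimate"] - 0.5)

    # Usage: outcomes are evaluated through the learned model.
    history = [0.2, 0.4, 0.6]
    print(utility(infer_model(history)))  # ~ -0.1

The design point of the separation is that the argument of utility() is the model of the external world, so an action that merely distorts the agent's own future observations is not itself rewarded.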

References

[1] J. Olds and P. Milner. Positive reinforcement produced by electrical stimulation of septal area and other regions of rat brain. Journal of Comparative and Physiological Psychology, 1954.

[2] B. Ripley, et al. Pattern Recognition. Nature, 1968.

[3] L. E. Baum, T. Petrie, G. Soules, and N. Weiss. A Maximization Technique Occurring in the Statistical Analysis of Probabilistic Functions of Markov Chains. The Annals of Mathematical Statistics, 1970.

[4] Martin L. Puterman. Markov Decision Processes: Discrete Stochastic Dynamic Programming, 1994.

[5] Stuart Russell and Peter Norvig. Artificial Intelligence: A Modern Approach, 1995.

[6] Pei Wang. Non-Axiomatic Reasoning System: Exploring the Essence of Intelligence, 1996.

[7] Zoubin Ghahramani. Learning Dynamic Bayesian Networks. Summer School on Neural Networks, 1997.

[8] Ming Li and Paul M. B. Vitányi. An Introduction to Kolmogorov Complexity and Its Applications. Graduate Texts in Computer Science, 1997.

[9] S. Lloyd. Computational capacity of the universe. Physical Review Letters, 2002.

[10] Jürgen Schmidhuber. The Speed Prior: A New Simplicity Measure Yielding Near-Optimal Computable Predictions, 2002.

[11] Marcus Hutter. Universal Artificial Intelligence: Sequential Decisions Based on Algorithmic Probability. Texts in Theoretical Computer Science, An EATCS Series, 2005.

[12] Richard S. Sutton and Andrew G. Barto. Reinforcement Learning: An Introduction, 1998.

[13] R. C. Alamino and N. Caticha. Online Learning in Discrete Hidden Markov Models, 2006. arXiv:0708.2377.

[14] Christopher M. Bishop. Pattern Recognition and Machine Learning. Springer, 2006.

[15] Marcus Hutter. Feature Dynamic Bayesian Networks, 2008. arXiv.

[16] B. Hibbard. The Technology of Mind and a New Social Contract, 2008.

[17] Stephen M. Omohundro. The Basic AI Drives. AGI, 2008.

[18] Jürgen Schmidhuber. Ultimate Cognition à la Gödel. Cognitive Computation, 2009.

[19] Marcus Hutter. Feature Reinforcement Learning: Part I. Unstructured MDPs. Journal of Artificial General Intelligence, 2009.

[20] Laurent Orseau and Mark Ring. Self-Modification and Mortality in Artificial Agents. AGI, 2011.

[21] Mark Waser. Rational Universal Benevolence: Simpler, Safer, and Wiser Than "Friendly AI". AGI, 2011.

[22] Mark Ring and Laurent Orseau. Delusion, Survival, and Intelligent Agents. AGI, 2011.

[23] Daniel Dewey. Learning What to Value. AGI, 2011.

[24] L. Gisslén, M. Luciw, V. Graziano, and J. Schmidhuber. Sequential Constant Size Compressors for Reinforcement Learning. AGI, 2011.