Self-Modification of Policy and Utility Function in Rational Agents

Any agent that is part of the environment it interacts with and has versatile actuators (such as arms and fingers) will, in principle, have the ability to self-modify – for example, by changing its own source code. As we continue to create more and more intelligent agents, the chances increase that they will learn about this ability. The question is: will they want to use it? For example, highly intelligent systems may find ways to change their goals to something more easily achievable, thereby ‘escaping’ the control of their creators. In an important paper, Omohundro (2008) argued that goal preservation is a fundamental drive of any intelligent system, since a goal is more likely to be achieved if future versions of the agent strive towards the same goal. In this paper, we formalise this argument in general reinforcement learning and explore situations where it fails. Our conclusion is that the possibility of self-modification is harmless if and only if the agent's value function anticipates the consequences of self-modifications and uses the current utility function when evaluating the future.
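To make the final claim concrete, here is a minimal, hypothetical sketch (not taken from the paper's formalism) contrasting two ways a planning agent can score a self-modification action: judging future outcomes by its current utility function versus by the utility function it would hold after the modification. All names and the two-action toy environment (current_utility, trivial_utility, work_on_task, self_modify) are illustrative assumptions.

```python
def current_utility(outcome):
    """The designers' goal: reward only the intended outcome."""
    return 1.0 if outcome == "task_solved" else 0.0

def trivial_utility(outcome):
    """A self-modified goal under which every outcome scores highly."""
    return 10.0

# Each action deterministically leads to an outcome and to the utility
# function the agent will hold afterwards.
actions = {
    "work_on_task": ("task_solved", current_utility),
    "self_modify": ("idle", trivial_utility),
}

def value_with_current_utility(action):
    outcome, _ = actions[action]
    return current_utility(outcome)   # future evaluated with the current utility

def value_with_modified_utility(action):
    outcome, next_utility = actions[action]
    return next_utility(outcome)      # future evaluated with the post-modification utility

for value in (value_with_current_utility, value_with_modified_utility):
    best = max(actions, key=value)
    print(f"{value.__name__}: best action = {best}")
```

In this toy model, only the agent that keeps evaluating futures with its current utility chooses to work on the task rather than self-modify, which is the flavour of condition the abstract states for self-modification to be harmless.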

[1] Laurent Orseau et al. On Thompson Sampling and Asymptotic Optimality, 2017, IJCAI.

[2] Nate Soares et al. The Value Learning Problem, 2018, Artificial Intelligence Safety and Security.

[3] Nick Bostrom. Superintelligence: Paths, Dangers, Strategies, 2014.

[4] Laurent Orseau et al. Universal knowledge-seeking agents, 2011, Theor. Comput. Sci.

[5] Marcus Hutter et al. Bad Universal Priors and Notions of Optimality, 2015, COLT.

[6] Laurent Orseau et al. Space-Time Embedded Intelligence, 2012, AGI.

[7] Leslie Pack Kaelbling et al. Planning and Acting in Partially Observable Stochastic Domains, 1998, Artif. Intell.

[8] Richard S. Sutton and Andrew G. Barto. Reinforcement Learning: An Introduction, 1998, MIT Press.

[9] Shane Legg et al. Universal Intelligence: A Definition of Machine Intelligence, 2007, Minds and Machines.

[10] Jon Bird et al. The evolved radio and its implications for modelling the evolution of novel sensors, 2002, Proceedings of the 2002 Congress on Evolutionary Computation (CEC'02).

[11] Daniel Dewey. Learning What to Value, 2011, AGI.

[12] Laurent Orseau et al. Self-Modification and Mortality in Artificial Agents, 2011, AGI.

[13] Marcus Hutter et al. Avoiding Wireheading with Value Reinforcement Learning, 2016, AGI.

[14] Demis Hassabis et al. Mastering the game of Go with deep neural networks and tree search, 2016, Nature.

[15] Stephen M. Omohundro. The Basic AI Drives, 2008, AGI.

[16] Tor Lattimore et al. General time consistent discounting, 2014, Theor. Comput. Sci.

[17] Marcus Hutter et al. Extreme State Aggregation beyond MDPs, 2014, ALT.

[18] Jürgen Schmidhuber. Gödel Machines: Fully Self-referential Optimal Universal Self-improvers, 2007, Artificial General Intelligence.

[19] Roman V. Yampolskiy. Artificial Superintelligence: A Futuristic Approach, 2015.

[20] Laurent Orseau et al. Delusion, Survival, and Intelligent Agents, 2011, AGI.

[21] Shane Legg et al. Human-level control through deep reinforcement learning, 2015, Nature.

[22] Laurent Orseau et al. Thompson Sampling is Asymptotically Optimal in General Environments, 2016, UAI.

[23] Marcus Hutter. Universal Artificial Intelligence, 2004.

[24] Bill Hibbard. Model-based Utility Functions, 2011, J. Artif. Gen. Intell.