Continuous Homeostatic Reinforcement Learning for Self-Regulated Autonomous Agents

Homeostasis is a prevalent process by which living beings maintain their internal milieu around optimal levels. Multiple lines of evidence suggest that living beings learn to act predictively to ensure homeostasis (allostasis). A classical theory of such regulation is drive reduction, in which reward corresponds to a decrease in drive, a function of the difference between the current and the optimal internal state. The recently introduced homeostatic regulated reinforcement learning theory (HRRL), by defining within the framework of reinforcement learning a reward function based on the internal state of the agent, links the theories of drive reduction and reinforcement learning. HRRL makes it possible to explain multiple eating disorders. However, a key shortcoming of HRRL has so far been its discrete-time formulation, which cannot capture continuous change in the agent's internal state. Here, we propose an extension of homeostatic reinforcement learning to an environment continuous in space and time, while preserving the theoretical results and the behaviors explained by the discrete-time model. Inspired by the self-regulating mechanisms abundantly present in biology, we also introduce a model of the dynamics of the agent's internal state that requires the agent to act continuously to maintain homeostasis. Based on the Hamilton-Jacobi-Bellman equation and function approximation with neural networks, we derive a numerical scheme that allows the agent to learn directly how its internal mechanisms work, and to choose appropriate action policies via reinforcement learning and an appropriate exploration of the environment. Our numerical experiments show that the agent does indeed learn to behave in ways beneficial to its survival in the environment, making our framework promising for modeling animal dynamics and decision-making.
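The drive-reduction reward at the core of HRRL can be made concrete with a minimal sketch. The snippet below follows the functional form used in the homeostatic reinforcement learning literature, where drive is a distance between the internal state and its setpoint and reward is the decrease in drive caused by an action; the setpoint values, state vectors, and exponents here are purely illustrative.

```python
import numpy as np

def drive(h, h_star, m=3.0, n=4.0):
    """Drive: distance of the internal state h from the setpoint h_star.
    The exponents m and n shape the drive surface; the values used here
    are illustrative, not fitted."""
    return np.sum(np.abs(h_star - h) ** n) ** (m / n)

def reward(h_before, h_after, h_star):
    """Drive-reduction reward: positive when an action (e.g. eating)
    moves the internal state closer to the homeostatic setpoint."""
    return drive(h_before, h_star) - drive(h_after, h_star)

# Hypothetical two-dimensional internal state (e.g. glucose, hydration).
h_star = np.array([70.0, 50.0])   # homeostatic setpoints
h0 = np.array([60.0, 45.0])       # deprived state
h1 = np.array([65.0, 48.0])       # state after a corrective action

print(reward(h0, h1, h_star) > 0)  # approaching the setpoint is rewarding
```

In a continuous-time extension, the per-step difference in drive is replaced by its instantaneous rate of change, so the reward rate is the negative time derivative of the drive along the agent's trajectory.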
