Multi-timescale Nexting in a Reinforcement Learning Robot

The term “nexting” has been used by psychologists to refer to the propensity of people and many other animals to continually predict what will happen next in an immediate, local, and personal sense. The ability to “next” constitutes a basic kind of awareness and knowledge of one’s environment. In this paper we present results with a robot that learns to next in real time, predicting thousands of features of the world’s state, including all sensory inputs, at timescales from 0.1 to 8 seconds. This was achieved by treating each state feature as a reward-like target and applying temporal-difference methods to learn a corresponding value function with a discount rate corresponding to the timescale. We show that two thousand predictions, each dependent on six thousand state features, can be learned and updated online at better than 10 Hz on a laptop computer, using the standard TD(λ) algorithm with linear function approximation. We show that this approach is efficient enough to be practical, with most of the learning complete within 30 minutes. We also show that a single tile-coded feature representation suffices to accurately predict many different signals at a significant range of timescales. Finally, we show that the accuracy of our learned predictions compares favorably with the optimal off-line solution.
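
Concretely, each prediction here is a value function learned by TD(λ): the chosen sensor signal stands in for the reward, and the discount rate γ is chosen so that the prediction spans the desired timescale (roughly 1/(1−γ) time steps). The following is a minimal Python sketch under those assumptions; the class name, the default λ and α, and the γ = 1 − 1/T mapping are illustrative choices, not the paper's exact implementation.

```python
import numpy as np

class NextingPredictor:
    """TD(lambda) with linear function approximation, treating one sensor
    signal as a reward-like target. A minimal sketch of the approach the
    abstract describes; parameter values here are illustrative."""

    def __init__(self, n_features, timescale_steps, lam=0.9, alpha=0.1):
        # A timescale of T time steps corresponds to a discount rate
        # gamma = 1 - 1/T; at a 10 Hz update rate, an 8-second prediction
        # spans T = 80 steps, giving gamma = 0.9875.
        self.gamma = 1.0 - 1.0 / timescale_steps
        self.lam = lam                  # eligibility-trace decay
        self.alpha = alpha              # step size
        self.w = np.zeros(n_features)   # learned weight vector
        self.e = np.zeros(n_features)   # eligibility trace

    def predict(self, phi):
        # Linear value estimate: the prediction for the current features.
        return self.w @ phi

    def update(self, phi, signal, phi_next):
        # TD error, with the raw sensor signal playing the role of reward.
        delta = signal + self.gamma * (self.w @ phi_next) - self.w @ phi
        # Accumulating trace, then the standard TD(lambda) weight update.
        self.e = self.gamma * self.lam * self.e + phi
        self.w += self.alpha * delta * self.e
        return delta


# One predictor per (signal, timescale) pair, all sharing the same
# feature vector, e.g. the six-thousand-feature representation from
# the abstract:
pred = NextingPredictor(n_features=6000, timescale_steps=80)
```

With binary tile-coded features, the step size is commonly divided by the number of active tiles to keep updates well scaled; running one such predictor per (signal, timescale) pair yields many independent predictions, each reading the same shared feature vector, which is what makes learning thousands of them at better than 10 Hz feasible.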
