Reservoirs learn to learn

We consider reservoirs in the form of liquid state machines, i.e., recurrently connected networks of spiking neurons with randomly chosen weights. In the standard paradigm, only the weights of a linear readout are adapted for a specific task. We asked whether the performance of liquid state machines can be improved if the recurrent weights are chosen with a purpose rather than randomly. After all, the weights of recurrent connections in the brain are presumably not random either; they were likely optimized for specific task domains during evolution, development, and prior learning experiences. To examine the benefits of purposefully chosen recurrent weights, we applied the Learning-to-Learn (L2L) paradigm to our model: we optimized the weights of the recurrent connections, and hence the dynamics of the liquid state machine, for a large family of potential learning tasks that the network might later have to learn through modification of the weights of its readout neurons. We found that this two-tiered process substantially improves the learning speed of liquid state machines on specific tasks. In fact, learning speed increases further if one does not train the weights of the linear readout at all, and instead relies on the internal dynamics and fading memory of the network to retain salient information extracted from preceding examples of the current learning task. This second type of learning has recently been proposed to underlie fast learning in the prefrontal cortex and motor cortex, and it is therefore of interest to explore its performance in models as well. Since liquid state machines share many properties with other types of reservoirs, our results raise the question of whether L2L conveys similar benefits to these other reservoirs.
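To make the two-tiered structure concrete, the following is a minimal sketch of the L2L scheme described above, under several simplifying assumptions not taken from the paper: a rate-based leaky-integrator surrogate stands in for the spiking liquid, the inner loop fits the linear readout by ridge regression, and the outer loop updates the recurrent weights with a finite-difference (evolution-strategies-style) estimate rather than the gradient-based optimization used in the actual work. All names, the toy task family, and the hyperparameters are illustrative.

```python
# Hypothetical L2L sketch for a reservoir (rate-based surrogate, NOT the
# paper's spiking LSM or its training method). Outer loop: ES-style updates
# of the recurrent weights over a family of tasks. Inner loop: ridge fit of
# a linear readout on each sampled task.
import numpy as np

rng = np.random.default_rng(0)
N_IN, N_RES, T = 1, 100, 200            # input dim, reservoir size, timesteps
W_in = rng.normal(0, 1.0, (N_RES, N_IN))

def sample_task():
    """Illustrative task family: reproduce a random sine input with a delay."""
    freq = rng.uniform(0.5, 2.0)
    t = np.linspace(0, 2 * np.pi, T)
    x = np.sin(freq * t)[:, None]       # (T, 1) input signal
    y = np.roll(x, 10, axis=0)          # target: input delayed by 10 steps
    return x, y

def run_reservoir(W_rec, x, leak=0.3):
    """Leaky-integrator reservoir; returns the (T, N_RES) state trajectory."""
    h, states = np.zeros(N_RES), np.empty((T, N_RES))
    for t in range(T):
        h = (1 - leak) * h + leak * np.tanh(W_rec @ h + W_in @ x[t])
        states[t] = h
    return states

def inner_loop_loss(W_rec, task, ridge=1e-3):
    """Inner loop: fit a linear readout on one task, return its MSE."""
    x, y = task
    S = run_reservoir(W_rec, x)
    W_out = np.linalg.solve(S.T @ S + ridge * np.eye(N_RES), S.T @ y)
    return float(np.mean((S @ W_out - y) ** 2))

# Outer loop: antithetic finite-difference update of the recurrent weights,
# a stand-in for the paper's gradient-based outer-loop optimization.
W_rec = rng.normal(0, 1.0 / np.sqrt(N_RES), (N_RES, N_RES))
lr, sigma = 0.3, 0.01
for step in range(20):
    tasks = [sample_task() for _ in range(5)]
    eps = rng.normal(0, 1.0, W_rec.shape)
    loss_plus = np.mean([inner_loop_loss(W_rec + sigma * eps, t) for t in tasks])
    loss_minus = np.mean([inner_loop_loss(W_rec - sigma * eps, t) for t in tasks])
    W_rec -= lr * (loss_plus - loss_minus) / (2 * sigma) * eps
    print(f"outer step {step:2d}: avg readout MSE {(loss_plus + loss_minus) / 2:.4f}")
```

The second, readout-free mode of learning mentioned above would correspond to freezing `W_out` as well and letting the reservoir's own fading memory carry information across examples within a task; the sketch only illustrates the first mode, where the readout is refit per task.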
