Spatiotemporal Information Processing with a Reservoir Decision-making Network

Spatiotemporal information processing is fundamental to brain function. The present study investigates a canonical neural network model for spatiotemporal pattern recognition. Specifically, the model consists of two modules, a reservoir subnetwork and a decision-making subnetwork. The former projects complex spatiotemporal patterns into spatially separated neural representations, and the latter reads out these neural representations by integrating information over time; the two modules are combined via supervised learning on known examples. We elucidate the working mechanism of the model and demonstrate its feasibility for discriminating complex spatiotemporal patterns. Our model reproduces the recognition of looming patterns observed in the nervous system, and it can learn to discriminate gaits from very few training examples. We hope this study provides insight into how spatiotemporal information is processed in the brain and helps develop brain-inspired algorithms.
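
To make the two-module architecture concrete, the sketch below pairs an echo-state-style reservoir (which maps a spatiotemporal input into a high-dimensional state trajectory) with a linear readout trained by ridge regression, whose outputs are integrated over time to reach a decision. This is a minimal illustration under common reservoir-computing assumptions, not the paper's exact implementation; all function names, parameters, and the synthetic task are hypothetical.

```python
# Minimal sketch of a reservoir + decision-making readout (illustrative only).
import numpy as np

rng = np.random.default_rng(0)

# --- Reservoir subnetwork (echo state network style) ---
n_in, n_res = 10, 300
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))              # input weights
W_res = rng.normal(0.0, 1.0, (n_res, n_res))
W_res *= 0.9 / np.max(np.abs(np.linalg.eigvals(W_res)))   # spectral radius < 1 (echo state property)

def run_reservoir(u_seq, leak=0.3):
    """Drive the reservoir with an input sequence u_seq of shape (T, n_in)."""
    x = np.zeros(n_res)
    states = []
    for u in u_seq:
        x = (1 - leak) * x + leak * np.tanh(W_in @ u + W_res @ x)
        states.append(x.copy())
    return np.array(states)                                # (T, n_res)

# --- Decision-making readout: supervised training + temporal integration ---
def train_readout(train_seqs, labels, n_classes, ridge=1e-2):
    """Fit a linear readout from reservoir states to one-hot class targets."""
    X, Y = [], []
    for u_seq, c in zip(train_seqs, labels):
        s = run_reservoir(u_seq)
        t = np.zeros((len(s), n_classes))
        t[:, c] = 1.0
        X.append(s)
        Y.append(t)
    X, Y = np.vstack(X), np.vstack(Y)
    return np.linalg.solve(X.T @ X + ridge * np.eye(n_res), X.T @ Y)

def decide(u_seq, W_out):
    """Accumulate readout evidence over time and choose the class with the most evidence."""
    evidence = (run_reservoir(u_seq) @ W_out).sum(axis=0)  # temporal integration
    return int(np.argmax(evidence))

# Example: two synthetic spatiotemporal patterns differing in temporal frequency.
def make_pattern(c, T=100):
    t = np.arange(T)[:, None]
    phase = np.linspace(0, np.pi, n_in)[None, :]
    freq = 0.10 if c == 0 else 0.15
    return np.sin(freq * t + phase) + 0.05 * rng.normal(size=(T, n_in))

train_seqs = [make_pattern(c) for c in (0, 1) for _ in range(5)]
labels = [c for c in (0, 1) for _ in range(5)]
W_out = train_readout(train_seqs, labels, n_classes=2)
print([decide(make_pattern(c), W_out) for c in (0, 1)])    # expected: [0, 1]
```

In this sketch only the readout weights are trained, which mirrors why such models can generalize from few examples: the fixed random reservoir already separates the input patterns in its high-dimensional state space, and learning reduces to fitting a linear map.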
