Consideration on liquid structure contributing to discrimination capability of Liquid State Machine

The Liquid State Machine (LSM) is a reservoir computing model whose universal computing capability has attracted researchers to its application and improvement. Meanwhile, advances in measurement technology have revealed specific structures in brain networks, such as scale-free, small-world, and modular properties, which contribute to higher-order brain functions such as cognition and memory. In this paper, we apply various network models to the recurrent neural network of an LSM and investigate the relationship between structural properties and discrimination accuracy. The results suggest that modularity of the recurrent neural network enhances the discrimination capability of the LSM.
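
As a minimal sketch of the kind of comparison the abstract describes (not the authors' actual setup), the snippet below builds reservoir wiring patterns from several standard network models and compares one structural property, Newman's modularity Q, across them. It assumes the networkx library; the reservoir size and wiring parameters are illustrative placeholders.

```python
# Sketch: compare candidate reservoir topologies by their modularity Q.
# Assumes networkx; N, p, k, m are illustrative values, not the paper's.
import networkx as nx

N = 100  # reservoir (recurrent layer) size -- placeholder

# Candidate topologies for the recurrent connections of the reservoir.
topologies = {
    "random (Erdos-Renyi)":        nx.erdos_renyi_graph(N, p=0.1, seed=0),
    "small-world (Watts-Strogatz)": nx.watts_strogatz_graph(N, k=10, p=0.1, seed=0),
    "scale-free (Barabasi-Albert)": nx.barabasi_albert_graph(N, m=5, seed=0),
}

for name, g in topologies.items():
    # Greedy modularity maximization gives a quick estimate of how modular
    # each wiring pattern is (Newman's modularity Q).
    communities = nx.algorithms.community.greedy_modularity_communities(g)
    q = nx.algorithms.community.modularity(g, communities)
    print(f"{name:30s}  modularity Q = {q:.3f}")
```

In a full experiment, each graph would serve as the connectivity of the LSM's recurrent layer, and discrimination accuracy on a classification task would then be related to structural measures such as Q.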
