Using brain inspired principles to unsupervisedly learn good representations for visual pattern recognition

Although deep learning has solved difficult problems in visual pattern recognition, it is mostly successful in tasks where large amounts of labeled training data are available. Furthermore, the global back-propagation-based training rule and the number of layers employed represent a departure from biological inspiration. The brain is able to perform most of these tasks in a very general way with little to no labeled data. For these reasons, it remains a key research question to investigate computational principles in the brain that can guide models to learn good representations without supervision, which can then be used to perform tasks such as classification. In this work we explore some of these principles to generate such representations for the MNIST data set. We compare the obtained results with similar recent works and find that they are extremely competitive.
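As a concrete illustration of this kind of pipeline, the sketch below learns a representation without using labels and then trains a simple linear readout for classification. It is a minimal sketch only: the choice of k-means as the unsupervised stage, the logistic-regression readout, and scikit-learn's small digits dataset (used here as a stand-in for MNIST) are assumptions made for the example, not the model evaluated in this work.

# A minimal sketch: unsupervised k-means features plus a supervised linear
# readout (an assumed pipeline for illustration, not this paper's model).
from sklearn.cluster import MiniBatchKMeans
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Small stand-in for MNIST: 8x8 grayscale digit images.
X, y = load_digits(return_X_y=True)
X = X / 16.0  # scale pixel values to [0, 1]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Unsupervised stage: learn a dictionary of prototypes; labels are never used here.
kmeans = MiniBatchKMeans(n_clusters=64, random_state=0)
kmeans.fit(X_tr)

# Encode each image by its (negative) distances to the learned prototypes.
def encode(images):
    return -kmeans.transform(images)  # transform() returns distances to the centers

# Supervised readout trained on top of the fixed unsupervised representation.
clf = LogisticRegression(max_iter=1000)
clf.fit(encode(X_tr), y_tr)
print("test accuracy:", clf.score(encode(X_te), y_te))

Any frozen unsupervised encoder could replace the k-means stage; only the final readout ever sees the labels.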
