Cognitive Computing and Neural Networks: Reverse Engineering the Brain

Abstract Cognitive computing seeks to build applications that model and mimic human thinking. One approach toward achieving this goal is to develop brain-inspired computational models. A prime example of such a model is the class of deep convolutional networks, which are currently used in pattern recognition, machine vision, and machine learning. We offer a brief review of the mammalian neocortex, the minicolumn, and the ventral pathway, and provide descriptions of abstract neural circuits that have been used to model these areas of the brain. These include Poisson spiking networks, liquid computing networks, spiking models of feature discovery in the ventral pathway, spike-timing-dependent plasticity learning, restricted Boltzmann machines, deep belief networks, and deep convolutional networks. In summary, this chapter explores abstractions of the neural networks found within the mammalian neocortex that support cognition and the beginnings of cognitive computation.
