Pavlov principle and brain reverse engineering

The general principle of operation of neural systems is considered, a principle that makes clear the efficiency of information-computational structures built of neuron-like elements. At the heart of the formulated principle lies the hypothesis of I.P. Pavlov, far ahead of its time, that "correctly organized" modification of the connections between the elements and structures of the nervous system is the basis of the "higher nervous activity of man" (in Pavlov's terminology). A clear formulation of this principle, which is naturally called the Pavlov Principle (PP), became possible only recently, as a result of the triumphant advance of the Deep Learning family of machine-learning algorithms, which has produced extremely efficient schemes for using "fine-grained" computing devices (GPUs, etc.) to solve cognitively complex problems. The article briefly outlines the development of ideas about the operation of the nervous system, starting with I.P. Pavlov's discovery of systemic conditioned reflexes, through the neural and synaptic schemes of J. Konorski and D. Hebb, the perceptron of F. Rosenblatt, and the connectionist work of the 1980s, to modern deep learning schemes. For the proper functioning of PP-based neural networks, the (initial) randomness of the interneuronal connections is essential. A computational example of the solution of a model problem based on the "direct" application of the Pavlov Principle is given.
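
The abstract does not spell out the model problem, so the sketch below only illustrates, under stated assumptions, the kind of scheme this line of work points to: a small two-layer network trained with a fixed random feedback matrix in place of the transposed forward weights (the random-feedback / feedback-alignment idea). The architecture, target function, and hyperparameters are illustrative choices, not the authors' setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression target: y = sin(x) on [-pi, pi] (illustrative assumption)
X = rng.uniform(-np.pi, np.pi, size=(256, 1))
Y = np.sin(X)

# Two-layer network: 1 -> 32 -> 1
n_in, n_hid, n_out = 1, 32, 1
W1 = rng.normal(0, 0.5, (n_in, n_hid))   # forward weights, layer 1
b1 = np.zeros(n_hid)
W2 = rng.normal(0, 0.5, (n_hid, n_out))  # forward weights, layer 2
b2 = np.zeros(n_out)

# Fixed random feedback matrix: used instead of W2.T in the backward pass
B2 = rng.normal(0, 0.5, (n_out, n_hid))

lr = 0.05
for step in range(5000):
    # Forward pass
    h = np.tanh(X @ W1 + b1)
    y_hat = h @ W2 + b2

    # Output error (gradient of mean squared error w.r.t. y_hat, up to a factor)
    e = y_hat - Y                          # shape (N, n_out)

    # Backward pass: the error is projected through the FIXED random matrix B2,
    # not through W2.T as exact backpropagation would require.
    dh = (e @ B2) * (1.0 - h**2)           # shape (N, n_hid)

    # Ordinary gradient-descent-style weight updates
    W2 -= lr * h.T @ e / len(X)
    b2 -= lr * e.mean(axis=0)
    W1 -= lr * X.T @ dh / len(X)
    b1 -= lr * dh.mean(axis=0)

    if step % 1000 == 0:
        print(f"step {step:5d}  mse {np.mean(e**2):.4f}")
```

Although the feedback pathway here is random and never updated, the forward weights tend to adapt so that the error signal it delivers is still useful, which is the empirical observation behind random-feedback learning schemes.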
