Learning Hierarchically Structured Concepts

We study the question of how concepts that have structure are represented in the brain. Specifically, we introduce a model of hierarchically structured concepts, and we show how a biologically plausible neural network can recognize these concepts and how it can learn them in the first place. Our main goal is to introduce a general framework for these tasks and to prove formally that both recognition and learning can be achieved. We show that both tasks can be accomplished even in the presence of noise. For learning, we give a formal analysis of Oja's rule, a well-known biologically plausible rule for adjusting synaptic weights. We complement the learning results with lower bounds asserting that, in order to recognize concepts of a given hierarchical depth, neural networks must have a correspondingly large number of layers.
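For readers unfamiliar with the learning rule analyzed here, the following is a minimal sketch of Oja's rule for a single linear neuron. It is not the paper's construction: the constant learning rate `eta`, the Gaussian input distribution, and all variable names are illustrative assumptions.

```python
# Minimal sketch of Oja's rule: w <- w + eta * y * (x - y * w), with y = w.x.
# The Hebbian term eta*y*x is stabilized by the decay term -eta*y^2*w,
# so the weight vector self-normalizes instead of growing without bound.
import numpy as np

rng = np.random.default_rng(0)

# Illustrative inputs whose covariance has a clear top principal component.
X = rng.normal(size=(10_000, 5)) @ np.diag([3.0, 1.0, 0.5, 0.5, 0.5])

w = rng.normal(size=5)            # initial synaptic weight vector
eta = 0.005                       # learning rate (assumed constant here)

for x in X:
    y = w @ x                     # neuron's linear output
    w += eta * y * (x - y * w)    # Oja's update

# w converges toward the top principal component of the input covariance,
# with norm close to 1 due to the built-in normalization.
print(np.linalg.norm(w))
```

Under these assumptions, the learned `w` aligns (up to sign) with the direction of largest input variance, which is the classical principal-component interpretation of Oja's rule.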
