In-Place Learning for Positional and Scale Invariance

In-place learning is a biologically inspired concept in which the computational network itself is responsible for its own learning, with no need for a separate learning network. We present in this paper a multiple-layer in-place learning network (MILN) for learning positional and scale invariance. The network allows unsupervised and supervised learning to occur concurrently. When supervision is available (e.g., from the environment during autonomous development), the network performs supervised learning through its multiple layers; when supervision is not available, the network practices using its own practice motor signal as self-supervision (i.e., unsupervised learning in the classical sense). We present the principles by which MILN automatically develops positionally and scale-invariant neurons in different layers. From sequentially sensed video streams, the proposed in-place learning algorithm develops a hierarchy of network representations. Global invariance is achieved through multi-layer quasi-invariances, with invariance increasing from earlier layers to later ones. Experimental results are presented to show the effects of these principles.
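The defining property above, that each neuron updates its own weights from its own input and response rather than relying on a separate learning network, can be illustrated with a minimal sketch. This is not the paper's exact algorithm; it is a simplified winner-take-all Hebbian-style layer with a decaying per-neuron learning rate (the class name, layer size, and update rule are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

class InPlaceLayer:
    """Toy in-place learning layer: every neuron carries its own
    update statistics (an age counter) and adapts its own weights;
    no external learning module is involved."""

    def __init__(self, n_neurons, dim):
        self.w = rng.normal(size=(n_neurons, dim))
        self.w /= np.linalg.norm(self.w, axis=1, keepdims=True)
        self.age = np.ones(n_neurons)  # per-neuron update count

    def step(self, x):
        x = x / (np.linalg.norm(x) + 1e-12)
        resp = self.w @ x                 # neuron responses
        j = int(np.argmax(resp))          # winner-take-all
        lr = 1.0 / self.age[j]            # decaying, neuron-local rate
        # In-place update: the winner moves toward its own input.
        self.w[j] = (1.0 - lr) * self.w[j] + lr * x
        self.w[j] /= np.linalg.norm(self.w[j])
        self.age[j] += 1
        return j, resp

layer = InPlaceLayer(n_neurons=4, dim=8)
for _ in range(200):
    layer.step(rng.normal(size=8))

# Every weight vector stays unit-length after its in-place updates.
print(np.allclose(np.linalg.norm(layer.w, axis=1), 1.0))  # True
```

Stacking several such layers, each receiving the responses of the previous one, is the structural idea behind the multi-layer quasi-invariances described in the abstract.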
