Location-Independent Transformations: A General Strategy for Implementing Neural Networks

Most Artificial Neural Networks (ANNs) have a fixed topology during learning and consequently suffer from a number of shortcomings. Variants of ANNs that use dynamic topologies have shown the ability to overcome many of these problems. This paper introduces Location-Independent Transformations (LITs) as a general strategy for implementing neural networks with static or dynamic topologies. A LIT creates a set of location-independent nodes, each of which computes its part of the network output independently of the other nodes, using only local information. This type of transformation efficiently supports adding and deleting nodes dynamically during learning. Two simple networks, the single-layer competitive learning network and the counterpropagation network (which combines elements of supervised learning with competitive learning), are used in this paper to illustrate the LIT strategy. Both networks are localist in the sense that ultimately a single node is responsible for each output. LITs for other models are presented in other papers.
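To make the idea concrete, here is a minimal sketch in Python of a single-layer competitive (winner-take-all) learner built from location-independent nodes. It is not the paper's implementation; the class names LocalNode and CompetitiveNet and the distance-based activation are illustrative assumptions. The point it demonstrates is the one the abstract makes: because each node computes its activation from its own weights and the input alone, adding or deleting a node is a constant-time change to the node set, with no global rewiring.

```python
import numpy as np

class LocalNode:
    """A location-independent node: it holds its own weight vector and
    computes its activation from the input alone, without referencing
    any other node in the network."""
    def __init__(self, dim, rng):
        self.w = rng.random(dim)

    def activation(self, x):
        # Local computation: closeness of this node's weights to the input.
        return -np.linalg.norm(self.w - x)

    def update(self, x, lr=0.1):
        # Standard competitive-learning rule: move weights toward the input.
        self.w += lr * (x - self.w)

class CompetitiveNet:
    """A winner-take-all layer over a dynamic set of local nodes
    (an illustrative sketch, not the paper's LIT construction)."""
    def __init__(self, dim, n_nodes=2, seed=0):
        self.rng = np.random.default_rng(seed)
        self.dim = dim
        self.nodes = [LocalNode(dim, self.rng) for _ in range(n_nodes)]

    def winner(self, x):
        # Each node evaluates independently; only the final comparison
        # of activations involves more than one node.
        return max(self.nodes, key=lambda n: n.activation(x))

    def train_step(self, x):
        self.winner(x).update(x)

    # Because nodes are self-contained, topology changes are trivial:
    def add_node(self):
        self.nodes.append(LocalNode(self.dim, self.rng))

    def remove_node(self, node):
        self.nodes.remove(node)
```

A short usage example: the topology can grow mid-training without disturbing the existing nodes, since no node's computation depends on where any other node lives.

```python
net = CompetitiveNet(dim=2, n_nodes=3)
for x in np.random.default_rng(1).random((100, 2)):
    net.train_step(x)
net.add_node()  # grow the topology during learning
```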
