Dimensionality-Reduction Using Connectionist Networks

A method is presented for using connectionist networks of simple computing elements to discover a particular type of constraint in multidimensional data. Suppose that some data source provides samples consisting of n-dimensional feature vectors, but that all of this data happens to lie on an m-dimensional surface embedded in the n-dimensional feature space. Then occurrences of data can be more concisely described by specifying an m-dimensional location on the embedded surface than by reciting all n components of the feature vector. The recoding of data in such a way is known as dimensionality-reduction. A method is presented for performing dimensionality-reduction in a wide class of situations for which an assumption of linearity need not be made about the underlying constraint surface. The method takes advantage of self-organizing properties of connectionist networks of simple computing elements. A scheme is presented for representing the values of continuous (scalar) variables in subsets of units.
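
The paper's method relies on self-organization and on representing continuous scalar variables across subsets of units; as a rough, hypothetical illustration of the underlying bottleneck idea only, the Python/NumPy sketch below trains a plain n-input, m-hidden-unit, n-output backpropagation autoencoder on data lying on a curved one-dimensional surface in three-dimensional feature space. The dataset, architecture, and hyperparameters are assumptions chosen for illustration, not the authors' implementation.

```python
# Minimal sketch (assumed, not the paper's scheme): an n-m-n network trained by
# backpropagation to reproduce its input, forcing an m-dimensional encoding of
# data that lies on an m-dimensional surface embedded in n-dimensional space.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: a curved 1-D surface (m = 1) embedded in 3-D space (n = 3).
t = rng.uniform(0.0, 1.0, size=(1000, 1))
X = np.hstack([t, np.sin(2.0 * np.pi * t), t ** 2])

n, m, lr = X.shape[1], 1, 0.05
W1, b1 = rng.normal(0.0, 0.5, (n, m)), np.zeros(m)   # encoder weights
W2, b2 = rng.normal(0.0, 0.5, (m, n)), np.zeros(n)   # decoder weights

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for epoch in range(5000):
    # Forward pass: squeeze each feature vector through m hidden units,
    # then reconstruct all n components from that m-dimensional code.
    H = sigmoid(X @ W1 + b1)      # m-dimensional "surface coordinate"
    Y = H @ W2 + b2               # linear reconstruction of the input
    err = Y - X                   # reconstruction error drives learning

    # Backward pass: gradients of the mean squared reconstruction error.
    dW2, db2 = H.T @ err / len(X), err.mean(axis=0)
    dH = (err @ W2.T) * H * (1.0 - H)
    dW1, db1 = X.T @ dH / len(X), dH.mean(axis=0)

    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

print("final reconstruction MSE:", float((err ** 2).mean()))
```

After training, the hidden activation H plays the role of the m-dimensional location on the constraint surface; in this sketch the non-linearity is a single sigmoid unit rather than the value-coded subsets of units described in the paper.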
