Abstraction and Representation of Continuous Variables in Connectionist Networks
A method is presented for using connectionist networks of simple computing elements to discover a particular type of constraint in multidimensional data. Suppose that a data source provides samples consisting of n-dimensional feature vectors, but that these data all lie on an m-dimensional surface embedded in the n-dimensional feature space. Occurrences of data can then be described more concisely by specifying an m-dimensional location on the embedded surface than by reciting all n components of the feature vector. Recoding data in this way is a form of abstraction. This paper describes a method for performing this type of abstraction in connectionist networks of simple computing elements, together with a scheme for representing the values of continuous (scalar) variables in subsets of units. The backpropagation weight-updating method for training connectionist networks is extended with an auxiliary pressure term that coaxes hidden units into the prescribed representation for scalar-valued variables.
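To make the idea concrete, the following is a minimal sketch, not the paper's exact formulation: an autoencoder-style network trained by backpropagation whose reconstruction error is augmented with an auxiliary penalty pressuring the hidden layer toward a single-peaked "bump" of activity, so that the bump's position along the layer encodes a scalar. The specific bump-shaped target (a Gaussian centered at each pattern's own center of mass), the penalty weight, and all dimensions here are illustrative assumptions.

```python
# Sketch: autoencoder with an auxiliary "bump pressure" on hidden units.
# Assumptions (not from the paper): PyTorch, Gaussian bump target centered
# at the hidden pattern's center of mass, penalty weight 0.1, sigma = 2.0.
import math
import torch
import torch.nn as nn

n, h = 8, 20          # input dimension, number of hidden units (assumed)
sigma = 2.0           # assumed bump width, in units of hidden-unit index

enc = nn.Sequential(nn.Linear(n, h), nn.Sigmoid())
dec = nn.Linear(h, n)
opt = torch.optim.Adam(list(enc.parameters()) + list(dec.parameters()), lr=1e-2)
idx = torch.arange(h, dtype=torch.float32)

def bump_pressure(a):
    """Auxiliary penalty: push each hidden activity pattern toward a Gaussian
    bump centered at the pattern's own center of mass. This is a stand-in
    for the paper's auxiliary pressure term, not its exact form."""
    com = (a * idx).sum(dim=1, keepdim=True) / (a.sum(dim=1, keepdim=True) + 1e-8)
    target = torch.exp(-0.5 * ((idx - com) / sigma) ** 2)
    return ((a - target) ** 2).mean()

# Synthetic data: samples on a 1-D curve embedded in n-D feature space,
# matching the paper's setting of an m-D surface in n-D data (here m = 1).
t = torch.rand(256, 1) * 2 * math.pi
x = torch.cat([torch.sin(k * t) for k in range(1, n + 1)], dim=1)

for step in range(2000):
    a = enc(x)                                    # hidden activity pattern
    loss = ((dec(a) - x) ** 2).mean() + 0.1 * bump_pressure(a)
    opt.zero_grad()
    loss.backward()
    opt.step()
```

After training, reading out the center of mass of the hidden activity for a new sample yields a single scalar that locates the sample along the embedded curve, which is the recoding from n feature components down to an m-dimensional description discussed above.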