Encoding Geometric Invariances in Higher-Order Neural Networks
We describe a method of constructing higher-order neural networks that respond invariantly under geometric transformations on the input space. By requiring each unit to satisfy a set of constraints on the interconnection weights, a particular structure is imposed on the network. A network built using such an architecture maintains its invariant performance independent of the values the weights assume, of the learning rules used, and of the form of the nonlinearities in the network. The invariance exhibited by a first-order network is usually of a trivial sort, e.g., responding only to the average input in the case of translation invariance, whereas higher-order networks can perform useful functions and still exhibit the invariance. We derive the weight constraints for translation, rotation, scale, and several combinations of these transformations, and report results of simulation studies.
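As an illustration of the translation case described above, the following is a minimal sketch (hypothetical names; cyclic, i.e. wrap-around, shifts assumed): a second-order unit whose weight on each input pair (x_j, x_k) is constrained to depend only on the index difference (k - j) mod N. Because a cyclic translation of the input preserves all pairwise index differences, the unit's output is unchanged, independent of the weight values and of the nonlinearity f.

```python
import numpy as np

def second_order_unit(x, v, f=np.tanh):
    """Second-order unit with translation-invariant weight constraint.

    v[d] is the single weight shared by every input pair (j, k)
    with (k - j) mod N == d, so the quadratic form is unchanged
    when x is cyclically shifted.
    """
    N = len(x)
    s = 0.0
    for j in range(N):
        for k in range(N):
            s += v[(k - j) % N] * x[j] * x[k]
    return f(s)

rng = np.random.default_rng(0)
N = 8
v = rng.normal(size=N)   # arbitrary weights: invariance holds for any values
x = rng.normal(size=N)

y0 = second_order_unit(x, v)
y1 = second_order_unit(np.roll(x, 3), v)  # cyclically translated input
# y0 and y1 agree to floating-point precision
```

Note that a first-order unit under the analogous constraint (all weights equal) can only respond to the input average, which matches the abstract's point that useful translation-invariant behavior requires at least second-order connections.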