Rapid training of higher-order neural networks for invariant pattern recognition
The authors demonstrate a second-order neural network that has learned to distinguish between two objects, regardless of their size or translational position, after being trained on only one view of each object. Using an image size of 16×16 pixels, training took less than one minute of run time on a Sun 3 workstation. The resulting network achieved 100% recognition accuracy for several test-object pairs, including the standard T-C problem, at any translational position and over a scale factor of five. The second-order network takes advantage of known relationships between input pixels to build invariance into the network architecture. The use of a third-order neural network to achieve simultaneous rotation, scale, and position invariance is also described. Because of the high level of invariance and the rapid, efficient training, initial results show higher-order neural networks to be vastly superior to multilevel first-order networks trained by backpropagation for applications requiring invariant pattern recognition.
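The weight-sharing idea the abstract alludes to can be illustrated with a small sketch. The Python/NumPy code below is not the authors' implementation: the quantization of pair directions into 36 bins, the perceptron-style update, and every name in it are assumptions made for illustration. It builds a second-order unit whose weight for a pixel pair is indexed only by the direction of the line joining the two pixels, which leaves the unit's response essentially unchanged under translation and uniform scaling of a binary input pattern.

import numpy as np

# Sketch (not the paper's code) of a second-order unit whose pair weights are
# shared according to the direction of the line joining the two pixels, so any
# translated or uniformly scaled copy of a pattern drives the same weights.

N = 16                # image side length, matching the 16x16 images in the abstract
N_ANGLE_BINS = 36     # assumed quantization of the pair direction (illustrative)

# For every unordered pixel pair (i, j), precompute the index of its shared weight.
coords = np.array([(r, c) for r in range(N) for c in range(N)], dtype=float)
pair_list = []
for i in range(N * N):
    for j in range(i + 1, N * N):
        dr, dc = coords[j] - coords[i]
        angle = np.arctan2(dr, dc) % np.pi                 # undirected angle in [0, pi)
        pair_list.append((i, j, int(angle / np.pi * N_ANGLE_BINS) % N_ANGLE_BINS))
pairs = np.array(pair_list)

def invariant_features(img):
    """Sum the products x_i * x_j of all pixel pairs into their shared-weight bins."""
    x = img.reshape(-1).astype(float)
    products = x[pairs[:, 0]] * x[pairs[:, 1]]
    features = np.zeros(N_ANGLE_BINS)
    np.add.at(features, pairs[:, 2], products)
    return features

def train(images, labels, epochs=20, lr=0.1):
    """Perceptron-style updates on the invariant features; labels are +1 / -1."""
    w = np.zeros(N_ANGLE_BINS)
    for _ in range(epochs):
        for img, target in zip(images, labels):
            f = invariant_features(img)
            output = 1 if w @ f >= 0 else -1
            if output != target:
                w += lr * target * f
    return w

def classify(w, img):
    return 1 if w @ invariant_features(img) >= 0 else -1

Because each translated or scaled view of a pattern excites essentially the same shared weights, one training view per class can suffice, which is consistent with the rapid single-view training reported in the abstract.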