The performance of neural networks used for pattern recognition and classification may be improved by introducing some capacity for invariance into the network. Two measures of similarity and their relationship to the network architecture are discussed, along with a highly efficient neural network that can serve not only as a content-addressable memory but also as a general symbolic substitution network. In addition to invariance to input errors, invariance to translations and rotations is considered. Such invariance may be achieved by modifying the network itself, by changing the interconnection scheme, or by preprocessing the input data; in some cases the preprocessing could be carried out by the network itself, by another network, or by optical means. The techniques discussed include adding input neurons, preprocessing the data with invariant matched filters, using new invariant image representations, and projecting the input data onto stored invariant principal components. The trade-offs involved in the various proposed schemes are examined.
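To make the content-addressable-memory behaviour referred to above concrete, the following is a minimal sketch of a standard Hopfield-style network with Hebbian outer-product storage and sign-threshold updates. This is an illustrative baseline only, not the modified-threshold or symbolic-substitution variants the paper discusses; the class name, pattern sizes, and corruption level are assumptions chosen for the example.

```python
import numpy as np

class HopfieldCAM:
    """Minimal Hopfield-style content-addressable memory (illustrative sketch)."""

    def __init__(self, patterns):
        # patterns: array of shape (P, N) with entries in {-1, +1}
        self.patterns = np.asarray(patterns, dtype=float)
        p, n = self.patterns.shape
        # Hebbian (outer-product) storage rule, with zeroed self-connections
        self.W = self.patterns.T @ self.patterns / n
        np.fill_diagonal(self.W, 0.0)

    def recall(self, probe, max_steps=20):
        # Synchronous sign updates until a fixed point (or the step limit)
        s = np.sign(np.asarray(probe, dtype=float))
        for _ in range(max_steps):
            s_new = np.sign(self.W @ s)
            s_new[s_new == 0] = 1  # break ties toward +1
            if np.array_equal(s_new, s):
                break
            s = s_new
        return s

# Usage: store two random bipolar patterns, then recover the first one
# from a probe corrupted by a few flipped bits (the "input errors" case).
rng = np.random.default_rng(0)
stored = rng.choice([-1, 1], size=(2, 64))
probe = stored[0].copy()
probe[:6] *= -1  # corrupt 6 of 64 bits
recovered = HopfieldCAM(stored).recall(probe)
print(np.array_equal(recovered, stored[0]))  # expected True for light corruption
```

Note that this sketch is only error-invariant: a translated or rotated version of a stored pattern would generally not be recovered, which is why the paper considers preprocessing steps (invariant matched filters, invariant image representations, projections onto invariant principal components) before the network stage.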