Invariant Object Recognition Using Fahlman and Lebiere's Learning Algorithm

A new neural network system for object recognition is proposed that is invariant to translation, scaling, and rotation. The system consists of two parts. The first is a preprocessor that computes projections of the input image such that, for any rotation or scaling of a standard image, the projections are reduced to cyclically shifted versions of one another; it then applies the Rapid Transform [4], which makes the projected images invariant to cyclic shifts. The second part is a neural net classifier that receives the outputs of the preprocessing part as its input signals. The most attractive feature of this system is that invariance is achieved using only a simple shift-invariant transformation (the Rapid Transform) in conjunction with the projection of the input image plane, so the system remains reasonably small. Experiments with six geometrical objects at different degrees of scaling and rotation show that the proposed system performs excellently when the neural net classifier is trained with Fahlman and Lebiere's learning algorithm [9].
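The invariance-producing step in the preprocessor is the Rapid Transform of Reitboeck and Brody [4]. The following is a minimal sketch of that transform, not code from the paper: it assumes an input vector whose length is a power of two (for example, a projection signature of the image) and uses the usual Walsh-Hadamard-style butterfly with absolute values taken at each stage, which is what yields invariance to cyclic shifts. The function name rapid_transform and the use of NumPy are illustrative choices.

import numpy as np

def rapid_transform(x):
    # Rapid Transform (R-transform) sketch: a Walsh-Hadamard-style
    # butterfly that takes absolute values at every stage, so the
    # output is invariant to cyclic shifts of the input.
    # Input length must be a power of two.
    x = np.asarray(x, dtype=float).copy()
    n = x.size
    if n == 0 or (n & (n - 1)) != 0:
        raise ValueError("input length must be a power of two")
    span = n
    while span > 1:
        half = span // 2
        for start in range(0, n, span):
            a = x[start:start + half].copy()
            b = x[start + half:start + span].copy()
            x[start:start + half] = np.abs(a + b)        # |a + b|
            x[start + half:start + span] = np.abs(a - b) # |a - b|
        span = half
    return x

# A rotated object whose projection signature is merely a cyclic shift
# of the original yields the same feature vector:
v = np.array([1.0, 2.0, 3.0, 4.0])
print(rapid_transform(v))              # e.g. [10.  2.  4.  0.]
print(rapid_transform(np.roll(v, 1)))  # identical output

In the proposed system, such a shift-invariant feature vector is what the neural net classifier receives, so the classifier itself never has to learn rotation or scale invariance explicitly.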

[1]  Lilly Spirkovska, et al. Connectivity strategies for higher-order neural networks applied to pattern recognition, 1990, 1990 IJCNN International Joint Conference on Neural Networks.

[2]  Joarder Kamruzzaman, et al. Network synthesis and generalization properties of artificial neural net using Fahlman and Lebiere's learning algorithm, 1992, [1992] Proceedings of the 35th Midwest Symposium on Circuits and Systems.

[3]  James L. McClelland, et al. Parallel distributed processing: explorations in the microstructure of cognition, vol. 1: foundations, 1986.

[4]  Herbert J. Reitboeck, et al. A Transformation with Invariance Under Cyclic Permutation for Applications in Pattern Recognition, 1969, Inf. Control.

[5]  Robert B. McGhee, et al. Aircraft Identification by Moment Invariants, 1977, IEEE Transactions on Computers.

[6]  Joarder Kamruzzaman, et al. Generalization ability of artificial neural network using Fahlman and Lebiere's learning algorithm, 1992, [Proceedings 1992] IJCNN International Joint Conference on Neural Networks.

[7]  Rama Chellappa, et al. Stochastic models for closed boundary analysis: Representation and reconstruction, 1981, IEEE Trans. Inf. Theory.

[8]  Ralph Roskies, et al. Fourier Descriptors for Plane Closed Curves, 1972, IEEE Transactions on Computers.

[9]  Christian Lebiere, et al. The Cascade-Correlation Learning Architecture, 1989, NIPS.

[10]  Bernard Widrow, et al. Layered neural nets for pattern recognition, 1988, IEEE Trans. Acoust. Speech Signal Process.

[11]  S. D. You, et al. Object recognition based on projection, 1992, [Proceedings 1992] IJCNN International Joint Conference on Neural Networks.

[12]  Takayuki Ito, et al. Neocognitron: A neural network model for a mechanism of visual pattern recognition, 1983, IEEE Transactions on Systems, Man, and Cybernetics.

[13]  Joarder Kamruzzaman, et al. Incremental Learning and Generalization Ability of Artificial Neural Network Trained by Fahlman and Lebiere's Learning Algorithm, 1993.