Object recognition using a neural network and invariant Zernike features

A neural-network (NN) approach for translation-, scale-, and rotation-invariant recognition of objects is presented. The network used is a multilayer perceptron (MLP) classifier with one hidden layer, trained with backpropagation learning. The image is represented by rotation-invariant features, namely the magnitudes of its Zernike moments. To achieve translation and scale invariance, the image is first normalized with respect to these two parameters using its geometric moments. The performance of the NN classifier on a database of binary images of all English characters is reported and compared with those of nearest-neighbor and minimum-mean-distance classifiers. The results show that: (1) the MLP outperforms the other two classifiers, especially when noise is present; (2) the nearest-neighbor classifier performs about as well as the MLP in the noiseless case; and (3) the Zernike-moment-based features possess strong class-separability power.
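The following is a minimal sketch of the feature-extraction pipeline the abstract describes: translation/scale normalization from geometric moments, followed by Zernike moment magnitudes as rotation-invariant features. It assumes a binary image stored as a NumPy array; the function names, the target area beta, the output grid size, and the maximum moment order are illustrative choices, not the paper's exact settings.

import numpy as np
from math import factorial

def normalize(img, beta=400.0, out_size=64):
    """Translation/scale normalization via geometric moments: move the
    centroid to the centre of the output grid and rescale so the object
    area m00 equals a fixed target beta (an assumed value)."""
    ys, xs = np.nonzero(img)
    m00 = len(xs)                       # area of a binary object
    cx, cy = xs.mean(), ys.mean()       # centroid = (m10/m00, m01/m00)
    a = np.sqrt(beta / m00)             # scale factor
    out = np.zeros((out_size, out_size))
    for v in range(out_size):           # inverse mapping into the input image
        for u in range(out_size):
            x = (u - out_size / 2) / a + cx
            y = (v - out_size / 2) / a + cy
            xi, yi = int(round(x)), int(round(y))
            if 0 <= xi < img.shape[1] and 0 <= yi < img.shape[0]:
                out[v, u] = img[yi, xi]
    return out

def radial_poly(n, m, rho):
    """Zernike radial polynomial R_nm(rho)."""
    m = abs(m)
    R = np.zeros_like(rho)
    for s in range((n - m) // 2 + 1):
        c = ((-1) ** s * factorial(n - s) /
             (factorial(s) * factorial((n + m) // 2 - s)
              * factorial((n - m) // 2 - s)))
        R += c * rho ** (n - 2 * s)
    return R

def zernike_magnitudes(img, max_order=8):
    """Return |A_nm| for all valid (n, m) with m >= 0; these magnitudes are
    invariant to in-plane rotation of the image."""
    N = img.shape[0]
    y, x = np.mgrid[0:N, 0:N]
    xn = (2 * x - N + 1) / (N - 1)      # map pixels onto the unit disk
    yn = (2 * y - N + 1) / (N - 1)
    rho = np.sqrt(xn ** 2 + yn ** 2)
    theta = np.arctan2(yn, xn)
    mask = rho <= 1.0
    feats = []
    for n in range(max_order + 1):
        for m in range(0, n + 1):
            if (n - m) % 2:             # n - |m| must be even
                continue
            V_conj = radial_poly(n, m, rho) * np.exp(-1j * m * theta)
            A = (n + 1) / np.pi * np.sum(img[mask] * V_conj[mask])
            feats.append(abs(A))
    return np.array(feats)

# features = zernike_magnitudes(normalize(binary_image))

In the paper's setup these feature vectors would then be fed to the MLP classifier; the hidden-layer size and training details are not specified in the abstract and are therefore omitted here.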