Principal component training of multilayer perceptron neural networks

This paper addresses the problem of training a multilayer perceptron neural network for use in statistical pattern recognition applications. In particular, it proposes a training method that substantially reduces the number of iterations typically required by the backpropagation learning algorithm. The use of principal component analysis is proposed, and an example is given that demonstrates significant improvements in convergence speed, as well as a reduction in the number of hidden-layer neurons needed, while maintaining accuracy comparable to that of a conventional perceptron network trained with backpropagation. The accuracy of the principal-component-trained network is also compared to that of a Bayes classifier, which serves as a reference for evaluating accuracy. In addition, a cursory examination of network performance with uniformly distributed feature classes is included. This work is still preliminary, but the initial examples we have considered suggest the method holds promise for statistical classification applications.
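To make the idea concrete, the following is a minimal sketch (not the paper's actual procedure) of how principal component analysis can precede a small backpropagation-trained perceptron: features are projected onto their leading principal components, and a one-hidden-layer network is then trained on the reduced representation. The synthetic two-class Gaussian data, the number of retained components, and all network hyperparameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed synthetic data: two Gaussian feature classes in 10-D,
# with only the first two dimensions carrying class information.
n = 200
X0 = rng.normal(0.0, 1.0, (n, 10)); X0[:, :2] += 3.0
X1 = rng.normal(0.0, 1.0, (n, 10)); X1[:, :2] -= 3.0
X = np.vstack([X0, X1])
y = np.array([0] * n + [1] * n)

def pca_project(X, k):
    """Project mean-centered data onto its top-k principal components."""
    Xc = X - X.mean(axis=0)
    cov = Xc.T @ Xc / (len(X) - 1)
    vals, vecs = np.linalg.eigh(cov)            # eigenvalues in ascending order
    top = vecs[:, np.argsort(vals)[::-1][:k]]   # top-k eigenvectors (columns)
    return Xc @ top

# Reduced 2-D representation fed to the network in place of the raw 10-D input.
Z = pca_project(X, 2)

def train_mlp(Z, y, hidden=4, lr=0.1, epochs=200):
    """One-hidden-layer perceptron trained by plain batch backpropagation."""
    init = np.random.default_rng(1)
    W1 = init.normal(0, 0.5, (Z.shape[1], hidden)); b1 = np.zeros(hidden)
    W2 = init.normal(0, 0.5, (hidden, 1)); b2 = np.zeros(1)
    t = y.reshape(-1, 1)
    for _ in range(epochs):
        H = np.tanh(Z @ W1 + b1)                       # hidden activations
        p = 1.0 / (1.0 + np.exp(-(H @ W2 + b2)))       # sigmoid output
        d2 = p - t                                     # cross-entropy gradient
        d1 = (d2 @ W2.T) * (1.0 - H**2)                # backprop through tanh
        W2 -= lr * H.T @ d2 / len(Z); b2 -= lr * d2.mean(axis=0)
        W1 -= lr * Z.T @ d1 / len(Z); b1 -= lr * d1.mean(axis=0)
    return W1, b1, W2, b2

W1, b1, W2, b2 = train_mlp(Z, y)
H = np.tanh(Z @ W1 + b1)
pred = (1.0 / (1.0 + np.exp(-(H @ W2 + b2))) > 0.5).astype(int).ravel()
acc = (pred == y).mean()
```

Because the class separation here lies along the leading principal components, the two-dimensional projection preserves the discriminative information, so a small hidden layer suffices, which is consistent with the reductions in hidden-layer size and training iterations described above.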