Efficient Neural Network Training on a Cray Y-MP

An efficient implementation of a quasi-Newton algorithm for training feed-forward neural networks on a Cray Y-MP is presented. The most time-consuming step of neural network training with a quasi-Newton algorithm is the computation of the error function and its gradient. The parallelism embedded in these computations can be exploited through vectorization on a Cray Y-MP supercomputer. We show how these computations can be carried out such that the overall performance of the neural network training process is enhanced substantially.
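The error-and-gradient step the abstract identifies as dominant can be sketched as follows. This is a minimal NumPy illustration of the idea, not the paper's Cray code: the pattern-by-pattern loop is replaced by dense matrix operations over all training patterns at once, which is the kind of regular, long-vector work a vector machine pipelines efficiently. The one-hidden-layer architecture, tanh activation, and sum-of-squares error are assumptions for the sketch.

```python
import numpy as np

def error_and_gradient(W1, W2, X, T):
    """Sum-of-squares error and its gradient for a one-hidden-layer
    feed-forward network, vectorized over all P training patterns.
    (Hypothetical sketch; the network shape and activation are assumed,
    not taken from the paper.)"""
    # Forward pass as matrix products: X is P x n_in, T is P x n_out.
    H = np.tanh(X @ W1)           # P x n_hid hidden activations
    Y = H @ W2                    # P x n_out linear outputs
    E = Y - T                     # P x n_out residuals
    err = 0.5 * np.sum(E * E)     # scalar error function

    # Backward pass, also expressed as dense matrix products,
    # so error and gradient share the same vectorizable structure.
    gW2 = H.T @ E                         # gradient w.r.t. W2, n_hid x n_out
    dH = (E @ W2.T) * (1.0 - H * H)       # backprop through tanh
    gW1 = X.T @ dH                        # gradient w.r.t. W1, n_in x n_hid
    return err, gW1, gW2
```

A quasi-Newton driver (e.g. BFGS) would call this routine once per line-search trial point, which is why its cost dominates training and why vectorizing it pays off.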