An analysis of coarse-grain parallel training of a neural net

In modern-day pattern recognition, neural nets are used extensively. General use of a feedforward neural net consists of a training phase followed by a classification phase. Classification of an unknown test vector is very fast, consisting only of propagating the test vector through the net. Training involves an optimization procedure and is very time consuming, since a feasible local minimum is sought in weight space. When the training algorithm is based on error backpropagation, the optimization procedure consists of the following steps: computation of the activation of the net when all the training examples are presented to it; computation of an error function based on the activation; computation of the gradients at a point in weight space; and finally, adaptation of the weight values of the net. In this paper we present an analysis of a parallel implementation of the backpropagation algorithm using conjugate-gradient optimization for a three-layered, feedforward neural network, using netwo...
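The four training steps listed above can be sketched in batch form for a three-layered (input-hidden-output) feedforward net. This is a hypothetical minimal illustration in NumPy, not the paper's implementation: plain steepest descent stands in for the conjugate-gradient optimizer, and the sum-of-squares error, sigmoid activation, and all layer sizes are assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_step(X, T, W1, W2, lr=0.5):
    """One batch-mode backpropagation step over all training examples."""
    # 1. Forward pass: activation of the net for the whole training set.
    H = sigmoid(X @ W1)            # hidden-layer activations
    Y = sigmoid(H @ W2)            # output-layer activations
    # 2. Error function based on the activation (sum-of-squares error).
    E = 0.5 * np.sum((Y - T) ** 2)
    # 3. Gradients at the current point in weight space.
    dY = (Y - T) * Y * (1.0 - Y)   # delta at the output layer
    gW2 = H.T @ dY
    dH = (dY @ W2.T) * H * (1.0 - H)  # delta backpropagated to hidden layer
    gW1 = X.T @ dH
    # 4. Adapt the weight values (steepest descent here, not conjugate gradient).
    W1 -= lr * gW1
    W2 -= lr * gW2
    return E

rng = np.random.default_rng(0)
X = rng.normal(size=(8, 3))            # 8 training vectors, 3 inputs
T = rng.uniform(size=(8, 2))           # 2 target outputs per vector
W1 = rng.normal(scale=0.1, size=(3, 5))
W2 = rng.normal(scale=0.1, size=(5, 2))

errors = [train_step(X, T, W1, W2) for _ in range(200)]
print(errors[0], "->", errors[-1])     # error decreases over training
```

Because every step operates on the full training matrix `X`, each of the four phases is a dense matrix operation, which is exactly what makes coarse-grain parallelization (partitioning the training set or the weight matrices across processors) attractive.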
