Performance Evaluation of Two Distributed Back-Propagation Implementations

This article presents the results of experiments in parallelizing the training phase of a feed-forward artificial neural network. More specifically, we develop and analyze parallelization strategies for the widely used neural-network learning algorithm known as back-propagation. We describe two strategies for parallelizing the back-propagation algorithm and implemented both on several LANs, which allowed us to evaluate and analyze their performance based on actual runs. We were interested in the qualitative aspects of the analysis, in order to reach a fair understanding of the factors that determine the behavior of these parallel algorithms. We were also interested in discovering and handling some of the specific circumstances that must be considered when a parallelized neural-network learning algorithm is implemented on a set of workstations in a LAN. Part of our purpose is to investigate whether the computational resources of such a set of workstations can be exploited.
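One common way to parallelize back-propagation training of the kind described above is data parallelism: each workstation computes the gradient of the error over its own shard of the training set, and the shard gradients are combined into one synchronous weight update. The sketch below illustrates this scheme on a toy single-unit network; it is an assumption-laden illustration, not the paper's implementation, and all names (`shard_gradient`, `train_step`) and the toy task are invented for the example.

```python
import math

def forward(w, x):
    # Single sigmoid unit: y = sigma(w . x)
    s = sum(wi * xi for wi, xi in zip(w, x))
    return 1.0 / (1.0 + math.exp(-s))

def shard_gradient(w, shard):
    # Gradient of the squared error summed over one worker's data shard.
    g = [0.0] * len(w)
    for x, t in shard:
        y = forward(w, x)
        delta = (y - t) * y * (1.0 - y)  # back-propagated error term
        for i, xi in enumerate(x):
            g[i] += delta * xi
    return g

def train_step(w, shards, lr=1.0):
    # Each "worker" computes its shard gradient independently (the loop
    # below is the part that would run in parallel on separate machines);
    # a master then averages the gradients and applies one synchronous update.
    grads = [shard_gradient(w, s) for s in shards]
    n = sum(len(s) for s in shards)
    avg = [sum(g[i] for g in grads) / n for i in range(len(w))]
    return [wi - lr * gi for wi, gi in zip(w, avg)]

if __name__ == "__main__":
    # Toy AND function with a bias input, split across two simulated workers.
    data = [([1.0, 0.0, 0.0], 0.0), ([1.0, 0.0, 1.0], 0.0),
            ([1.0, 1.0, 0.0], 0.0), ([1.0, 1.0, 1.0], 1.0)]
    shards = [data[:2], data[2:]]
    w = [0.0, 0.0, 0.0]
    for _ in range(10000):
        w = train_step(w, shards)
    print(forward(w, [1.0, 1.0, 1.0]))  # high output for the positive case
```

Because every worker sees the same averaged gradient, this synchronous scheme computes the same updates as sequential batch back-propagation; its speedup on a LAN is then limited mainly by the cost of exchanging gradients each step, which is one of the factors a qualitative analysis like the one above has to weigh.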
