PVM-based training of large neural architectures

A methodology for parallelizing neural network training algorithms is described, based on the parallel evaluation of the error function and its gradient using the Parallel Virtual Machine (PVM). PVM is an integrated set of software tools and libraries that emulates a general-purpose, flexible, heterogeneous concurrent computing framework on interconnected computers of various architectures. The proposed methodology has large granularity and low synchronization overhead, and has been implemented and tested. Our results indicate that the relatively easy setup of a PVM (using existing workstations), combined with the parallelization of the training algorithms, yields considerable speed-ups, especially when large network architectures and training sets are used.
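
The scheme the abstract describes maps naturally onto a PVM master/worker pattern: each worker evaluates the error function and gradient over a disjoint slice of the training set, and the master synchronizes only once per epoch to sum the partial results and update the weights. The following is a minimal C sketch of such a master task, not the authors' implementation; the worker program name "bp_worker", the constants NWORKERS and NWEIGHTS, the message tags, and the fixed learning rate are all hypothetical.

/*
 * Sketch of a PVM master for data-parallel gradient evaluation.
 * Assumes a hypothetical worker executable "bp_worker" that, on each
 * epoch, receives the current weights, computes the error and gradient
 * over its local slice of the training set, and sends both back.
 */
#include <stdio.h>
#include "pvm3.h"

#define NWORKERS 4        /* hypothetical number of worker tasks */
#define NWEIGHTS 1000     /* hypothetical size of the weight vector */

int main(void)
{
    int tids[NWORKERS];
    double w[NWEIGHTS] = {0.0};           /* current weights */
    double grad[NWEIGHTS], part[NWEIGHTS];
    double err, part_err;
    int i, j, epoch;

    /* Spawn the workers; each loads its own slice of the data. */
    pvm_spawn("bp_worker", NULL, PvmTaskDefault, "", NWORKERS, tids);

    for (epoch = 0; epoch < 100; epoch++) {
        /* Broadcast the current weights: the only forward message,
           so the granularity stays large. */
        pvm_initsend(PvmDataDefault);
        pvm_pkdouble(w, NWEIGHTS, 1);
        pvm_mcast(tids, NWORKERS, 1 /* tag: weights */);

        /* Accumulate the partial error and gradient from all workers:
           one synchronization point per epoch. */
        err = 0.0;
        for (j = 0; j < NWEIGHTS; j++) grad[j] = 0.0;
        for (i = 0; i < NWORKERS; i++) {
            pvm_recv(-1, 2 /* tag: partial results */);
            pvm_upkdouble(&part_err, 1, 1);
            pvm_upkdouble(part, NWEIGHTS, 1);
            err += part_err;
            for (j = 0; j < NWEIGHTS; j++) grad[j] += part[j];
        }

        /* Serial weight update; any update rule (e.g. an adaptive
           stepsize) can be substituted here without changing the
           parallel structure. */
        for (j = 0; j < NWEIGHTS; j++) w[j] -= 0.1 * grad[j];
        printf("epoch %d  error %g\n", epoch, err);
    }

    pvm_exit();
    return 0;
}

Because the full error and gradient are exact sums over the training examples, this decomposition changes nothing about the serial training algorithm itself; only the evaluation of the sums is distributed, which is why the speed-up grows with the size of the network and of the training set.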
