Parallel batch pattern BP training algorithm of recurrent neural network

This paper presents the development of a parallel algorithm for batch pattern training of a recurrent neural network with the back-propagation training algorithm and investigates its efficiency on a general-purpose parallel computer. The recurrent neural network model and the usual sequential batch pattern training algorithm are described theoretically. An algorithmic description of the parallel version of the batch pattern training method is introduced. The parallelization efficiency of the developed algorithm is investigated by progressively increasing the dimension of the parallelized problem. The experimental results show that the parallelization efficiency of the algorithm is high enough for its efficient use on the general-purpose parallel computers available within modern computational grid systems.
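The core idea of batch pattern parallelization can be illustrated with a minimal sketch, which is not the authors' implementation: every process holds a share of the training patterns, computes its partial batch gradient with back-propagation through time, the partial gradients are summed with an MPI all-reduce, and each process applies the identical weight update so the weight copies stay synchronized. The sketch below assumes mpi4py and NumPy; the toy Elman-style network, its sizes, the synthetic data and the learning rate are illustrative assumptions, not the configuration studied in the paper.

```python
# Minimal sketch of parallel batch pattern training of a recurrent network.
# Assumptions: mpi4py + NumPy, toy Elman-style RNN, synthetic data.
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

n_in, n_hid, T = 3, 8, 10              # input width, hidden units, sequence length
w_rng = np.random.default_rng(0)       # same seed everywhere -> identical initial weights
W_in = w_rng.normal(scale=0.1, size=(n_hid, n_in))
W_rec = w_rng.normal(scale=0.1, size=(n_hid, n_hid))
W_out = w_rng.normal(scale=0.1, size=(1, n_hid))

# Each process draws (or would load) only its own slice of the pattern set.
n_total = 256
n_local = n_total // size
data_rng = np.random.default_rng(1 + rank)
X = data_rng.normal(size=(n_local, T, n_in))
y = np.sum(X[:, :, 0], axis=1, keepdims=True)    # toy target: sum of first input channel

def local_gradients():
    """Partial batch gradient of the 0.5*MSE loss over this process's patterns (BPTT)."""
    gW_in = np.zeros_like(W_in)
    gW_rec = np.zeros_like(W_rec)
    gW_out = np.zeros_like(W_out)
    for x, t in zip(X, y):
        h = [np.zeros(n_hid)]
        for step in range(T):                        # forward pass through the sequence
            h.append(np.tanh(W_in @ x[step] + W_rec @ h[-1]))
        err = W_out @ h[-1] - t                      # output error at the last step
        gW_out += np.outer(err, h[-1])
        dh = (W_out.T @ err).ravel()
        for step in range(T - 1, -1, -1):            # backward pass through time
            dz = dh * (1.0 - h[step + 1] ** 2)       # tanh derivative
            gW_in += np.outer(dz, x[step])
            gW_rec += np.outer(dz, h[step])
            dh = W_rec.T @ dz
    return gW_in, gW_rec, gW_out

lr = 1e-3
for epoch in range(20):
    grads = local_gradients()
    for g in grads:                                  # sum partial gradients across processes
        comm.Allreduce(MPI.IN_PLACE, g, op=MPI.SUM)
    W_in -= lr * grads[0] / n_total                  # identical update on every process
    W_rec -= lr * grads[1] / n_total
    W_out -= lr * grads[2] / n_total
```

Run, for example, with `mpiexec -n 4 python sketch.py`; only one collective communication per batch is required, which is why the efficiency of this scheme improves as the dimension of the parallelized problem grows.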
