Impact of learning rate and momentum factor on the performance of a back-propagation neural network in identifying the internal dynamics of chaotic motion

The back-propagation neural network is found appropriate for identifying the internal dynamics of chaotic motion. However, during its training through the Rumelhart algorithm, a high learning rate ($\eta$) leads to rapid learning but the weights may oscillate, while a lower value of $\eta$ slows the learning process in the weight-updating formula $\Delta w_{ij} = \eta\,\delta_j o_i$. The momentum factor ($\alpha$) is introduced to accelerate the convergence of error during training through the equations $\Delta w_{ij}(t+1) = \eta\,\delta_j o_i + \alpha\,\Delta w_{ij}(t)$ and $\Delta \theta_j(t+1) = \eta\,\delta_j + \alpha\,\Delta \theta_j(t)$, while the transfer function is the sigmoid $f(x) = 1/(1 + e^{-x})$. Identifying the optimum values of $\eta$ and $\alpha$ during training is a complicated and experimental task. To identify these optimum values, the network was first trained for $10^3$ epochs with different values of $\eta$ taken from a closed interval of trial values. At $\eta = 0.3$ the convergence of the initial weights and the minimization of error (i.e., mean square error) were found appropriate. Afterwards, to find the optimum value of $\alpha$, the network was trained again for $10^3$ epochs with $\eta = 0.3$ (fixed) and with different values of $\alpha$ taken from a closed interval of trial values. It was observed that the convergence of the initial weights and the minimization of error were appropriate with $\eta = 0.3$ and $\alpha = 0.9$. With these optimum values of $\eta$ and $\alpha$, the network was trained successfully from a local minimum of error $= 1.67029292416874 \times 10^{-3}$ at $10^3$ epochs to a global minimum of error $= 4.99180426869658 \times 10^{-4}$ at $15 \times 10^5$ epochs. At the global minimum, the network exhibited excellent performance in identifying the internal dynamics of chaotic motion and in predicting future values from the past recorded data series. These essentials are presented in this research paper.
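The following is a minimal sketch, not the paper's exact implementation, of back-propagation with the delta rule plus a momentum term, using the reported optimum values $\eta = 0.3$ and $\alpha = 0.9$. The chaotic data series (a logistic map), the 4-sample input window, the 8 hidden units, the batch-style update, and the use of NumPy are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch (assumptions noted above): one-hidden-layer back-propagation
# network with sigmoid transfer functions, trained with learning rate eta and
# momentum factor alpha on a logistic-map series used as stand-in chaotic data.
import numpy as np

rng = np.random.default_rng(0)

# Assumed chaotic series: logistic map x_{n+1} = 4 x_n (1 - x_n).
x = np.empty(1200)
x[0] = 0.2
for n in range(1199):
    x[n + 1] = 4.0 * x[n] * (1.0 - x[n])

# Build (past window -> next value) pairs from the recorded series.
window = 4
X = np.array([x[i:i + window] for i in range(len(x) - window)])
Y = x[window:].reshape(-1, 1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Network weights: input -> hidden -> output (sizes are assumptions).
hidden = 8
W1 = rng.uniform(-0.5, 0.5, (window, hidden)); b1 = np.zeros(hidden)
W2 = rng.uniform(-0.5, 0.5, (hidden, 1));      b2 = np.zeros(1)

eta, alpha = 0.3, 0.9          # optimum values reported in the paper
dW1 = np.zeros_like(W1); db1 = np.zeros_like(b1)
dW2 = np.zeros_like(W2); db2 = np.zeros_like(b2)

for epoch in range(1000):      # 10^3 epochs, as in the first training stage
    # Forward pass through sigmoid transfer functions.
    h = sigmoid(X @ W1 + b1)
    y = sigmoid(h @ W2 + b2)

    # Back-propagated error terms (delta rule for the sigmoid).
    err = Y - y
    delta_out = err * y * (1.0 - y)
    delta_hid = (delta_out @ W2.T) * h * (1.0 - h)

    # Update = eta * gradient term + alpha * previous update (momentum).
    dW2 = eta * (h.T @ delta_out) / len(X) + alpha * dW2
    db2 = eta * delta_out.mean(axis=0) + alpha * db2
    dW1 = eta * (X.T @ delta_hid) / len(X) + alpha * dW1
    db1 = eta * delta_hid.mean(axis=0) + alpha * db1
    W2 += dW2; b2 += db2
    W1 += dW1; b1 += db1

mse = float(np.mean((Y - sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2)) ** 2))
print(f"mean square error after 10^3 epochs: {mse:.6e}")
```

The momentum term reuses the previous weight change, which is what allows a larger $\alpha$ to damp the oscillation that a large $\eta$ alone would cause while still keeping the descent rapid; the reported errors and epoch counts above are those of the paper, not of this sketch.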