The Development of Improved Back-Propagation Neural Networks Algorithm for Predicting Patients with Heart Disease

Many previous studies have sought to improve the training efficiency of artificial neural network algorithms. This paper presents a new approach to improving the training efficiency of the back-propagation neural network algorithm. The proposed algorithm (GDM/AG) adaptively modifies the gradient-based search direction by introducing a gain parameter into the activation function. This modification is shown to significantly enhance the computational efficiency of the training process. The proposed algorithm is generic and can be incorporated into almost any gradient-based optimization process. Its robustness is demonstrated by comparing the convergence rates and effectiveness of gradient descent methods, with and without the proposed modification, on heart disease data.
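The core idea, a gain parameter in the activation function whose derivative magnifies the gradient-based search direction, can be sketched as follows. The abstract gives no implementation details, so the network size, gain value, learning rate, momentum coefficient, and toy XOR task below are illustrative assumptions; only the gain-scaled sigmoid and the gradient-descent-with-momentum (GDM) update come from the text.

```python
import numpy as np

def sigmoid(x, gain=1.0):
    """Logistic activation with gain parameter c: f(x) = 1 / (1 + exp(-c*x)).
    A larger gain steepens the curve; the derivative c*f*(1-f) then scales
    (magnifies) the gradient used in the weight update."""
    return 1.0 / (1.0 + np.exp(-gain * x))

def train(X, y, hidden=4, gain=2.0, lr=0.1, momentum=0.9, epochs=3000, seed=0):
    """Two-layer network trained by gradient descent with momentum (GDM),
    with the gain factor entering each layer's local derivative.
    All hyperparameter defaults here are illustrative assumptions."""
    rng = np.random.default_rng(seed)
    W1 = rng.normal(scale=0.5, size=(X.shape[1], hidden))
    W2 = rng.normal(scale=0.5, size=(hidden, 1))
    V1, V2 = np.zeros_like(W1), np.zeros_like(W2)
    losses = []
    for _ in range(epochs):
        # Forward pass through the two gain-scaled sigmoid layers.
        h = sigmoid(X @ W1, gain)
        out = sigmoid(h @ W2, gain)
        err = out - y
        losses.append(float(np.mean(err ** 2)))
        # Backward pass: the gain multiplies each local derivative,
        # modifying the gradient-based search direction.
        d_out = err * gain * out * (1.0 - out)
        d_h = (d_out @ W2.T) * gain * h * (1.0 - h)
        # Momentum update (the "GDM" part).
        V2 = momentum * V2 - lr * (h.T @ d_out)
        V1 = momentum * V1 - lr * (X.T @ d_h)
        W2 += V2
        W1 += V1
    return W1, W2, losses

if __name__ == "__main__":
    # Toy XOR problem as a stand-in for the heart disease data.
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [0]], dtype=float)
    _, _, losses = train(X, y)
    print(f"MSE: {losses[0]:.4f} -> {losses[-1]:.4f}")
```

Setting `gain=1.0` recovers the standard sigmoid, so the same routine lets the modified and unmodified search directions be compared directly, mirroring the comparison the paper reports.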
