CAST: A Constant Adaptive Skipping Training Algorithm for Improving the Learning Rate of Multilayer Feedforward Neural Networks

The Multilayer Feedforward Neural Network (MFNN) has been widely applied to a broad range of supervised pattern recognition tasks. The major problem in the MFNN training phase is its long training time, especially when it is trained on very large training datasets. Accordingly, this paper proposes an enhanced training algorithm called the Constant Adaptive Skipping Training (CAST) algorithm, which focuses on reducing the training time of the MFNN through selective presentation of the training input samples. The selection is accomplished by partitioning the training dataset into two disjoint classes, a correctly classified class and a misclassified class, based on comparing each sample's calculated error measure with a threshold value. Only the input samples in the misclassified class are presented to the MFNN for training in the next epoch, whereas the samples in the correctly classified class are skipped for a constant number of subsequent epochs, dynamically reducing the number of training input samples presented in each epoch. Constantly decreasing the size of the training dataset in this way reduces the total training time and thereby speeds up the training process. The CAST algorithm can be merged with any training algorithm used for supervised tasks, can be used to train datasets with any number of patterns, and is very simple to implement. The proposed CAST algorithm is evaluated on the benchmark datasets Iris, Waveform, Heart Disease, and Breast Cancer for different learning rates. Simulation studies show that the CAST training algorithm trains faster than the LAST and standard BPN algorithms.
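As a rough illustration of the skipping rule described above, the following sketch trains a tiny one-hidden-layer network with plain backpropagation and, after each sample is presented, compares its error against a threshold: a sample within the threshold is treated as correctly classified and skipped for a constant number of subsequent epochs. The network size, the threshold epsilon, the skip count skip_epochs, and the toy data are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

# Minimal sketch of the CAST skipping rule on a tiny one-hidden-layer network
# trained with plain backpropagation (BPN).  All hyperparameters below are
# illustrative assumptions, not values from the paper.

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_cast(X, y, hidden=8, lr=0.5, epochs=200, epsilon=0.1, skip_epochs=3):
    n, d = X.shape
    W1 = rng.normal(scale=0.5, size=(d, hidden))
    W2 = rng.normal(scale=0.5, size=(hidden, 1))
    # next_epoch[i] is the earliest epoch at which sample i is presented again
    next_epoch = np.zeros(n, dtype=int)

    for epoch in range(epochs):
        # only samples that are not currently being skipped are presented
        active = np.flatnonzero(next_epoch <= epoch)
        for i in active:
            # forward pass
            h = sigmoid(X[i] @ W1)
            out = sigmoid(h @ W2)[0]
            err = y[i] - out
            # backward pass (standard BPN weight update)
            delta_out = err * out * (1 - out)
            delta_h = delta_out * W2[:, 0] * h * (1 - h)
            W2[:, 0] += lr * delta_out * h
            W1 += lr * np.outer(X[i], delta_h)
            # CAST rule: a sample whose error is within the threshold is
            # treated as correctly classified and skipped for a constant
            # number of subsequent epochs; misclassified samples return
            # in the very next epoch
            if abs(err) <= epsilon:
                next_epoch[i] = epoch + 1 + skip_epochs
            else:
                next_epoch[i] = epoch + 1
    return W1, W2

# Usage on a toy XOR-style problem (illustrative only)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 1, 1, 0], dtype=float)
W1, W2 = train_cast(X, y)
```

Because correctly classified samples sit out for a fixed number of epochs, the active set shrinks as training progresses, which is where the reduction in total training time comes from.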
