Relaxed conditions for convergence analysis of online back-propagation algorithm with L2 regularizer for Sigma-Pi-Sigma neural network

Abstract The boundedness properties of the weight estimates are investigated during training with the online back-propagation method with an L2 regularizer for the Sigma-Pi-Sigma neural network. This brief presents a unified convergence analysis, exploiting White's theorems for the method of stochastic approximation. We apply the regularizer to derive estimation bounds for the Sigma-Pi-Sigma network, and we give convergence conditions ensuring that the back-propagation estimator converges almost surely to a parameter value that locally minimizes the expected squared error loss. Moreover, weight boundedness estimates are derived through the squared regularizer, and this boundedness is then exploited to prove the convergence of the algorithm. A simulation is also given to verify the theoretical findings.

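The abstract does not state the network equations or the update rule explicitly. The sketch below is a minimal, assumed illustration of online back-propagation with a squared L2 penalty for a Sigma-Pi-Sigma architecture in the sense of [6] (summing units feeding product units, whose outputs are summed at the output node); all identifiers (SigmaPiSigma, online_step, eta, lam) are illustrative and not taken from the paper.

```python
import numpy as np

# Minimal sketch (assumed formulation): a Sigma-Pi-Sigma network with linear
# summing units feeding product units, whose outputs are summed at the output
# node, trained by online back-propagation on the L2-regularized squared error
#   E(W; x, t) = 0.5*(y(x; W) - t)^2 + 0.5*lam*||W||^2.

class SigmaPiSigma:
    def __init__(self, n_in, n_pi, n_sigma, seed=0):
        rng = np.random.default_rng(seed)
        # W[k, j, i]: weight from input i to the j-th summing unit of the
        # k-th product unit.
        self.W = 0.1 * rng.standard_normal((n_pi, n_sigma, n_in))

    def forward(self, x):
        s = self.W @ x                  # (n_pi, n_sigma) summing-unit outputs
        p = np.prod(s, axis=1)          # (n_pi,) product-unit outputs
        y = p.sum()                     # scalar network output (final sigma)
        return y, s

    def online_step(self, x, t, eta=0.05, lam=1e-4):
        """One online gradient update on the single sample (x, t)."""
        y, s = self.forward(x)
        err = y - t
        # dy/dW[k, j, :] = (product of s[k, j'] over j' != j) * x
        n_sigma = s.shape[1]
        prod_except = np.empty_like(s)
        for j in range(n_sigma):
            others = np.arange(n_sigma) != j
            prod_except[:, j] = np.prod(s[:, others], axis=1)
        grad = err * prod_except[:, :, None] * x[None, None, :] + lam * self.W
        self.W -= eta * grad
        return 0.5 * err**2 + 0.5 * lam * np.sum(self.W**2)


if __name__ == "__main__":
    rng = np.random.default_rng(1)
    net = SigmaPiSigma(n_in=3, n_pi=4, n_sigma=2)
    teacher = lambda x: x[0] * x[1] + 0.5 * x[2]      # toy target function
    for step in range(2000):
        x = rng.uniform(-1.0, 1.0, size=3)
        loss = net.online_step(x, teacher(x))
    print("final sample loss: %.6f" % loss)
```

In this assumed formulation the penalty term 0.5*lam*||W||^2 contributes lam*W to every online gradient step, which is the mechanism that analyses of this kind use to keep the weight sequence bounded and, in turn, to establish almost sure convergence.
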
[1] Zongben Xu, et al. Essential rate for approximation by spherical neural networks, 2011, Neural Networks.

[2] Wei Wu, et al. A modified gradient learning algorithm with smoothing L1/2 regularization for Takagi-Sugeno fuzzy models, 2014, Neurocomputing.

[3] Lixiang Li, et al. Stochastic synchronization of complex network via a novel adaptive nonlinear controller, 2014.

[4] H. White. Some Asymptotic Results for Learning in Single Hidden-Layer Feedforward Network Models, 1989.

[5] Wei Wu, et al. Convergence Analysis of Batch Gradient Algorithm for Three Classes of Sigma-Pi Neural Networks, 2007, Neural Processing Letters.

[6] Chien-Kuo Li. A Sigma-Pi-Sigma Neural Network (SPSNN), 2004, Neural Processing Letters.

[7] Xin Li, et al. Training Multilayer Perceptrons Via Minimization of Sum of Ridge Functions, 2002, Adv. Comput. Math.

[8] Wei Wu, et al. Boundedness and Convergence of Online Gradient Method with Penalty for Linear Output Feedforward Neural Networks, 2009, Neural Processing Letters.

[9] Ashraf M. Abdelbar, et al. Advanced learning methods and exponent regularization applied to a high order neural network, 2014, Neural Computing and Applications.

[10] Wei Wu, et al. Boundedness and Convergence of Online Gradient Method With Penalty for Feedforward Neural Networks, 2009, IEEE Transactions on Neural Networks.

[11] Pascal Bianchi, et al. Convergence of a Multi-Agent Projected Stochastic Gradient Algorithm for Non-Convex Optimization, 2011, IEEE Transactions on Automatic Control.

[12] Russell Reed, et al. Pruning algorithms-a survey, 1993, IEEE Trans. Neural Networks.

[13] Ovidiu Radulescu, et al. Convergence of stochastic gene networks to hybrid piecewise deterministic processes, 2011, 1101.1431.

[14] Lennart Ljung, et al. Analysis of recursive stochastic algorithms, 1977.

[15] Jing Wang, et al. Convergence of batch gradient learning algorithm with smoothing L1/2 regularization for Sigma-Pi-Sigma neural networks, 2015, Neurocomputing.

[16] Nan Nan, et al. Strong Convergence Analysis of Batch Gradient-Based Learning Algorithm for Training Pi-Sigma Network Based on TSK Fuzzy Models, 2015, Neural Processing Letters.