
Abstract This paper introduces a new method for training recurrent neural networks using dynamical trajectory-based optimization. The method uses a projected gradient system (PGS) and a quotient gradient system (QGS) to locate the feasible regions of an optimization problem and to search those regions for local minima. The local minimum with the lowest cost is then selected as the global minimum of the optimization problem. Lyapunov theory is used to prove that the local minima are stable, including in the presence of measurement errors. Numerical examples show that the new approach outperforms networks trained with genetic algorithms and with backpropagation through time (BPTT).
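The two-phase idea described in the abstract can be illustrated with a minimal sketch on a toy constrained problem. This is not the paper's implementation (the paper applies the method to RNN training costs); the problem, step sizes, and all function names here are assumptions chosen for illustration. The QGS phase descends the squared constraint violation to reach a feasible component from an arbitrary start, and the PGS phase then descends the objective along that component; repeating from several starts yields multiple local minima, from which the cheapest is kept.

```python
import numpy as np

# Toy constrained problem (illustration only, not the paper's RNN cost):
# minimize f(x) subject to h(x) = 0, where the feasible set x0**2 = 1
# has two disconnected components (the lines x0 = -1 and x0 = +1).
def f(x):
    return (x[0] - 2.0) ** 2 + x[1] ** 2

def grad_f(x):
    return np.array([2.0 * (x[0] - 2.0), 2.0 * x[1]])

def h(x):
    return x[0] ** 2 - 1.0

def grad_h(x):
    return np.array([2.0 * x[0], 0.0])

def qgs_phase(x, lr=0.02, steps=1000):
    # Quotient gradient system: descend 0.5*h(x)**2 to reach a
    # feasible component from an infeasible starting point.
    for _ in range(steps):
        x = x - lr * h(x) * grad_h(x)
    return x

def pgs_phase(x, lr=0.01, steps=2000):
    # Projected gradient system: descend f along the feasible set by
    # removing the gradient component normal to the constraint.
    for _ in range(steps):
        n, g = grad_h(x), grad_f(x)
        x = x - lr * (g - (g @ n) / (n @ n) * n)
        x = qgs_phase(x, steps=5)  # pull back onto the feasible set
    return x

# Multi-start search: each start is driven to a feasible component (QGS),
# then to a local minimum on it (PGS); the cheapest minimum is kept.
starts = [(-2.5, 1.0), (2.5, -1.0), (0.5, 2.0), (-0.5, -2.0)]
minima = []
for x0 in starts:
    x_star = pgs_phase(qgs_phase(np.array(x0)))
    if not any(np.allclose(x_star, m, atol=1e-3) for m in minima):
        minima.append(x_star)

best = min(minima, key=f)  # lowest-cost local minimum found
```

On this toy problem the four starts collapse onto the two feasible components, giving the local minima (-1, 0) and (1, 0); the lower-cost one, (1, 0), is returned as the global minimum, mirroring the search strategy the abstract describes.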
