Enhanced Velocity-Driven Particle Swarm Optimization for Evolving Artificial Neural Network

Neural networks have been regarded as an effective tool for predicting chaotic time series and approximating nonlinear functions. However, traditional training methods suffer from shortcomings such as slow convergence and a tendency to become trapped in local optima. To address these issues, this paper proposes an enhanced velocity-driven particle swarm optimization algorithm, called EVDPSO. The proposed algorithm introduces an adaptive scheme for the parameter φ used in the velocity-driven mechanism, which helps achieve an effective trade-off between exploration and exploitation. To verify its effectiveness, EVDPSO is employed as an optimization method for training artificial neural networks on nonlinear function approximation problems and chaotic time series prediction. The experimental results show that the proposed algorithm outperforms other traditional optimization algorithms.
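
For concreteness, the sketch below illustrates the general setup the abstract describes: a particle swarm evolving the weights of a small feedforward network for nonlinear function approximation, with a velocity update whose parameter phi is adapted over the run to shift the balance between exploration and exploitation. The 1-10-1 architecture, the sine target, the linear phi schedule, and all constants are illustrative assumptions; the abstract does not give EVDPSO's actual velocity-driven rule or its adaptive method for φ.

```python
# Hedged sketch: a standard PSO trainer for a small feedforward network,
# with a hypothetical linearly decaying phi that re-weights the global and
# personal attraction terms over the run. This is NOT the paper's EVDPSO
# update rule, which is not specified in the abstract.
import numpy as np

rng = np.random.default_rng(0)

N_HIDDEN = 10                                  # assumed 1-10-1 architecture
DIM = N_HIDDEN + N_HIDDEN + N_HIDDEN + 1       # w1, b1, w2, b2

def forward(params, x):
    w1 = params[:N_HIDDEN].reshape(1, N_HIDDEN)
    b1 = params[N_HIDDEN:2 * N_HIDDEN]
    w2 = params[2 * N_HIDDEN:3 * N_HIDDEN].reshape(N_HIDDEN, 1)
    b2 = params[-1]
    return np.tanh(x @ w1 + b1) @ w2 + b2

def mse(params, x, y):
    return float(np.mean((forward(params, x) - y) ** 2))

# Toy function approximation task: fit sin on [-pi, pi].
x = np.linspace(-np.pi, np.pi, 200).reshape(-1, 1)
y = np.sin(x)

SWARM, ITERS = 30, 300
pos = rng.uniform(-1, 1, (SWARM, DIM))
vel = np.zeros((SWARM, DIM))
pbest = pos.copy()
pbest_fit = np.array([mse(p, x, y) for p in pos])
gbest = pbest[pbest_fit.argmin()].copy()

w, c1, c2 = 0.729, 1.494, 1.494                # common PSO settings (assumed)
for t in range(ITERS):
    # Hypothetical adaptive phi: decays from 0.9 to 0.4, emphasizing the
    # global attractor early (exploration) and personal bests later.
    phi = 0.9 - 0.5 * t / ITERS
    r1, r2 = rng.random((SWARM, DIM)), rng.random((SWARM, DIM))
    vel = (w * vel
           + (1 - phi) * c1 * r1 * (pbest - pos)
           + phi * c2 * r2 * (gbest - pos))
    pos = pos + vel
    fit = np.array([mse(p, x, y) for p in pos])
    improved = fit < pbest_fit
    pbest[improved], pbest_fit[improved] = pos[improved], fit[improved]
    gbest = pbest[pbest_fit.argmin()].copy()

print("final training MSE:", pbest_fit.min())
```

The chaotic time series prediction experiments reported in the paper would follow the same pattern, with the fitness function replaced by the prediction error on a lagged embedding of the series.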
