HIOPGA: A New Hybrid Metaheuristic Algorithm to Train Feedforward Neural Networks for Prediction

Most neural network training algorithms rely on gradient-based search, and because of its drawbacks researchers have long been interested in alternative methods. In this paper, a new Hybrid Improved Opposition-based Particle Swarm Optimization and Genetic Algorithm (HIOPGA) is proposed to train feedforward neural networks for prediction problems. Opposition-based PSO is employed to search the solution space more effectively. In addition, a new cross-validation method is proposed to keep the model from overfitting the training patterns. Several benchmark problems of varying dimension are chosen to investigate the capability of the proposed algorithm as a training algorithm. The results of HIOPGA are compared with those of the standard backpropagation algorithm with a momentum term.
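The opposition-based idea used here can be sketched briefly: for a candidate solution x bounded in [a, b], its opposite is a + b − x, and the fitter of each pair is kept, covering the search space faster than random sampling alone. The snippet below is a minimal illustration of that selection step applied to a swarm of neural-network weight vectors; the toy fitness function, bounds, and swarm size are assumptions for illustration, not the paper's actual settings.

```python
import numpy as np

rng = np.random.default_rng(0)

def fitness(w):
    # Placeholder objective: mean squared error of a tiny 1-2-1 network
    # on a toy regression task (a stand-in for the paper's prediction tasks).
    X = np.linspace(-1, 1, 20)
    y = X ** 2
    w1, b1, w2, b2 = w[:2], w[2:4], w[4:6], w[6]
    hidden = np.tanh(np.outer(X, w1) + b1)   # hidden activations, shape (20, 2)
    pred = hidden @ w2 + b2
    return np.mean((pred - y) ** 2)

dim, swarm, lo, hi = 7, 10, -1.0, 1.0
P = rng.uniform(lo, hi, (swarm, dim))        # random initial swarm
O = lo + hi - P                              # opposite swarm: x' = a + b - x

# Opposition-based selection: from the 2*swarm candidates, keep the fittest swarm.
both = np.vstack([P, O])
scores = np.array([fitness(w) for w in both])
best = both[np.argsort(scores)[:swarm]]
```

In the full algorithm this selection would be interleaved with the PSO velocity updates and GA operators; the sketch only shows the opposition step itself.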

[1]  Fuqing Zhao,et al.  Application of An Improved Particle Swarm Optimization Algorithm for Neural Network Training , 2005, 2005 International Conference on Neural Networks and Brain.

[2]  Lutz Prechelt,et al.  Automatic early stopping using cross validation: quantifying the criteria , 1998, Neural Networks.

[3]  Tamás D. Gedeon,et al.  Simulated annealing and weight decay in adaptive learning: the SARPROP algorithm , 1998, IEEE Trans. Neural Networks.

[4]  Andries Petrus Engelbrecht,et al.  Cooperative learning in neural networks using particle swarm optimizers , 2000, South Afr. Comput. J..

[5]  Teresa B. Ludermir,et al.  Particle Swarm Optimization of Neural Network Architectures and Weights , 2007 .

[6]  Martin Mandischer A comparison of evolution strategies and backpropagation for neural network training , 2002, Neurocomputing.

[7]  Lifeng Xi,et al.  Evolving artificial neural networks using an improved PSO and DPSO , 2008, Neurocomputing.

[8]  Marco Castellani,et al.  Evolutionary Artificial Neural Network Design and Training for wood veneer classification , 2009, Eng. Appl. Artif. Intell..

[9]  Hamid R. Tizhoosh,et al.  Opposition-Based Learning: A New Scheme for Machine Intelligence , 2005, International Conference on Computational Intelligence for Modelling, Control and Automation and International Conference on Intelligent Agents, Web Technologies and Internet Commerce (CIMCA-IAWTIC'06).

[10]  Moncef Gabbouj,et al.  Evolutionary artificial neural networks by multi-dimensional particle swarm optimization , 2009, Neural Networks.

[11]  Thomas Kiel Rasmussen,et al.  Hybrid Particle Swarm Optimiser with breeding and subpopulations , 2001 .

[12]  Riccardo Poli,et al.  Particle swarm optimization , 1995, Swarm Intelligence.

[13]  Hitoshi Iba,et al.  Particle swarm optimization with Gaussian mutation , 2003, Proceedings of the 2003 IEEE Swarm Intelligence Symposium. SIS'03 (Cat. No.03EX706).

[14]  Bahram Alidaee,et al.  Global optimization for artificial neural networks: A tabu search application , 1998, Eur. J. Oper. Res..

[16]  Jianzhou Wang,et al.  A Novel Hybrid Evolutionary Algorithm Based on PSO and AFSA for Feedforward Neural Network Training , 2008, 2008 4th International Conference on Wireless Communications, Networking and Mobile Computing.

[17]  Zhiwei Ni,et al.  Opposition based comprehensive learning particle swarm optimization , 2008, 2008 3rd International Conference on Intelligent System and Knowledge Engineering.

[18]  El-Ghazali Talbi,et al.  Metaheuristics - From Design to Implementation , 2009 .

[19]  Saman K. Halgamuge,et al.  Self-organizing hierarchical particle swarm optimizer with time-varying acceleration coefficients , 2004, IEEE Transactions on Evolutionary Computation.

[20]  Mahamed G.H. Omran Using Opposition-based Learning with Particle Swarm Optimization and Barebones Differential Evolution , 2009 .

[21]  Masoud Yaghini,et al.  Predicting Passenger Train Delays Using Neural Network , 2010 .

[22]  Maurice Clerc,et al.  The particle swarm - explosion, stability, and convergence in a multidimensional complex space , 2002, IEEE Trans. Evol. Comput..

[23]  C.K. Mohan,et al.  Training feedforward neural networks using multi-phase particle swarm optimization , 2002, Proceedings of the 9th International Conference on Neural Information Processing, 2002. ICONIP '02..

[24]  David B. Fogel,et al.  Alternative Neural Network Training Methods , 1995, IEEE Expert.

[25]  Rudolf Jaksa,et al.  Simultaneous gradient and evolutionary neural network weights adaptation methods , 2007, 2007 IEEE Congress on Evolutionary Computation.

[26]  Xin Yao,et al.  Evolving artificial neural networks , 1999, Proc. IEEE.

[27]  Lin Han,et al.  A Novel Opposition-Based Particle Swarm Optimization for Noisy Problems , 2007, Third International Conference on Natural Computation (ICNC 2007).

[28]  Roberto Battiti,et al.  Training neural nets with the reactive tabu search , 1995, IEEE Trans. Neural Networks.

[29]  Paulo Cortez,et al.  Particle swarms for feedforward neural network training , 2002, Proceedings of the 2002 International Joint Conference on Neural Networks. IJCNN'02 (Cat. No.02CH37290).

[30]  Stephan K. Chalup,et al.  A study on hill climbing algorithms for neural network training , 1999, Proceedings of the 1999 Congress on Evolutionary Computation-CEC99 (Cat. No. 99TH8406).

[32]  Enrique Alba,et al.  Training Neural Networks with GA Hybrid Algorithms , 2004, GECCO.

[33]  Christian Blum,et al.  Training feed-forward neural networks with ant colony optimization: an application to pattern classification , 2005, Fifth International Conference on Hybrid Intelligent Systems (HIS'05).