A Hybrid Gravitational Search Algorithm and Back-Propagation for Training Feedforward Neural Networks

Devising a satisfactory and efficient training algorithm for artificial neural networks (ANNs) has long been a challenging task. The Gravitational Search Algorithm (GSA) is a novel heuristic algorithm based on the law of gravity and mass interactions. Like most heuristic algorithms, it has a good ability to search for the global optimum, but it suffers from slow convergence. In contrast, the Back-Propagation (BP) algorithm converges quickly in the neighborhood of the global optimum. In this study, a hybrid of GSA and BP is proposed to exploit the advantages of both algorithms. The proposed hybrid is employed as a new training method for feedforward neural networks (FNNs). To investigate the performance of the proposed approach, two benchmark problems are used, and the results are compared with those of FNNs trained by the original GSA and BP algorithms. The experimental results show that the proposed hybrid algorithm outperforms both GSA and BP in training FNNs.
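The two-phase idea described above — a GSA population search over the network's weight vector to locate a promising region, followed by gradient-based refinement — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the hyperparameters (population size, gravitational constant schedule, learning rate) are arbitrary choices, and a central-difference numerical gradient stands in for analytic backpropagation to keep the sketch short.

```python
import numpy as np

rng = np.random.default_rng(0)

def mse(w, X, y, n_in, n_hid):
    """Mean squared error of a 1-hidden-layer tanh FNN with flat weight vector w."""
    W1 = w[:n_in * n_hid].reshape(n_in, n_hid)
    b1 = w[n_in * n_hid:n_in * n_hid + n_hid]
    off = n_in * n_hid + n_hid
    W2 = w[off:off + n_hid]
    b2 = w[off + n_hid]
    out = np.tanh(X @ W1 + b1) @ W2 + b2
    return float(np.mean((out - y) ** 2))

def hybrid_train(X, y, n_hid=4, pop_size=15, gsa_iters=30, bp_iters=200,
                 G0=1.0, lr=0.05):
    """GSA global search over weight vectors, then BP-style local refinement."""
    n_in = X.shape[1]
    dim = n_in * n_hid + n_hid + n_hid + 1     # W1, b1, W2, b2 flattened
    pop = rng.uniform(-1.0, 1.0, (pop_size, dim))
    vel = np.zeros_like(pop)

    # --- Phase 1: GSA exploration (masses from fitness, force-driven moves) ---
    for t in range(gsa_iters):
        fits = np.array([mse(w, X, y, n_in, n_hid) for w in pop])
        G = G0 * np.exp(-5.0 * t / gsa_iters)  # gravitational constant decays over time
        worst, best = fits.max(), fits.min()
        m = (worst - fits) / (worst - best + 1e-12)
        M = m / (m.sum() + 1e-12)              # normalized masses (better fitness = heavier)
        acc = np.zeros_like(pop)
        for i in range(pop_size):
            for j in range(pop_size):
                if i != j:
                    d = pop[j] - pop[i]
                    acc[i] += rng.random() * G * M[j] * d / (np.linalg.norm(d) + 1e-12)
        vel = rng.random(pop.shape) * vel + acc
        pop = pop + vel

    fits = np.array([mse(w, X, y, n_in, n_hid) for w in pop])
    w = pop[fits.argmin()].copy()              # best agent seeds the BP phase

    # --- Phase 2: gradient descent around the GSA solution ---
    # (central-difference gradient here; analytic backprop would replace this)
    eps = 1e-5
    for _ in range(bp_iters):
        grad = np.empty(dim)
        for k in range(dim):
            wp = w.copy(); wp[k] += eps
            wm = w.copy(); wm[k] -= eps
            grad[k] = (mse(wp, X, y, n_in, n_hid)
                       - mse(wm, X, y, n_in, n_hid)) / (2 * eps)
        w -= lr * grad
    return w, mse(w, X, y, n_in, n_hid)
```

The design point the hybrid exploits is visible in the structure: the GSA phase needs no gradient information and can escape poor basins, while the gradient phase converges quickly once a good basin has been found, so the BP loop is seeded with the fittest GSA agent rather than a random initialization.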
