Magnetic Optimization Algorithm for training Multi Layer Perceptron

Recently, the feedforward neural network (FNN), and in particular the Multi-Layer Perceptron (MLP), has become one of the most widely used computational tools, with applications in many fields. Back-propagation (BP) is the most common method for training an MLP. This learning algorithm is gradient-based and suffers from drawbacks such as entrapment in local minima and slow convergence, weaknesses that make BP-trained MLPs unreliable for solving real-world problems. Employing heuristic optimization algorithms is a popular approach to mitigating these drawbacks. The Magnetic Optimization Algorithm (MOA) is a novel heuristic optimization algorithm inspired by magnetic field theory, and it has been shown to solve optimization problems quickly and accurately. In this paper, MOA is employed as a new training method for MLPs in order to overcome the aforementioned shortcomings. The proposed learning method is compared with PSO- and GA-based learning algorithms on the 3-bit XOR and function-approximation benchmark problems. The results demonstrate the high performance of the new learning algorithm, particularly for large numbers of training samples.
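The core idea of such heuristic trainers is to encode all of an MLP's weights and biases as the position vector of one search agent and to use the network's mean squared error as the fitness that, in MOA's case, determines each agent's "magnetic field". The Python sketch below illustrates this encoding on the paper's 3-bit XOR (parity) benchmark. It is an illustration only: the ring neighborhood, the specific field, mass, and velocity update rules, and all hyper-parameter values are simplified assumptions, not the exact MOA formulation of the paper.

import numpy as np

# --- One-hidden-layer MLP whose weights come in as a flat vector ---
def mlp_forward(w, X, n_in, n_hidden):
    """Unpack a flat weight vector and run a 1-hidden-layer MLP."""
    i = 0
    W1 = w[i:i + n_in * n_hidden].reshape(n_in, n_hidden); i += n_in * n_hidden
    b1 = w[i:i + n_hidden]; i += n_hidden
    W2 = w[i:i + n_hidden].reshape(n_hidden, 1); i += n_hidden
    b2 = w[i:i + 1]
    h = np.tanh(X @ W1 + b1)                          # hidden layer
    return 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))       # sigmoid output

def mse(w, X, y, n_in, n_hidden):
    """Fitness: mean squared error of the MLP encoded by w."""
    return np.mean((mlp_forward(w, X, n_in, n_hidden).ravel() - y) ** 2)

# --- 3-bit XOR (parity) benchmark ---
X = np.array([[a, b, c] for a in (0, 1) for b in (0, 1) for c in (0, 1)], float)
y = X.sum(axis=1) % 2

n_in, n_hidden = 3, 7
dim = n_in * n_hidden + n_hidden + n_hidden + 1       # total weights + biases

rng = np.random.default_rng(0)
n_particles, iters = 30, 500
pos = rng.uniform(-1, 1, (n_particles, dim))          # each particle = one weight vector
vel = np.zeros_like(pos)

for t in range(iters):
    err = np.array([mse(p, X, y, n_in, n_hidden) for p in pos])
    # "Magnetic field": normalized to [0, 1], strongest for the lowest-error particle.
    # NOTE: simplified rules; the original MOA paper defines field, mass, and force differently.
    B = (err.max() - err) / (err.max() - err.min() + 1e-12)
    mass = 1.0 + B                                    # better particles are heavier, so they move less
    for i in range(n_particles):
        # Ring lattice: each particle is attracted by its two neighbors,
        # with force scaled by each neighbor's field strength.
        force = np.zeros(dim)
        for j in ((i - 1) % n_particles, (i + 1) % n_particles):
            force += B[j] * (pos[j] - pos[i])
        vel[i] = rng.random(dim) * vel[i] + force / mass[i]
    pos += vel                                        # synchronous position update

best = pos[np.argmin([mse(p, X, y, n_in, n_hidden) for p in pos])]
print("final MSE:", mse(best, X, y, n_in, n_hidden))

In this simplified scheme, fitter particles both exert stronger attraction on their lattice neighbors and carry larger mass, so they move more slowly themselves; this mirrors the exploitation/exploration balance that makes MOA attractive as a gradient-free MLP trainer.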
