Moth-flame optimization for training Multi-Layer Perceptrons

The Multi-Layer Perceptron (MLP) is a type of Feed-Forward Neural Network (FFNN). Finding good weights and biases for an MLP is essential to achieving minimum training error. In this paper, the Moth-Flame Optimizer (MFO) is used to train an MLP: the resulting MFO-MLP searches the space of weights and biases to minimize error and maximize classification accuracy. Five standard classification datasets are used to evaluate the performance of the proposed method, and three function-approximation datasets are used to test it further. The proposed method (MFO-MLP) is compared with four well-known optimization algorithms: Genetic Algorithm (GA), Particle Swarm Optimization (PSO), Ant Colony Optimization (ACO), and Evolution Strategy (ES). The experimental results show that the MFO algorithm is highly competitive, mitigates the problem of local optima, and achieves high accuracy.
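The training scheme described above can be sketched in code. The following is a minimal illustration, not the authors' exact implementation: it assumes a one-hidden-layer MLP with tanh hidden units and a sigmoid output, flattens all weights and biases into one vector per moth, and applies the standard MFO logarithmic-spiral update around a sorted flame list whose size shrinks over iterations. Function names such as `mfo_train` and `mlp_forward`, and all hyperparameter values, are hypothetical choices for the sketch.

```python
import numpy as np

def mlp_forward(weights, X, n_in, n_hidden, n_out):
    # Unpack the flat weight vector into the two MLP layers (assumed topology).
    i = 0
    W1 = weights[i:i + n_in * n_hidden].reshape(n_in, n_hidden); i += n_in * n_hidden
    b1 = weights[i:i + n_hidden]; i += n_hidden
    W2 = weights[i:i + n_hidden * n_out].reshape(n_hidden, n_out); i += n_hidden * n_out
    b2 = weights[i:i + n_out]
    h = np.tanh(X @ W1 + b1)                      # hidden layer
    return 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))   # sigmoid output

def mse(weights, X, y, dims):
    # Training error used as the fitness each moth tries to minimize.
    return np.mean((mlp_forward(weights, X, *dims) - y) ** 2)

def mfo_train(X, y, n_hidden=4, n_moths=20, n_iter=150, seed=0):
    rng = np.random.default_rng(seed)
    n_in, n_out = X.shape[1], y.shape[1]
    dims = (n_in, n_hidden, n_out)
    dim = n_in * n_hidden + n_hidden + n_hidden * n_out + n_out
    moths = rng.uniform(-1, 1, (n_moths, dim))    # each moth = one candidate MLP
    fitness = np.array([mse(m, X, y, dims) for m in moths])
    order = np.argsort(fitness)
    flames, flame_fit = moths[order].copy(), fitness[order].copy()
    for t in range(n_iter):
        # Number of flames decreases linearly so moths converge on the best solutions.
        n_flames = round(n_moths - t * (n_moths - 1) / n_iter)
        a = -1.0 - t / n_iter                     # spiral constant, -1 down to -2
        for i in range(n_moths):
            f = flames[min(i, n_flames - 1)]      # flame assigned to this moth
            D = np.abs(f - moths[i])              # distance to the flame
            tt = (a - 1) * rng.random(dim) + 1    # random point on the spiral
            moths[i] = D * np.exp(tt) * np.cos(2 * np.pi * tt) + f
        fitness = np.array([mse(m, X, y, dims) for m in moths])
        # Elitism: merge moths and flames, keep the best n_moths as new flames.
        combined = np.vstack([flames, moths])
        comb_fit = np.concatenate([flame_fit, fitness])
        order = np.argsort(comb_fit)[:n_moths]
        flames, flame_fit = combined[order].copy(), comb_fit[order].copy()
    return flames[0], flame_fit[0]                # best weights and their error
```

As a toy usage example, training on XOR (`X = [[0,0],[0,1],[1,0],[1,1]]`, `y = [[0],[1],[1],[0]]`) with the defaults returns a weight vector and its training MSE; because the flame list is elitist, the returned error never exceeds the best initial random moth.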
