Salp Swarm Algorithm (SSA) for Training Feed-Forward Neural Networks

Artificial neural networks (ANNs) have produced strong results across applications in statistics and computer science. The feed-forward neural network (FNN) is the simplest and most widely used neural network architecture, and it is capable of modelling nonlinear relationships. In this paper, we propose determining the weights and biases of feed-forward neural networks using the recently proposed Salp Swarm Algorithm (SSA), a swarm-based metaheuristic inspired by the navigating and foraging behaviour of salp swarms. Performance is evaluated on several benchmark datasets and compared against well-known metaheuristics.
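The idea described above can be sketched as follows: a flat vector of FNN weights and biases is treated as a salp's position, the fitness is the network's mean squared error, and the standard SSA update equations (a leader moving around the best solution found so far, followers averaging with the salp ahead of them) search the weight space. This is a minimal illustrative sketch, not the paper's exact experimental setup; the network size, bounds, and the toy XOR task below are assumptions.

```python
import numpy as np

def fnn_mse(params, X, y, n_hidden):
    """Decode a flat parameter vector into a 1-hidden-layer FNN and return its MSE."""
    n_in = X.shape[1]
    i = 0
    W1 = params[i:i + n_in * n_hidden].reshape(n_in, n_hidden); i += n_in * n_hidden
    b1 = params[i:i + n_hidden]; i += n_hidden
    W2 = params[i:i + n_hidden]; i += n_hidden
    b2 = params[i]
    h = np.tanh(X @ W1 + b1)                      # hidden layer, tanh activation
    out = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))    # sigmoid output unit
    return float(np.mean((out - y) ** 2))

def ssa_train(X, y, n_hidden=4, n_salps=30, max_iter=200, lb=-5.0, ub=5.0, seed=0):
    """Train FNN weights/biases with the Salp Swarm Algorithm (sketch)."""
    rng = np.random.default_rng(seed)
    dim = X.shape[1] * n_hidden + n_hidden + n_hidden + 1  # W1 + b1 + W2 + b2
    salps = rng.uniform(lb, ub, (n_salps, dim))
    fits = np.array([fnn_mse(s, X, y, n_hidden) for s in salps])
    best = salps[fits.argmin()].copy()
    best_fit = fits.min()
    for l in range(1, max_iter + 1):
        # c1 balances exploration/exploitation and decays over iterations
        c1 = 2.0 * np.exp(-(4.0 * l / max_iter) ** 2)
        for i in range(n_salps):
            if i == 0:  # leader salp moves around the food source (best solution)
                c2 = rng.uniform(size=dim)
                c3 = rng.uniform(size=dim)
                step = c1 * ((ub - lb) * c2 + lb)
                salps[i] = np.where(c3 >= 0.5, best + step, best - step)
            else:       # follower salps take the midpoint with the salp ahead
                salps[i] = (salps[i] + salps[i - 1]) / 2.0
        salps = np.clip(salps, lb, ub)
        for i in range(n_salps):
            f = fnn_mse(salps[i], X, y, n_hidden)
            if f < best_fit:
                best_fit, best = f, salps[i].copy()
    return best, best_fit
```

For example, on the XOR problem (`X = [[0,0],[0,1],[1,0],[1,1]]`, `y = [0,1,1,0]`) the returned MSE falls well below the 0.25 achieved by a constant 0.5 output, illustrating that the gradient-free swarm search alone can fit a nonlinear mapping.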
