Training of Radial Basis Function Using Particle Swarm Optimization

Particle swarm optimization (PSO) is a population-based optimization technique developed by Eberhart and Kennedy in 1995, inspired by the social behavior of bird flocking and fish schooling. It is a computational method that optimizes a problem by iteratively improving candidate solutions with respect to a given measure of quality, and it is well suited to optimizing continuous nonlinear functions over a problem space. In this paper, this nature-inspired behavior is used to train a neural network, namely the radial basis function (RBF) network, by PSO.

The RBF network emerged as a variant of artificial neural networks in the late 1980s; the term "radial" refers to hidden-unit activations that depend only on the distance of the input from a center. RBF networks are two-layer neural networks in which each hidden unit implements a radially activated function and the output units compute a weighted sum of the hidden-unit outputs. To use an RBF network, one must specify the hidden-unit activation function, the number of processing units, a criterion for modeling a given task, and a training algorithm for finding the network parameters. Finding the RBF weights is called network training: given a set of input-output pairs (the training set), the network parameters are optimized so that the network outputs fit the given targets, and the fit is evaluated by a cost function. Through training, the neural network models the underlying function of the mapping; in an RBF network, the parameters are found such that they minimize the cost function.
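The training scheme described above can be sketched in code. The following is a minimal illustration, not the paper's exact implementation: it assumes Gaussian hidden units, a one-dimensional input, and the widely used PSO coefficients w = 0.729, c1 = c2 = 1.494; each particle encodes all RBF parameters (centers, widths, output weights), and the cost function is mean squared error on the training set. All function names are illustrative.

```python
import numpy as np

def rbf_forward(x, centers, widths, weights):
    # Gaussian hidden units: phi_j(x) = exp(-(x - c_j)^2 / (2 s_j^2));
    # the output is a weighted sum of the hidden-unit activations.
    d = x[:, None] - centers[None, :]              # (N, H) distances
    phi = np.exp(-(d ** 2) / (2.0 * widths ** 2))  # (N, H) activations
    return phi @ weights                           # (N,) network output

def mse_cost(params, x, y, n_hidden):
    # One particle encodes [centers | widths | weights].
    centers = params[:n_hidden]
    widths = np.abs(params[n_hidden:2 * n_hidden]) + 1e-6  # keep widths positive
    weights = params[2 * n_hidden:]
    pred = rbf_forward(x, centers, widths, weights)
    return np.mean((pred - y) ** 2)

def pso_train(x, y, n_hidden=8, n_particles=30, iters=300, seed=0):
    rng = np.random.default_rng(seed)
    dim = 3 * n_hidden                             # centers + widths + weights
    pos = rng.uniform(-1.0, 1.0, (n_particles, dim))
    vel = np.zeros((n_particles, dim))
    pbest = pos.copy()
    pbest_cost = np.array([mse_cost(p, x, y, n_hidden) for p in pos])
    g = np.argmin(pbest_cost)
    gbest, gbest_cost = pbest[g].copy(), pbest_cost[g]
    w, c1, c2 = 0.729, 1.494, 1.494                # common constriction-type values
    for _ in range(iters):
        r1 = rng.random((n_particles, dim))
        r2 = rng.random((n_particles, dim))
        # Velocity update: inertia + cognitive pull + social pull.
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = pos + vel
        cost = np.array([mse_cost(p, x, y, n_hidden) for p in pos])
        improved = cost < pbest_cost               # update personal bests
        pbest[improved] = pos[improved]
        pbest_cost[improved] = cost[improved]
        if cost.min() < gbest_cost:                # update the global best
            g = np.argmin(cost)
            gbest, gbest_cost = pos[g].copy(), cost[g]
    return gbest, gbest_cost
```

As a usage sketch, fitting y = sin(pi x) on 50 points in [-1, 1] with `params, cost = pso_train(x, y)` drives the training MSE well below its random-initialization value; unlike gradient-based training, no derivatives of the cost function are required, which is the main appeal of PSO here.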
