A Radial Basis Function Neural Network with Adaptive Structure via Particle Swarm Optimization

The radial basis function neural network (RBFNN) can be viewed as a combination of the learning vector quantizer (LVQ-I) and gradient descent: its hidden-layer centers are placed by a quantization procedure, while its output weights are learned by gradient-based supervised training. The RBFNN was first proposed by Broomhead and Lowe (1988), and its interpolation and generalization properties were thoroughly investigated in (Lowe, 1989) and (Freeman & Saad, 1995). Since its introduction, the RBFNN has been applied to many problems, such as pattern classification, system identification, nonlinear function approximation, adaptive control, speech recognition, and time-series prediction. In contrast to the well-known multilayer perceptron (MLP) network, the RBF network utilizes a radial construction mechanism. Whereas the MLP is trained by the error back-propagation (BP) algorithm, the RBFNN adopts a typical two-stage training scheme that is substantially faster and helps the solution avoid falling into local optima.

A key issue for the RBFNN is choosing a proper number of hidden nodes. If the number is too small, the generated output vectors may have low accuracy; if it is too large, the network may over-fit the input data, which degrades its global generalization performance. In the conventional RBF training approach, the number of hidden nodes is usually decided according to the statistical properties of the input data, and the center and spread width of each hidden node are then determined by the k-means clustering algorithm (Moody & Darken, 1989). The drawback of this approach is that network performance depends on the pre-selected number of hidden nodes: if an unsuitable number is chosen, the RBFNN may exhibit poor global generalization, slow training, and a large memory requirement. To address this problem, self-growing RBF techniques were proposed in (Karayiannis & Mi, 1997) and (Zheng et al., 1999). However, these techniques rely on predefined parameters and search the solution space only locally, so the approximation can settle on an inaccurate sub-optimal solution.

Evolutionary computation is a global optimization technique whose aim is to improve the ability of each individual to survive. Among such methods, the genetic algorithm (GA) is a parallel search technique that mimics natural genetics and the evolutionary process. In (Bäck et al., 1997), GA was employed to determine the RBFNN structure, so that the optimal number and distribution of RBF hidden nodes could be obtained automatically. A common approach applies GA to search for the optimal network structure among several candidates.
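To make the conventional scheme concrete, the following is a minimal sketch of two-stage RBFNN training, assuming Gaussian basis functions and a regression target. The function names, the use of scikit-learn's KMeans for the clustering stage, and the d_max-based width heuristic are illustrative assumptions, not the cited authors' exact procedure.

```python
import numpy as np
from sklearn.cluster import KMeans

def _design_matrix(X, centers, sigma):
    # phi_j(x) = exp(-||x - c_j||^2 / (2 * sigma^2)) for each sample/center pair.
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def train_rbfnn(X, y, n_hidden):
    # Stage 1 (unsupervised): place the hidden-node centers with k-means.
    centers = KMeans(n_clusters=n_hidden, n_init=10).fit(X).cluster_centers_
    # Common width heuristic: sigma = d_max / sqrt(2 * n_hidden), where
    # d_max is the largest distance between any pair of centers.
    d_max = max(np.linalg.norm(a - b) for a in centers for b in centers)
    sigma = d_max / np.sqrt(2.0 * n_hidden) or 1.0  # guard the n_hidden == 1 case
    # Stage 2 (supervised): solve the output weights by linear least squares,
    # the fast step that spares the RBFNN from BP-style local optima.
    G = _design_matrix(X, centers, sigma)
    w, *_ = np.linalg.lstsq(G, y, rcond=None)
    return centers, sigma, w

def predict_rbfnn(X, centers, sigma, w):
    return _design_matrix(X, centers, sigma) @ w
```

With this two-stage trainer in hand, the remaining question is how to choose n_hidden automatically, which is exactly where the evolutionary structure search discussed above comes in.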

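Because the titled method adapts the structure via particle swarm optimization (Eberhart & Kennedy, 1995), the sketch below runs a standard gbest-style PSO over the hidden-node count. The inertia and acceleration settings (w = 0.7, c1 = c2 = 1.5), the bounds, and the pso_hidden_nodes function itself are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

def pso_hidden_nodes(fitness, lo=2, hi=30, n_particles=10, n_iters=40,
                     w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimize fitness(n_hidden) over integer node counts in [lo, hi]."""
    rng = np.random.default_rng(seed)
    evaluate = lambda xi: fitness(int(np.rint(xi)))  # round position to a node count
    x = rng.uniform(lo, hi, n_particles)             # particle positions
    v = np.zeros(n_particles)                        # particle velocities
    pbest = x.copy()                                 # personal best positions
    pbest_f = np.array([evaluate(xi) for xi in x])
    gbest = pbest[np.argmin(pbest_f)]                # swarm-wide best position
    for _ in range(n_iters):
        r1, r2 = rng.random(n_particles), rng.random(n_particles)
        # Standard velocity update: inertia + cognitive + social terms.
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        f = np.array([evaluate(xi) for xi in x])
        better = f < pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        gbest = pbest[np.argmin(pbest_f)]
    return int(np.rint(gbest))
```

Here fitness would, for instance, train the two-stage network above with the candidate node count and return its validation error, so the swarm settles on a structure that balances accuracy against over-fitting.
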
[1] S. Aiguo, et al. Evolving Gaussian RBF network for nonlinear time series modelling and prediction, 1998.

[2] Nanning Zheng, et al. Self-creating and adaptive learning of RBF networks: merging soft-competition clustering algorithm with network growth technique, 1999, Proceedings of the International Joint Conference on Neural Networks (IJCNN'99).

[3] Nicolaos B. Karayiannis, et al. Growing radial basis neural networks: merging supervised and unsupervised learning with network growth techniques, 1997, IEEE Trans. Neural Networks.

[4] Sheng Chen. Nonlinear time series modelling and prediction using Gaussian RBF networks with enhanced clustering and RLS learning, 1995.

[5] David Saad, et al. Learning and Generalization in Radial Basis Function Networks, 1995, Neural Computation.

[6] Tsung-Ying Sun, et al. Particle Swarm Optimization Incorporated with Disturbance for Improving the Efficiency of Macrocell Overlap Removal and Placement, 2005, IC-AI.

[7] Chan-Cheng Liu, et al. PSO-based learning rate adjustment for blind source separation, 2005, Proceedings of the International Symposium on Intelligent Signal Processing and Communication Systems.

[8] Sheng Chen, et al. Combined genetic algorithm optimization and regularized orthogonal least squares learning for radial basis function networks, 1999, IEEE Trans. Neural Networks.

[9] D. Broomhead, et al. Radial Basis Functions, Multi-Variable Functional Interpolation and Adaptive Networks, 1988.

[10] David S. Broomhead, et al. Multivariable Functional Interpolation and Adaptive Networks, 1988, Complex Systems.

[11] John Moody, et al. Fast Learning in Networks of Locally-Tuned Processing Units, 1989, Neural Computation.

[12] Thomas Bäck, et al. Evolutionary computation: comments on the history and current state, 1997, IEEE Trans. Evol. Comput.

[13] Alan F. Murray, et al. International Joint Conference on Neural Networks, 1993.

[14] Yunfei Bai, et al. Genetic algorithm based self-growing training for RBF neural networks, 2002, Proceedings of the 2002 International Joint Conference on Neural Networks (IJCNN'02).

[15] Russell C. Eberhart, et al. A new optimizer using particle swarm theory, 1995, Proceedings of the Sixth International Symposium on Micro Machine and Human Science (MHS'95).

[16] D. Lowe, et al. Adaptive radial basis function nonlinearities, and the problem of generalisation, 1989.