RBF neural network, basis functions and genetic algorithm

The radial basis function (RBF) network is an efficient function approximator. Theoretical research has focused on the capability of the network to reach an optimal solution; unfortunately, few results concerning its design and training are available. In a specific application, the performance of the network depends dramatically on the number of hidden neurons and on their distribution in the input space. In general, a network trained on a predetermined architecture is either insufficient or over-complicated. In this study, we focus on genetic learning for the RBF network applied to the prediction of chaotic time series. The centers and widths of the hidden-layer basis functions, defined respectively as the barycenter of and the distance between two input patterns, are coded into a chromosome. We show that the basis functions, which are also coded as a parameter of the neurons, provide an additional degree of freedom, resulting in a smaller optimal network. A direct matrix inversion provides the weights between the hidden layer and the output layer and avoids the risk of getting stuck in a local minimum. The performance of a network with Gaussian basis functions is compared with that of a network with genetically determined basis functions on the Mackey-Glass delay differential equation.
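To make the chromosome encoding concrete, the following minimal Python sketch decodes one gene into a neuron's center and width as the abstract describes: the center as the barycenter of two input patterns and the width as the distance between them, plus a code selecting the basis function. The gene layout (two pattern indices and a basis identifier) and the function names are illustrative assumptions, not the paper's exact representation.

```python
import numpy as np

def decode_gene(gene, patterns):
    """Decode one gene of the chromosome into a hidden neuron's parameters.

    gene = (i, j, basis_id): indices of two input patterns plus a code
    selecting that neuron's basis function (an assumed layout).
    """
    i, j, basis_id = gene
    center = (patterns[i] + patterns[j]) / 2.0           # barycenter of the two patterns
    width = np.linalg.norm(patterns[i] - patterns[j])    # distance between the two patterns
    return center, width, basis_id
```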
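The abstract's training scheme, a hidden layer whose output weights are obtained by direct matrix inversion rather than iterative descent, can be sketched as below for the Gaussian case. The pseudo-inverse solves the linear least-squares problem for the output weights in closed form, which is why no local-minimum risk arises for that layer. Function names and shapes are assumptions for illustration.

```python
import numpy as np

def rbf_activations(X, centers, widths):
    """Gaussian basis: phi_j(x) = exp(-||x - c_j||^2 / (2 * sigma_j^2)).

    X: (n_samples, n_inputs); centers: (n_hidden, n_inputs); widths: (n_hidden,).
    """
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2.0 * widths[None, :] ** 2))

def solve_output_weights(Phi, y):
    """Output weights by pseudo-inverse: w = Phi^+ y.

    A linear least-squares solution; unlike gradient-based weight
    training, it cannot get stuck in a local minimum.
    """
    return np.linalg.pinv(Phi) @ y

def rbf_predict(X, centers, widths, w):
    return rbf_activations(X, centers, widths) @ w
```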
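For reference, the Mackey-Glass benchmark series can be generated by integrating the delay differential equation dx/dt = a x(t - tau) / (1 + x(t - tau)^10) - b x(t). The sketch below uses Euler integration with the common benchmark values a = 0.2, b = 0.1, tau = 17; these are standard choices in the literature, not parameters stated in this abstract.

```python
import numpy as np

def mackey_glass(n_steps, tau=17.0, a=0.2, b=0.1, dt=1.0, x0=1.2):
    """Generate a Mackey-Glass series by Euler integration (assumed setup)."""
    history = int(tau / dt)                 # number of delayed samples to keep
    x = np.full(n_steps + history, x0)      # constant initial history
    for t in range(history, n_steps + history - 1):
        delayed = x[t - history]
        x[t + 1] = x[t] + dt * (a * delayed / (1.0 + delayed ** 10) - b * x[t])
    return x[history:]
```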