On the learning and convergence of radial basis networks

A convergence result is presented for training radial basis networks using a modified gradient-descent training rule, which is identical to the standard gradient-descent algorithm except that a deadzone around the origin of the error coordinates is incorporated into the update. If the deadzone is large enough to cover the modeling error and the learning rate is selected within a certain range, then the norm of the parameter error converges to a constant, and the output error between the network and the nonlinear function converges into a small ball. Simulations are used to verify the theoretical results.
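A minimal sketch of the training rule described above, under assumed details not specified in the abstract (Gaussian basis functions with a fixed width, fixed centers, a scalar target function, and example values for the learning rate and deadzone size): the weight update is an ordinary gradient-descent step, but it is simply skipped whenever the output error falls inside the deadzone.

```python
import numpy as np

def rbf_features(x, centers, sigma):
    """Gaussian radial basis activations for a scalar input x."""
    return np.exp(-(x - centers) ** 2 / (2 * sigma ** 2))

def train_deadzone(samples, targets, centers, sigma, lr, deadzone, epochs=50):
    """Gradient-descent training of the output weights with a deadzone:
    no parameter update is made while |error| <= deadzone."""
    w = np.zeros(len(centers))
    for _ in range(epochs):
        for x, y in zip(samples, targets):
            phi = rbf_features(x, centers, sigma)
            e = y - w @ phi              # output error
            if abs(e) > deadzone:        # deadzone around the error origin
                w += lr * e * phi        # standard gradient-descent step
    return w

# Illustrative example: approximate f(x) = sin(x) on [0, pi].
xs = np.linspace(0.0, np.pi, 40)
ys = np.sin(xs)
centers = np.linspace(0.0, np.pi, 10)
sigma, lr, deadzone = 0.3, 0.1, 0.05
w = train_deadzone(xs, ys, centers, sigma, lr, deadzone)
errors = ys - np.array([w @ rbf_features(x, centers, sigma) for x in xs])
print(np.max(np.abs(errors)))
```

With a deadzone covering the residual modeling error, the updates stop once every training error has entered the deadzone, so the output error settles into a small ball around zero rather than converging exactly.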