A multilayer nodal link perceptron network with least squares training algorithm

A multilayer perceptron network employing local basis functions is introduced. The basic structure of the network and its function approximation properties are first presented. A recursive least-squares algorithm with covariance resetting for training the weights of the network is then introduced. This algorithm is shown to be exponentially convergent and, in practice, can approximate functions with time-varying characteristics. Furthermore, it is computationally efficient and amenable to parallel processing, hence preserving the massively parallel architecture of the network. Finally, the training algorithm is extended to adjust the weights and the domains of the local basis functions simultaneously, to further improve the approximation accuracy of the network. The convergence properties of the combined training algorithms are rigorously analysed and demonstrated using numerical simulations.
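
To make the training procedure concrete, the following is a minimal sketch of recursive least squares with covariance resetting for a linear-in-parameters model, the setting that applies once the basis functions are fixed and only the output weights are trained. All identifiers (rls_step, maybe_reset, trace_floor, reset_scale) are illustrative, and the particular resetting rule used here, reinflating the covariance matrix to a large multiple of the identity whenever its trace falls below a floor, is one common variant from the adaptive estimation literature, not necessarily the exact rule of the paper.

    import numpy as np

    def rls_step(theta, P, phi, y, lam=1.0):
        # One RLS update for a model y ~ phi @ theta, where phi is the
        # vector of basis-function outputs and theta the output weights.
        Pphi = P @ phi
        k = Pphi / (lam + phi @ Pphi)        # gain vector
        e = y - phi @ theta                  # prediction error
        theta = theta + k * e                # weight update
        P = (P - np.outer(k, Pphi)) / lam    # covariance update (P symmetric)
        return theta, P

    def maybe_reset(P, trace_floor=1e-2, reset_scale=100.0):
        # Covariance resetting: once P has shrunk, the gain vanishes and the
        # estimator stops adapting; resetting P keeps it responsive to
        # time-varying parameters. The trace test is an assumed trigger.
        n = P.shape[0]
        if np.trace(P) < trace_floor:
            return reset_scale * np.eye(n)
        return P

    # Usage: track a slowly drifting weight vector with 5 basis outputs.
    rng = np.random.default_rng(0)
    n = 5
    theta = np.zeros(n)
    P = 100.0 * np.eye(n)
    true_theta = rng.normal(size=n)
    for t in range(2000):
        true_theta += 1e-3 * rng.normal(size=n)    # time-varying target
        phi = rng.normal(size=n)                   # basis-function outputs
        y = phi @ true_theta + 0.01 * rng.normal()
        theta, P = rls_step(theta, P, phi, y)
        P = maybe_reset(P)

Because the basis functions in the network are local, each input excites only a few basis outputs, so updates of this kind decouple across nodes and can be carried out in parallel, which is the property the abstract alludes to.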