Theory of networks for learning

Many neural networks are constructed to learn an input-output mapping from a set of examples. This problem is closely related to classical approximation techniques, including regularization theory. Regularization is equivalent to a class of three-layer networks that we call regularization networks or Hyper Basis Functions. The strong theoretical foundation of regularization networks gives us a better understanding of why they work and of how best to choose a specific network and its parameters for a given problem. Classical regularization theory can be extended to improve the quality of learning performed by Hyper Basis Functions: for example, the centers of the basis functions and the norm weights can themselves be optimized. Many of the Radial Basis Functions often used for function interpolation are provably Hyper Basis Functions.
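To make the equivalence concrete, the standard formulation can be sketched as follows (the symbols below are part of the usual setup rather than definitions given in this section: $\lambda$ is the regularization parameter, $P$ the stabilizer, and $G$ its Green's function). Regularization seeks the function $f$ minimizing

\[
H[f] = \sum_{i=1}^{N} \bigl( y_i - f(\mathbf{x}_i) \bigr)^2 + \lambda \, \| P f \|^2 ,
\]

whose minimizer has the form of an expansion over the data points,

\[
f(\mathbf{x}) = \sum_{i=1}^{N} c_i \, G(\mathbf{x}; \mathbf{x}_i) ,
\]

which is exactly a network with one hidden unit per example. For radial stabilizers, $G(\mathbf{x}; \mathbf{x}_i) = G(\| \mathbf{x} - \mathbf{x}_i \|)$, recovering Radial Basis Functions. The Hyper Basis Function extension uses a smaller number of movable centers $\mathbf{t}_\alpha$ and a weighted norm,

\[
f(\mathbf{x}) = \sum_{\alpha=1}^{n} c_\alpha \, G\bigl( \| \mathbf{x} - \mathbf{t}_\alpha \|_W^2 \bigr),
\qquad
\| \mathbf{x} \|_W^2 = \mathbf{x}^\top W^\top W \, \mathbf{x} ,
\]

with the centers $\mathbf{t}_\alpha$ and the matrix $W$ (the norm weights) optimized together with the coefficients $c_\alpha$.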
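The following is a minimal numerical sketch of that last point, not an implementation from the text: a one-dimensional network of Gaussian basis functions whose centers, coefficients, and a single scalar norm weight are all fit by gradient descent. The Gaussian choice, the shared scalar weight in place of a full matrix W, and every name in the code are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: noisy samples of a smooth target function.
x = np.linspace(-3.0, 3.0, 60)[:, None]          # inputs, shape (N, 1)
y = np.sin(2.0 * x[:, 0]) + 0.1 * rng.standard_normal(60)
N = len(y)

n_units = 8
t = np.linspace(-3.0, 3.0, n_units)              # movable centers
c = np.zeros(n_units)                            # expansion coefficients
w = 1.0                                          # scalar norm weight
lr = 0.5

for step in range(3000):
    d = x - t[None, :]                           # differences, shape (N, n)
    G = np.exp(-(w ** 2) * d ** 2)               # Gaussian basis matrix
    r = G @ c - y                                # residuals, shape (N,)

    # Gradients of the loss 0.5 * mean(r**2) w.r.t. c, the centers t, and w.
    grad_c = G.T @ r / N
    grad_t = 2.0 * w ** 2 * c * (r[:, None] * G * d).sum(axis=0) / N
    grad_w = -2.0 * w * (r[:, None] * G * d ** 2 * c[None, :]).sum() / N

    c -= lr * grad_c
    t -= lr * grad_t
    w -= lr * grad_w

print(f"final MSE: {np.mean(r ** 2):.4f}")
```

With a full matrix $W$ in place of the scalar w, the same gradient computation extends to the multivariate weighted norm written above.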