Prior knowledge and the creation of "virtual" examples for RBF networks

This paper considers the problem of how to incorporate prior knowledge into supervised learning techniques. The authors set the problem in the framework of regularization theory and consider the case in which the function to be approximated is known to be radially symmetric. The problem can be solved in two alternative ways: 1) use the invariance as a constraint in the regularization framework to derive a rotation-invariant version of radial basis functions; 2) use the radial symmetry to create new, "virtual" examples from a given data set. The authors show that these two apparently different methods of learning from "hints" (Abu-Mostafa, 1993) lead to exactly the same analytical solution.
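To make the second approach concrete, the following is a minimal sketch (not the paper's own code) of virtual-example creation under the radial-symmetry assumption: since f(Rx) = f(x) for any rotation R, each training pair (x, y) yields additional valid pairs (Rx, y). The function name `make_virtual_examples` and the choice of random Haar-distributed rotations are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def make_virtual_examples(X, y, n_rotations=8, rng=None):
    """Augment a data set with 'virtual' examples, assuming the target
    function is radially symmetric: f(Rx) = f(x) for any rotation R.

    X : (n, d) array of inputs; y : (n,) array of targets.
    Returns the enlarged (X_aug, y_aug).
    """
    rng = np.random.default_rng(rng)
    n, d = X.shape
    X_new, y_new = [X], [y]
    for _ in range(n_rotations):
        # Draw a random orthogonal matrix via QR decomposition of a
        # Gaussian matrix (approximately Haar-distributed after sign fixing).
        Q, R = np.linalg.qr(rng.standard_normal((d, d)))
        Q = Q * np.sign(np.diag(R))   # fix column signs for uniformity
        if np.linalg.det(Q) < 0:      # ensure det(Q) = +1: a rotation, not a reflection
            Q[:, 0] = -Q[:, 0]
        X_new.append(X @ Q.T)         # rotated copies of the inputs
        y_new.append(y)               # same targets, since f(Qx) = f(x)
    return np.vstack(X_new), np.concatenate(y_new)
```

An ordinary RBF network trained on the augmented set (X_aug, y_aug) then absorbs the symmetry implicitly, which is the data-side counterpart of building rotation invariance directly into the basis functions.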