Regularization networks for approximating multi-valued functions: learning ambiguous input-output mappings from examples

The regularization network (RN) is extended to approximate multi-valued functions, so that a one-to-h mapping, where h denotes the multiplicity of the mapping, can be represented and learned from a finite number of input-output samples without clustering operations on the sample data set. Multi-valued function approximation is useful for learning ambiguous input-output relations from examples. This extension, which we call the multi-valued regularization network (MVRN), is derived from the multi-valued standard regularization theory (MVSRT), an extension of standard regularization theory to multi-valued functions. MVSRT is based on a direct algebraic representation of multi-valued functions. By a simple transformation of the unknown functions, we obtain linear Euler-Lagrange equations; the learning algorithm for the MVRN therefore reduces to solving a linear system. The proposed theory can be specialized and extended to radial basis function (RBF), generalized RBF (GRBF), and HyperBF networks for multi-valued functions.
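The key idea above, representing the h branches of a multi-valued function algebraically so that learning becomes a linear problem, can be illustrated with a minimal sketch. The example below is an assumption-laden toy, not the paper's algorithm: for h = 2, each observed output y at input x is treated as a root of t² − u₁(x)t + u₂(x) = 0, where u₁ = f₁ + f₂ and u₂ = f₁f₂ are single-valued functions modeled by Gaussian RBF expansions (centers, width, and regularization parameter are all illustrative choices). Each sample then yields a constraint linear in the RBF weights, so the fit is a regularized linear system.

```python
import numpy as np

rng = np.random.default_rng(0)

# Ambiguous training data: each x admits two valid outputs, y = +/- sqrt(x),
# and samples come from either branch at random (no clustering is applied).
xs = rng.uniform(0.1, 1.0, 200)
ys = np.sqrt(xs) * rng.choice([-1.0, 1.0], size=xs.shape)

# Gaussian RBF features (centers and width are illustrative assumptions).
centers = np.linspace(0.1, 1.0, 15)
sigma = 0.15

def phi(x):
    x = np.atleast_1d(x)
    return np.exp(-((x[:, None] - centers[None, :]) ** 2) / (2 * sigma**2))

# Algebraic representation for h = 2: y is a root of t^2 - u1(x)*t + u2(x) = 0,
# i.e. y^2 = u1(x)*y - u2(x), which is LINEAR in the weights of u1 and u2.
P = phi(xs)
A = np.hstack([P * ys[:, None], -P])  # columns: weights of u1, then u2
b = ys**2
lam = 1e-6                            # regularization strength (assumed value)
w = np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ b)
w1, w2 = w[:15], w[15:]

def branches(x):
    """Recover both output branches at x as roots of the fitted quadratic."""
    u1 = phi(x) @ w1
    u2 = phi(x) @ w2
    disc = np.sqrt(np.maximum(u1**2 - 4 * u2, 0.0))
    return (u1 - disc) / 2, (u1 + disc) / 2

lo, hi = branches(0.49)
print(lo[0], hi[0])  # approximately -0.7 and 0.7
```

Note that both branches are recovered from mixed samples in a single linear solve: the symmetric functions u₁, u₂ are single-valued even though the mapping itself is not, which is the essence of the algebraic representation described in the abstract.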