Pattern classification as an ill-posed, inverse problem: a regularization approach

Pattern classification can be viewed as an ill-posed, inverse problem to which the method of regularization can be applied. This viewpoint provides a proper theoretical framework for applying radial basis function (RBF) networks to pattern classification, with strong links to the classical kernel regression estimator (KRE)-based classifiers that estimate the underlying posterior class densities. Assuming that the training patterns are labeled with binary-valued vectors indicating their class membership, a regularized solution can be designed so that each resulting network output (one per class) can be interpreted as a nonparametric estimator of the corresponding posterior (i.e., conditional) class probability. These RBF networks generalize the classical KREs, e.g., the Parzen window estimators (PWEs), which can therefore be recovered as a particular limiting case. The authors describe analytically how constraining the classifier network coefficients to be positive during the solution alters the nature of the original regularization problem, and demonstrate experimentally the beneficial effect that such a constraint has on classifier complexity.
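
To make the construction concrete, the following is a minimal sketch of a regularized RBF classifier of the kind described, assuming a Gaussian kernel and a squared-error data term with ridge-style regularization; the function and parameter names (rbf_regularized_classifier, lam, sigma) are illustrative and not taken from the paper.

```python
import numpy as np

def rbf_regularized_classifier(X, Y, lam, sigma):
    """Fit a regularized RBF network with one kernel per training pattern.

    X : (n, d) training patterns; Y : (n, K) binary class-indicator vectors.
    The regularized least-squares coefficients solve (G + lam*I) C = Y.
    """
    sq_dists = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
    G = np.exp(-sq_dists / (2.0 * sigma ** 2))        # Gaussian Gram matrix
    C = np.linalg.solve(G + lam * np.eye(len(X)), Y)  # regularized solution

    def predict(x):
        g = np.exp(-((X - x) ** 2).sum(axis=-1) / (2.0 * sigma ** 2))
        return g @ C  # one output per class, approximating the posterior
    return predict

# Usage: two-class toy data with one-hot labels.
X = np.random.randn(200, 2)
Y = np.eye(2)[(X[:, 0] > 0).astype(int)]
posterior = rbf_regularized_classifier(X, Y, lam=0.5, sigma=1.0)
print(posterior(np.array([1.0, 0.0])))
```

As lam grows large, C approaches Y / lam, so each output becomes proportional to sum_i y_ik G(x, x_i), i.e., a Parzen-window estimate, consistent with the limiting case noted above. The paper's positivity constraint on the coefficients would replace the unconstrained linear solve with a nonnegative least-squares step (e.g., scipy.optimize.nnls applied per output column), which is what alters the nature of the regularization problem.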