Ultimate performance of QEM classifiers

Supervised learning of classifiers often resorts to minimizing a quadratic error, even though this criterion is better matched to nonlinear regression problems. It is shown that the mapping built by quadratic error minimization (QEM) tends to output the Bayesian discriminant rule, even with nonuniform losses, provided the desired responses are chosen accordingly. This property is shared, for instance, by the multilayer perceptron (MLP). It is further shown that the ultimate performance of such classifiers can be assessed with finite learning sets by establishing links with kernel density estimators.
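
The central claim, that the QEM solution approximates the conditional mean of the desired responses, so that loss-scaled targets make its sign the minimum-risk Bayes rule, can be checked numerically. The following is a minimal sketch, not the paper's method: it assumes a two-class problem with equal priors, unit-variance Gaussian class conditionals at means ±1, and illustrative loss values; an RBF least-squares fit stands in for the MLP of the text, since any sufficiently flexible family exhibits the same property.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed two-class problem: equal priors, class conditionals
# N(-1, 1) for class 0 and N(+1, 1) for class 1.
n = 20_000
y = rng.integers(0, 2, n)            # class labels
x = rng.normal(2.0 * y - 1.0, 1.0)   # samples from the mixture

# Nonuniform losses (illustrative values):
# L01 = loss of deciding class 0 when the truth is class 1, and vice versa.
L01, L10 = 4.0, 1.0

# Desired responses chosen accordingly: t = +L01 for class 1, -L10 for class 0.
# The QEM solution then approximates E[t|x] = L01*P(1|x) - L10*P(0|x),
# whose sign is exactly the Bayes rule under these losses.
t = np.where(y == 1, L01, -L10).astype(float)

# Flexible mapping fitted by quadratic error minimization:
# ordinary least squares on Gaussian RBF features.
centers = np.linspace(-4.0, 4.0, 25)
phi = lambda u: np.exp(-0.5 * (u[:, None] - centers) ** 2)
w, *_ = np.linalg.lstsq(phi(x), t, rcond=None)

# Compare the zero crossing of the fitted map with the analytic
# Bayes threshold: exp(2x) = L10/L01, i.e. x* = 0.5 * ln(L10/L01).
grid = np.linspace(-3.0, 3.0, 2001)
g = phi(grid) @ w
learned = grid[np.argmin(np.abs(g))]
bayes = 0.5 * np.log(L10 / L01)
print(f"learned threshold {learned:+.3f}, Bayes threshold {bayes:+.3f}")
```

With the assumed losses the two thresholds agree closely (both near -0.69), illustrating that thresholding the quadratic-error minimizer at zero reproduces the nonuniform-loss Bayes decision without ever estimating the posteriors explicitly.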