The minimum number of misclassifications in a multi-class classifier is reached when the borders between classes are set according to the Bayes criterion. Unfortunately, this criterion requires knowledge of the probability density function of each class, which is unknown in practical problems. The theory of kernel estimators (Parzen windows) provides a way to estimate these probability densities from a set of data in each class. The computational complexity of these estimators is, however, much too large for most practical applications. The authors therefore propose a neural network that estimates the probability density function underlying a set of data in a sub-optimal way (with performance quite similar to the optimal case), but with a strongly reduced complexity that makes the method useful in practical situations. The algorithm is based on a "competitive learning" vector quantization of the data and on the choice of an optimal width for the kernels. The authors study the influence of the kernel width on the classification error rate and provide examples of the use of the algorithm on real-world data.
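As a rough illustration of the approach described above, the following Python sketch places radial Gaussian kernels on centroids obtained by competitive-learning vector quantization and classifies a point by the Bayes rule applied to the estimated class densities. All function names and parameter values here (number of centroids, epochs, the fixed width sigma) are assumptions made for illustration; in particular, the paper selects the kernel width optimally rather than fixing it by hand.

    # Illustrative sketch only: names, centroid counts, and the fixed kernel
    # width sigma are assumptions, not the authors' implementation.
    import numpy as np

    def competitive_learning_vq(X, n_centroids, n_epochs=20, lr0=0.1, seed=0):
        """Plain competitive learning: move the winning centroid toward each sample."""
        rng = np.random.default_rng(seed)
        centroids = X[rng.choice(len(X), size=n_centroids, replace=False)].copy()
        for epoch in range(n_epochs):
            lr = lr0 * (1.0 - epoch / n_epochs)  # linearly decaying learning rate
            for x in X[rng.permutation(len(X))]:
                winner = np.argmin(np.sum((centroids - x) ** 2, axis=1))
                centroids[winner] += lr * (x - centroids[winner])
        return centroids

    def kernel_density(x, centroids, sigma):
        """Parzen-style estimate: mean of radial Gaussian kernels at the centroids."""
        d = x.shape[-1]
        sq_dist = np.sum((centroids - x) ** 2, axis=1)
        norm = (2.0 * np.pi * sigma ** 2) ** (d / 2)
        return np.mean(np.exp(-sq_dist / (2.0 * sigma ** 2))) / norm

    def classify(x, class_centroids, priors, sigma):
        """Bayes rule on the estimated densities: argmax of prior * density."""
        scores = [p * kernel_density(x, c, sigma)
                  for c, p in zip(class_centroids, priors)]
        return int(np.argmax(scores))

    # Toy usage: two 2-D Gaussian classes, 20 centroids each, width 0.5.
    rng = np.random.default_rng(1)
    X0 = rng.normal(-1.0, 1.0, size=(500, 2))
    X1 = rng.normal(+1.0, 1.0, size=(500, 2))
    centroids = [competitive_learning_vq(X0, 20), competitive_learning_vq(X1, 20)]
    print(classify(np.array([0.8, 0.8]), centroids, priors=[0.5, 0.5], sigma=0.5))

Replacing one kernel per training sample with one kernel per centroid is what reduces the complexity: the density is evaluated over a few dozen centroids rather than thousands of samples, at the cost of a sub-optimal estimate.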