Empirical measure of multiclass generalization performance: the K-winner machine case

Combining the K-winner machine (KWM) model with empirical measurements of a classifier's Vapnik-Chervonenkis (VC) dimension gives two major results. First, analytical derivations refine the theory characterizing the generalization performance of binary classifiers. Second, a straightforward extension of the theoretical framework yields bounds on the generalization error for multiclass problems.
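For orientation, the textbook Vapnik bound illustrates the general shape of such results (a standard form assuming VC-dimension h, sample size n, and confidence 1 - \delta; the paper's refined bound, built on empirically measured VC-dimension, differs in its details):

    R(f) \le \hat{R}_n(f) + \sqrt{\frac{h\left(\ln\frac{2n}{h} + 1\right) + \ln\frac{4}{\delta}}{n}}

A union bound over the c binary subproblems of a one-classifier-per-class reduction then replaces \ln(4/\delta) with \ln(4c/\delta), which is one plausible route to the kind of multiclass extension the abstract describes, not necessarily the paper's own derivation.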
