Leave-One-Out Support Vector Machines

We present a new learning algorithm for pattern recognition inspired by a recent upper bound on leave-one-out error [Jaakkola and Haussler, 1999] proved for Support Vector Machines (SVMs) [Vapnik, 1995; 1998]. The new approach directly minimizes the expression given by the bound in an attempt to minimize leave-one-out error. This yields a convex optimization problem that constructs a sparse linear classifier in feature space using the kernel technique. As such, the algorithm possesses many of the same properties as SVMs. Its main novelty is that, apart from the choice of kernel, it is parameterless: the number of training errors is selected inherently by the algorithm rather than through an extra free parameter, as in SVMs. First experiments on benchmark datasets from the UCI repository show results comparable to those of SVMs whose parameter has been tuned to its best value.
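The abstract does not state the optimization problem explicitly, but the Jaakkola-Haussler result it builds on bounds the leave-one-out error of an SVM (without threshold) by the fraction of training points for which y_i * sum_{j != i} alpha_j y_j K(x_i, x_j) <= 0. A natural convex surrogate for that count is a hinge loss on each such leave-one-out margin, which suggests a linear program over the expansion coefficients alpha with no C-style trade-off parameter. The Python sketch below implements that reading; the exact constraint set, the RBF kernel, and the names fit_loo_svm, predict, and rbf_kernel are illustrative assumptions, not the paper's verified formulation.

```python
import numpy as np
from scipy.optimize import linprog

def rbf_kernel(X1, X2, gamma=1.0):
    """Gaussian RBF kernel matrix between the rows of X1 and X2 (assumed kernel choice)."""
    sq_dists = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * sq_dists)

def fit_loo_svm(X, y, gamma=1.0):
    """Sketch of a leave-one-out-style kernel classifier as a linear program.

    Variables z = [alpha_1..alpha_n, xi_1..xi_n].  We minimize sum(xi) subject to
        y_i * sum_{j != i} alpha_j y_j K(x_i, x_j) >= 1 - xi_i,
        alpha >= 0,  xi >= 0.
    Excluding j = i is what makes each constraint a leave-one-out criterion,
    and there is no C hyperparameter to tune (assumed formulation).
    """
    n = len(y)
    K = rbf_kernel(X, X, gamma)
    K_off = K * (1.0 - np.eye(n))             # zero the diagonal: the j != i sum

    c = np.concatenate([np.zeros(n), np.ones(n)])   # objective: sum of slacks xi
    # Rewrite each margin constraint in linprog's A_ub @ z <= b_ub form.
    A_ub = np.hstack([-(y[:, None] * y[None, :]) * K_off, -np.eye(n)])
    b_ub = -np.ones(n)
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(0, None)] * (2 * n), method="highs")
    if not res.success:
        raise RuntimeError(f"LP solver failed: {res.message}")
    return res.x[:n]                           # the expansion coefficients alpha

def predict(X_train, y, alpha, X_test, gamma=1.0):
    """Classify with the full expansion f(x) = sum_j alpha_j y_j K(x, x_j)."""
    K = rbf_kernel(X_test, X_train, gamma)
    return np.sign(K @ (alpha * y))

if __name__ == "__main__":
    # Toy two-Gaussian problem, just to exercise the sketch.
    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(-1.0, 0.6, (20, 2)),
                   rng.normal(+1.0, 0.6, (20, 2))])
    y = np.concatenate([-np.ones(20), np.ones(20)])
    alpha = fit_loo_svm(X, y, gamma=0.5)
    print("nonzero coefficients:", int((alpha > 1e-6).sum()), "of", len(alpha))
    print("training accuracy:", (predict(X, y, alpha, X, gamma=0.5) == y).mean())
```

Because the objective and constraints are linear, any LP solver applies, and optimal solutions sit at vertices of the feasible region, one reason such formulations tend to yield sparse coefficient vectors, consistent with the abstract's claim of a sparse classifier.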

[1]  Bernhard Schölkopf. Support Vector Learning. PhD thesis, 1997.

[2]  Tommi S. Jaakkola and David Haussler. Probabilistic kernel regression models. In Proceedings of AISTATS, 1999.

[3]  Vladimir N. Vapnik. The Nature of Statistical Learning Theory. Springer, 1995.

[4]  Vladimir N. Vapnik. Statistical Learning Theory. Wiley, 1998.
