SVM maximizing margin in the input space

We propose a new type of support vector machine (SVM) that maximizes the margin in the input space rather than in the feature space. The parameters are initialized by the original SVM and then updated by iteratively solving a quadratic programming problem. The derived algorithm preserves the sparsity of the support vectors, and the original SVM is shown to be a special case of the proposed method. A simple simulation confirms that the algorithm works.
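For context, the initialization stage described above — training an ordinary kernel SVM and reading off its sparse set of support vectors and dual coefficients — can be sketched as follows. This is a minimal illustration using scikit-learn's `SVC` on toy data; the subsequent iterative QP update that refines the input-space margin is specific to the proposed method and is not reproduced here.

```python
# Sketch of the initialization step only: fit a standard kernel SVM whose
# support vectors and dual coefficients would seed the proposed iterative
# input-space-margin refinement (paper-specific; not shown).
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Toy two-class data: two Gaussian blobs in the 2-D input space.
X = np.vstack([rng.normal(-1.0, 0.5, (50, 2)),
               rng.normal(+1.0, 0.5, (50, 2))])
y = np.array([0] * 50 + [1] * 50)

svm = SVC(kernel="rbf", C=1.0, gamma=1.0).fit(X, y)

# Sparsity: only a subset of the 100 training samples become support vectors.
n_sv = svm.support_vectors_.shape[0]
alpha = svm.dual_coef_  # signed dual coefficients used as the initialization
print(n_sv, alpha.shape)
```

The sparsity of this solution is the property the abstract claims is preserved: after the iterative updates, the classifier is still expressed through a small subset of the training points.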
