Editorial: Support Vector Machines for Computer Vision and Pattern Recognition

With their introduction in 1995, Support Vector Machines (SVMs) marked the beginning of a new era in the learning-from-examples paradigm. Rooted in the Statistical Learning Theory developed by Vladimir Vapnik at AT&T, SVMs quickly gained attention from the pattern recognition community due to a number of theoretical and computational merits. These include, for example, the simple geometrical interpretation of the margin, the uniqueness of the solution, the statistical robustness of the loss function, the modularity of the kernel function, and overfitting control through the choice of a single regularization parameter. Like all truly good and far-reaching ideas, SVMs have raised a number of interesting problems for both theoreticians and practitioners: new approaches to Statistical Learning Theory are under development, and new, more efficient methods for training SVMs on large numbers of examples are being studied.

Being interested in the development of trainable systems ourselves, we organized an international workshop as a satellite event of the 16th International Conference on Pattern Recognition and decided to publish this special issue, emphasizing the practical impact and relevance of SVMs for computer vision and pattern recognition. The contributions to this special issue are extended versions of a selection of papers presented at the First International Workshop on Pattern Recognition with Support Vector Machines, SVM2002, held in Niagara Falls, Canada, in August 2002. SVM2002 was organized by the Center for Artificial Vision Research at Korea University and by the Department of Computer and Information Science at the University of Genova. By March 2002, a total of 57 full papers had been submitted from 21 countries.
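The merits listed above can be made concrete with a minimal sketch of the SVM primal problem: minimizing (1/2)||w||² + C·Σᵢ max(0, 1 − yᵢ(w·xᵢ + b)) by subgradient descent, where the hinge loss supplies the margin and a single parameter C trades margin width against training errors. This is an illustrative NumPy toy on synthetic data, not any of the training methods discussed in this issue; the learning rate, iteration count, and data layout are arbitrary choices for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
# Two linearly separable 2-D clusters, labeled -1 and +1.
X = np.vstack([rng.normal(-2.0, 0.5, (50, 2)), rng.normal(2.0, 0.5, (50, 2))])
y = np.hstack([-np.ones(50), np.ones(50)])

# Subgradient descent on the primal SVM objective:
#   (1/2)||w||^2 + C * sum_i max(0, 1 - y_i (w.x_i + b))
C = 1.0          # single regularization parameter controlling overfitting
lr = 0.01        # step size (arbitrary for this toy example)
w, b = np.zeros(2), 0.0
for _ in range(2000):
    margins = y * (X @ w + b)
    mask = margins < 1.0                  # points inside or violating the margin
    grad_w = w - C * (y[mask, None] * X[mask]).sum(axis=0)
    grad_b = -C * y[mask].sum()
    w -= lr * grad_w
    b -= lr * grad_b

pred = np.sign(X @ w + b)
print("training accuracy:", (pred == y).mean())
```

Swapping the linear decision function for one expressed through a kernel is exactly the modularity noted above: the same margin-based objective applies, with inner products replaced by kernel evaluations.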