A Geometrical Method to Improve Performance of the Support Vector Machine

The performance of a support vector machine (SVM) largely depends on the kernel function used. This letter investigates a geometrical method to optimize the kernel function. The method is a modification of the one proposed by S. Amari and S. Wu: it uses the prior knowledge obtained in a primary training step to conformally rescale the kernel function, so that the separation between the two classes of data is enlarged. As a result, the new algorithm works efficiently and overcomes the susceptibility of the original method.
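The conformal rescaling underlying Amari and Wu's approach replaces the kernel K(x, x') with K~(x, x') = c(x) c(x') K(x, x'), where the factor c(x) is built from the support vectors found in the primary training step so that the induced metric is magnified near the class boundary. The sketch below illustrates this two-step scheme in Python with scikit-learn; the particular conformal factor c(x) = sum_i exp(-||x - x_i||^2 / (2 tau^2)) over support vectors x_i, the toy data set, and the parameters gamma, tau, and C are illustrative assumptions, not the specific modification introduced in this letter.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import SVC

def rbf_kernel(X, Y, gamma=1.0):
    # Gram matrix of the primary RBF kernel K(x, y) = exp(-gamma * ||x - y||^2)
    d2 = np.sum(X**2, axis=1)[:, None] + np.sum(Y**2, axis=1)[None, :] - 2.0 * X @ Y.T
    return np.exp(-gamma * np.maximum(d2, 0.0))

def conformal_factor(X, sv, tau=1.0):
    # c(x) = sum_i exp(-||x - sv_i||^2 / (2 tau^2)); large near the support vectors,
    # i.e. near the region of the primary decision boundary (assumed illustrative form)
    d2 = np.sum(X**2, axis=1)[:, None] + np.sum(sv**2, axis=1)[None, :] - 2.0 * X @ sv.T
    return np.exp(-np.maximum(d2, 0.0) / (2.0 * tau**2)).sum(axis=1)

# toy two-class data, a stand-in for the experiments discussed in the letter
X, y = make_classification(n_samples=300, n_features=2, n_redundant=0, random_state=0)

# primary step: train an ordinary RBF-kernel SVM to locate the support vectors
primary = SVC(kernel="rbf", gamma=1.0, C=1.0).fit(X, y)
sv = primary.support_vectors_

# secondary step: retrain with the conformally rescaled kernel
# K~(x, x') = c(x) c(x') K(x, x')
def rescaled_kernel(A, B, gamma=1.0, tau=1.0):
    cA = conformal_factor(A, sv, tau)
    cB = conformal_factor(B, sv, tau)
    return cA[:, None] * rbf_kernel(A, B, gamma) * cB[None, :]

secondary = SVC(kernel=rescaled_kernel, C=1.0).fit(X, y)
print("primary accuracy:  ", primary.score(X, y))
print("secondary accuracy:", secondary.score(X, y))
```

With a suitable tau, the rescaled kernel magnifies the Riemannian metric induced by the kernel in the neighbourhood of the boundary found in the primary step, which is the geometrical mechanism the letter builds on.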

References

[1] Nello Cristianini et al., An Introduction to Support Vector Machines, 2000.

[2] Alexander J. Smola et al., Learning with Kernels, 1998.

[3] Zoubin Ghahramani et al., Combining Active Learning and Semi-Supervised Learning Using Gaussian Fields and Harmonic Functions, ICML, 2003.

[4] Patrick Siarry et al., Tabu Search Applied to Global Optimization, Eur. J. Oper. Res., 2000.

[5] Bernhard Schölkopf et al., Learning with Local and Global Consistency, NIPS, 2003.

[6] Marco Gori et al., Optimal Learning in Artificial Neural Networks: A Review of Theoretical Results, Neurocomputing, 1996.

[7] Geoffrey E. Hinton et al., Learning Internal Representations by Error Propagation, 1986.

[8] Si Wu et al., Improving Support Vector Machine Classifiers by Modifying Kernel Functions, Neural Networks, 1999.

[9] Roberto Battiti et al., Training Neural Nets with the Reactive Tabu Search, IEEE Trans. Neural Networks, 1995.

[10] Harald Niederreiter et al., Implementation and Tests of Low-Discrepancy Sequences, ACM Trans. Model. Comput. Simul., 1992.

[11] Bernhard Schölkopf et al., Prior Knowledge in Support Vector Kernels, NIPS, 1997.

[12] Chee Kheong Siew et al., Real-Time Learning Capability of Neural Networks, IEEE Trans. Neural Networks, 2006.

[13] Bernhard Schölkopf et al., The Connection Between Regularization Operators and Support Vector Kernels, Neural Networks, 1998.

[14] Dustin Boswell et al., Introduction to Support Vector Machines, 2002.

[15] I. Sobol, On the Systematic Search in a Hypercube, 1979.

[16] Tung-Kuan Liu et al., Tuning the Structure and Parameters of a Neural Network by Using Hybrid Taguchi-Genetic Algorithm, IEEE Trans. Neural Networks, 2006.

[17] Si Wu et al., Conformal Transformation of Kernel Functions: A Data-Dependent Way to Improve Support Vector Machine Classifiers, Neural Processing Letters, 2002.

[18] José Neves et al., Evolutionary Neural Network Learning, EPIA, 2003.

[19] T. Poggio et al., The Mathematics of Learning: Dealing with Data, International Conference on Neural Networks and Brain, 2005.

[20] Christopher J. C. Burges et al., Geometry and Invariance in Kernel Based Methods, 1999.

[21] John A. Nelder et al., A Simplex Method for Function Minimization, Comput. J., 1965.

[22] Nello Cristianini et al., Latent Semantic Kernels, Journal of Intelligent Information Systems, 2001.

[23] Vladimir N. Vapnik et al., The Nature of Statistical Learning Theory, Statistics for Engineering and Information Science, 2000.

[24] Ping Shum et al., A Staged Continuous Tabu Search Algorithm for the Global Optimization and Its Applications to the Design of Fiber Bragg Gratings, Comput. Optim. Appl., 2005.

[25] Federico Girosi et al., An Equivalence Between Sparse Approximation and Support Vector Machines, Neural Computation, 1998.