Support Vector Machines and Other Kernel Methods
Support Vector Machines (SVMs) and other kernel methods have emerged as a predominant family of methods in machine learning in the last ten years. The goal of this tutorial is to provide a basic understanding of the ideas behind kernel methods, sufficient to allow practical application of the methods and to provide background for those interested in researching further into the subject. First we will use an intuitive geometric examination of SVM classification to introduce SVMs and understand the principles behind them. We will provide intuitions behind the extensive statistical learning theory underlying the approach without going into mathematical detail. Then, we will use case studies in regression and principal component analysis to show how these basic principles can be applied to make practical, robust, nonlinear versions of other linear inference methods. The power and flexibility of kernel methods come from their ability to easily use different kernel functions. We will investigate how, by changing the kernel function, the same kernel method can be applied both to traditional vector data and to non-vector data such as strings and graphs. We will then examine some of the practical issues in support vector machines, such as parameter selection, algorithms, and software. We will conclude with a discussion of the strengths and limitations of kernel methods.

Outline:
I. What are Support Vector Machines and Kernel Methods?
II. Intuitive guide to SVM Classification
   a. Linear classification
   b. Capacity control
   c. Nonlinear classification
III. Case studies
   a. Regression
   b. PCA
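The kernel-swapping idea described above can be sketched in a few lines. The following is a minimal illustration, not code from the tutorial itself: it uses a dual-form kernel perceptron (a simpler relative of the SVM) so that the data enter only through kernel evaluations, and shows that swapping a linear kernel for a Gaussian (RBF) one lets the same algorithm solve a problem that is not linearly separable. The function names and the `gamma` parameter are choices made for this sketch.

```python
import math

# Kernel functions: swapping the kernel changes the notion of similarity
# without changing the learning algorithm itself.
def linear_kernel(x, y):
    return sum(a * b for a, b in zip(x, y))

def rbf_kernel(x, y, gamma=1.0):
    # Gaussian (RBF) kernel: exp(-gamma * ||x - y||^2)
    return math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(x, y)))

def train_kernel_perceptron(X, ys, kernel, epochs=20):
    # Dual-form perceptron: one coefficient alpha_i per training point,
    # so the data only ever appear inside kernel evaluations.
    alphas = [0.0] * len(X)
    for _ in range(epochs):
        for i, (x, y) in enumerate(zip(X, ys)):
            s = sum(a * yj * kernel(xj, x)
                    for a, yj, xj in zip(alphas, ys, X))
            if y * s <= 0:        # misclassified: strengthen this point
                alphas[i] += 1.0
    return alphas

def predict(X, ys, alphas, kernel, x):
    s = sum(a * yj * kernel(xj, x) for a, yj, xj in zip(alphas, ys, X))
    return 1 if s > 0 else -1

# XOR-style data: not linearly separable, but separable with an RBF kernel.
X = [(0, 0), (0, 1), (1, 0), (1, 1)]
ys = [-1, 1, 1, -1]
alphas = train_kernel_perceptron(X, ys, rbf_kernel)
preds = [predict(X, ys, alphas, rbf_kernel, x) for x in X]
print(preds)  # [-1, 1, 1, -1]: all four XOR points classified correctly
```

A full SVM additionally maximizes the margin and handles noise via slack variables, but the mechanism by which nonlinearity enters, replacing inner products with kernel evaluations, is exactly the one shown here.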