PAC-Bayesian Supervised Classification: The Thermodynamics of Statistical Learning

This monograph deals with adaptive supervised classification, using tools borrowed from statistical mechanics and information theory, stemming from the PAC-Bayesian approach pioneered by David McAllester and applied to a conception of statistical learning theory forged by Vladimir Vapnik. Using convex analysis on the set of posterior probability measures, we show how to obtain local measures of the complexity of the classification model, involving the relative entropy of posterior distributions with respect to Gibbs posterior measures. We then discuss relative bounds, which compare the generalization error of two classification rules, and show how the margin assumption of Mammen and Tsybakov can be replaced with some empirical measure of the covariance structure of the classification model. We show how to associate to any posterior distribution an effective temperature relating it to the Gibbs prior distribution with the same level of expected error rate, and how to estimate this effective temperature from the data, resulting in an estimator whose expected error rate converges at the best possible power of the sample size, adaptively under any margin and parametric complexity assumptions. We describe and study an alternative selection scheme based on relative bounds between estimators, and present a two-step localization technique which can handle the selection of a parametric model from a family of such models. We show how to extend systematically all the results obtained in the inductive setting to transductive learning, and use this to improve Vapnik's generalization bounds, extending them to the case where the sample is made of independent, non-identically distributed pairs of patterns and labels. Finally, we review briefly the construction of support vector machines and show how to derive generalization bounds for them, measuring the complexity either through the number of support vectors or through the value of the transductive or inductive margin.
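
For orientation, a standard PAC-Bayesian deviation bound of the kind pioneered by McAllester, together with the Gibbs posterior measure that plays the role of a canonical ensemble in the thermodynamic analogy, can be sketched as follows. These are generic illustrations in the usual notation, not the exact statements proved in the monograph: writing $R$ for the expected error rate, $r$ for the empirical error rate on a sample of size $n$, $\pi$ for the prior and $\rho$ for the posterior, with probability at least $1-\delta$, simultaneously for all posteriors $\rho$,
\[
\mathbb{E}_{\theta \sim \rho}\bigl[R(\theta)\bigr] \;\le\; \mathbb{E}_{\theta \sim \rho}\bigl[r(\theta)\bigr]
\;+\; \sqrt{\frac{\mathrm{KL}(\rho \,\|\, \pi) + \ln\frac{2\sqrt{n}}{\delta}}{2n}},
\]
while the Gibbs posterior at inverse temperature $\beta$ is defined by
\[
\pi_{\exp(-\beta r)}(d\theta) \;=\; \frac{\exp\{-\beta\, r(\theta)\}\, \pi(d\theta)}{\int \exp\{-\beta\, r(\theta')\}\, \pi(d\theta')},
\]
so that the effective temperature attached to a posterior is the value of $1/\beta$ at which the corresponding Gibbs measure reaches the same level of expected error rate.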

[1] Frans M. J. Willems et al., The context-tree weighting method: basic properties, IEEE Trans. Inf. Theory, 1995.

[2] Neri Merhav et al., Hierarchical universal coding, IEEE Trans. Inf. Theory, 1996.

[3] Frans M. J. Willems et al., Context weighting for general finite-context sources, IEEE Trans. Inf. Theory, 1996.

[4] P. Massart et al., From model selection to adaptive estimation, 1997.

[5] Noga Alon et al., Scale-sensitive dimensions, uniform convergence, and learnability, J. ACM, 1997.

[6] John Shawe-Taylor et al., Structural Risk Minimization Over Data-Dependent Hierarchies, IEEE Trans. Inf. Theory, 1998.

[7] Vladimir Vapnik et al., Statistical learning theory, 1998.

[8] M. Habib, Probabilistic methods for algorithmic discrete mathematics, 1998.

[9] Yuhong Yang et al., Information-theoretic determination of minimax rates of convergence, 1999.

[10] David A. McAllester, PAC-Bayesian model averaging, COLT '99, 1999.

[11] E. Mammen et al., Smooth Discrimination Analysis, 1999.

[12] S. Geer, Applications of empirical process theory, 2000.

[13] Olivier Catoni, Data compression and adaptive histograms, 2002.

[14] Nello Cristianini et al., An Introduction to Support Vector Machines and Other Kernel-based Learning Methods, 2000.

[15] O. Catoni, Laplace transform estimates and deviation inequalities, 2001.

[16] Jean-Philippe Vert et al., Adaptive context trees and text clustering, IEEE Trans. Inf. Theory, 2001.

[17] Jean-Philippe Vert, Text Categorization Using Adaptive Context Trees, CICLing, 2001.

[18] John Langford et al., An Improved Predictive Accuracy Bound for Averaging Classifiers, ICML, 2001.

[19] Matthias W. Seeger et al., PAC-Bayesian Generalisation Error Bounds for Gaussian Process Classification, J. Mach. Learn. Res., 2003.

[20] Nello Cristianini et al., On the generalization of soft margin algorithms, IEEE Trans. Inf. Theory, 2002.

[21] Manfred K. Warmuth et al., Relating Data Compression and Learnability, 2003.

[22] A. Tsybakov et al., Optimal aggregation of classifiers in statistical learning, 2003.

[23] Eric R. Ziegel et al., The Elements of Statistical Learning, Technometrics, 2003.

[24] Olivier Catoni et al., Statistical learning theory and stochastic optimization, 2004.

[25] David A. McAllester, Some PAC-Bayesian Theorems, COLT '98, 1998.

[26] Jean-Yves Audibert, Aggregated estimators and empirical complexity for least square regression, 2004.

[27] John Langford et al., Computable Shell Decomposition Bounds, J. Mach. Learn. Res., 2000.

[28] S. Geer et al., Square root penalty: Adaptation to the margin in classification and in edge estimation, 2005, math/0507422.

[29] Tong Zhang, From ε-entropy to KL-entropy: Analysis of minimum information complexity density estimation, 2006, math/0702653.

[30] Tong Zhang et al., Information-theoretic upper and lower bounds for statistical estimation, IEEE Trans. Inf. Theory, 2006.