Linear regression through PAC-Bayesian truncation

We consider the problem of predicting as well as the best linear combination of d given functions in least squares regression, under L^\infty constraints on the linear combination. When the input distribution is known, there already exists an algorithm whose expected excess risk is of order d/n, where n is the size of the training data. Without this strong assumption, standard results often contain a multiplicative log(n) factor and involved constants depending on the conditioning of the Gram matrix of the covariates, on kurtosis coefficients, or on geometric quantities characterizing the relation between L^2 and L^\infty balls, and they require additional assumptions such as exponential moments of the output. This work provides a PAC-Bayesian shrinkage procedure with a simple excess risk bound of order d/n that holds both in expectation and in deviations, under various assumptions. The surprising common feature of these results is their simplicity and the absence of any exponential moment condition on the output distribution, even though the bounds hold with exponential deviations. The risk bounds are obtained through a PAC-Bayesian analysis of truncated differences of losses. We also show that these results can be generalized to other strongly convex loss functions.
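To make the truncation idea concrete, the following Python sketch estimates the mean difference of squared losses between two candidate linear predictors after clipping each term. The clipping function, the scale lam, and the shrunken competitor are illustrative assumptions, not the estimator analyzed in the paper; the heavy-tailed Student noise only illustrates a setting with no exponential moments on the output.

import numpy as np

# Illustrative sketch only: a truncated (clipped) estimate of the mean
# difference of squared losses between two candidate linear predictors.
# The clipping and the scale `lam` are assumptions made for this example,
# not the paper's exact influence function.
def truncated_loss_difference(X, y, theta1, theta2, lam=1.0):
    """Robust estimate of E[(y - <x, theta1>)^2 - (y - <x, theta2>)^2]."""
    r1 = y - X @ theta1                         # residuals of the first predictor
    r2 = y - X @ theta2                         # residuals of the second predictor
    diff = r1 ** 2 - r2 ** 2                    # per-sample difference of squared losses
    truncated = np.clip(diff / lam, -1.0, 1.0)  # bound each term before averaging
    return lam * np.mean(truncated)

# Toy data with heavy-tailed noise (no exponential moments on the output).
rng = np.random.default_rng(0)
n, d = 200, 5
X = rng.normal(size=(n, d))
theta_star = rng.normal(size=d)
y = X @ theta_star + rng.standard_t(df=2, size=n)

theta_ols = np.linalg.lstsq(X, y, rcond=None)[0]   # ordinary least squares
theta_shrunk = 0.9 * theta_ols                     # a hypothetical shrunken competitor
print(truncated_loss_difference(X, y, theta_ols, theta_shrunk))

A negative value suggests that the first predictor has smaller risk than the second; the clipping keeps individual heavy-tailed terms from dominating the empirical average, which is the role truncation plays in the analysis.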
