The Composite Absolute Penalties family for grouped and hierarchical variable selection

Extracting useful information from high-dimensional data is an important focus of today's statistical research and practice. Penalized loss function minimization has been shown to be effective for this task, both theoretically and empirically. Combining the virtues of regularization and sparsity, the $L_1$-penalized squared error minimization method, the Lasso, has become popular in regression models and beyond. In this paper, we combine different norms, including the $L_1$ norm, to form penalties that incorporate side information into the fitting of a regression or classification model and thereby yield improved estimates. Specifically, we introduce the Composite Absolute Penalties (CAP) family, which can express given grouping and hierarchical relationships between the predictors. CAP penalties are built by defining groups of predictors and combining norm penalties at the across-group and within-group levels. Nonoverlapping groups lead to grouped selection; hierarchical variable selection is achieved by defining groups with particular overlapping patterns. We propose using the BLASSO algorithm and cross-validation to compute CAP estimates in general. For the subfamily of CAP estimates involving only the $L_1$ and $L_{\infty}$ norms, we introduce the iCAP algorithm, which traces the entire regularization path for the grouped selection problem. Within this subfamily, unbiased estimates of the degrees of freedom (df) are derived, so that the regularization parameter can be selected without cross-validation. CAP is shown to improve on the predictive performance of the Lasso in a series of simulated experiments, including cases with $p \gg n$ and possibly mis-specified groupings. When model complexity is properly accounted for, iCAP is seen to be parsimonious in these experiments.
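To make the construction concrete, the penalty can be sketched in the following generic form (a notational sketch based on the description above; the group index sets $G_k$, the across-group exponent $\gamma_0$, and the within-group norm parameters $\gamma_k$ are labels introduced here for illustration). Given groups $G_1, \dots, G_K$ of predictor indices, the CAP penalty attached to a coefficient vector $\beta$ is

$$T(\beta) = \sum_{k=1}^{K} \left\| \beta_{G_k} \right\|_{\gamma_k}^{\gamma_0}, \qquad \left\| \beta_{G_k} \right\|_{\gamma_k} = \Big( \sum_{j \in G_k} |\beta_j|^{\gamma_k} \Big)^{1/\gamma_k}.$$

Taking $\gamma_0 = 1$ gives an $L_1$-type combination across groups, which induces sparsity at the group level, while within-group norms with $\gamma_k > 1$ tie the coefficients inside a group together; the choice $\gamma_k = \infty$ for every group corresponds to the $L_1$/$L_{\infty}$ subfamily handled by iCAP. The short Python sketch below evaluates a penalty of this form on a toy example; the function name cap_penalty and its interface are hypothetical conveniences, not part of the paper or of any library.

```python
import numpy as np

def cap_penalty(beta, groups, gamma_within, gamma_across=1.0):
    """Evaluate T(beta) = sum_k ||beta_{G_k}||_{gamma_within}^{gamma_across}.

    beta         : 1-D coefficient vector.
    groups       : list of index lists; groups may overlap, which is how
                   hierarchical selection patterns are encoded.
    gamma_within : within-group norm (2 for a group-Lasso-like penalty,
                   np.inf for the iCAP-style L_infinity norm).
    gamma_across : across-group exponent gamma_0 (1 gives the sparsity-
                   inducing L_1 combination across groups).
    """
    beta = np.asarray(beta, dtype=float)
    total = 0.0
    for g in groups:
        block_norm = np.linalg.norm(beta[np.asarray(g)], ord=gamma_within)
        total += block_norm ** gamma_across
    return total

# Toy example: two nonoverlapping groups, L_infinity within / L_1 across.
beta = np.array([0.5, -1.2, 0.0, 0.3])
print(cap_penalty(beta, [[0, 1], [2, 3]], gamma_within=np.inf))  # 1.2 + 0.3 = 1.5
```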
