Massachusetts Institute of Technology, Department of Economics Working Paper Series
Least Squares after Model Selection in High-dimensional Sparse Models
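The method named in the title, often called post-Lasso, runs in two steps: use the Lasso to select a sparse set of regressors, then refit by ordinary least squares on that selected set to remove the shrinkage bias on the retained coefficients. A minimal sketch of this idea, using scikit-learn's Lasso and a NumPy least-squares refit (the penalty level `alpha` and the simulated design below are illustrative choices, not the paper's data-driven penalty):

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n, p, s = 100, 50, 3                       # n observations, p regressors, s true nonzeros
X = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[:s] = 2.0                             # sparse true coefficient vector
y = X @ beta + 0.5 * rng.standard_normal(n)

# Step 1: Lasso selection (alpha is an illustrative, fixed penalty level).
lasso = Lasso(alpha=0.1).fit(X, y)
support = np.flatnonzero(lasso.coef_)      # indices of selected regressors

# Step 2: ordinary least squares restricted to the selected model,
# which undoes the Lasso's shrinkage on the retained coefficients.
beta_post = np.zeros(p)
beta_post[support] = np.linalg.lstsq(X[:, support], y, rcond=None)[0]
```

With a strong signal and moderate noise, the refitted coefficients on the true support are close to their true values, while coefficients outside the selected set stay exactly zero.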
[1] S. van de Geer et al., Oracle Inequalities and Optimal Inference under Group Sparsity, 2010, arXiv:1007.1771.
[2] A. Tsybakov et al., Exponential Screening and Optimal Rates of Sparse Estimation, 2010, arXiv:1003.2654.
[3] A. Belloni et al., ℓ1-Penalized Quantile Regression in High Dimensional Sparse Models, 2009, arXiv:0904.2931.
[4] M. Pontil et al., Taking Advantage of Sparsity in Multi-Task Learning, 2009, COLT.
[5] V. Koltchinskii, Sparsity in Penalized Empirical Risk Minimization, 2009.
[6] A. Tsybakov et al., Sparse Recovery under Matrix Uncertainty, 2008, arXiv:0812.2818.
[7] A. B. Tsybakov, Introduction to Nonparametric Estimation, 2008, Springer Series in Statistics.
[8] M. Rudelson et al., On Sparse Reconstruction from Fourier and Gaussian Measurements, 2008.
[9] C.-H. Zhang et al., The Sparsity and Bias of the Lasso Selection in High-Dimensional Linear Regression, 2008, arXiv:0808.0967.
[10] N. Meinshausen et al., Lasso-Type Recovery of Sparse Representations for High-Dimensional Data, 2008, arXiv:0806.0145.
[11] S. van de Geer, High-Dimensional Generalized Linear Models and the Lasso, 2008, arXiv:0804.0703.
[12] K. Lounici, Sup-Norm Convergence Rate and Sign Concentration Property of Lasso and Dantzig Estimators, 2008, arXiv:0801.4610.
[13] P. Bickel et al., Simultaneous Analysis of Lasso and Dantzig Selector, 2008, arXiv:0801.1095.
[14] A. Tsybakov et al., Aggregation for Gaussian Regression, 2007, arXiv:0710.3654.
[15] A. Tsybakov et al., Sparsity Oracle Inequalities for the Lasso, 2007, arXiv:0705.3308.
[16] S. B. Fotopoulos et al., All of Nonparametric Statistics, 2007, Technometrics.
[17] J. Fan et al., Sure Independence Screening for Ultrahigh Dimensional Feature Space, 2006, arXiv:math/0612857.
[18] P. Zhao et al., On Model Selection Consistency of Lasso, 2006, J. Mach. Learn. Res.
[19] F. Bunea et al., Aggregation and Sparsity via ℓ1 Penalized Least Squares, 2006.
[20] D. Donoho, For Most Large Underdetermined Systems of Linear Equations the Minimal ℓ1-Norm Solution Is Also the Sparsest Solution, 2006.
[21] M. Rudelson et al., Lp Moments of Random Vectors via Majorizing Measures, 2005, arXiv:math/0507023.
[22] E. Candès et al., The Dantzig Selector: Statistical Estimation When p Is Much Larger Than n, 2005, arXiv:math/0506081.
[23] D. Achlioptas, Database-Friendly Random Projections: Johnson-Lindenstrauss with Binary Coins, 2003, J. Comput. Syst. Sci.
[24] S. R. Jammalamadaka et al., Empirical Processes in M-Estimation, 2001.
[25] L. Qian, Nonparametric Curve Estimation: Methods, Theory, and Applications, 1999, Technometrics.
[26] I. Johnstone et al., Ideal Spatial Adaptation by Wavelet Shrinkage, 1994.
[27] M. Talagrand et al., Probability in Banach Spaces: Isoperimetry and Processes, 1991.
[28] S. S. Mukherjee, Weak Convergence and Empirical Processes, 2019.
[29] M. Elad et al., Stable Recovery of Sparse Overcomplete Representations in the Presence of Noise, 2006, IEEE Transactions on Information Theory.
[30] R. Tibshirani, Regression Shrinkage and Selection via the Lasso, 1996.