A unified framework for high-dimensional analysis of $M$-estimators with decomposable regularizers
Sahand N. Negahban | Pradeep Ravikumar | Martin J. Wainwright | Bin Yu
[1] E. Wigner. Characteristic Vectors of Bordered Matrices with Infinite Dimensions I , 1955 .
[2] L. Pastur. On the spectrum of random matrices , 1972 .
[3] S. Portnoy. Asymptotic Behavior of $M$-Estimators of $p$ Regression Parameters when $p^2/n$ is Large. I. Consistency , 1984 .
[4] S. Portnoy. Asymptotic Behavior of $M$-Estimators of $p$ Regression Parameters when $p^2/n$ is Large. II. Normal Approximation , 1985 .
[5] M. Talagrand,et al. Probability in Banach Spaces: Isoperimetry and Processes , 1991 .
[6] V. Girko. Statistical analysis of observations of increasing dimension , 1995 .
[7] R. Tibshirani. Regression Shrinkage and Selection via the Lasso , 1996 .
[8] Stephen P. Boyd,et al. Semidefinite Programming , 1996, SIAM Rev..
[9] Michael A. Saunders,et al. Atomic Decomposition by Basis Pursuit , 1998, SIAM J. Sci. Comput..
[10] David A. Landgrebe,et al. Hyperspectral Image Data Analysis as a High Dimensional Signal Processing Problem , 2002, IEEE Signal Process. Mag..
[11] Noga Alon,et al. Generalization Error Bounds for Collaborative Prediction with Low-Rank Matrices , 2004, NIPS.
[12] Y. Ritov,et al. Persistence in high-dimensional linear predictor selection and the virtue of overparametrization , 2004 .
[13] Stephen J. Wright,et al. Simultaneous Variable Selection , 2005, Technometrics.
[14] R. Tibshirani,et al. Sparsity and smoothness via the fused lasso , 2005 .
[15] Emmanuel J. Candès,et al. Decoding by linear programming , 2005, IEEE Transactions on Information Theory.
[16] D. Donoho,et al. Neighborliness of randomly projected simplices in high dimensions. , 2005, Proceedings of the National Academy of Sciences of the United States of America.
[17] M. Stephanov,et al. Random Matrices , 2005, hep-ph/0509286.
[18] Stephen P. Boyd,et al. Convex Optimization , 2004, Algorithms and Theory of Computation Handbook.
[19] N. Meinshausen,et al. High-dimensional graphs and variable selection with the Lasso , 2006, math/0608017.
[20] Joel A. Tropp,et al. ALGORITHMS FOR SIMULTANEOUS SPARSE APPROXIMATION , 2006 .
[21] Joel A. Tropp,et al. Just relax: convex programming methods for identifying sparse signals in noise , 2006, IEEE Transactions on Information Theory.
[22] M. Yuan,et al. Model selection and estimation in regression with grouped variables , 2006 .
[23] Peng Zhao,et al. On Model Selection Consistency of Lasso , 2006, J. Mach. Learn. Res..
[24] J. Lafferty,et al. Sparse additive models , 2007, 0711.4555.
[25] Y. Nesterov. Gradient methods for minimizing composite objective function , 2007 .
[26] A. Tsybakov,et al. Aggregation for Gaussian regression , 2007, 0710.3654.
[27] M. Yuan,et al. Dimension reduction and coefficient estimation in multivariate linear regression , 2007 .
[28] Terence Tao,et al. The Dantzig selector: Statistical estimation when p is much larger than n , 2005, math/0506081.
[29] P. Zhao,et al. Grouped and Hierarchical Model Selection through Composite Absolute Penalties , 2007 .
[30] A. Tsybakov,et al. Sparsity oracle inequalities for the Lasso , 2007, 0705.3308.
[31] Larry A. Wasserman,et al. SpAM: Sparse Additive Models , 2007, NIPS.
[32] Noureddine El Karoui,et al. Operator norm consistent estimation of large-dimensional sparse covariance matrices , 2008, 0901.3220.
[33] S. Geer. HIGH-DIMENSIONAL GENERALIZED LINEAR MODELS AND THE LASSO , 2008, 0804.0703.
[34] Bin Yu,et al. High-dimensional covariance estimation by minimizing ℓ1-penalized log-determinant divergence , 2008, 0811.3628.
[35] A. Rinaldo,et al. On the asymptotic properties of the group lasso estimator for linear models , 2008 .
[36] Francis R. Bach,et al. Consistency of trace norm minimization , 2007, J. Mach. Learn. Res..
[37] M. Lustig,et al. Compressed Sensing MRI , 2008, IEEE Signal Processing Magazine.
[38] Francis R. Bach,et al. Consistency of the group Lasso and multiple kernel learning , 2007, J. Mach. Learn. Res..
[39] Michael I. Jordan,et al. Union support recovery in high-dimensional multivariate regression , 2008, 2008 46th Annual Allerton Conference on Communication, Control, and Computing.
[40] F. Bunea. Honest variable selection in linear and logistic regression models via $\ell_1$ and $\ell_1+\ell_2$ penalization , 2008, 0808.4051.
[41] N. Meinshausen. A note on the Lasso for Gaussian graphical model selection , 2008 .
[42] Cun-Hui Zhang,et al. The sparsity and bias of the Lasso selection in high-dimensional linear regression , 2008, 0808.0967.
[43] Ming Yuan,et al. Sparse Recovery in Large Ensembles of Kernel Machines , 2008, COLT.
[44] Adam J. Rothman,et al. Sparse permutation invariant covariance estimation , 2008, 0801.4837.
[45] P. Bühlmann,et al. The group lasso for logistic regression , 2008 .
[46] Jean-Philippe Vert,et al. Group lasso with overlap and graph lasso , 2009, ICML '09.
[47] N. Meinshausen,et al. LASSO-TYPE RECOVERY OF SPARSE REPRESENTATIONS FOR HIGH-DIMENSIONAL DATA , 2008, 0806.0145.
[48] S. Geer,et al. On the conditions used to prove oracle results for the Lasso , 2009, 0910.0722.
[49] P. Zhao,et al. The composite absolute penalties family for grouped and hierarchical variable selection , 2009, 0909.0411.
[50] Massimiliano Pontil,et al. Taking Advantage of Sparsity in Multi-Task Learning , 2009, COLT.
[51] Martin J. Wainwright,et al. Information-Theoretic Limits on Sparsity Recovery in the High-Dimensional and Noisy Setting , 2009, IEEE Transactions on Information Theory.
[52] Lieven Vandenberghe,et al. Interior-Point Method for Nuclear Norm Approximation with Application to System Identification , 2009, SIAM J. Matrix Anal. Appl..
[53] Babak Hassibi,et al. On the Reconstruction of Block-Sparse Signals With an Optimal Number of Measurements , 2008, IEEE Transactions on Signal Processing.
[54] Andrea Montanari,et al. Matrix Completion from Noisy Entries , 2009, J. Mach. Learn. Res..
[55] Yurii Nesterov,et al. Primal-dual subgradient methods for convex problems , 2005, Math. Program..
[56] Francis R. Bach,et al. Self-concordant analysis for logistic regression , 2009, ArXiv.
[57] Yoram Bresler,et al. Guaranteed Minimum Rank Approximation from Linear Observations by Nuclear Norm Minimization with an Ellipsoidal Constraint , 2009, ArXiv.
[58] James B. Brown,et al. An overview of recent developments in genomics and associated statistical methods , 2009, Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences.
[59] Arvind Ganesh,et al. Fast Convex Optimization Algorithms for Exact Recovery of a Corrupted Low-Rank Matrix , 2009 .
[60] Shuheng Zhou. Restricted Eigenvalue Conditions on Subgaussian Random Matrices , 2009, 0912.4045.
[61] S. Geer,et al. High-dimensional additive modeling , 2008, 0806.4115.
[62] P. Bickel,et al. SIMULTANEOUS ANALYSIS OF LASSO AND DANTZIG SELECTOR , 2008, 0801.1095.
[63] Martin J. Wainwright,et al. Sharp Thresholds for High-Dimensional and Noisy Sparsity Recovery Using $\ell _{1}$ -Constrained Quadratic Programming (Lasso) , 2009, IEEE Transactions on Information Theory.
[64] Jianqing Fan,et al. Sparsistency and Rates of Convergence in Large Covariance Matrix Estimation , 2007, Annals of statistics.
[65] Junzhou Huang,et al. The Benefit of Group Sparsity , 2009 .
[66] P. Bickel,et al. Covariance regularization by thresholding , 2009, 0901.3079.
[67] Pablo A. Parrilo,et al. Guaranteed Minimum-Rank Solutions of Linear Matrix Equations via Nuclear Norm Minimization , 2007, SIAM Rev..
[68] Andrea Montanari,et al. Matrix completion from a few entries , 2009, 2009 IEEE International Symposium on Information Theory.
[69] J. Lafferty,et al. High-dimensional Ising model selection using ℓ1-regularized logistic regression , 2010, 1010.0311.
[70] M. Wegkamp,et al. Adaptive Rank Penalized Estimators in Multivariate Regression , 2010 .
[71] A. Tsybakov,et al. Estimation of high-dimensional low-rank matrices , 2009, 0912.5338.
[72] Larry A. Wasserman,et al. Time varying undirected graphs , 2008, Machine Learning.
[73] Martin J. Wainwright,et al. Estimation of (near) low-rank matrices with noise and high-dimensional scaling , 2009, ICML.
[74] Xiaodong Li,et al. Stable Principal Component Pursuit , 2010, 2010 IEEE International Symposium on Information Theory.
[75] Robert Tibshirani,et al. Spectral Regularization Algorithms for Learning Large Incomplete Matrices , 2010, J. Mach. Learn. Res..
[76] Emmanuel J. Candès,et al. Tight oracle bounds for low-rank matrix recovery from a minimal number of random measurements , 2010, ArXiv.
[77] Volkan Cevher,et al. Model-Based Compressive Sensing , 2008, IEEE Transactions on Information Theory.
[78] Ambuj Tewari,et al. Learning Exponential Families in High-Dimensions: Strong Convexity and Sparsity , 2009, AISTATS.
[79] Martin J. Wainwright,et al. Restricted Eigenvalue Properties for Correlated Gaussian Designs , 2010, J. Mach. Learn. Res..
[80] V. Koltchinskii,et al. SPARSITY IN MULTIPLE KERNEL LEARNING , 2010, 1211.2998.
[81] J. Tropp,et al. Two proposals for robust PCA using semidefinite programming , 2010, 1012.1086.
[82] Martin J. Wainwright,et al. Minimax Rates of Estimation for High-Dimensional Linear Regression Over $\ell_q$ -Balls , 2009, IEEE Transactions on Information Theory.
[83] Martin J. Wainwright,et al. Noisy matrix decomposition via convex relaxation: Optimal rates in high dimensions , 2011, ICML.
[84] Martin J. Wainwright,et al. Simultaneous Support Recovery in High Dimensions: Benefits and Perils of Block $\ell _{1}/\ell _{\infty} $-Regularization , 2009, IEEE Transactions on Information Theory.
[85] M. Wegkamp,et al. Optimal selection of reduced rank estimators of high-dimensional matrices , 2010, 1004.2995.
[86] Julien Mairal,et al. Proximal Methods for Hierarchical Sparse Coding , 2010, J. Mach. Learn. Res..
[87] Weiyu Xu,et al. Null space conditions and thresholds for rank minimization , 2011, Math. Program..
[88] Pablo A. Parrilo,et al. Rank-Sparsity Incoherence for Matrix Decomposition , 2009, SIAM J. Optim..
[89] Sham M. Kakade,et al. Robust Matrix Decomposition With Sparse Corruptions , 2011, IEEE Transactions on Information Theory.
[90] Benjamin Recht,et al. A Simpler Approach to Matrix Completion , 2009, J. Mach. Learn. Res..