QUIC: Quadratic Approximation for Sparse Inverse Covariance Estimation
Cho-Jui Hsieh | Mátyás A. Sustik | Inderjit S. Dhillon | Pradeep Ravikumar
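The paper addresses sparse inverse covariance (graphical lasso) estimation. As context for the reference list below, here is a minimal sketch of the ℓ1-penalized log-determinant objective that QUIC optimizes, together with a plain first-order (ISTA-style) baseline. This is not the QUIC algorithm itself, which instead minimizes a quadratic (Newton) approximation via coordinate descent; the function names, step size, and the crude positive-definiteness safeguard are illustrative assumptions, not from the paper.

```python
import numpy as np

def penalized_objective(X, S, lam):
    """f(X) = -log det(X) + tr(S X) + lam * ||X||_1,
    where S is the sample covariance and X the estimated precision matrix.
    (This sketch penalizes all entries, including the diagonal.)"""
    sign, logdet = np.linalg.slogdet(X)
    assert sign > 0, "X must be positive definite"
    return -logdet + np.trace(S @ X) + lam * np.abs(X).sum()

def soft_threshold(A, t):
    """Elementwise soft-thresholding: the prox operator of t * ||.||_1."""
    return np.sign(A) * np.maximum(np.abs(A) - t, 0.0)

def ista_glasso(S, lam, step=0.05, iters=300):
    """First-order baseline for the same objective QUIC targets.
    Gradient of the smooth part is S - X^{-1}; each step is a gradient
    move followed by soft-thresholding. A naive eigenvalue check keeps
    the iterate positive definite (illustrative safeguard only)."""
    p = S.shape[0]
    X = np.eye(p)
    for _ in range(iters):
        grad = S - np.linalg.inv(X)
        X_new = soft_threshold(X - step * grad, step * lam)
        if np.linalg.eigvalsh(X_new).min() > 1e-8:
            X = X_new  # accept only positive definite iterates
    return X
```

Second-order methods such as QUIC replace the fixed-step gradient move with a Newton direction on the smooth part, which is what the quadratic approximation in the title refers to.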
[1] Kim-Chuan Toh,et al. A coordinate gradient descent method for ℓ1-regularized convex minimization , 2011, Comput. Optim. Appl..
[2] Jianqing Fan,et al. Sparsistency and Rates of Convergence in Large Covariance Matrix Estimation , 2007, Annals of Statistics.
[3] Chih-Jen Lin,et al. A Comparison of Optimization Methods and Software for Large-scale L1-regularized Linear Classification , 2010, J. Mach. Learn. Res..
[4] Marc Teboulle,et al. A Fast Iterative Shrinkage-Thresholding Algorithm for Linear Inverse Problems , 2009, SIAM J. Imaging Sci..
[5] Alexandre d'Aspremont,et al. Model Selection Through Sparse Maximum Likelihood Estimation for Multivariate Gaussian or Binary Data , 2008, J. Mach. Learn. Res..
[6] K. Lange,et al. Coordinate descent algorithms for lasso penalized regression , 2008, arXiv:0803.3876.
[7] J. Friedman,et al. New Insights and Faster Computations for the Graphical Lasso , 2011 .
[8] Pradeep Ravikumar,et al. A Divide-and-Conquer Method for Sparse Inverse Covariance Estimation , 2012, NIPS.
[9] R. Tibshirani,et al. Sparse inverse covariance estimation with the graphical lasso , 2008, Biostatistics.
[10] Stephen P. Boyd,et al. Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers , 2011, Found. Trends Mach. Learn..
[11] Katya Scheinberg,et al. Learning Sparse Gaussian Markov Networks Using a Greedy Coordinate Ascent Approach , 2010, ECML/PKDD.
[12] Jorge Nocedal,et al. Newton-Like Methods for Sparse Inverse Covariance Estimation , 2012, NIPS.
[13] Mark W. Schmidt,et al. Graphical model structure learning using L₁-regularization , 2010 .
[14] Trevor J. Hastie,et al. Exact Covariance Thresholding into Connected Components for Large-Scale Graphical Lasso , 2011, J. Mach. Learn. Res..
[15] Boris Polyak,et al. Constrained minimization methods , 1966 .
[16] Martin J. Wainwright,et al. Fast global convergence rates of gradient methods for high-dimensional statistical recovery , 2010, NIPS.
[17] Paul Tseng,et al. A coordinate gradient descent method for nonsmooth separable minimization , 2008, Math. Program..
[18] Adam J. Rothman,et al. Sparse permutation invariant covariance estimation , 2008, arXiv:0801.4837.
[19] T. Apostol. Mathematical Analysis , 1957 .
[20] Mark W. Schmidt,et al. Optimizing Costly Functions with Simple Constraints: A Limited-Memory Projected Quasi-Newton Algorithm , 2009, AISTATS.
[21] P. Bühlmann,et al. The group lasso for logistic regression , 2008 .
[22] Bin Yu,et al. High-dimensional covariance estimation by minimizing ℓ1-penalized log-determinant divergence , 2008, arXiv:0811.3628.
[23] Pradeep Ravikumar,et al. Sparse inverse covariance matrix estimation using quadratic approximation , 2011, MLSLP.
[24] R. Tibshirani,et al. PATHWISE COORDINATE OPTIMIZATION , 2007, 0708.1485.
[25] Stephen P. Boyd,et al. Convex Optimization , 2004, Algorithms and Theory of Computation Handbook.
[26] M. Yuan,et al. Model selection and estimation in the Gaussian graphical model , 2007 .
[27] Michael A. Saunders,et al. Proximal Newton-type methods for convex optimization , 2012, NIPS.
[28] R. Tibshirani. Regression Shrinkage and Selection via the Lasso , 1996 .
[29] I. Daubechies,et al. An iterative thresholding algorithm for linear inverse problems with a sparsity constraint , 2003, arXiv:math/0307152.
[30] Boris Polyak. The conjugate gradient method in extremal problems , 1969 .
[31] Zhaosong Lu. Smooth Optimization Approach for Sparse Covariance Selection , 2009, SIAM J. Optim..
[32] Trevor Hastie,et al. Regularization Paths for Generalized Linear Models via Coordinate Descent , 2010, Journal of Statistical Software.
[33] Stephen Gould,et al. Projected Subgradient Methods for Learning Sparse Gaussians , 2008, UAI.
[34] Alexandre d'Aspremont,et al. First-Order Methods for Sparse Covariance Selection , 2006, SIAM J. Matrix Anal. Appl..
[35] Chia-Hua Ho,et al. An improved GLMNET for l1-regularized logistic regression , 2011, J. Mach. Learn. Res..
[36] J. Dunn. Newton’s Method and the Goldstein Step-Length Rule for Constrained Minimization Problems , 1980 .
[37] Dimitri P. Bertsekas,et al. Nonlinear Programming , 1997 .
[38] Lu Li,et al. An inexact interior point method for L1-regularized sparse covariance selection , 2010, Math. Program. Comput..
[39] Shiqian Ma,et al. Sparse Inverse Covariance Selection via Alternating Linearization Methods , 2010, NIPS.