Yuling Jiao | Lican Kang | Xiliang Lu | Peili Li
[1] K. Lange, et al. Coordinate descent algorithms for lasso penalized regression, 2008, arXiv:0803.3876.
[2] Martin Jaggi, et al. Sparse Convex Optimization Methods for Machine Learning, 2011.
[3] Volker Roth, et al. The Group-Lasso for generalized linear models: uniqueness of solutions and efficient algorithms, 2008, ICML '08.
[4] D. Sengupta. Linear Models, 2003.
[5] Ping Li, et al. On the Iteration Complexity of Support Recovery via Hard Thresholding Pursuit, 2017, ICML.
[6] Jinghui Chen, et al. Fast Newton Hard Thresholding Pursuit for Sparsity Constrained Nonconvex Optimization, 2017, KDD.
[7] Jianqing Fan, et al. Sure independence screening for ultrahigh dimensional feature space, 2006, arXiv:math/0612857.
[8] R. Tibshirani, et al. Pathwise coordinate optimization, 2007, arXiv:0708.1485.
[9] Yuling Jiao, et al. GSDAR: a fast Newton algorithm for ℓ0 regularized generalized linear models with statistical guarantee, 2021, Computational Statistics.
[10] Zhaosong Lu. Smooth optimization approach for sparse covariance selection, 2009, SIAM J. Optim.
[11] Trevor Hastie, et al. Regularization Paths for Generalized Linear Models via Coordinate Descent, 2010, Journal of Statistical Software.
[12] Cun-Hui Zhang. Nearly unbiased variable selection under minimax concave penalty, 2010, arXiv:1002.4734.
[13] H. Zou, et al. Regularization and variable selection via the elastic net, 2005.
[14] Yuling Jiao, et al. A Primal Dual Active Set Algorithm With Continuation for Compressed Sensing, 2013, IEEE Transactions on Signal Processing.
[15] R. Tibshirani. Regression Shrinkage and Selection via the Lasso, 1996.
[16] Radford M. Neal. Pattern Recognition and Machine Learning, 2007, Technometrics.
[17] Martin J. Wainwright, et al. Fast global convergence of gradient methods for high-dimensional statistical recovery, 2011, arXiv.
[18] Zehua Chen, et al. Sequential Lasso Cum EBIC for Feature Selection With Ultra-High Dimensional Feature Space, 2014.
[19] Bhiksha Raj, et al. Greedy sparsity-constrained optimization, 2011, 45th Asilomar Conference on Signals, Systems and Computers (ASILOMAR).
[20] Shenglong Zhou, et al. Newton method for ℓ0-regularized optimization, 2020, Numerical Algorithms.
[21] Jian Huang, et al. Coordinate descent algorithms for nonconvex penalized regression, with applications to biological feature selection, 2011, The Annals of Applied Statistics.
[22] D. Lorenz, et al. Elastic-net regularization: error estimates and active set methods, 2009, arXiv:0905.0796.
[23] Yunhai Xiao, et al. An efficient algorithm for sparse inverse covariance matrix estimation based on dual formulation, 2018, Comput. Stat. Data Anal.
[24] Michael A. Saunders, et al. Atomic Decomposition by Basis Pursuit, 1998, SIAM J. Sci. Comput.
[25] Tong Zhang. Analysis of Multi-stage Convex Relaxation for Sparse Regularization, 2010, J. Mach. Learn. Res.
[26] Jin Liu, et al. A Unified Primal Dual Active Set Algorithm for Nonconvex Sparse Recovery, 2013.
[27] Martin J. Wainwright. High-Dimensional Statistics: A Non-Asymptotic Viewpoint, 2019, Cambridge University Press.
[28] Runze Li, et al. Calibrating non-convex penalized regression in ultra-high dimension, 2013, Annals of Statistics.
[29] S. Geer. High-dimensional generalized linear models and the lasso, 2008, arXiv:0804.0703.
[30] T. Cai, et al. A Constrained ℓ1 Minimization Approach to Sparse Precision Matrix Estimation, 2011, arXiv:1102.2233.
[31] Qingshan Liu, et al. Newton-Type Greedy Selection Methods for ℓ0-Constrained Minimization, 2017, IEEE Transactions on Pattern Analysis and Machine Intelligence.
[32] Tong Zhang. Adaptive Forward-Backward Greedy Algorithm for Sparse Learning with Linear Models, 2008, NIPS.
[33] Tong Zhang, et al. Gradient Hard Thresholding Pursuit, 2018, J. Mach. Learn. Res.
[34] Stephen P. Boyd, et al. Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers, 2011, Found. Trends Mach. Learn.
[35] Mila Nikolova. Description of the Minimizers of Least Squares Regularized with ℓ0-norm. Uniqueness of the Global Minimizer, 2013, SIAM J. Imaging Sci.
[36] Xiao-Tong Yuan, et al. Gradient Hard Thresholding Pursuit for Sparsity-Constrained Optimization, 2013, ICML.
[37] Lin Xiao, et al. A Proximal-Gradient Homotopy Method for the Sparse Least-Squares Problem, 2012, SIAM J. Optim.
[38] Shenglong Zhou, et al. Fast Newton Method for Sparse Logistic Regression, 2019, arXiv:1901.02768.
[39] Mee Young Park, et al. L1-regularization path algorithm for generalized linear models, 2006.
[40] Bangti Jin, et al. A Primal Dual Active Set with Continuation Algorithm for the ℓ0-Regularized Optimization Problem, 2014, arXiv.
[41] P. Bühlmann, et al. The group lasso for logistic regression, 2008.
[42] Jian Huang, et al. A constructive approach to L0 penalized regression, 2018.
[43] Jianqing Fan, et al. Variable Selection via Nonconcave Penalized Likelihood and its Oracle Properties, 2001.
[44] D. Donoho, et al. Sparse MRI: The application of compressed sensing for rapid MR imaging, 2007, Magnetic Resonance in Medicine.