Sparse Regression: Scalable Algorithms and Empirical Performance
Dimitris Bertsimas | Jean Pauphilet | Bart P. G. Van Parys
[1] L. Breiman. Heuristics of instability and stabilization in model selection, 1996.
[2] Bo-Yu Chu, et al. Warm Start for Parameter Selection of Linear Classifiers, 2015, KDD.
[3] Hussein Hazimeh, et al. Fast Best Subset Selection: Coordinate Descent and Local Combinatorial Optimization Algorithms, 2018, Oper. Res.
[4] Trevor Hastie, et al. Statistical Learning with Sparsity: The Lasso and Generalizations, 2015.
[5] Po-Ling Loh, et al. Support recovery without incoherence: A case for nonconvex regularization, 2014, arXiv.
[6] Martin J. Wainwright, et al. Information-Theoretic Limits on Sparsity Recovery in the High-Dimensional and Noisy Setting, 2007, IEEE Transactions on Information Theory.
[7] Ignacio E. Grossmann, et al. An outer-approximation algorithm for a class of mixed-integer nonlinear programs, 1987, Math. Program.
[8] Trevor Hastie, et al. Regularization Paths for Generalized Linear Models via Coordinate Descent, 2010, Journal of Statistical Software.
[9] N. Meinshausen, et al. High-dimensional graphs and variable selection with the Lasso, 2006, arXiv:math/0608017.
[10] Marc Teboulle, et al. A Fast Iterative Shrinkage-Thresholding Algorithm for Linear Inverse Problems, 2009, SIAM J. Imaging Sci.
[11] Arthur E. Hoerl, et al. Ridge Regression: Biased Estimation for Nonorthogonal Problems, 2000, Technometrics.
[12] K. Schittkowski, et al. Nonlinear Programming, 2022.
[13] H. Zou, et al. Regularization and variable selection via the elastic net, 2005.
[14] A. Bruce, et al. WaveShrink with Firm Shrinkage, 1997.
[15] Y. Ye. Data Randomness Makes Optimization Problems Easier to Solve?, 2016.
[16] Jianqing Fan, et al. Variable Selection via Nonconcave Penalized Likelihood and its Oracle Properties, 2001.
[17] R. Tibshirani, et al. Extended Comparisons of Best Subset Selection, Forward Stepwise Selection, and the Lasso, 2017, arXiv:1707.08692.
[18] Balas K. Natarajan, et al. Sparse Approximate Solutions to Linear Systems, 1995, SIAM J. Comput.
[19] Fengrong Wei, et al. Group coordinate descent algorithms for nonconvex penalized regression, 2012, Comput. Stat. Data Anal.
[20] Dimitris Bertsimas, et al. Sparse Classification: a scalable discrete optimization perspective, 2017.
[21] Isabelle Guyon, et al. Gene Selection for Cancer Classification using Support Vector Machines, 2002, Machine Learning.
[22] Martin J. Wainwright, et al. Sparse learning via Boolean relaxations, 2015, Mathematical Programming.
[23] R. Tibshirani. Regression Shrinkage and Selection via the Lasso, 1996.
[24] Cun-Hui Zhang. Nearly unbiased variable selection under minimax concave penalty, 2010, arXiv:1002.4734.
[25] Martin J. Wainwright, et al. Sharp Thresholds for High-Dimensional and Noisy Sparsity Recovery Using $\ell_{1}$-Constrained Quadratic Programming (Lasso), 2009, IEEE Transactions on Information Theory.
[26] F. Chiaromonte, et al. Efficient and Effective $L_0$ Feature Selection, 2018, arXiv:1808.02526.
[27] R. Tibshirani, et al. Pathwise Coordinate Optimization, 2007, arXiv:0708.1485.
[28] K. Lange, et al. Coordinate descent algorithms for lasso penalized regression, 2008, arXiv:0803.3876.
[29] Iain Dunning, et al. JuMP: A Modeling Language for Mathematical Optimization, 2015, SIAM Rev.
[30] Jianqing Fan, et al. Sure independence screening in generalized linear models with NP-dimensionality, 2009, The Annals of Statistics.
[31] Bart P. G. Van Parys, et al. Sparse high-dimensional regression: Exact scalable algorithms and phase transitions, 2017, The Annals of Statistics.
[32] Shie Mannor, et al. Robustness and Regularization of Support Vector Machines, 2008, J. Mach. Learn. Res.
[33] D. Bertsimas, et al. Best Subset Selection via a Modern Optimization Lens, 2015, arXiv:1507.03133.
[34] Dimitris Bertsimas, et al. Logistic Regression: From Art to Science, 2017.
[35] Xiaoming Huo, et al. Uncertainty principles and ideal atomic decomposition, 2001, IEEE Trans. Inf. Theory.
[36] E. Candès, et al. Stable signal recovery from incomplete and inaccurate measurements, 2005, arXiv:math/0503066.
[37] David Gamarnik, et al. High Dimensional Regression with Binary Coefficients: Estimating Squared Error and a Phase Transition, 2017, COLT.
[38] Emmanuel J. Candès, et al. False Discoveries Occur Early on the Lasso Path, 2015, arXiv.
[39] Martin J. Wainwright, et al. Information-Theoretic Limits on Sparse Signal Recovery: Dense versus Sparse Measurement Matrices, 2008, IEEE Transactions on Information Theory.
[40] Jian Huang, et al. Coordinate Descent Algorithms for Nonconvex Penalized Regression, with Applications to Biological Feature Selection, 2011, The Annals of Applied Statistics.
[41] Asuman E. Ozdaglar, et al. Approximate Primal Solutions and Rate Analysis for Dual Subgradient Methods, 2008, SIAM J. Optim.
[42] Stéphane Mallat, et al. Matching pursuits with time-frequency dictionaries, 1993, IEEE Trans. Signal Process.
[43] M. Sion. On general minimax theorems, 1958.
[44] Chih-Jen Lin, et al. LIBLINEAR: A Library for Large Linear Classification, 2008, J. Mach. Learn. Res.
[45] Robert W. Wilson, et al. Regressions by Leaps and Bounds, 2000, Technometrics.
[46] Iain Dunning, et al. Computing in Operations Research Using Julia, 2013, INFORMS J. Comput.
[47] Peng Zhao, et al. On Model Selection Consistency of Lasso, 2006, J. Mach. Learn. Res.
[48] Dustin G. Mixon, et al. On the tightness of an SDP relaxation of k-means, 2015, arXiv.
[49] H. Zou, et al. One-step Sparse Estimates in Nonconcave Penalized Likelihood Models, 2008, Annals of Statistics.