Stagewise Lasso