Reconciling "priors" & "priors" without prejudice?
[1] M. Kowalski. Sparse regression using mixed norms, 2009.
[2] R. Tibshirani. Regression Shrinkage and Selection via the Lasso, 1996.
[3] Stephen P. Boyd et al. Convex Optimization, 2004, Algorithms and Theory of Computation Handbook.
[4] Rémi Gribonval et al. Reconciling "priors" & "priors" without prejudice? (research report), 2013.
[5] R. Tibshirani et al. Regression shrinkage and selection via the lasso: a retrospective, 2011.
[6] Julien Mairal et al. Optimization with Sparsity-Inducing Penalties, 2011, Found. Trends Mach. Learn.
[7] Yurii Nesterov et al. Efficiency of Coordinate Descent Methods on Huge-Scale Optimization Problems, 2012, SIAM J. Optim.
[8] Chih-Jen Lin et al. A dual coordinate descent method for large-scale linear SVM, 2008, ICML '08.
[9] Hervé Glotin et al. Stochastic Low-Rank Kernel Learning for Regression, 2011, ICML.
[10] A. E. Hoerl et al. Ridge Regression: Applications to Nonorthogonal Problems, 1970.
[11] Rodolphe Jenatton. Active Set Algorithm for Structured Sparsity-Inducing Norms, 2009.
[12] Volkan Cevher et al. Compressible Distributions for High-Dimensional Statistics, 2011, IEEE Transactions on Information Theory.
[13] Eero P. Simoncelli et al. Learning to be Bayesian without Supervision, 2006, NIPS.
[14] Rémi Gribonval et al. Should Penalized Least Squares Regression be Interpreted as Maximum A Posteriori Estimation?, 2011, IEEE Transactions on Signal Processing.