Fast Newton Hard Thresholding Pursuit for Sparsity Constrained Nonconvex Optimization
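The page carries no abstract, but the hard thresholding operator named in the title is standard: H_s(x) keeps the s largest-magnitude entries of a vector and zeroes the rest. A minimal sketch (function name and example values are illustrative, not from the paper):

```python
import numpy as np

def hard_threshold(x, s):
    """Keep the s largest-magnitude entries of x, zero out the rest."""
    out = np.zeros_like(x)
    # Indices of the s entries with largest absolute value.
    idx = np.argsort(np.abs(x))[-s:]
    out[idx] = x[idx]
    return out

v = np.array([0.5, -3.0, 1.2, 0.1, 2.4])
print(hard_threshold(v, 2))  # keeps -3.0 and 2.4, zeroes the rest
```

Hard-thresholding-pursuit-style methods alternate this projection with a gradient (or, in the titular algorithm, Newton-type) step on the retained support.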
[1] Michael I. Jordan, et al. A Linearly-Convergent Stochastic L-BFGS Algorithm, 2015, AISTATS.
[2] Yurii Nesterov, et al. Introductory Lectures on Convex Optimization - A Basic Course, 2014, Applied Optimization.
[3] Ping Li, et al. A Tight Bound of Hard Thresholding, 2016, J. Mach. Learn. Res.
[4] Heekuck Oh, et al. Neural Networks for Pattern Recognition, 1993, Adv. Comput.
[5] Deanna Needell, et al. Linear Convergence of Stochastic Iterative Greedy Algorithms With Sparse Constraints, 2014, IEEE Transactions on Information Theory.
[6] Deanna Needell, et al. CoSaMP: Iterative signal recovery from incomplete and inaccurate samples, 2008, arXiv.
[7] J. Nocedal, et al. Exact and Inexact Subsampled Newton Methods for Optimization, 2016, arXiv:1609.08502.
[8] Lu Tian, et al. Forward Backward Greedy Algorithms for Multi-Task Learning with Faster Rates, 2016, UAI.
[9] Alexander J. Smola, et al. Stochastic Frank-Wolfe methods for nonconvex optimization, 2016, 54th Annual Allerton Conference on Communication, Control, and Computing (Allerton).
[10] Jinghui Chen, et al. Accelerated Stochastic Block Coordinate Gradient Descent for Sparsity Constrained Nonconvex Optimization, 2016, UAI.
[11] Andrea Montanari, et al. Convergence rates of sub-sampled Newton methods, 2015, NIPS.
[12] Quanquan Gu, et al. Optimal Statistical and Computational Rates for One Bit Matrix Completion, 2016, AISTATS.
[13] Martin J. Wainwright, et al. Minimax Rates of Estimation for High-Dimensional Linear Regression Over $\ell_q$-Balls, 2009, IEEE Transactions on Information Theory.
[14] Cun-Hui Zhang. Nearly unbiased variable selection under minimax concave penalty, 2010, arXiv:1002.4734.
[15] Quanquan Gu, et al. Semiparametric Differential Graph Models, 2016, NIPS.
[16] Jianqing Fan, et al. Variable Selection via Nonconcave Penalized Likelihood and its Oracle Properties, 2001.
[17] Yiming Yang, et al. RCV1: A New Benchmark Collection for Text Categorization Research, 2004, J. Mach. Learn. Res.
[18] Naman Agarwal, et al. Second Order Stochastic Optimization in Linear Time, 2016, arXiv.
[19] Simon Foucart, et al. Hard Thresholding Pursuit: An Algorithm for Compressive Sensing, 2011, SIAM J. Numer. Anal.
[20] Quanquan Gu, et al. Accelerated Stochastic Block Coordinate Descent with Optimal Sampling, 2016, KDD.
[21] Zeyuan Allen Zhu, et al. Variance Reduction for Faster Non-Convex Optimization, 2016, ICML.
[22] Tong Zhang, et al. Adaptive Forward-Backward Greedy Algorithm for Learning Sparse Representations, 2011, IEEE Transactions on Information Theory.
[23] R. Tibshirani. Regression Shrinkage and Selection via the Lasso, 1996.
[24] Qingshan Liu, et al. Newton Greedy Pursuit: A Quadratic Approximation Method for Sparsity-Constrained Optimization, 2014, IEEE Conference on Computer Vision and Pattern Recognition.
[25] Saeed Ghadimi, et al. Stochastic First- and Zeroth-Order Methods for Nonconvex Stochastic Programming, 2013, SIAM J. Optim.
[26] Joel A. Tropp, et al. Signal Recovery From Random Measurements Via Orthogonal Matching Pursuit, 2007, IEEE Transactions on Information Theory.
[27] Xiao-Tong Yuan, et al. Gradient Hard Thresholding Pursuit for Sparsity-Constrained Optimization, 2013, ICML.
[28] Francis Bach, et al. SAGA: A Fast Incremental Gradient Method With Support for Non-Strongly Convex Composite Objectives, 2014, NIPS.
[29] Prateek Jain, et al. On Iterative Hard Thresholding Methods for High-dimensional M-Estimation, 2014, NIPS.
[30] Tengyu Ma, et al. Finding Approximate Local Minima for Nonconvex Optimization in Linear Time, 2016, arXiv.
[31] Po-Ling Loh, et al. Regularized M-estimators with nonconvexity: statistical and algorithmic theory for local optima, 2013, J. Mach. Learn. Res.
[32] Xiang Ren, et al. Precision Matrix Estimation in High Dimensional Gaussian Graphical Models with Faster Rates, 2016, AISTATS.
[33] Benjamin Recht, et al. A Simpler Approach to Matrix Completion, 2009, J. Mach. Learn. Res.
[34] Alexander J. Smola, et al. Stochastic Variance Reduction for Nonconvex Optimization, 2016, ICML.
[35] Mike E. Davies, et al. Iterative Hard Thresholding for Compressed Sensing, 2008, arXiv.
[36] Tong Zhang, et al. Trading Accuracy for Sparsity in Optimization Problems with Sparsity Constraints, 2010, SIAM J. Optim.
[37] Noah A. Smith, et al. Predicting Risk from Financial Reports with Regression, 2009, NAACL.
[38] Michael W. Mahoney, et al. Sub-Sampled Newton Methods I: Globally Convergent Algorithms, 2016, arXiv.
[39] Ali Jalali, et al. On Learning Discrete Graphical Models using Greedy Methods, 2011, NIPS.
[40] Quanquan Gu, et al. Towards a Lower Sample Complexity for Robust One-bit Compressed Sensing, 2015, ICML.
[41] Martin J. Wainwright, et al. Estimation of (near) low-rank matrices with noise and high-dimensional scaling, 2009, ICML.
[42] Shai Shalev-Shwartz, et al. Stochastic dual coordinate ascent methods for regularized loss, 2012, J. Mach. Learn. Res.
[43] Christos Faloutsos, et al. Graph evolution: Densification and shrinking diameters, 2006, TKDD.
[44] Martin J. Wainwright, et al. A unified framework for high-dimensional analysis of $M$-estimators with decomposable regularizers, 2009, NIPS.
[45] Yurii Nesterov, et al. Cubic regularization of Newton method and its global performance, 2006, Math. Program.
[46] Stéphane Mallat, et al. Matching pursuits with time-frequency dictionaries, 1993, IEEE Trans. Signal Process.
[47] Tong Zhang, et al. Accelerating Stochastic Gradient Descent using Predictive Variance Reduction, 2013, NIPS.
[48] Peng Xu, et al. Sub-sampled Newton Methods with Non-uniform Sampling, 2016, NIPS.
[49] Tong Zhang. Some sharp performance bounds for least squares regression with L1 regularization, 2009, arXiv:0908.2869.
[50] Zhaoran Wang, et al. Sparse PCA with Oracle Property, 2014, NIPS.
[51] Ken Lang, et al. NewsWeeder: Learning to Filter Netnews, 1995, ICML.
[52] Bhiksha Raj, et al. Greedy sparsity-constrained optimization, 2011, Conference Record of the Forty Fifth Asilomar Conference on Signals, Systems and Computers (ASILOMAR).
[53] Mark W. Schmidt, et al. A Stochastic Gradient Method with an Exponential Convergence Rate for Finite Training Sets, 2012, NIPS.
[54] Murat A. Erdogdu. Newton-Stein Method: A Second Order Method for GLMs via Stein's Lemma, 2015, NIPS.
[55] Tuo Zhao, et al. Stochastic Variance Reduced Optimization for Nonconvex Sparse Learning, 2016, ICML.
[56] Honglak Lee, et al. Efficient L1 Regularized Logistic Regression, 2006, AAAI.