Haishan Ye | Luo Luo | Zhihua Zhang
[1] Daniel A. Spielman, et al. Spectral Graph Theory, 2012.
[2] Naman Agarwal, et al. Second Order Stochastic Optimization in Linear Time, 2016, ArXiv.
[3] Rong Jin, et al. Linear Convergence with Condition Number Independent Access of Full Gradients, 2013, NIPS.
[4] Alexander J. Smola, et al. Efficient mini-batch training for stochastic optimization, 2014, KDD.
[5] Jorge Nocedal and Stephen J. Wright. Numerical Optimization, 1999, Springer.
[6] Andrea Montanari, et al. Convergence rates of sub-sampled Newton methods, 2015, NIPS.
[7] David P. Woodruff. Sketching as a Tool for Numerical Linear Algebra, 2014, Found. Trends Theor. Comput. Sci.
[8] Mark W. Schmidt, et al. A Stochastic Gradient Method with an Exponential Convergence Rate for Finite Training Sets, 2012, NIPS.
[9] R. Dembo, et al. Inexact Newton Methods, 1982.
[10] H. Robbins and S. Monro. A Stochastic Approximation Method, 1951.
[11] V. Lemaire, et al. Design and Analysis of the Nomao Challenge: Active Learning in the Real-World, 2012.
[12] Tong Zhang, et al. Accelerating Stochastic Gradient Descent using Predictive Variance Reduction, 2013, NIPS.
[13] David P. Woodruff, et al. Low rank approximation and regression in input sparsity time, 2013, STOC '13.
[14] Gene H. Golub and Charles F. Van Loan. Matrix Computations, 2013, Johns Hopkins University Press.
[15] Michael W. Mahoney, et al. Sub-Sampled Newton Methods I: Globally Convergent Algorithms, 2016, ArXiv.
[16] Michael W. Mahoney, et al. Sub-Sampled Newton Methods II: Local Convergence Rates, 2016, ArXiv.
[17] Gary L. Miller, et al. Iterative Row Sampling, 2013, IEEE 54th Annual Symposium on Foundations of Computer Science (FOCS).
[18] Martin J. Wainwright, et al. Newton Sketch: A Near Linear-Time Optimization Algorithm with Linear-Quadratic Convergence, 2015, SIAM J. Optim.
[19] Ohad Shamir, et al. Better Mini-Batch Algorithms via Accelerated Gradient Methods, 2011, NIPS.
[20] Jorge Nocedal, et al. On the Use of Stochastic Hessian Information in Optimization Methods for Machine Learning, 2011, SIAM J. Optim.
[21] John C. Platt. Fast training of support vector machines using sequential minimal optimization. In Advances in Kernel Methods, 1999.
[22] Denis J. Dean, et al. Comparative accuracies of artificial neural networks and discriminant analysis in predicting forest cover types from cartographic variables, 1999.
[23] Michael W. Mahoney, et al. Low-distortion subspace embeddings in input-sparsity time and applications to robust linear regression, 2013, STOC '13.
[24] Mark W. Schmidt, et al. Minimizing finite sums with the stochastic average gradient, 2013, Mathematical Programming.
[25] David P. Woodruff, et al. Fast approximation of matrix coherence and statistical leverage, 2011, ICML.
[26] Alexander Shapiro, et al. Stochastic Approximation Approach to Stochastic Programming, 2013.
[27] Peng Xu, et al. Sub-sampled Newton Methods with Non-uniform Sampling, 2016, NIPS.