Estimation, Optimization, and Parallelism when Data is Sparse