Jie Chen | Ronny Luss
[1] Alexander Shapiro, et al. Stochastic Approximation approach to Stochastic Programming, 2013.
[2] Pietro Liò, et al. Graph Attention Networks, 2017, ICLR.
[3] Alexandre d'Aspremont, et al. Smooth Optimization with Approximate Gradient, 2005, SIAM J. Optim.
[4] Christopher J. C. Burges, et al. From RankNet to LambdaRank to LambdaMART: An Overview, 2010.
[5] Samuel S. Schoenholz, et al. Neural Message Passing for Quantum Chemistry, 2017, ICML.
[6] Tito Homem-de-Mello, et al. On Rates of Convergence for Stochastic Optimization Problems Under Non-Independent and Identically Distributed Sampling, 2008, SIAM J. Optim.
[7] Jure Leskovec, et al. Inductive Representation Learning on Large Graphs, 2017, NIPS.
[8] Cao Xiao, et al. FastGCN: Fast Learning with Graph Convolutional Networks via Importance Sampling, 2018, ICLR.
[9] Shai Ben-David, et al. Understanding Machine Learning: From Theory to Algorithms, 2014.
[10] S. Shalev-Shwartz, et al. Stochastic Gradient Descent, 2014.
[11] Jorge Nocedal, et al. Optimization Methods for Large-Scale Machine Learning, 2016, SIAM Rev.
[12] Yurii Nesterov, et al. First-order methods of smooth convex optimization with inexact oracle, 2013, Mathematical Programming.
[13] Tong Zhang, et al. Accelerating Stochastic Gradient Descent using Predictive Variance Reduction, 2013, NIPS.
[14] Saeed Ghadimi, et al. Stochastic First- and Zeroth-Order Methods for Nonconvex Stochastic Programming, 2013, SIAM J. Optim.
[15] Francis Bach, et al. SAGA: A Fast Incremental Gradient Method With Support for Non-Strongly Convex Composite Objectives, 2014, NIPS.
[16] Yoshua Bengio, et al. Understanding the difficulty of training deep feedforward neural networks, 2010, AISTATS.
[17] Richard S. Zemel, et al. Gated Graph Sequence Neural Networks, 2015, ICLR.
[18] Gregory N. Hullender, et al. Learning to rank using gradient descent, 2005, ICML.
[19] Mark W. Schmidt, et al. Hybrid Deterministic-Stochastic Methods for Data Fitting, 2011, SIAM J. Sci. Comput.
[20] Mark W. Schmidt, et al. Convergence Rates of Inexact Proximal-Gradient Methods for Convex Optimization, 2011, NIPS.
[21] Xavier Bresson, et al. Convolutional Neural Networks on Graphs with Fast Localized Spectral Filtering, 2016, NIPS.
[22] Quoc V. Le, et al. Learning to Rank with Nonsmooth Cost Functions, 2006, NIPS.
[23] Mark W. Schmidt, et al. A simpler approach to obtaining an O(1/t) convergence rate for the projected stochastic subgradient method, 2012, arXiv.
[24] Nikhil S. Ketkar. Stochastic Gradient Descent, 2017.
[25] Alexander J. Smola, et al. Stochastic Variance Reduction for Nonconvex Optimization, 2016, ICML.
[26] Furong Huang, et al. Escaping From Saddle Points - Online Stochastic Gradient for Tensor Decomposition, 2015, COLT.
[27] Joan Bruna, et al. Spectral Networks and Locally Connected Networks on Graphs, 2013, ICLR.
[28] H. Robbins. A Stochastic Approximation Method, 1951.
[29] Jimmy Ba, et al. Adam: A Method for Stochastic Optimization, 2014, ICLR.
[30] Max Welling, et al. Semi-Supervised Classification with Graph Convolutional Networks, 2016, ICLR.