Adaptive Regularization Algorithms with Inexact Evaluations for Nonconvex Optimization
S. Bellavia | G. Gurioli | B. Morini | Ph. L. Toint