Random Models in Nonlinear Optimization

