Bayesian Optimization with Exponential Convergence

This paper presents a Bayesian optimization method with exponential convergence that requires neither auxiliary optimization nor δ-cover sampling. Most Bayesian optimization methods require auxiliary optimization: an additional non-convex global optimization problem to maximize the acquisition function, which can be time-consuming and hard to implement in practice. Moreover, the existing Bayesian optimization method with exponential convergence [1] requires access to δ-cover sampling, which has been considered impractical [1, 2]. Our approach eliminates both requirements and achieves an exponential convergence rate.
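The auxiliary optimization referred to above is the inner maximization of an acquisition function that standard GP-based Bayesian optimization performs at every iteration. Below is a minimal sketch of that standard loop (not the paper's algorithm), assuming a squared-exponential GP surrogate and a GP-UCB acquisition maximized by random candidate search; all function names, parameters, and defaults are illustrative.

    # Sketch of standard GP-based Bayesian optimization, highlighting the
    # auxiliary (inner) global optimization of the acquisition function that
    # the paper's method avoids. Illustrative only; NumPy is the sole dependency.
    import numpy as np

    def rbf_kernel(A, B, lengthscale=0.2):
        # Squared-exponential kernel between the rows of A and B.
        d2 = np.sum((A[:, None, :] - B[None, :, :]) ** 2, axis=-1)
        return np.exp(-0.5 * d2 / lengthscale ** 2)

    def gp_posterior(X, y, Xq, noise=1e-6):
        # GP regression posterior mean and standard deviation at query points Xq.
        K = rbf_kernel(X, X) + noise * np.eye(len(X))
        Kq = rbf_kernel(Xq, X)
        L = np.linalg.cholesky(K)
        alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
        mu = Kq @ alpha
        v = np.linalg.solve(L, Kq.T)
        var = np.clip(1.0 - np.sum(v ** 2, axis=0), 1e-12, None)
        return mu, np.sqrt(var)

    def maximize_acquisition(X, y, bounds, beta=2.0, n_candidates=5000, rng=None):
        # The auxiliary optimization: globally maximize UCB(x) = mu(x) + beta * sigma(x).
        # Done here by dense random search; in practice this inner problem is
        # non-convex and is typically attacked with multi-start local methods.
        rng = np.random.default_rng(rng)
        cands = rng.uniform(bounds[:, 0], bounds[:, 1], size=(n_candidates, len(bounds)))
        mu, sigma = gp_posterior(X, y, cands)
        return cands[np.argmax(mu + beta * sigma)]

    def bayesian_optimize(f, bounds, n_iters=20, seed=0):
        # Outer loop: fit GP to all observations, pick the acquisition maximizer,
        # evaluate the objective there, and repeat.
        rng = np.random.default_rng(seed)
        X = rng.uniform(bounds[:, 0], bounds[:, 1], size=(3, len(bounds)))
        y = np.array([f(x) for x in X])
        for _ in range(n_iters):
            x_next = maximize_acquisition(X, y, bounds, rng=rng)
            X = np.vstack([X, x_next])
            y = np.append(y, f(x_next))
        return X[np.argmax(y)], y.max()

    if __name__ == "__main__":
        f = lambda x: -np.sum((x - 0.3) ** 2)        # toy objective to maximize
        bounds = np.array([[0.0, 1.0], [0.0, 1.0]])  # search box
        x_best, f_best = bayesian_optimize(f, bounds)
        print("best x:", x_best, "best f:", f_best)

Every call to maximize_acquisition is itself a global optimization over the search box, which is the cost and implementation burden the paper removes.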

[1] B. Shubert. A Sequential Method Seeking the Global Maximum of a Function. 1972.

[2] L. C. W. Dixon et al. Global Optima without Convexity. 1978.

[3] D. Mayne et al. Outer approximation algorithm for nondifferentiable optimization problems. 1984.

[4] Regina Hunter Mladineo. An algorithm for finding the global maximum of a multimodal, multivariate function. Math. Program., 1986.

[5] C. D. Perttunen et al. Lipschitzian optimization without the Lipschitz constant. 1993.

[6] Owen J. Eslinger et al. Algorithms for Noisy Problems in Gas Transmission Pipeline Optimization. 2001.

[7] Clara Pizzuti et al. Local tuning and partition strategies for diagonal GO methods. Numerische Mathematik, 2003.

[8] L. Watson et al. Globally optimised parameters for a model of mitotic control in frog egg extracts. Systems Biology, 2005.

[9] Wayne L. Tabor et al. Global and local optimization using radial basis function response surface models. 2007.

[10] Lihong Li et al. Reinforcement Learning in Finite MDPs: PAC Analysis. J. Mach. Learn. Res., 2009.

[11] Carl E. Rasmussen et al. Gaussian Processes for Machine Learning. Adaptive Computation and Machine Learning, 2005.

[12] Andreas Krause et al. Information-Theoretic Regret Bounds for Gaussian Process Optimization in the Bandit Setting. IEEE Transactions on Information Theory, 2009.

[13] Thomas J. Walsh et al. Integrating Sample-Based Planning and Model-Based Reinforcement Learning. AAAI, 2010.

[14] Rémi Munos et al. Optimistic Optimization of Deterministic Functions. NIPS, 2011.

[15] Jia Yuan Yu et al. Lipschitz Bandits without the Lipschitz Constant. ALT, 2011.

[16] Kevin P. Murphy et al. Machine Learning: A Probabilistic Perspective. Adaptive Computation and Machine Learning series, 2012.

[17] Jasper Snoek et al. Practical Bayesian Optimization of Machine Learning Algorithms. NIPS, 2012.

[18] Alexander J. Smola et al. Exponential Regret Bounds for Gaussian Process Bandits with Deterministic Observations. ICML, 2012.

[19] Nando de Freitas et al. Bayesian Optimization in High Dimensions via Random Embeddings. IJCAI, 2013.

[20] Nando de Freitas et al. Bayesian Multi-Scale Optimistic Optimization. AISTATS, 2014.

[21] Matt J. Kusner et al. Bayesian Optimization with Inequality Constraints. ICML, 2014.

[22] Kirthevasan Kandasamy et al. High Dimensional Bayesian Optimisation and Bandits via Additive Models. ICML, 2015.