Sub-linear Regret Bounds for Bayesian Optimisation in Unknown Search Spaces

Bayesian optimisation (BO) is a popular method for the efficient optimisation of expensive black-box functions. Traditionally, BO assumes that the search space is known; in many problems, however, this assumption does not hold. To address this, we propose a novel BO algorithm that expands (and shifts) the search space over iterations, controlling the expansion rate through a hyperharmonic series. We further propose a variant of the algorithm that scales to high dimensions. We show theoretically that the cumulative regret of both algorithms grows at a sub-linear rate. Our experiments on synthetic and real-world optimisation tasks demonstrate the superiority of our algorithms over the current state-of-the-art methods for Bayesian optimisation in unknown search spaces.
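The abstract describes the expansion schedule only at a high level. As a rough illustration (not the paper's actual algorithm), the sketch below expands a box-shaped search space at iteration t by an amount proportional to the t-th term of a hyperharmonic series, 1/t^p, so the cumulative expansion follows the partial sum of that series. The function name `expanded_bounds` and the parameters `p` and `gamma` are illustrative assumptions.

```python
import numpy as np

def expanded_bounds(initial_bounds, t, p=1.0, gamma=0.1):
    """Expand a box search space symmetrically at iteration t.

    Illustrative sketch only: the per-iteration expansion is proportional
    to the t-th term of a hyperharmonic series, 1 / t**p, so the cumulative
    expansion up to iteration t is gamma times the partial sum of that series.
    """
    # initial_bounds: list of (lower, upper) pairs, one per dimension.
    lower, upper = np.asarray(initial_bounds, dtype=float).T  # each of shape (d,)
    widths = upper - lower
    # Cumulative expansion factor: partial sum of the hyperharmonic series.
    growth = gamma * sum(1.0 / k**p for k in range(1, t + 1))
    return np.stack([lower - growth * widths, upper + growth * widths], axis=1)

# Example: expand a 2-D unit box over a few iterations.
bounds0 = [(0.0, 1.0), (0.0, 1.0)]
for t in (1, 5, 20):
    print(t, expanded_bounds(bounds0, t))
```

In a BO loop, the acquisition function at iteration t would then be maximised over `expanded_bounds(bounds0, t)` instead of the fixed initial box; the choice of p controls how quickly the search region grows.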
