[1] Ruben Martinez-Cantin, et al. Practical Bayesian optimization in the presence of outliers, 2017, AISTATS.
[2] F. Alajaji, et al. Lecture Notes in Information Theory, 2000.
[3] Svetha Venkatesh, et al. Stable Bayesian optimization, 2018, International Journal of Data Science and Analytics.
[4] Yueming Lyu, et al. Efficient Batch Black-box Optimization with Deterministic Regret Bounds, 2019, arXiv.
[5] Michèle Sebag, et al. Machine Learning and Knowledge Discovery in Databases, 2015, Lecture Notes in Computer Science.
[6] Andreas Krause, et al. Stochastic Linear Bandits Robust to Adversarial Attacks, 2020, AISTATS.
[7] Tara Javidi, et al. Gaussian Process bandits with adaptive discretization, 2017, arXiv.
[8] Aditya Gopalan, et al. Bayesian Optimization under Heavy-tailed Payoffs, 2019, NeurIPS.
[9] Maryam Kamgarpour, et al. Mixed Strategies for Robust Optimization of Unknown Objectives, 2020, AISTATS.
[10] Adam D. Bull, et al. Convergence Rates of Efficient Global Optimization Algorithms, 2011, J. Mach. Learn. Res.
[11] Andreas Krause, et al. Information-Theoretic Regret Bounds for Gaussian Process Optimization in the Bandit Setting, 2009, IEEE Transactions on Information Theory.
[12] Tara Javidi, et al. Multiscale Gaussian Process Level Set Estimation, 2019, AISTATS.
[13] Peter A. Flach, et al. Evaluation Measures for Multi-class Subgroup Discovery, 2009, ECML/PKDD.
[14] Nicolas Vayatis, et al. Parallel Gaussian Process Optimization with Upper Confidence Bound and Pure Exploration, 2013, ECML/PKDD.
[15] Maryam Aziz, et al. Pure Exploration in Infinitely-Armed Bandit Models with Fixed-Confidence, 2018, ALT.
[16] Carl E. Rasmussen, et al. Gaussian processes for machine learning, 2005, Adaptive Computation and Machine Learning.
[17] Andreas Krause, et al. Corruption-Tolerant Gaussian Process Bandit Optimization, 2020, AISTATS.
[18] Volkan Cevher, et al. Lower Bounds on Regret for Noisy Gaussian Process Bandit Optimization, 2017, COLT.
[19] Alexander J. Smola, et al. Exponential Regret Bounds for Gaussian Process Bandits with Deterministic Observations, 2012, ICML.
[20] Volkan Cevher, et al. Adversarially Robust Optimization with Gaussian Processes, 2018, NeurIPS.
[21] Jonathan Scarlett, et al. Noisy Adaptive Group Testing: Bounds and Algorithms, 2018, IEEE Transactions on Information Theory.
[22] Justin J. Beland. Bayesian Optimization Under Uncertainty, 2017.
[23] Leslie Pack Kaelbling, et al. Bayesian Optimization with Exponential Convergence, 2015, NIPS.
[24] N. Aronszajn. Theory of Reproducing Kernels, 1950.
[25] John Shawe-Taylor, et al. Regret Bounds for Gaussian Process Bandit Problems, 2010, AISTATS.
[26] Aurélien Garivier, et al. On the Complexity of Best-Arm Identification in Multi-Armed Bandit Models, 2014, J. Mach. Learn. Res.
[27] Dimitris Bertsimas, et al. Nonconvex Robust Optimization for Problems with Constraints, 2010, INFORMS J. Comput.
[28] Nicolò Cesa-Bianchi, et al. Gambling in a rigged casino: The adversarial multi-armed bandit problem, 1995, Proceedings of IEEE 36th Annual Foundations of Computer Science.
[29] Alexandre Bernardino, et al. Unscented Bayesian optimization for safe robot grasping, 2016, IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS).
[30] Alessandro Lazaric, et al. Best Arm Identification: A Unified Approach to Fixed Budget and Fixed Confidence, 2012, NIPS.
[31] Yingkai Li, et al. Stochastic Linear Optimization with Adversarial Corruption, 2019, arXiv.
[32] Zi Wang, et al. Max-value Entropy Search for Efficient Bayesian Optimization, 2017, ICML.
[33] Nello Cristianini, et al. Finite-Time Analysis of Kernelised Contextual Bandits, 2013, UAI.
[34] Daniele Calandriello, et al. Gaussian Process Optimization with Adaptive Sketching: Scalable and No Regret, 2019, COLT.
[35] Andreas Krause, et al. Truncated Variance Reduction: A Unified Approach to Bayesian Optimization and Level-Set Estimation, 2016, NIPS.
[36] Santu Rana, et al. Distributionally Robust Bayesian Quadrature Optimization, 2020, AISTATS.
[37] Sattar Vakili, et al. On Information Gain and Regret Bounds in Gaussian Process Bandits, 2020, AISTATS.
[38] Rémi Munos, et al. Optimistic optimization of a Brownian, 2018, NeurIPS.
[39] Anupam Gupta, et al. Better Algorithms for Stochastic Bandits with Adversarial Corruptions, 2019, COLT.
[40] Aditya Gopalan, et al. On Kernelized Multi-armed Bandits, 2017, ICML.
[41] Tara Javidi, et al. Multi-Scale Zero-Order Optimization of Smooth Functions in an RKHS, 2020, arXiv.
[42] Vincent Y. F. Tan, et al. Tight Regret Bounds for Noisy Optimization of a Brownian Motion, 2020, IEEE Transactions on Signal Processing.
[43] Sattar Vakili, et al. Regret Bounds for Noise-Free Bayesian Optimization, 2020, arXiv.
[44] Volkan Cevher, et al. Robust Maximization of Non-Submodular Objectives, 2018, AISTATS.
[45] Renato Paes Leme, et al. Stochastic bandits robust to adversarial corruptions, 2018, STOC.
[46] Bolei Zhou, et al. Optimization as Estimation with Gaussian Processes in Bandit Settings, 2015, AISTATS.
[47] Jonathan Scarlett, et al. Tight Regret Bounds for Bayesian Optimization in One Dimension, 2018, ICML.
[48] Johannes Kirschner, et al. Distributionally Robust Bayesian Optimization, 2020, AISTATS.
[49] David Janz, et al. Bandit optimisation of functions in the Matérn kernel RKHS, 2020, AISTATS.