Differentiating the Multipoint Expected Improvement for Optimal Batch Design
Sébastien Marmin | Clément Chevalier | David Ginsbourger