Multi-objective Bayesian global optimization for continuous problems and applications
A common approach to problems with expensive function evaluations is Bayesian Global Optimization, rather than Evolutionary Algorithms. However, the execution time of multi-objective Bayesian Global Optimization (MOBGO) itself is still too long, even though it requires only a few function evaluations. The high cost of MOBGO is two-fold: on the one hand, MOBGO requires an infill criterion to be evaluated many times, and the computational complexity of existing infill criteria has so far been very high; on the other hand, the optimizer, which searches for an optimal solution according to the surrogate models, is not sufficiently efficient. The main contributions of this thesis are: 1. Reducing the computational complexity of a well-known infill criterion, the Expected Hypervolume Improvement (EHVI), to $O(n \log n)$ in both the 2-D and 3-D cases; 2. Proposing a new criterion, the Truncated Expected Hypervolume Improvement, to make full use of a-priori knowledge of the objective functions whenever it is available; 3. Proposing another infill criterion, the Expected Hypervolume Improvement Gradient, to improve the convergence of the optimizer in MOBGO.
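The Expected Hypervolume Improvement mentioned above is the expected gain in dominated hypervolume when a candidate point, whose objective values follow the Gaussian predictive distribution of the surrogate model, is added to the current Pareto-front approximation. The following is a minimal Monte Carlo sketch for the 2-D minimization case, for illustration only; it is a naive baseline, not the $O(n \log n)$ algorithm contributed by the thesis, and the function names are my own.

```python
import numpy as np

def hypervolume_2d(points, ref):
    """Area dominated by `points` relative to reference point `ref`
    (2-D minimization): union of the rectangles [f1, r1] x [f2, r2]."""
    # Keep only points that actually dominate the reference point,
    # then sweep over f1 in ascending order.
    pts = sorted(p for p in points if p[0] < ref[0] and p[1] < ref[1])
    area, f2_best = 0.0, ref[1]
    for f1, f2 in pts:
        if f2 < f2_best:  # this point contributes a new rectangle
            area += (ref[0] - f1) * (f2_best - f2)
            f2_best = f2
    return area

def ehvi_mc(mu, sigma, front, ref, n_samples=20000, seed=0):
    """Monte Carlo estimate of the 2-D Expected Hypervolume Improvement:
    average hypervolume gain over samples drawn from the independent
    Gaussian prediction N(mu, diag(sigma^2)) at a candidate point."""
    rng = np.random.default_rng(seed)
    base = hypervolume_2d(front, ref)
    samples = rng.normal(mu, sigma, size=(n_samples, 2))
    gains = [hypervolume_2d(list(front) + [tuple(s)], ref) - base
             for s in samples]
    return float(np.mean(gains))
```

For example, with `front = [(2.0, 2.0)]` and `ref = (4.0, 4.0)`, a candidate predicted at `mu = (1.0, 1.0)` with vanishing uncertainty yields an EHVI of 5.0, the deterministic hypervolume gain of that point. The Monte Carlo cost per evaluation is what motivates the closed-form, asymptotically fast computation developed in the thesis.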