Gaussian process (GP) models are non-parametric, black-box models that represent a relatively new method for system identification. Because of their probabilistic nature, the optimization of GP models is based on maximizing the probability of the model, which is computed via the marginal likelihood. The approaches commonly used to maximize the marginal likelihood of GP models are deterministic optimization methods. However, their success depends critically on the initial values. In addition, the marginal likelihood function often has many local optima in which a deterministic method can become trapped. Stochastic optimization methods can therefore be considered as an alternative approach. In this paper we test their applicability to GP model optimization. We performed a comparative study of three stochastic algorithms: the genetic algorithm, differential evolution, and particle swarm optimization. Empirical tests were carried out on a benchmark problem of modeling the concentration of CO2 in the atmosphere. The results indicate that, with proper tuning, differential evolution and particle swarm optimization significantly outperform the conjugate gradient method.
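The core idea can be sketched as follows: GP hyperparameters (here, the length-scale, signal variance, and noise variance of a squared-exponential kernel) are tuned by minimizing the negative log marginal likelihood with a stochastic optimizer instead of a gradient-based one. This is a minimal illustration assuming SciPy's differential evolution implementation and a small synthetic dataset; it is not the experimental setup used in the paper.

```python
import numpy as np
from scipy.optimize import differential_evolution

# Illustrative synthetic data (not the CO2 benchmark from the paper)
rng = np.random.default_rng(0)
X = np.linspace(0.0, 5.0, 30)[:, None]
y = np.sin(X).ravel() + 0.1 * rng.standard_normal(30)

def neg_log_marginal_likelihood(theta):
    """Negative log marginal likelihood of a GP with a squared-exponential
    kernel; theta = (log length-scale, log signal var., log noise var.)."""
    ell, sf2, sn2 = np.exp(theta)            # log-space keeps parameters positive
    sqdist = (X - X.T) ** 2                  # pairwise squared distances
    K = sf2 * np.exp(-0.5 * sqdist / ell**2) + (sn2 + 1e-9) * np.eye(len(X))
    L = np.linalg.cholesky(K)                # K = L L^T
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    return (0.5 * y @ alpha                  # data-fit term
            + np.log(np.diag(L)).sum()       # complexity term, 0.5*log|K|
            + 0.5 * len(y) * np.log(2 * np.pi))

# Stochastic global search over the log-hyperparameter box
bounds = [(-5.0, 3.0)] * 3
result = differential_evolution(neg_log_marginal_likelihood, bounds, seed=1)
ell, sf2, sn2 = np.exp(result.x)
print(f"length-scale={ell:.3f}, signal var={sf2:.3f}, noise var={sn2:.3f}")
```

Unlike the conjugate gradient method discussed above, differential evolution needs no starting point or analytic gradients; only box bounds on the (log) hyperparameters are supplied, which mitigates the local-optima problem at the cost of more likelihood evaluations.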