Global optimization problems derived from high-complexity industrial applications (see, e.g., [1, 2, 17, 19, 21, 22]) are often "black box" and determined by multiextremal objective functions. Solving this type of problem efficiently is a great challenge, since such problems have a large number of local minima, often with extremely different function values, and admit no simple mathematical description of the global optima. One natural and powerful assumption on these problems (from both the theoretical and the applied points of view) is that the objective function has bounded slopes; in this case, the methods of Lipschitz global optimization can be applied (see, e.g., [2, 5, 10, 17, 22, 23]). Global optimization problems of this kind are very frequent in practice; examples include general (Lipschitz) nonlinear approximation; solution of nonlinear equations and inequalities; calibration of complex nonlinear system models; black-box systems optimization; and optimization of complex hierarchical systems (related, for example, to facility location and mass-service systems) (see, e.g., [2, 4, 5, 8, 15, 17, 22] and the references given therein). As is well known, using only global information about the behavior of the objective function during its optimization can lead to slow convergence of algorithms to global optimum points. Therefore, particular attention should also be paid to the use of local information in global optimization methods. One traditional way in this context (see, e.g., [5, 9, 10]) is to stop the global procedure and switch to a local optimization method in order to improve the solution and accelerate the search during its final phase. Applying this technique can lead to some problems related to the combination of the global and local phases, the main one being to determine when to stop the global procedure and start the local one.
A premature arrest can provoke the loss of the global solution, whereas a late one can slow down the search. In this talk, other fruitful approaches will be discussed. The first (the so-called local tuning approach; see [12, 13, 17, 18, 20, 22]) allows global optimization algorithms to tune their behavior to the shape of the objective function on different subintervals of the admissible domain by estimating the local Lipschitz constants. The second concerns a continual local improvement of the current best solution incorporated in a global search procedure (see [16, 17]). These techniques become even more efficient when information about the objective function's derivatives is available (see [3, 11, 14]). Several Lipschitz global optimization methods illustrating the above-mentioned concepts will be considered and compared numerically with some known algorithms (see [6, 7, 11, 16, 17]).
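To make the local tuning idea concrete, the following is a minimal, illustrative sketch of a Piyavskii–Shubert-style univariate method in which the Lipschitz constant of each subinterval is estimated from the slopes observed on that interval and its neighbors, mixed with global slope information. The function name, the parameters `r` (reliability factor) and `xi` (a small safeguard), and all defaults are hypothetical choices for this sketch, not taken from the cited papers.

```python
def piyavskii_local_tuning(f, a, b, max_iter=80, r=1.8, xi=1e-8):
    """Sketch of 1D Lipschitz global minimization with local tuning.

    Keeps a sorted list of trial points and, at each iteration,
    subdivides the interval with the smallest lower bound on f.
    """
    xs = [a, b]
    ys = [f(a), f(b)]
    for _ in range(max_iter):
        n = len(xs)
        # Observed slopes on each subinterval.
        slopes = [abs(ys[i + 1] - ys[i]) / (xs[i + 1] - xs[i])
                  for i in range(n - 1)]
        H = max(slopes)                       # global slope estimate
        X_max = max(xs[i + 1] - xs[i] for i in range(n - 1))
        best = None                           # (characteristic, index, L)
        for i in range(n - 1):
            dx = xs[i + 1] - xs[i]
            # Local estimate: largest slope over the interval and neighbors.
            lo, hi = max(i - 1, 0), min(i + 1, n - 2)
            lam = max(slopes[lo:hi + 1])
            # Weigh the global estimate by the relative interval width,
            # so wide, unexplored intervals are treated cautiously.
            gamma = H * dx / X_max
            L_i = r * max(lam, gamma, xi)
            # Piyavskii lower bound of f over [xs[i], xs[i+1]].
            char = (ys[i] + ys[i + 1]) / 2 - L_i * dx / 2
            if best is None or char < best[0]:
                best = (char, i, L_i)
        _, i, L = best
        # New trial point: minimizer of the piecewise-linear minorant.
        x_new = (xs[i] + xs[i + 1]) / 2 - (ys[i + 1] - ys[i]) / (2 * L)
        x_new = min(max(x_new, xs[i] + 1e-12), xs[i + 1] - 1e-12)
        xs.insert(i + 1, x_new)
        ys.insert(i + 1, f(x_new))
    k = min(range(len(ys)), key=lambda j: ys[j])
    return xs[k], ys[k]
```

Because each interval's constant is tuned to the locally observed slopes rather than a single global overestimate, flat regions are explored with fewer trials while steep regions are still treated conservatively, which is the acceleration effect the local tuning approach aims at.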
[1] Yaroslav D. Sergeyev et al., An Information Global Optimization Algorithm with Local Tuning, SIAM J. Optim., 1995.
[2] Lonnie Hamm et al., Global Optimization Methods, 2002.
[3] Yaroslav D. Sergeyev et al., Index information algorithm with local tuning for solving multidimensional global optimization problems with multiextremal constraints, Math. Program., 2011.
[4] John B. Shoven et al., I, Edinburgh Medical and Surgical Journal.
[5] Panos M. Pardalos et al., Encyclopedia of Optimization, 2006.
[6] P. Pardalos et al., Handbook of Global Optimization, 1995.
[7] Yaroslav D. Sergeyev et al., Global Search Based on Efficient Diagonal Partitions and a Set of Lipschitz Constants, SIAM J. Optim., 2006.
[8] A. Zhigljavsky, Stochastic Global Optimization, International Encyclopedia of Statistical Science, 2008.
[9] Yu. G. Evtushenko et al., Parallelization of the global extremum searching process, 2007.
[10] Clara Pizzuti et al., Local tuning and partition strategies for diagonal GO methods, Numerische Mathematik, 2003.
[11] Yaroslav D. Sergeyev et al., A one-dimensional local tuning algorithm for solving GO problems with partially defined constraints, Optim. Lett., 2007.
[12] Yaroslav D. Sergeyev et al., Global one-dimensional optimization using smooth auxiliary functions, Math. Program., 1998.
[13] Yaroslav D. Sergeyev et al., Algorithm 829: Software for generation of classes of test functions with known local and global minima for global optimization, ACM TOMS, 2003.
[14] R. Horst et al., Global Optimization: Deterministic Approaches, 1992.
[15] A. A. Zhigljavsky et al., Stochastic Global Optimization, 2007.
[16] Y. D. Sergeyev et al., Global Optimization with Non-Convex Constraints: Sequential and Parallel Algorithms, Nonconvex Optimization and Its Applications, Vol. 45, 2000.
[17] Pasquale Daponte et al., Two methods for solving optimization problems arising in electronic measurements and electrical engineering, SIAM J. Optim., 1999.
[18] Josef Stoer et al., Numerische Mathematik 1, 1989.
[19] Yaroslav D. Sergeyev et al., A univariate global search working with a set of Lipschitz constants for the first derivative, Optim. Lett., 2009.
[20] Y. Sergeyev et al., Parallel Asynchronous Global Search and the Nested Optimization Scheme, 2001.
[21] Vladimir A. Grishagin et al., Parallel Characteristical Algorithms for Solving Problems of Global Optimization, J. Glob. Optim., 1997.