Derivative-free optimization addresses general nonlinear optimization problems in cases where obtaining derivative information for the objective and/or the constraint functions is impractical due to computational cost or numerical inaccuracies. Applications of derivative-free optimization arise frequently in engineering design, such as circuit tuning, aircraft configuration, water pipe calibration, and oil reservoir modeling. Until the late 1990s, traditional approaches to derivative-free optimization were based on sampling the objective function, without any attempt to build models of the function or its derivatives. In the late 1990s, model-based trust-region derivative-free methods began to gain popularity, pioneered by Powell and further advanced by Conn, Scheinberg and Toint. These methods build linear or quadratic interpolation models of the objective function and hence can exploit some first- and second-order information. In the last several years, a general convergence theory for these methods was developed, under reasonable assumptions, by Conn, Scheinberg and Vicente. Moreover, Scheinberg and Toint have recently discovered a "self-correcting" property that helps explain the good performance observed in these methods and have shown convergence under very mild requirements.
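To make the model-based trust-region idea concrete, the sketch below implements a bare-bones iteration in Python: it fits a quadratic interpolation model to sampled function values around the current iterate, approximately minimizes the model inside the trust region, and accepts or rejects the step based on the ratio of actual to predicted reduction. The helper names (`quadratic_model`, `trust_region_dfo`), the random sample set, and the crude subproblem solver are illustrative assumptions only; this is not Powell's least-Frobenius-norm update or any of the published algorithms.

```python
import numpy as np

def quadratic_model(f, center, radius, rng):
    """Fit a fully quadratic interpolation model of f around `center`.

    Uses (n+1)(n+2)/2 sample points drawn inside the trust region and
    solves the interpolation conditions by least squares.  A simplified
    illustration, not a least-Frobenius-norm or minimum-change update.
    """
    n = len(center)
    p = (n + 1) * (n + 2) // 2                 # number of quadratic basis terms
    Y = center + radius * rng.uniform(-1.0, 1.0, size=(p, n))
    Y[0] = center                              # keep the current iterate in the set

    def basis(s):
        # monomial basis: 1, s_i, s_i * s_j (i <= j)
        quad = [s[i] * s[j] for i in range(n) for j in range(i, n)]
        return np.concatenate(([1.0], s, quad))

    A = np.array([basis(y - center) for y in Y])
    fvals = np.array([f(y) for y in Y])
    coef, *_ = np.linalg.lstsq(A, fvals, rcond=None)
    return lambda s: basis(s) @ coef

def trust_region_dfo(f, x0, radius=1.0, max_iter=50, seed=0):
    """A bare-bones model-based trust-region DFO loop (illustrative only)."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    fx = f(x)
    for _ in range(max_iter):
        model = quadratic_model(f, x, radius, rng)
        # crude trust-region subproblem: pick the best of many candidate steps
        cands = radius * rng.uniform(-1.0, 1.0, size=(200, len(x)))
        s = cands[np.argmin([model(c) for c in cands])]
        f_new = f(x + s)
        pred = fx - model(s)                   # predicted reduction
        rho = (fx - f_new) / pred if pred > 1e-12 else -1.0
        if rho > 0.1:                          # accept the step
            x, fx = x + s, f_new
            radius *= 2.0 if rho > 0.75 else 1.0
        else:                                  # reject and shrink the region
            radius *= 0.5
    return x, fx

if __name__ == "__main__":
    rosen = lambda z: (1 - z[0])**2 + 100 * (z[1] - z[0]**2)**2
    print(trust_region_dfo(rosen, [-1.2, 1.0]))
```

Practical methods differ from this sketch mainly in how they manage the interpolation set: points are reused and replaced one at a time rather than resampled each iteration, and the geometry of the set is monitored so the model stays well poised, which is where the self-correcting property mentioned above comes into play.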
[1] Boris Polyak. The conjugate gradient method in extremal problems. 1969.
[2] Stefan M. Wild et al. Benchmarking Derivative-Free Optimization Algorithms. SIAM J. Optim., 2009.
[3] M. J. D. Powell. Least Frobenius norm updating of quadratic models that satisfy interpolation conditions. Math. Program., 2004.
[4] Katya Scheinberg et al. Introduction to derivative-free optimization. Math. Comput., 2010.
[5] J. Zowe et al. An iterative two-step algorithm for linear complementarity problems. 1994.