The convergence theory of generalized pattern search algorithms for unconstrained optimization guarantees, under mild conditions, that the method produces a limit point satisfying first-order optimality conditions related to the local differentiability of the objective function. By exploiting the flexibility allowed by the algorithm, we derive six small-dimensional examples showing that the convergence results are tight in the sense that they cannot be strengthened without additional assumptions, i.e., that certain requirements imposed on pattern search algorithms are not merely artifacts of the proofs. In particular, we first show the necessity of the requirement that some algorithmic parameters are rational. We then show that, even for continuously differentiable functions, the method may generate infinitely many limit points, some of which may have non-zero gradients. Finally, we consider functions that are not strictly differentiable. We show that even when a single limit point is generated, the gradient may be non-zero and zero may be excluded from the generalized gradient; therefore, the method does not necessarily produce a Clarke stationary point.
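To make the object under discussion concrete, the following is a minimal sketch of one way a generalized pattern search iteration can be organized: a poll over a positive basis of coordinate directions on a mesh of size Δ, with a single rational expansion/contraction parameter τ. The name `pattern_search` and the specific choices (τ = 2, coordinate directions only, simple decrease) are illustrative assumptions, not the particular algorithm analyzed here; the rationality of τ corresponds to the parameter requirement whose necessity is demonstrated by the first example.

```python
import numpy as np

def pattern_search(f, x0, delta0=1.0, tau=2.0, tol=1e-8, max_iter=10_000):
    """Minimal generalized pattern search sketch (poll step only).

    tau is the mesh expansion/contraction parameter; the convergence
    theory requires such parameters to be rational (here tau = 2).
    """
    x = np.asarray(x0, dtype=float)
    fx = f(x)
    delta = delta0
    n = x.size
    # Positive basis of 2n coordinate directions {+e_i, -e_i}.
    directions = np.vstack([np.eye(n), -np.eye(n)])

    for _ in range(max_iter):
        improved = False
        for d in directions:          # poll the mesh neighbors of x
            trial = x + delta * d
            ft = f(trial)
            if ft < fx:               # simple decrease suffices for GPS
                x, fx = trial, ft
                improved = True
                break
        if improved:
            delta *= tau              # successful poll: coarsen the mesh
        else:
            delta /= tau              # unsuccessful poll: refine the mesh
        if delta < tol:
            break
    return x, fx
```

For instance, `pattern_search(lambda x: float((x**2).sum()), [1.3, -0.7])` converges to the origin. The examples derived in the paper exploit the flexibility left open by such a scheme (the choice of directions, parameters, and objective) to produce limit points that fail the expected first-order optimality conditions.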