Convergence Results for Generalized Pattern Search Algorithms are Tight

The convergence theory of generalized pattern search algorithms for unconstrained optimization guarantees, under mild conditions, that the method produces a limit point satisfying first-order optimality conditions related to the local differentiability of the objective function. By exploiting the flexibility allowed by the algorithm, we derive six small-dimensional examples showing that the convergence results are tight in the sense that they cannot be strengthened without additional assumptions, i.e., that certain requirements imposed on pattern search algorithms are not merely artifacts of the proofs. In particular, we first show the necessity of the requirement that some algorithmic parameters be rational. We then show that, even for continuously differentiable functions, the method may generate infinitely many limit points, some of which may have non-zero gradients. Finally, we consider functions that are not strictly differentiable. We show that even when a single limit point is generated, the gradient may be non-zero and zero may be excluded from the generalized gradient; therefore, the method does not necessarily produce a Clarke stationary point.
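To fix ideas, the following is a minimal sketch (not the paper's framework) of the simplest instance of generalized pattern search: coordinate search, which polls the 2n points x +/- delta * e_i and contracts the mesh on failure. The function name coordinate_search and the parameters delta0, tau, and tol are our own illustrative choices; note that the contraction factor tau = 1/2 is rational, in line with the rationality requirement whose necessity the abstract mentions.

    import numpy as np

    def coordinate_search(f, x0, delta0=1.0, tau=0.5, tol=1e-8, max_iter=1000):
        """Illustrative coordinate-search iteration: poll x +/- delta * e_i
        along each coordinate direction; contract the mesh on failure."""
        x = np.asarray(x0, dtype=float)
        delta = delta0
        n = x.size
        for _ in range(max_iter):
            if delta < tol:
                break
            improved = False
            for i in range(n):
                for s in (+1.0, -1.0):
                    y = x.copy()
                    y[i] += s * delta
                    if f(y) < f(x):      # opportunistic poll: accept simple decrease
                        x, improved = y, True
                        break
                if improved:
                    break
            if not improved:
                delta *= tau             # unsuccessful poll: contract the mesh
        return x, delta

For example, coordinate_search(lambda z: z @ z, [1.3, -0.7]) drives the iterate toward the minimizer of a smooth convex quadratic; the paper's examples exhibit objective functions for which such iterations behave as badly as the convergence theory permits.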