The effect of perturbations on the convergence rates of optimization algorithms

The problem of minimizing a function F over a set Ω is approximated by a sequence of problems in which F and Ω are replaced by F(n) and Ω(n), respectively. We show how the convergence rates of the conditional gradient and projected gradient methods are affected by this approximation. In particular, it becomes evident how the convergence theory for infinite-dimensional problems, such as control problems, explains the behavior of finite-dimensional implementations.
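To make the setup concrete, the following is a minimal numerical sketch, not taken from the paper: the projected gradient method applied both to an unperturbed problem (a quadratic F over a box Ω) and to a perturbed problem where the gradient carries an O(1/n) error and the feasible set shrinks by O(1/n). All function names, the choice of F, Ω, and the form of the perturbations are illustrative assumptions.

```python
import numpy as np

def project_box(x, lo, hi):
    """Euclidean projection onto the box [lo, hi]^d."""
    return np.clip(x, lo, hi)

def projected_gradient(grad, project, x0, step=0.1, iters=200):
    """Plain projected gradient iteration: x_{k+1} = P(x_k - step * grad(x_k))."""
    x = x0
    for _ in range(iters):
        x = project(x - step * grad(x))
    return x

# Unperturbed problem: F(x) = 0.5 * ||x - c||^2 over Omega = [0, 1]^d.
d = 5
c = np.linspace(-0.5, 1.5, d)      # target point; its projection onto Omega is the minimizer
grad_F = lambda x: x - c

# Perturbed problem at level n (hypothetical perturbations): the gradient of F(n)
# carries an O(1/n) error, and Omega(n) is the box shrunk inward by 1/n.
n = 50
grad_Fn = lambda x: (x - c) + np.ones(d) / n
project_n = lambda x: project_box(x, 1.0 / n, 1.0 - 1.0 / n)

x_star = projected_gradient(grad_F, lambda x: project_box(x, 0.0, 1.0), np.zeros(d))
x_n = projected_gradient(grad_Fn, project_n, np.zeros(d))
print("distance between perturbed and unperturbed minimizers:", np.linalg.norm(x_n - x_star))
```

Running this for increasing n shows the perturbed minimizers approaching the true one, which is the kind of dependence on the approximation level that the paper's convergence-rate analysis quantifies.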