Diagonally modified conditional gradient methods for input constrained optimal control problems

Many iterative methods for constrained minimization problems conform to the general scheme $u_{i+1} \in A_i(\Omega, F, u_i)$, $u_1 \in \Omega$, where $\Omega$ is a set of feasible vectors in a Banach space $X$, $F$ is a payoff function whose minimum is sought in $\Omega$, and $A_i$ is a map with range in the set of subsets of $\Omega$. If $\{(\Omega_i, F_i)\}$ is a sequence of related problems that approximate $(\Omega, F)$ in some sense, with $\Omega_i \subset \Omega_{i+1} \subset \Omega$, then the corresponding diagonal modification of the original algorithm generates iterates via the recursion $u_{i+1} \in A_i(\Omega_i, F_i, u_i)$, $u_1 \in \Omega_1$. If $(\Omega_i, F_i)$ is properly selected, the diagonal modification can compute approximate solutions for $(\Omega, F)$ efficiently in circumstances where the original algorithm is difficult or impossible to implement for $(\Omega, F)$. In particular, this happens for certain gradient-related descent methods and increasingly r...
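
The following is a minimal illustrative sketch, not the paper's algorithm: a diagonally modified conditional gradient (Frank-Wolfe) iteration in Python. All concrete choices are assumptions made for illustration: the payoff is a simple quadratic tracking functional, $\Omega$ is a box-constrained set of controls on $[0,1]$ sampled on a fine master grid, and $\Omega_i$ is the subset of piecewise-constant controls on $2^i$ subintervals, so that $\Omega_i \subset \Omega_{i+1} \subset \Omega$ and the linear subproblem at iteration $i$ is solved only over $\Omega_i$.

    # Sketch of a diagonally modified conditional gradient method (assumptions above).
    import numpy as np

    N = 1024                                   # fine master grid for [0, 1]
    t = (np.arange(N) + 0.5) / N
    u_ref = np.sin(2 * np.pi * t)              # hypothetical target control
    a, b = -0.5, 0.5                           # input (box) constraints defining Omega

    def grad_F(u):
        # gradient of the quadratic payoff F(u) = 0.5 * sum((u - u_ref)**2)
        return u - u_ref

    def linear_oracle(g, level):
        # minimize <g, v> over Omega_level: piecewise constant on 2**level cells, a <= v <= b
        cells = 2 ** level
        v = np.empty(N)
        for k in range(cells):
            sl = slice(k * N // cells, (k + 1) * N // cells)
            # per cell, the optimal constant value sits at a bound, chosen by the sign of sum(g)
            v[sl] = a if g[sl].sum() > 0 else b
        return v

    u = np.zeros(N)                            # u_1 in Omega_1 (a constant control)
    for i in range(1, 11):
        level = i                              # refine Omega_i as the iteration count grows
        g = grad_F(u)
        v = linear_oracle(g, level)
        alpha = 2.0 / (i + 2)                  # open-loop step size
        u = u + alpha * (v - u)                # u_{i+1} = u_i + alpha_i * (v_i - u_i)

    print("final payoff:", 0.5 * np.sum((u - u_ref) ** 2))

Because the coarse piecewise-constant sets are nested inside the finer ones, every iterate remains feasible for the original problem while each linear subproblem is solved over a low-dimensional set, which is the practical point of the diagonal modification.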