Rates of convergence for conditional gradient algorithms near singular and nonsingular extremals

Two conditional gradient algorithms are considered for the problem min_Ω F, with Ω a bounded convex subset of a Banach space. Neither method requires line search; one method needs no Lipschitz constants. Convergence rate estimates are similar in the two cases and depend critically on the continuity properties of a set-valued operator T whose fixed points ξ are the extremals of F in Ω. The continuity properties of T at ξ are determined by the way a(ε) = inf{⟨F′(ξ), y − ξ⟩ : y ∈ Ω, ‖y − ξ‖ ≥ ε} grows with increasing ε. It is shown that for convex F and Lipschitz continuous F′, the algorithms converge like o(1/n), geometrically, or in finitely many steps, according to whether a(ε) > 0 for ε > 0, or a(ε) > Aε² with A > 0, or a(ε) > Aε with A > 0. These three abstract conditions are closely related to established notions of nonsingularity for an important class of optimal control problems with bounded control inputs. The first condition is satisfied (in L¹) when meas{t | s(t) = 0} = 0, where s(·) is the switching function associated with the extremal control ξ(·); the second condition is satisfied when s(·) has finitely many zeros, all simple (typical of the bang-bang extremal); the third condition is satisfied when s(·) is bounded away from zero. Strong or uniform convexity assumptions are not invoked in the main convergence theorems. One of the theorems can be extended to a large subclass of quasiconvex functionals F.
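For concreteness, the following is a minimal sketch of a conditional gradient (Frank–Wolfe) iteration of the kind discussed above, specialized to a finite-dimensional example. It uses the open-loop step size 2/(k+2), so that no line search or Lipschitz constant is needed; the objective F, its gradient, and the choice of the unit simplex as the feasible set Ω are assumptions made for illustration and are not taken from the paper.

```python
# Illustrative conditional-gradient sketch on the probability simplex.
# Step size 2/(k+2): no line search, no Lipschitz constant estimate.
import numpy as np

def conditional_gradient(grad_F, x0, n_iters=200):
    """Run n_iters conditional-gradient steps over the unit simplex."""
    x = np.asarray(x0, dtype=float)
    for k in range(n_iters):
        g = grad_F(x)
        # Linear minimization oracle over the simplex: put all mass on the
        # coordinate with the smallest partial derivative of F.
        y = np.zeros_like(x)
        y[np.argmin(g)] = 1.0
        step = 2.0 / (k + 2.0)      # open-loop step size
        x = x + step * (y - x)      # convex combination stays feasible
    return x

if __name__ == "__main__":
    # Hypothetical objective F(x) = 0.5 * ||x - c||^2 with gradient x - c.
    c = np.array([0.7, 0.5, -0.2])
    grad_F = lambda x: x - c
    print(conditional_gradient(grad_F, x0=np.ones(3) / 3))
```

Under convexity and Lipschitz continuity of the gradient, such iterations exhibit the slow O(1/n)-type behavior in general; the abstract's growth conditions on a(ε) are what sharpen this to o(1/n), geometric, or finite convergence.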