The conjugate gradient method for optimal control problems

This paper extends the conjugate gradient minimization method of Fletcher and Reeves to optimal control problems. The technique is directly applicable only to unconstrained problems; if terminal conditions and inequality constraints are present, the problem must first be converted to an unconstrained form, e.g., by penalty functions. Only the gradient trajectory, its norm, and one additional trajectory, the actual direction of search, need be stored. These search directions are generated from past and present values of the objective and its gradient. Successive points are determined by linear minimization down these directions, which are always directions of descent. Thus, the method tends to converge, even from poor approximations to the minimum. Since, near its minimum, a general nonlinear problem can be approximated by one with a linear system and quadratic objective, the rate of convergence is studied by considering this case. Here, the directions of search are conjugate and hence the objective is minimized over an expanding sequence of sets. Also, the distance from the current point to the minimum is reduced at each step. Three examples are presented to compare the method with the method of steepest descent. Convergence of the proposed method is much more rapid in all cases. A comparison with a second variational technique is also given in Example 3.
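
To make the iteration described above concrete, the following is a minimal sketch of the underlying Fletcher-Reeves conjugate gradient loop on a finite-dimensional objective; the function names and the quadratic test problem are illustrative assumptions, not taken from the paper. In the optimal control setting the gradient is a trajectory computed via the adjoint equations, but the direction-update rule (new direction from past and present gradient norms) and the linear minimization down each direction are the same.

```python
# Sketch of Fletcher-Reeves conjugate gradients with exact line search.
# Assumed helper names (fletcher_reeves, f, grad) are illustrative.
import numpy as np
from scipy.optimize import minimize_scalar

def fletcher_reeves(f, grad, x0, tol=1e-8, max_iter=100):
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g                       # first direction: steepest descent
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        # Linear minimization down the current direction of search.
        alpha = minimize_scalar(lambda a: f(x + a * d)).x
        x = x + alpha * d
        g_new = grad(x)
        # Fletcher-Reeves coefficient: ratio of squared gradient norms,
        # built only from past and present gradients.
        beta = np.dot(g_new, g_new) / np.dot(g, g)
        d = -g_new + beta * d    # new direction of descent
        g = g_new
    return x

# Example: quadratic objective f(x) = 0.5 x'Qx - b'x.  For such problems
# the search directions are conjugate, so the minimum is reached in at
# most n steps (here n = 2), illustrating the convergence result above.
Q = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
f = lambda x: 0.5 * x @ Q @ x - b @ x
grad = lambda x: Q @ x - b
print(fletcher_reeves(f, grad, np.zeros(2)))  # approximately solves Qx = b
```

Note how little storage the loop needs beyond the current point: the present gradient, its norm, and the one extra vector `d`, matching the storage claim made in the abstract.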