A SUCCESSIVE SWEEP METHOD FOR SOLVING OPTIMAL PROGRAMMING PROBLEMS

Abstract: An automatic, finite-step numerical procedure is described for finding exact solutions of nonlinear optimal programming problems. The procedure represents a unification and extension of the steepest-descent and second-variation techniques. It requires the backward integration of the usual adjoint-vector differential equations plus certain matrix differential equations. These integrations correspond, in the ordinary calculus, to finding the first and second derivatives of the performance index, respectively. The matrix equations arise from an inhomogeneous Riccati transformation, which generates a linear 'feedback control law' that preserves the gradient histories on the next step or permits changing them by controlled amounts, while also changing terminal conditions by controlled amounts. Thus, in a finite number of steps, the gradient histories can be made identically zero, as required for optimality, and the terminal conditions satisfied exactly. One forward plus one backward sweep corresponds to one step in the Newton-Raphson technique for finding maxima and minima in the ordinary calculus. As by-products, the procedure produces (a) the functions needed to show that the program is, or is not, a local maximum (the generalized Jacobi test) and (b) the feedback gain programs for neighboring optimal paths to the same, or a slightly different, set of terminal conditions.
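
To make the sweep structure concrete, the following is a minimal sketch in standard second-variation notation, not the paper's full development: it assumes a system $\dot{x} = f(x, u, t)$ with performance index $J = \phi(x(t_f)) + \int_{t_0}^{t_f} L(x, u, t)\, dt$ and Hamiltonian $H = L + \lambda^T f$, and the terminal-constraint terms are omitted for brevity. The backward sweep integrates the adjoint and Riccati-type matrix equations

$$ -\dot{\lambda} = H_x^T, \qquad -\dot{S} = H_{xx} + S f_x + f_x^T S - \left( H_{xu} + S f_u \right) H_{uu}^{-1} \left( H_{ux} + f_u^T S \right), $$

with boundary conditions $\lambda(t_f) = \phi_x^T$ and $S(t_f) = \phi_{xx}$, together with an inhomogeneous vector term $m(t)$ driven by the gradient history $H_u$. The forward sweep then applies the linear feedback control law

$$ \delta u = -H_{uu}^{-1} \left[ \left( H_{ux} + f_u^T S \right) \delta x + f_u^T m + \epsilon H_u^T \right], $$

where the scalar $\epsilon \in (0, 1]$ is the controlled amount by which the gradient history is reduced on each step; taking $\epsilon = 1$ corresponds to the full Newton-Raphson step described above.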