SUCCESSIVE APPROXIMATION METHODS FOR THE SOLUTION OF OPTIMAL CONTROL PROBLEMS

IN THIS paper we present some successive approximation methods for the solution of a general class of optimal control problems. The class of problems considered is known as the Bolza Problem in the Calculus of Variations [1]. The algorithms considered are extensions of the gradient methods due to KELLEY [2] and BRYSON [3] and are similar to the methods proposed by MERRIAM [4, 5]. MERRIAM approaches the problem from the Hamilton-Jacobi viewpoint and restricts himself to the simplified Bolza problem. The algorithm presented is formally equivalent to Newton's Method in Function Space [6, 7], and indeed in some problems it would be better to use Newton's Method. The development in this paper is formal and indicates how we solve these problems on a digital computer. However, under the assumptions we have made, a rigorous treatment of these successive approximation methods can be given. We shall do this elsewhere.

The paper may be divided into 8 sections. In Section 3 we formulate the problem and state the assumptions we have made. In Section 4 we state the first-order necessary conditions of optimality; these are the Euler-Lagrange equations and the transversality condition. Section 5 is devoted to the Second Variation Successive Approximation Method and certain modifications to it. In Section 6 we show how the Second Variation Method is formally equivalent to Newton's Method and also indicate how the linear two-point boundary value problem arising in Newton's Method can be solved in essentially the same way as in the Second Variation Method. In Section 7 we point out certain advantages and disadvantages of the Second Variation Method.
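To fix ideas, the following is a minimal sketch of a first-order gradient method of the Kelley-Bryson type, applied to a simple illustrative problem that is not drawn from this paper: minimize J = (1/2) ∫₀¹ (x² + u²) dt subject to dx/dt = u, x(0) = 1, with the right endpoint free. The Euler discretization, step size, iteration count, and all variable names are assumptions made for illustration only.

```python
# Hypothetical sketch of a first-order (steepest-descent) gradient method
# for an optimal control problem; the example problem and all numerical
# choices below are illustrative assumptions, not taken from the paper.
#
# Problem: minimize J = 0.5 * integral_0^1 (x^2 + u^2) dt,
#          subject to dx/dt = u, x(0) = 1.
# Hamiltonian: H = 0.5*(x^2 + u^2) + lam*u, so H_u = u + lam.

N = 200          # number of Euler time steps (assumed)
dt = 1.0 / N
x0 = 1.0
alpha = 0.5      # gradient step size (assumed, not tuned)

def forward(u):
    """Integrate the state equation dx/dt = u with explicit Euler."""
    x = [x0]
    for k in range(N):
        x.append(x[k] + dt * u[k])
    return x

def cost(u, x):
    """Discretized cost 0.5 * sum over k of (x_k^2 + u_k^2) * dt."""
    return 0.5 * sum((x[k] ** 2 + u[k] ** 2) * dt for k in range(N))

def backward(x):
    """Integrate the adjoint equation d(lam)/dt = -H_x = -x backwards
    from the transversality condition lam(1) = 0 (free right endpoint)."""
    lam = [0.0] * (N + 1)
    for k in range(N - 1, -1, -1):
        lam[k] = lam[k + 1] + dt * x[k + 1]
    return lam

u = [0.0] * N            # initial control guess
history = []
for _ in range(100):
    x = forward(u)
    history.append(cost(u, x))
    lam = backward(x)
    # Steepest-descent update along the gradient H_u = u + lam.
    u = [u[k] - alpha * (u[k] + lam[k]) for k in range(N)]

x = forward(u)
final_cost = cost(u, x)  # should approach 0.5 * tanh(1), about 0.381
```

A second-variation (Newton-type) method of the kind this paper develops would, roughly speaking, replace the fixed step alpha with curvature information from the second variation, at the price of solving a linear two-point boundary value problem on each iteration.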