Determination of optimal feedback terminal controllers for general boundary conditions using generating functions

Given a nonlinear system and a performance index to be minimized, we present a general approach to expressing the finite-time optimal feedback control law applicable to different types of boundary conditions. Starting from the necessary conditions for optimality, represented as a Hamiltonian system, we solve the Hamilton-Jacobi equation for the generating function of a specific canonical transformation. This enables us to obtain the optimal feedback control for fundamentally different sets of boundary conditions using only a series of algebraic manipulations and partial differentiations. Furthermore, the proposed approach reveals that the optimal cost functions for a given dynamical system can be decomposed into a single generating function, which depends only on the dynamics, plus a term representing the boundary conditions. This result is formalized as a theorem. The procedure offers an advantage over methods rooted in dynamic programming, which require solving the Hamilton-Jacobi-Bellman equation anew for each type of boundary condition. The cost of this versatility is a doubling of the dimension of the partial differential equation to be solved.
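
As a concrete illustration of the construction described above, the equations below give a minimal sketch of a generating-function formulation for a terminal controller that drives the state to a prescribed final value x_f. The symbols F_1, H^*, and the specific argument list (x, x_f, t) are notational choices made for this sketch and need not match the paper's own notation or its exact derivation.

\documentclass{article}
\usepackage{amsmath}
\begin{document}

% Nonlinear dynamics and Bolza-type performance index.
\begin{gather*}
\dot{x} = f(x,u,t), \qquad
J = \phi\bigl(x(t_f)\bigr) + \int_{t_0}^{t_f} L(x,u,t)\,dt, \\[4pt]
% Hamiltonian of the necessary conditions; u^* eliminates the control.
H(x,\lambda,u,t) = L(x,u,t) + \lambda^{\mathsf T} f(x,u,t), \qquad
\left.\frac{\partial H}{\partial u}\right|_{u=u^*} = 0, \qquad
H^*(x,\lambda,t) = H\bigl(x,\lambda,u^*(x,\lambda,t),t\bigr), \\[4pt]
% Generating function of the canonical transformation from (x(t),\lambda(t))
% to the terminal pair (x_f,\lambda_f): Hamilton-Jacobi equation and
% costate relations.
\frac{\partial F_1}{\partial t}(x,x_f,t)
  + H^*\!\left(x,\frac{\partial F_1}{\partial x}(x,x_f,t),t\right) = 0, \qquad
\lambda(t) = \frac{\partial F_1}{\partial x}, \qquad
\lambda_f = -\frac{\partial F_1}{\partial x_f}, \\[4pt]
% Hard-constraint terminal feedback law: substitute the costate into u^*.
u(t) = u^*\!\left(x(t),\,\frac{\partial F_1}{\partial x}\bigl(x(t),x_f,t\bigr),\,t\right).
\end{gather*}

\end{document}

In generating-function approaches of this kind, feedback laws for other boundary conditions (for example, a soft terminal penalty) are typically recovered through Legendre transformations between generating functions of different kinds, which is consistent with the "algebraic manipulations and partial differentiations" mentioned above.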
