Optimal stable control for nonlinear dynamical systems: an analytical dynamics based approach

This paper presents a method for obtaining optimal stable control for general nonlinear, nonautonomous dynamical systems. The approach is inspired by recent developments in analytical dynamics and by the observation that the Lyapunov criterion for stability of dynamical systems can be recast as a constraint to be imposed on the system. A closed-form expression for the control is obtained that simultaneously minimizes a user-defined control cost at each instant of time and enforces the Lyapunov constraint. The derivation of this expression closely mirrors the development of the fundamental equation of motion used in the study of constrained motion. For this control method to work, the positive definite functions used in the Lyapunov constraint must satisfy a consistency condition, and a class of positive definite functions that satisfies this condition is provided for mechanical systems. To illustrate the broad scope of the method, it is shown that for linear systems a proper choice of these positive definite functions recovers conventional LQR control. Control of the Lorenz system and of a multi-degree-of-freedom nonlinear mechanical system is also considered. Numerical examples demonstrating the efficacy and simplicity of the method are provided.
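As a rough illustration of the central idea, the Python sketch below treats a Lyapunov decay condition as a constraint on a fully actuated Lorenz system and solves for the control in closed form. This is a minimal sketch under stated assumptions, not the paper's exact formulation: it assumes additive control (dx/dt = f(x) + u), the quadratic Lyapunov function V(x) = ||x||^2/2, the constraint dV/dt = -kappa*V, and a minimum-norm (pseudoinverse) solution of that single scalar constraint, in the spirit of the constrained-motion approach described above. All parameter names and values are illustrative.

```python
# Sketch: Lyapunov-constrained control of the Lorenz system (illustrative only).
import numpy as np

SIGMA, RHO, BETA = 10.0, 28.0, 8.0 / 3.0   # classic chaotic Lorenz parameters
KAPPA = 2.0                                 # desired exponential decay rate of V

def lorenz(x):
    """Uncontrolled Lorenz vector field f(x)."""
    return np.array([
        SIGMA * (x[1] - x[0]),
        x[0] * (RHO - x[2]) - x[1],
        x[0] * x[1] - BETA * x[2],
    ])

def control(x):
    """Minimum-norm u enforcing grad(V) . (f(x) + u) = -KAPPA * V(x),
    with V(x) = 0.5 * ||x||^2, so grad(V) = x."""
    grad_v = x
    v = 0.5 * x @ x
    if v < 1e-12:                        # effectively at the origin: no control
        return np.zeros_like(x)
    # One scalar constraint a^T u = b with a = grad_v; pseudoinverse solution.
    b = -KAPPA * v - grad_v @ lorenz(x)
    return grad_v * (b / (grad_v @ grad_v))

# Forward-Euler simulation from a point on the chaotic attractor.
x = np.array([1.0, 1.0, 1.0])
dt, steps = 1e-3, 10000
for _ in range(steps):
    x = x + dt * (lorenz(x) + control(x))

print("final state:", x)                 # driven close to the origin
print("final V:", 0.5 * x @ x)           # decays roughly as V(0) * exp(-KAPPA*t)
```

Because the control exactly enforces dV/dt = -kappa*V along trajectories, V decays exponentially and the state is steered to the origin despite the chaotic open-loop dynamics; the per-step cost minimized here is simply ||u||^2, whereas the paper allows a general user-defined control cost.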
