Symmetries and analytical solutions of the Hamilton–Jacobi–Bellman equation for a class of optimal control problems

Summary

The main contribution of this paper is to identify explicit classes of locally controllable second-order systems and optimization functionals for which optimal control problems can be solved analytically, assuming that a differentiable optimal cost-to-go function exists for such problems. An additional contribution is a Lyapunov function for the same classes of systems. The paper studies the Lie point symmetries of the Hamilton–Jacobi–Bellman (HJB) equation for optimal control of second-order nonlinear systems that are affine in a single input and whose cost is quadratic in the input. It is shown that if the HJB equation for a problem in this class admits a dilation symmetry, this symmetry can be used to construct a solution. When the cost on the state preserves the dilation symmetry, solving the optimal control problem reduces to solving a first-order ordinary differential equation. For some cases where the cost on the state breaks the dilation symmetry, the paper presents an alternative method that yields analytical solutions of the HJB equation corresponding to additive control inputs. The relevance of the proposed methodologies is illustrated by several examples for which analytical solutions are found, including the Van der Pol oscillator and mass–spring systems. Furthermore, it is proved that, in the well-known case of the linear quadratic regulator, the quadratic cost is precisely the cost that preserves the dilation symmetry of the equation.
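As a concrete illustration of the problem class the summary refers to, the sketch below writes down a representative second-order single-input system, an input-quadratic cost, and the resulting stationary HJB equation. The specific symbols f, g, q, r and the infinite-horizon setting are assumptions made here for illustration only; the paper's exact formulation may differ.

\[
\dot{x}_1 = x_2, \qquad \dot{x}_2 = f(x_1,x_2) + g(x_1,x_2)\,u,
\qquad
J = \int_0^{\infty}\!\Big( q(x_1,x_2) + \tfrac{r}{2}\,u^2 \Big)\,dt, \quad r > 0 .
\]

For a differentiable optimal cost-to-go V(x_1, x_2), the stationary HJB equation reads

\[
0 = \min_{u}\Big[\, q + \tfrac{r}{2}\,u^2 + V_{x_1}\,x_2 + V_{x_2}\big(f + g\,u\big) \Big],
\]

whose minimizer u* = -(1/r) g V_{x_2} turns it into the first-order partial differential equation

\[
q + V_{x_1}\,x_2 + V_{x_2}\,f - \frac{g^{2}}{2r}\,V_{x_2}^{\,2} = 0 .
\]

If this equation admits a dilation symmetry, that is, it is invariant under a scaling (x_1, x_2, V) \mapsto (\lambda^{a}x_1, \lambda^{b}x_2, \lambda^{c}V), then, as the summary states, a similarity ansatz in the corresponding scaling invariants reduces the problem to a first-order ordinary differential equation. In the linear quadratic case (for instance f = 0, g = 1, q quadratic in the state), a quadratic V solves the equation, and V also serves as a Lyapunov function for the closed loop under the optimal feedback u*.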
