Dynamic solution of the HJB equation and the optimal control of nonlinear systems

Optimal control problems are often solved by exploiting the solution of the so-called Hamilton-Jacobi-Bellman (HJB) partial differential equation, which, however, may be difficult or impossible to solve in specific instances. Herein we circumvent this issue by determining a dynamic solution of the HJB equation, without solving any partial differential equation. The methodology yields a dynamic control law that minimizes a cost functional defined as the sum of the original cost and an additional cost term.
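For context, a minimal sketch of the equation in question, under illustrative assumptions not taken from the abstract itself: for input-affine dynamics $\dot{x} = f(x) + g(x)u$ with running cost $q(x) + u^\top R u$, $R = R^\top > 0$, the stationary HJB equation for the value function $V$ reads
\[
\frac{\partial V}{\partial x}(x)\, f(x) \;-\; \frac{1}{4}\, \frac{\partial V}{\partial x}(x)\, g(x)\, R^{-1} g(x)^\top \frac{\partial V}{\partial x}(x)^\top \;+\; q(x) \;=\; 0,
\]
with the associated optimal feedback
\[
u^\star(x) \;=\; -\frac{1}{2}\, R^{-1} g(x)^\top \frac{\partial V}{\partial x}(x)^\top.
\]
It is this nonlinear partial differential equation in $V$ whose explicit solution the proposed dynamic construction avoids.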