STOCHASTIC OPTIMAL CONTROL OF PARTIALLY OBSERVABLE NONLINEAR SYSTEMS

Abstract

This paper presents a new theory for solving the continuous-time stochastic optimal control problem for a very general class of nonlinear (nonautonomous and nonaffine-in-control) systems with partial state information. The proposed theory transforms the nonlinear problem into a sequence of linear-quadratic Gaussian (LQG), time-varying problems that converge (uniformly in time) under very mild conditions of local Lipschitz continuity. These results have previously been presented for deterministic nonlinear systems with perfect state measurements over finite horizons; the present study shows how an additional class of nonlinear problems, involving partially observable stochastic systems, can be handled with the same theory. The method introduces an “approximating sequence of Riccati equations” (ASRE) to explicitly find the error covariance matrix and nonlinear time-varying optimal feedback controllers for such nonlinear systems, which is achieved within the framework of Kalman-Bucy filtering, the separation principle, and LQR theory. The paper shows a practical way of designing optimal feedback control systems for complex nonlinear stochastic problems using a combination of modern LQG estimation and LQ control-design methodologies.
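For orientation, the following is a minimal sketch of the iteration the abstract refers to, assuming the state-dependent factored (SDC) form commonly used in ASRE-type constructions; the symbols A, B, C, G, w, v and the superscript [i] are illustrative notation and not necessarily the paper's own. Writing the dynamics and measurements as

\[
  dx = \bigl[A(x,t)\,x + B(x,t)\,u\bigr]\,dt + G(t)\,dw, \qquad
  dy = C(x,t)\,x\,dt + dv,
\]

the i-th approximation freezes the state-dependent coefficients along the previous iterate,

\[
  dx^{[i]} = \bigl[A\bigl(x^{[i-1]}(t),t\bigr)\,x^{[i]} + B\bigl(x^{[i-1]}(t),t\bigr)\,u^{[i]}\bigr]\,dt + G(t)\,dw, \qquad
  dy^{[i]} = C\bigl(x^{[i-1]}(t),t\bigr)\,x^{[i]}\,dt + dv,
\]

which is a linear time-varying LQG problem. By the separation principle, each such problem is solved with a Kalman-Bucy filter (whose Riccati equation yields the error covariance) together with an LQR gain from the control Riccati equation applied to the state estimate; iterating over i produces the approximating sequence of Riccati equations described above.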