Dynamic programming for stochastic control of discrete systems
This paper treats the general discrete-time linear quadratic stochastic control problem, which is solved in two steps. First, dynamic programming is used to solve the stochastic control problem in which perfect measurements of the state are available. Then the stochastic control problem in which only noisy measurements of a linear operator on the state are available is converted into a new problem in which perfect measurements of the state are available. This conversion is based upon Kalman filter theory and is valid whenever the disturbances and measurement noises are Gaussian.
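The two steps described above can be illustrated numerically. The following is a minimal sketch (not code from the paper), assuming linear dynamics x_{k+1} = A x_k + B u_k + w_k, measurements y_k = C x_k + v_k with Gaussian noises w_k ~ N(0, W), v_k ~ N(0, V), and a quadratic cost with stage weights Q, R and terminal weight Qf; all function and variable names here are illustrative:

```python
import numpy as np

def lqr_backward(A, B, Q, R, Qf, N):
    """Step 1: backward dynamic-programming (Riccati) recursion for the
    finite-horizon LQ problem with perfect state measurements.
    Returns the feedback gains K_0, ..., K_{N-1}."""
    P = Qf
    gains = []
    for _ in range(N):
        # K_k = (R + B' P B)^{-1} B' P A
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        # Riccati update for the cost-to-go matrix
        P = Q + A.T @ P @ A - A.T @ P @ B @ K
        gains.append(K)
    return gains[::-1]  # gains[k] is the gain applied at stage k

def kalman_step(A, B, C, W, V, x_hat, Sigma, u, y):
    """Step 2: one predict/update cycle of the Kalman filter. With
    Gaussian disturbances and noises, the conditional mean x_hat plays
    the role of a perfectly measured state for the controller."""
    # Predict through the dynamics
    x_pred = A @ x_hat + B @ u
    S_pred = A @ Sigma @ A.T + W
    # Update with the noisy measurement y = C x + v
    L = S_pred @ C.T @ np.linalg.inv(C @ S_pred @ C.T + V)
    x_hat_new = x_pred + L @ (y - C @ x_pred)
    Sigma_new = (np.eye(len(x_hat)) - L @ C) @ S_pred
    return x_hat_new, Sigma_new
```

At each stage the controller applies u_k = -K_k x_hat_k, i.e. the perfect-information feedback law evaluated at the filtered estimate, which is exactly the conversion the abstract describes.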