The expected value of a multiplicative performance criterion, the exponential of a quadratic function of the state and control variables, is minimized subject to a discrete-time stochastic linear system with additive Gaussian process and measurement noise. This cost function generalizes the mean quadratic cost criterion and allows a degree of shaping of the probability density function of the quadratic cost. In general, the optimal control law depends upon a gain matrix that operates linearly on the smoothed history of the state vector from the initial time to the current time; this gain matrix explicitly involves the covariance of the estimation errors over the entire state history. The separation theorem holds, although the certainty-equivalence principle does not. Two special cases are of importance. In the first, only the terminal state is costed: the result is a feedback control law, linear in the current state estimate, whose feedback gains depend functionally on the error covariance of the current state estimate. In the second, all intermediate states are costed but there is no process noise beyond the initial-condition uncertainty: the resulting feedback law depends not only on the current state estimate but also on an additional, path-dependent vector.
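The sense in which the exponential criterion "shapes" the density of the quadratic cost can be illustrated numerically. The sketch below is not the paper's derivation; it is a minimal Monte Carlo illustration, with a hypothetical scalar system and arbitrarily chosen parameters (`a`, `b`, `q`, `r`, noise levels, feedback gain, and risk parameter `theta`), showing that the exponential-of-quadratic value exceeds the mean quadratic cost and thus penalizes the upper tail of the cost distribution.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical scalar system: x_{k+1} = a x_k + b u_k + w_k, measurement
# y_k = x_k + v_k, quadratic stage cost q x^2 + r u^2. All values illustrative.
a, b, q, r = 1.0, 1.0, 1.0, 0.1
w_std, v_std = 0.5, 0.5
N, trials = 20, 5000

def run(gain):
    """Simulate one trajectory under u = -gain * xhat, where xhat is a
    scalar Kalman estimate, and return the realized quadratic cost J."""
    x, xhat, P = rng.normal(0.0, 1.0), 0.0, 1.0
    J = 0.0
    for _ in range(N):
        u = -gain * xhat
        J += q * x**2 + r * u**2
        x = a * x + b * u + rng.normal(0.0, w_std)
        # Kalman predict/update for the scalar model
        xhat = a * xhat + b * u
        P = a * a * P + w_std**2
        K = P / (P + v_std**2)
        y = x + rng.normal(0.0, v_std)
        xhat = xhat + K * (y - xhat)
        P = (1.0 - K) * P
    return J

costs = np.array([run(0.8) for _ in range(trials)])
theta = 0.05  # risk-sensitivity parameter; as theta -> 0 the mean cost is recovered
mean_cost = costs.mean()
# Risk-sensitive value (1/theta) log E[exp(theta J)]; by Jensen's inequality
# it is at least the mean cost, and strictly larger when J has spread.
exp_cost = np.log(np.mean(np.exp(theta * costs))) / theta
print(mean_cost, exp_cost)
```

Raising `theta` weights large realizations of the quadratic cost more heavily, which is the shaping effect described above; the optimal gains of the paper's special cases then inherit a dependence on the estimation-error covariance.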