Stochastic Optimal Control

In previous chapters we assumed that the state variables of the system were known with certainty. When this is not the case, the state of the system over time is a stochastic process, and we face a stochastic optimal control problem in which the state is represented by a controlled stochastic process. We shall consider only the case in which the state equation is perturbed by a Wiener process, so that the state evolves as a Markov diffusion process. In Appendix D.2 we have defined the Wiener process, also known as Brownian motion.

In Section 13.1, we will formulate a stochastic optimal control problem governed by stochastic differential equations involving a Wiener process, known as Itô equations. Our goal will be to synthesize optimal feedback controls for systems subject to Itô equations in a way that maximizes the expected value of a given objective function.

In this chapter, we also assume that the state is (fully) observed. When the system is instead subject to noisy measurements, we face partially observed optimal control problems. In some important special cases, it is possible to separate the problem into two problems: optimal estimation and optimal control. We discuss one such case in Appendix D.4.1. In general, these problems are very difficult and are beyond the scope of this book. Interested readers can consult the references listed in Section 13.5.

In Section 13.2, we will extend the production planning model of Chapter 6 to allow for some uncertain disturbances. We will obtain an optimal production policy for the stochastic production planning problem thus formulated. In Section 13.3, we solve an optimal stochastic advertising problem explicitly. The problem is a modification as well as
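To make the notion of a state equation perturbed by a Wiener process concrete, the following sketch simulates one sample path of a scalar controlled Itô equation dX = b(X, u) dt + σ dW using the standard Euler–Maruyama discretization. This is an illustrative numerical aid, not a construction from this chapter; the drift b, the feedback rule u, and all parameter values are hypothetical choices for demonstration.

```python
import numpy as np

def euler_maruyama(b, sigma, x0, u, T=1.0, n=1000, seed=0):
    """Simulate one path of the controlled Ito equation
        dX = b(X, u(X)) dt + sigma dW
    via Euler-Maruyama with n steps on [0, T]."""
    rng = np.random.default_rng(seed)
    dt = T / n
    x = np.empty(n + 1)
    x[0] = x0
    for k in range(n):
        # Wiener increment over a step of length dt has variance dt
        dw = rng.normal(0.0, np.sqrt(dt))
        x[k + 1] = x[k] + b(x[k], u(x[k])) * dt + sigma * dw
    return x

# Hypothetical example: linear drift with proportional feedback control
path = euler_maruyama(b=lambda x, u: u - 0.5 * x,
                      sigma=0.2, x0=1.0,
                      u=lambda x: -0.1 * x)
```

Because the Wiener increments are independent Gaussians with variance equal to the step length, refining the grid (larger n) yields paths that converge to the diffusion driven by the Itô equation.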