Stochastic Optimal Control Problems

Uncertainty is inherent in most real-world systems. It imposes many disadvantages (and sometimes, surprisingly, advantages) on humankind's efforts, which are usually associated with the quest for optimal results. The systems studied in this book are mainly dynamic, that is, they evolve over time. Moreover, they are described by Itô stochastic differential equations and are therefore often called diffusion models. The basic source of uncertainty in a diffusion model is white noise, which represents the joint effect of a large number of independent random forces acting on the system. Since the systems are dynamic, the relevant decisions (controls), which are made on the basis of the most up-to-date information available to the decision makers (controllers), must also change over time. The decision makers seek an optimal decision among all admissible ones so as to achieve the best expected result related to their goals. Such optimization problems are called stochastic optimal control problems. Stochastic optimal control problems arise in a wide variety of systems, including physical, biological, economic, and management systems, to mention just a few. In this chapter we shall set up a rigorous mathematical framework for stochastic optimal control problems.
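To make the informal description above concrete, the following sketch simulates a hypothetical controlled diffusion by the Euler–Maruyama scheme and compares the expected cost of a feedback control with that of no control at all. The dynamics, the cost functional, and all names (`simulate_cost`, the drift coefficient, the noise intensity 0.2) are illustrative assumptions, not a model from this chapter; the point is only that white noise enters through independent Gaussian increments, and that the control may depend on the current information (t, X_t):

```python
import numpy as np

def simulate_cost(u, x0=1.0, T=1.0, n_steps=200, n_paths=2000, seed=0):
    """Monte Carlo estimate of an (illustrative) expected cost
        J(u) = E[ integral_0^T (X_t^2 + u(t, X_t)^2) dt + X_T^2 ]
    for the controlled Ito SDE (a hypothetical linear example)
        dX_t = u(t, X_t) dt + 0.2 dW_t,   X_0 = x0,
    discretized by the Euler-Maruyama scheme."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    x = np.full(n_paths, x0)
    cost = np.zeros(n_paths)
    for k in range(n_steps):
        t = k * dt
        a = u(t, x)                                  # control enters the drift
        cost += (x**2 + a**2) * dt                   # running cost
        dw = rng.normal(0.0, np.sqrt(dt), n_paths)   # white-noise increment ~ N(0, dt)
        x = x + a * dt + 0.2 * dw                    # Euler-Maruyama step
    cost += x**2                                     # terminal cost
    return cost.mean()

# A feedback control uses only the currently available information (t, X_t):
feedback = lambda t, x: -x                 # push the state toward the origin
no_control = lambda t, x: np.zeros_like(x)

print(simulate_cost(feedback) < simulate_cost(no_control))
```

For this particular example the feedback control drives the state toward the origin and achieves a lower expected cost than doing nothing, which is the kind of comparison a stochastic optimal control problem formalizes over all admissible controls.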