Optimal Control of Diffusion Processes

This chapter deals with completely observable stochastic control problems for diffusion processes described by SDEs. The decision maker chooses an optimal decision from among all admissible ones so as to achieve a given goal. Namely, for a control process, its response evolves according to a (controlled) SDE and a payoff on a finite time interval is given. The controller wants to minimize (or maximize) the payoff by choosing an appropriate control process from among all admissible ones. Here we consider three types of control processes:

1. $(\mathcal{F}_t)$-progressively measurable processes.
2. Brownian-adapted processes.
3. Feedback controls.

In order to analyze these problems, we mainly use the dynamic programming principle (DPP) for the value function. The remainder of this chapter is organized as follows. Section 2.1 presents the formulation of control problems and basic properties of value functions, as preliminaries for later sections. Section 2.2 focuses on the DPP. Although the DPP is known as a two-stage optimization method, we will formulate it by using a semigroup and characterize the value function via this semigroup. In Sect. 2.3, we deal with verification theorems, which give recipes for finding optimal Markovian policies. Section 2.4 considers a class of Merton-type optimal investment models, as an application of the previous results.
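As an illustrative sketch of this setting (the symbols $b$, $\sigma$, $f$, $g$ below are placeholders chosen for illustration; the precise formulation and its assumptions appear in Sect. 2.1), the response $X$ to a control process $u$, the payoff on a finite horizon $[0, T]$, and the two-stage optimization expressed by the DPP can be written as:

```latex
% Controlled SDE: the response X to a control process u, driven by a Brownian motion W
dX_s = b(X_s, u_s)\,ds + \sigma(X_s, u_s)\,dW_s, \qquad X_t = x,

% Payoff on the finite time interval [t, T], with running cost f and terminal cost g
J(t, x; u) = \mathbb{E}\Big[\int_t^T f(X_s, u_s)\,ds + g(X_T)\Big],

% Value function: infimum of the payoff over all admissible control processes
V(t, x) = \inf_{u} J(t, x; u),

% Dynamic programming principle: optimize first over [t, t+h], then over [t+h, T]
V(t, x) = \inf_{u} \mathbb{E}\Big[\int_t^{t+h} f(X_s, u_s)\,ds + V(t+h, X_{t+h})\Big],
\qquad 0 \le h \le T - t.
```

The last identity is the two-stage form of the DPP referred to above; Sect. 2.2 recasts it in terms of a semigroup acting on the value function.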