Stochastic Differential Games and Viscosity Solutions of Hamilton--Jacobi--Bellman--Isaacs Equations

In this paper we study zero-sum two-player stochastic differential games with the help of the theory of backward stochastic differential equations (BSDEs). More precisely, we generalize the results of the pioneering work of Fleming and Souganidis [Indiana Univ. Math. J., 38 (1989), pp. 293--314] by considering cost functionals defined by controlled BSDEs and by allowing the admissible control processes to depend on events occurring before the beginning of the game. This extension of the class of admissible control processes has the consequence that the cost functionals become random variables. However, by making use of a Girsanov transformation argument, which is new in this context, we prove that the upper and the lower value functions of the game remain deterministic. This extension of the class of admissible control processes is quite natural and reflects the behavior of players who always use the maximum of available information. Moreover, its combination with BSDE methods, in particular with the notion of stochastic "backward semigroups" introduced by Peng [BSDE and stochastic optimizations, in Topics in Stochastic Analysis, Science Press, Beijing, 1997], allows us to prove a dynamic programming principle for both the upper and the lower value functions of the game in a straightforward way. The upper and the lower value functions are then shown to be the unique viscosity solutions of the upper and the lower Hamilton--Jacobi--Bellman--Isaacs equations, respectively. For this, Peng's BSDE method is extended from the framework of stochastic control theory to that of stochastic differential games.
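
To fix ideas, the following display sketches the setup in illustrative notation (the coefficients $b$, $\sigma$, $f$, $\Phi$ and the admissible classes below are schematic placeholders and do not reproduce the paper's precise assumptions). For an initial datum $(t,x)$ and admissible controls $u$ and $v$ of the two players, the controlled state process and the BSDE defining the cost functional take the form
\[
dX_s = b(s, X_s, u_s, v_s)\,ds + \sigma(s, X_s, u_s, v_s)\,dB_s, \qquad X_t = x,
\]
\[
-dY_s = f(s, X_s, Y_s, Z_s, u_s, v_s)\,ds - Z_s\,dB_s, \qquad Y_T = \Phi(X_T),
\]
with cost functional $J(t,x;u,v) := Y_t$. The lower value function, for instance, is then of the form
\[
W(t,x) = \operatorname*{essinf}_{\beta}\ \operatorname*{esssup}_{u}\ J\bigl(t,x;u,\beta(u)\bigr),
\]
where $\beta$ runs over the nonanticipative strategies of the second player, and the dynamic programming principle is expressed through the stochastic backward semigroup $G^{t,x;u,v}_{t,t+\delta}[\,\cdot\,]$, which evaluates the BSDE on $[t,t+\delta]$ with the indicated terminal condition:
\[
W(t,x) = \operatorname*{essinf}_{\beta}\ \operatorname*{esssup}_{u}\ G^{t,x;u,\beta(u)}_{t,t+\delta}\Bigl[\,W\bigl(t+\delta,\,X^{t,x;u,\beta(u)}_{t+\delta}\bigr)\Bigr].
\]
Letting $\delta \downarrow 0$ identifies $W$, schematically, as the viscosity solution of the lower Hamilton--Jacobi--Bellman--Isaacs equation
\[
\partial_t W(t,x) + H^-\bigl(t,x,W(t,x),DW(t,x),D^2W(t,x)\bigr) = 0, \qquad W(T,x) = \Phi(x),
\]
\[
H^-(t,x,y,p,A) = \sup_{u}\,\inf_{v}\Bigl\{\tfrac{1}{2}\,\mathrm{tr}\bigl(\sigma\sigma^{\top}A\bigr) + p\cdot b + f\bigl(t,x,y,p\,\sigma,u,v\bigr)\Bigr\},
\]
with the coefficients evaluated at $(t,x,u,v)$; the upper value function and the upper equation are obtained symmetrically by exchanging the roles of the players.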