Integral-Partial Differential Equations of Isaacs' Type Related to Stochastic Differential Games with Jumps

In this paper we study zero-sum two-player stochastic differential games with jumps with the help of the theory of backward stochastic differential equations (BSDEs). We generalize the results of Fleming and Souganidis [10] and those of Biswas [3] by considering a controlled stochastic system driven by a d-dimensional Brownian motion and a Poisson random measure, and by associating nonlinear cost functionals defined by controlled BSDEs. Moreover, unlike both papers cited above, we allow the admissible control processes of both players to depend on all events occurring before the beginning of the game. This quite natural extension allows the players to take such earlier events into account, and it makes it even easier to derive the dynamic programming principle. The price to pay is that the cost functionals become random variables, so the upper and the lower value functions of the game are a priori random fields. The use of a new method allows us to prove that the upper and the lower value functions are in fact deterministic. On the other hand, the application of BSDE methods [18] allows us to prove a dynamic programming principle for the upper and the lower value functions in a very straightforward way, as well as the fact that they are the unique viscosity solutions of the upper and the lower integral-partial differential equations of Hamilton-Jacobi-Bellman-Isaacs' type, respectively. Finally, the existence of the value of the game is obtained in this more general setting whenever Isaacs' condition holds.
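To fix ideas, the following is a minimal sketch of the kind of setup the abstract describes; the notation (coefficients b, sigma, gamma, driver f, terminal function Phi, Hamiltonian H) is our own placeholder choice and is not fixed by the abstract. The state process solves a controlled jump-diffusion SDE, the cost functional is the initial value of an associated BSDE, and the upper and lower value functions are defined through essential extrema over controls and nonanticipating strategies.

\[
dX_s^{t,x;u,v} = b(s,X_s^{t,x;u,v},u_s,v_s)\,ds + \sigma(s,X_s^{t,x;u,v},u_s,v_s)\,dB_s
+ \int_E \gamma(s,X_{s-}^{t,x;u,v},e,u_s,v_s)\,\tilde{\mu}(ds,de), \qquad X_t^{t,x;u,v}=x,
\]

\[
Y_s = \Phi(X_T^{t,x;u,v}) + \int_s^T f(r,X_r^{t,x;u,v},Y_r,Z_r,K_r,u_r,v_r)\,dr
- \int_s^T Z_r\,dB_r - \int_s^T\!\!\int_E K_r(e)\,\tilde{\mu}(dr,de), \qquad s\in[t,T],
\]

\[
J(t,x;u,v) := Y_t^{t,x;u,v}, \qquad
W(t,x) := \operatorname*{ess\,inf}_{\beta}\,\operatorname*{ess\,sup}_{u}\, J(t,x;u,\beta(u)), \qquad
U(t,x) := \operatorname*{ess\,sup}_{\alpha}\,\operatorname*{ess\,inf}_{v}\, J(t,x;\alpha(v),v).
\]

Here B is the driving Brownian motion, \(\tilde{\mu}\) the compensated Poisson random measure, u and v the controls of the two players, and \(\alpha,\beta\) nonanticipating strategies. Isaacs' condition is the requirement that the lower and upper Hamiltonians of the associated integral-partial differential equations coincide,
\[
\sup_{u}\inf_{v} H(t,x,\cdot,u,v) = \inf_{v}\sup_{u} H(t,x,\cdot,u,v),
\]
in which case W = U, and this common function is the value of the game.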

[1] Juan Li et al., Stochastic Differential Games and Viscosity Solutions of Hamilton-Jacobi-Bellman-Isaacs Equations, SIAM J. Control Optim., 2008.

[2] G. Barles et al., Backward stochastic differential equations and integral-partial differential equations, 1997.

[3] Shanjian Tang et al., Switching Games of Stochastic Differential Systems, SIAM J. Control Optim., 2007.

[4] G. Barles et al., Second-order elliptic integro-differential equations: viscosity solutions' theory revisited, 2007, arXiv:math/0702263.

[5] P. Souganidis et al., Differential Games and Representation Formulas for Solutions of Hamilton-Jacobi-Isaacs Equations, 1983.

[6] B. Øksendal et al., Risk minimizing portfolios and HJBI equations for stochastic differential games, 2008.

[7] S. Shreve et al., Methods of Mathematical Finance, 2010.

[8] Shige Peng et al., Stochastic optimization theory of backward stochastic differential equations with jumps and viscosity solutions of Hamilton-Jacobi-Bellman equations, 2009.

[9] Catherine Rainer et al., Nash Equilibrium Payoffs for Nonzero-Sum Stochastic Differential Games, SIAM J. Control Optim., 2004.

[10] S. Peng et al., Adapted solution of a backward stochastic differential equation, 1990.

[11] Claude Dellacherie, Sur l'existence de certains ess.inf et ess.sup de familles de processus mesurables, 1978.

[12] Imran H. Biswas, On Zero-Sum Stochastic Differential Games with Jump-Diffusion Driven State: A Viscosity Solution Framework, SIAM J. Control Optim., 2010.

[13] Xunjing Li et al., Necessary Conditions for Optimal Control of Stochastic Systems with Random Jumps, 1994.

[14] S. Peng, A generalized dynamic programming principle and Hamilton-Jacobi-Bellman equation, 1992.

[15] J. Lepeltier et al., Zero-sum stochastic differential games and backward equations, 1995.

[16] P. Lions et al., User's guide to viscosity solutions of second order partial differential equations, 1992, arXiv:math/9207212.