We generalize the results of Fleming and Souganidis (1989) on zero-sum stochastic differential games to the case of unbounded controls. We do so by proving a dynamic programming principle via a covering argument, rather than relying on the discrete approximation (used together with a comparison principle) of Fleming and Souganidis. Also in contrast with Fleming and Souganidis, we define the payoff through a doubly reflected backward stochastic differential equation (BSDE). In the degenerate case of a single controller, the value function is closely related to second-order doubly reflected BSDEs.
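For context, a doubly reflected BSDE in its generic form (the notation below is the standard one from the literature, not necessarily the paper's; $f$ is a driver, $\xi$ a terminal condition, and $L \le U$ are lower and upper obstacles) reads:

\begin{align*}
Y_t &= \xi + \int_t^T f(s, Y_s, Z_s)\,ds + (K^+_T - K^+_t) - (K^-_T - K^-_t) - \int_t^T Z_s\,dW_s, \\
L_t &\le Y_t \le U_t, \quad t \in [0,T],
\end{align*}

where $K^+$ and $K^-$ are nondecreasing processes that push $Y$ up from $L$ and down from $U$, acting only when the obstacles are touched (Skorokhod minimality conditions):

\[
\int_0^T (Y_t - L_t)\,dK^+_t = 0, \qquad \int_0^T (U_t - Y_t)\,dK^-_t = 0.
\]

Defining the game's payoff through such an equation, rather than through a cost functional as in Fleming and Souganidis, is what allows the obstacles to encode the players' stopping constraints.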