A general procedure is presented for constructing and analyzing approximations of dynamic programming models. The models considered are the monotone contraction operator models of Denardo (1967), which include Markov decision processes and stochastic games with a criterion of discounted present value over an infinite horizon, as well as many finite-stage dynamic programs. The approximations are typically achieved by replacing the original state and action spaces by subsets. Tight bounds are obtained for the distances between the optimal return function in the original model and (1) the extension of the optimal return function in the approximate model and (2) the return function associated with the extension of an optimal policy in the approximate model. Conditions are also given under which the sequence of bounds associated with a sequence of approximating models converges to zero.
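To indicate the flavor of such bounds, the following is a standard contraction-mapping sketch under assumed notation, not the paper's exact statement. Suppose the original model's return operator $T$ is a contraction with modulus $\beta < 1$ in the supremum norm, with fixed point $f^*$ (the optimal return function), and suppose the approximating operator $\hat{T}$, with fixed point $\hat{f}^*$, satisfies $\|Tg - \hat{T}g\| \le \epsilon$ for all return functions $g$. Then

\[
\|f^* - \hat{f}^*\| = \|Tf^* - \hat{T}\hat{f}^*\|
\le \|Tf^* - T\hat{f}^*\| + \|T\hat{f}^* - \hat{T}\hat{f}^*\|
\le \beta\,\|f^* - \hat{f}^*\| + \epsilon,
\]

so $\|f^* - \hat{f}^*\| \le \epsilon/(1-\beta)$. Driving $\epsilon$ to zero along a sequence of approximating models yields convergence of the corresponding bounds, as in the last sentence above.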
[1] E. Denardo. Contraction mappings in the theory underlying dynamic programming. 1967.
[2] B. Fox. Discretizing dynamic programs. 1973.
[3] J. Wessels. Markov programming by successive approximations with respect to weighted supremum norms. Advances in Applied Probability, 1976.
[4] D. Bertsekas. Convergence of discretization procedures in dynamic programming. 1975.
[5] D. H. Wagner. Survey of measurable selection theorems. 1977.
[6] A. Thomas et al. Models for optimal capacity expansion. 1977.
[7] K. Hinderer. On approximate solutions of finite-stage dynamic programs. 1978.
[8] W. Whitt. Representation and approximation of noncooperative sequential games. 1980.