On the accuracy of the estimated policy function using the Bellman contraction method

In this paper we show that the approximation error of the optimal policy function in the stochastic dynamic programming problem, using the policies defined by the Bellman contraction method, is bounded by a constant (which depends on the modulus of strong concavity of the one-period return function) times the square root of the value function approximation error. Since the Bellman operator is a contraction, it follows that we can control the approximation error of the policy function. This method for estimating the approximation error is robust to small numerical errors in the computation of the value and policy functions.
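The bound described in the abstract can be sketched schematically as follows, where the symbols are illustrative rather than taken from the paper: write g for the optimal policy, g_n for the policy obtained after n applications of the Bellman operator, v for the value function, v_n for its n-th approximation, and alpha for the modulus of strong concavity of the one-period return function. A bound of the stated form would then read

\[
\| g - g_n \|_\infty \;\le\; C(\alpha)\, \| v - v_n \|_\infty^{1/2},
\qquad \text{e.g. } C(\alpha) = \sqrt{2/\alpha},
\]

so that, since the Bellman operator is a contraction with modulus equal to the discount factor beta, \( \| v - v_n \|_\infty \le \beta^n \| v - v_0 \|_\infty \) and the policy error decays at rate \( \beta^{n/2} \). The specific constant \( \sqrt{2/\alpha} \) is only a plausible form consistent with the abstract, not a claim about the paper's exact statement.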