Perturbation theory for Markov reward processes with applications to queueing systems

We study the effect of perturbations in the data of a discrete-time Markov reward process on the finite-horizon total expected reward, the infinite-horizon expected discounted and average reward, and the total expected reward up to a first-passage time. Bounds for the absolute errors of these reward functions are obtained. The results are illustrated for a finite as well as an infinite queueing system (M/M/1/S and M/M/1/∞). Extensions to Markov decision processes and other settings are discussed.
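As a rough numerical illustration of the kind of comparison involved (not the paper's actual bounds), the following Python sketch perturbs the arrival rate of a uniformized M/M/1/S chain and compares the resulting absolute error in the finite-horizon total expected reward with a crude sup-norm bound derived from the recursion V_n = r + P V_{n-1}. All parameter values (S, λ, μ, the horizon, and the 5% perturbation) are illustrative assumptions.

```python
import numpy as np

def mm1s_uniformized(lam, mu, S):
    """Uniformized transition matrix of an M/M/1/S queue (states 0..S)."""
    Lam = lam + mu  # uniformization constant
    P = np.zeros((S + 1, S + 1))
    for i in range(S + 1):
        up = lam / Lam if i < S else 0.0    # arrival (blocked in state S)
        down = mu / Lam if i > 0 else 0.0   # service completion
        P[i, min(i + 1, S)] += up
        P[i, max(i - 1, 0)] += down
        P[i, i] += 1.0 - up - down          # fictitious self-transition
    return P

def finite_horizon_reward(P, r, n):
    """V_n = sum_{k=0}^{n-1} P^k r, the total expected reward over n steps."""
    V = np.zeros_like(r)
    for _ in range(n):
        V = r + P @ V
    return V

# Illustrative parameters (assumptions, not taken from the paper)
S, lam, mu, n = 10, 0.8, 1.0, 50
r = -np.arange(S + 1, dtype=float)            # holding cost: -i per step in state i

P = mm1s_uniformized(lam, mu, S)
P_pert = mm1s_uniformized(lam * 1.05, mu, S)  # 5% perturbation of the arrival rate

V = finite_horizon_reward(P, r, n)
V_pert = finite_horizon_reward(P_pert, r, n)
actual_err = np.max(np.abs(V - V_pert))

# Crude bound: V_k - V~_k = (P - P~) V_{k-1} + P~ (V_{k-1} - V~_{k-1}) with r unperturbed,
# so ||V_n - V~_n||_inf <= sum_{k=0}^{n-1} ||P - P~||_inf * ||V_k||_inf.
dP = np.max(np.abs(P - P_pert).sum(axis=1))   # induced sup-norm of the perturbation
bound, Vk = 0.0, np.zeros_like(r)
for _ in range(n):
    bound += dP * np.max(np.abs(Vk))
    Vk = r + P @ Vk
print(f"actual error {actual_err:.3f} <= crude bound {bound:.3f}")
```

The crude bound above grows with the sup-norm of the unperturbed reward iterates and is typically far looser than bounds tailored to the structure of the chain; it is included only to make the perturbation-versus-bound comparison concrete.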