Northwest corner and banded matrix approximations to a Markov chain

In this paper, we consider approximations to discrete-time Markov chains with countably infinite state spaces. We provide a simple, direct proof for the convergence of certain probabilistic quantities when one uses a northwest corner or a banded matrix approximation to the original probability transition matrix. © 1999 John Wiley & Sons, Inc. Naval Research Logistics 46: 187–197, 1999
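
The abstract does not reproduce the construction or the convergence argument, so the following is only an illustrative sketch of the kind of approximation discussed: a northwest corner (finite truncation) of an infinite transition matrix, made stochastic again by a simple first-column augmentation, applied to a hypothetical banded birth-death chain. The chain, its parameters LAM and MU, the augmentation choice, and the power-iteration solver are all assumptions for the example; this is not the authors' method or proof.

```python
import numpy as np


def northwest_corner_approx(p_fn, n, augment_col=0):
    """n x n northwest-corner truncation of an infinite transition matrix,
    with each row's truncated mass returned to one fixed column so the
    result is again stochastic (one common augmentation choice)."""
    P = np.array([[p_fn(i, j) for j in range(n)] for i in range(n)], dtype=float)
    deficit = 1.0 - P.sum(axis=1)   # probability mass lost by cutting off columns >= n
    P[:, augment_col] += deficit
    return P


def stationary(P, tol=1e-12, max_iter=200_000):
    """Stationary distribution of a finite stochastic matrix via power iteration."""
    pi = np.full(P.shape[0], 1.0 / P.shape[0])
    for _ in range(max_iter):
        nxt = pi @ P
        if np.abs(nxt - pi).sum() < tol:
            return nxt
        pi = nxt
    return pi


# Hypothetical example: a discrete-time birth-death chain with up-probability
# LAM and down-probability MU; its transition matrix is banded (tridiagonal).
LAM, MU = 0.3, 0.5


def p(i, j):
    if j == i + 1:
        return LAM
    if j == i - 1 and i > 0:
        return MU
    if j == i:
        return 1.0 - LAM - (MU if i > 0 else 0.0)
    return 0.0


if __name__ == "__main__":
    for n in (10, 20, 40):
        pi_n = stationary(northwest_corner_approx(p, n))
        # The leading stationary probabilities should settle as the corner grows,
        # which is the kind of convergence behavior the paper studies.
        print(n, np.round(pi_n[:4], 6))
```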
