Continuous-Time Markov Chains and Applications: A Singular Perturbation Approach

This is an important contribution to a modern area of applied probability that deals with nonstationary Markov chains in continuous time. This area is becoming increasingly useful in engineering, economics, communication theory, active networking, and so forth, where the Markov-chain system is subject to frequent fluctuations with clusters of states such that the chain fluctuates very rapidly among different states of a cluster but changes less rapidly from one cluster to another. The authors use the setting of singular perturbations, which allows them to study both weak and strong interactions among the states of the chain. This leads to simplifications through the averaging principle, aggregation, and decomposition. The main results include asymptotic expansions of the corresponding probability distributions, occupation measures, limiting normality, and exponential rates. These results give the asymptotic behavior of many controlled stochastic dynamic systems when the perturbation parameter tends to 0. The classical analytical method employs the asymptotic expansions of one-dimensional distributions of the Markov chain as solutions to a system of singularly perturbed ordinary differential equations. Indeed, the asymptotic behavior of solutions of such equations is well studied and understood. A more probabilistic approach, also used by the authors, is based on the tightness of the family of probability measures generated by the singularly perturbed Markov chain, with the corresponding weak convergence properties. Both of these methods are illustrated by practical dynamic optimization problems, in particular by hierarchical production planning in a manufacturing system. An important contribution is the last chapter, Chapter 10, which describes numerical methods for solving various control and optimization problems involving Markov chains.
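The two-time-scale structure described above can be illustrated with a small simulation. The sketch below is purely hypothetical (the clusters, rates, and helper function are invented for illustration, not taken from the book): the generator has the singularly perturbed form Q = Q_fast/ε + Q_slow, where Q_fast is block-diagonal (rapid jumps within a cluster) and Q_slow couples the clusters. For small ε, the occupation times within a cluster should track the fast chain's stationary distribution, which is the essence of the averaging and aggregation results.

```python
import numpy as np

rng = np.random.default_rng(0)
eps = 0.01  # singular perturbation parameter

# Hypothetical example: states {0, 1} form cluster A, states {2, 3} cluster B.
# Q_fast governs rapid within-cluster jumps; Q_slow governs rare cluster switches.
Q_fast = np.array([[-1.0,  1.0,  0.0,  0.0],
                   [ 2.0, -2.0,  0.0,  0.0],
                   [ 0.0,  0.0, -1.0,  1.0],
                   [ 0.0,  0.0,  3.0, -3.0]])
Q_slow = np.array([[-0.5,  0.0,  0.5,  0.0],
                   [ 0.0, -0.5,  0.0,  0.5],
                   [ 0.4,  0.0, -0.4,  0.0],
                   [ 0.0,  0.4,  0.0, -0.4]])
Q = Q_fast / eps + Q_slow  # singularly perturbed generator

def occupation_fractions(Q, x0, T):
    """Simulate a CTMC with generator Q up to time T (Gillespie-style);
    return the fraction of time spent in each state."""
    n = Q.shape[0]
    occ = np.zeros(n)
    t, x = 0.0, x0
    while t < T:
        rate = -Q[x, x]                      # total jump rate out of state x
        dt = min(rng.exponential(1.0 / rate), T - t)
        occ[x] += dt
        t += dt
        if t >= T:
            break
        probs = Q[x].clip(min=0.0) / rate    # jump distribution over other states
        x = rng.choice(n, p=probs)
    return occ / T

occ = occupation_fractions(Q, x0=0, T=200.0)
in_A = occ[0] + occ[1]
# Within cluster A the fast chain has stationary law (2/3, 1/3), so the
# conditional split of time inside A should be close to those values.
print("time in cluster A:", in_A)
print("split within A:", occ[0] / in_A, occ[1] / in_A)
```

The aggregated process (which cluster the chain occupies) then behaves, in the limit, like a two-state Markov chain whose transition rates are the slow rates averaged against the fast chains' stationary distributions.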
Altogether, the monograph consists of three parts. Part I contains necessary, technically rather demanding facts about Markov processes (which in the nonstationary case are defined through martingales). Part II derives the aforementioned asymptotic expansions, and Part III deals with several applications, including Markov decision processes and optimal control of stochastic dynamic systems. This technically demanding book may be out of reach of many readers of Technometrics. However, the use of Markov processes has become common for numerous real-life complex stochastic systems, and to understand the behavior of these systems, the sophisticated mathematical methods described in this book may be indispensable.