Approximate Time Bounded Reachability for CTMCs and CTMDPs: A Lyapunov Approach

Time-bounded reachability is a fundamental problem in model checking continuous-time Markov chains (CTMCs) and continuous-time Markov decision processes (CTMDPs) against specifications in continuous stochastic logic. It can be computed by numerically solving a characteristic linear dynamical system, which is computationally expensive. We take a control-theoretic approach and propose a reduction technique that finds another dynamical system of lower dimension (fewer variables), such that numerically solving the reduced dynamical system approximates the solution of the original system with guaranteed error bounds. Our technique generalises lumpability (or probabilistic bisimulation) to a quantitative setting. Our main result is a Lyapunov function characterisation of the difference between the trajectories of the two dynamics; the difference depends on the initial mismatch and decreases exponentially over time. In particular, the Lyapunov function allows us to compute both an error bound between the two dynamics and a convergence rate. Finally, we show that the reduced dynamics can be found in polynomial time using a Schur decomposition of the transition matrix, which also lets us solve the reduced dynamical system efficiently via the exponential of an upper-triangular matrix. For CTMDPs, we generalise the approach by computing a piecewise quadratic Lyapunov function for a switched affine dynamical system. We synthesise a policy for the CTMDP via its reduced-order switched system so that the time-bounded reachability probability exceeds a given threshold, and we provide error bounds that depend on the minimum dwell time of the policy. We demonstrate the efficiency of the technique on examples from queueing networks, for which lumpability produces no state-space reduction and which cannot be solved without reduction.
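
To make the computation concrete, below is a minimal sketch (in Python with numpy/scipy; not the authors' implementation) of the standard transient analysis the abstract refers to: the time-bounded reachability probability of a CTMC is obtained by solving the characteristic linear dynamical system, i.e. evaluating the matrix exponential of the generator with the goal state made absorbing. The second part illustrates the Schur-decomposition idea of working with a (quasi-)upper-triangular factor. The generator, time bound, and state labels are made up for illustration, and no model reduction is actually performed here.

    # Sketch: time-bounded reachability for a small CTMC (illustrative data only).
    import numpy as np
    from scipy.linalg import expm, schur

    # Generator matrix Q of a 4-state CTMC; state 3 is the goal, made absorbing.
    Q = np.array([
        [-3.0,  2.0,  1.0,  0.0],
        [ 0.0, -2.0,  1.0,  1.0],
        [ 1.0,  0.0, -2.0,  1.0],
        [ 0.0,  0.0,  0.0,  0.0],   # absorbing goal state
    ])

    T = 1.5                                  # time bound
    pi0 = np.array([1.0, 0.0, 0.0, 0.0])     # initial distribution (start in state 0)

    # Direct solve of the linear dynamics d/dt pi = pi Q: the reachability
    # probability is the mass in the goal state of pi(T) = pi0 * exp(Q*T).
    p_direct = (pi0 @ expm(Q * T))[3]

    # Schur-based solve: Q = Z R Z^T with R (quasi-)upper-triangular and Z
    # orthogonal, so exp(Q*T) = Z exp(R*T) Z^T; the exponential of the
    # triangular factor is cheap to evaluate (a reduced model would work
    # with a truncated R and Z).
    R, Z = schur(Q)
    p_schur = (pi0 @ (Z @ expm(R * T) @ Z.T))[3]

    print(p_direct, p_schur)   # the two values agree up to numerical error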
