Brief paper: Continuous-time controlled Markov chains with safety upper bound

This study introduces a notion of safety for controlled Markov chains over a continuous-time horizon. The concept is a non-trivial extension of safety control for stochastic systems modelled as discrete-time Markov decision processes, where safety means that the probability distribution of the system state never visits a given forbidden set. In this paper, the forbidden set is characterised by a unit-interval-valued vector that serves as an upper bound on the state probability distribution vector. A probability distribution is then called safe if it does not exceed this upper bound componentwise. Under mild conditions, the author derives two results: (i) necessary and sufficient conditions that guarantee the all-time safety of the probability distributions whenever the initial distribution is safe, and (ii) a characterisation of the supremal set of safe initial probability vectors that remain safe as time passes. In particular, the paper identifies an upper bound on time and shows that if a distribution is safe at all times before that bound, then it is safe at all times. Numerical examples are provided to illustrate the two results.
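To make the safety notion concrete, the following Python sketch simulates the state distribution of a small continuous-time Markov chain under a fixed generator matrix (i.e., a fixed control choice) and checks the componentwise upper-bound condition on a grid of sample times. This is only an illustration under stated assumptions, not the paper's construction: the generator Q, the bound vector alpha, and the initial distribution pi0 are hypothetical, and a finite time grid is used as a rough proxy for the paper's all-time condition.

```python
import numpy as np
from scipy.linalg import expm

# Hypothetical 3-state continuous-time Markov chain: Q is the generator
# (rate) matrix, with each row summing to zero.
Q = np.array([
    [-0.6,  0.4,  0.2],
    [ 0.3, -0.5,  0.2],
    [ 0.1,  0.3, -0.4],
])

alpha = np.array([0.5, 0.6, 0.7])   # unit-interval-valued upper bound vector
pi0 = np.array([0.3, 0.3, 0.4])     # initial state probability distribution


def is_safe(pi, alpha):
    """A distribution is safe if it does not exceed the bound componentwise."""
    return np.all(pi <= alpha)


# For a fixed generator the distribution evolves as pi(t) = pi(0) exp(Q t);
# safety is checked here only at sampled times, not for all t >= 0.
for t in np.linspace(0.0, 10.0, 101):
    pi_t = pi0 @ expm(Q * t)
    if not is_safe(pi_t, alpha):
        print(f"unsafe at t = {t:.1f}: pi(t) = {np.round(pi_t, 3)}")
        break
else:
    print("safe at all sampled times")
```

In this toy setup the initial distribution satisfies the bound, and the sketch merely monitors whether the transient ever violates it; the paper's results instead give checkable conditions and a finite time horizon beyond which no further checking is needed.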