Maximum likelihood trajectories for continuous-time Markov chains

Continuous-time Markov chains are used to model systems in which both the transitions between states and the time the system spends in each state are random. Many computational problems for such chains have been solved, including computing state distributions as a function of time, estimating parameters, and finding optimal controls. However, the problem of inferring most likely trajectories, where a trajectory is a sequence of states together with the amount of time spent in each, appears to be unsolved. We study three versions of this problem: (i) an initial value problem, in which an initial state is given and we seek the most likely trajectory up to a given final time; (ii) a boundary value problem, in which initial and final states and times are given and we seek the most likely trajectory connecting them; and (iii) trajectory inference under partial observability, analogous to finding maximum likelihood trajectories for hidden Markov models. We show that maximum likelihood trajectories are not always well defined and describe a polynomial-time test for well-definedness. When well-definedness holds, we show that each of the three problems can be solved in polynomial time, and we develop efficient dynamic programming algorithms for doing so.
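
As background for the quantity these problems optimize, the following sketch computes the standard log-density of a fully observed trajectory under a generator (rate) matrix Q, assuming the initial state is given: each visit to state i contributes an exponential holding-time term with exit rate -Q[i, i], and each jump from i to j contributes the factor Q[i, j]. This is a minimal illustration of the trajectory likelihood, not the paper's algorithm; the function name, the censoring flag, and the two-state example are assumptions made here for concreteness.

import numpy as np

def trajectory_log_likelihood(Q, states, holding_times, censor_last=True):
    """Log-density of an observed CTMC trajectory under generator matrix Q.

    states[k] is the k-th visited state and holding_times[k] the time spent
    in it. If censor_last is True, the last holding time is treated as
    right-censored: the chain is only known to have remained in its final
    state at least that long (as when observation stops at a fixed final time).
    """
    log_lik = 0.0
    for k, (i, t) in enumerate(zip(states, holding_times)):
        exit_rate = -Q[i, i]                  # total rate of leaving state i
        if k == len(states) - 1:
            if censor_last:
                log_lik += -exit_rate * t     # P(still in state i after time t)
            else:
                # exact holding time observed, destination of the final jump not
                log_lik += np.log(exit_rate) - exit_rate * t
        else:
            j = states[k + 1]
            # holding-time density times jump probability:
            #   exit_rate * exp(-exit_rate * t) * (Q[i, j] / exit_rate)
            # = Q[i, j] * exp(-exit_rate * t)
            log_lik += np.log(Q[i, j]) - exit_rate * t
    return log_lik

# Illustrative two-state chain; rates chosen arbitrarily.
Q = np.array([[-1.0,  1.0],
              [ 2.0, -2.0]])
print(trajectory_log_likelihood(Q, states=[0, 1, 0], holding_times=[0.5, 0.3, 0.4]))

Under this density one can also see why a most likely trajectory need not exist: if the transition rates around some cycle of states have product greater than 1, inserting extra rapid jumps around that cycle multiplies the density by a factor greater than 1 while adding an arbitrarily small exponential cost, so the density is unbounded. This is consistent with, though not taken from, the abstract's observation that maximum likelihood trajectories are not always well defined.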
