Continuous time Bayesian networks (CTBNs) describe structured stochastic processes with finitely many states that evolve over continuous time. A CTBN is a directed (possibly cyclic) dependency graph over a set of variables, each of which represents a finite-state, continuous-time Markov process whose transition model is a function of its parents. We address the problem of learning the parameters and structure of a CTBN from fully observed data. We define a conjugate prior for CTBNs and show how it can be used both for Bayesian parameter estimation and as the basis of a Bayesian score for structure learning. Because acyclicity is not a constraint in CTBNs, we show that the structure learning problem is significantly easier, both in theory and in practice, than structure learning for dynamic Bayesian networks (DBNs). Furthermore, because CTBNs can tailor their parameters and dependency structure to the different time granularities at which different variables evolve, they can provide a better fit to continuous-time processes than DBNs with a fixed time granularity.
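To make the parameter-estimation claim concrete, the following is a minimal sketch (not the paper's code; the function name posterior_cim and the unit hyperparameters are illustrative) of the conjugate update for a single conditional intensity matrix. It assumes the standard Gamma prior on each state's exit rate and a Dirichlet prior over transition targets, updated by the sufficient statistics of fully observed data: transition counts M[x, x'] and total dwell times T[x] for one parent instantiation u.

    import numpy as np

    def posterior_cim(M, T, alpha=1.0, tau=1.0):
        """Posterior-mean conditional intensity matrix for one variable X
        under a single parent instantiation u, from fully observed data.

        M     : (k, k) transition counts, M[x, x'] = # of x -> x' jumps under u
        T     : (k,) dwell times, T[x] = total time X spent in state x under u
        alpha : pseudo-count hyperparameter (Gamma shape / Dirichlet counts)
        tau   : pseudo-time hyperparameter (Gamma rate)
        """
        k = len(T)
        M = np.array(M, dtype=float)             # copy so the caller's counts are untouched
        T = np.asarray(T, dtype=float)
        np.fill_diagonal(M, 0.0)                 # a Markov process has no self-jumps
        M_out = M.sum(axis=1)                    # total jumps out of each state
        # Gamma(alpha, tau) prior on each exit rate q_x|u -> posterior mean:
        q = (alpha + M_out) / (tau + T)
        # Dirichlet prior over the k-1 possible targets of each jump:
        counts = alpha * (1.0 - np.eye(k)) + M
        theta = counts / counts.sum(axis=1, keepdims=True)
        Q = q[:, None] * theta                   # off-diagonal intensities q_x * theta_xx'
        np.fill_diagonal(Q, -q)                  # each row of a CIM sums to zero
        return Q

For example, a binary X observed for 10 time units under one parent instantiation (6 jumps 0 -> 1 over 4 units in state 0; 5 jumps 1 -> 0 over 6 units in state 1):

    M = np.array([[0, 6], [5, 0]])
    T = np.array([4.0, 6.0])
    print(posterior_cim(M, T))   # exit rates near 6/4 and 5/6, smoothed toward the prior

As the data grow, the posterior means approach the maximum-likelihood rates M[x]/T[x]. The same sufficient statistics drive the marginal-likelihood score, which decomposes per variable, so parent sets can be searched independently with no acyclicity check.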