Learning Bayesian networks from big data with greedy search: computational complexity and efficient implementation

Learning the structure of Bayesian networks from data is known to be a computationally challenging, NP-hard problem. The literature has long investigated how to perform structure learning from data containing large numbers of variables, following a general interest in high-dimensional applications (“small n, large p”) in systems biology and genetics. More recently, data sets with large numbers of observations (so-called “big data”) have become increasingly common; these data sets are not necessarily high-dimensional and sometimes contain only a few tens of variables, depending on the application. We revisit the computational complexity of Bayesian network structure learning in this setting and show that the common choice of measuring complexity by the number of estimated local distributions leads to unrealistic time complexity estimates for the most common class of score-based algorithms, greedy search. We then derive more accurate expressions under common distributional assumptions. These expressions suggest that the speed of Bayesian network learning can be improved by taking advantage of closed-form estimators for local distributions with few parents. Furthermore, we find that using predictive instead of in-sample goodness-of-fit scores improves both speed and, as previously observed by Chickering and Heckerman (Stat Comput 10:55–62, 2000), the accuracy of network reconstruction. We demonstrate these results on large real-world environmental and epidemiological data and on reference data sets available from public repositories.
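To make the class of algorithms the abstract discusses concrete, the following is a minimal sketch of score-based greedy search: hill climbing over DAGs with the BIC score on a toy binary data set. It is an illustration under simplifying assumptions (three binary variables, plug-in maximum-likelihood estimates, no score caching), not the authors' implementation, which is based on the bnlearn R package.

```python
import math
import random

random.seed(0)

# Synthetic binary data: A and B independent, C = (A OR B) flipped with 10% noise.
data = []
for _ in range(500):
    a, b = random.randint(0, 1), random.randint(0, 1)
    c = (a | b) if random.random() > 0.1 else 1 - (a | b)
    data.append({"A": a, "B": b, "C": c})

nodes = ["A", "B", "C"]

def bic(node, parents):
    """BIC contribution of one node: maximised log-likelihood of node | parents
    minus a penalty of 0.5 * log(N) per free parameter."""
    counts = {}
    for row in data:
        key = tuple(row[p] for p in parents)
        cell = counts.setdefault(key, {})
        cell[row[node]] = cell.get(row[node], 0) + 1
    ll = 0.0
    for dist in counts.values():
        total = sum(dist.values())
        for n in dist.values():
            ll += n * math.log(n / total)
    # binary node: 1 free parameter per parent configuration
    return ll - 0.5 * math.log(len(data)) * 2 ** len(parents)

def score(dag):
    """Decomposable network score: the sum of per-node BIC terms."""
    return sum(bic(v, sorted(dag[v])) for v in nodes)

def acyclic(dag):
    """DFS cycle check; dag maps each node to its parent set."""
    seen, stack = set(), set()
    def visit(v):
        if v in stack:
            return False
        if v in seen:
            return True
        seen.add(v)
        stack.add(v)
        ok = all(visit(c) for c in nodes if v in dag[c])
        stack.discard(v)
        return ok
    return all(visit(v) for v in nodes)

def neighbours(dag):
    """All graphs one arc addition, deletion or reversal away."""
    for x in nodes:
        for y in nodes:
            if x == y:
                continue
            if x in dag[y]:                        # arc x -> y exists
                d = {v: set(dag[v]) for v in nodes}
                d[y].discard(x)                    # deletion
                yield d
                r = {v: set(dag[v]) for v in nodes}
                r[y].discard(x)
                r[x].add(y)                        # reversal
                yield r
            elif y not in dag[x]:                  # no arc in either direction
                a = {v: set(dag[v]) for v in nodes}
                a[y].add(x)                        # addition
                yield a

def hill_climb():
    dag = {v: set() for v in nodes}                # start from the empty graph
    current = score(dag)
    while True:
        best_dag, best = None, current
        for cand in neighbours(dag):
            if acyclic(cand):
                s = score(cand)
                if s > best + 1e-9:                # strict improvement only
                    best, best_dag = s, cand
        if best_dag is None:                       # local optimum reached
            return dag, current
        dag, current = best_dag, best

dag, final_score = hill_climb()
print({v: sorted(ps) for v, ps in dag.items()})
```

Because the score is decomposable, each candidate move changes the parent set of a single node, so a cached implementation only needs to re-estimate one local distribution per move; this is why counting estimated local distributions is the conventional, and as the abstract argues potentially misleading, measure of complexity.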

[1] G. Roussas et al., A First Course in Mathematical Statistics, 1976.

[2] P. Larrañaga et al., Structure Learning of Bayesian Networks by Genetic Algorithms: A Performance Analysis of Control Parameters, IEEE Trans. Pattern Anal. Mach. Intell., 1996.

[3] N. Friedman et al., Learning Belief Networks in the Presence of Missing Values and Hidden Variables, ICML, 1997.

[4] D. M. Chickering et al., Large-Sample Learning of Bayesian Networks is NP-Hard, J. Mach. Learn. Res., 2002.

[5] J. Cussens, Bayesian Network Learning with Cutting Planes, UAI, 2011.

[6] G. F. Cooper et al., A Bayesian Method for the Induction of Probabilistic Networks from Data, Machine Learning, 1992.

[7] N. Draper et al., Applied Regression Analysis, 3rd ed., 1998.

[8] P. Spirtes et al., Causation, Prediction, and Search, 1993.

[9] J. M. Puerta et al., Ant Colony Optimization for Learning Bayesian Networks, Int. J. Approx. Reason., 2002.

[10] J. Pearl, Chapter 2: Bayesian Inference, 1988.

[11] D. M. Chickering et al., Learning Bayesian Networks is NP-Complete, 1994.

[12] P. Norvig et al., Artificial Intelligence: A Modern Approach, 1995.

[13] D. M. Chickering et al., A Transformational Characterization of Equivalent Bayesian Network Structures, UAI, 1995.

[14] S. Moral et al., Mixtures of Truncated Exponentials in Hybrid Bayesian Networks, ECSQARU, 2001.

[15] S. Sheik et al., Reservoir Computing Compensates Slow Response of Chemosensor Arrays Exposed to Fast Varying Gas Concentrations in Continuous Monitoring, 2015.

[16] N. Wermuth et al., Graphical Models for Associations between Variables, Some of Which Are Qualitative and Some Quantitative, 1989.

[17] M. Zaffalon et al., Learning Bayesian Networks with Thousands of Variables, NIPS, 2015.

[18] J. Suzuki et al., An Efficient Bayesian Network Structure Learning Strategy, New Generation Computing, 2016.

[19] P. Baldi et al., Parameterized Neural Networks for High-Energy Physics, The European Physical Journal C, 2016.

[20] F. Harary et al., Graphical Enumeration, 1973.

[21] G. Schwarz, Estimating the Dimension of a Model, 1978.

[22] T. Heskes et al., Learning Sparse Causal Models is not NP-hard, UAI, 2013.

[23] D. M. Chickering et al., Optimal Structure Identification with Greedy Search, J. Mach. Learn. Res., 2002.

[24] J. Pearl, Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference, Morgan Kaufmann Series in Representation and Reasoning, 1991.

[25] G. Elidan, Copula Bayesian Networks, NIPS, 2010.

[26] B. Bollobás et al., Directed Scale-Free Graphs, SODA, 2003.

[27] J.-B. Denis et al., Bayesian Networks, 2014.

[28] C. F. Aliferis et al., The Max-Min Hill-Climbing Bayesian Network Structure Learning Algorithm, Machine Learning, 2006.

[29] N. Friedman et al., Being Bayesian About Network Structure: A Bayesian Approach to Structure Discovery in Bayesian Networks, Machine Learning, 2004.

[30] M. Laguna et al., Tabu Search, 1997.

[31] P. Bühlmann et al., Estimating High-Dimensional Directed Acyclic Graphs with the PC-Algorithm, J. Mach. Learn. Res., 2007.

[32] A. Tucker et al., Modeling Air Pollution, Climate, and Health Data Using Bayesian Networks: A Case Study of the English Regions, 2018.

[33] A. Goldenberg et al., Tractable Learning of Large Bayes Net Structures from Sparse Data, ICML, 2004.

[34] G. E. Hinton et al., Deep Learning, Nature, 2015.

[35] S. Bøttcher, Learning Bayesian Networks with Mixed Variables, AISTATS, 2001.

[36] C. C. Craig et al., A First Course in Mathematical Statistics, 1947.

[37] M. Scutari, Learning Bayesian Networks with the bnlearn R Package, 2009, arXiv:0908.3817.

[38] R. G. Cowell, Conditions Under Which Conditional Independence and Scoring Methods Lead to Identical Selection of Bayesian Network Models, UAI, 2001.

[39] D. M. Chickering et al., Learning Bayesian Networks: The Combination of Knowledge and Statistical Data, Machine Learning, 1994.

[40] N. Draper et al., Applied Regression Analysis, 1966.

[41] A. W. Moore et al., Cached Sufficient Statistics for Efficient Machine Learning with Large Datasets, J. Artif. Intell. Res., 1998.

[42] J. Tegnér et al., Learning Dynamic Bayesian Network Models via Cross-Validation, Pattern Recognit. Lett., 2005.

[43] M. D. Perlman et al., The Size Distribution for Markov Equivalence Classes of Acyclic Digraph Models, Artif. Intell., 2002.

[44] J. Zola et al., Fast Counting in Machine Learning Applications, UAI, 2018.

[45] A. P. Dawid, Present Position and Potential Developments: Some Personal Views, 1984.

[46] D. Heckerman et al., Learning Gaussian Networks, UAI, 1994.

[47] D. M. Chickering et al., A Comparison of Scientific and Engineering Criteria for Bayesian Model Selection, Stat. Comput., 2000.

[48] R. Greiner et al., Model Selection Criteria for Learning Belief Nets: An Empirical Comparison, ICML, 2000.

[49] D. Rubin et al., Maximum Likelihood from Incomplete Data via the EM Algorithm (with Discussion), 1977.

[50] P. Baldi et al., Searching for Exotic Particles in High-Energy Physics with Deep Learning, Nature Communications, 2014.

[51] M. Scutari, Bayesian Network Constraint-Based Structure Learning Algorithms: Parallel and Optimised Implementations in the bnlearn R Package, arXiv, 2014.