Probabilistic Approximate Logic and its Implementation in the Logical Imagination Engine

Despite the rapidly growing number of machine learning applications across domains, a principled and systematic approach to incorporating domain knowledge into the engineering process is still lacking; ad hoc solutions that are difficult to validate remain the norm in practice, a growing concern not only in mission-critical applications. In this note, we introduce Probabilistic Approximate Logic (PALO), a logic based on the notion of mean approximate probability that overcomes conceptual and computational difficulties inherent in strictly probabilistic logics. The logic is approximate in several dimensions. Logical independence assumptions are used to obtain approximate probabilities, but by averaging over many instances of formulas, a useful estimate of mean probability with known confidence can usually be obtained. To enable efficient computational inference, the logic has a continuous semantics that reflects only a subset of the structural properties of classical logic; this imprecision can be partly compensated for by richer theories obtained through classical inference or other means. Computational inference, which refers to the construction of models and the validation of logical properties, is based on Stochastic Gradient Descent (SGD) and Markov Chain Monte Carlo (MCMC) techniques, and is hence a further dimension in which approximations are involved. We also present the Logical Imagination Engine (LIME), a prototypical implementation of PALO based on TensorFlow. Although not limited to the biological domain, we illustrate its operation on a substantial bioinformatics machine learning application concerned with network synthesis and analysis in a recent DARPA project.
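To make the two ideas in the abstract concrete, the following is a minimal, hypothetical sketch (not the authors' actual PALO semantics or LIME code): a continuous, differentiable interpretation of the logical connectives using the product t-norm, as is common in real-valued logics, together with an estimate of the mean truth degree of a formula over many ground instances, reported with a normal-approximation confidence half-width. All function names here are illustrative assumptions.

```python
import math

# Continuous semantics for connectives over truth degrees in [0, 1],
# using the product t-norm and its dual t-conorm (an assumption; PALO's
# actual choice of connectives may differ).

def land(a, b):
    """Conjunction as the product t-norm."""
    return a * b

def lor(a, b):
    """Disjunction as the dual (probabilistic-sum) t-conorm."""
    return a + b - a * b

def lnot(a):
    """Negation as complement."""
    return 1.0 - a

def implies(a, b):
    """Material implication defined via negation and disjunction."""
    return lor(lnot(a), b)

def mean_truth(instances):
    """Average the truth degrees of many ground instances of a formula.

    Returns an estimate of the mean (approximate) probability together
    with a 95% normal-approximation confidence half-width, mirroring the
    idea that averaging over instances yields a mean with known confidence.
    """
    n = len(instances)
    mean = sum(instances) / n
    var = sum((x - mean) ** 2 for x in instances) / (n - 1)
    half_width = 1.96 * math.sqrt(var / n)
    return mean, half_width
```

Because each connective is a smooth function of its arguments, a loss built from such truth degrees can be minimized by SGD, which is the sense in which a continuous semantics enables efficient computational inference.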
