Meta-interpretive learning: application to grammatical inference

Despite early interest, Predicate Invention has lately been under-explored within ILP. We develop a framework in which predicate invention and recursive generalisations are implemented using abduction with respect to a meta-interpreter. The approach is based on a previously unexplored case of Inverse Entailment for Grammatical Inference of Regular languages. Every abduced grammar H is represented by a conjunction of existentially quantified atomic formulae. Thus ¬H is a universally quantified clause representing a denial. The hypothesis space of solutions for ¬H can be ordered by θ-subsumption. We show that the representation can be mapped to a fragment of Higher-Order Datalog in which atomic formulae in H are projections of first-order definite clause grammar rules and the existentially quantified variables are projections of first-order predicate symbols. This allows predicate invention to be effected by the introduction of first-order variables. Previous work by Inoue and Furukawa used abduction and meta-level reasoning to invent predicates representing propositions. By contrast, the present paper uses abduction with a meta-interpretive framework to invent relations. We describe implementations of Meta-interpretive Learning (MIL) using two different declarative representations: Prolog and Answer Set Programming (ASP). We compare these implementations against a state-of-the-art ILP system, MC-TopLog, on tasks of learning Regular and Context-Free grammars, as well as learning a simplified natural language grammar and a grammatical description of a staircase. Experiments indicate that on randomly chosen grammars, the two implementations achieve significantly higher accuracies than MC-TopLog. In terms of running time, Metagol is overall the fastest on these tasks. Experiments indicate that the Prolog implementation is competitive with the ASP one due to its ability to encode a strong procedural bias.
We demonstrate that MIL can be applied to learning natural grammars. In this case, experiments indicate that increasing the available background knowledge reduces the running time. Additionally, ASPM (ASP using a meta-interpreter) is shown to have a speed advantage over Metagol when background knowledge is sparse. We also demonstrate that by combining MetagolR (Metagol with a Regular grammar meta-interpreter) and MetagolCF (Context-Free meta-interpreter) we can formulate a system, MetagolRCF, which can change representation by first assuming the target to be Regular and then, failing this, switching to assuming it to be Context-Free. MetagolRCF runs up to 100 times faster than MetagolCF on grammars chosen randomly from Regular and non-Regular Context-Free grammars.
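The search described above can be sketched in miniature. The following is an illustrative Python sketch, not the paper's Metagol or ASPM implementation: it abduces a Regular grammar (here encoded as transition triples plus a set of accepting states, playing the role of the abduced atomic formulae) consistent with positive and negative example strings. Extra states correspond to invented predicates, and the iterative deepening over hypothesis size mirrors MIL's preference for smaller grammars. The names `parses` and `learn` are hypothetical.

```python
from itertools import combinations, product

def parses(rules, accepting, state, s):
    """Meta-interpreter check: can string s be derived from `state`?

    A rule (p, c, q) plays the role of the production P -> c Q; a string
    is accepted if it is fully consumed ending in an accepting state.
    """
    if s == "":
        return state in accepting
    return any(p == state and c == s[0] and parses(rules, accepting, q, s[1:])
               for (p, c, q) in rules)

def learn(pos, neg, alphabet, max_states=3, max_rules=4):
    """Abduce the smallest rule set consistent with the examples.

    Iterative deepening over (number of states, number of rules):
    additional states beyond the start state act as invented predicates.
    """
    for n in range(1, max_states + 1):
        states = range(n)
        all_rules = list(product(states, alphabet, states))
        for k in range(max_rules + 1):
            for rules in combinations(all_rules, k):
                for acc_bits in product([False, True], repeat=n):
                    accepting = {q for q in states if acc_bits[q]}
                    # Hypothesis must cover all positives, no negatives.
                    if (all(parses(rules, accepting, 0, s) for s in pos)
                            and not any(parses(rules, accepting, 0, s)
                                        for s in neg)):
                        return rules, accepting
    return None
```

For example, from positives `["", "ab", "abab"]` and negatives `["a", "b", "ba", "aab"]` the search abduces the two rules `(0, 'a', 1)` and `(1, 'b', 0)` with state 0 accepting, i.e. the language (ab)*; state 1 is the invented predicate. The exhaustive enumeration here stands in for the declarative search that Prolog or ASP performs over the meta-interpreter's abducibles.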
