MC-TopLog: Complete Multi-clause Learning Guided by a Top Theory

Within ILP, much effort has been devoted to designing methods that are complete for hypothesis finding. However, it is unclear whether completeness matters in real-world applications. This paper uses a simplified grammar-learning task to show how a complete method can improve on the learning results of an incomplete one. Motivated by the need for a complete method in real-world applications, we introduce a method called ⊤-directed theory co-derivation, which is shown to be correct (i.e. sound and complete). The proposed method has been implemented in the ILP system MC-TopLog and tested on grammar learning and the learning of game strategies. Compared to Progol5, an efficient but incomplete ILP system, MC-TopLog achieves higher predictive accuracies, especially when the background knowledge is severely incomplete.