Learnability of Restricted Logic Programs

An active area of research in machine learning is learning logic programs from examples; this subarea is sometimes called inductive logic programming. This paper investigates this problem formally, using as our primary technical tools the method of prediction-preserving reducibilities and the model of polynomial predictability introduced by Pitt and Warmuth [1990]. We focus on the learnability of various generalizations of the language of constant-depth determinate clauses, which is used by several practical learning systems. We show that a single determinate clause of logarithmic depth is not polynomially predictable, under cryptographic assumptions. We then establish a close connection between the learnability of a single clause with k "free" variables and the learnability of DNF; a close connection is also shown between the learnability of a single clause with bounded indeterminacy and the learnability of k-term DNF, leading to a prediction algorithm for a class of clauses with bounded indeterminacy. We then define two new classes of logic programs that allow indeterminacy but are easily pac-learnable. Finally, we present a series of results showing that allowing recursion makes some simple logic programming languages hard to learn against an arbitrary distribution: in particular, one-clause constant-depth determinate programs with arbitrary recursion are hard to learn, as are multi-clause constant-depth determinate programs with linear recursion.
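To fix intuitions, the following is a minimal Prolog sketch of a depth-1 determinate clause; the predicate names (succ, plus_two) are illustrative and not drawn from the paper.

    % Background facts: succ/2 is functional, so a bound first
    % argument determines the second argument uniquely.
    succ(0, 1).
    succ(1, 2).
    succ(2, 3).

    % A depth-1 determinate clause: the only new body variable Z
    % is uniquely determined once the head variables are bound.
    plus_two(X, Y) :- succ(X, Z), succ(Z, Y).

Here Z has depth 1 (it is introduced by a literal whose other variables appear in the head), and the clause is determinate because each body literal admits at most one binding for its new variables. Longer chains of such literals increase the depth; the negative result above concerns clauses whose depth grows logarithmically.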