Caching and non-Horn inference in model elimination theorem provers

Caching in an inference procedure holds the promise of replacing exponential search with constant-time lookup, at the cost of slightly increased overhead for each node expansion. Caching is useful if subgoals are repeated often enough during proofs. In experiments with a backward chainer answering queries over Horn theories, caching appears to be very helpful on average. When trying to extend this success to full first-order theories, however, intuition suggests that subgoal caches are no longer useful. The reason is that complete first-order backward chaining requires goal-goal resolutions in addition to resolutions with the database, and this introduces a context-sensitivity into the proofs of a subgoal. A cache is feasible only if solutions are independent of context, so that they can be copied from one part of the search space to another. It is shown here that a full exploration of a subgoal in one context in fact provides complete information about the solutions to the same subgoal in all other contexts of the proof. In a straightforward way, individual solutions from one context may be copied over directly. More importantly, failure caching is also feasible in the non-Horn setting: once a subgoal has been fully explored in one context, no additional solutions that might affect the query are possible in a new context, so there is no need to re-explore the space there. Thus most Horn clause caching schemes can be used with minimal changes in a non-Horn setting. In addition, a new Horn clause caching scheme is proposed: postponement caching. This scheme explores the inference space as a graph instead of a tree, so that a given literal occurs only once in the proof space. Although failure caching extends to non-Horn theories, postponement caching is incomplete in the non-Horn case. A counterexample is presented, and possible enhancements to reclaim completeness are investigated.
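
To make the Horn-clause baseline concrete, the following is a minimal propositional sketch of subgoal success/failure caching in a backward chainer. The rule representation and the names solve/_solve are illustrative assumptions, not the paper's implementation; first-order unification, answer substitutions, and the goal-goal resolutions of the non-Horn case are deliberately omitted. Even in this simplified setting, a failure must not be cached when it arose only from cutting a loop at an ancestor goal, a small propositional shadow of the context-sensitivity discussed above.

    # Minimal sketch (assumed names, not the paper's code): propositional Horn
    # backward chaining with a success/failure cache and an ancestor loop check.
    from typing import Dict, List, Tuple

    Rules = Dict[str, List[List[str]]]   # head -> list of bodies; a fact has an empty body

    def _solve(goal: str, rules: Rules, cache: Dict[str, bool],
               stack: Tuple[str, ...]) -> Tuple[bool, bool]:
        """Return (provable, context_free). context_free is False when the answer
        depended on pruning a looping ancestor, in which case it is not cached."""
        if goal in cache:                 # constant-time lookup replaces re-search
            return cache[goal], True
        if goal in stack:
            return False, False           # failure caused only by the current context
        provable, context_free = False, True
        for body in rules.get(goal, []):
            body_ok, body_free = True, True
            for sub in body:
                ok, free = _solve(sub, rules, cache, stack + (goal,))
                body_free = body_free and free
                if not ok:
                    body_ok = False
                    break
            if body_ok:
                provable = True           # a found proof never depends on the loop check
                break
            context_free = context_free and body_free
        if provable or context_free:      # cache successes always, failures only if absolute
            cache[goal] = provable
        return provable, provable or context_free

    def solve(goal: str, rules: Rules, cache: Dict[str, bool]) -> bool:
        return _solve(goal, rules, cache, ())[0]

    if __name__ == "__main__":
        # r :- p, q.   p :- q.   p :- x.   q :- p.   x.
        rules = {"r": [["p", "q"]], "p": [["q"], ["x"]], "q": [["p"]], "x": [[]]}
        print(solve("r", rules, {}))      # True; the context-dependent failure of q is not cached

The example query shows why the context_free flag matters: the subgoal q first fails only because its ancestor p is on the stack, and caching that failure would wrongly block the later, genuine proof of q.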
