The Challenge of Revising an Impure Theory

Abstract A pure rule-based program returns a set of answers to each query, and it returns the same answer set even if its rules are re-ordered. An impure program, however, which includes the Prolog cut "!" and not(·) operators, can return different answers if its rules are re-ordered. There are also many reasoning systems that return only the first answer found for each query; these first answers, too, depend on the rule order, even in pure rule-based systems. A theory revision algorithm, seeking a revised rule base whose expected accuracy, over the distribution of queries, is optimal, should therefore consider modifying the order of the rules. This paper first shows that a polynomial number of training "labeled queries" (each a query coupled with its correct answer) provides the distribution information necessary to identify the optimal ordering. It then proves, however, that the task of determining which ordering is optimal, once given this information, is intractable even in trivial situations; e.g., even if each query is an atomic literal, we are seeking only a "perfect" theory, and the rule base is propositional. We also prove that this task is not even approximable: unless P = NP, no polynomial-time algorithm can produce an ordering of an n-rule theory whose accuracy is within n^γ of optimal, for some γ > 0. We also prove similar hardness, and non-approximability, results for the related tasks of determining, in these impure contexts, (1) the optimal ordering of the antecedents; (2) the optimal set of rules to add or (3) to delete; and (4) the optimal priority values for a set of defaults.
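To make the order-sensitivity concrete, the following is a minimal Prolog sketch (not taken from the paper; the predicates p/1, q/1, and r/1 are hypothetical). Without the cut, both orderings of the two p/1 clauses return the same answer set {a, b}; with the cut, the two orderings return different answer sets.

```prolog
% Hypothetical facts: q(a) and r(b) are the only ground facts.
q(a).
r(b).

% Ordering 1: the cut commits p/1 to its first clause,
% so ?- p(X). returns only X = a.
p(X) :- q(X), !.
p(X) :- r(X).

% Ordering 2 (the same two clauses, swapped) would instead return
% X = b first and X = a on backtracking -- a different answer set:
%   p(X) :- r(X).
%   p(X) :- q(X), !.
%
% If the cut is removed, both orderings return the set {a, b},
% as the abstract notes for pure rule-based programs.
```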
