Rare Speed-up in Automatic Theorem Proving Reveals Tradeoff Between Computational Time and Information Value

We show that strategies implemented in automatic theorem proving involve an interesting tradeoff between execution speed (proving speed-up/computational time) and the usefulness of information. We advance formal definitions for these concepts by way of a notion of normality related to the expected (optimal) theoretical speed-up obtained when adding useful information (other theorems as axioms), as compared with actual strategies that can be effectively and efficiently implemented. We propose the existence of an ineluctable tradeoff between this normality and computational time complexity. The argument quantifies the usefulness of information in terms of (positive) speed-up. The results reveal a kind of no-free-lunch scenario and a tradeoff of a fundamental nature. The main theorem of this paper, together with a numerical experiment (undertaken with two different automatic theorem provers, AProS and Prover9, on random theorems of propositional logic), provides strong theoretical and empirical evidence that finding new useful information for solving a specific problem (theorem) is, in general, as hard as solving the problem (theorem) itself.
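As a concrete illustration of quantifying usefulness as (positive) speed-up, the following Python sketch times a proof search with and without a candidate lemma added as an axiom and reports the ratio S = T(theorem) / T(theorem given lemma). This is a minimal sketch under stated assumptions, not the paper's experimental code: the toy truth-table prover, the string encoding of formulas, and the example lemma are placeholders standing in for AProS/Prover9 run on random propositional theorems.

```python
"""
Minimal sketch of the speed-up measurement only. The toy truth-table prover,
the string encoding of formulas, and the example lemma are assumptions, not
the paper's actual setup. A lemma's usefulness is quantified as
S = T(theorem) / T(theorem with lemma added as axiom); S > 1 means the lemma
carried useful information for this theorem.
"""
import time
from itertools import product
from typing import Callable, Sequence

VARS = ("p", "q", "r")  # propositional variables used by the toy formulas


def tautology(formula: str) -> bool:
    """Brute-force tautology check by enumerating the full truth table."""
    return all(
        eval(formula, {}, dict(zip(VARS, values)))  # toy example only
        for values in product((False, True), repeat=len(VARS))
    )


def prove(theorem: str, axioms: Sequence[str] = ()) -> bool:
    """Decide `axioms |- theorem` by checking that (axioms -> theorem) is a tautology."""
    if axioms:
        antecedent = " and ".join(f"({a})" for a in axioms)
        return tautology(f"(not ({antecedent})) or ({theorem})")
    return tautology(theorem)


def timed(prover: Callable[..., bool], *args, repeats: int = 2000) -> float:
    """Total wall-clock time for `repeats` successful proof attempts."""
    start = time.perf_counter()
    for _ in range(repeats):
        assert prover(*args)
    return time.perf_counter() - start


def speedup(theorem: str, lemma: str) -> float:
    """Speed-up S obtained by adding `lemma` as an axiom when proving `theorem`."""
    baseline = timed(prove, theorem)
    with_lemma = timed(prove, theorem, [lemma])
    return baseline / with_lemma


if __name__ == "__main__":
    # Peirce's law ((p -> q) -> p) -> p, written with not/or only,
    # and a hypothetical candidate lemma (excluded middle).
    peirce = "(not ((not ((not p) or q)) or p)) or p"
    lemma = "p or (not p)"
    print(f"speed-up S = {speedup(peirce, lemma):.2f}")
```

With this brute-force checker the added lemma typically yields S near or below 1, since the prover does not exploit the extra axiom; the sketch fixes only the measurement of S, not a proving strategy, which is consistent with the abstract's point that genuinely useful speed-ups are rare.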
