The constant speedup theorem, so well known from Turing machine based complexity theory, is shown false for a natural imperative programming language I that manipulates tree-structured data. This relieves a tension between general programming practice, where linear factors are essential, and complexity theory, where linear time changes are traditionally regarded as trivial. Specifically, there is a constant b such that for any a > 0 there is a set X recognizable in time a · b · n but not in time a · n, where n is the size of the input. Thus LIN, the collection of all sets recognizable in linear time by deterministic I-programs, contains an infinite hierarchy ordered by constant coefficients. Constant hierarchies also exist for larger time bounds T(n), provided they are time-constructible. Second, a problem is exhibited which is complete for the nondeterministic linear-time sets NLIN with respect to a natural notion of deterministic linear-time reduction. Third, Kleene's Second Recursion Theorem in essence shows that for any program p defined with self-reference there is an equivalent nonreflexive program q; this is proven for an extension I↑ of I. Further, q can be simulated by an I-program at most constantly slower than p. The language I↑ allows calls to the language's own interpretation function, and even to its running-time function (without the usual high costs for nested levels of interpretation). The results all hold as well for a stronger language Isu that allows selective updating of tree-structured data. The results are robust in that the classes LIN and NLIN are identical for I, Isu↑, Schönhage's Storage Modification Machines, Knuth/Tarjan's pointer machines, and successor RAMs [13,14,11], provided the "more realistic and precise measure" of SMM computation time is used [13], and similarly for the other models.

DIKU, Department of Computer Science, University of Copenhagen, Universitetsparken 1, DK-2100 Copenhagen East, Denmark. E-mail: neil@diku.dk.
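As a hedged illustration of the hierarchy claim, the stated separation can be iterated to obtain an infinite chain inside LIN; the notation TIME_I(f) for the class of sets decidable by deterministic I-programs within time f(n) is an assumption introduced here, not taken from the abstract.

% Sketch only, under the assumed notation TIME_I(f): applying the stated
% separation with a = b^k yields, for every k >= 0, a set that lies in
% TIME_I(b^{k+1} n) but not in TIME_I(b^k n), hence
\[
  \mathrm{TIME}_I(n) \;\subsetneq\; \mathrm{TIME}_I(b\,n)
  \;\subsetneq\; \mathrm{TIME}_I(b^{2}\,n)
  \;\subsetneq\; \cdots \;\subseteq\; \mathrm{LIN}.
\]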
[1] Peter Sestoft, et al. Partial evaluation and automatic program generation, 1993, Prentice Hall International Series in Computer Science.
[2] Marko C. J. D. van Eekelen, et al. Term Graph Rewriting, 1987, PARLE.
[3] Neil D. Jones, et al. Flow analysis and optimization of LISP-like structures, 1979, POPL.
[4] Robert E. Tarjan, et al. A Class of Algorithms which Require Nonlinear Time to Maintain Disjoint Sets, 1979, J. Comput. Syst. Sci.
[5] Saharon Shelah, et al. Nearly Linear Time, 1989, Logic at Botik.
[6] S. C. Kleene, et al. Introduction to Metamathematics, 1952.
[7] Jeffrey D. Ullman, et al. Introduction to Automata Theory, Languages and Computation, 1979.
[8] Manuel Blum, et al. A Machine-Independent Theory of the Complexity of Recursive Functions, 1967, JACM.
[9] Jesper Larsson Träff, et al. Experiments with Implementations of Two Theoretical Constructions, 1989, Logic at Botik.
[10] A. D. Palù, et al. New Optimal Algorithms on Pointer Machines, tesi di laurea (degree thesis), Università degli Studi di Verona, Facoltà di Scienze MM FF NN, 2004.
[11] Neil D. Jones, et al. Mix: A self-applicable partial evaluator for experiments in compiler generation, 1989, LISP Symb. Comput.
[12] Jan Willem Klop, et al. Term Graph Rewriting, 1995, HOA.
[13] Hartley Rogers, Jr. Theory of Recursive Functions and Effective Computability, 1969.