John H. Reif†    Stephen R. Tate‡

In this paper, we examine the problem of incrementally evaluating algebraic functions. In particular, if f(x1, x2, ..., xn) = (y1, y2, ..., ym) is an algebraic problem, we consider answering on-line requests of the form "change input xi to value v" or "what is the value of output yj?" While incremental evaluation of problems in graph theory and computational geometry has been examined, there is very little literature on the incremental evaluation of algebraic problems, with the exception of the prefix sum problem, for which [Fredman, 82] showed tight O(log n) bounds. In this paper, we examine both lower bounds and algorithm design techniques for algebraic problems.

We first present lower bounds for some simply stated algebraic problems: multipoint polynomial evaluation, polynomial reciprocal, and extended polynomial GCD. In all cases we prove an Ω(n) lower bound for the incremental evaluation of these functions. In addition, we give a rather surprising space-time trade-off that applies to most interesting algebraic functions, including those with good incremental algorithms such as prefix sum; these are the first space-time trade-offs known for incremental algorithms. In particular, if we have S(n) storage locations available in addition to the inputs, then incremental algorithms for most problems require Ω(n/S(n)) time per request.

Secondly, we present two general-purpose techniques for designing incremental algorithms. The first method can produce highly efficient incremental algorithms, giving for example an O(M(w) log n) time per request algorithm for evaluating order-w linear recurrences, and an Õ(√n) time bound for incremental computation of the Discrete Fourier Transform. The second technique gives slightly slower incremental algorithms for these problems, but is applicable to a wider class of problems than the first method. Using the second method, we give incremental algorithms for multipoint evaluation with changing coefficients, polynomial multiplication, and restricted polynomial reciprocal that have request time Õ(√n), where Õ denotes the "soft O" notation, which neglects polylog factors. We also apply these techniques to various matrix problems, giving fast incremental algorithms.

Lastly, we consider answering the on-line requests with a parallel machine (a PRAM). We show that requests can be answered very efficiently for the problems of DFT, two-dimensional DFT, multipoint polynomial evaluation with changing coefficients, and the chirp z-transform with constant z. We show a continuous processor-time trade-off for these problems, where the total amount of work equals that of our sequential algorithms for Õ(√n) processors, and at the fastest end of the spectrum requests can be handled in O(log log n) time using O(n/log^c n) processors for any constant c. Clearly, the sequential lower bounds can be translated to work lower bounds for our parallel algorithms.

* This research was supported by DARPA/ISTO Grant N00014-91-J-1985, Subcontract K&92-01-0182 of DARPA/ISTO prime Contract N00014-92-C-0182, NSF Grant NSF-IRI-9100681, NASA subcontract 550-63 of prime Contract NAS5-30428, and US-Israel Binational NSF Grant 88-00282/2.
† Department of Computer Science, Duke University, Box 90129, Durham, NC 27708-0129.
‡ Department of Computer Sciences, University of North Texas, P.O. Box 13886, Denton, TX 76203-6886.
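To make the request model concrete for the one algebraic problem with known tight bounds, the following is a minimal sketch of an incremental prefix sum structure answering both request types in O(log n) time. It uses a standard binary indexed (Fenwick) tree, which matches the O(log n) bound of [Fredman, 82]; it is an illustrative data structure, not the construction analyzed in that paper.

```python
# Incremental prefix sum: supports "change input x_i to value v" and
# "what is output y_j = x_0 + ... + x_j?" in O(log n) time per request.
# Standard Fenwick tree, shown only to illustrate the request model.

class PrefixSum:
    def __init__(self, n):
        self.n = n
        self.tree = [0] * (n + 1)   # 1-indexed tree of partial range sums
        self.x = [0] * n            # current input values (all initially 0)

    def change(self, i, v):
        """Request: change input x_i to value v (0-indexed)."""
        delta = v - self.x[i]
        self.x[i] = v
        k = i + 1
        while k <= self.n:
            self.tree[k] += delta
            k += k & -k             # step to the next node covering index i

    def query(self, j):
        """Request: report output y_j = x_0 + ... + x_j (0-indexed)."""
        s, k = 0, j + 1
        while k > 0:
            s += self.tree[k]
            k -= k & -k             # strip the lowest set bit
        return s
```

Each request touches at most O(log n) tree nodes, and the structure uses only O(n) extra storage, consistent with the Ω(n/S(n)) trade-off when S(n) = Θ(n).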
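For the DFT, the starting point for any incremental algorithm is that a single coefficient change perturbs every output by a known amount: changing x_i to v adds (v - x_i)·ω^(ij) to each y_j, where ω is a principal n-th root of unity. The sketch below maintains all n outputs under this rule, giving O(n) per change and O(1) per output query; it is only the naive baseline, not the Õ(√n)-per-request algorithms described in the abstract, which must balance update and query costs (e.g., by batching pending changes).

```python
# Naive incremental DFT over the complex numbers: maintain y = DFT(x)
# under single-coefficient changes. O(n^2) setup, O(n) per change,
# O(1) per output query. Illustrative baseline only.

import cmath

class IncrementalDFT:
    def __init__(self, x):
        self.n = len(x)
        self.x = list(x)
        self.w = cmath.exp(-2j * cmath.pi / self.n)   # principal n-th root of unity
        # Direct O(n^2) evaluation of y_j = sum_i x_i * w^(i*j).
        self.y = [sum(self.x[i] * self.w ** (i * j) for i in range(self.n))
                  for j in range(self.n)]

    def change(self, i, v):
        """Change x_i to v, pushing the perturbation into all n outputs."""
        delta = v - self.x[i]
        self.x[i] = v
        for j in range(self.n):
            self.y[j] += delta * self.w ** (i * j)

    def output(self, j):
        """Report y_j in O(1)."""
        return self.y[j]
```

Answering a query by recomputing one output from scratch instead costs O(n) per query and O(1) per change; the sublinear algorithms of the paper sit between these two extremes.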
[1] David Eppstein, et al. Dynamic half-space reporting, geometric optimization, and minimum spanning trees. In Proceedings, 33rd Annual Symposium on Foundations of Computer Science, 1992.
[2] Michael L. Fredman, et al. The complexity of maintaining an array and computing its partial sums. JACM, 1982.
[3] Alfred V. Aho, et al. The Design and Analysis of Computer Algorithms. 1974.
[4] Robert E. Tarjan, et al. A data structure for dynamic trees. STOC '81, 1981.
[5] Kenneth J. Supowit, et al. New techniques for some dynamic closest-point and farthest-point problems. SODA '90, 1990.
[6] D. Heller. A survey of parallel algorithms in numerical linear algebra. 1978.
[7] F. Leighton, et al. Introduction to Parallel Algorithms and Architectures: Arrays, Trees, Hypercubes. 1991.
[8] M. Rabin, et al. Fast evaluation of polynomials by rational preparation. 1972.
[9] David Eppstein, et al. Sparsification: a technique for speeding up dynamic graph algorithms. In Proceedings, 33rd Annual Symposium on Foundations of Computer Science, 1992.
[10] Martin Tompa, et al. Time-space tradeoffs for computing functions, using connectivity properties of their circuits. J. Comput. Syst. Sci., 1978.
[11] Leslie G. Valiant, et al. The complexity of computing the permanent. Theor. Comput. Sci., 1979.
[12] Allan Borodin, et al. The computational complexity of algebraic and numeric problems. Elsevier Computer Science Library, 1975.
[13] Greg N. Frederickson, et al. Ambivalent data structures for dynamic 2-edge-connectivity and k smallest spanning trees. In Proceedings, 32nd Annual Symposium on Foundations of Computer Science, 1991.
[14] Greg N. Frederickson, et al. A data structure for dynamically maintaining rooted trees. SODA '93, 1993.
[15] J. Tukey, et al. An algorithm for the machine calculation of complex Fourier series. 1965.