A general method for parallelising some dynamic programming algorithms on VLSI was presented in [6]. We present a general method of parallelisation for the same class of problems on more powerful parallel computers. The method is demonstrated on three typical dynamic programming problems: computing the optimal order of matrix multiplications, the optimal binary search tree, and the optimal triangulation of polygons (see [1,2]). For these problems the dynamic programming approach gives algorithms with a similar structure: they can be viewed as straight-line programs of size O(n³). The general method of parallelising such programs described by Valiant et al. [16] then leads directly to algorithms working in log²n time with O(n⁹) processors. However, we adopt an alternative approach and show that a special feature of dynamic programming problems can be exploited: they can be thought of as generalized parsing problems, in which one seeks the tree of an optimal decomposition of the problem into smaller subproblems. A parallel pebble game on trees [10,11] is used to decrease the number of processors and to simplify the structure of the algorithms.

We show that the dynamic programming problems considered can be computed in log²n time using n⁶/log n processors on a parallel random access machine without write conflicts (CREW P-RAM). The main operation is essentially matrix multiplication, which is easily implementable on parallel computers with a fixed interconnection network of processors (ultracomputers, in the sense of [15]). Hence the problems considered can also be computed in log²n time using n⁶ processors on a perfect shuffle computer (PSC) or a cube-connected computer (CCC); an extension of the algorithm from [14] for the recognition of context-free languages on the PSC and CCC can be used. If a parallel random access machine with concurrent writes (CRCW P-RAM) is used, then the minimum of m numbers can be determined in constant time (see [8]), and consequently the parallel time for the dynamic programming problems considered can be reduced from log²n to log n. We also investigate the parallel computation of the trees realising the optimal cost of dynamic programming problems.
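For concreteness, the following is a minimal serial sketch of the first of the three problems above, the matrix-chain recurrence; the function name and sample dimensions are illustrative and not taken from the paper. The (min, +) combination over all split points k in the inner loop is the operation that, in the parallel setting, becomes essentially matrix multiplication over the (min, +) semiring.

```python
def matrix_chain_cost(dims):
    """Minimal cost of multiplying matrices A1..An, where Ai has
    shape dims[i-1] x dims[i]. Classic O(n^3) dynamic programming."""
    n = len(dims) - 1
    # cost[i][j] = minimal number of scalar multiplications for Ai..Aj
    cost = [[0] * (n + 1) for _ in range(n + 1)]
    for length in range(2, n + 1):            # length of the subchain
        for i in range(1, n - length + 2):
            j = i + length - 1
            # (min, +) combination over all split points k: the optimal
            # decomposition tree splits Ai..Aj into Ai..Ak and A(k+1)..Aj.
            cost[i][j] = min(
                cost[i][k] + cost[k + 1][j] + dims[i - 1] * dims[k] * dims[j]
                for k in range(i, j)
            )
    return cost[1][n]

if __name__ == "__main__":
    # Chain of three matrices: 10x30, 30x5, 5x60.
    print(matrix_chain_cost([10, 30, 5, 60]))  # 4500
```

The optimal binary search tree and polygon triangulation problems admit recurrences of the same shape, which is why a single parallelisation scheme covers all three.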
[1] Wojciech Rytter. On the Complexity of Parallel Parsing of General Context-Free Languages. Theor. Comput. Sci., 1986.
[2] Wojciech Rytter. On the recognition of context-free languages. Symposium on Computation Theory, 1984.
[3] Leslie G. Valiant et al. Fast Parallel Computation of Polynomials Using Few Processors. SIAM J. Comput., 1983.
[4] Wojciech Rytter. Parallel Time O(log n) Recognition of Unambiguous Context-free Languages. Inf. Comput., 1987.
[5] Wojciech Rytter. Remarks on pebble games on graphs. 1987.
[6] Sartaj Sahni et al. Parallel Matrix and Graph Algorithms. SIAM J. Comput., 1981.
[7] Alfred V. Aho et al. Data Structures and Algorithms. 1983.
[8] Ludek Kucera et al. Parallel Computation and Conflicts in Memory Access. Information Processing Letters, 1982.
[9] Wojciech Rytter et al. An Optimal Parallel Algorithm for Dynamic Expression Evaluation and Its Applications. FSTTCS, 1986.
[10] Gary L. Miller et al. Parallel tree contraction and its application. 26th Annual Symposium on Foundations of Computer Science (FOCS 1985), 1985.
[11] H. T. Kung et al. Direct VLSI Implementation of Combinatorial Algorithms. 1979.
[12] Wojciech Rytter et al. The Complexity of Two-Way Pushdown Automata and Recursive Programs. 1985.
[13] Alfred V. Aho et al. The Design and Analysis of Computer Algorithms. 1974.
[14] Jan Karel Lenstra et al. An introduction to parallelism in combinatorial optimization. Discret. Appl. Math., 1986.