Asymptotically Efficient Algorithms for Parallel Architectures

This paper gives a general method for the construction of parallel algorithms. Starting from a conventional sequential program, one first constructs a timing function, i.e. a schedule for a paracomputer. It is shown that the ratio of the maximum value of the timing function to the total operation count is a good measure of the degree of parallelism in the original algorithm. In particular, if this ratio tends to zero when the operation count grows large, then there is an asymptotically efficient parallel version of the original algorithm. This implementation is shown to be surprisingly robust in the face of variations, random and otherwise, of the operation execution times. The technique may be used as the starting point of the construction of programs for all kinds of parallel computers: vector, synchronous, asynchronous and distributed architectures.
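The ratio described above can be illustrated with a minimal sketch, assuming the natural reading of "timing function" as the earliest step at which each operation can execute given its data dependences (the example DAG and names below are illustrative, not taken from the paper):

```python
def timing_function(ops, deps):
    """ops: iterable of operation ids; deps: dict mapping an op to its
    prerequisite ops. Returns a dict op -> scheduled time step (1-based),
    computed as 1 + the latest time among the op's predecessors."""
    memo = {}

    def time_of(op):
        if op not in memo:
            preds = deps.get(op, [])
            memo[op] = 1 + max((time_of(p) for p in preds), default=0)
        return memo[op]

    for op in ops:
        time_of(op)
    return memo

# Hypothetical example: summing 8 values with a balanced reduction tree.
# Seven additions in total; pairwise sums feed the next level.
ops = [f"add{i}" for i in range(7)]
deps = {
    "add0": [], "add1": [], "add2": [], "add3": [],      # level 1: pairwise sums
    "add4": ["add0", "add1"], "add5": ["add2", "add3"],  # level 2
    "add6": ["add4", "add5"],                            # root
}

t = timing_function(ops, deps)
work = len(ops)           # total operation count
depth = max(t.values())   # maximum value of the timing function
print(work, depth, depth / work)  # prints: 7 3 0.42857142857142855
```

For a reduction of n values the operation count is n - 1 while the maximum timing value is log2(n), so the ratio tends to zero as n grows, matching the abstract's criterion for an asymptotically efficient parallel version.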