Study of parallelism in regular iterative algorithms

The study of Regular Iterative Algorithms (RIAs), introduced in a seminal paper by Karp, Miller, and Winograd in 1967, forms the basis for the systematic design and analysis of regular processor arrays, including the class of systolic arrays. RIAs have also been studied in different contexts and under different names (on the last count, RIAs were reintroduced, as late as 1987, under the name of dynamic graphs). In spite of the interest such algorithms have received over the years, many important issues left unresolved in the original paper by Karp et al. have remained unanswered. In this paper we answer many such questions, particularly those relating to the parallel scheduling and implementation of RIAs. Based on the analysis of a simple graph that captures the dependence structure of a given RIA, we determine linear subspaces in the index space of the RIA such that all variables lying on the same subspace can be scheduled at the same time; this generalizes the so-called hyperplanar scheduling, which Karp et al. showed to work for only a subclass of RIAs. This geometric scheduling scheme is shown to be asymptotically optimal and is used to completely characterize the extent of parallelism in any RIA. Moreover, we develop procedures to determine explicit schedules (i.e., a closed-form expression for the schedule of every computation in the algorithm) that correspond to the geometric schedules, and we show that every RIA can be automatically mapped onto regular processor arrays.
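To make the hyperplanar-scheduling idea concrete, the following is a minimal sketch (not code from the paper) for a uniform-dependence recurrence: a schedule vector tau is admissible when tau · d ≥ 1 for every dependence vector d, and iteration point p then fires at time tau · p, so all points on the same hyperplane tau · p = const execute in parallel. The function names and the example recurrence are illustrative assumptions, not part of the original text.

```python
# Illustrative sketch of hyperplane scheduling for a uniform-dependence
# recurrence (names and example are assumptions, not from the paper).

def is_valid_schedule(tau, deps):
    """tau is a valid hyperplane schedule iff tau . d >= 1 for every
    dependence vector d, so each computation strictly follows the
    computations it depends on."""
    return all(sum(t * di for t, di in zip(tau, d)) >= 1 for d in deps)

def firing_time(tau, point):
    """Time step at which iteration `point` executes: tau . point.
    All index points on the hyperplane tau . p = const run in parallel."""
    return sum(t * pi for t, pi in zip(tau, point))

# Example: the 2-D recurrence x[i,j] = f(x[i-1,j], x[i,j-1]) has
# dependence vectors (1,0) and (0,1); the diagonal wavefront tau = (1,1)
# is a valid schedule, while tau = (1,-1) violates the (0,1) dependence.
deps = [(1, 0), (0, 1)]
assert is_valid_schedule((1, 1), deps)
assert not is_valid_schedule((1, -1), deps)
# Points (2,0), (1,1), (0,2) share the hyperplane i + j = 2,
# so all three can be scheduled at time step 2.
assert firing_time((1, 1), (2, 0)) == 2
```

The paper's geometric schedules generalize this picture: rather than a single family of parallel hyperplanes, they identify linear subspaces of the index space whose points can be scheduled simultaneously, covering RIAs for which no valid hyperplane schedule exists.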