Compiler optimizations for distributed-memory programs

The single-program multiple-data (SPMD) mode of execution is an effective approach for exploiting parallelism in programs written in the shared-memory programming model on distributed-memory machines. During SPMD execution, however, one must account for dependencies that arise from the transfer of data among processors. Such dependencies can often be avoided by reordering the communication operations (sends and receives), but no formal framework has been developed to explicitly recognize and represent them. The author identifies two types of dependencies, namely communication dependencies and scheduling dependencies, and proposes representing them explicitly in the program dependence graph. He then presents program transformations that use this dependency information to restructure the program and increase the degree of parallelism exploited. Finally, he presents transformations that reduce communication-related run-time overhead.
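The abstract describes, but does not show, the reordering of sends and receives; the sketch below is a minimal illustration of that idea, assuming an MPI-style message-passing interface (the paper's compiler-generated communication primitives need not be MPI). All function and variable names are hypothetical. In the first version a blocking exchange sits directly before the loop that uses the received data, serializing communication and computation; in the second, the send/receive pair is hoisted, made nonblocking, and overlapped with independent local work.

#include <mpi.h>

#define N 1024

/* Before: communication sits directly before the loop that needs the
 * halo data, so the processor idles for the whole transfer. */
void exchange_then_compute(double *halo, double *local, int left, int right)
{
    MPI_Sendrecv(local, N, MPI_DOUBLE, right, 0,
                 halo,  N, MPI_DOUBLE, left,  0,
                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    for (int i = 0; i < N; i++)          /* uses halo[] */
        local[i] += 0.5 * halo[i];
}

/* After: the send/receive pair is hoisted as early as the data
 * dependencies allow and made nonblocking, so independent local work
 * overlaps the transfer; the wait is sunk to the point of use. */
void overlapped(double *halo, double *local, double *interior,
                int left, int right)
{
    MPI_Request reqs[2];

    MPI_Isend(local, N, MPI_DOUBLE, right, 0, MPI_COMM_WORLD, &reqs[0]);
    MPI_Irecv(halo,  N, MPI_DOUBLE, left,  0, MPI_COMM_WORLD, &reqs[1]);

    for (int i = 0; i < N; i++)          /* independent of halo[] */
        interior[i] *= 2.0;

    MPI_Waitall(2, reqs, MPI_STATUSES_IGNORE);
    for (int i = 0; i < N; i++)          /* uses halo[] */
        local[i] += 0.5 * halo[i];
}

Whether such a hoist is legal is exactly the question that the proposed communication and scheduling dependencies, recorded in the program dependence graph, would let the compiler answer.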