Compiling Fortran D for MIMD distributed-memory machines

Parallel computing presents the only plausible way to continue to increase the computational power available to scientists and engineers. Parallel computers, however, are not likely to be widely successful until they are easy to program. A major component in the success of vector supercomputers is the ability of scientists to write Fortran programs in a "vectorizable" style and expect vectorizing compilers to automatically produce efficient code [8, 35]. The resulting programs are easily maintained, debugged, and ported across different vector machines.

Compare this with the current situation for programming parallel machines. Scientists wishing to use such machines must rewrite their programs in an extension of Fortran that explicitly reflects the architecture of the underlying machine. Multiple-instruction, multiple-data (MIMD) shared-memory machines such as the Cray Research Y-MP C90 are programmed with the explicit synchronization and parallel loops found in Parallel Computing Forum (PCF) Fortran [24]. Single-instruction, multiple-data (SIMD) machines such as the Thinking Machines CM-2 are programmed using the parallel array constructs found in Fortran 90 [3]. MIMD distributed-memory machines such as the Intel Paragon provide the most difficult programming model: users must write message-passing Fortran 77 programs that deal with separate address spaces, synchronize processors, and communicate data using messages. The process is time-consuming, tedious, and error-prone, and significant increases in source code size are not only common but expected. Because parallel programs are extremely machine-specific, scientists are discouraged from utilizing parallel machines, since they risk losing their investment whenever the program changes or a new architecture arrives.

We propose to solve this problem by developing the compiler technology needed to establish a machine-independent programming model. It must be easy to use, yet perform with acceptable efficiency on different parallel architectures, at least for data-parallel scientific codes. The question is whether any existing Fortran dialect suffices. PCF Fortran is undesirable because it is easy to inadvertently write programs with data races that produce indeterminate results. Message-passing Fortran 77 is portable, but difficult to use. Fortran 90 is promising, but may not be sufficiently flexible. What all these languages lack is a way to specify the decomposition and placement of data in the program (illustrated in the sketch below). We find that selecting a data decomposition is one of the most important intellectual steps in developing data-parallel scientific codes. Though many techniques have been developed for automatic data decomposition, we feel that the compiler will not be able to choose an efficient …
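The data-layout specification referred to above can be pictured with a short example. The fragment below is a minimal sketch in the style of the Fortran D language specification [11]: only the DECOMPOSITION, ALIGN, and DISTRIBUTE statements come from that design, while the program itself, the array names, and the problem size are illustrative assumptions rather than code taken from this paper.

      PROGRAM RELAX
C     Jacobi-style relaxation written once, against a single global
C     address space; the layout statements below tell the compiler
C     how to partition the arrays across processors.
      PARAMETER (N = 1024)
      REAL A(N,N), B(N,N)

C     Declare an abstract problem domain and map both arrays onto it.
      DECOMPOSITION D(N,N)
      ALIGN A, B WITH D

C     Split the second dimension into contiguous blocks, one block
C     per processor; the first dimension stays local to a processor.
      DISTRIBUTE D(:, BLOCK)

      DO 10 J = 2, N-1
         DO 10 I = 2, N-1
            B(I,J) = 0.25 * (A(I-1,J) + A(I+1,J)
     &                     + A(I,J-1) + A(I,J+1))
 10   CONTINUE
      END

From a single global-address-space source of this form, the compiler is expected to derive the per-processor node programs, including the loop bounds reduction and message-passing calls that a programmer would otherwise write by hand.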

[1] David A. Padua, et al. Dependence graphs and compiler optimizations, 1981, POPL '81.

[2] Ken Kennedy, et al. A Parallel Programming Environment, 1985, IEEE Software.

[3] Ken Kennedy, et al. ParaScope: A Parallel Programming Environment, 1988.

[4] Ken Kennedy, et al. Automatic translation of FORTRAN programs to vector form, 1987, TOPL.

[5] Joel H. Saltz, et al. Principles of runtime support for parallel processors, 1988, ICS '88.

[6] G. C. Fox, et al. Solving Problems on Concurrent Processors, 1988.

[7] Michael Gerndt, et al. SUPERB: A tool for semi-automatic MIMD/SIMD parallelization, 1988, Parallel Comput.

[8] Anne Rogers, et al. Process decomposition through locality of reference, 1989, PLDI '89.

[9] P.-S. Tseng. A parallelizing compiler for distributed memory parallel computers, 1989, PLDI '89.

[10] Michael Gerndt, et al. Updating Distributed Variables in Local Computations, 1990, Concurr. Pract. Exp.

[11] Ken Kennedy, et al. Fortran D Language Specification, 1990.

[12] Ken Kennedy, et al. An Interactive Environment for Data Partitioning and Distribution, 1990, Proceedings of the Fifth Distributed Memory Computing Conference.

[13] Jean-Louis Pazat, et al. Pandore: a system to manage data distribution, 1990, ICS '90.

[14] Cherri M. Pancake, et al. Do parallel languages respond to the needs of scientific programmers?, 1990, Computer.

[15] Geoffrey C. Fox, et al. An Automatic and Symbolic Parallelization System for Distributed Memory Parallel Computers, 1990, Proceedings of the Fifth Distributed Memory Computing Conference.

[16] Ken Kennedy, et al. An Overview of the Fortran D Programming System, 1991, LCPC.

[17] Harry Berryman, et al. Performance of Hashed Cache Data Migration Schemes on Multicomputers, 1991, J. Parallel Distributed Comput.

[18] Marina C. Chen, et al. Compiling Communication-Efficient Programs for Massively Parallel Machines, 1991, IEEE Trans. Parallel Distributed Syst.

[19] Anthony P. Reeves, et al. Paragon: A Parallel Programming Environment for Scientific Applications Using Communication Structures, 1991, J. Parallel Distributed Comput.

[20] Robert P. Weaver, et al. The DINO Parallel Programming Language, 1991, J. Parallel Distributed Comput.

[21] Harry Berryman, et al. Runtime Compilation Methods for Multicomputers, 1991, International Conference on Parallel Processing.

[22] Ken Kennedy, et al. Analysis and transformation in the ParaScope editor, 1991, ICS '91.

[23] Ken Kennedy, et al. An Implementation of Interprocedural Bounded Regular Section Analysis, 1991, IEEE Trans. Parallel Distributed Syst.

[24] Philip J. Hatcher, et al. Data-Parallel Programming on MIMD Computers, 1991, IEEE Trans. Parallel Distributed Syst.

[25] Charles Koelbel, et al. Compiling Global Name-Space Parallel Loops for Distributed Execution, 1991, IEEE Trans. Parallel Distributed Syst.

[26] Ken Kennedy, et al. Compiler optimizations for Fortran D on MIMD distributed-memory machines, 1991, Proceedings of the 1991 ACM/IEEE Conference on Supercomputing (Supercomputing '91).

[27] Ulrich Kremer, et al. A static performance estimator to guide data partitioning decisions, 1991.

[28] Ken Kennedy, et al. Computer support for machine-independent parallel programming in Fortran D, 1992.

[29] Ken Kennedy, et al. Evaluation of compiler optimizations for Fortran D on MIMD distributed memory machines, 1992, ICS '92.