A Library-Based Approach to Task Parallelism in a Data-Parallel Language

Pure data-parallel languages such as High Performance Fortran version 1 (HPF) do not allow efficient expression of mixed task/data-parallel computations or the coupling of separately compiled data-parallel modules. In this paper, we show how these common parallel program structures can be represented, with only minor extensions to the HPF model, by using a coordination library based on the Message Passing Interface (MPI). This library allows data-parallel tasks to exchange distributed data structures using calls to simple communication functions. We present microbenchmark results that characterize the performance of this library and that quantify the impact of optimizations that allow reuse of communication schedules in common situations. In addition, results from two-dimensional FFT, convolution, and multiblock programs demonstrate that the HPF/MPI library can provide performance superior to that of pure HPF. We conclude that this synergistic combination of two parallel programming standards represents a useful approach to task parallelism in a data-parallel framework, increasing the range of problems addressable in HPF without requiring complex compiler technology.
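To make the coordination model concrete, the following is a minimal sketch, assuming a binding in which each MPI rank stands in for one data-parallel HPF task; the program name, array size, and rank-to-task mapping are illustrative assumptions, not the paper's actual HPF/MPI interface. It shows an HPF task that owns a block-distributed array and hands it to a peer task through ordinary MPI-style point-to-point calls, which is the style of "simple communication functions" the abstract describes.

    program task_sketch
      ! Illustrative sketch only (assumed binding, not the paper's exact API):
      ! a data-parallel task that owns a block-distributed array and exchanges
      ! it with a peer task via MPI-style point-to-point calls.  In the paper's
      ! model each endpoint would be a whole HPF task, possibly spanning many
      ! processors; here two MPI ranks stand in for two such tasks.
      implicit none
      include 'mpif.h'
      integer, parameter :: n = 512
      real :: a(n, n)
    !HPF$ distribute a(block, *)
      integer :: ierr, rank

      call MPI_INIT(ierr)
      call MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierr)

      a = real(rank)              ! each task fills its own section of the array

      if (rank == 0) then
         ! Sending a distributed array: in the HPF/MPI library this is where the
         ! runtime would build (and potentially cache for reuse) the communication
         ! schedule that moves the distributed pieces to the receiving task.
         call MPI_SEND(a, n*n, MPI_REAL, 1, 0, MPI_COMM_WORLD, ierr)
      else if (rank == 1) then
         call MPI_RECV(a, n*n, MPI_REAL, 0, 0, MPI_COMM_WORLD, &
                       MPI_STATUS_IGNORE, ierr)
      end if

      call MPI_FINALIZE(ierr)
    end program task_sketch

Under this reading, the schedule-reuse optimization mentioned in the abstract corresponds to caching the gather/redistribution plan computed for the first such transfer and replaying it on later calls with the same distributions, rather than recomputing it each time.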