Programming Effort vs. Performance with a Hybrid Programming Model for Distributed Memory Parallel Architectures

We investigate the programming effort and performance of a programming model that is a hybrid of shared memory and message passing. This model permits an easy implementation in shared memory while still benefiting from the performance advantages of message passing for performance-critical tasks. We integrated message passing with a software DSM system and evaluated programming effort and performance on three different applications with varying degrees of message passing. In two of the applications we found that only a small fraction of the source-code lines responsible for interprocess communication were performance-critical; it was therefore easy to convert only those to message-passing primitives and still approach the performance of pure message passing.
