The emergence of the MPI message passing standard for parallel computing

Abstract: MPI has been widely adopted as the message passing interface of choice in parallel computing environments. This paper examines how MPI grew out of the requirements of the scientific research community through a broad-based consultative process. The importance of MPI in providing a portable platform upon which to build higher-level parallel software, such as numerical software libraries, is discussed. The development of MPI is contrasted with other similar standardization efforts, such as those of the Parallel Computing Forum and the HPF Forum. MPI is also compared with the Parallel Virtual Machine (PVM) system. Some general lessons learned from the MPI specification process are presented.
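To illustrate the message-passing style that the standard defines, the sketch below shows a minimal two-process exchange using the core point-to-point calls (MPI_Init, MPI_Comm_rank, MPI_Send, MPI_Recv, MPI_Finalize). It is a generic example, not code from the paper; the payload value and message tag are arbitrary choices for illustration.

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, value;

    MPI_Init(&argc, &argv);               /* start the MPI runtime          */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank); /* identify this process          */

    if (rank == 0) {
        value = 42;                       /* arbitrary payload              */
        MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        printf("rank 1 received %d from rank 0\n", value);
    }

    MPI_Finalize();                       /* shut down the MPI runtime      */
    return 0;
}
```

Because the same calls are implemented by every conforming MPI library, a program written this way can be moved between distributed-memory machines without source changes, which is the portability argument the paper develops.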
