Characterizing communication patterns of NAS-MPI benchmark programs

Scientific computing applications on parallel computing environments are widely used to simulate scientific and engineering phenomena in place of physical experimentation. The performance of these applications depends heavily on the communication delay between processors, and communication patterns have therefore been studied by many researchers as a way to reduce that delay. Communication characteristics enable us to better understand the performance behavior of scientific applications and to predict the performance of large-scale applications from a model built on smaller versions of the same programs. In this paper, we analyze communication behavior using the NAS-MPI benchmark programs, which are widely used to represent scientific and engineering workloads. The experimental results show that communication patterns, such as communication timing, message sizes, and destinations, can be used to predict performance.
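
A natural way to capture these patterns in practice is the standard MPI profiling interface (PMPI), which lets a wrapper intercept every MPI call and forward it to the real implementation. The sketch below is a minimal illustration, not the instrumentation used in the paper: it wraps MPI_Send to log the three characteristics the study examines, namely timing, message size, and destination rank.

```c
/* trace_send.c: minimal PMPI sketch (illustrative, not the paper's tool).
 * Linking this object ahead of the MPI library makes every MPI_Send in
 * the benchmark pass through this wrapper. */
#include <mpi.h>
#include <stdio.h>

int MPI_Send(const void *buf, int count, MPI_Datatype datatype,
             int dest, int tag, MPI_Comm comm)
{
    int rank, type_size;
    double t0 = MPI_Wtime();
    int rc = PMPI_Send(buf, count, datatype, dest, tag, comm); /* real send */
    double t1 = MPI_Wtime();

    PMPI_Comm_rank(comm, &rank);
    PMPI_Type_size(datatype, &type_size);
    /* One trace record per send: source rank, destination rank, bytes,
     * start time, and elapsed send time. */
    fprintf(stderr, "SEND src=%d dst=%d bytes=%ld t=%.6f dt=%.6f\n",
            rank, dest, (long)count * type_size, t0, t1 - t0);
    return rc;
}
```

Compiling this file with mpicc and linking it into a benchmark ahead of the MPI library yields one trace record per message, from which timing histograms, message-size distributions, and destination maps can be derived.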
