Characterizing communication patterns of NAS-MPI benchmark programs