Process Mapping for MPI Collective Communications

Mapping virtual parallel processes onto physical processors (or cores) in an optimized way is an important problem for achieving scalable performance, because communication costs in modern parallel computers are non-uniform. Existing work uses profile-guided approaches to automatically find mapping schemes that minimize the cost of point-to-point communications. However, these approaches cannot deal with collective communications and may produce sub-optimal mappings for applications that rely on them. In this paper, we propose OPP (Optimized Process Placement), an approach that handles collective communications by transforming them into series of point-to-point operations, according to how the collectives are implemented in the communication library. Existing profile-guided approaches can then find mapping schemes that are optimized for both point-to-point and collective communications. We evaluated our approach with micro-benchmarks covering all MPI collective communications, the NAS Parallel Benchmark suite, and three other applications. Experimental results show that the process placements generated by our approach achieve significant speedups.
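To make the decomposition step concrete, the sketch below derives the point-to-point traffic matrix induced by a single broadcast, assuming a binomial-tree algorithm rooted at rank 0 (the schedule MPICH uses for short-message MPI_Bcast); the process count, message size, and output format are illustrative and not taken from the paper. A profile-guided mapper such as MPIPP could consume such a matrix together with the application's measured point-to-point traffic.

/* Sketch: point-to-point traffic induced by one broadcast, assuming a
 * binomial-tree algorithm rooted at rank 0.  NPROCS and MSG_BYTES are
 * hypothetical values chosen for illustration. */
#include <stdio.h>

#define NPROCS    8       /* number of processes (hypothetical)    */
#define MSG_BYTES 1024L   /* broadcast payload size (hypothetical) */

/* comm[i][j] accumulates bytes sent from rank i to rank j. */
static long comm[NPROCS][NPROCS];

/* Binomial-tree broadcast: in the round with stride `step`, every
 * rank below `step` already holds the message and forwards it to
 * rank + step, if that rank exists. */
static void bcast_to_p2p(int nprocs, long bytes)
{
    for (int step = 1; step < nprocs; step <<= 1)
        for (int src = 0; src < step; src++) {
            int dst = src + step;
            if (dst < nprocs)
                comm[src][dst] += bytes;
        }
}

int main(void)
{
    bcast_to_p2p(NPROCS, MSG_BYTES);
    for (int i = 0; i < NPROCS; i++) {         /* print the matrix */
        for (int j = 0; j < NPROCS; j++)
            printf("%6ld ", comm[i][j]);
        putchar('\n');
    }
    return 0;
}

Other collectives (all-to-all, reduce, allgather, and so on) decompose analogously, each according to the tree or ring schedule used by the library implementation, which is what lets a point-to-point mapper optimize for them.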
