Hybrid Parallelism for CFD Simulations: Combining MPI with OpenMP
In this paper, the performance of a hybrid programming approach combining MPI and OpenMP for a parallel CFD solver was studied on a single cluster of multi-core processors. Timing costs for computation and communication were compared across different scenarios. The MPI-based parallel sections of the solver were tuned with OpenMP directives and library functions. Parallel runs were performed on the BigRed system at Indiana University using 8, 16, 32, and 64 compute nodes with 4 processors (cores) per node; four OpenMP threads were used within each node, one per core. It was observed that pure MPI outperformed the MPI+OpenMP hybrid in overall elapsed time, although the hybrid approach showed improved communication times in some cases. In terms of parallel speedup and efficiency, the hybrid results were close to those of MPI and were higher for processor counts below 32. In general, MPI outperforms the hybrid approach for our applications on this particular computing platform.
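To illustrate the hybrid pattern described above (MPI ranks across nodes, OpenMP threads within each node, one thread per core), the following is a minimal sketch in C. It is not the authors' CFD solver: the cell-update kernel, the array size N_LOCAL, and the reduction are illustrative placeholders, and the example assumes the standard MPI_THREAD_FUNNELED model in which only the master thread issues MPI calls.

```c
/* Minimal hybrid MPI+OpenMP sketch: compute with OpenMP threads inside a
 * node, communicate with MPI between ranks. Kernel and sizes are
 * placeholders, not the paper's solver. */
#include <mpi.h>
#include <omp.h>
#include <stdio.h>
#include <stdlib.h>

#define N_LOCAL 100000   /* hypothetical number of local grid cells per rank */

int main(int argc, char **argv)
{
    int provided, rank, nranks;

    /* Request FUNNELED support: only the master thread makes MPI calls. */
    MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nranks);

    double *u = malloc(N_LOCAL * sizeof(double));
    for (int i = 0; i < N_LOCAL; i++) u[i] = (double)rank;

    double local_sum = 0.0, global_sum = 0.0;

    /* Compute phase: OpenMP threads share the loop within the node. */
    #pragma omp parallel for reduction(+:local_sum)
    for (int i = 0; i < N_LOCAL; i++) {
        u[i] = 0.5 * (u[i] + 1.0);   /* placeholder cell update */
        local_sum += u[i];
    }

    /* Communication phase: a single (master) thread calls MPI. */
    MPI_Allreduce(&local_sum, &global_sum, 1, MPI_DOUBLE, MPI_SUM,
                  MPI_COMM_WORLD);

    if (rank == 0)
        printf("threads per rank = %d, global sum = %f\n",
               omp_get_max_threads(), global_sum);

    free(u);
    MPI_Finalize();
    return 0;
}
```

Such a program would typically be built with an MPI compiler wrapper and the OpenMP flag (e.g., mpicc -fopenmp) and launched with one MPI rank per node and OMP_NUM_THREADS set to the number of cores per node, matching the one-rank-per-node, four-threads-per-node configuration used in the study.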