Performance Improvements for a Large-scale Geological Simulation

Abstract

Geological models have been used successfully to identify and study geothermal energy resources. Many computer simulations based on these models are data-intensive applications, and large-scale geological simulations require high performance computing (HPC) techniques to run within reasonable time constraints. One research area that can benefit greatly from HPC techniques is the modeling of heat flow beneath the Earth's surface. This paper describes the application of HPC techniques to increase the scale of research with a well-established geological model. Recently, a serial C++ application based on this geological model was ported to a parallel HPC application using MPI. A major focus was improving the performance of the MPI version to enable state- or regional-scale simulations using large numbers of processors. First, synchronous communication among MPI processes was replaced by overlapping communication and computation (asynchronous communication). Asynchronous communication improved performance over the synchronous version by an average of 28% using 56 cores in one environment and 46% using 56 cores in another. Second, a load balancing approach that repartitions the data at the start of the program improved runtime by 32% using 48 cores in the first environment and by 14% using 24 cores in the second, compared to the asynchronous version. An additional feature, modeling of erosion, was also added to the MPI code base. The performance improvement techniques were less effective when erosion modeling was enabled.
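To make the first technique concrete, the sketch below shows the general pattern of overlapping communication and computation in an MPI halo exchange: non-blocking sends and receives are posted for the ghost cells, the interior cells (which need no remote data) are updated while the messages are in flight, and only the boundary cells wait on the communication. The 1-D slab decomposition, the diffusion stencil, and all function names here are illustrative assumptions, not the paper's actual code.

```cpp
// Minimal sketch of overlapping communication and computation for a 1-D
// heat-conduction halo exchange (hypothetical layout and stencil).
#include <mpi.h>
#include <vector>

// Explicit diffusion update for one cell (illustrative stencil).
static double diffuse(const std::vector<double>& t, int i, double r) {
    return t[i] + r * (t[i - 1] - 2.0 * t[i] + t[i + 1]);
}

// One time step on a 1-D slab decomposition.  `temp` holds n interior cells
// plus one ghost cell at each end: [ghost | 1..n | ghost].
void async_step(std::vector<double>& temp, int n, double r,
                int left, int right, MPI_Comm comm) {
    std::vector<double> next(temp.size());
    double send_left = temp[1], send_right = temp[n];
    MPI_Request reqs[4];

    // Post non-blocking receives and sends for the ghost cells.
    MPI_Irecv(&temp[0],     1, MPI_DOUBLE, left,  0, comm, &reqs[0]);
    MPI_Irecv(&temp[n + 1], 1, MPI_DOUBLE, right, 1, comm, &reqs[1]);
    MPI_Isend(&send_left,   1, MPI_DOUBLE, left,  1, comm, &reqs[2]);
    MPI_Isend(&send_right,  1, MPI_DOUBLE, right, 0, comm, &reqs[3]);

    // Overlap: interior cells 2..n-1 need no ghost data, so update them
    // while the messages are in flight.
    for (int i = 2; i <= n - 1; ++i) next[i] = diffuse(temp, i, r);

    // Finish the halo exchange, then update the two boundary cells.
    MPI_Waitall(4, reqs, MPI_STATUSES_IGNORE);
    next[1] = diffuse(temp, 1, r);
    next[n] = diffuse(temp, n, r);
    temp.swap(next);
}
```

At the ends of the domain, left or right can be MPI_PROC_NULL, which MPI treats as a no-op, so the same routine serves boundary ranks; the benefit of the overlap depends on the interior update being large enough to hide the message latency.

The second technique, repartitioning the data once at program start, can be sketched as a weighted contiguous partition: rather than giving every rank the same number of columns, columns are grouped so that each rank receives roughly the same estimated amount of work. The per-column weight model and the helper below are hypothetical, a minimal sketch of static load balancing rather than the paper's partitioning scheme.

```cpp
// Minimal sketch of a one-time, weight-based repartition (hypothetical).
#include <numeric>
#include <vector>

// Returns, for each of `nranks` ranks, the index one past its last column.
// Rank r owns columns [upper[r-1], upper[r]), with an implicit upper[-1] == 0.
std::vector<int> partition_by_weight(const std::vector<double>& weight,
                                     int nranks) {
    const int ncols = static_cast<int>(weight.size());
    const double total = std::accumulate(weight.begin(), weight.end(), 0.0);
    const double target = total / nranks;   // ideal work per rank
    std::vector<int> upper(nranks, ncols);  // last rank takes the remainder
    double acc = 0.0;
    int rank = 0;
    for (int i = 0; i < ncols && rank < nranks - 1; ++i) {
        acc += weight[i];
        if (acc >= target * (rank + 1)) {   // this rank has reached its share
            upper[rank++] = i + 1;
        }
    }
    return upper;
}
```

Each rank would then load only the columns in its assigned range before the time-stepping loop begins, so the balancing cost is paid once at startup.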
