Effects of mesh loop modes on performance of unstructured finite volume GPU simulations

In the unstructured finite volume method, loops over mesh components such as cells, faces, and nodes are widely used to traverse data. A mesh loop results in direct or indirect data access, which significantly affects data locality. Looping over the mesh can also cause many threads to access the same data, introducing data dependence. Both data locality and data dependence play an important part in the performance of GPU simulations. To optimize a GPU-accelerated unstructured finite volume Computational Fluid Dynamics (CFD) program, the performance of hot spots under different loops over cells, faces, and nodes is evaluated on Nvidia Tesla V100 and K80 GPUs. Numerical tests at different mesh scales show that the mesh loop mode affects data locality and data dependence differently. Specifically, the face loop yields the best data locality whenever a kernel accesses face data. The cell loop incurs the smallest non-coalesced data-access overhead when a kernel uses both cell and node data but no face data, and it also performs best when a kernel contains only indirect accesses of cell data. Atomic operations reduced kernel performance considerably on the K80, whereas their effect is not obvious on the V100. With a suitable mesh loop mode in every kernel, the overall performance of the GPU simulations can be increased by 15%-20%. Finally, the program on a single V100 GPU achieves a maximum speedup of 21.7x and an average speedup of 14.1x compared with 28 MPI tasks on two Intel Xeon Gold 6132 CPUs.
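
To make the face-loop/cell-loop trade-off concrete, the CUDA sketch below contrasts the two modes for a typical residual-accumulation hot spot: a face loop (one thread per face, scattering flux contributions to the two adjacent cells with atomicAdd) versus a cell loop (one thread per cell, gathering over that cell's faces). This is a minimal sketch of the general pattern, not the paper's actual code; the connectivity layout and all names (face2cell, cell2face, faceSign, flux, residual, nFaces, nCells, MAX_FACES_PER_CELL) are hypothetical.

// Hypothetical connectivity layout, for illustration only:
//   face2cell[2*f+0], face2cell[2*f+1]  : the two cells adjacent to face f
//   cell2face[c*MAX_FACES_PER_CELL + k] : k-th face of cell c (-1 if padded)
#define MAX_FACES_PER_CELL 6

// Face loop: one thread per face. The face datum flux[f] is read with a
// coalesced, unit-stride access, but the scatter to the two adjacent cells
// races with neighboring faces, so atomicAdd is required.
__global__ void faceLoopResidual(const int *face2cell, const float *flux,
                                 float *residual, int nFaces) {
    int f = blockIdx.x * blockDim.x + threadIdx.x;
    if (f >= nFaces) return;
    float phi = flux[f];                             // coalesced read of face data
    atomicAdd(&residual[face2cell[2 * f]],      phi);  // owner cell
    atomicAdd(&residual[face2cell[2 * f + 1]], -phi);  // neighbor cell
}

// Cell loop: one thread per cell. The write to residual[c] is private to the
// thread, so no atomics are needed, but each face flux is read twice overall
// and the gather through cell2face is an indirect, possibly non-coalesced access.
__global__ void cellLoopResidual(const int *cell2face, const int *faceSign,
                                 const float *flux, float *residual,
                                 int nCells) {
    int c = blockIdx.x * blockDim.x + threadIdx.x;
    if (c >= nCells) return;
    float sum = 0.0f;
    for (int k = 0; k < MAX_FACES_PER_CELL; ++k) {
        int f = cell2face[c * MAX_FACES_PER_CELL + k];
        if (f < 0) continue;                         // padded slot, skip
        sum += faceSign[c * MAX_FACES_PER_CELL + k] * flux[f];  // indirect read
    }
    residual[c] = sum;                               // coalesced write, no atomics
}

The measured trade-off follows directly from this structure: the face loop pays for atomic operations (which the tests show to be expensive on the K80 but cheap on the V100), while the cell loop pays for duplicated, indirect flux reads.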
