Flexible Hardware Mapping for Finite Element Simulations on Hybrid CPU/GPU Clusters

The ever-increasing peak floating-point performance and memory bandwidth of GPUs have made them increasingly common in the high-performance computing community. As GPUs are adopted more widely in cluster environments, applications that cannot take advantage of this hardware will be at a distinct disadvantage. For the class of applications that can achieve massive speedups of 100x or more on the GPU, the way forward is clear: maximum performance will depend on utilizing all available GPUs as efficiently as possible, with the CPU most likely relegated to managing the data flowing into and out of the GPU. However, for applications that benefit from GPU execution but see speedups of only 5-10x, the appropriate relationship between the CPU and GPU is harder to determine, and may depend on the specifics of the algorithm and hardware in question. Finite element applications generally fall into this second category: they benefit from GPUs, but probably by no more than 15-20x even in the best case. In a cluster environment where data must be transferred to the CPU at regular intervals for synchronization, speedups below 10x are typical. When cluster nodes have 8 or more CPU cores, it is clear that maximum performance requires taking full advantage of execution on both the CPU and the GPU.

We present an API and supporting software layer for finite element applications on unstructured meshes in hybrid CPU/GPU environments that allows runtime mapping of mesh partitions to either CPU or GPU hardware and effectively overlaps CPU and GPU work. This layer sits on top of the ParFUM [6] framework for unstructured meshes and takes advantage of its support for synchronizing shared nodes between mesh partitions. ParFUM in turn relies on the Charm++ parallel runtime system [4]. The layer manages the creation and deletion of GPU memory buffers and the transfer of node and element data to and from the GPU at each synchronization point. It also provides a consistent API for accessing that data in both CPU and GPU functions, allowing very similar code for equivalent CPU and GPU kernels. We demonstrate the effectiveness of this scheme with a functionally graded material simulation that scales to 128 nodes of the National Center for Supercomputing Applications (NCSA) Lincoln cluster, achieving a speedup of 2023x over a single CPU core.

II. API
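Before describing the API in detail, the following sketch gives a rough sense of how a layer of this kind might expose per-partition hardware mapping and synchronization-point transfers. All type names, function names, and the round-robin mapping policy below are assumptions made for illustration only; they are not the actual interface described in this paper.

```cpp
// Hypothetical sketch only: the types, names, and mapping policy are
// illustrative assumptions, not the actual ParFUM extension API.
#include <cuda_runtime.h>
#include <vector>

enum class Device { CPU, GPU };           // hardware a partition is mapped to

struct PartitionBuffers {
  std::vector<double> nodeData;           // host copy of per-node attributes
  std::vector<double> elemData;           // host copy of per-element attributes
  double* d_nodeData = nullptr;           // device mirrors, used only for GPU partitions
  double* d_elemData = nullptr;
  Device device = Device::CPU;
};

// Runtime mapping decision: a simple illustrative policy that sends every
// other partition to the GPU when one is available, so CPU and GPU work
// can proceed concurrently on the same cluster node.
Device mapPartition(int partitionId, bool gpuAvailable) {
  return (gpuAvailable && partitionId % 2 == 0) ? Device::GPU : Device::CPU;
}

// Buffer creation: device buffers exist only for partitions mapped to the GPU.
void createBuffers(PartitionBuffers& p) {
  if (p.device == Device::GPU) {
    cudaMalloc(&p.d_nodeData, p.nodeData.size() * sizeof(double));
    cudaMalloc(&p.d_elemData, p.elemData.size() * sizeof(double));
  }
}

void destroyBuffers(PartitionBuffers& p) {
  if (p.device == Device::GPU) {
    cudaFree(p.d_nodeData);
    cudaFree(p.d_elemData);
  }
}

// Around each synchronization point, GPU-resident node data is staged back
// to the host so shared nodes can be reconciled between partitions (handled
// by ParFUM in the real layer), then pushed back to the device.
void synchronizeSharedNodes(PartitionBuffers& p) {
  const size_t bytes = p.nodeData.size() * sizeof(double);
  if (p.device == Device::GPU)
    cudaMemcpy(p.nodeData.data(), p.d_nodeData, bytes, cudaMemcpyDeviceToHost);
  // ... shared-node update across partitions would occur here ...
  if (p.device == Device::GPU)
    cudaMemcpy(p.d_nodeData, p.nodeData.data(), bytes, cudaMemcpyHostToDevice);
}
```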
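A second, equally hypothetical sketch illustrates how a single host/device accessor type could let equivalent CPU and GPU element kernels share nearly identical bodies, in the spirit of the consistent data-access API described above. The MeshView structure and kernel names are invented for this example.

```cpp
// Hypothetical sketch: names and layout are assumptions for illustration.
#include <cuda_runtime.h>

struct MeshView {
  const double* nodeDisp;     // nodal displacements
  const double* stiffness;    // per-element stiffness-like coefficient
  const int*    conn;         // element-node connectivity, nodesPerElem wide
  double*       elemResult;   // per-element output
  int           nodesPerElem;

  // Accessor usable from both CPU and GPU code.
  __host__ __device__ int node(int e, int j) const { return conn[e * nodesPerElem + j]; }
};

// Shared per-element body, compiled for both host and device.
__host__ __device__ void elementContribution(const MeshView& m, int e) {
  double sum = 0.0;
  for (int j = 0; j < m.nodesPerElem; ++j)
    sum += m.nodeDisp[m.node(e, j)];
  m.elemResult[e] = m.stiffness[e] * sum;   // illustrative element update
}

// CPU version of the kernel: a plain loop over elements.
void elementKernelCPU(const MeshView& m, int numElems) {
  for (int e = 0; e < numElems; ++e) elementContribution(m, e);
}

// GPU version: one thread per element, same body as the CPU loop.
__global__ void elementKernelGPU(MeshView m, int numElems) {
  int e = blockIdx.x * blockDim.x + threadIdx.x;
  if (e < numElems) elementContribution(m, e);
}
```

The sketch writes to a per-element output array so that neither version needs atomic updates; the real layer's kernels and data layout may of course differ.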