Integration of CUDA Processing within the C++ Library for Parallelism and Concurrency (HPX)

Experience shows that on today's high-performance systems it is difficult to use different acceleration cards while simultaneously keeping all other parts of the system highly utilized. Future architectures, such as exascale clusters, are expected to aggravate this issue as core counts increase and memory hierarchies become deeper. A key challenge for distributed applications is to guarantee high utilization of all available resources, including local and remote acceleration cards on a cluster, while fully exploiting the available CPU resources and integrating the GPU work into the overall programming model. To integrate CUDA code we extended HPX, a general-purpose C++ runtime system for parallel and distributed applications of any scale, to enable asynchronous data transfers to and from GPU devices and the asynchronous invocation of CUDA kernels on this data. Both operations are well integrated into HPX's general programming model, which allows any GPU operation to be seamlessly overlapped with work on the main cores. Any user-defined CUDA kernel can be launched on any (local or remote) GPU device available to the distributed application. We present asynchronous implementations of the data transfers and kernel launches for CUDA code as part of an HPX asynchronous execution graph. Using this approach we can combine all locally and remotely available acceleration cards on a cluster to exploit its full performance capabilities. Overhead measurements show that integrating the asynchronous operations (data transfers and kernel launches) into the HPX execution graph imposes no additional computational overhead, and significantly eases orchestrating coordinated, concurrent work on the main cores and the GPU devices in use.

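To make the idea concrete, the following is a minimal, hand-rolled sketch of the pattern the abstract describes: CUDA transfers and a kernel launch are enqueued on a stream, and the completion of that stream is surfaced as an hpx::future, so CPU continuations attach to GPU work like to any other HPX task. This is not the HPX CUDA integration presented in the paper itself; the saxpy kernel and the make_stream_future helper are hypothetical, and only standard CUDA runtime calls (cudaMemcpyAsync, cudaLaunchHostFunc) plus hpx::future/hpx::promise from a recent HPX release are assumed.

```cpp
#include <hpx/hpx_main.hpp>
#include <hpx/future.hpp>

#include <cuda_runtime.h>

#include <cstdio>
#include <vector>

// Illustrative SAXPY kernel (hypothetical, not from the paper).
__global__ void saxpy(float a, float const* x, float* y, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        y[i] = a * x[i] + y[i];
}

// Hypothetical helper: expose completion of all work previously enqueued
// on `stream` as an hpx::future. cudaLaunchHostFunc (CUDA >= 10) invokes
// the callback once the stream drains; the callback fulfills the promise.
// hpx::promise is designed to be settable from non-HPX threads, which is
// what the CUDA driver thread is here.
hpx::future<void> make_stream_future(cudaStream_t stream)
{
    auto* p = new hpx::promise<void>();
    hpx::future<void> f = p->get_future();
    cudaLaunchHostFunc(
        stream,
        [](void* data) {
            auto* promise = static_cast<hpx::promise<void>*>(data);
            promise->set_value();
            delete promise;
        },
        p);
    return f;
}

int main()
{
    int const n = 1 << 20;
    float const a = 2.0f;
    std::vector<float> hx(n, 1.0f), hy(n, 2.0f);

    float *dx = nullptr, *dy = nullptr;
    cudaMalloc(&dx, n * sizeof(float));
    cudaMalloc(&dy, n * sizeof(float));

    cudaStream_t stream;
    cudaStreamCreate(&stream);

    // Enqueue transfers and the kernel; none of these calls block the host.
    cudaMemcpyAsync(dx, hx.data(), n * sizeof(float),
        cudaMemcpyHostToDevice, stream);
    cudaMemcpyAsync(dy, hy.data(), n * sizeof(float),
        cudaMemcpyHostToDevice, stream);
    saxpy<<<(n + 255) / 256, 256, 0, stream>>>(a, dx, dy, n);
    cudaMemcpyAsync(hy.data(), dy, n * sizeof(float),
        cudaMemcpyDeviceToHost, stream);

    // The whole GPU pipeline is now a future; a CPU continuation runs
    // only once the device work has finished.
    hpx::future<void> done = make_stream_future(stream).then(
        [&hy](hpx::future<void>&&) { std::printf("y[0] = %f\n", hy[0]); });

    // ... independent CPU work could run here, overlapped with the GPU ...
    done.get();

    cudaStreamDestroy(stream);
    cudaFree(dx);
    cudaFree(dy);
    return 0;
}
```

For genuine transfer/compute overlap the host buffers would need to be page-locked (e.g. via cudaMallocHost); the sketch keeps plain std::vector for brevity.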