General-purpose GPU (GPGPU) programming frameworks such as OpenCL and CUDA allow individual computation kernels to be run sequentially on a device. In some cases, however, device resources can be used more efficiently by running kernels concurrently. This raises questions about load balancing and resource allocation that have not previously warranted investigation. For example, what kernel characteristics affect the optimal partitioning of resources among concurrently executing kernels? Current frameworks do not provide the ability to easily run kernels concurrently with fine-grained and dynamic control over resource partitioning. We present KernelMerge, a kernel scheduler that runs two OpenCL kernels concurrently on one device. KernelMerge furnishes a number of settings that can be used to survey concurrent or single-kernel configurations, and to investigate how kernels interact and influence each other, or themselves. KernelMerge provides a concurrent kernel scheduler that is compatible with the OpenCL API.
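To make the scheduling idea concrete, the sketch below shows one way a merged dispatcher could be structured in OpenCL C: a single launched kernel whose workgroups are divided between two sub-kernel bodies at a host-chosen split point. This is a minimal illustration of the general technique, not KernelMerge's actual interface; the kernel bodies, argument lists, and the `split` parameter are all assumptions made for the example.

```c
/* A minimal sketch (not KernelMerge's actual code or interface) of the
 * merged-kernel technique: a single dispatcher kernel whose workgroups
 * are divided between two sub-kernel bodies at a host-chosen split
 * point. The vector-add and vector-scale bodies are assumptions made
 * for illustration. */

inline void vec_add_body(size_t gid, __global const float *x,
                         __global const float *y, __global float *out)
{
    out[gid] = x[gid] + y[gid];            /* sub-kernel A: element-wise add */
}

inline void vec_scale_body(size_t gid, __global const float *x,
                           float s, __global float *out)
{
    out[gid] = s * x[gid];                 /* sub-kernel B: element-wise scale */
}

__kernel void merged_dispatcher(__global const float *ax,
                                __global const float *ay,
                                __global float *aout,
                                __global const float *bx,
                                const float bscale,
                                __global float *bout,
                                const uint split)  /* groups [0, split) run A */
{
    const size_t group = get_group_id(0);
    const size_t lsize = get_local_size(0);
    const size_t lid   = get_local_id(0);

    if (group < split) {
        /* Recompute the global id as if kernel A had been launched alone. */
        vec_add_body(group * lsize + lid, ax, ay, aout);
    } else {
        /* Shift group ids so kernel B also sees ids starting at zero. */
        vec_scale_body((group - split) * lsize + lid, bx, bscale, bout);
    }
}
```

Sweeping `split` from "all groups run A" to "all groups run B" is, in spirit, how a scheduler of this kind can survey the space of resource partitionings described above.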
We present an argument for the benefits of running kernels concurrently. We demonstrate how to use KernelMerge to increase throughput for two kernels that use device resources efficiently when run concurrently, and we establish that some kernels show worse performance when run concurrently. We also outline a method for using KernelMerge to investigate how concurrent kernels influence each other, with the goal of predicting concurrent runtimes from individual kernel runtimes. Finally, we suggest GPU architectural changes that would improve such concurrent schedulers in the future.
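As a purely hypothetical baseline for that prediction goal, the C snippet below assumes each kernel receives a fixed fraction of the device and slows down linearly with that fraction; the model and the runtimes are illustrative assumptions, not results from the paper.

```c
/* Illustrative baseline (not from the paper): predict the concurrent
 * runtime of kernels A and B from their standalone runtimes, assuming
 * each receives a fixed fraction of compute resources and its runtime
 * scales inversely with that fraction. */
#include <stdio.h>

double predict_concurrent(double t_a, double t_b, double frac_a)
{
    double slow_a = t_a / frac_a;              /* A slowed by its share */
    double slow_b = t_b / (1.0 - frac_a);      /* B slowed by its share */
    return slow_a > slow_b ? slow_a : slow_b;  /* done when the slower finishes */
}

int main(void)
{
    /* Hypothetical standalone runtimes (ms) and a 50/50 resource split. */
    double merged = predict_concurrent(12.0, 20.0, 0.5);
    printf("predicted concurrent runtime: %.1f ms\n", merged);
    printf("serialized runtime:           %.1f ms\n", 12.0 + 20.0);
    return 0;
}
```

Note that this linear model can never beat serialized execution: at the best split, f = t_A / (t_A + t_B), the prediction exactly equals t_A + t_B. Any measured concurrent speedup therefore indicates that the two kernels stress complementary resources, which is precisely what such an investigation aims to detect.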