On the way to Exascale, programmers face the growing challenge of supporting multiple hardware architectures from a single code base. At the same time, code and performance portability are increasingly difficult to achieve as hardware architectures become more diverse. Today’s heterogeneous systems often combine two or more completely distinct and incompatible hardware execution models, such as GPGPUs, SIMD vector units, and general-purpose cores, which conventionally have to be programmed using separate toolchains representing non-overlapping programming models. The recent revival of interest in the C++ language across industry and the wider community has spurred a remarkable number of standardization proposals and technical specifications in the arena of concurrency and parallelism. This includes growing discussion of the need for a uniform, higher-level abstraction and programming model for parallelism in the C++ standard targeting heterogeneous and distributed computing. Such an abstraction should blend seamlessly with existing, already standardized language and library features, while remaining generic enough to support future hardware developments. In this paper, we present the results of developing such a higher-level programming abstraction for parallelism in C++ that aims to enable code and performance portability across a wide range of architectures and for various types of parallelism. We present and compare performance data obtained from running the well-known STREAM benchmark, ported to our higher-level C++ abstraction, against the corresponding results from running it natively. We show that our abstraction delivers performance at least as good as the comparable baseline benchmarks while providing a uniform programming API on all compared target architectures.
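The abstract does not spell out the abstraction's API, so as a minimal sketch of the style of uniform, higher-level interface it describes, the STREAM triad kernel (a[i] = b[i] + q * c[i]) can be expressed against the standard C++ parallel algorithms, where an execution policy selects how the data-parallel loop is run. The choice of std::execution::par_unseq here is an illustrative assumption, not necessarily the abstraction evaluated in the paper.

```cpp
// Sketch only: STREAM "triad" kernel written in the execution-policy style
// of higher-level parallel abstraction described in the abstract.
#include <algorithm>
#include <execution>
#include <vector>

void stream_triad(std::vector<double>& a,
                  std::vector<double> const& b,
                  std::vector<double> const& c,
                  double q)
{
    // A single algorithm call expresses the data-parallel loop; the
    // execution policy (assumed par_unseq here) chooses how it is
    // parallelized and vectorized on the target hardware.
    std::transform(std::execution::par_unseq,
                   b.begin(), b.end(), c.begin(), a.begin(),
                   [q](double bi, double ci) { return bi + q * ci; });
}
```

The appeal of this style is that the kernel itself stays identical across targets; only the execution policy (or its vendor-specific equivalent) changes, which is the kind of uniform API the paper's performance comparison is meant to validate.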