In this paper we quantify the performance differences between the OpenMP and MPI versions of a large-scale application benchmark suite, SPECseis. We gathered extensive performance data using hardware counters on a 4-processor Sun Enterprise system, and we present this information through a Speedup Component Model, which precisely attributes the impact of various overheads to the program speedup. Overall, the performance figures of the two program versions match closely; our results indicate that the OpenMP and MPI models are essentially performance-equivalent on shared-memory architectures. However, our analysis also reveals notable differences in individual program phases, in the overhead categories incurred, and in behavioral details such as the number of instructions executed, the memory latencies, and the processor stalls. Our work gives initial answers to a largely unanswered research question: what are the sources of inefficiency in OpenMP programs relative to other programming paradigms on large, realistic applications?
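As a minimal sketch of the idea (not necessarily the authors' exact formulation), a speedup component model of this kind decomposes the measured parallel execution time into an ideal fraction of the serial time plus per-category overheads, so that each overhead's contribution to the speedup loss can be read off directly. The symbols below (T_1, p, O_k) are assumptions for illustration only:

% Hypothetical speedup component decomposition (symbols assumed, not from the paper):
% T_1 : serial execution time
% p   : number of processors
% O_k : measured overhead of category k (e.g., memory stalls, synchronization)
\[
  T_p = \frac{T_1}{p} + \sum_{k} O_k,
  \qquad
  \mathrm{Speedup} = \frac{T_1}{T_p}
                   = \frac{T_1}{\frac{T_1}{p} + \sum_{k} O_k}.
\]

Under such a decomposition, hardware-counter measurements (cycles lost to memory latency, processor stalls, and so on) populate the individual O_k terms, which is what allows the OpenMP and MPI versions to be compared overhead by overhead.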