We examine the problem of evaluating the performance of supercomputer architectures on sparse (matrix) computations and lay out the details of a benchmark package for this problem. Although a number of benchmark packages for scientific computations already exist, such as the Livermore Loops, the Linpack benchmark, and the Los Alamos benchmark, none of them addresses the specific nature of sparse computations. Sparse matrix techniques are characterized by a relatively small number of operations per data element and by the irregularity of the computation. Both characteristics can significantly increase the overhead due to memory traffic. For this reason, the performance evaluation of sparse computations should take into account not only the CPU performance but also the degradation of performance caused by heavy memory traffic. Furthermore, sparse matrix techniques comprise a variety of different types of basic computations. Taking these considerations into account, we propose a benchmark package that consists of several independent modules, each of which has a distinct role.
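To illustrate the characteristics mentioned above, the following minimal sketch (not part of the proposed benchmark package; routine names and data are chosen only for exposition) computes a sparse matrix-vector product y = A*x with A stored in compressed sparse row (CSR) form. Each nonzero contributes a single multiply-add but requires an index load and an indirect, gathered access to x, which is why memory traffic rather than arithmetic tends to dominate.

/* Illustrative CSR sparse matrix-vector product: low ratio of
 * arithmetic to memory references, irregular (indirect) access to x. */
#include <stdio.h>

void spmv_csr(int n, const int *row_ptr, const int *col_idx,
              const double *val, const double *x, double *y)
{
    for (int i = 0; i < n; i++) {
        double sum = 0.0;
        /* row i holds nonzeros val[row_ptr[i] .. row_ptr[i+1]-1] */
        for (int k = row_ptr[i]; k < row_ptr[i + 1]; k++)
            sum += val[k] * x[col_idx[k]];   /* indirect access to x */
        y[i] = sum;
    }
}

int main(void)
{
    /* 3x3 example matrix [[4,0,1],[0,3,0],[2,0,5]] in CSR form */
    int    row_ptr[] = {0, 2, 3, 5};
    int    col_idx[] = {0, 2, 1, 0, 2};
    double val[]     = {4.0, 1.0, 3.0, 2.0, 5.0};
    double x[]       = {1.0, 2.0, 3.0};
    double y[3];

    spmv_csr(3, row_ptr, col_idx, val, x, y);
    for (int i = 0; i < 3; i++)
        printf("y[%d] = %g\n", i, y[i]);   /* expect 7, 6, 17 */
    return 0;
}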