Operations on sparse matrices are key computational kernels in many scientific and engineering applications, yet they are characterized by poor sustained performance. It is not uncommon for microprocessors to achieve only 10-20% of their peak floating-point performance on sparse matrix computations, even when special vector processors have been added as coprocessor facilities. In this paper we present a new data format for sparse matrix storage. This format facilitates the continuous reuse of elements in the processing array. In comparison to other formats, we achieve lower storage overhead (only one extra bit per non-zero element). A conjecture of the proposed approach is that hardware execution efficiency on sparse matrices can be improved.

Keywords—Sparse Matrix Formats, Operation Efficiency, Hardware
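To illustrate the kind of scheme the abstract alludes to, the following is a minimal sketch (not the paper's exact format) of a streamed sparse storage in which each non-zero carries one extra flag bit marking the end of its row, so no separate row-pointer array is needed; the function names and the assumption that every row contains at least one non-zero are ours, not the paper's.

```python
def to_flagged(dense):
    """Convert a dense 2-D list into a stream of (value, col, end_of_row)
    triples: one extra bit per non-zero replaces the row-pointer array.
    Assumes every row has at least one non-zero (an empty row would need
    an explicit marker)."""
    out = []
    for row in dense:
        nz = [(v, j) for j, v in enumerate(row) if v != 0]
        for k, (v, j) in enumerate(nz):
            out.append((v, j, k == len(nz) - 1))  # flag the row's last non-zero
    return out

def spmv(flagged, x, nrows):
    """Sparse matrix-vector product y = A @ x over the flagged stream.
    The end-of-row bit tells the accumulator when to advance to the
    next result row, so the stream can be consumed strictly in order."""
    y = [0.0] * nrows
    r = 0
    for v, j, eor in flagged:
        y[r] += v * x[j]
        if eor:
            r += 1
    return y
```

Because the stream is consumed strictly in order and row boundaries are encoded in-band, a hardware pipeline can process it without random access into index arrays, which is one plausible reading of "continuous reuse of elements in the processing array."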