Efficient Data Compression Methods for Multidimensional Sparse Array Operations Based on the EKMR Scheme

We have previously proposed the extended Karnaugh map representation (EKMR) scheme for representing multidimensional arrays. In this paper, we propose two data compression schemes based on the EKMR scheme, EKMR compressed row/column storage (ECRS/ECCS), for multidimensional sparse arrays. To evaluate the proposed schemes, we compare them with the conventional CRS/CCS schemes through both theoretical analysis and experimental tests. In the theoretical analysis, we compare the CRS/CCS and ECRS/ECCS schemes in terms of time complexity, space complexity, and the range of their usability in practical applications. In the experimental tests, we compare the time required to compress sparse arrays and the execution time of matrix-matrix addition and matrix-matrix multiplication under the CRS/CCS and ECRS/ECCS schemes. Both the theoretical analysis and the experimental results show that the ECRS/ECCS schemes outperform the CRS/CCS schemes on all evaluated criteria, except for space complexity in some cases.
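To make the baseline concrete, the following is a minimal sketch of the standard CRS (compressed row storage) format that the paper compares against. It is an illustration of conventional CRS only, not of the ECRS/ECCS schemes or the EKMR mapping themselves; the function names (`crs_compress`, `crs_row`) are illustrative, not from the paper.

```python
def crs_compress(matrix):
    """Compress a 2-D list of lists into CRS form.

    Returns (values, col_indices, row_ptr):
      - values: nonzero entries in row-major order
      - col_indices: column index of each stored value
      - row_ptr: row_ptr[i]..row_ptr[i+1] delimits row i's entries
    """
    values, col_indices, row_ptr = [], [], [0]
    for row in matrix:
        for j, v in enumerate(row):
            if v != 0:
                values.append(v)
                col_indices.append(j)
        row_ptr.append(len(values))
    return values, col_indices, row_ptr

def crs_row(values, col_indices, row_ptr, i, ncols):
    """Reconstruct row i of the original matrix as a dense list."""
    row = [0] * ncols
    for k in range(row_ptr[i], row_ptr[i + 1]):
        row[col_indices[k]] = values[k]
    return row

# Example: a 3x4 sparse matrix with four nonzeros.
A = [[5, 0, 0, 8],
     [0, 0, 3, 0],
     [0, 6, 0, 0]]
vals, cols, ptr = crs_compress(A)
print(vals)  # [5, 8, 3, 6]
print(cols)  # [0, 3, 2, 1]
print(ptr)   # [0, 2, 3, 4]
```

CCS is the column-wise analogue (values stored column-major with row indices and a column pointer array). The ECRS/ECCS schemes apply this idea to a multidimensional array after it has been mapped to a lower-dimensional layout by the EKMR scheme, which is what yields their complexity advantages.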
