The Storage Formats for Accelerating SMVP on a GPU

This paper studies how to choose an effective storage format to accelerate the sparse matrix-vector product (SMVP) that arises in various numerical methods. We discuss and analyze storage formats for SMVP implemented on a GPU. These formats are used to speed up the solution of systems of equations arising from numerical methods. The research in this paper offers guidance for quickly selecting formats that require little storage space and make memory accesses efficient, enabling numerical methods to accelerate SMVP.
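As background, one widely used storage format for SMVP (not named in the abstract itself) is compressed sparse row (CSR), which stores only the nonzero values plus their column indices and per-row offsets. The following is a minimal illustrative CPU-side sketch of a CSR SMVP; the GPU kernels such papers benchmark follow the same indexing scheme, typically with one thread or warp per row:

```python
# Sparse matrix-vector product y = A*x with A stored in CSR
# (compressed sparse row) format. Illustrative sketch only.

def spmv_csr(values, col_idx, row_ptr, x):
    """values[k]: k-th nonzero; col_idx[k]: its column;
    row_ptr[i]..row_ptr[i+1]: range of row i's nonzeros."""
    n_rows = len(row_ptr) - 1
    y = [0.0] * n_rows
    for i in range(n_rows):
        acc = 0.0
        for k in range(row_ptr[i], row_ptr[i + 1]):
            acc += values[k] * x[col_idx[k]]
        y[i] = acc
    return y

# Example matrix A = [[1, 0, 2],
#                     [0, 3, 0],
#                     [4, 0, 5]]
values  = [1.0, 2.0, 3.0, 4.0, 5.0]
col_idx = [0, 2, 1, 0, 2]
row_ptr = [0, 2, 3, 5]
print(spmv_csr(values, col_idx, row_ptr, [1.0, 1.0, 1.0]))  # [3.0, 3.0, 9.0]
```

CSR keeps storage proportional to the number of nonzeros, but its indirect access to `x` through `col_idx` is exactly the kind of irregular memory pattern that format selection on a GPU tries to mitigate (e.g., ELL or hybrid formats trade padding for coalesced loads).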
