Some shared memory is desirable in parallel sparse matrix computation

Over the past few years, a number of algorithms have been developed for solving large sparse systems of equations on distributed-memory multiprocessors. In this article the authors point out that the general properties of sparse matrix problems, together with the characteristics of these parallel algorithms for solving them, lead to inefficient use of memory. An example is presented to show that a relatively small amount of shared memory on an otherwise pure distributed-memory multiprocessor is very desirable when such a machine is used to execute these parallel algorithms.
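As a rough illustration of how such memory inefficiency can arise (this is a hypothetical sketch with assumed numbers, not the paper's own example), consider a sparse factorization distributed over p processors: because the sparsity pattern is irregular and the fill-in each processor will generate is not known in advance, each private memory must be sized for a worst-case bound, whereas a shared pool need only hold the actual total.

/*
 * Hypothetical sketch: private per-processor allocation sized to a
 * worst-case bound versus a single shared pool sized to actual usage.
 * All figures below are assumed for illustration only.
 */
#include <stdio.h>

#define P 4  /* assumed number of processors */

int main(void)
{
    /* Assumed actual fill-in (in matrix entries) produced on each processor. */
    long actual[P]  = { 120000, 40000, 95000, 30000 };
    /* Assumed a-priori bound each processor must allocate privately,
       since its eventual share of the fill-in is unpredictable. */
    long worst_case = 150000;

    long private_total = 0, shared_total = 0;
    for (int i = 0; i < P; i++) {
        private_total += worst_case;  /* every private memory sized to the bound */
        shared_total  += actual[i];   /* a shared pool holds only what is used   */
    }

    printf("pure distributed memory: %ld words\n", private_total);
    printf("shared pool            : %ld words\n", shared_total);
    return 0;
}

Under these assumed figures the purely private allocation reserves 600000 words while only 285000 are actually used; a modest shared pool lets unused capacity on lightly loaded processors serve the heavily loaded ones, which is the kind of saving the article's argument suggests.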