Parallel Quasi-Monte Carlo Methods for Linear Algebra Problems

In this paper we propose an improved quasi-Monte Carlo method for solving linear algebra problems. We show that using low-discrepancy sequences improves both the convergence rate and the CPU time of the algorithm. Two parallelization schemes based on the Message Passing Interface (MPI), with static and dynamic load balancing, are proposed. The dynamic scheme is well suited to computing in a Grid environment.
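To make the idea concrete, the sketch below illustrates one common way quasi-Monte Carlo is applied to a linear system x = Hx + f: the Ulam-von Neumann random-walk estimator, with the walk transitions driven by points of a low-discrepancy (Halton) sequence instead of pseudorandom numbers. This is only an illustrative sketch under stated assumptions (a convergent Neumann series, truncated walks of fixed length, and Halton points as the low-discrepancy generator); the function names and parameters are hypothetical and it is not the paper's actual algorithm or parallel implementation.

```python
import numpy as np

def halton_point(index, dim, primes=(2, 3, 5, 7, 11, 13, 17, 19, 23, 29)):
    """Return one point of the Halton sequence in [0,1)^dim (radical inverse per prime base)."""
    point = np.empty(dim)
    for d in range(dim):
        base, i, frac, x = primes[d], index, 1.0, 0.0
        while i > 0:
            frac /= base
            x += frac * (i % base)
            i //= base
        point[d] = x
    return point

def qmc_solve_component(H, f, i0, n_walks=4096, walk_len=8):
    """Quasi-Monte Carlo estimate of x[i0] for x = H x + f (truncated Neumann series).

    Each random walk is driven by one Halton point whose dimension equals the walk length.
    Assumes the Neumann series converges and every row of H has a nonzero entry.
    """
    n = H.shape[0]
    # Transition probabilities proportional to |H|, normalized row by row.
    absH = np.abs(H)
    P = absH / absH.sum(axis=1, keepdims=True)
    cdf = np.cumsum(P, axis=1)

    estimate = 0.0
    for k in range(n_walks):
        u = halton_point(k + 1, walk_len)      # one low-discrepancy point per walk
        state, weight, score = i0, 1.0, f[i0]
        for step in range(walk_len):
            # Pick the next state by inverting the row CDF at the quasi-random coordinate.
            nxt = min(int(np.searchsorted(cdf[state], u[step])), n - 1)
            weight *= H[state, nxt] / P[state, nxt]
            state = nxt
            score += weight * f[state]
        estimate += score
    return estimate / n_walks
```

In a parallel setting of the kind described above, each MPI process would be assigned a disjoint block of sequence indices (static load balancing) or request blocks from a master process on demand (dynamic load balancing); the per-walk structure of the estimator is what makes such a decomposition straightforward.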