Parallel Quasi-Monte Carlo Methods for Linear Algebra Problems
In this paper we propose an improved quasi-Monte Carlo method for solving linear algebra problems. We show that replacing pseudorandom numbers with low-discrepancy sequences improves both the convergence rate and the CPU time of the algorithm. Two parallelization schemes based on the Message Passing Interface (MPI), with static and dynamic load balancing respectively, are proposed. The dynamic scheme is well suited to computing in a Grid environment.
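The abstract does not reproduce the algorithm, but the classical Monte Carlo estimator that quasi-Monte Carlo variants of this kind build on is well known: write the system as x = Lx + f with the spectral radius of L below 1, and estimate each component by weighted random walks. Below is a minimal sketch of that estimator driven by a Halton sequence, a common low-discrepancy choice (the paper may use a different sequence). All names here (`halton`, `walk_estimate`, `qmc_solve`, `PRIMES`) are illustrative, not from the paper, and the code assumes L has no all-zero rows.

```python
import numpy as np

# First 20 primes: one Halton base per walk step (so max_steps <= 20 here).
PRIMES = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29,
          31, 37, 41, 43, 47, 53, 59, 61, 67, 71]

def halton(index, base):
    """Radical inverse of `index` in `base` (one coordinate of a Halton point)."""
    f, r = 1.0, 0.0
    while index > 0:
        f /= base
        r += f * (index % base)
        index //= base
    return r

def walk_estimate(L, f, C, P, w, max_steps=20):
    """Contribution of the w-th quasi-random walk to every component of x.

    Coordinate m of the w-th Halton point drives step m of the walk, so one
    call returns a length-n vector of per-component estimates.
    """
    n = L.shape[0]
    est = np.empty(n)
    for i in range(n):
        state, weight, total = i, 1.0, f[i]
        for m in range(max_steps):
            u = halton(w, PRIMES[m])
            # side="right" skips zero-probability columns at CDF boundaries.
            nxt = int(np.searchsorted(C[state], u, side="right"))
            weight *= L[state, nxt] / P[state, nxt]
            total += weight * f[nxt]
            state = nxt
        est[i] = total
    return est

def qmc_solve(L, f, n_walks=2000, max_steps=20):
    """Estimate x solving x = L x + f (spectral radius of L below 1)."""
    P = np.abs(L)
    P /= P.sum(axis=1, keepdims=True)   # transition probabilities ~ |L_ij|
    C = np.cumsum(P, axis=1)            # row-wise CDFs for sampling
    x = np.zeros(L.shape[0])
    for w in range(1, n_walks + 1):     # Halton indices start at 1
        x += walk_estimate(L, f, C, P, w, max_steps)
    return x / n_walks
```

On a small contraction, e.g. `L = np.array([[0.1, 0.3], [0.2, 0.2]])` and `f = np.array([1.0, 2.0])`, the estimate should approach `np.linalg.solve(np.eye(2) - L, f)`; the weights shrink geometrically, so truncating each walk at 20 steps introduces negligible bias.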
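The two MPI schemes are only named in the abstract; the sketch below shows one plausible layout for the static scheme, assuming mpi4py and the `walk_estimate` helper from the previous sketch (imported here from a hypothetical module `qmc.py`). A dynamic scheme of the kind suited to Grid computing would instead have a master rank hand out blocks of walk indices to workers on request, so slower nodes simply receive less work.

```python
# Static load balancing; run with e.g.:  mpiexec -n 4 python qmc_static.py
from mpi4py import MPI
import numpy as np

from qmc import walk_estimate  # per-walk estimator from the sketch above (hypothetical module)

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

L = np.array([[0.1, 0.3], [0.2, 0.2]])
f = np.array([1.0, 2.0])
n_walks = 2000

P = np.abs(L)
P /= P.sum(axis=1, keepdims=True)
C = np.cumsum(P, axis=1)

# Each rank takes walk indices rank+1, rank+1+size, ... : the Halton point
# set is split deterministically, with no communication until the reduction.
local = np.zeros(L.shape[0])
for w in range(rank + 1, n_walks + 1, size):
    local += walk_estimate(L, f, C, P, w)

# Sum the partial estimates across ranks and average.
x = np.empty_like(local)
comm.Allreduce(local, x, op=MPI.SUM)
if rank == 0:
    print(x / n_walks)
```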