In this work, we describe a multilevel-multigraph algorithm. An excellent recent survey of algebraic approaches to multilevel iterative methods, with an extensive bibliography, is given by Wagner [Wag99]. The algorithm discussed here is described more fully in [BS00]. Our goal is to develop an iterative solver with the simplicity of use and robustness of general sparse Gaussian elimination, while capturing the computational efficiency of classical multigrid algorithms. While we do not believe that the current algorithm achieves this goal, it represents an important step in this direction.

To guarantee robustness, general sparse Gaussian elimination with minimum degree ordering is a point in the parameter space of our method. This is a well-known and widely used method, among the most computationally efficient of general sparse direct methods [GL81]. To obtain simplicity of use and implementation, our algorithms incorporate many technologies and algorithms originally developed for general sparse Gaussian elimination. Besides the minimum degree algorithm, the Reverse Cuthill-McKee ordering is the basis of our coarsening procedure. Our sparse matrix data structures are a generalization of those first introduced in the symmetric Yale Sparse Matrix Package [EGSS82], and our (incomplete) factorization procedure is a generalization of the sparse row elimination scheme used there. To gain computational efficiency, our method offers the possibility of computing an incomplete factorization, with the user able to specify a drop tolerance and an absolute bound on the total fill-in. This factorization becomes the smoother in a multilevel procedure similar to the classical multigrid method.

Sparse direct methods typically have two phases. In the initialization phase, the equations are ordered, and symbolic and numerical factorizations are computed. In the solution phase, the solution of the linear system is computed using the factorization. Our procedure, like other algebraic multilevel methods, also breaks naturally into two phases. The initialization consists of ordering, incomplete symbolic and numeric factorizations, and the computation of the transfer matrices between levels. In the solution phase, the preconditioner computed in the initialization phase is used to compute the solution using the preconditioned Composite Step Conjugate Gradient (CSCG) or Composite Step Biconjugate Gradient (CSBCG) method [BC93].

In the spirit of general sparse Gaussian elimination, we have tried to minimize the number of user-specified control parameters. In the initialization phase, there are three parameters. The most important is the drop tolerance for the incomplete factorization. Because the fill-in for the ILU tends to be a very nonlinear and unpredictable function of the drop tolerance, we also allow the user to specify an upper bound on the amount of fill-in to be allowed in the incomplete factorization.
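To illustrate the two-phase pattern described above, the following is a minimal sketch, not the authors' Multigraph code: SciPy's spilu stands in for the incomplete factorization (its drop_tol and fill_factor parameters play the roles of the drop tolerance and the fill-in bound), a Reverse Cuthill-McKee permutation stands in for the ordering step, and plain preconditioned CG is used in place of the composite step CG/BiCG of [BC93]. No multilevel hierarchy or transfer matrices are built; this shows only the single-level setup/solve split.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla


def initialization_phase(A, drop_tol=1e-3, fill_factor=10.0):
    """Analogue of the setup phase: ordering plus incomplete factorization."""
    # Reverse Cuthill-McKee ordering of the sparse matrix graph.
    perm = sp.csgraph.reverse_cuthill_mckee(A.tocsr())
    A_perm = A.tocsr()[perm, :][:, perm].tocsc()
    # Incomplete LU with a drop tolerance and a cap on total fill-in.
    ilu = spla.spilu(A_perm, drop_tol=drop_tol, fill_factor=fill_factor)
    return perm, A_perm, ilu


def solution_phase(A_perm, b_perm, ilu, maxiter=200):
    """Analogue of the solve phase: ILU-preconditioned Krylov iteration."""
    M = spla.LinearOperator(A_perm.shape, matvec=ilu.solve)
    x_perm, info = spla.cg(A_perm, b_perm, M=M, maxiter=maxiter)
    return x_perm, info


if __name__ == "__main__":
    # Small SPD model problem: 1-D Laplacian with a constant right-hand side.
    n = 200
    A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csc")
    b = np.ones(n)

    perm, A_perm, ilu = initialization_phase(A, drop_tol=1e-4, fill_factor=5.0)
    x_perm, info = solution_phase(A_perm, b[perm], ilu)

    x = np.empty(n)
    x[perm] = x_perm  # undo the ordering
    print("converged" if info == 0 else f"info={info}",
          "residual norm:", np.linalg.norm(A @ x - b))
```

Tightening drop_tol toward zero drives the factorization toward a complete (direct) solve, mirroring how exact sparse Gaussian elimination sits as one point in the method's parameter space; loosening it trades accuracy of the preconditioner for memory, which is why an explicit fill-in bound is also exposed.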
[1] R. E. Bank and T. F. Chan, An analysis of the composite step biconjugate gradient method, 1993.
[2] A. George and J. W. Liu, Computer Solution of Large Sparse Positive Definite Systems, 1981.
[3] R. D. Falgout, An Introduction to Algebraic Multigrid, 2006.
[4] R. E. Bank et al., PLTMG: A Software Package for Solving Elliptic Partial Differential Equations, Users' Guide 8.0, Software, Environments, and Tools, 1998.
[5] S. C. Eisenstat et al., Algorithms and Data Structures for Sparse Symmetric Gaussian Elimination, 1981.