Optimal elimination for sparse symmetric systems as a graph problem.

The optimal (requiring the minimum number of multiplications) ordering of a sparse symmetric system of linear algebraic equations to be solved by Gaussian elimination is first developed as a graph problem, which is then treated using the functional equation techniques of dynamic programming. A simple algorithm is proposed as an alternative to the more lengthy procedures of dynamic programming, and this algorithm is shown to be effective for systems whose graphs are "grids".

Introduction. The motivation for this work is the fact that there exists a large class of physical problems which give rise to sparse symmetric linear systems, for which the computational effort required to obtain a solution by elimination is highly dependent upon the ordering of the equations. Here a system of n linear algebraic equations

    A'x' = b'    (1)

is called symmetric if the coefficient matrix A' is symmetric, and sparse if A' has a large number of zero elements. In many problems dealing with structures, networks, finite difference formulations, etc., this is precisely the case. Certainly if there are no zero elements, there is no such thing as an optimum procedure in the sense in which the term is used here. The origins of this work can be traced back to Kron [1] in the work which he calls "Diakoptics" and, more recently, to the work of Branin [2] and Roth [3]. It has been pointed out (see also [4], [5]) that to solve these sparse systems by first computing the inverse of the system matrix can be highly inefficient, and that Gaussian elimination, which is in fact a special case of one of Kron's techniques, is apparently the most efficient procedure, excluding special cases such as, e.g., systems which are highly symmetric (systems in which there is much repetition of elements or groups of elements). There are now digital computer programs available for the automatic analysis of many physical systems.
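The dependence of the elimination cost on the ordering can be made concrete with a small sketch. The following is not from the paper: the function names and the cost model are assumptions. The model charges, for each eliminated variable, d(d + 1)/2 multiplications when d neighbors remain, and records the fill-in that elimination creates (the remaining neighbors become pairwise coupled). For a "star" system, in which one variable is coupled to all the others, eliminating the hub first is far more expensive than eliminating the leaves first:

```python
def elimination_cost(edges, order):
    """Rough multiplication count for symmetric Gaussian elimination.
    edges: set of frozenset pairs, the graph of the matrix's nonzero structure.
    order: the order in which the variables are eliminated.
    Cost model (an assumption, not the paper's exact count): eliminating a
    vertex with d remaining neighbors costs d*(d + 1)/2 multiplications,
    and makes those neighbors pairwise adjacent (fill-in)."""
    adj = {}
    for e in edges:
        u, v = tuple(e)
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    remaining = set(order)
    cost = 0
    for v in order:
        nbrs = (adj.get(v, set()) & remaining) - {v}
        d = len(nbrs)
        cost += d * (d + 1) // 2
        for a in nbrs:              # fill-in: remaining neighbors of v
            for b in nbrs:          # become a clique
                if a != b:
                    adj.setdefault(a, set()).add(b)
        remaining.discard(v)
    return cost

# "Star" system: variable 0 is coupled to variables 1..5 and to nothing else.
star = {frozenset((0, k)) for k in range(1, 6)}
hub_first = elimination_cost(star, [0, 1, 2, 3, 4, 5])     # full fill-in
leaves_first = elimination_cost(star, [1, 2, 3, 4, 5, 0])  # no fill-in
print(hub_first, leaves_first)  # -> 35 5
```

Under this model the good ordering is seven times cheaper on a system of only six equations, which is the phenomenon the paper sets out to treat systematically.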
Since the computational effort, and therefore the cost, is sensitive to the procedure used, it is important to proceed efficiently. In the following, Gaussian elimination is first developed as a graph problem; its functional equation is then treated using the dynamic programming techniques of Bellman [6]; and finally a simple algorithm is discussed which is a computationally attractive alternative to the dynamic programming procedures and which can be easily included in computer programs for automatic analysis.

Gaussian elimination. Given a system in the form of Eq. (1), the question considered here is how to find the solution matrix x' using elimination so that the computing time, and therefore the computing cost, is as small as possible. Following von Neumann, the number of multiplications required will be counted as a measure of the computing time.

*Received April 11, 1966; revised manuscript received July 21, 1967.

126 W. R. SPILLERS AND NORRIS HICKERSON [Vol. XXVI, No. 3

Consider the two distinct phases, elimination and backsubstitution. Generally, a typical step in the elimination phase consists of using, e.g., the ith row of A' to remove all the nonzero terms except a'_ij in the jth column by taking linear combinations of rows. Here, a restrictive form is used in which the ith row is used only to delete terms in the ith column. This implies that the system is well conditioned; it also, however, makes the problem tractable. In the backsubstitution phase, known components of the solution matrix x' are used to compute, e.g., the unknown ith component. In this restricted form, the numerical procedure is completely specified for a given system once the equations have been ordered. Further, if only the number of multiplications is to be counted, one may work with a reduced matrix A whose elements are defined to be a_ij = 1 if a'_ij ≠ 0 and a_ij = 0 otherwise.
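The reduced matrix A discards the numerical values of A' and keeps only its zero/nonzero pattern, which is all that the multiplication count depends on. A minimal sketch (the numerical example is mine, not the paper's):

```python
# Form the reduced matrix A from A': a_ij = 1 where a'_ij != 0,
# and a_ij = 0 otherwise. Only this sparsity pattern matters for
# counting the multiplications an elimination ordering requires.

A_prime = [
    [4.0, 1.0, 0.0],
    [1.0, 3.0, 2.0],
    [0.0, 2.0, 5.0],
]

A = [[1 if a != 0.0 else 0 for a in row] for row in A_prime]
print(A)  # -> [[1, 1, 0], [1, 1, 1], [0, 1, 1]]
```

The reduced matrix is exactly the adjacency structure of the graph associated with the system, which is what allows the ordering question to be posed as a graph problem.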