Parallel Computation of Echelon Forms

We propose efficient parallel algorithms, and their implementations on shared-memory architectures, for LU factorization over a finite field. Compared to the corresponding numerical routines, we identify three main specificities of linear algebra over finite fields. First, the arithmetic cost can be dominated by modular reductions; it is therefore essential to delay these reductions as much as possible while combining fine-grain parallelization with tiled iterative and recursive algorithms. Second, fast linear algebra variants, e.g., those based on the Strassen-Winograd algorithm, never suffer from instability over a finite field and can thus be used widely in cascade with the classical algorithms. Trade-offs must then be made between block sizes well suited to these fast variants and those favoring load and communication balancing. Third, many applications over finite fields require the rank profile of the matrix (which is quite often rank deficient) rather than the solution to a linear system. It is thus important to design parallel algorithms that preserve and compute this rank profile. Moreover, since the rank profile is only discovered during the algorithm, the block size must be dynamic. We propose and compare several block decompositions: tile iterative with left-looking, right-looking, and Crout variants; slab recursive; and tile recursive. Experiments demonstrate that the tile recursive variant performs best and matches the performance of reference numerical software when no rank deficiency occurs. Furthermore, even in the most heterogeneous case, namely when all pivot blocks are rank deficient, we show that it is possible to maintain high efficiency.
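The first point, delaying modular reductions, can be illustrated with a minimal sketch (not the paper's actual FFLAS-FFPACK implementation, and with an illustrative choice of prime): over Z/pZ a dot product can be accumulated exactly in integer arithmetic and reduced once at the end, instead of reducing after every multiply-add, provided the accumulator cannot overflow.

```python
p = 65521  # a word-size prime (hypothetical example value)

def dot_naive(u, v, p):
    """Reduce modulo p after every multiply-add: one reduction per term."""
    acc = 0
    for a, b in zip(u, v):
        acc = (acc + a * b) % p
    return acc

def dot_delayed(u, v, p):
    """Accumulate the exact integer sum and reduce once at the end.

    The results agree as long as the accumulator holds the exact sum;
    in a fixed-width implementation (e.g. 64-bit integers with p < 2^16,
    so each product is < 2^32) this bounds how many terms may be
    accumulated before a reduction becomes necessary.
    """
    return sum(a * b for a, b in zip(u, v)) % p
```

In a compiled kernel, the delayed variant replaces a modular reduction per multiply-add with a single reduction per accumulated block, which is what makes the reduction count, rather than the multiplication count, the quantity to minimize.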
