Low rank approximation of large and/or sparse matrices is important in many applications. We show that good low rank matrix approximations can be obtained directly from the Lanczos bidiagonalization process without computing any singular value decomposition. We also demonstrate that a so-called one-sided reorthogonalization process can be used to maintain an adequate level of orthogonality among the Lanczos vectors and to produce accurate low rank approximations. This technique reduces the computational cost of the Lanczos bidiagonalization process. We illustrate the efficiency and applicability of our algorithm using numerical examples from several application areas.

The computation of the SVD of A can be costly, and if we are only interested in some A_j with j < min(m, n), computing the full SVD of A is rather wasteful. Moreover, in many applications it is not necessary to compute A_j to very high accuracy, since A itself may contain certain errors. It is therefore desirable to develop less expensive alternatives for computing good approximations of A_j. In this paper, we explore one such avenue: applying the Lanczos bidiagonalization process to find approximations of A_j. The Lanczos bidiagonalization process has been used for computing a few dominant singular triplets (singular values and the corresponding left and right singular vectors) of large sparse matrices [3, 2]. We will show that in many cases of interest good approximations can be obtained directly from the Lanczos bidiagonalization process without computing any singular value decomposition. We will also explore the relations between the levels of orthogonality of the left and right Lanczos vectors and propose more efficient reorthogonalization schemes that reduce the computational cost of the Lanczos bidiagonalization process.

The rest of the paper is organized as follows. In Section 2 we briefly review the Lanczos bidiagonalization process and several of its variations in finite precision arithmetic. In Section 3 we discuss both a priori and a posteriori error estimation and stopping criteria. Section 4 is devoted to orthogonalization issues in the Lanczos bidiagonalization process, and several reorthogonalization schemes are discussed in detail. In Section 5 we present numerical experiments illustrating the efficiency and applicability of our algorithm.

In summary, we have shown that good low rank approximations can be obtained directly from the Lanczos bidiagonalization process without computing any singular value decomposition. We discussed several theoretical and practical issues, such as a priori and a posteriori error estimation, recursive computation of the stopping criterion, and relations between the levels of orthogonality of the left and right Lanczos vectors. We also discussed two efficient reorthogonalization schemes: semi-reorthogonalization and one-sided reorthogonalization. A collection of test matrices from several application areas was used to illustrate the accuracy and efficiency of the Lanczos bidiagonalization process with one-sided reorthogonalization. We are currently working on implementations of the proposed algorithms on distributed memory machines such as the Cray T3E.
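As a rough illustration of the process described above, the following is a minimal NumPy sketch of k steps of Golub-Kahan-Lanczos bidiagonalization with one-sided reorthogonalization, where only the right Lanczos vectors are reorthogonalized against their predecessors. The function name, its parameters, and the random starting vector are illustrative assumptions, not the authors' implementation; it is meant only to show how a rank-k approximation U_k B_k V_k^T can be formed without computing an SVD.

```python
import numpy as np

def lanczos_bidiag_lowrank(A, k, reorth=True, rng=None):
    """Sketch: k steps of Golub-Kahan-Lanczos bidiagonalization with
    one-sided (right-vector) reorthogonalization.

    Returns U (m x k), B (k x k upper bidiagonal), V (n x k), so that
    U @ B @ V.T is a rank-k approximation of A.
    """
    m, n = A.shape
    rng = np.random.default_rng() if rng is None else rng
    U = np.zeros((m, k))
    V = np.zeros((n, k))
    alphas = np.zeros(k)
    betas = np.zeros(k - 1)

    # Start from a random unit right Lanczos vector v_1 (an assumption;
    # any application-specific starting vector could be used instead).
    v = rng.standard_normal(n)
    V[:, 0] = v / np.linalg.norm(v)

    # alpha_1 u_1 = A v_1
    u = A @ V[:, 0]
    alphas[0] = np.linalg.norm(u)
    U[:, 0] = u / alphas[0]

    for j in range(k - 1):
        # beta_j v_{j+1} = A^T u_j - alpha_j v_j
        r = A.T @ U[:, j] - alphas[j] * V[:, j]
        if reorth:
            # One-sided reorthogonalization: orthogonalize only the new
            # right Lanczos vector against all previous right vectors.
            r -= V[:, :j + 1] @ (V[:, :j + 1].T @ r)
        betas[j] = np.linalg.norm(r)
        V[:, j + 1] = r / betas[j]

        # alpha_{j+1} u_{j+1} = A v_{j+1} - beta_j u_j
        p = A @ V[:, j + 1] - betas[j] * U[:, j]
        alphas[j + 1] = np.linalg.norm(p)
        U[:, j + 1] = p / alphas[j + 1]

    # Upper bidiagonal B_k: alphas on the diagonal, betas on the superdiagonal.
    B = np.diag(alphas) + np.diag(betas, 1)
    return U, B, V

# Rank-k approximation, with no SVD computed:
#   U, B, V = lanczos_bidiag_lowrank(A, k)
#   A_k = U @ B @ V.T
```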
[1] T. Kailath et al., "Fast Estimation of Principal Eigenspace Using Lanczos Algorithm," 1994.
[2] David A. Landgrebe et al., "Analyzing high-dimensional multispectral data," IEEE Trans. Geosci. Remote Sens., 1993.
[3] Alice J. O'Toole et al., "Low-dimensional representation of faces in higher dimensions of the face space," 1993.
[4] M. Turk et al., "Eigenfaces for Recognition," Journal of Cognitive Neuroscience, 1991.
[5] J. Cullum et al., "A Lanczos Algorithm for Computing Singular Values and Vectors of Large Matrices," 1983.
[6] Michael A. Saunders et al., "LSQR: An Algorithm for Sparse Linear Equations and Sparse Least Squares," ACM Trans. Math. Softw. (TOMS), 1982.
[7] H. Simon, "The Lanczos algorithm for solving symmetric linear systems," 1982.
[8] B. Parlett, "The Symmetric Eigenvalue Problem," 1981.
[9] C. Paige, "Error Analysis of the Lanczos Algorithm for Tridiagonalizing a Symmetric Matrix," 1976.
[10] H. Andrews et al., "Singular Value Decomposition (SVD) Image Coding," IEEE Trans. Commun., 1976.