On the Convergence of the Conjugate Gradient Method for Singular Linear Operator Equations
Let $T : X \to Y$ be a bounded linear operator between two Hilbert spaces X and Y, let $T^\dag$ denote the generalized inverse of T, and let $\Re(T)$ denote the range of T. If $\Re(T)$ is closed, then starting with any $x_0 \in X$, the conjugate gradient method, which minimizes the functional $f(x) = ||Tx - y||^2$, $y \in Y$, at each step, converges to a least squares solution of $Tx = y$, namely $T^\dag y + (I - P)x_0$, where $I - P$ denotes the orthogonal projection of X onto the null space of T. If $\Re(T)$ is not closed, then with $x_0 \in \Re(T^*T)$ and $Qy \in \Re(TT^*T)$, where Q is the orthogonal projection of Y onto $\overline{\Re(T)}$ (the orthogonal complement of the null space of the adjoint operator $T^*$), the conjugate gradient method converges to the least squares solution $T^\dag y$ of minimal norm. Bounds on the rate of convergence are given in both cases.
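In finite dimensions $\Re(T)$ is automatically closed, so the closed-range case can be checked numerically. The sketch below is a hypothetical illustration, not the paper's own algorithm: it assumes T is represented by a NumPy array `A`, uses the textbook CGLS recurrences (conjugate gradient applied to the normal equations $T^*Tx = T^*y$), and the function name `cgls` and the test matrix are invented for the example. Starting from $x_0 = 0$, so that $(I - P)x_0 = 0$, the iterates should converge to the minimal-norm least squares solution $T^\dag y$.

```python
import numpy as np

def cgls(A, y, x0, n_iter=50):
    """Conjugate gradient on the normal equations A^T A x = A^T y (CGLS).

    Each step minimizes f(x) = ||A x - y||^2 over the current Krylov
    subspace; with x0 in range(A^T), the limit is the minimal-norm
    least squares solution T† y.  Illustrative sketch only.
    """
    x = x0.astype(float)
    r = y - A @ x              # residual in Y
    s = A.T @ r                # gradient direction A^T r, lives in range(A^T)
    p = s.copy()
    gamma = s @ s
    for _ in range(n_iter):
        q = A @ p
        denom = q @ q
        if denom == 0.0:       # p in the null space of A: nothing left to do
            break
        alpha = gamma / denom
        x += alpha * p
        r -= alpha * q
        s = A.T @ r
        gamma_new = s @ s
        if gamma_new < 1e-30:  # normal-equations residual has vanished
            break
        p = s + (gamma_new / gamma) * p
        gamma = gamma_new
    return x

# Rank-deficient example (rank 3 by construction): A has a nontrivial
# null space, so there are infinitely many least squares solutions and
# T† y is the one of minimal norm.
rng = np.random.default_rng(0)
A = rng.standard_normal((8, 5)) @ np.diag([3.0, 1.0, 0.5, 0.0, 0.0]) @ rng.standard_normal((5, 5))
y = rng.standard_normal(8)

x_dag = np.linalg.pinv(A) @ y          # T† y via the pseudoinverse
x_cg = cgls(A, y, x0=np.zeros(5))      # x0 = 0 lies in range(A^T)

print(np.allclose(x_cg, x_dag))        # expected True, to numerical precision
```

For the closed-range case, bounds of the kind the abstract refers to are typically of the classical conjugate gradient form: if the spectrum of $T^*T$ restricted to $\overline{\Re(T^*)}$ lies in $[m, M]$ with $m > 0$, then $||x_k - \hat{x}|| \le C \left( (\sqrt{M} - \sqrt{m})/(\sqrt{M} + \sqrt{m}) \right)^k$ for some constant C; the paper's precise constants and norms may differ from this standard estimate.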