

Further Details: Error Bounds for the Singular Value Decomposition

The usual error analysis of the SVD algorithms xGESVD and xGESDD in LAPACK (see subsection 2.3.4) or the routines in LINPACK and EISPACK is as follows [25,55]:

The SVD algorithm is backward stable. This means that the computed SVD, $\hat{U} \hat{\Sigma} \hat{V}^T$, is nearly the exact SVD of $A+E$, where $\Vert E \Vert_2 / \Vert A \Vert_2 \leq p(m,n) \epsilon$ and $p(m,n)$ is a modestly growing function of $m$ and $n$. That is, $A+E = (\hat{U} + \delta \hat{U}) \hat{\Sigma} (\hat{V} + \delta \hat{V})^T$ is the true SVD, where $\hat{U} + \delta \hat{U}$ and $\hat{V} + \delta \hat{V}$ are both orthogonal, with $\Vert \delta \hat{U} \Vert \leq p(m,n) \epsilon$ and $\Vert \delta \hat{V} \Vert \leq p(m,n) \epsilon$. Each computed singular value $\hat{\sigma}_i$ differs from the true $\sigma_i$ by at most

\begin{displaymath}
\vert \hat{\sigma}_i - \sigma_i \vert \leq p(m,n) \cdot \epsilon \cdot \sigma_1
= {\tt SERRBD} \; ,
\end{displaymath}

(we take p(m,n)=1 in the code fragment). Thus large singular values (those near $\sigma_1$ ) are computed to high relative accuracy and small ones may not be.
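The code fragment referred to above does not appear in this extract. The following is a minimal sketch of such a fragment, in the style of the LAPACK examples, assuming a single precision m-by-n matrix A with $m \geq n$ and appropriately declared variables (A, S, U, VT, WORK, and the scalars are illustrative names). SGESVD returns the singular values in S in descending order, so ${\tt S(1)} = \sigma_1 = \Vert A \Vert_2$:

      EPSMCH = SLAMCH( 'E' )
*     Compute the singular values and (thin) singular vectors of A
      CALL SGESVD( 'S', 'S', M, N, A, LDA, S, U, LDU, VT, LDVT,
     $             WORK, LWORK, INFO )
*     Error bound for all computed singular values, taking p(m,n) = 1
      SERRBD = EPSMCH * S( 1 )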

There are two questions to ask about the computed singular vectors: ``Are they orthogonal?'' and ``How much do they differ from the true singular vectors?'' The answer to the first question is yes: the computed singular vectors are always nearly orthogonal to working precision, independent of how much they differ from the true singular vectors. In other words

\begin{displaymath}
\vert\hat{u}_i^T \hat{u}_j \vert = O( \epsilon )
\end{displaymath}

for $i \neq j$ .

Here is the answer to the second question about singular vectors. The angular difference between the computed left singular vector $\hat{u}_i$ and the true $u_i$ satisfies the approximate bound

\begin{displaymath}
\theta ( \hat{u}_i , u_i ) \mathrel{\raisebox{-.75ex}{$\mathop{\sim}\limits^{\textstyle <}$}}
\frac{p(m,n) \epsilon \Vert A \Vert_2}{{\rm gap}_i} = {\tt UERRBD}(i)
\end{displaymath}

where ${\rm gap}_i = \min_{j \neq i} \vert \sigma_i - \sigma_j \vert$ is the absolute gap between $\sigma_i$ and the nearest other singular value. We take p(m,n)=1 in the code fragment. Thus, if $\sigma_i$ is close to other singular values, its corresponding singular vector $u_i$ may be inaccurate. When n < m, ${\rm gap}_n$ must be redefined as $\min ( \min_{j \neq n} \vert \sigma_n - \sigma_j \vert , \; \sigma_n )$. The gaps may be easily computed from the array of computed singular values using the function SDISNA. The gaps computed by SDISNA are ensured not to be so small as to cause overflow when used as divisors. The same bound applies to the computed right singular vector $\hat{v}_i$ and the true $v_i$.
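Continuing the sketch above, the gaps and the resulting singular vector bounds might be computed as follows (RCONDU, RCONDV, UERRBD, and VERRBD are illustrative array names; SDISNA returns the gaps, adjusted to avoid overflow, in its SEP argument):

*     gap_i for the left and for the right singular vectors
      CALL SDISNA( 'Left', M, N, S, RCONDU, INFO )
      CALL SDISNA( 'Right', M, N, S, RCONDV, INFO )
*     Angular error bounds for the singular vectors, with p(m,n) = 1
      DO 10 I = 1, MIN( M, N )
         UERRBD( I ) = EPSMCH*( S( 1 ) / RCONDU( I ) )
         VERRBD( I ) = EPSMCH*( S( 1 ) / RCONDV( I ) )
   10 CONTINUE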

Let $\hat{\cal S}$ be the space spanned by a collection of computed left singular vectors $\{\hat{u}_i \, , \, i \in {\cal I}\}$ , where $\cal I$ is a subset of the integers from 1 to n. Let $\cal S$ be the corresponding true space. Then

\begin{displaymath}
\theta ( \hat{\cal S}, {\cal S} ) \mathrel{\raisebox{-.75ex}{$\mathop{\sim}\limits^{\textstyle <}$}}
\frac{p(m,n) \epsilon \Vert A \Vert_2}{{\rm gap}_{\cal I}}
\end{displaymath}

where

\begin{displaymath}
{\rm gap}_{\cal I} = \min_{i \in {\cal I} \atop j \not\in {\cal I}}
\vert \sigma_i - \sigma_j \vert
\end{displaymath}

is the absolute gap between the singular values in $\cal I$ and the nearest other singular value. Thus, a cluster of close singular values which is far away from any other singular value may have a well-determined space $\hat{\cal S}$ even if its individual singular vectors are ill-conditioned. The same bound applies to a set of right singular vectors $\{\hat{v}_i \, , \, i \in {\cal I}\}$.
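Because the computed singular values are sorted in descending order, a cluster is a contiguous block S(IL:IU), and ${\rm gap}_{\cal I}$ reduces to the distance to the neighbor on either side of the block. A hypothetical fragment, continuing the sketch above (IL, IU, NS = min(m,n), GAPI, and SERRSP are illustrative names, not part of LAPACK):

*     Absolute gap between the cluster S(IL:IU) and the other
*     singular values; start from the overflow threshold so that
*     MIN picks up whichever neighbors exist
      GAPI = SLAMCH( 'O' )
      IF( IL.GT.1 )
     $   GAPI = MIN( GAPI, S( IL-1 ) - S( IL ) )
      IF( IU.LT.NS )
     $   GAPI = MIN( GAPI, S( IU ) - S( IU+1 ) )
*     Bound on the angle between the computed and true subspaces
      SERRSP = EPSMCH*( S( 1 ) / GAPI )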

In the special case of bidiagonal matrices, the singular values and singular vectors may be computed much more accurately. A bidiagonal matrix B has nonzero entries only on the main diagonal and the diagonal immediately above it (or immediately below it). xGESVD computes the SVD of a general matrix by first reducing it to bidiagonal form B, and then calling xBDSQR (subsection 2.4.6) to compute the SVD of B. xGESDD is similar, but calls xBDSDC to compute the SVD of B. Reduction of a dense matrix to bidiagonal form B can introduce additional errors, so the following bounds for the bidiagonal case do not apply to the dense case.

Each computed singular value of a bidiagonal matrix is accurate to nearly full relative accuracy, no matter how tiny it is:

\begin{displaymath}
\vert \hat{\sigma}_i - \sigma_i \vert \leq p(m,n) \cdot \epsilon \cdot \sigma_i.
\end{displaymath}

The following bounds apply only to xBDSQR. The computed left singular vector $\hat{u}_i$ has an angular error at most about

\begin{displaymath}
\theta ( \hat{u}_i , u_i ) \mathrel{\raisebox{-.75ex}{$\mathop{\sim}\limits^{\textstyle <}$}}
\frac{p(m,n) \epsilon}{{\rm relgap}_i}
\end{displaymath}

where ${\rm relgap}_i = \min_{j \neq i} \vert \sigma_i - \sigma_j \vert / ( \sigma_i + \sigma_j )$ is the relative gap between $\sigma_i$ and the nearest other singular value. The same bound applies to the right singular vector $\hat{v}_i$ and $v_i$. Since the relative gap may be much larger than the absolute gap, this error bound may be much smaller than the previous one. The relative gaps may be easily computed from the array of computed singular values.
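Since the array S is sorted in descending order, only adjacent entries need to be compared to find each relative gap. An illustrative fragment in the same style (RELGAP and NS = min(m,n) are assumed names, and the singular values are assumed nonzero where divided):

*     Relative gaps from the descending array S(1:NS)
      DO 20 I = 1, NS
         RELGAP( I ) = SLAMCH( 'O' )
         IF( I.GT.1 )
     $      RELGAP( I ) = MIN( RELGAP( I ),
     $         ( S( I-1 ) - S( I ) ) / ( S( I-1 ) + S( I ) ) )
         IF( I.LT.NS )
     $      RELGAP( I ) = MIN( RELGAP( I ),
     $         ( S( I ) - S( I+1 ) ) / ( S( I ) + S( I+1 ) ) )
   20 CONTINUE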

In the very special case of 2-by-2 bidiagonal matrices, xBDSQR and xBDSDC call auxiliary routine xLASV2 to compute the SVD. xLASV2 will actually compute nearly correctly rounded singular vectors independent of the relative gap, but this requires accurate computer arithmetic: if leading digits cancel during floating-point subtraction, the resulting difference must be exact. On machines without guard digits one has the slightly weaker result that the algorithm is componentwise relatively backward stable, and therefore the accuracy of the singular vectors depends on the relative gap as described above.

Jacobi's method [34,99,91] is another algorithm for finding singular values and singular vectors of matrices. It is slower than the algorithms based on first bidiagonalizing the matrix, but is capable of computing more accurate answers in several important cases.

