
Shift and Invert Spectral Transformation Mode

The most general problem that may be solved with ARPACK is to compute a few selected eigenvalues $\lambda$ and corresponding eigenvectors ${\bf x}$ for

$${\bf A}{\bf x} = \lambda {\bf M}{\bf x},$$

where ${\bf A}$ and ${\bf M}$ are real or complex $n \times n$ matrices.

The shift and invert spectral transformation is used to enhance convergence to a desired portion of the spectrum. If $({\bf x},\lambda)$ is an eigenpair for $({\bf A},{\bf M})$ and $\sigma \neq \lambda$, then

$$({\bf A} - \sigma {\bf M})^{-1}{\bf M}{\bf x} = \nu {\bf x}, \quad \mbox{where} \quad \nu = \frac{1}{\lambda - \sigma}.$$

This transformation is effective for finding eigenvalues near $\sigma$ since the nev eigenvalues $\nu_j$ of ${\rm OP} \equiv ({\bf A} - \sigma {\bf M})^{-1}{\bf M}$ that are largest in magnitude correspond to the nev eigenvalues $\lambda_j$ of the original problem that are nearest to the shift $\sigma$ in absolute value. These transformed eigenvalues of largest magnitude are precisely the eigenvalues that are easy to compute with a Krylov method. Once they are found, they may be transformed back to eigenvalues of the original problem. The direct relation is

$$\lambda_j = \sigma + \frac{1}{\nu_j},$$

and the eigenvector ${\bf x}_j$ associated with $\nu_j$ in the transformed problem is also a (generalized) eigenvector of the original problem corresponding to $\lambda_j$. Usually, the IRAM will rapidly obtain good approximations to the eigenvalues of ${\rm OP}$ of largest magnitude. However, to implement this transformation, one must provide a means to solve linear systems involving ${\bf A} - \sigma {\bf M}$, either with a matrix factorization or with an iterative method.
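
As a concrete check of this relation, the following sketch verifies numerically that the eigenvalues $\nu_j$ of ${\rm OP}$ map back to the generalized eigenvalues through $\lambda_j = \sigma + 1/\nu_j$. It uses dense NumPy/SciPy routines rather than ARPACK itself, and the matrices and the shift are arbitrary illustrative choices:

import numpy as np
from scipy.linalg import eig, solve

n, sigma = 6, 2.5
rng = np.random.default_rng(0)
A = rng.standard_normal((n, n)); A = A + A.T                  # Hermitian A
M = rng.standard_normal((n, n)); M = M @ M.T + n * np.eye(n)  # HPD M

# Eigenvalues nu of OP = (A - sigma*M)^{-1} M ...
nu = eig(solve(A - sigma * M, M))[0]
# ... mapped back to eigenvalues of A x = lambda M x via lambda = sigma + 1/nu
lam_from_nu = np.sort((sigma + 1.0 / nu).real)
lam_direct = np.sort(eig(A, M)[0].real)
print(np.allclose(lam_from_nu, lam_direct))                   # prints True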

In general, ${\rm OP} = ({\bf A} - \sigma {\bf M})^{-1}{\bf M}$ will be non-Hermitian even if ${\bf A}$ and ${\bf M}$ are both Hermitian. However, this is easily remedied. The assumption that ${\bf M}$ is Hermitian positive definite implies that the bilinear form

$$\langle {\bf x}, {\bf y} \rangle \equiv {\bf x}^H {\bf M} {\bf y}$$

is an inner product. If ${\bf M}$ is positive semi-definite and singular, then a semi-inner product results. We call this a weighted ${\bf M}$-inner product, and vectors ${\bf x}, {\bf y}$ are called ${\bf M}$-orthogonal if ${\bf x}^H {\bf M} {\bf y} = 0$. It is easy to show that if ${\bf A}$ is Hermitian (self-adjoint), then ${\rm OP}$ is Hermitian (self-adjoint) with respect to this ${\bf M}$-inner product (meaning $\langle {\rm OP}\,{\bf x}, {\bf y} \rangle = \langle {\bf x}, {\rm OP}\,{\bf y} \rangle$ for all vectors ${\bf x}, {\bf y}$). Therefore, symmetry will be preserved if we force the computed basis vectors to be orthogonal in this ${\bf M}$-inner product. Implementing this ${\bf M}$-orthogonality requires the user to provide a matrix-vector product ${\bf w} \leftarrow {\bf M}{\bf v}$ on request along with each application of $({\bf A} - \sigma {\bf M})^{-1}$. In the following sections we shall discuss some of the more familiar transformations to the standard eigenproblem. However, when ${\bf M}$ is positive (semi-) definite, we recommend using the shift-invert spectral transformation with ${\bf M}$-inner products if at all possible. This is a far more robust transformation when ${\bf M}$ is ill-conditioned or singular. With a little extra manipulation (provided automatically in __eupd), the (semi-) inner product induced by ${\bf M}$ prevents corruption of the computed basis vectors by roundoff error associated with the presence of infinite eigenvalues. These very ill-conditioned eigenvalues are generally associated with a singular or highly ill-conditioned ${\bf M}$. A detailed discussion of this theory may be found in Chapter 4.
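
This self-adjointness is easy to observe numerically. A minimal sketch, reusing the dense matrices of the previous snippet (again plain NumPy/SciPy, with a hypothetical helper m_inner; none of this is part of ARPACK's interface):

import numpy as np
from scipy.linalg import solve

def m_inner(x, y, M):
    # Weighted M-inner product <x, y> = x^H M y
    return x.conj() @ (M @ y)

n, sigma = 6, 2.5
rng = np.random.default_rng(0)
A = rng.standard_normal((n, n)); A = A + A.T                  # Hermitian A
M = rng.standard_normal((n, n)); M = M @ M.T + n * np.eye(n)  # HPD M
OP = solve(A - sigma * M, M)     # dense stand-in for (A - sigma*M)^{-1} M

x, y = rng.standard_normal(n), rng.standard_normal(n)
# Self-adjointness in the M-inner product: <OP x, y> == <x, OP y>
print(np.isclose(m_inner(OP @ x, y, M), m_inner(x, OP @ y, M)))  # prints True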

Shift-invert spectral transformations are very effective and should even be used on standard problems (${\bf M} = {\bf I}$) whenever possible. This is particularly true when interior eigenvalues are sought or when the desired eigenvalues are clustered. Roughly speaking, a set of eigenvalues is clustered if the maximum distance between any two eigenvalues in that set is much smaller than the maximum distance between any two eigenvalues of ${\bf A}$.
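
For example, SciPy's eigsh routine, which drives ARPACK's symmetric shift-invert mode, can be asked for interior eigenvalues of a standard problem directly. A hedged sketch, where the 1-D Laplacian is merely a convenient test matrix, not an example from this guide:

from scipy.sparse import diags
from scipy.sparse.linalg import eigsh

n = 500
L = diags([-1, 2, -1], [-1, 0, 1], shape=(n, n), format='csc')

# ARPACK iterates with OP = (L - sigma*I)^{-1}, finds the nu_j of largest
# magnitude, and maps them back to the lambda_j nearest the shift.
sigma = 2.0
vals, vecs = eigsh(L, k=4, sigma=sigma, which='LM')
print(vals)   # the four eigenvalues of L closest to 2.0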

If one has a generalized problem (${\bf M} \neq {\bf I}$), then one must provide a way to solve linear systems with either ${\bf A}$, ${\bf M}$, or a linear combination of the two matrices in order to use ARPACK. In this case, a sparse direct method should be used to factor the appropriate matrix whenever possible. Once obtained, the factorization may be used repeatedly to solve the required linear systems. If an iterative method is used for the linear system solves, the accuracy of the solutions must be commensurate with the convergence tolerance used for ARPACK. A slightly more stringent tolerance is needed for the iterative linear system solves (relative to the desired accuracy of the eigenvalue calculation). See [18,32,30,40] for further information and references.
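
This "factor once, solve repeatedly" pattern can be sketched with SciPy's sparse LU factorization handed to ARPACK through the OPinv argument of eigsh. The matrices below are illustrative assumptions, with a trivially Hermitian positive definite ${\bf M}$:

from scipy.sparse import diags, identity
from scipy.sparse.linalg import eigsh, splu, LinearOperator

n, sigma = 500, 1.0
A = diags([-1, 2, -1], [-1, 0, 1], shape=(n, n), format='csc')
M = 2.0 * identity(n, format='csc')       # illustrative HPD mass matrix

# One sparse direct factorization of A - sigma*M ...
lu = splu((A - sigma * M).tocsc())
# ... reused for every linear solve that ARPACK requests
OPinv = LinearOperator((n, n), matvec=lu.solve)

vals, vecs = eigsh(A, k=4, M=M, sigma=sigma, which='LM', OPinv=OPinv)
print(vals)   # eigenvalues of A x = lambda M x nearest sigma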

The main drawback of the shift-invert spectral transformation is that the coefficient matrix ${\bf A} - \sigma {\bf M}$ is typically indefinite in the Hermitian case and has zero in the interior of the convex hull of the spectrum in the non-Hermitian case. These are typically the most difficult situations for iterative methods and also for sparse direct methods.

The decision to use a spectral transformation on a standard eigenvalue problem (${\bf M} = {\bf I}$) or to use one of the simple modes described in Chapter 2 is problem dependent. The simple modes have the advantage that one only need supply a matrix-vector product ${\bf w}\leftarrow{\bf A}{\bf v}$. However, this approach is usually only successful for problems where extremal non-clustered eigenvalues are sought. In non-Hermitian problems, extremal means eigenvalues near the boundary of the convex hull of the spectrum of ${\bf A}$. For Hermitian problems, extremal means eigenvalues at the left or right end points of the spectrum of ${\bf A}$. The notion of non-clustered (or well separated) is difficult to define without going into considerable detail. A simplistic notion of a well-separated eigenvalue $\lambda_j$ for a Hermitian problem would be $\vert \lambda_j - \lambda_i \vert \geq \tau \Vert {\bf A} \Vert$ for all $i \neq j$, with $\tau$ much larger than machine precision. Unless a matrix-vector product is quite difficult to code or extremely expensive computationally, it is probably worth trying the simple mode first if you are seeking extremal eigenvalues.
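
By contrast, a simple-mode call needs nothing beyond the matrix-vector product ${\bf w}\leftarrow{\bf A}{\bf v}$, which may even be supplied matrix-free. A sketch via SciPy's wrapper, reusing the same illustrative stencil as above:

import numpy as np
from scipy.sparse.linalg import eigsh, LinearOperator

n = 500

def matvec(v):
    # w = A v for the tridiagonal stencil [-1, 2, -1]; no matrix is stored
    w = 2.0 * v
    w[:-1] -= v[1:]
    w[1:] -= v[:-1]
    return w

A = LinearOperator((n, n), matvec=matvec, dtype=np.float64)
# Extremal (here: rightmost) eigenvalues converge without any factorization
vals = eigsh(A, k=4, which='LA', return_eigenvectors=False)
print(vals)   # the four largest eigenvalues of A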

The remainder of this section discusses additional transformations that may be applied to convert a generalized eigenproblem to a standard eigenproblem. These are appropriate when ${\bf M}$ is well conditioned (Hermitian or non-Hermitian).


