Ability to return k eigenvalues which satisfy a user-specified
criterion such as largest real part, largest absolute value,
largest algebraic value (symmetric case), etc.
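As an illustration (not part of ARPACK itself), the following sketch uses SciPy's
\texttt{eigs}/\texttt{eigsh} front ends to ARPACK to request $k$ eigenvalues under
several such criteria; the tridiagonal test matrix is arbitrary.
\begin{verbatim}
import scipy.sparse as sp
from scipy.sparse.linalg import eigs, eigsh

n = 500
# Nonsymmetric convection-diffusion test matrix (illustrative only).
A = sp.diags([-1.2, 2.0, -0.8], [-1, 0, 1], shape=(n, n), format='csr')

vals_lr, _ = eigs(A, k=6, which='LR')    # largest real part
vals_lm, _ = eigs(A, k=6, which='LM')    # largest absolute value

S = (A + A.T) / 2                        # symmetric test matrix
vals_la, _ = eigsh(S, k=6, which='LA')   # largest algebraic (symmetric case)
\end{verbatim}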
For many standard problems, the action of the matrix on a
vector is all that is needed.
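For instance (a minimal sketch, via SciPy's wrapper rather than ARPACK's own
reverse-communication interface), only the action $y \leftarrow A x$ is supplied and
the matrix is never formed; the 1-D discrete Laplacian is used as the example operator.
\begin{verbatim}
from scipy.sparse.linalg import LinearOperator, eigsh

n = 2000

def matvec(x):
    # Action of the 1-D discrete Laplacian; A itself is never assembled.
    y = 2.0 * x
    y[:-1] -= x[1:]
    y[1:] -= x[:-1]
    return y

A_op = LinearOperator((n, n), matvec=matvec, dtype=float)
vals, vecs = eigsh(A_op, k=4, which='LA')   # 4 algebraically largest eigenvalues
\end{verbatim}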
A fixed pre-determined storage requirement suffices throughout
the computation. Usually this is $n \cdot O(k) + O(k^2)$ where
k is the number of eigenvalues to be computed and n is the order
of the matrix. No auxiliary storage or interaction with such devices
is required during the course of the computation.
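As a rough, purely illustrative estimate (assuming double precision and an Arnoldi
basis of ncv vectors, where ncv is a small multiple of $k$), the dominant storage is
\[
  n\cdot\mathrm{ncv} + O(\mathrm{ncv}^{2}) \ \mbox{doubles};
  \qquad \mbox{e.g.}\ n = 10^{6},\ \mathrm{ncv} = 20
  \ \Longrightarrow\ 2\times 10^{7}\ \mbox{doubles} \approx 160\ \mbox{MB}.
\]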
Sample driver routines are included that may be used as
templates to implement various spectral transformations
to enhance convergence and to solve the generalized
eigenvalue problem.
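For example, the shift-invert spectral transformation exercised by some of those drivers
can be sketched as follows (illustrative matrix and shift, via SciPy's ARPACK wrapper):
\begin{verbatim}
import scipy.sparse as sp
from scipy.sparse.linalg import eigs

n = 1000
A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format='csc')

# Shift-invert: the eigenvalues closest to sigma are returned.  Internally
# the action of (A - sigma*I)^{-1} is applied, which accelerates
# convergence to interior or clustered eigenvalues.
vals, vecs = eigs(A, k=5, sigma=0.5, which='LM')
\end{verbatim}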
Special consideration is given to the generalized
problem $A\mathbf{x} = \lambda M\mathbf{x}$ for singular or ill-conditioned
symmetric positive semi-definite $M$.
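A minimal sketch of the generalized symmetric problem in shift-invert mode is given below
(illustrative stiffness/mass pair; here $M$ happens to be definite).  Note that only
$A - \sigma M$ is factored, not $M$ itself.
\begin{verbatim}
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh

n = 500
A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format='csc')  # "stiffness"
M = sp.diags([1/6, 2/3, 1/6], [-1, 0, 1], shape=(n, n), format='csc')    # "mass"

# Generalized shift-invert: (A - sigma*M) is factored rather than M,
# so M need not be inverted even when it is ill-conditioned.
vals, vecs = eigsh(A, k=5, M=M, sigma=0.0, which='LM')
\end{verbatim}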
Eigenvectors and/or Schur vectors may be computed on request.
A Schur basis of dimension k is always computed.
The Schur basis consists of vectors which are numerically
orthogonal to working accuracy. Computed eigenvectors of symmetric
matrices are also numerically orthogonal.
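For instance, the numerical orthogonality of a computed symmetric eigenvector basis can
be checked directly (illustrative check with NumPy/SciPy):
\begin{verbatim}
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh

n, k = 800, 6
A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format='csr')
vals, V = eigsh(A, k=k, which='LA')

# Departure from orthonormality; expected to be at the level of
# working precision.
print(np.linalg.norm(V.T @ V - np.eye(k)))
\end{verbatim}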
The numerical accuracy of the computed eigenvalues
and vectors is user specified. Residual tolerances may be set to the
level of working precision. At working precision, the accuracy
of the computed eigenvalues and vectors is consistent with
the accuracy expected of a dense method such as the implicitly
shifted QR iteration.
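A sketch of a user-specified tolerance together with an a-posteriori residual check
(in SciPy's wrapper the \texttt{tol} argument maps to ARPACK's convergence tolerance,
and \texttt{tol=0} requests working precision):
\begin{verbatim}
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh

n = 800
A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format='csr')

vals, V = eigsh(A, k=6, which='LA', tol=1e-10)

# Residual norms ||A v - lambda v|| for the converged Ritz pairs.
res = np.linalg.norm(A @ V - V * vals, axis=0)
print(res.max())
\end{verbatim}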
Multiple eigenvalues offer no theoretical difficulty.
This is possible through deflation techniques similar to those
used with the implicitly shifted QR algorithm for dense problems.
With the current deflation rules, a fairly tight convergence tolerance
and a sufficiently large subspace will be required to capture
all instances of a multiple eigenvalue. However, since a block method is not used,
there is no need to ``guess" the correct block size that
would be needed to capture multiple eigenvalues.
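As an illustration of this behaviour, the 2-D discrete Laplacian (whose spectrum contains
many double eigenvalues because of grid symmetry) can be used as a test case; the tolerance
and ncv values below are illustrative and, as noted above, help but do not guarantee that
every copy is captured.
\begin{verbatim}
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh

# 2-D Laplacian on an m-by-m grid; grid symmetry produces many
# eigenvalues of multiplicity two.
m = 30
T = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(m, m))
A = sp.kron(sp.identity(m), T) + sp.kron(T, sp.identity(m))

# A tight tolerance and a generous subspace dimension (ncv) help the
# deflation mechanism return every copy of a multiple eigenvalue.
vals, V = eigsh(A, k=8, which='LA', tol=1e-12, ncv=40)
print(np.round(vals, 8))   # near-equal pairs should be visible
\end{verbatim}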