
Setting up the problem

To set up the problem, the user needs to specify the number of eigenvalues to compute, which eigenvalues are of interest, the number of basis vectors to use, and whether the problem is standard or generalized. These items are controlled with the parameters listed in Table 2.2.

The simple codes described in this chapter are set up to solve the standard eigenvalue problem using only matrix-vector products ${\bf w}\leftarrow{\bf A}{\bf v}$. Generalized eigenvalue problems require selection of another mode; these are addressed in Chapter 3. The value of ncv must be at least nev + 1. The options available for which include `LA' and `SA' for the algebraically largest and smallest eigenvalues, `LM' and `SM' for the eigenvalues of largest or smallest magnitude, and `BE' for the simultaneous computation of the eigenvalues at both ends of the spectrum. For a given problem, some of these options may converge more rapidly than others due to the approximation properties of the IRLM as well as the distribution of the eigenvalues of ${\bf A}$. Convergence behavior can be quite different for the various settings of the which parameter. For example, if the matrix is indefinite, then setting which = `SM' requires interior eigenvalues to be computed, and the Lanczos process may take many steps before these are resolved.
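
As an illustration, a minimal sketch of how these choices might appear in a simple Fortran driver is shown below. The values are hypothetical and chosen only for illustration; in the dsaupd calling sequence the standard versus generalized choice is made through the bmat parameter (`I' for a standard problem), and which, nev, and ncv are ordinary variables set before the reverse communication loop is entered.

      integer          n, nev, ncv
      character        bmat*1, which*2
c
c     Illustrative settings (not taken from the distributed driver):
c     a standard symmetric problem of dimension n, requesting the
c     nev = 4 eigenvalues of largest magnitude with ncv = 20 Lanczos
c     basis vectors.
      n     = 256
      nev   = 4
      ncv   = 20
      bmat  = 'I'
      which = 'LM'

Requesting `SA', `LA', or `BE' instead of `LM' changes only the which setting; the rest of the setup is unchanged.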

For a given ncv, the computational work required is proportional to $n \cdot {\tt ncv}^{2}$ FLOPS. Setting nev and ncv for optimal performance is very much problem dependent. If possible, it is best to avoid setting nev in a way that will split clusters of eigenvalues. For example, if the five smallest eigenvalues are positive and on the order of $10^{-4}$ and the sixth smallest eigenvalue is on the order of $10^{-1}$, then it is probably better to ask for nev = 5 than for nev = 3, even if the three smallest are the only ones of interest.

Setting the optimal value of ncv relative to nev is not completely understood. As with the choice of which, it depends upon the underlying approximation properties of the IRLM as well as the distribution of the eigenvalues of ${\bf A}$. As a rule of thumb, ${\tt ncv} \geq 2 \cdot {\tt nev}$ is reasonable. There are tradeoffs among the cost of the user-supplied matrix-vector products, the cost of the implicit restart mechanism, and the cost of maintaining the orthogonality of the Lanczos vectors. If the user-supplied matrix-vector product is relatively cheap, then a smaller value of ncv may lead to more matrix-vector products but an overall decrease in computation time. Chapter 4 will discuss these issues in more detail.
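
The rule of thumb above can be expressed directly in the driver. The sketch below is illustrative only and reuses the variable names from the earlier fragment; it simply caps ncv at the problem dimension, keeping in mind that dsaupd requires ncv to be greater than nev and no larger than n.

c     Rule of thumb: use roughly twice as many Lanczos basis vectors
c     as requested eigenvalues, but never more than the problem
c     dimension n.
      ncv = min(2*nev, n)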

