# Jacobi-Davidson example

Let's take a look at how the Jacobi-Davidson algorithm targets interior eigenvalues of a Poisson matrix.

June 11, 2017 - 4 minute read

I’ve put Jacobi-Davidson to the test on the simple Poisson matrix from the previous post: it is $100 \times 100$ and tridiagonal, with the stencil $\begin{bmatrix}-1 & 2 & -1\end{bmatrix}$ on its three diagonals. Its eigenvalues are all real and lie in the interval $(0, 4)$.
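As a quick NumPy check (this snippet is my illustration, not code from the post), the eigenvalues of this matrix are even known in closed form, and they indeed all lie strictly between 0 and 4:

```python
import numpy as np

# Tridiagonal Poisson matrix: 2 on the diagonal, -1 on the off-diagonals.
n = 100
A = (2.0 * np.eye(n)
     - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1))

# Closed-form eigenvalues: lambda_j = 4 sin^2(j pi / (2(n + 1))), j = 1..n.
# Both this formula and np.linalg.eigvalsh produce them in ascending order.
j = np.arange(1, n + 1)
analytic = 4.0 * np.sin(j * np.pi / (2.0 * (n + 1))) ** 2

computed = np.linalg.eigvalsh(A)
assert np.allclose(analytic, computed)
assert computed.min() > 0.0 and computed.max() < 4.0
```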

## Using the exact solver

When solving the correction equation with the “exact” solver, we should be able to see quadratic convergence. I picked $m_{\min} = 10$ and $m_{\max} = 15$ as the minimum and maximum dimension of the search space. The target is $\tau = 3$, and we’re after the 10 eigenvalues closest to it. The API for this is currently quite simple.
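To make the fast convergence concrete, here is a minimal single-vector Jacobi-Davidson iteration in NumPy with the correction equation solved exactly. This is a sketch of the idea only, not the package's implementation; only the matrix, the target $\tau = 3$, and the notion of an expanding search space come from the text.

```python
import numpy as np

n, tau, tol = 100, 3.0, 1e-10
A = (2.0 * np.eye(n)
     - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1))

rng = np.random.default_rng(0)
v = rng.standard_normal(n)
V = (v / np.linalg.norm(v)).reshape(n, 1)   # search space, one column to start

for _ in range(30):
    # Rayleigh-Ritz: project A onto the search space.
    theta_all, S = np.linalg.eigh(V.T @ A @ V)
    i = np.argmin(np.abs(theta_all - tau))  # Ritz value closest to the target
    theta, u = theta_all[i], V @ S[:, i]
    r = A @ u - theta * u                   # eigenpair residual
    if np.linalg.norm(r) < tol:
        break
    # Exact solution of the correction equation
    #   (I - uu')(A - theta I)(I - uu') t = -r,  t ⟂ u,
    # which works out to t = -u + z / (u'z) with z = (A - theta I)^{-1} u.
    z = np.linalg.solve(A - theta * np.eye(n), u)
    t = -u + z / (u @ z)
    # Orthogonalize against V (twice, for numerical safety) and expand.
    for _pass in range(2):
        t -= V @ (V.T @ t)
    t /= np.linalg.norm(t)
    V = np.hstack([V, t.reshape(n, 1)])

# theta now approximates an eigenvalue near the target, u its eigenvector.
```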

A sanity check is to see whether the partial Schur decomposition $AQ = QR$ is satisfied.
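Since the post's own code is not shown here, this is what the check itself looks like, with SciPy's shift-invert eigensolver standing in for Jacobi-Davidson. For a symmetric matrix the partial Schur form reduces to orthonormal eigenvectors $Q$ and a diagonal $R$:

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import eigsh

n, tau = 100, 3.0
A = diags([-1.0, 2.0, -1.0], offsets=[-1, 0, 1], shape=(n, n)).tocsc()

# Stand-in for the solver: the 10 eigenvalues closest to tau via shift-invert.
vals, Q = eigsh(A, k=10, sigma=tau)

# A is symmetric, so the partial Schur form AQ = QR has R = diag(vals).
R = np.diag(vals)
schur_residual = np.linalg.norm(A @ Q - Q @ R)
orthonormality = np.linalg.norm(Q.T @ Q - np.eye(10))
```

Both `schur_residual` and `orthonormality` should come out near machine precision.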

That seems reasonable for our prescribed tolerance. Next we look at the convergence history.

A couple of things to note from the convergence plot: after 15 iterations the search space becomes too big, and clearly only the best approximate Ritz values are retained at restart. Another thing is that as soon as the first Ritz value converges, the others converge very quickly as well. This is the main reason why we want a search space in the first place.
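The restart step itself is easy to sketch: once the search space reaches $m_{\max}$ columns, keep only the $m_{\min}$ Ritz vectors whose Ritz values are closest to the target. This standalone NumPy illustration (not the package's code) uses a random orthonormal basis in place of a real search space:

```python
import numpy as np

n, m_min, m_max, tau = 100, 10, 15, 3.0
A = (2.0 * np.eye(n)
     - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1))

# Some orthonormal search space of maximum dimension m_max.
rng = np.random.default_rng(1)
V, _ = np.linalg.qr(rng.standard_normal((n, m_max)))

# Rayleigh-Ritz, then keep the m_min Ritz vectors closest to the target.
theta, S = np.linalg.eigh(V.T @ A @ V)
keep = np.argsort(np.abs(theta - tau))[:m_min]
V_restart = V @ S[:, keep]                  # new, smaller search space

# The restarted basis is still orthonormal because S has orthonormal columns.
orth_err = np.linalg.norm(V_restart.T @ V_restart - np.eye(m_min))
```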

## Using a couple steps of GMRES

Using the exact solver for the correction equation is basically cheating; the point of Jacobi-Davidson is that it can target interior eigenvalues without solving a linear system exactly at each iteration.

What we can do instead is use just 7 steps of GMRES without preconditioning, and see what happens.
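Such an inexact solve can be sketched as follows: a few steps of GMRES applied to the projected operator $(I - uu^*)(A - \theta I)(I - uu^*)$ for a given Ritz pair. To keep the example dependency-free I hand-roll a $k$-step GMRES, and the approximate eigenpair is manufactured from NumPy's eigensolver plus noise purely for illustration:

```python
import numpy as np

def gmres_k(apply_op, b, k):
    """k steps of plain (unrestarted, unpreconditioned) GMRES from x0 = 0."""
    n = b.size
    Q = np.zeros((n, k + 1))
    H = np.zeros((k + 1, k))
    beta = np.linalg.norm(b)
    Q[:, 0] = b / beta
    steps = k
    for j in range(k):                       # Arnoldi process
        w = apply_op(Q[:, j])
        for i in range(j + 1):
            H[i, j] = Q[:, i] @ w
            w -= H[i, j] * Q[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] < 1e-12:              # happy breakdown
            steps = j + 1
            break
        Q[:, j + 1] = w / H[j + 1, j]
    # Least-squares solve of the small Hessenberg system.
    e1 = np.zeros(steps + 1)
    e1[0] = beta
    y, *_ = np.linalg.lstsq(H[:steps + 1, :steps], e1, rcond=None)
    return Q[:, :steps] @ y

n, tau = 100, 3.0
A = (2.0 * np.eye(n)
     - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1))

# An approximate eigenpair near the target, perturbed for illustration.
rng = np.random.default_rng(2)
vals, vecs = np.linalg.eigh(A)
u = vecs[:, np.argmin(np.abs(vals - tau))] + 1e-2 * rng.standard_normal(n)
u /= np.linalg.norm(u)
theta = u @ A @ u                            # Rayleigh quotient
r = A @ u - theta * u                        # eigenpair residual (r ⟂ u)

# Projected correction operator (I - uu')(A - theta I)(I - uu').
def corr(x):
    x = x - u * (u @ x)
    y = A @ x - theta * x
    return y - u * (u @ y)

t = gmres_k(corr, -r, 7)                     # 7 GMRES steps, as in the text
```

Because the right-hand side and the projected operator both live in the orthogonal complement of $u$, the computed correction `t` stays orthogonal to `u`, and GMRES guarantees the correction-equation residual shrinks.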

Again we do the same sanity check on the partial Schur decomposition.

And we inspect the convergence plot.

Now we need many more iterations before a Ritz value converges, but the convergence is quite monotonic and not hindered by restarts, which is great. Also, it’s clear that once the first Ritz value has converged, the others follow quickly as well.

The convergence could probably be improved by adding a preconditioner, but that is something I have yet to implement.

## Loss of orthogonality

It seems that during the iterations a single pass of modified Gram-Schmidt is not always enough to guarantee orthogonality of the matrices $V$ and $W$. A solution is to apply Gram-Schmidt twice, or to use iterative refinement. This seems rather costly, but it may be necessary.
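The effect is easy to reproduce in isolation. This NumPy snippet (an illustration, not the package's code) orthogonalizes a vector that lies almost entirely in the span of an orthonormal basis, which is exactly the hard case: one modified Gram-Schmidt pass leaves a clearly non-orthogonal result, while a second pass fixes it.

```python
import numpy as np

def mgs(V, w, passes=1):
    """Orthogonalize w against the orthonormal columns of V
    using `passes` sweeps of modified Gram-Schmidt, then normalize."""
    for _ in range(passes):
        for i in range(V.shape[1]):
            w = w - (V[:, i] @ w) * V[:, i]
    return w / np.linalg.norm(w)

rng = np.random.default_rng(3)
V, _ = np.linalg.qr(rng.standard_normal((100, 10)))

# A vector lying almost entirely in span(V): the worst case for MGS.
w = V @ rng.standard_normal(10) + 1e-10 * rng.standard_normal(100)

err_one = np.linalg.norm(V.T @ mgs(V, w, passes=1))
err_two = np.linalg.norm(V.T @ mgs(V, w, passes=2))
# err_one is far from machine precision; err_two is not.
```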