## Initial search space

We could initialize the search space $\mathcal{V}$ with a single random vector $v_0$; extract an approximate eigenvector from $\mathcal{V}$; solve the correction equation approximately for $t$; expand the search space by $t$; and repeat the process.
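This basic loop can be sketched as follows. The sketch assumes $A$ symmetric, uses a dense least-squares solve for the correction equation purely for illustration (in practice one would use a few steps of an iterative solver), and the function name `jacobi_davidson` is mine:

```python
import numpy as np

def jacobi_davidson(A, tau, v0, tol=1e-8, max_iter=100):
    """Sketch of the basic Jacobi-Davidson loop for symmetric A,
    targeting the eigenvalue closest to tau."""
    n = A.shape[0]
    I = np.eye(n)
    V = (v0 / np.linalg.norm(v0)).reshape(n, 1)
    for _ in range(max_iter):
        # Rayleigh-Ritz extraction from the search space V
        thetas, S = np.linalg.eigh(V.T @ A @ V)
        k = np.argmin(np.abs(thetas - tau))   # Ritz value closest to target
        theta = thetas[k]
        u = V @ S[:, k]                       # corresponding Ritz vector
        r = A @ u - theta * u                 # residual
        if np.linalg.norm(r) < tol:
            return theta, u
        # Correction equation: (I - u u^T)(A - theta I)(I - u u^T) t = -r,
        # with t orthogonal to u. Solved here with a dense least-squares
        # call for illustration only.
        P = I - np.outer(u, u)
        t, *_ = np.linalg.lstsq(P @ (A - theta * I) @ P, -r, rcond=None)
        # Orthogonalize t against V (twice, for numerical stability)
        t -= V @ (V.T @ t)
        t -= V @ (V.T @ t)
        nt = np.linalg.norm(t)
        if nt < 1e-12:                        # search space cannot grow further
            break
        V = np.hstack([V, (t / nt).reshape(n, 1)])  # expand the search space
    return theta, u
```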

Clearly this makes little sense at the start. The correction equation was motivated by the assumption that the search space already contains a reasonable approximation to an eigenvector, and this is not the case at the very first iteration: the first approximate eigenvector is simply the random initial vector $v_0$ itself.

Now we have a few options:

1. As long as the residual of a Ritz pair $(\theta, u)$ is large, we could replace $\theta$ in the correction equation with our target $\tau$.
2. In the first few iterations we could simply add the residual $r$ to the search subspace.
3. Option 2 is actually equivalent to “solving” the correction equation with a single iteration of a Krylov solver, so a generalized approach is to incrementally increase the accuracy of the Krylov solver at each Jacobi-Davidson iteration.
4. We could initialize the search space as a Krylov subspace, so that Jacobi-Davidson starts out as the Arnoldi method.
5. In some applications we already have a good guess for the eigenvectors and can initialize the search subspace with them. For instance, in bifurcation analysis one typically computes eigenvalues of a Jacobian matrix multiple times near a bifurcation point. Since this Jacobian varies only slightly between solves, the eigenpairs can be expected to vary slightly as well. Hence, the eigenvectors from the first solve form a very good initial search subspace for the second solve.
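Initializing the search space as a Krylov subspace amounts to running a few steps of the Arnoldi process before the Jacobi-Davidson iteration takes over. A minimal sketch of building such a basis (the function name `arnoldi_basis` is mine):

```python
import numpy as np

def arnoldi_basis(A, v0, m):
    """Orthonormal basis of the Krylov subspace
    K_m(A, v0) = span{v0, A v0, ..., A^{m-1} v0},
    built with the Arnoldi process (Gram-Schmidt with reorthogonalization)."""
    n = len(v0)
    V = np.zeros((n, m))
    V[:, 0] = v0 / np.linalg.norm(v0)
    for j in range(m - 1):
        w = A @ V[:, j]
        w -= V[:, :j + 1] @ (V[:, :j + 1].T @ w)  # orthogonalize against basis
        w -= V[:, :j + 1] @ (V[:, :j + 1].T @ w)  # reorthogonalize for stability
        V[:, j + 1] = w / np.linalg.norm(w)
    return V
```

Handing this `V` to Jacobi-Davidson as the initial search space means the first extraction step performs exactly the Rayleigh-Ritz procedure of the Arnoldi method.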

It seems sensible to use option 3 as a default when there is no prior information about the eigenpairs. Options 4 and 5 can be implemented by letting the user provide an initial search subspace.
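The value of reusing eigenvectors can be illustrated numerically: an eigenvector of $A$ is already an excellent approximate eigenvector of a slightly perturbed matrix $A + E$, in the sense that its residual is on the order of $\|E\|$. A small sketch (matrix sizes and perturbation scale are arbitrary choices of mine):

```python
import numpy as np

np.random.seed(2)
n = 50
B = np.random.randn(n, n)
A = (B + B.T) / 2                      # symmetric "Jacobian" at the first solve
vals, vecs = np.linalg.eigh(A)

# Slightly perturbed Jacobian, as one encounters near a bifurcation point
E = np.random.randn(n, n) * 1e-3
A2 = A + (E + E.T) / 2

# Reuse an eigenvector of A as the approximate eigenvector for A2:
# its residual with respect to A2 is already on the order of ||E||
u = vecs[:, 0]
theta = u @ A2 @ u                     # Rayleigh quotient of u w.r.t. A2
res = np.linalg.norm(A2 @ u - theta * u)
```

Starting Jacobi-Davidson from such a vector means the assumption behind the correction equation holds from the very first iteration.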