# File:Arnoldi.f

$ \renewcommand{\vec}[1]{ {\bf #1} } \newcommand{\bnabla}{\vec{\nabla}} \newcommand{\Rey}{Re} \def\vechat#1{\hat{\vec{#1}}} \def\mat#1{#1} $

## The Arnoldi Method

The Arnoldi method computes the eigenvalues and eigenvectors of a matrix, i.e. the scalars $\sigma$ and $n$-vectors $\vec{x}$ that satisfy \[ \sigma\,\vec{x} = A\,\vec{x} \] for a given $n\times n$ matrix $A$.

The main advantage of the method is that it only requires multiplications by $A$ for a given $\vec{x}$ -- it does not need to know $A$ itself. This means that $A$ need not even be stored, and could correspond to a very complex linear 'action' on $\vec{x}$, e.g. a time integration with initial condition $\vec{x}$. Given a starting vector $\vec{x}_0$, the method seeks eigenvectors $\vec{x}$ in the Krylov subspace $\mathrm{span}\{\vec{x}_0,\,A\vec{x}_0,\,A^2\vec{x}_0,\dots\}$, and uses Gram-Schmidt orthogonalisation to turn these vectors into a well-conditioned basis for that space. In practice the size of this space must be limited to $m$ vectors, but if this limit is reached, it is possible to restart without losing information. Restarts are not implemented here; restarted variants can dramatically reduce the number of multiplications by $A$ required.
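As a concrete illustration of the iteration described above, here is a minimal pure-Python sketch (not part of `arnoldi.f`; the names `arnoldi` and `matvec` are illustrative): it builds an orthonormal Krylov basis by modified Gram-Schmidt, calling only a user-supplied matrix-vector product, and records the projection coefficients in a small Hessenberg matrix $h$ whose eigenvalues approximate those of $A$.

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def norm(a):
    return math.sqrt(dot(a, a))

def arnoldi(matvec, x0, m):
    """Build m orthonormal Krylov vectors q and Hessenberg coefficients h.

    Only matvec(x) -> A*x is needed; A itself is never stored.
    """
    nrm = norm(x0)
    q = [[x / nrm for x in x0]]               # q[0] = normalised x0
    h = [[0.0] * m for _ in range(m + 1)]
    for k in range(m):
        v = matvec(q[k])                      # one multiplication by A
        for j in range(k + 1):                # orthogonalise against basis
            h[j][k] = dot(q[j], v)
            v = [vi - h[j][k] * qj for vi, qj in zip(v, q[j])]
        h[k + 1][k] = norm(v)
        if h[k + 1][k] < 1e-12:
            break                             # Krylov space became invariant
        q.append([vi / h[k + 1][k] for vi in v])
    return q, h

# Example: A = diag(3, 2, 1) applied without storing A explicitly.
A_diag = [3.0, 2.0, 1.0]
mv = lambda x: [d * xi for d, xi in zip(A_diag, x)]
q, h = arnoldi(mv, [1.0, 1.0, 1.0], 3)
```

The Arnoldi relation $A\,q_k = \sum_j h_{jk}\,q_j$ can be checked directly on the output.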

### Time Integration and Exponentiation

The method finds the eigenvalues most separated in the complex plane first. If $A$ is expected to have many negative eigenvalues of little interest, it may be better to work with $\tilde{A}=\mathrm{e}^A=I+A+\frac{1}{2!}A^2+\dots$, i.e. the eigenproblem $\mathrm{e}^\sigma\vec{x}=\mathrm{e}^A\vec{x}$. This problem shares the same eigenvectors as the original problem, but its eigenvalues $\tilde{\sigma}=\mathrm{e}^\sigma$ are often more suitable for the Arnoldi method. The negative eigenvalues $\sigma$ then correspond to eigenvalues $\tilde{\sigma}$ bunched close to the origin. The Arnoldi method favours the $\tilde{\sigma}$ most separated in the complex plane, which correspond to the $\sigma$ with the largest real parts.

Note that for the system $\partial_t \vec{x} = A\,\vec{x}$, time integration corresponds to exponentiation: taking eigenvector $\vec{x}$ with eigenvalue $\sigma$ as an initial condition, the result of time integration from $0$ to $T$ is $\mathrm{e}^{AT}\vec{x}=\mathrm{e}^{\sigma T}\vec{x}$. Writing $B\equiv\mathrm{e}^{AT}$ for the time integration operator, $\mathrm{e}^{\sigma T}$ is the eigenvalue of $B$ for the same eigenvector $\vec{x}$, from which it is straightforward to recover $\sigma$.
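This correspondence is easy to check numerically. The sketch below (illustrative Python, not from `arnoldi.f`; the diagonal $A$ and the classical RK4 integrator are assumptions for the example) integrates an eigenvector of $A$ over $[0,T]$ and recovers $\sigma$ from the logarithm of the growth factor $\mathrm{e}^{\sigma T}$.

```python
import math

# Hypothetical 2x2 example: A = [[-1, 0], [0, -3]], eigenvalues -1 and -3.
def matvec(x):
    return [-1.0 * x[0], -3.0 * x[1]]

def integrate(x, T, nsteps=1000):
    """Classical RK4 time integration of dx/dt = A x (the operator B)."""
    dt = T / nsteps
    for _ in range(nsteps):
        k1 = matvec(x)
        k2 = matvec([xi + 0.5 * dt * ki for xi, ki in zip(x, k1)])
        k3 = matvec([xi + 0.5 * dt * ki for xi, ki in zip(x, k2)])
        k4 = matvec([xi + dt * ki for xi, ki in zip(x, k3)])
        x = [xi + dt / 6.0 * (a + 2 * b + 2 * c + d)
             for xi, a, b, c, d in zip(x, k1, k2, k3, k4)]
    return x

T = 1.0
x = integrate([1.0, 0.0], T)    # eigenvector for sigma = -1
sigma = math.log(x[0]) / T      # recover sigma from e^{sigma*T}
```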

Consider the case of a perturbation $\vec{\delta x}$ linearised about a solution $\vec{x}_0$, i.e. $\partial_t \vec{\delta x} = A(\vec{x}_0)\,\vec{\delta x}$. Let $\vec{X}(\vec{x})$ be the result of time integration of $\vec{x}$. For this system, the result of $B\,\vec{\delta x}$ for a given $\vec{\delta x}$ may be approximated by $\frac{1}{\epsilon}(\vec{X}(\vec{x}_0+\epsilon\,\vec{\delta x})-\vec{X}(\vec{x}_0))$ for some small value $\epsilon$. (Although we expect that $\vec{X}(\vec{x}_0)=\vec{x}_0$, numerical accuracy is likely to be better with the given form.) Note that to find the eigenvalues $\tilde{\sigma}=\mathrm{e}^{\sigma T}$ of $B$ with the Arnoldi method, only a routine for time integration of a given initial condition is required.
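A minimal sketch of this finite-difference trick follows, using a hypothetical scalar system $\partial_t x = x - x^3$, whose linearisation about the equilibrium $x_0=1$ is $\partial_t\,\delta x = -2\,\delta x$, so $B$ should multiply a perturbation by $\mathrm{e}^{-2T}$. The integrator `X` (forward Euler) and the value of $\epsilon$ are illustrative choices, not taken from `arnoldi.f`.

```python
import math

def X(x, T=0.5, nsteps=2000):
    """Forward-Euler time integration of the nonlinear system dx/dt = x - x**3."""
    dt = T / nsteps
    for _ in range(nsteps):
        x = x + dt * (x - x**3)
    return x

# Matrix-free action of B = e^{A(x0) T} on a perturbation dx = 1:
#   B dx ~= (X(x0 + eps*dx) - X(x0)) / eps
x0, eps, T = 1.0, 1e-6, 0.5
Bdx = (X(x0 + eps, T) - X(x0, T)) / eps
```

Here `Bdx` should be close to $\mathrm{e}^{-2T}=\mathrm{e}^{-1}$, and only the nonlinear integrator `X` was ever called.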

## How to use the code

To download, click the link above. The LAPACK library is also required.

The subroutine `arnold(...)` needs to be passed a function that calculates the dot product of two vectors. For example:

```fortran
double precision function dotprod(n, a, b)
   implicit none
   integer          :: n
   double precision :: a(n), b(n)
   dotprod = sum(a*b)
end function dotprod
```

`arnold(...)` needs to be called repeatedly. It communicates the status of the computation via the flag `ifail`, which tells the user how many eigenvalues have converged to a given tolerance, whether a vector must be multiplied by $A$ again, or whether the method has failed, e.g. reached the maximum number of vectors that can be stored.

An example of use of the code:

```fortran
! declare workspace vectors, h, q, b... - see header of arnoldi.f
sv = ...   ! random initial vector x
k = 0      ! initialise iteration counter
do while(.true.)
   call arnold(n,k,kmax,ncgd,dotprod,tol,sv,h,q,b,wr,wi,ifail)
   if(ifail==-1) then
      print*, ' arnoldi converged!'
      exit
   else if(ifail==0) then
      call multA(sv, sv)   ! possibly complicated routine that multiplies sv by A
   else if(ifail==1) then
      print*, 'WARNING: arnoldi reached max its'
      exit
   else if(ifail>=2) then
      print*, 'WARNING: arnoldi error:', ifail
      exit
   end if
end do
```

On exit, the eigenvectors are stored in columns of `b`, in order corresponding to eigenvalues in `wr` and `wi`. If the first eigenvalue is real (`wi(1)==0.`), the eigenvector occupies the first column of `b` only. If the next eigenvalue is complex, the real and imaginary parts will occupy the next two columns of `b`.
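This packing of real and complex-conjugate eigenpairs into the columns of `b` matches the convention used by LAPACK's real eigensolvers. The sketch below shows how such columns can be unpacked; the arrays `wr`, `wi`, `b` are made-up example data, not output of `arnoldi.f`.

```python
# Hypothetical example data: one real eigenvalue, then a conjugate pair.
wr = [0.5, 0.1, 0.1]     # real parts of eigenvalues
wi = [0.0, 2.0, -2.0]    # imaginary parts (conjugate pair in slots 1, 2)
b = [[1.0, 0.3, 0.4],    # row i holds entry i of each eigenvector column
     [2.0, 0.5, 0.6]]

def unpack(wr, wi, b):
    """Return a list of (eigenvalue, eigenvector) pairs with complex entries."""
    out, j = [], 0
    while j < len(wr):
        if wi[j] == 0.0:                       # real eigenvalue: one column
            out.append((wr[j], [row[j] for row in b]))
            j += 1
        else:                                  # conjugate pair: two columns
            lam = complex(wr[j], wi[j])        # real part col j, imag col j+1
            vec = [complex(row[j], row[j + 1]) for row in b]
            out.append((lam, vec))
            out.append((lam.conjugate(), [v.conjugate() for v in vec]))
            j += 2
    return out

eigs = unpack(wr, wi, b)
```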

## Parallel use

No special adaptations are required for parallel (MPI) use -- let each rank pass its local section of the vector `sv`, and let the `dotprod` function `allreduce` (sum) the partial dot products.

## File history

| | Date/Time | Dimensions | User | Comment |
|---|---|---|---|---|
| current | 03:06, 13 December 2016 | (8 KB) | Apwillis (Talk \| contribs) | For calculating the eigenvalues of a matrix. |

