File:Arnoldi.f
Revision as of 06:38, 25 February 2019

$ \renewcommand{\vec}[1]{ {\bf #1} } \newcommand{\bnabla}{ \vec{\nabla} } \newcommand{\Rey}{Re} \def\vechat#1{ \hat{ \vec{#1} } } \def\mat#1{#1} $

The Arnoldi Method

The Arnoldi method is a method for calculating the eigenvalues and eigenvectors of a matrix, i.e. for calculating the scalar $\sigma$ and $n$-vectors $\vec{x}$ that satisfy \[

  \sigma\,\vec{x} = A\,\vec{x}

\] for a given $n\times n$ matrix $A$.

The main advantage of the method is that it only requires the ability to multiply a given $\vec{x}$ by $A$ -- it does not need to know $A$ itself. This means that $A$ need not even be stored, and could correspond to a very complex linear 'action' on $\vec{x}$, e.g. a time integral with initial condition $\vec{x}$. Given a starting vector $\vec{x}_0$, the method seeks eigenvectors $\vec{x}$ in $\mathrm{span}\{\vec{x}_0,\,A\vec{x}_0,\,A^2\vec{x}_0,...\}$, but uses Gram-Schmidt orthogonalisation to improve the suitability of this basis set. The set of orthogonalised vectors is called the Krylov subspace. In practice we need to limit the size of this space to $m$ vectors, but if this number is reached, it is possible to restart without losing information. Restarts are not implemented here; see the following section, which describes a technique that can dramatically reduce the number of multiplications by $A$ required.
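As a sketch of how the Krylov basis is built, the loop below performs the Arnoldi iteration in pure Python. This illustration is not part of arnoldi.f (the package itself is Fortran); the matrix `A`, starting vector and subspace size are arbitrary examples, and `matvec` stands in for whatever linear action on $\vec{x}$ is available.

```python
import math

def matvec(A, x):
    # multiply x by A; in practice this could be any linear action on x
    return [sum(A[i][j] * x[j] for j in range(len(x))) for i in range(len(A))]

def dot(a, b):
    return sum(ai * bi for ai, bi in zip(a, b))

def arnoldi(A, x0, m):
    """Build an m-vector Krylov subspace for A starting from x0.
    Returns the orthonormal basis Q and the Hessenberg matrix H,
    whose eigenvalues approximate the dominant eigenvalues of A."""
    norm = math.sqrt(dot(x0, x0))
    Q = [[xi / norm for xi in x0]]
    H = [[0.0] * m for _ in range(m + 1)]
    for k in range(m):
        v = matvec(A, Q[k])
        for j in range(k + 1):              # Gram-Schmidt against previous vectors
            H[j][k] = dot(Q[j], v)
            v = [vi - H[j][k] * qj for vi, qj in zip(v, Q[j])]
        H[k + 1][k] = math.sqrt(dot(v, v))
        if H[k + 1][k] < 1e-12:             # breakdown: subspace is invariant
            break
        Q.append([vi / H[k + 1][k] for vi in v])
    return Q, H

A = [[2.0, 1.0, 0.0],
     [0.0, 1.0, 1.0],
     [0.0, 0.0, 0.5]]
Q, H = arnoldi(A, [1.0, 1.0, 1.0], 3)
```

By construction the columns of `Q` are orthonormal and satisfy $A\,\vec{q}_k = \sum_{j\le k+1} H_{jk}\,\vec{q}_j$; the eigenvalues of the small matrix $H$ approximate those of $A$.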

Time Integration and Exponentiation

The method finds the eigenvalues most separated in the complex plane first. If $A$ is expected to have many negative eigenvalues of little interest, it may be better to work with $\tilde{A}=\mathrm{e}^A=1+A+\frac{1}{2!}A^2+...$, i.e. the eigenproblem $\mathrm{e}^\sigma\vec{x}=\mathrm{e}^A\vec{x}$. This problem shares the same eigenvectors as the original problem, but often has more suitable eigenvalues for the Arnoldi method, $\tilde{\sigma}=\mathrm{e}^\sigma$. The negative eigenvalues $\sigma$ then correspond to eigenvalues $\tilde{\sigma}$ bunched close to the origin. The Arnoldi method favours the $\tilde{\sigma}$ most separated in the complex plane, being the $\sigma$ with largest real parts.
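A minimal numerical illustration of this mapping (in Python, with a made-up spectrum): exponentiation bunches strongly negative eigenvalues near the origin, while the eigenvalue with largest real part becomes the largest $|\tilde{\sigma}|$ and is easily inverted afterwards.

```python
import math

# hypothetical spectrum: one weakly growing mode, many strongly decaying ones
sigmas = [0.1, -5.0, -20.0, -100.0]

# eigenvalues of e^A are e^sigma; decaying modes bunch close to the origin
sigma_tildes = [math.exp(s) for s in sigmas]

# the mode with largest real part sigma now has the largest |sigma_tilde|,
# so the Arnoldi method finds it first
leading = max(sigma_tildes)
recovered = math.log(leading)   # recover sigma from sigma_tilde
```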

Note that for the system $\partial_t \vec{x} = A\,\vec{x}$, time integration corresponds to exponentiation: taking eigenvector $\vec{x}$ with eigenvalue $\sigma$ as an initial condition, the result of time integration from $0$ to $T$ is $\mathrm{e}^{\sigma T}\vec{x}$. The solution of the system is $\vec{x}(T)=\mathrm{e}^{AT}\vec{x}(0)$, so defining the time integration operator $B\equiv\mathrm{e}^{AT}$ we have $B\,\vec{x}=\mathrm{e}^{AT}\vec{x}=\mathrm{e}^{\sigma T}\vec{x}$, i.e. $\tilde{\sigma}=\mathrm{e}^{\sigma T}$ is the eigenvalue of $B$ for the same eigenvector $\vec{x}$. We apply the Arnoldi method to $B$, then recover $\sigma$ from $\tilde{\sigma}$.

Consider the case of a perturbation $\vec{\delta x}$ linearised about a solution $\vec{x}_0$, i.e. $\partial_t \vec{\delta x} = A(\vec{x}_0)\,\vec{\delta x}$. Let $\vec{X}(\vec{x})$ be the result of time integration of $\vec{x}$. For this system, the result of $B\,\vec{\delta x}$ for a given $\vec{\delta x}$ may be approximated by $\frac{1}{\epsilon}(\vec{X}(\vec{x}_0+\epsilon\,\vec{\delta x})-\vec{X}(\vec{x}_0))$ for some small value $\epsilon$. (Although we expect that $\vec{X}(\vec{x}_0)=\vec{x}_0$, numerical accuracy is likely to be better with the given form.) Note that to find the eigenvalues $\tilde{\sigma}=\mathrm{e}^{\sigma T}$ of $B$ with the Arnoldi method, only a routine for time integration of a given initial condition is required.
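The finite-difference approximation can be sketched as follows (Python, for illustration only; the system and the values of $T$ and $\epsilon$ are hypothetical). For $\partial_t x = x - x^3$ about the fixed point $x_0=1$, the linearisation has eigenvalue $\sigma = -2$, so the action of $B$ should multiply a perturbation by $\mathrm{e}^{-2T}$.

```python
import math

def X(x, T=0.5, dt=1e-4):
    # time-integrate dx/dt = x - x**3 from 0 to T (simple RK4 stepper)
    t = 0.0
    while t < T - 1e-12:
        k1 = x - x**3
        k2 = (x + 0.5*dt*k1) - (x + 0.5*dt*k1)**3
        k3 = (x + 0.5*dt*k2) - (x + 0.5*dt*k2)**3
        k4 = (x + dt*k3) - (x + dt*k3)**3
        x += dt*(k1 + 2*k2 + 2*k3 + k4)/6.0
        t += dt
    return x

x0, eps, T = 1.0, 1e-6, 0.5
dx = 1.0                                  # perturbation direction
# approximate the action of B on dx by finite differences
Bdx = (X(x0 + eps*dx) - X(x0)) / eps

sigma_tilde = math.exp(-2.0*T)            # exact eigenvalue of B at this fixed point
```

Here `Bdx` should agree with `sigma_tilde * dx` to within the truncation error of the finite difference; no explicit linearised operator is ever formed, only the nonlinear time integrator `X` is called.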

How to use the code

To download, click the link above. The LAPACK library is also required.

The subroutine arnold(...) needs to be passed a function that calculates the dot product of two vectors. It should look like, for example,

double precision function dotprod(n,a,b)
   implicit none
   integer :: n
   double precision :: a(n), b(n)
   dotprod = sum(a*b)
end function dotprod

arnold(...) needs to be called repeatedly. It communicates the status of the computation via the flag ifail, which tells the user how many eigenvalues have converged to the given tolerance, requests that a vector be multiplied by $A$ again, or reports that the method has failed, e.g. has reached the maximum number of vectors that can be stored.

An example of use of the code:

  ! declare workspace vectors, h, q, b... - see header of arnoldi.f
  sv = ... ! random initial vector x
  k = 0    ! initialise iteration counter
  do while(.true.)
     call arnold(n,k,kmax,ncgd,dotprod,tol,sv,h,q,b,wr,wi,ifail)
     if(ifail==-1) then
        print*, ' arnoldi converged!'
        exit
     else if(ifail==0) then
        call multA(sv, sv)      ! possibly complicated routine that multiplies sv by A
     else if(ifail==1) then
        print*, 'WARNING: arnoldi reached max its'
        exit
     else if(ifail>=2) then
        print*, 'WARNING: arnoldi error:', ifail
        exit
     end if   
  end do

On exit, the eigenvectors are stored in columns of b, in order corresponding to eigenvalues in wr and wi. If the first eigenvalue is real (wi(1)==0.), the eigenvector occupies the first column of b only. If the next eigenvalue is complex, the real and imaginary parts will occupy the next two columns of b.
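This real-pair storage convention can be checked on a hypothetical $2\times2$ example (Python, for illustration; the matrix and vectors below are not produced by arnoldi.f). If two adjacent columns of b hold the real and imaginary parts $\vec{v}_r$, $\vec{v}_i$ of a complex eigenvector, then $A(\vec{v}_r+\mathrm{i}\vec{v}_i)=(w_r+\mathrm{i}\,w_i)(\vec{v}_r+\mathrm{i}\vec{v}_i)$ splits into two real relations:

```python
# hypothetical 2x2 matrix with complex eigenvalue pair lambda = 1 +/- i
A = [[1.0, -1.0],
     [1.0,  1.0]]
wr, wi = 1.0, 1.0            # real and imaginary parts of the eigenvalue

# the complex eigenvector vr + i*vi, stored as two real columns of b
vr = [1.0, 0.0]
vi = [0.0, -1.0]

def matvec(A, x):
    return [sum(A[i][j] * x[j] for j in range(len(x))) for i in range(len(A))]

Avr = matvec(A, vr)          # should equal wr*vr - wi*vi
Avi = matvec(A, vi)          # should equal wi*vr + wr*vi
```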

Parallel use

No special adaptations are required for parallel (MPI) use -- let each rank pass its local subsection of the vector sv, and let the dotprod function allreduce the result of the dot product.
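The parallel pattern can be illustrated serially (Python sketch; the two-rank decomposition is hypothetical): each rank computes a partial dot product over its local subsection, and the allreduce is simply a sum of the partial results.

```python
# serial illustration of the parallel dot product: each rank holds a local
# subsection of the vectors and computes a partial dot product; an allreduce
# (here, a plain sum) combines the partials into the global dot product
a = [1.0, 2.0, 3.0, 4.0]
b = [0.5, 0.5, 0.5, 0.5]

chunks = [(0, 2), (2, 4)]    # hypothetical decomposition over two ranks
partials = [sum(a[i] * b[i] for i in range(lo, hi)) for lo, hi in chunks]
global_dot = sum(partials)   # stands in for MPI_Allreduce with MPI_SUM
```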

File history


Date/Time: 03:06, 13 December 2016 (current)
Dimensions: 8 KB
User: Apwillis (talk | contribs)
Comment: For calculating the eigenvalues of a matrix.
