
Subspace accelerated contour integration methods for eigenvalue problems

Luka Grubišić

Abstract

In this talk, we will present a class of adaptive approximation methods for computing partial solutions of eigenvalue problems. We will concentrate on algorithms which are matrix-free in the sense that they treat a matrix \(A\), or its shifted inverse \((z-A)^{-1}\), as a mapping \(A:x\mapsto Ax\) or \((z-A)^{-1}:x\mapsto (z-A)^{-1}x\), respectively. We present a Beyn-type eigensolver (see [1]) accelerated by an adaptive reduced-order model of the matrix resolvent. As prototype examples, we will consider both linear and nonlinear (in the spectral parameter) eigenvalue problems. In particular, we will study examples from thermoacoustic applications [14].

In the interest of clarity, let us first concentrate on the standard linear eigenvalue problem for a diagonalisable matrix \(A\). When the resolvent is given as a mapping \((z-A)^{-1}:x\mapsto (z-A)^{-1}x\), one has to incorporate the inexactness (due to approximation truncation) of the evaluation of this mapping into the analysis. This is a known and structurally challenging problem in the theory of Krylov-type solvers [10, 16]. An alternative approach is to transform the problem of approximating the eigenvalue cluster enclosed by the finite contour \(\Gamma \) into an eigenvector problem for the spectral projector \(P_\Gamma \)

\[ P_\Gamma = \frac {1}{2\pi \mathrm {i}}\int _\Gamma (z-A)^{-1}~dz\approx \mathbf {\Pi _\Gamma }:=\sum _{i=1}^N\omega _i(z_i-A)^{-1}. \]

One can then apply the standard subspace iteration to extract eigenvector information using the approximation

\[ x_j\mapsto P_\Gamma x_j = \frac {1}{2\pi \mathrm {i}}\int _\Gamma (z-A)^{-1}x_j~dz\approx \sum _{i=1}^N\omega _i(z_i-A)^{-1}x_j,\;\;j=1,\cdots ,d \]

For this talk, we choose not to discuss the implications of embarrassing parallelism (in terms of sampling the resolvent with respect to the spectral parameter \(z_i\) and the vectors \(x_j\)) for the evaluation of the action of \(P_\Gamma \).
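As a hedged illustration (not the author's implementation), the action of the quadrature approximation \(\Pi_\Gamma\) can be sketched in a few lines; the matrix, contour, and tolerance below are invented for the example.

```python
# Sketch: apply Pi_Gamma = sum_k omega_k (z_k - A)^{-1} to a block of vectors,
# using the trapezoidal rule on a circular contour Gamma.
import numpy as np

def apply_quadrature_projector(A, X, center, radius, N=32):
    """N-point trapezoidal approximation of (1/(2*pi*i)) * int_Gamma (z-A)^{-1} X dz
    over the circle z(theta) = center + radius*exp(i*theta)."""
    n = A.shape[0]
    Y = np.zeros((n, X.shape[1]), dtype=complex)
    for k in range(N):
        zk = center + radius * np.exp(2j * np.pi * (k + 0.5) / N)
        wk = (zk - center) / N   # omega_k: absorbs dz and the 1/(2*pi*i) factor
        Y += wk * np.linalg.solve(zk * np.eye(n) - A, X)
    return Y

# Toy check: eigenvalues 1 and 2 lie inside |z| = 3, eigenvalue 10 lies outside,
# so Pi_Gamma should be close to the rank-2 spectral projector diag(1, 1, 0).
A = np.diag([1.0, 2.0, 10.0])
P = apply_quadrature_projector(A, np.eye(3), center=0.0, radius=3.0)
```

The trapezoidal rule converges geometrically here, so even a modest \(N\) reproduces the projector to several digits.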

We will loosely call this approach interpolatory and nonintrusive. Namely, to produce a reliable eigenvalue/eigenvector approximation method, one only needs a solver for the shifted system \((z, x)\mapsto (z-A)^{-1}x\) as a black box, but with an error estimate and error control. The projection \(P_\Gamma \) is a dense but low-rank matrix. The dimension of its range equals the joint algebraic multiplicity of the eigenvalues enclosed by the contour \(\Gamma \), denoted by \(\#\Gamma \). The problem of computing an orthonormal basis of the eigensubspace associated with the enclosed cluster of eigenvalues can thus be reduced to the calculation of the SVD of the large, implicitly defined, low-rank matrix \(P_\Gamma \). This orthonormal basis can then be used to construct a small auxiliary spectral problem from which eigenvector/eigenvalue information can be directly and robustly extracted (not the topic of this talk). Randomized SVD has distinguished itself as the method of choice for analyzing approximately low-rank matrices. It has been studied in many settings, including its infinite-dimensional incarnation [13, 3, 2], which is suitable for the study of numerical methods applied to discretizations of partial differential operators in physics and engineering. Note that in our notation the randomized SVD algorithm for \(\Pi _\Gamma \approx P_\Gamma \) starts with a random draw of the interpolation directions \(x_j\), \(j=1,\cdots ,d\), for \(d\geq \#\Gamma +2\). Here we assume that the \(x_j\) have been drawn appropriately [2].
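A minimal self-contained sketch of this pipeline (the test matrix, contour, and truncation tolerance are assumptions made up for illustration, not the author's code): draw \(d \geq \#\Gamma + 2\) random probes, apply the quadrature approximation of \(P_\Gamma\), truncate an SVD to obtain an orthonormal basis, and extract Ritz values from the small auxiliary problem.

```python
# Hedged sketch of the randomized contour-integration pipeline described above.
import numpy as np

rng = np.random.default_rng(0)
n = 200
# Symmetric test matrix with a known cluster {0.5, 0.7, 0.9} inside |z| = 1.5;
# the remaining eigenvalues lie in [2, 5], outside the contour.
evals = np.concatenate([[0.5, 0.7, 0.9], rng.uniform(2.0, 5.0, n - 3)])
Q0, _ = np.linalg.qr(rng.standard_normal((n, n)))
A = Q0 @ np.diag(evals) @ Q0.T

d = 3 + 2                                  # oversampling: d >= #Gamma + 2
X = rng.standard_normal((n, d))            # random interpolation directions x_j

# Y = Pi_Gamma X via the trapezoidal rule on the circle |z| = 1.5
N, radius = 64, 1.5
Y = np.zeros((n, d), dtype=complex)
for k in range(N):
    zk = radius * np.exp(2j * np.pi * (k + 0.5) / N)
    Y += (zk / N) * np.linalg.solve(zk * np.eye(n) - A, X)
Y = Y.real                                 # A is real symmetric, so P_Gamma is real

# Randomized SVD step: the numerical rank of Y reveals #Gamma
U, s, _ = np.linalg.svd(Y, full_matrices=False)
Q = U[:, s > 1e-4 * s[0]]                  # orthonormal basis of the eigenspace

# Small auxiliary spectral problem: Ritz values approximate the enclosed cluster
ritz = np.sort(np.linalg.eigvalsh(Q.T @ A @ Q))
```

The SVD truncation separates the \(\#\Gamma\) directions carried by \(P_\Gamma\) from the quadrature noise in the oversampled probes, which is exactly the implicit low-rank structure exploited above.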

The nonintrusive nature of contour integration methods is the reason for their inclusion in SLEPc, and even as Extended Eigensolver Routines in the Intel MKL library. This is the easiest way to incorporate any monolithic solver for the shifted system into an eigenvalue/eigenvector approximation routine. Large-scale matrices in numerical linear algebra are typically discretizations of partial differential operators, and the contour integration approach offers the flexibility to seamlessly incorporate various discretisations of the shifted system (called Helmholtz solvers in engineering jargon). These include rectangular approximations of the resolvent, such as those from [8] used in the chebop object, or the discontinuous Petrov–Galerkin approach, which also leads to rectangular approximations of the resolvent [11, 9, 7].

Based on the (infinite-dimensional) randomized SVD for Hilbert–Schmidt operators, an extension of Beyn’s contour integration method to operators in Hilbert spaces has been described in [5]. The key ingredient, encapsulated in the phrase solve-then-discretize, is adaptive error control for the Helmholtz solver. By pushing adaptive discretization to a later stage, the randomized part of the algorithm gives us the means to explore the Hilbert space more broadly and to generate an accelerating subspace with better candidates for eigenvector approximations.

The use of advances in rational function approximation for the solution of the spectral problem has been thoroughly analyzed, in a slightly different context, in [6]. To coarsely assess the performance of this method, consider a finite difference discretization of \(A=-\triangle - V\), \(V>0\), with a Gaussian potential \(V\), \(\|V\|_\infty <\infty \). Using the Matlab toolbox SpecSolve on a computer with \(10\) cores, it took \(104\) seconds to approximate the spectral density in the interval \(\left [-\|V\|_\infty ,0\right ]\) with tolerance \(\varepsilon =0.05\). In comparison, Matlab’s eigs on the same machine, applied to a \(10^4\times 10^4\) discretization, computed all eigenvalues in the same interval within \(0.5\) seconds. Apart from the obvious further use of embarrassing parallelism in sampling the resolvent, a speedup can be achieved by exploiting the product structure in the construction of random vectors [4] (not this talk) or by accelerating the evaluation of the resolvent using subspace acceleration [14] (this talk).

As prototypes, we will consider a large class of (nonlinear) eigenvalue problems which are defined by the generalized resolvent

\[ R(z)=(A_0+ f_1(z)A_1+\cdots +f_s(z)A_s)^{-1} \]

with self-adjoint coefficients \(A_i\), \(i=0,\dots ,s\), and scalar functions \(f_i\), \(i=1,\dots ,s\). We will present an analysis and improvements of the method described in [14], which uses subspace acceleration together with reduced-order interpolatory modeling of the nonlinear resolvent \(R\). Our method will be cast within the context of scientific computing, with particular emphasis on problems in thermoacoustics. We will compare the performance of the contour integration method with that of the method based on direct rational interpolation of the resolvent and the application of rational Arnoldi to its linearization [15, 12]. Finally, we will present a general analysis of the randomized SVD algorithm for operators of the form

\[ \mathsf {r}(A_0+V)+W~. \]

Here \(\mathsf {r}\) is a rational approximation of an indicator function, \(A_0\) is self-adjoint and positive definite, the potential \(V\) is relatively compact with respect to \(A_0\), and we use functions of \(A_0\) to construct a Gaussian kernel for random sampling. Finally, \(W\) (not necessarily self-adjoint) is a small bounded operator representing the errors caused by adaptive discretization.
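To make the generalized-resolvent setting concrete, here is a hedged, self-contained sketch of a Beyn-type contour method for a toy nonlinear problem \(T(z) = A_0 + f_1(z)A_1\) with \(f_1(z) = z^2\); the sizes, contour, and tolerances are invented for the example and this is not the author's implementation.

```python
# Beyn-type moment sketch for a toy nonlinear eigenproblem T(z) x = 0,
# T(z) = z^2 I - D, whose eigenvalues are z = +/- sqrt(d_i).
import numpy as np

rng = np.random.default_rng(1)
D = np.diag([0.25, 1.0, 9.0])
T = lambda z: z**2 * np.eye(3) - D         # eigenvalues: +/-0.5, +/-1, +/-3

# Circle of radius 0.5 around 0.75 encloses exactly {0.5, 1.0}
c, r, N = 0.75, 0.5, 64
Vprobe = rng.standard_normal((3, 3))       # random probe block
B0 = np.zeros((3, 3), dtype=complex)       # zeroth contour moment of T(z)^{-1} Vprobe
B1 = np.zeros((3, 3), dtype=complex)       # first contour moment
for k in range(N):
    zk = c + r * np.exp(2j * np.pi * (k + 0.5) / N)
    wk = (zk - c) / N                      # trapezoidal weight, absorbing dz/(2*pi*i)
    S = np.linalg.solve(T(zk), Vprobe)
    B0 += wk * S
    B1 += wk * zk * S

# Rank-revealing SVD of B0; the small matrix M has the enclosed eigenvalues
U, s, Wh = np.linalg.svd(B0)
m = int(np.sum(s > 1e-8 * s[0]))           # numerical rank = number of eigenvalues inside
M = U[:, :m].conj().T @ B1 @ Wh[:m].conj().T @ np.diag(1 / s[:m])
eigs_inside = np.linalg.eigvals(M)         # approximations of the enclosed eigenvalues
```

The same skeleton applies verbatim when \(T(z)^{-1}\) is replaced by a black-box evaluation of the generalized resolvent \(R(z)\) with error control.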

References