
Matrix equations from the \(\star \)-algebra with quantum chemistry applications

Stefano Pozza, Christian Bonhomme, Lorenzo Lazzarino, Niel Van Buggenhout

Abstract

Consider a matrix-valued function \(\tilde {A}(t) \in \mathbb {C}^{N \times N}\) analytic on the bounded interval \(\mathcal {I}=[a, b]\), a vector \(v \in \mathbb {C}^N\), and let \(\tilde {u}(t) \in \mathbb {C}^{N}\) be the solution of the non-autonomous ordinary differential equation

\begin{equation} \label {eq:ode} \frac {\partial }{\partial t}\tilde {u}(t) = \tilde {A}(t)\tilde {u}(t), \quad \tilde {u}(a) = v, \quad t \in \mathcal {I}=[a,b]. \end{equation}

When \(\tilde {A}(t)\) commutes with itself at different times, i.e., \(\tilde {A}(t_1)\tilde {A}(t_2) = \tilde {A}(t_2)\tilde {A}(t_1)\) for all \(t_1, t_2 \in \mathcal {I}\), the solution is given through the matrix exponential as \(\tilde {u}(t) = \exp \left (\int _a^t \tilde {A}(\tau ) \,\text {d}\tau \right ) v\), \(t \in [a,b]\). In the general case, however, there is no explicit formula for \(\tilde {u}(t)\) in terms of the usual matrix functions. In quantum chemistry, spin dynamics are often modeled by Eq. (1), where the time-dependent matrix takes the form

\[\tilde {A}(t) = A_1 f_1(t) + \dots + A_k f_k(t),\]

with \(A_1, \dots , A_k\) large, sparse, constant matrices, \(f_1(t), \dots , f_k(t)\) scalar analytic functions, and \(k\) a small integer; see, e.g., [11].
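As a small illustrative (and purely hypothetical) instance of this setup, the sketch below builds \(\tilde {A}(t) = A_1 f_1(t) + A_2 f_2(t)\) with commuting (here diagonal) matrices, so the matrix-exponential formula applies, and checks it against a standard ODE solver. All matrices and functions are toy choices, not taken from the applications above.

```python
import numpy as np
from scipy.linalg import expm
from scipy.integrate import solve_ivp

# Toy instance of Eq. (1) with k = 2: A(t) = A1*f1(t) + A2*f2(t).
# Diagonal A1, A2 commute at all times, so the exponential formula holds.
A1 = np.diag([1.0, -0.5, 0.25])
A2 = np.diag([0.3, 0.7, -0.2])
f1, f2 = np.cos, np.sin

def Atilde(t):
    return A1 * f1(t) + A2 * f2(t)

v = np.array([1.0, 1.0, 1.0])
a, b = 0.0, 1.0

# Reference: numerical solution of u' = A(t) u, u(a) = v
sol = solve_ivp(lambda t, u: Atilde(t) @ u, (a, b), v, rtol=1e-10, atol=1e-12)

# Commuting case: u(b) = exp( int_a^b A(tau) dtau ) v, and here the
# integral is A1*(sin b - sin a) - A2*(cos b - cos a)
J = A1 * (np.sin(b) - np.sin(a)) - A2 * (np.cos(b) - np.cos(a))
u_exact = expm(J) @ v

assert np.allclose(sol.y[:, -1], u_exact, atol=1e-6)
```

In the non-commuting case the same exponential formula fails, which is precisely the situation the \(\star \)-approach targets.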

In [16, 17], we introduced a new spectral approach for this kind of ODE that gives the solution in terms of the coefficients of the expansion \(\tilde {u}(t) = \sum _{j=0}^\infty u_j p_j(t)\), \(t \in [-1, 1]\), with \(p_0(t), p_1(t), p_2(t), \dots \) the orthonormal Legendre polynomials and \(u_j \in \mathbb {C}^N\). For a large enough integer \(m\), it is possible to approximate the coefficients \(u_0, \dots , u_{m-1}\) of the truncated expansion \(\tilde {u}(t) \approx \sum _{j=0}^{m-1} u_j p_j(t)\) by solving the matrix equation

\begin{equation} \label {eq:mtxeq} X - F_1 X A_1^T - \dots - F_k X A_k^T = \phi v^T, \end{equation}

for a certain vector \(\phi \), where the \(m \times m\) matrices \(F_1, \dots , F_k\) represent the functions \(f_1(t), \dots , f_k(t)\) in the so-called \(\star \)-algebra [18]. We call this strategy the \(\star \)-approach. The matrix equation (2) enjoys several useful properties: (i) the matrices \(F_1, \dots , F_k\) are banded; (ii) in the applications of interest, the matrices \(A_1, \dots , A_k\) have a Kronecker structure; (iii) the equation has a rank-1 right-hand side. We can solve the matrix equation by iterative methods that exploit properties (i) and (ii) to reduce the cost of matrix-vector multiplication. Moreover, the rank-1 right-hand side suggests that the solution \(X\) might be numerically low-rank. Indeed, this is the case in all the applications we have treated (e.g., [16]). Hence, property (iii) allows for the use of low-rank approximation.
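For small \(m\) and \(N\), the linear-algebra structure of Eq. (2) can be made concrete by vectorization: using the identity \(\mathrm {vec}(F X A^T) = (A \otimes F)\,\mathrm {vec}(X)\) (column-major \(\mathrm {vec}\)), the equation becomes a single \(mN \times mN\) linear system. In the sketch below the banded \(F_i\) and the \(A_i\) are random placeholders; constructing the actual \(\star \)-algebra representations \(F_i\) of the \(f_i(t)\) is not shown.

```python
import numpy as np

rng = np.random.default_rng(0)
m, N, k = 8, 5, 2

# Placeholder banded m x m matrices standing in for F_1, ..., F_k
# (property (i)); scaled down so the system stays well conditioned.
def banded(m, bw):
    M = rng.standard_normal((m, m))
    return np.triu(np.tril(M, bw), -bw) * 0.1

Fs = [banded(m, 2) for _ in range(k)]
As = [rng.standard_normal((N, N)) * 0.1 for _ in range(k)]
phi = rng.standard_normal(m)
v = rng.standard_normal(N)

# Vectorize Eq. (2): (I - sum_i A_i kron F_i) vec(X) = vec(phi v^T)
M_sys = np.eye(m * N) - sum(np.kron(A, F) for F, A in zip(Fs, As))
x = np.linalg.solve(M_sys, np.outer(phi, v).reshape(-1, order="F"))
X = x.reshape((m, N), order="F")

# Residual check of Eq. (2); the rank-1 right-hand side (property (iii))
# is what motivates low-rank solvers for larger problems.
R = X - sum(F @ X @ A.T for F, A in zip(Fs, As)) - np.outer(phi, v)
assert np.linalg.norm(R) < 1e-8
```

The dense solve above is only a structural illustration; for large \(N\) the iterative and low-rank methods described in the text are the practical route.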

This novel numerical approach has proved highly competitive in the solution of ODEs related to a specific model, the generalized Rosen–Zener model. In [2], with Christian Bonhomme (Sorbonne University) and Niel Van Buggenhout (Universidad Carlos III), we introduced a new algorithm, named the \(\star \)-method, that exploits properties (i)–(iii). Its computational cost scales linearly with the model size (Fig. 2, [2]), and it is also highly competitive for increasing interval sizes (Fig. 4, [2]). These first results may open the way to more general efficient methods for spin simulations. However, the spectral properties and structure of other, more complex quantum problems can make the solution of Eq. (2) challenging. For instance, we are working on the solution of an ODE system that accounts for dipolar interactions in a Nuclear Magnetic Resonance application [12]. In this case, the strategies used in [2] are not efficient enough. This is why we are currently testing randomized approaches (joint work with Lorenzo Lazzarino, University of Oxford) and tensor methods, with promising results.

A closer look into the \(\star \)-approach

The difficulties emerging in the numerical solution of Eq. (1) stem from the lack of a general analytic expression for \(\tilde {u}(t)\). Analytic approaches are typically based on the Floquet formalism [10, 19], Magnus series [1, 13], or hybrids of these with ad-hoc approximate or numerical methods [3, 14, 20]. Such approaches rarely provide exact solutions in a finite number of steps, may suffer from convergence issues [1], and can be intractable [4]. There is a perception in the physics community that no exact solutions are achievable [7]. This also influences the development of numerical solvers, since the most advanced numerical methods are typically built on analytical approaches [1, 9, 10]. As noted by M. Grifoni and P. Hänggi in [8]: “Solving the time-dependent Schrödinger equation necessitates the development of novel analytic and computational schemes [...] in a nonperturbative manner” – a remark still relevant today. The \(\star \)-approach follows this suggestion, as it is based on a new nonperturbative expression for \(\tilde {u}(t)\) obtained through the so-called \(\star \)-product.

Let us define the set \(\mathcal {A}(\mathcal {I})\) of the bivariate distributions for which there exists a finite \(k\) so that

\begin{equation*} f(t,s) = \tilde {f}_{-1}(t,s)\Theta (t-s) + \tilde {f}_{0}(t,s)\delta (t-s) + \dots + \tilde {f}_{k}(t,s)\delta ^{(k)}(t-s), \end{equation*}

where \(\Theta (t-s)\) is the Heaviside function (\(\Theta (t-s) = 1\) for \(t \geq s\), and \(0\) otherwise), and \(\delta (t-s), \delta '(t-s), \delta ^{(2)}(t-s), \dots \) are the Dirac delta and its derivatives. The \(\star \)-product of \(f, g \in \mathcal {A}(\mathcal {I})\) is the non-commutative product defined as

\begin{equation*} (f \star g)(t,s) := \int _\mathcal {I} f(t,\tau ) g(\tau ,s) \,\text {d}\tau \in \mathcal {A}(\mathcal {I}); \end{equation*}

see [18]. The \(\star \)-product straightforwardly extends to a scalar-matrix \(\star \)-product and to a matrix-matrix (matrix-vector) \(\star \)-product for matrices with compatible sizes composed of elements from \(\mathcal {A}(\mathcal {I})\). We denote by \(\mathcal {A}^{N \times M}(\mathcal {I})\) the space of \(N \times M\) matrices with elements from \(\mathcal {A}(\mathcal {I})\). Note that \(I_\star = I \delta (t-s)\) is the identity matrix in \(\mathcal {A}^{N \times N}(\mathcal {I})\). As shown in [5], the solution \(\tilde {u}(t)\) of Eq. (1) can then be expressed as

\begin{align} \tilde {u}(t) = u(t,a), \quad u(t,s) &= \Theta (t-s) \star x(t,s), \\ \left (I_\star - \tilde {A}(t)\Theta (t-s)\right ) \star x(t,s) &= v\delta (t-s), \quad t \in [a,b], \label {eq:sol:star:vec} \end{align} with \(x(t,s) \in \mathcal {A}^{N}(\mathcal {I})\). Therefore, solving a system of non-autonomous linear ODEs is equivalent to solving a linear system in the \(\star \)-algebra. Note that, for \(m = \infty \), the matrix equation (2) is the matrix-algebra counterpart of the \(\star \)-linear system (4); see [15, 16, 17]. As a consequence, numerical methods for the solution of Eq. (2) can be interpreted as algorithms in the \(\star \)-algebra. Vice versa, it is possible to devise new techniques (such as preconditioners) and numerical methods (e.g., the \(\star \)-Lanczos algorithm [6]) in the \(\star \)-algebra and then map them into the usual algebra of matrices [15], where they can be implemented.
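A small numerical sanity check makes the \(\star \)-product tangible: discretizing kernels on a grid with quadrature weights, one can verify the identity \(\Theta \star \Theta = (t-s)\Theta (t-s)\), since for \(t \geq s\) the integral \(\int _\mathcal {I} \Theta (t-\tau )\Theta (\tau -s)\,\text {d}\tau = t - s\). A minimal sketch, using a trapezoidal rule on \(\mathcal {I} = [0,1]\):

```python
import numpy as np

# Discretize the *-product: (f * g)(t_i, t_j) ~ sum_l f(t_i,t_l) w_l g(t_l,t_j),
# i.e. a matrix product F @ diag(w) @ G with quadrature weights w.
n = 1001
t = np.linspace(0.0, 1.0, n)
h = t[1] - t[0]
w = np.full(n, h)
w[0] = w[-1] = h / 2            # trapezoidal quadrature weights

T = (t[:, None] >= t[None, :]).astype(float)   # Theta(t_i - t_j)
star = T @ np.diag(w) @ T                      # (Theta * Theta)(t_i, t_j)

# Identity: Theta * Theta = (t - s) Theta(t - s); the piecewise-constant
# integrand makes the quadrature error O(h) at interior grid points.
i, j = 800, 200                 # an interior pair with t_i > t_j
assert abs(star[i, j] - (t[i] - t[j])) <= 2 * h
```

The same grid discretization turns \(\star \)-linear systems like (4) into ordinary triangular linear systems, which is one way to see why the matrix equation (2) arises.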

In conclusion, the \(\star \)-approach has proved extremely fast in certain quantum applications [2]. Its success rests, on the one hand, on advanced linear algebra techniques and, on the other, on the application of \(\star \)-algebra results to numerical linear algebra.

References