Randomized Algorithms for the Simultaneous Compression
and LU Factorization of Hierarchical Matrices
Anna Yesypenko, Per-Gunnar Martinsson
Abstract
Dense matrices arising in the solution of elliptic PDEs often have internal structure that allows the linear system \(\mtx A \mtx x = \mtx b\) to be solved in linear or near-linear time. The hierarchical matrix (\(\mathcal {H}^{2}\)-matrix) formalism [1] partitions \(\mtx A\) into blocks, each of which is either small enough to be stored densely or of numerically low rank \(k\), where \(k\) controls the accuracy of the approximation. This format enables matrix-vector products to be computed to a desired accuracy in linear or near-linear time, allowing for the rapid solution of dense linear systems using iterative methods when \(\mtx A\) is well-conditioned. However, in many situations, the matrix \(\mtx A\) is ill-conditioned, and iterative methods do not offer satisfactory performance.
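To make the basic mechanism concrete, the sketch below (Python/NumPy) shows a deliberately simplified one-level example, not the full \(\mathcal H^2\) format: off-diagonal blocks stored in low-rank factored form can be applied to a vector without ever being formed densely. All names in the sketch are illustrative.

```python
# A minimal one-level illustration (a simplification, not the full H^2 format):
# the two off-diagonal blocks are stored in factored form U @ V.T of rank k,
# so the matrix-vector product never forms them as dense matrices.
import numpy as np

def block_lowrank_matvec(D1, D2, U12, V12, U21, V21, x):
    """Apply [[D1, U12 @ V12.T], [U21 @ V21.T, D2]] to the vector x."""
    n1 = D1.shape[0]
    x1, x2 = x[:n1], x[n1:]
    y1 = D1 @ x1 + U12 @ (V12.T @ x2)   # O(n1 * k) work for the low-rank block
    y2 = D2 @ x2 + U21 @ (V21.T @ x1)
    return np.concatenate([y1, y2])
```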
The objective of this work is to compute an \(\mathcal H^2\) approximation to the inverse \(\mtx A^{-1}\), enabling the fast solution of ill-conditioned dense linear systems. To date, the use of invertible factorizations of \(\mathcal H^2\)-matrices has been limited by two challenges:
1. Compression. Although a fast matrix-vector product for \(\mtx A\) is often available, individual entries may be difficult to access, which complicates compressing \(\mtx A\) into the \(\mathcal H^2\) format.
2. Invertible Factorization. While invertible factorization algorithms are efficient for specialized formats like HBS/HSS matrices [2, 4, 7], inversion algorithms for general \(\mathcal {H}^{2}\)-matrices typically involve nested recursions and recompressions, making efficient implementation challenging.
This work [8] describes a novel algorithm for simultaneously compressing and inverting an \(\mathcal {H}^{2}\)-matrix and extends the applicability of \(\mathcal H^2\) inversion algorithms [5, 6] to a generic class of dense matrices for which there is a means of applying the matrix and its adjoint to vectors. The method leverages the randomized SVD (RSVD) and novel sketching techniques [3] to efficiently recover numerically low-rank sub-blocks of the matrix \(\mtx A\) within a hierarchical framework by reusing random sketches of the matrix. The precise problem formulation is:
Suppose that \(\mtx {A} \in \mathbb {R}^{N \times N}\) is an \(\mathcal {H}^{2}\)-matrix, and that a fast method is available for applying \(\mtx {A}\) and its adjoint \(\mtx {A}^{*}\) to vectors. We are also given geometric information associated with the matrix \(\mtx A\); e.g., entry \(\mtx A_{ij}\) corresponds to an interaction between points \(\pxx _i\) and \(\pxx _j\) in 2- or 3-dimensional space. Drawing random test matrices \(\mtx {\Omega }\) and \(\mtx {\Psi }\), and using only the set of matrices \(\{ \mtx Y, \mtx Z, \mtx \Omega , \mtx \Psi \}\), where
\begin{equation*} \underset {N \times s}{\mtx {Y}}\ =\ \underset {N \times N}{\mtx {A}}\ \underset {N \times s}{\mtx {\Omega }} \qquad \mbox {and}\qquad \underset {N \times s}{\mtx {Z}}\ =\ \underset {N \times N}{\mtx {A}}^* \underset {N \times s}{\mtx {\Psi }}, \label {eq:setting_intro} \end{equation*}
the framework constructs an invertible \(\mathcal H^2\) factorization \(\mtx K\) such that \(\mtx K^{-1} \approx \mtx A^{-1}\).
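A minimal sketch of this sampling stage is shown below, assuming black-box routines `matvec` and `matvec_adjoint` for applying \(\mtx A\) and \(\mtx A^*\) to blocks of vectors; the routine names and the choice of Gaussian test matrices are illustrative assumptions, not part of the paper's interface.

```python
# Sampling stage: record Y = A @ Omega and Z = A^* @ Psi, which together with
# Omega and Psi is all the compression/factorization framework sees of A.
import numpy as np

def draw_sketches(matvec, matvec_adjoint, N, s, rng=None):
    """Return (Y, Z, Omega, Psi) given black-box products with A and A^*."""
    rng = np.random.default_rng() if rng is None else rng
    Omega = rng.standard_normal((N, s))   # random test matrix for A
    Psi = rng.standard_normal((N, s))     # random test matrix for A^*
    Y = matvec(Omega)                     # s applications of A
    Z = matvec_adjoint(Psi)               # s applications of A^*
    return Y, Z, Omega, Psi
```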
The number of samples \(s\) needed to construct an \(\mathcal H^2\) approximation of the inverse \(\mtx A^{-1}\) is independent of \(N\) and depends only on the chosen rank parameter \(k\) and on properties of the geometry. Numerical results demonstrate the effectiveness of the algorithm across a range of problems, including discretizations of partial differential equations (PDEs) and boundary integral equations (BIEs) in both 2D and 3D, and illustrate its performance in terms of speed, memory efficiency, and accuracy, as well as its robustness to indefiniteness and ill-conditioning.
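For orientation, the sketch below shows the classical randomized SVD [3] applied to a single numerically low-rank block accessible only through products with it and its adjoint. The full algorithm of [8] reuses the global sketches \(\mtx Y\) and \(\mtx Z\) rather than sampling each block separately, so this is only a building-block illustration; the function names and the oversampling parameter `p` are assumptions made for the example.

```python
# A minimal randomized SVD for one n x n block given only black-box products.
import numpy as np

def rsvd(apply_block, apply_block_adjoint, n, k, p=10, rng=None):
    """Return U, s, Vt with U @ diag(s) @ Vt approximating the block to rank k."""
    rng = np.random.default_rng() if rng is None else rng
    Omega = rng.standard_normal((n, k + p))      # random test matrix
    Y = apply_block(Omega)                       # sample the range of the block
    Q, _ = np.linalg.qr(Y, mode="reduced")       # orthonormal basis for the range
    B = apply_block_adjoint(Q).T                 # B = Q^T A, a small (k+p) x n matrix
    Uhat, s, Vt = np.linalg.svd(B, full_matrices=False)
    return (Q @ Uhat)[:, :k], s[:k], Vt[:k, :]
```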
References
[1] Steffen Börm, Lars Grasedyck, and Wolfgang Hackbusch. Introduction to hierarchical matrices with applications. Engineering Analysis with Boundary Elements, 27(5):405–422, 2003.
[2] Kenneth L. Ho and Leslie Greengard. A fast direct solver for structured linear systems by recursive skeletonization. SIAM Journal on Scientific Computing, 34(5):A2507–A2532, 2012.
[3] James Levitt and Per-Gunnar Martinsson. Linear-complexity black-box randomized compression of rank-structured matrices. SIAM Journal on Scientific Computing, 46(3):A1747–A1763, 2024.
[4] Per-Gunnar Martinsson and Vladimir Rokhlin. A fast direct solver for boundary integral equations in two dimensions. Journal of Computational Physics, 205(1):1–23, 2005.
[5] Victor Minden, Kenneth L. Ho, Anil Damle, and Lexing Ying. A recursive skeletonization factorization based on strong admissibility. Multiscale Modeling & Simulation, 15(2):768–796, 2017.
[6] Daria Sushnikova, Leslie Greengard, Michael O’Neil, and Manas Rachh. FMM-LU: A fast direct solver for multiscale boundary integral equations in three dimensions. Multiscale Modeling & Simulation, 21(4):1570–1601, 2023.
[7] Jianlin Xia, Shivkumar Chandrasekaran, Ming Gu, and Xiaoye S. Li. Fast algorithms for hierarchically semiseparable matrices. Numerical Linear Algebra with Applications, 17(6):953–976, 2010.
[8] Anna Yesypenko and Per-Gunnar Martinsson. Randomized strong recursive skeletonization: Simultaneous compression and factorization of H-matrices in the black-box setting. arXiv preprint arXiv:2311.01451, 2023.