
Streaming low-rank approximation of tree tensor networks

Alberto Bucci, Gianfranco Verzella

Abstract

Low-rank tensor approximation has emerged as a powerful tool in scientific computing, enabling the efficient handling of large-scale linear and multilinear algebra problems that would otherwise be computationally infeasible with classical methods. By exploiting the inherent low-dimensional structure within high-dimensional data, these techniques reduce storage costs and computational complexity, making it possible to approximate solutions to problems in fields as diverse as quantum physics, machine learning, and computational biology.

Recent advances in randomized techniques for low-rank matrix approximations, including methods like randomized SVD [1] and the generalized Nyström method [2, 3], have paved the way for substantial progress in tensor approximation as well. A range of specialized randomized methods have emerged for tensor decompositions. For instance, the randomized higher-order SVD and its sequential truncated version [4] provide efficient tools for approximating tensors in Tucker format. Likewise, randomized adaptations of TT-SVD [5] extend matrix-based techniques to the tensor train format, enabling the approximation of high-dimensional data while mitigating the curse of dimensionality.
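For concreteness, a minimal NumPy sketch of the randomized SVD idea underlying these methods is shown below; the function name, the Gaussian test matrix, and the oversampling parameter are illustrative choices, not details taken from [1, 4, 5].

```python
import numpy as np

def randomized_svd(A, rank, oversample=10):
    """Minimal sketch-then-project randomized SVD (illustrative)."""
    m, n = A.shape
    # Gaussian sketch captures the dominant column space of A
    Omega = np.random.randn(n, rank + oversample)
    Q, _ = np.linalg.qr(A @ Omega)        # orthonormal range basis
    B = Q.T @ A                           # small projected matrix
    Ub, s, Vt = np.linalg.svd(B, full_matrices=False)
    return (Q @ Ub)[:, :rank], s[:rank], Vt[:rank, :]
```

Note that this scheme touches \(A\) twice (to form \(A\Omega\) and \(Q^{\top}A\)); avoiding the second pass is precisely what the streaming methods discussed next achieve.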

The multilinear Nyström method [6], its sequential counterpart [7], and the streaming tensor train approximation [8] further advance this field, enabling the streaming low-rank approximation of a given tensor \(\mathcal {A}\) in the Tucker or tensor train format, respectively.

These methods build on the generalized Nyström approach, accessing the tensor \(\mathcal {A}\) exclusively through two-sided random sketches of the original data, which makes them single-pass and well suited to parallel implementation.
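To make the single-pass property concrete, here is a hedged matrix-level sketch of the generalized Nyström formula \(A \approx (A\Omega )(\Psi ^{\top } A \Omega )^{\dagger }(\Psi ^{\top } A)\); the tensor algorithms of [6, 7, 8] rely on analogous multilinear sketches, and the oversampling choice below is illustrative.

```python
import numpy as np

def generalized_nystrom(A, r, oversample=5):
    """One-pass sketch: A ~ (A Om) pinv(Psi^T A Om) (Psi^T A)."""
    m, n = A.shape
    Omega = np.random.randn(n, r)              # right sketch
    Psi = np.random.randn(m, r + oversample)   # oversampled left sketch
    AOm = A @ Omega                            # the only accesses to A are
    PsiA = Psi.T @ A                           # these two-sided sketches
    core = Psi.T @ AOm                         # small (r + p) x r core
    # Least-squares solve applies the pseudoinverse of the core stably
    return AOm @ np.linalg.lstsq(core, PsiA, rcond=None)[0]
```

Taking \(\Psi \) slightly larger than \(\Omega \), as in the sketch above, is the usual stabilization in the generalized Nyström setting.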

Tucker and tensor train decompositions are specific instances of the more general tree tensor network (TTN) decomposition, where the underlying tree structure takes the form of either a star or chain configuration.

By combining the multilinear Nyström method [6] with the streaming tensor train approximation [8], we introduce the tree tensor network Nyström algorithm (TTNN) [9]: a novel approach for the streaming low-rank approximation of tensors in the tree tensor network format. We also introduce a sequential variant of the algorithm that operates on increasingly compressed versions of the tensor while, remarkably, preserving streamability, and we provide a detailed analysis of the accuracy of both methods.

However, in practical applications, tensors are often provided in a low-rank TTN format, as working with the full tensor would be computationally prohibitive. In these cases, the challenge lies in achieving further compression or rounding of these representations.

We demonstrate that our algorithm can be readily adapted to this specific setting by leveraging structured embeddings.

Our results indicate that TTNN achieves a nearly optimal approximation error when the sketch sizes are appropriately selected. A series of numerical experiments further illustrates the performance of TTNN in comparison with existing deterministic and randomized methods.

References