On the Convergence of the Singular Value Expansion of 2D functions
Sungwoo Jeong, Alex Townsend
Abstract
In this work we study the convergence of the singular value expansion (SVE) of 2D functions (kernels). Consider a square-integrable kernel \(K:[a, b]\times [c, d]\to \mathbb {R}\), where \([a, b], [c, d]\subset \mathbb {R}\). We define (i) the left singular functions, denoted by \(u_1,u_2,\ldots \), which are orthonormal with respect to \(L^2([a,b])\), and (ii) the right singular functions, denoted by \(v_1,v_2,\ldots \), which are orthonormal with respect to \(L^2([c,d])\). These singular functions are defined to satisfy the relationships
\begin{equation} \sigma_n u_n(x) = \int_c^d K(x, y)\, v_n(y)\, dy, \qquad \sigma_n v_n(y) = \int_a^b K(x, y)\, u_n(x)\, dx. \end{equation}
The values \(\sigma _1\geq \sigma _2\geq \cdots >0\) are called the (positive) singular values of \(K\). The SVE of \(K\) is then defined as
\begin{equation} \label{eq:sve} K(x, y) = \sum_{n=1}^\infty \sigma_n u_n(x)\, v_n(y). \end{equation}
Recall that the singular vectors of a matrix \(A\) are defined by the relationships \(Av_n = \sigma _n u_n\) and \(u_n^*A = \sigma _n v_n^*\), and that the singular value decomposition (SVD) can be written as \(A = \sum _n \sigma _n u_n v_n^*\). Thus, the SVE can be thought of as a continuous analogue of the SVD [1].
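As an informal illustration (not taken from the paper), the SVE of a kernel can be approximated numerically by sampling the kernel on a tensor grid, weighting the samples by quadrature weights, and computing a matrix SVD. The Python sketch below does this for the kernel \(K(x,y) = e^{xy}\) on \([0,1]^2\); the kernel, grid size, and trapezoidal quadrature are our own illustrative choices.

\begin{verbatim}
# Minimal sketch (not the authors' code): approximate the SVE of a kernel by a
# quadrature-weighted matrix SVD on a tensor grid (trapezoidal rule).
import numpy as np

def approx_sve(K, a, b, c, d, n=400):
    x = np.linspace(a, b, n)
    y = np.linspace(c, d, n)
    wx = np.full(n, (b - a) / (n - 1)); wx[[0, -1]] /= 2   # trapezoid weights
    wy = np.full(n, (d - c) / (n - 1)); wy[[0, -1]] /= 2
    A = np.sqrt(wx)[:, None] * K(x[:, None], y[None, :]) * np.sqrt(wy)[None, :]
    U, S, Vt = np.linalg.svd(A, full_matrices=False)
    u = U / np.sqrt(wx)[:, None]     # columns approximate L^2-orthonormal u_n(x)
    v = Vt.T / np.sqrt(wy)[:, None]  # columns approximate L^2-orthonormal v_n(y)
    return S, u, v, x, y

sigma, u, v, x, y = approx_sve(lambda x, y: np.exp(x * y), 0, 1, 0, 1)
print(sigma[:5])   # rapidly decaying singular values of a smooth kernel
\end{verbatim}

Weighting by the square roots of the quadrature weights makes Euclidean orthonormality of the discrete singular vectors correspond (approximately) to \(L^2\) orthonormality of the singular functions.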
Long before the SVD of a matrix became a standard computational tool, several pioneers of modern functional analysis in the early 20th century established the existence and properties of the SVE for general square-integrable kernels. Among these developments, Mercer [2] showed in 1909 that any continuous, symmetric, positive definite kernel \(K:[a,b]\times [a,b]\rightarrow \mathbb {R}\) has a uniformly and absolutely convergent SVE,
\begin{equation} K(x,y) = \sum_{n=1}^\infty \lambda_n u_n(x) u_n(y), \quad (x,y)\in [a,b]\times [a,b], \end{equation}
which coincides with its eigenfunction expansion. This is often called Mercer’s theorem. For general kernels that are not positive definite or not symmetric, the convergence of the SVE (pointwise, uniform, and absolute) is an open problem.
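As a classical concrete instance of Mercer’s theorem (a standard example, not drawn from the paper), the Brownian-motion covariance kernel \(K(x,y)=\min (x,y)\) on \([0,1]\times [0,1]\) is continuous, symmetric, and positive definite, and its uniformly and absolutely convergent expansion is
\[ \min (x,y) = \sum_{n=1}^{\infty } \frac {2}{(n-\tfrac {1}{2})^2\pi ^2}\, \sin \big ((n-\tfrac {1}{2})\pi x\big )\, \sin \big ((n-\tfrac {1}{2})\pi y\big ), \]
with eigenvalues \(\lambda_n = (n-\tfrac {1}{2})^{-2}\pi ^{-2}\) and orthonormal eigenfunctions \(\sqrt {2}\,\sin ((n-\tfrac {1}{2})\pi x)\).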
In this work, we first prove that the conclusion of Mercer’s theorem does not hold for general symmetric or asymmetric kernels once the positive definiteness condition is dropped. We provide novel examples which lead to the following result.
Theorem 1. For any \([a, b]\subset \mathbb {R}\) there are continuous symmetric indefinite kernels on \([a,b]\times [a,b]\) such that the SVE, equation (2), (i) does not converge pointwise, (ii) converges pointwise but not uniformly, or (iii) converges pointwise but not absolutely.
We hope this theorem will clarify some confusion in the literature regarding the convergence of the SVE when a symmetric kernel is not positive definite. In practice, a symmetric indefinite kernel often possesses a pointwise convergent SVE, but we prove that such convergence is not always guaranteed. Our work provides a rigorous underpinning for kernel methods that use indefinite and asymmetric kernels.
We then prove our second main result, a convergence result for the SVE when the kernel satisfies a mild regularity condition. We say a kernel \(K:[a, b]\times [c, d]\to \mathbb {R}\) is of uniform bounded variation if
\begin{equation} \int_a^b \left|\frac{\partial}{\partial x}K(x,y)\right| dx < V, \qquad \int_c^d \left|\frac{\partial}{\partial y}K(x,y)\right| dy < V, \label{eq:bv} \end{equation}
holds for every fixed \(y\in [c,d]\) and every fixed \(x\in [a,b]\), respectively, with a uniform constant \(V>0\). We remark that this is a broad class of continuous kernels which includes, for instance, all Lipschitz continuous kernels. For a continuous kernel of uniform bounded variation, we prove the following result using the decay of the singular values and a generalization of the Rademacher-Menchov theorem. (In fact, we prove that the same conclusion holds for any continuous kernel whose singular values decay as \(\sigma _n = \mathcal {O}(n^{-\alpha })\) with \(\alpha >\frac {1}{2}\).)
Theorem 2. For any \([a, b], [c, d]\subset \mathbb {R}\), let \(K:[a,b]\times [c,d]\to \mathbb {R}\) be a continuous kernel of uniform bounded variation (see equation (4)). Then, the SVE of \(K\), equation (2), converges pointwise almost everywhere, unconditionally almost everywhere, and almost uniformly.
To prove the second theorem, we also establish a new bound on the decay of the singular values of such kernels, which we state as a separate proposition. We use a recent result [3] on the error of truncated Legendre series approximations to prove this decay bound.
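As an informal numerical illustration of such algebraic decay (our own experiment, not one reported in the paper), the decay exponent for the Lipschitz kernel \(K(x,y) = |x-y|\) on \([0,1]^2\) can be estimated from a discretized SVD as follows.

\begin{verbatim}
# Sketch: empirical singular value decay for a Lipschitz (hence uniform
# bounded variation) kernel; K(x,y) = |x - y| is an illustrative choice.
import numpy as np

n = 400
x = np.linspace(0, 1, n)
w = np.full(n, 1.0 / (n - 1)); w[[0, -1]] /= 2            # trapezoid weights
A = np.sqrt(w)[:, None] * np.abs(x[:, None] - x[None, :]) * np.sqrt(w)[None, :]
sigma = np.linalg.svd(A, compute_uv=False)                # approximate sigma_n
k = np.arange(5, 51)                                      # fit sigma_5..sigma_50
slope = np.polyfit(np.log(k), np.log(sigma[4:50]), 1)[0]
print(f"estimated decay exponent (sigma_n ~ n^slope): {slope:.2f}")
\end{verbatim}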
Furthermore, we provide an efficient numerical algorithm for computing the SVE of a given kernel. The algorithm consists of two steps. In the first step, we compute a pseudo-skeleton approximation using Gaussian elimination with complete pivoting (GECP), an iterative procedure that approximates the kernel \(K(x,y)\) by a sum of rank-1 functions [4]. In the second step, we improve the resulting rank \(\leq R\) pseudo-skeleton approximation \(K_R(x, y)\) by computing a low-rank SVD: \(K_R(x, y)\) is decomposed into a sum of outer products of orthonormal functions weighted by singular values, which yields an accurate truncated singular value expansion of \(K\).
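The Python sketch below illustrates the two-step scheme on samples of a kernel; the kernel, grid, and tolerance are our own illustrative assumptions, and the paper's algorithm operates on functions rather than on a fixed sample matrix.

\begin{verbatim}
# Sketch of the two-step scheme on a sampled kernel: (1) a pseudo-skeleton
# (cross) approximation built by Gaussian elimination with complete pivoting,
# (2) recompression of the low-rank factors by a small SVD.
import numpy as np

def gecp_cross(A, tol=1e-12, max_rank=100):
    """Greedy rank-1 updates with complete pivoting: A ~ C @ R."""
    E = A.copy()
    cols, rows = [], []
    for _ in range(max_rank):
        i, j = np.unravel_index(np.argmax(np.abs(E)), E.shape)
        piv = E[i, j]
        if abs(piv) < tol * np.abs(A).max():
            break
        cols.append(E[:, j] / piv)
        rows.append(E[i, :].copy())
        E -= np.outer(cols[-1], rows[-1])
    return np.column_stack(cols), np.vstack(rows)

def recompress(C, R):
    """Orthogonalize the skeleton factors and take a small SVD."""
    Qc, Rc = np.linalg.qr(C)
    Qr, Rr = np.linalg.qr(R.T)
    U, S, Vt = np.linalg.svd(Rc @ Rr.T)
    return Qc @ U, S, Qr @ Vt.T      # C @ R = (Qc U) diag(S) (Qr Vt^T)^T

x = np.linspace(0, 1, 300)
A = np.exp(np.outer(x, x))           # samples of K(x, y) = exp(x*y)
C, R = gecp_cross(A)
U, S, V = recompress(C, R)
print(C.shape[1], np.linalg.norm(A - U @ np.diag(S) @ V.T))  # rank and error
\end{verbatim}

In the continuous analogue, the QR factorizations correspond to orthonormalizing the skeleton columns and rows in their respective \(L^2\) inner products, so the recompressed factors approximate the singular functions of the kernel.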
References
[1] Alex Townsend and Lloyd N. Trefethen. Continuous analogues of matrix factorizations. Proceedings of the Royal Society A, 471(2173):20140585, 2015.
[2] James Mercer. XVI. Functions of positive and negative type, and their connection with the theory of integral equations. Philosophical Transactions of the Royal Society of London. Series A, 209(441-458):415–446, 1909.
[3] Haiyong Wang. New error bounds for Legendre approximations of differentiable functions. Journal of Fourier Analysis and Applications, 29(4):42, 2023.
[4] Alex Townsend and Lloyd N. Trefethen. An extension of Chebfun to two dimensions. SIAM Journal on Scientific Computing, 35(6):C495–C518, 2013.