Block-adaptive Cross Approximation of Discrete Integral Operators

In this article we extend the adaptive cross approximation (ACA) method, known for the efficient approximation of discretisations of integral operators, to a block-adaptive version. While ACA is usually employed to assemble hierarchical matrix approximations having the same prescribed accuracy on all blocks of the partition, for the solution of linear systems it may be more efficient to adapt the accuracy of each block to the actual error of the solution, as some blocks may be more important for the solution error than others. To this end, error estimation techniques known from adaptive mesh refinement are applied to automatically improve the block-wise matrix approximation. This makes it possible to interlace the assembly of the coefficient matrix with the iterative solution.


Introduction
Methods for the solution of boundary value problems of elliptic partial differential equations typically involve non-local operators. Examples are fractional diffusion problems (see [1,14,15]) and boundary integral methods (see [25,26]), in which amenable boundary value problems are reformulated as integral equations over the boundary of the computational domain. The discretisation of such non-local operators usually leads to fully populated matrices. The corresponding systems of linear equations are quite expensive to solve numerically in the typical case of large numbers of unknowns. Additionally, assembling and storing the discrete operators may be quite costly. Even if the formulation of the boundary value problem is local and its discretisation (e.g. by the finite element method) leads to a sparse matrix, it may be useful to construct non-local approximations (e.g. for preconditioning) to its inverse or to triangular factorisations. In both cases, suitable techniques have to be employed if the boundary value problem is to be solved with linear or almost linear complexity.
There are several ways in which fully populated discretisations of elliptic operators can be treated efficiently. In addition to fast multipole methods [20] and mosaic decomposition methods [27], hierarchical matrices (H-matrices) [21,22] have been introduced for an asymptotically optimal numerical treatment.
These methods can be regarded as being based on a hierarchical partition of the discretised operator A ∈ R^{N×N} into suitable blocks combined with local low-rank approximations. Hence, fully populated matrices can be approximated by data-sparse representations which require only a logarithmic-linear amount of storage. Furthermore, these representations can be applied to vectors with logarithmic-linear complexity. In addition, H-matrices provide approximate substitutes for the usual matrix operations, including the LU factorisation, with logarithmic-linear complexity; see [8]. These properties strongly accelerate iterative solvers such as the conjugate gradient (CG) method or GMRES (see [24]) for the solution of the arising linear systems of equations.
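The storage and matrix-vector savings behind this data-sparse format can be illustrated with a single low-rank block. The following is a generic numerical sketch, not code from the article; all names are illustrative:

```python
import numpy as np

# A rank-r block B ≈ U @ V.T stores r*(m+n) numbers instead of m*n and
# is applied to a vector in O(r*(m+n)) operations instead of O(m*n).
rng = np.random.default_rng(0)
m, n, r = 1200, 900, 12
U = rng.standard_normal((m, r))
V = rng.standard_normal((n, r))
x = rng.standard_normal(n)

# Cheap matvec: apply V.T first, then U; the m-by-n block is never formed.
y = U @ (V.T @ x)

compression = r * (m + n) / (m * n)   # fraction of dense storage needed
print(compression)
```

Summing this saving over all blocks of a hierarchical partition yields the logarithmic-linear storage and matvec complexity mentioned above.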
A crucial step in the efficient treatment of fully populated matrices A is their assembly. While explicit kernel approximations were used in the early stages of fast methods for the treatment of integral operators, in recent years the adaptive cross approximation (ACA) (see [4,13]) has become quite popular. The latter method employs only a few of the original matrix entries of A for its data-sparse approximation by hierarchical matrices. Since one is usually interested in approximations Ã of A leading to a sufficiently accurate result when Ã or its inverse is applied to arbitrary vectors, it is common to approximate all blocks of the matrix partition uniformly with a prescribed accuracy. This universality of the approximation to the discrete operator A comes at a high price due to the generation of information that may be redundant for the solution of the linear system. When the quantity of interest is the error of the solution x of the linear system Ax = b, the approximation Ã obtained from usual ACA may not be ideal. In such a situation, certain parts of the discrete operator A may be more important than others; conversely, depending on the right-hand side b, some blocks may not be that important for the error of x.
In this article we propose to construct the hierarchical matrix approximation in a block-adaptive way, i.e., in addition to the adaptivity of ACA on each block we add another level of adaptivity to the construction of the hierarchical matrix. This variant of ACA will be referred to as BACA. In order to find an H-matrix approximation that is better suited for the solution error, we employ techniques known from adaptive methods together with suitable error estimators. The strategy of error estimation is well known in the context of numerical methods for partial differential or integral equations. Adaptive methods usually focus on mesh refinement. Here, these strategies will be used to successively improve block-wise low-rank approximations, whereas the underlying grid (and the hierarchical block structure) does not change. Although the ideas of this approach are presented in connection with ACA, they can also be applied to other low-rank approximation techniques.
In contrast to constructing the approximation Ã of A in the usual way and solving a single linear system Ãx = b, we generate a sequence of H-matrix approximations A_k of A and solve each linear system A_k x_k = b for x_k. At first glance this seems to be more costly than solving Ãx = b only once. However, the construction of the next approximation A_{k+1} can be steered by the approximate solution x_k, and x_{k+1} can be efficiently computed as an update of x_k. Furthermore, the accuracy of x_k in the iterative solver can be adapted to the error estimator. This makes it possible to interlace the assembly of the matrix with the iterative solution of the linear system. In addition to reducing the numerical effort for assembling and storing the matrix, this approach has the practical advantage that the block-wise accuracy of ACA is no longer a required parameter. Choosing this parameter in usual ACA approximations is not obvious, as the relation between the block-wise accuracy and the error of the solution usually depends on the respective problem. With the new approach presented in this article, the accuracy of the approximation is adapted automatically.
The article is organised as follows. First the model problem for which the new method will be explained is formulated. After that we give a brief overview of hierarchical matrices and ACA, as these methods are the foundations of our new method. Suitable error estimators are proposed and investigated with respect to efficiency and reliability in the fourth section. Based on these, a marking and refinement strategy is presented and analysed. Numerical examples at the end of the article show that the new method is more efficient than ACA with respect to both computational time and required storage in situations where the solution exhibits structural differences or where the mesh is locally over-refined.

Model Problem
As a model problem we consider the Laplace equation on a Lipschitz domain Ω ⊂ R^d (or its complement),

−Δu = 0 in Ω, u = g on ∂Ω, (1)

where the Dirichlet data g is a given function in the trace space H^{1/2}(∂Ω) of the Sobolev space H^1(Ω). This kind of problem (in particular the exterior boundary value problem with suitable conditions at infinity) can be treated via a boundary integral formulation using the fundamental solution of the Laplace operator, which for d = 3 reads S(x, y) = 1/(4π|x − y|). The Neumann data ψ ∈ H^{−1/2}(∂Ω) is the solution of the boundary integral equation

Vψ = f, (2)

where V denotes the single-layer potential operator and the right-hand side f is determined by the Dirichlet data g. If we extend the L²-scalar product to a duality pairing between H^{−1/2}(∂Ω) and H^{1/2}(∂Ω), the boundary integral equation (2) can be stated in variational form: find ψ ∈ H^{−1/2}(∂Ω) such that

(Vψ, φ)_{L²} = (f, φ)_{L²} for all φ ∈ H^{−1/2}(∂Ω).

Due to the coercivity of the bilinear form (V·, ·)_{L²}, the Riesz-Fischer theorem yields the existence of the unique solution ψ ∈ H^{−1/2}(∂Ω).
In order to solve (2) or its variational form numerically, Galerkin discretisations are often used. In the case d = 3, the boundary ∂Ω is decomposed into a regular partition T consisting of N triangles. Let {ψ_1, …, ψ_N} denote a basis of the space of piecewise constant functions P^0(T). Then for ψ_h = Σ_{j=1}^N x_j ψ_j, x_j ∈ R, the discretisation of the variational problem reads

(Vψ_h, ψ_i)_{L²} = (f, ψ_i)_{L²}, i = 1, …, N,

which due to the Riesz-Fischer theorem is uniquely solvable, too. The above discretisation results in the system of linear equations

Ax = b, a_{ij} = (Vψ_j, ψ_i)_{L²}, b_i = (f, ψ_i)_{L²}. (3)

Since the matrix A is fully populated, our goal is to reduce the required storage for A and the numerical effort for its construction to logarithmic-linear complexity. To this end, hierarchical matrices will be employed; see Sect. 3. The efficiency of hierarchical matrices is based on a blockwise low-rank approximation, which is often constructed via adaptive cross approximation; see Sect. 3.1. While in hierarchical matrix approximations all blocks are commonly approximated with the same prescribed accuracy, in this article we propose to construct the hierarchical matrix approximation in a block-adaptive way, i.e., in addition to the adaptivity of ACA on each block we add another level of adaptivity to the construction of the hierarchical matrix.
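To make the nature of the matrix entries concrete, the following sketch evaluates a far-field Galerkin entry of the single-layer operator with a crude one-point (centroid) quadrature. The function names and the quadrature rule are illustrative choices of ours; production BEM codes use higher-order quadrature and dedicated singular quadrature for (nearly) touching panels:

```python
import numpy as np

def tri_area(t):
    """Area of a triangle given as a 3x3 array of vertex coordinates."""
    return 0.5 * np.linalg.norm(np.cross(t[1] - t[0], t[2] - t[0]))

def singlelayer_entry(tri_i, tri_j):
    """One-point quadrature for the Galerkin entry with piecewise
    constant basis functions and the Laplace kernel 1/(4*pi*|x - y|):
        a_ij ≈ |T_i| |T_j| / (4*pi*|c_i - c_j|),
    where c_i, c_j are the triangle centroids.  Valid only for
    well-separated panels; it also shows why A is fully populated:
    the kernel never vanishes, so no entry is exactly zero."""
    ci, cj = tri_i.mean(axis=0), tri_j.mean(axis=0)
    return tri_area(tri_i) * tri_area(tri_j) / (4.0 * np.pi * np.linalg.norm(ci - cj))
```

Because every pair of panels interacts through the kernel, all N² entries of A are nonzero, which motivates the compression techniques of the next section.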
Although we consider the model problem (1), the method presented in this article can be applied in more general situations as long as the matrix A results from finite element discretisation of an integral representation of the problem. Examples are the Helmholtz equation, Lamé equations, the Lippmann-Schwinger equation, fractional diffusion problems, the Gauss transform and many others.

Hierarchical Matrices and the Adaptive Cross Approximation
This section gives a short overview of the hierarchical matrix (H-matrix) structure and the ACA method, which we are going to extend. H-matrices were introduced by Hackbusch [21] and Hackbusch and Khoromskij [22]. Their efficiency is based on a suitable hierarchical partitioning of the matrix into sub-blocks and the blockwise approximation by low-rank matrices.
We consider matrices A ∈ R^{N×N} whose entries result from the discretisation of a non-local operator such as the boundary integral formulation of the boundary value problem described in Sect. 2. Although A is fully populated, it can often be approximated by low-rank matrices on suitable blocks t × s consisting of row indices t and column indices s, i.e. A_ts ≈ UV^T with U ∈ R^{t×r}, V ∈ R^{s×r},

Figure 1: Two clusters X_t and X_s corresponding to an admissible block t × s.
where the rank r is small compared with |t| and |s|. If A discretises an integral representation or the inverse of a second-order elliptic partial differential operator, then the condition

min{diam X_t, diam X_s} ≤ η dist(X_t, X_s) (4)

on the block t × s turns out to be appropriate; see [5]. Here, X_t denotes the union of the supports of the basis functions with indices in t, and

diam X := max_{x,y∈X} |x − y|, dist(X, Y) := min_{x∈X, y∈Y} |x − y|

denote the diameter and the distance of two bounded sets X, Y ⊂ R^d. All blocks t × s satisfying this condition are called admissible; see Fig. 1. A partition P of the matrix indices I × I is usually constructed using cluster trees, i.e. trees T_I satisfying the conditions that (1) I is the root of the tree and (2) each cluster t ∈ T_I that is not a leaf is the disjoint union of its sons. The ℓ-th level of T_I will be referred to as T_I^{(ℓ)}, ℓ = 0, …, L, where L is the depth of T_I. The set of leaves of the tree T_I is denoted by L(T_I). Given a cluster tree T_I for I, a block cluster tree T_{I×I} for I × I is constructed by recursive subdivision, descending the cluster tree T_I for the rows and for the columns concurrently until (4) is satisfied or one of the clusters cannot be subdivided any further. Hence, a matrix partition consisting of blocks either satisfying (4) or being small can be found as the leaves P := L(T_{I×I}) of T_{I×I}. P is the union of the sets of admissible and non-admissible blocks P_adm and P_non-adm, respectively. The sparsity constant c_sp is defined as

c_sp := max{ max_{t∈T_I} c_sp^r(t), max_{s∈T_I} c_sp^c(s) },

where c_sp^r(t) := |{s ⊂ I : t × s ∈ P}| denotes the maximum number of blocks t × s contained in P for a given cluster t ∈ T_I and c_sp^c(s) := |{t ⊂ I : t × s ∈ P}| the maximum number of blocks t × s ∈ P for a cluster s ∈ T_I. For more details on cluster trees (in particular for their construction) the reader is referred to [8].
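The admissibility condition and the cluster quantities it involves are easy to state in code. The following sketch checks a condition of the form min{diam X_t, diam X_s} ≤ η dist(X_t, X_s) on finite point sets; the admissibility parameter η and the function names are our assumptions, not taken from the article:

```python
import numpy as np

def diam(X):
    """Diameter of a finite point set (rows of X are points in R^d)."""
    return np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1).max()

def dist(X, Y):
    """Distance between two finite point sets."""
    return np.linalg.norm(X[:, None, :] - Y[None, :, :], axis=-1).min()

def admissible(Xt, Xs, eta=1.0):
    """Check min{diam Xt, diam Xs} <= eta * dist(Xt, Xs); eta is the
    admissibility parameter (the exact constant in condition (4) is an
    assumption here)."""
    return min(diam(Xt), diam(Xs)) <= eta * dist(Xt, Xs)
```

Blocks failing this test are subdivided further during the construction of the block cluster tree, while admissible blocks become low-rank candidates.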
Given a matrix partition P, we define the set of H-matrices with blockwise rank at most r by

H(P, r) := {M ∈ R^{N×N} : rank M_b ≤ r for all b ∈ P_adm}.

Due to the structure of hierarchical matrices and due to the fact that each block t × s has storage complexity r(|t| + |s|), the complexity of the whole H-matrix can be shown to be of the order rN log N; see [8].

Matrix construction
There are several ways in which low-rank approximations of a matrix block can be constructed. For matrices A resulting from the discretisation of non-local operators, the adaptive cross approximation (ACA) method introduced in [4, 13, 10] constructs the low-rank approximation using only a few of the original entries of A. Non-admissible blocks cannot be approximated, but they are small, so they are computed entry by entry. We focus on a single admissible block A_ts ∈ R^{t×s} of A. With R_0 := A_ts we define two sequences of vectors u_r ∈ R^t and v_r ∈ R^s for r = 1, 2, 3, … by the following algorithm (Algorithm 1): in step r, a nonzero row i_r of the remainder R_{r−1} is scaled to obtain v_r, the corresponding column of R_{r−1} yields u_r, and the remainder is updated to R_r := R_{r−1} − u_r v_r^T; the iteration stops once a prescribed relative accuracy ε_ACA is reached.
The matrix S_r := Σ_{k=1}^r u_k v_k^T has rank at most r and can be shown (under reasonable assumptions) to be an approximation of A_ts whose remainder R_r := A_ts − S_r has relative accuracy

‖R_r‖_F ≤ ε ‖A_ts‖_F, (6)

where ‖·‖_F denotes the Frobenius norm. The set Z collects all vanishing rows of the matrix R_r. The vectors u_r and ṽ_r are columns and rows of R_{r−1}, i.e. u_r is the j_r-th column and ṽ_r the i_r-th row of R_{r−1}.
The row indices i_r have to be chosen such that the Vandermonde matrix corresponding to the system in which the approximation error is to be estimated is non-singular; for details see [8]. If the i_r-th row of R_{r−1} is nonzero and hence is used as v_r after scaling ṽ_r with 1/(ṽ_r)_{j_r}, the index i_r is also added to Z, since the i_r-th row of the following remainder R_r will vanish. It can be shown that the numerical effort of ACA is of the order |Z|²(|t| + |s|).
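A compact sketch of cross approximation with partial pivoting may help fix ideas. The pivoting and stopping heuristics below are common choices and not necessarily identical to the paper's Algorithm 1; rows and columns of the block are evaluated on demand, so only a few original entries are ever generated:

```python
import numpy as np

def aca(get_row, get_col, m, n, eps=1e-6, max_rank=64):
    """Sketch of ACA with partial pivoting.  get_row(i)/get_col(j)
    evaluate single rows/columns of the block A_ts on demand."""
    us, vs = [], []
    Z = set()            # rows already used (vanishing rows of R_r)
    i = 0
    frob2 = 0.0          # running estimate of ||S_r||_F^2
    for _ in range(min(max_rank, m, n)):
        # residual row i of R_{r-1}: original row minus the rank-r part
        row = get_row(i) - sum(u[i] * v for u, v in zip(us, vs))
        Z.add(i)
        j = int(np.argmax(np.abs(row)))
        if abs(row[j]) < 1e-14:                # vanishing row: try another
            rest = [k for k in range(m) if k not in Z]
            if not rest:
                break
            i = rest[0]
            continue
        v = row / row[j]                       # scaled so that (v)_j = 1
        u = get_col(j) - sum(u_ * v_[j] for u_, v_ in zip(us, vs))
        us.append(u); vs.append(v)
        nu, nv = np.linalg.norm(u), np.linalg.norm(v)
        frob2 += (nu * nv) ** 2
        if nu * nv <= eps * np.sqrt(frob2):    # heuristic relative tolerance
            break
        # next pivot row: largest entry of u outside already-used rows
        mask = np.ones(m, bool); mask[list(Z)] = False
        if not mask.any():
            break
        i = int(np.flatnonzero(mask)[np.argmax(np.abs(u[mask]))])
    U = np.column_stack(us) if us else np.zeros((m, 0))
    V = np.column_stack(vs) if vs else np.zeros((n, 0))
    return U, V          # A_ts ≈ U @ V.T with rank len(us)
```

For an asymptotically smooth kernel evaluated on well-separated point sets, the rank needed for a given tolerance grows only logarithmically, which is the situation on admissible blocks.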

Error Estimators and the Block-Adaptive Cross Approximation (BACA)
The usual way of treating discrete problems (3) is to construct a hierarchical matrix approximation Ã (e.g. via ACA) such that each block Ã_b, b ∈ P, satisfies (cf. (6))

‖A_b − Ã_b‖_F ≤ ε ‖A_b‖_F. (7)

Due to the independence of the blocks, this can be done in parallel; see [11]. While (7) guarantees local properties of the H-matrix approximation, the global impact of this condition is difficult to estimate. Notice that (7) still implies the obvious global property ‖A − Ã‖_F ≤ ε ‖A‖_F; however, for instance the eigenpairs of A and Ã are not as accurate in general, and one has to apply suitable techniques (see [9]) in order to be able to guarantee spectral equivalence of A and Ã. In other words, the low-rank approximation is usually constructed regardless of the importance of the respective block for global properties of the matrix.
The approximation Ã obtained from usual ACA may not be the ideal approximation to A when the norm of the error of the associated solution is the measure. In order to find an H-matrix approximation that is better suited for this purpose, we employ techniques known from adaptive methods together with suitable error estimators. The strategy of error estimation is well known in the context of numerical methods for partial differential or integral equations. There are several types of a posteriori error estimators. The method of this article is inspired by the (h − h/2)-version investigated in [17]. Adaptive methods usually focus on mesh refinement. Here, these strategies will be used to successively improve blockwise low-rank approximations, whereas the underlying grid (and the hierarchical block structure P) does not change.
In contrast to constructing the matrix approximation Ã of A in the usual way (via ACA) and solving a single linear system Ãx = b, we build a sequence of H-matrix approximations A_k of A and solve each linear system A_k x_k = b for x_k. At first glance this seems to be more costly than solving Ãx = b only once. However, the construction of A_{k+1} can be steered by the approximate solution x_k, and x_{k+1} can be computed as an update of x_k. Furthermore, the accuracy of x_k can be low for small k, and we can adapt it to the estimated error.
The following method will be based on residual error estimators. Notice that if A : V → V denotes the operator that is discretised by A and u_h refers to the Galerkin approximation of the exact solution u of Au = f, then for the residual error ‖f − Au_h‖_V we have the two-sided estimate

‖A‖^{−1} ‖f − Au_h‖_V ≤ ‖u − u_h‖_V ≤ ‖A^{−1}‖ ‖f − Au_h‖_V.

Hence, the solution error ‖u − u_h‖_V is equivalent to the residual error ‖f − Au_h‖_V, and in the rest of this section we may focus on the residual error.
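The equivalence of solution and residual error asserted here is elementary; a short derivation, assuming the operator (written as \(\mathcal{A}\) to distinguish it from the matrix \(A\)) is bounded and boundedly invertible on \(V\):

```latex
\|f - \mathcal{A}u_h\|_V
  = \|\mathcal{A}(u - u_h)\|_V
  \le \|\mathcal{A}\|\,\|u - u_h\|_V,
\qquad
\|u - u_h\|_V
  = \|\mathcal{A}^{-1}(f - \mathcal{A}u_h)\|_V
  \le \|\mathcal{A}^{-1}\|\,\|f - \mathcal{A}u_h\|_V,
```

so that \(\|\mathcal{A}\|^{-1}\,\|f - \mathcal{A}u_h\|_V \le \|u - u_h\|_V \le \|\mathcal{A}^{-1}\|\,\|f - \mathcal{A}u_h\|_V\): controlling the residual controls the solution error up to the condition of the operator.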
In the following algorithm, Â_k denotes a better approximation of A than A_k, i.e. we assume that the saturation condition (8) holds with 0 < c_sat < 1. A natural choice for Â_k is the improved approximation that results from A_k by adding a fixed number of additional ACA steps on each admissible block and by setting (Â_k)_b = A_b for all other blocks b ∈ P_non-adm. We define the matrix W_k := Â_k − A_k.

Algorithm 2 Block-adaptive ACA
1. Start with a coarse H-matrix approximation A_0 of A and set k = 0.
2. Solve the linear system A_k x_k = b for x_k with residual error δ_k (use x_{k−1} as a starting vector of an iterative solver; x_{−1} := 0).
3. Compute the error estimator η_k from the blockwise contributions of W_k x_k (cf. (10)).
4. Let M_k ⊆ P be a set of blocks marked by the Dörfler strategy (9) and improve the approximation on the blocks in M_k to obtain A_{k+1} and Â_{k+1}.
5. If η_k > ε_BACA, increment k and go to 2.
Notice that Algorithm 1 contains a stopping parameter ε_ACA, which describes the desired accuracy of the low-rank approximation for the respective block. While it is difficult to choose ε_ACA so that the resulting solution of the linear system satisfies a prescribed accuracy, the parameter ε_BACA in Algorithm 2 gives an upper bound on the residual error ‖b − Ax_k‖_2 of x_k, as we shall see in Lemma 1 below.
It seems that two H-matrices A_k and Â_k are required for the previous algorithm. Since they are strongly related, it is actually sufficient to store only Â_k. Notice that the ACA steps required for the construction of A_{k+1} can be adopted from Â_k, and Â_{k+1} can be computed from Â_k by improving the blocks contained in M_k.
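The control flow of Algorithm 2 can be modelled schematically. In the dense-matrix sketch below, truncated SVDs play the role of blockwise ACA, Â_k is kept a fixed number of ranks ahead of A_k, and W_k := Â_k − A_k drives the indicators and a Dörfler-type marking. The estimator form and all names are our assumptions modelled on the text, not the paper's implementation:

```python
import numpy as np

def baca(A, adm_blocks, b, eps_baca=1e-8, theta=0.5, step=2, kmax=50):
    """Schematic dense-matrix model of Algorithm 2 (BACA).  Admissible
    blocks (rows, cols) are approximated by truncated SVDs whose rank
    plays the role of the number of ACA steps; non-admissible parts are
    kept exact.  Returns the last iterate and the iteration count."""
    rank = {blk: 1 for blk in adm_blocks}

    def approx(extra):
        Ak = A.copy()                           # non-admissible part exact
        for (rs, cs), r in rank.items():
            U, s, Vt = np.linalg.svd(A[np.ix_(rs, cs)], full_matrices=False)
            q = min(r + extra, len(s))
            Ak[np.ix_(rs, cs)] = (U[:, :q] * s[:q]) @ Vt[:q]
        return Ak

    x = np.zeros_like(b)
    for k in range(kmax):
        Ak, Ahat = approx(0), approx(step)      # Ahat is `step` ranks ahead
        x = np.linalg.solve(Ak, b)              # stands in for warm-started CG
        W = Ahat - Ak                           # assumed W_k := Ahat_k - A_k
        ind = {blk: np.linalg.norm(W[np.ix_(blk[0], blk[1])] @ x[list(blk[1])]) ** 2
               for blk in adm_blocks}
        eta = np.sqrt(sum(ind.values()))        # assumed form of eta_k
        if eta <= eps_baca:
            return x, k
        total, target = 0.0, theta ** 2 * eta ** 2   # Doerfler-type marking
        for blk in sorted(ind, key=ind.get, reverse=True):
            rank[blk] += step                   # improve the marked block
            total += ind[blk]
            if total >= target:
                break
    return x, kmax
```

Only the blocks whose indicators contribute the dominant share of the estimator are improved, so the ranks grow exactly where the current solution needs them.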
In addition to the locality of the error estimator introduced in Algorithm 2, its reliability is a main issue for a successful adaptive procedure. In particular, its dependence on the residual error δ_k of Algorithm 2 has to be investigated.

Lemma 1. Let assumption (8) be valid and let
Proof. We make use of the decomposition of A ∈ H(P, k) into a sum of level matrices A^{(ℓ)}, which contain the blocks b ∈ P of A belonging to the ℓ-th level of the block cluster tree T_{I×I}. Notice that A^{(ℓ)} is a Cartesian product block matrix. Due to

(Σ_{i=1}^n a_i)² ≤ n Σ_{i=1}^n a_i² for all a_i ∈ R, i = 1, …, n,

we have

The lemma follows from
Although the efficiency of the error estimator η_k, i.e. η_k ≲ ‖b − Ax_k‖_2, can be observed in numerical experiments, a proof of it is still an open question. Nevertheless, the computable expression ‖W_k x_k‖_2 can be used as a lower bound for ‖b − Ax_k‖_2.

Lemma 2. Let (8) be valid and let
Proof. The saturation assumption (8) yields the assertion.

In the following lemma the convergence of the error estimator η_k is investigated. Its convergence is a consequence of the Dörfler marking strategy (9). To prove it, we use the obvious property (11).

Lemma 3. Assume that the sequence {‖A_k^{−1}‖_2}_{k∈N_0} is bounded and that δ_k → 0 for k → ∞. Then it holds that

η²_{k+1} ≤ q η²_k + z_k, (12)

where z_k converges to zero and q := 1 − ½θ² < 1. Furthermore, lim_{k→∞} η_k = 0.
Proof. Using the definition of A_{k+1}, we expand η²_{k+1}. Due to Young's inequality and (9), the second term can be estimated for all ε > 0. The choice ε := ½θ²/(1 − θ²) leads to the first part of the assertion due to the boundedness of ‖A_k^{−1}‖_2, the convergence of δ_k to 0, and (11).
For the proof of the second part we pursue the ideas of the estimator reduction principle, which was introduced in [3]. Let z > 0 be a number with z_k ≤ z for all k. With the estimator reduction (12) it follows that the sequence {η_k}_{k∈N_0} is bounded, and we define M := lim sup_{k→∞} η²_k. Using the estimator reduction (12) once more together with z_k → 0, we obtain M ≤ qM, hence M = 0, which proves lim_{k→∞} η_k = 0.
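For completeness, the Young-inequality step behind the contraction factor can be spelled out. Assuming, as is standard for Dörfler marking, that (9) leaves an unmarked contribution of at most \((1-\theta^2)\,\eta_k^2\), the weighted Young inequality

```latex
(a+b)^2 \;\le\; (1+\varepsilon)\,a^2 \;+\; \Bigl(1+\tfrac{1}{\varepsilon}\Bigr)\,b^2
\qquad \text{for all } a,b \ge 0,\ \varepsilon > 0,
```

together with the choice \(\varepsilon = \tfrac{\theta^2}{2(1-\theta^2)}\) made in the proof gives

```latex
(1+\varepsilon)(1-\theta^2) \;=\; 1-\theta^2+\tfrac{\theta^2}{2} \;=\; 1-\tfrac{\theta^2}{2} \;=\; q \;<\; 1,
```

so the perturbation terms are absorbed into \(z_k\) and the stated recursion \(\eta_{k+1}^2 \le q\,\eta_k^2 + z_k\) follows.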

Numerical Experiments
The numerical experiments are subdivided into three parts. First we investigate the storage requirements of BACA. After that the quality of the proposed error estimator is discussed. Finally, the third part treats the acceleration of the matrix approximation and of the solution of the boundary value problem in comparison with ACA. In order to be able to compare these results with the results obtained from ACA, we prescribe the same relative error of the solution u_h. The approximation steps are executed without any parallelisation in both cases. We use the conjugate gradient (CG) method as a solver for the arising linear systems of equations, without preconditioning. In the case of BACA, the accuracy of the iterative solver is adapted to the size of ‖W_k x_k‖_2; see Lemma 1. For all experiments based on ACA, the accuracy of CG is set to 10^{−8}.
As numerical examples we consider a family of boundary value problems, in which the singularity of the right-hand side is moved towards the boundary of the computational domain Ω; see (13), where p_i denotes the location of the singularity.

Storage Reduction
We start with a uniform decomposition of the unit ball into 642 points and 1280 triangles for the boundary value problems (13). Note that ACA does not take the right-hand side into account; hence the approximation does not depend on the choice of p_i. So we obtain the left picture of Fig. 2 after the application of ACA to the discrete integral operator in each of the cases i = 1, …, 4.
The proposed algorithm BACA provides the results shown in Tab. 1, in which the parameter θ = 0.9 was used. Here, the ranks of Â_k are ahead of those of A_k by two ACA steps. The parameter α from Lemma 1 is set to 100. Compared with the approximation obtained from ACA, we obtain lower storage requirements for the matrix approximation Â_k.
The storage reduction and the improved compression rate observed in Tab. 1 are also visible in Fig. 2, where the respective approximation Â_k of A is shown together with its blockwise ranks (green blocks). Therein, red blocks are constructed entry-wise without approximation.
Example (13) has been chosen in order to investigate different right-hand sides which lead to structural differences in the respective solution. Notice that the influence of the singularity on the solution is the stronger the smaller the distance of p_i to the boundary of Ω becomes. Hence, for p_i close to the boundary, certain parts of the discrete integral operator are more important for the accuracy of the solution than others. Even for a relatively large distance, i.e. p_1 = (10, 0, 0)^T, smaller ranks and hence a better compression rate than for ACA can be observed; see Fig. 2. This effect is even more pronounced for singularities closer to the boundary. The results indicate that the proposed error estimator detects those matrix blocks which are important for the respective problem; see Fig. 3. Therefore, BACA is able to construct an H-matrix approximation that is particularly suited to solving a linear system for a specific right-hand side. The improved storage requirements can also be observed for finer triangulations of the boundary. Tab. 3 shows the results for the boundary value problem (13) in the case i = 3 for several numbers of degrees of freedom N. In addition to the compression rate and the storage requirements, the number of computed entries for the construction of Â_k is presented. Since the solution u_h is expected to be more accurate the larger N is, the accuracy ε_BACA of the error estimator has to be adapted to N. All other parameters remain unchanged. In comparison, the results of ACA applied to the boundary value problem (13) in the case i = 3 can be seen in Tab. 4.
Hence, BACA requires significantly fewer original matrix entries than the usual construction of the H-matrix approximation via ACA to obtain approximately the same relative error of u_h.

Quality of the error estimator
In the following tests we validate the error estimator η_k introduced in (10). At first the reliability statement of Lemma 1 together with the lower bound ‖W_k x_k‖_2 of Lemma 2 is investigated. Again, the boundary element method combined with BACA is tested with the problem described in (13) and a triangulation with 14338 points and 28672 triangles. The accuracy ε_BACA of the error estimator is set to 10^{−7}. We start BACA with a coarse approximation A_0 which has been generated by applying four ACA steps to each block. Tab. 5 and Fig. 4 contain the results for α = 1/2 and three refinement parameters θ. The ranks of Â_k are ahead of those of A_k by three ACA steps. The proposed error estimator η_k estimates the residual error ‖b − Ax_k‖_2 in an appropriate way, and the expression ½‖W_k x_k‖_2 can serve as a lower bound for the residual error. Tab. 5 shows that the convergence with respect to k is faster the closer the refinement parameter θ is to 1. A larger parameter θ in (9) leads to a larger set M_k of marked blocks, which is likely to produce redundant information in A_{k+1}. Conversely, small refinement parameters θ usually lead to matrix approximations with lower storage requirements. The numerical results confirm the theoretical findings of Lemma 1 and Lemma 2: the proposed error estimator η_k is reliable, and ½‖W_k x_k‖_2 can be used as a lower bound on the residual error.

Acceleration of the matrix approximation
The preceding results cover only part of our aims for the extension of ACA. As already mentioned in the introduction, we are also interested in an acceleration of the matrix approximation and of the solution process. Since the computation of matrix entries is by far the most time-consuming part of the boundary element method and we have seen that BACA requires fewer entries, we can expect improved computational times. We consider again the family of boundary value problems (13). The ranks of Â_k are ahead of those of A_k by two ACA steps, and we choose α = 100. The ratio of the time required for BACA and the time required for ACA can be observed in Figs. 5 and 6 for two different triangulations. The two figures show that we obtain a lower time consumption even in the case of p_1, where the structural differences in the right-hand side are small. If the singularity is moved closer to the boundary of Ω, the speed-up of BACA increases continuously.
One of the biggest advantages of ACA is its logarithmic-linear complexity. In order to observe this behaviour, the sphere discretised with 642 points and 1280 triangles is considered. We keep the resulting geometry and increase the number of triangles N for the boundary value problem with p_1. The time columns of Tab. 6 show the expected complexity of ACA and also of the new BACA: the growth factor of N is four, and the growth factor of the computational time is between five and six.

Table 6: Numerical results of BACA with p_1 and ε_BACA = 5 · 10^{−8}.

Effects on partially overrefined surfaces
In the following tests we apply ACA and BACA to triangulations which are partially over-refined. The boundary of the ellipsoid Ω = {x ∈ R³ : x₁² + x₂² + x₃²/9 = 1} is refined more strongly in the mid-region than on the remaining geometry; see Fig. 7.
The numerical results of BACA for the ellipsoid (p_2, ε_BACA = 10^{−6}) with a growing number of triangles are contained in Tab. 7. The accuracy of the block approximation for ACA is ε_ACA = 10^{−6}. The minimal block size, which we increase with the growing size of the problem, is also given in Tab. 7. In order to compare the results, we keep the relative error at the same level.
Inspecting the time columns of Tabs. 7 and 8 reveals that BACA is significantly faster than ACA; for the largest problem it is more than twice as fast. The reason for the acceleration is that the solution does not benefit from the local over-refinement. While the solver based on ACA is not able to detect this, the combination of solution and error estimation used in BACA leads to a lower number of CG iterations and less computational time. For the largest problem, the number of CG iterations required for the solution based on ACA is more than twice as large as for the solution based on BACA. We obtain no significant difference in the storage requirements after the application of the two algorithms. Hence, the observed speed-up results from the coupling of the iterative solver and the error estimator rather than from a reduction of the storage requirements. In addition, a decreasing number of required iterations k can be observed with a growing number of triangles. The ratio of the time required for ACA and for BACA can be seen in Fig. 8. The curves stabilise at a certain constant, which seems to depend on the geometry and on the number and location of the refined areas.