Stable implementation of adaptive IGABEM in 2D in MATLAB

We report on our MATLAB program package IGABEM2D, which provides an easily accessible implementation of adaptive Galerkin boundary element methods in the frame of isogeometric analysis.

1. Introduction

1.1. Isogeometric analysis. The central idea of isogeometric analysis (IGA) is to use the same ansatz functions for the discretization of a partial differential equation (PDE) as are used for the representation of the corresponding problem geometry in computer-aided design (CAD), namely (rational) splines. This concept, originally introduced in [HCB05] for finite element methods (IGAFEM), has proved very fruitful in applications; see also the monograph [CHB09].
1.2. Isogeometric boundary element method. The idea of the boundary element method (BEM) is to reformulate the considered PDE on a domain Ω as an equivalent integral equation on the boundary ∂Ω. The solution of the latter is the missing Cauchy data, i.e., the Neumann or the Dirichlet data, which can be plugged into the so-called representation formula to obtain the PDE solution itself. Since CAD usually provides only a parametrization of ∂Ω, this makes BEM the most attractive numerical scheme, if applicable (i.e., provided that the fundamental solution of the differential operator is explicitly known); see [PGK+09, PGKF13] for the first works on isogeometric BEM (IGABEM) for 2D and 3D, respectively. Compared to standard BEM, which uses discontinuous or continuous piecewise polynomials as ansatz functions, IGABEM typically saves degrees of freedom by using smooth splines instead.
1.3. Adaptive IGABEM. On the one hand, IGA naturally leads to high-order ansatz functions. On the other hand, optimal convergence behavior of higher-order discretizations is only observed in simulations if the (given) data as well as the (unknown) solution are smooth. Therefore, a posteriori error estimation and related adaptive strategies are mandatory to exploit the full potential of IGA. Rate-optimal adaptive strategies for IGAFEM (using hierarchical splines) have been proposed and analyzed independently in [BG17, GHP17], while the earlier work [BG16] proves only linear convergence. Meanwhile, these results have been extended to T-splines [GP20b].
As far as IGABEM is concerned, available results focus on weakly-singular integral equations with energy space H^{-1/2}(Γ); see [FGP15, FGHP16] for a posteriori error estimation as well as [FGHP17] for the analysis of a rate-optimal adaptive IGABEM in 2D, and [Gan17, GP20a, GP21] and [BGG+21] for corresponding results for IGABEM in 3D with hierarchical splines and T-splines, respectively. Recently, [FGPS19] investigated optimal preconditioning for IGABEM in 2D with locally refined meshes. Moreover, [GPS20] proved rate-optimality of an adaptive IGABEM in 2D for hyper-singular integral equations with energy space H^{1/2}(Γ). The recent work [BGG+21] provides an overview of adaptive IGAFEM as well as IGABEM.
1.4. Model problems. The present paper reports on our Matlab implementation of adaptive IGABEM in 2D, which is available online [GPS22]. On a bounded Lipschitz domain Ω ⊂ R² with diam(Ω) < 1, we consider the Laplace model problem −∆P = 0 with Dirichlet or Neumann boundary conditions. Rewriting this PDE as a boundary integral equation involves (up to) four integral operators, namely the weakly-singular operator V, the hyper-singular operator W, the double-layer operator K, and the adjoint double-layer operator K′. Let Γ ⊆ ∂Ω be a connected part of the boundary. Given boundary densities φ, u : Γ → R and x ∈ Γ, the operators are formally defined by

  [Vφ](x) := −1/(2π) ∫_Γ log|x − y| φ(y) dy,    (1)
  [Wu](x) := 1/(2π) ∂_{ν(x)} ∫_Γ ∂_{ν(y)} log|x − y| u(y) dy,    (2)
  [Ku](x) := −1/(2π) ∫_Γ ∂_{ν(y)} log|x − y| u(y) dy,    (3)
  [K′φ](x) := −1/(2π) ∫_Γ ∂_{ν(x)} log|x − y| φ(y) dy.    (4)

For our implementation, we consider the corresponding weakly-singular integral equation

  Vφ = f,    (5)

where f : Γ → R is given and the integral density φ : Γ → R is sought. Moreover, we consider the hyper-singular integral equation

  Wu = g,    (6)

where ν denotes the outer unit normal vector on ∂Ω, g : Γ → R is given, and the integral density u : Γ → R is sought. For Γ = ∂Ω, these integral equations are equivalent formulations of the 2D Laplace problem −∆P = 0 in Ω, where (5) with f = (1/2 + K)(P|_Γ) is equivalent to the Dirichlet problem and (6) with g = (1/2 − K′)(∂P/∂ν) is equivalent to the Neumann problem. Knowing both the Dirichlet and the Neumann boundary data allows one to compute P via the representation formula P = V(∂P/∂ν) − K(P|_Γ), where V and K are defined as in (1) and (3) for x ∈ Ω. While this approach is called direct, an alternative indirect approach is to make the ansatz P = Vφ or P = Ku and solve Vφ = f := P|_Γ or Wu = g := ∂P/∂ν, respectively. In all cases, the singularities of the involved integral kernels pose a significant challenge, which is overcome in the presented stable IGABEM implementation.
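All four operators stem from the fundamental solution G(x, y) = −log|x − y|/(2π) of the 2D Laplacian. As a quick stand-alone sanity check (not part of IGABEM2D; all names below are ours), one can verify numerically that x ↦ G(x, y₀) is harmonic away from the source point y₀:

```python
import math

def G(x, y):
    # Fundamental solution of the 2D Laplacian: G(x, y) = -log|x - y| / (2*pi).
    # It is the kernel from which the layer operators V, K, K', W are built.
    return -math.log(math.hypot(x[0] - y[0], x[1] - y[1])) / (2.0 * math.pi)

def discrete_laplacian(f, x, h=1e-3):
    # Standard five-point finite-difference approximation of the Laplacian at x.
    return (f((x[0] + h, x[1])) + f((x[0] - h, x[1]))
            + f((x[0], x[1] + h)) + f((x[0], x[1] - h)) - 4.0 * f(x)) / h ** 2

y0 = (0.0, 0.0)                                  # source point
lap = discrete_laplacian(lambda x: G(x, y0), (0.3, 0.2))
```

Away from y₀ the discrete Laplacian vanishes up to the O(h²) consistency error; the singularity at x = y₀ is exactly what the quadrature techniques of Section 5 have to handle.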
1.5. Matlab library IGABEM2D and contributions. The library IGABEM2D is available online [GPS22]. It is distributed as a single file igabem2d.zip. To install the library, download and unpack the zip archive, start Matlab, and change to the root folder igabem2d/. You can directly run the main files IGABEMWeak.m and IGABEMHyp.m, where the adaptive Algorithms 3.2 and 4.1 for the problems of Section 1.4 are implemented. There, you may choose from a variety of different parameters. These functions both automatically run mexIGABEM.m, which adds all required paths, compiles the C files of source/, and stores the resulting MEX-files in MEX-files/. The Matlab functions provided by IGABEM2D are contained in the folder functions/. Further details are also provided by README.txt, which is found in the root folder igabem2d/.
1.6. Outline. Section 2 recalls the definition of B-splines and NURBS and discusses necessary assumptions on parametrized NURBS geometries. Section 3 recalls the weakly-singular integral equation along with the required Sobolev spaces on the boundary and formulates an adaptive Galerkin IGABEM, which is applied in numerical experiments at the end of the section. Similarly, Section 4 covers the hyper-singular integral equation. Details on the implementation, i.e., the stable computation of the involved (singular) boundary integrals, are given in Section 5. Finally, Appendix A provides an overview of all functions in IGABEM2D.
1.7. General notation. Throughout and without any ambiguity, |·| denotes the absolute value of scalars, the Euclidean norm of vectors in R², the measure of a set in R (e.g., the length of an interval), or the arclength of a curve in R². Throughout, mesh-related quantities have the same index, e.g., N_ℓ is the set of nodes of the partition Q_ℓ, and h_ℓ is the corresponding local mesh-width function, etc. The analogous notation is used for partitions Q_+, Q_⋆, etc. We sometimes use the hat symbol ^ to transform notation on the boundary to the parameter domain or vice versa, e.g., Q̂_ℓ is the partition of the parameter domain related to the partition Q_ℓ of the boundary.

2. Isogeometric analysis on 1D boundaries
In this section, we recall the definition of B-splines and NURBS and discuss necessary assumptions on parametrized NURBS geometries. This provides the basis for the implemented IGABEM discretization discussed in Sections 3 and 4.

2.1. B-splines and NURBS in 1D.
We consider an arbitrary but fixed sequence of knots K_ℓ = (t_{ℓ,i})_{i∈Z} with t_{ℓ,i−1} ≤ t_{ℓ,i} for all i ∈ Z. For i ∈ Z, the i-th B-spline of degree p ∈ N_0 is defined by the Cox–de Boor recursion

  B_{ℓ,i,0} := χ_{[t_{ℓ,i−1}, t_{ℓ,i})},
  B_{ℓ,i,p}(t) := (t − t_{ℓ,i−1})/(t_{ℓ,i+p−1} − t_{ℓ,i−1}) B_{ℓ,i,p−1}(t) + (t_{ℓ,i+p} − t)/(t_{ℓ,i+p} − t_{ℓ,i}) B_{ℓ,i+1,p−1}(t) for p ≥ 1,

where we suppose the convention (·)/0 := 0. It is well known that, for an arbitrary interval I = [a, b) and p ∈ N_0, the set {B_{ℓ,i,p}|_I : i ∈ Z ∧ B_{ℓ,i,p}|_I ≠ 0} is a basis of the space of all right-continuous piecewise polynomials of degree lower or equal p with breakpoints N_ℓ on I which are, at each knot t_{ℓ,i}, p − #_ℓ t_{ℓ,i} times continuously differentiable if p − #_ℓ t_{ℓ,i} ≥ 0, where #_ℓ t_{ℓ,i} denotes the multiplicity of t_{ℓ,i} in K_ℓ. Moreover, the B-splines are non-negative and have local support,

  supp(B_{ℓ,i,p}) = [t_{ℓ,i−1}, t_{ℓ,i+p}].    (8)

Indeed, B_{ℓ,i,p} depends only on the knots t_{ℓ,i−1}, ..., t_{ℓ,i+p}. If t_{ℓ,i−1} < t_{ℓ,i+p}, then B_{ℓ,i,p} is positive on the interior of its support. Moreover, B-splines form a partition of unity, i.e.,

  Σ_{i∈Z} B_{ℓ,i,p} = 1 for all p ∈ N_0.    (9)

For p ≥ 1 and i ∈ Z, the right derivative of the corresponding B-spline can be computed as

  B′_{ℓ,i,p} = p ( B_{ℓ,i,p−1}/(t_{ℓ,i+p−1} − t_{ℓ,i−1}) − B_{ℓ,i+1,p−1}/(t_{ℓ,i+p} − t_{ℓ,i}) ).    (10)

Finally, if Σ_{i∈Z} a_{ℓ,i} B_{ℓ,i,p} is a given spline and K_+ is obtained from K_ℓ by adding a single knot t with t ∈ (t_{ℓ,j−1}, t_{ℓ,j}] for some j ∈ Z, there exist coefficients (a_{+,i})_{i∈Z} with Σ_{i∈Z} a_{ℓ,i} B_{ℓ,i,p} = Σ_{i∈Z} a_{+,i} B_{+,i,p}. With the multiplicity #_+ t of t in the knots K_+, the new coefficients can be chosen as convex combinations of the old coefficients,

  a_{+,i} = a_{ℓ,i} for i ≤ j − p,
  a_{+,i} = (t − t_{ℓ,i−1})/(t_{ℓ,i+p−1} − t_{ℓ,i−1}) a_{ℓ,i} + (t_{ℓ,i+p−1} − t)/(t_{ℓ,i+p−1} − t_{ℓ,i−1}) a_{ℓ,i−1} for j − p < i ≤ j,    (11)
  a_{+,i} = a_{ℓ,i−1} for i > j.

If #_ℓ t_k ≤ p + 1 for all k ∈ Z, these coefficients are unique. Proofs are found, e.g., in [dB86]. In addition to the knots K_ℓ = (t_{ℓ,i})_{i∈Z}, we consider fixed positive weights W_ℓ := (w_{ℓ,i})_{i∈Z} with w_{ℓ,i} > 0.
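The recursion and the stated properties are easy to check numerically. The following Python sketch (ours, not part of the Matlab/C library) evaluates B-splines with the index convention of Section 2.1, i.e., the i-th B-spline of degree p depends on the knots with indices i − 1, ..., i + p:

```python
def bspline(knots, i, p, t):
    """Cox-de Boor recursion for the i-th B-spline of degree p.

    Index convention as in Section 2.1: B_{i,p} depends on the knots
    knots[i-1], ..., knots[i+p] (0-based Python list), so i >= 1 is required.
    The convention x/0 := 0 is realized by the explicit checks below.
    """
    if p == 0:
        return 1.0 if knots[i - 1] <= t < knots[i] else 0.0
    val = 0.0
    d1 = knots[i + p - 1] - knots[i - 1]
    if d1 > 0:
        val += (t - knots[i - 1]) / d1 * bspline(knots, i, p - 1, t)
    d2 = knots[i + p] - knots[i]
    if d2 > 0:
        val += (knots[i + p] - t) / d2 * bspline(knots, i + 1, p - 1, t)
    return val

K = [0, 1, 2, 3, 4, 5, 6]        # uniform knots
# Partition of unity at t = 3.2: by (8), only B_2, B_3, B_4 are nonzero there.
pou = sum(bspline(K, i, 2, 3.2) for i in range(1, 5))
```

The evaluation point t = 3.2 lies in the support [t_{i−1}, t_{i+2}] exactly for i ∈ {2, 3, 4}, so the partial sum above already equals 1.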
For i ∈ Z and p ∈ N_0, we define the i-th NURBS (non-uniform rational B-spline) by

  R_{ℓ,i,p} := w_{ℓ,i} B_{ℓ,i,p} / (Σ_{k∈Z} w_{ℓ,k} B_{ℓ,k,p}).

Note that the denominator is locally finite and positive. For any p ∈ N_0, we define the spline space

  S_p(K_ℓ) := { Σ_{i∈Z} a_{ℓ,i} B_{ℓ,i,p} : a_{ℓ,i} ∈ R }

as well as the rational spline space

  S_p(K_ℓ, W_ℓ) := { S / (Σ_{k∈Z} w_{ℓ,k} B_{ℓ,k,p}) : S ∈ S_p(K_ℓ) }.

2.2. Boundary parametrization. Recall that Ω ⊂ R² is a bounded Lipschitz domain with diam(Ω) < 1 and that Γ ⊆ ∂Ω is a connected part of its boundary. We further assume that either Γ = ∂Ω is parametrized by a closed continuous and piecewise continuously differentiable path γ : [a, b] → Γ with γ(a) = γ(b), or that Γ ⊊ ∂Ω is parametrized by a bijective continuous and piecewise continuously differentiable path γ : [a, b] → Γ. In the first case, we speak of closed Γ = ∂Ω, whereas the second case is referred to as open Γ ⊊ ∂Ω. Throughout and by abuse of notation, we write γ^{−1} for the inverse of γ|_{[a,b)} or γ|_{(a,b]}; the meaning will be clear from the context. For the left- and right-hand derivatives of γ, we assume that γ′_ℓ(t) ≠ 0 for t ∈ (a, b] and γ′_r(t) ≠ 0 for t ∈ [a, b). Moreover, we assume that γ′_ℓ(t) + cγ′_r(t) ≠ 0 for all c > 0 and all t ∈ [a, b] and t ∈ (a, b) in the case of closed and open Γ, respectively. Note that these assumptions provide a pointwise representation of the arc-length derivative,

  ∂_Γ u(γ(t)) = (u ∘ γ)′(t) / |γ′(t)|.

Finally and without loss of generality, we suppose that γ is positively oriented in the sense that the outer normal vector of Ω has the form

  ν(γ(t)) = (γ′_2(t), −γ′_1(t)) / |γ′(t)|.

While the given assumptions are sufficient for the abstract analysis of adaptive BEM, the later implementation requires that γ is a NURBS curve of degree p_γ ∈ N in the following sense: Let K_γ = (t_{γ,i})_{i=1}^{N_γ} be a sequence of knots with

  a < t_{γ,1} ≤ t_{γ,2} ≤ · · · ≤ t_{γ,N_γ} = b

and multiplicity #_γ t_{γ,i} ≤ p_γ for i ∈ {1, ..., N_γ − p_γ}. Let

  N_γ := { x_{γ,1}, ..., x_{γ,n_γ} } := { t_{γ,1}, ..., t_{γ,N_γ} }

denote the corresponding set of nodes with x_{γ,j−1} < x_{γ,j} for j ∈ {2, ..
., n_γ}. Note that N_γ = Σ_{j=1}^{n_γ} #_γ x_{γ,j}. With x_{γ,0} := a, we define the induced mesh on [a, b],

  Q̂_γ := { [x_{γ,j−1}, x_{γ,j}] : j ∈ {1, ..., n_γ} }.

To use the definition of B-splines as in Section 2.1, we extend the knot sequence arbitrarily to (t_{γ,i})_{i∈Z} with t_{γ,−p_γ} = · · · = t_{γ,0} = a, t_{γ,i} ≤ t_{γ,i+1}, and lim_{i→±∞} t_{γ,i} = ±∞. For the extended sequence, we also write K_γ. Moreover, let W_γ = (w_{γ,i})_{i=1−p_γ}^{N_γ−p_γ} and C_γ = (C_{γ,i})_{i=1−p_γ}^{N_γ−p_γ} be given positive weights and control points in R², respectively, which satisfy that w_{γ,1−p_γ} = w_{γ,N_γ−p_γ} and C_{γ,1−p_γ} = C_{γ,N_γ−p_γ} in the case of closed Γ = ∂Ω. We extend W_γ and C_γ arbitrarily to (w_{γ,i})_{i∈Z} and (C_{γ,i})_{i∈Z} with positive weights w_{γ,i} and control points C_{γ,i} in R², respectively. With the standard NURBS R_{γ,i,p_γ} defined in Section 2.1, we suppose that γ has the form

  γ = Σ_{i∈Z} C_{γ,i} R_{γ,i,p_γ} = Σ_{i=1−p_γ}^{N_γ−p_γ} C_{γ,i} R_{γ,i,p_γ} on [a, b),

where the second equality follows from the locality (8) of the B-splines and NURBS. The locality of NURBS even shows that this definition does not depend on how the knots, the weights, and the control points are precisely extended. We note that the assumptions w_{γ,1−p_γ} = w_{γ,N_γ−p_γ} and C_{γ,1−p_γ} = C_{γ,N_γ−p_γ} together with (9) ensure that γ(a) = γ(b−) in the case of closed Γ = ∂Ω.
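To illustrate the definition of γ, the following Python sketch (ours; the actual geometries of IGABEM2D are set up in Matlab) evaluates a NURBS curve pointwise and reproduces a quarter of the unit circle with the standard quadratic rational Bézier data, i.e., weights (1, 1/√2, 1):

```python
import math

def bspline(knots, i, p, t):
    # Cox-de Boor recursion with the convention x/0 := 0 (cf. Section 2.1).
    if p == 0:
        return 1.0 if knots[i - 1] <= t < knots[i] else 0.0
    val, d1, d2 = 0.0, knots[i + p - 1] - knots[i - 1], knots[i + p] - knots[i]
    if d1 > 0:
        val += (t - knots[i - 1]) / d1 * bspline(knots, i, p - 1, t)
    if d2 > 0:
        val += (knots[i + p] - t) / d2 * bspline(knots, i + 1, p - 1, t)
    return val

def nurbs_curve(t, knots, ctrl, weights, p):
    # gamma(t) = sum_i w_i C_i B_i(t) / sum_i w_i B_i(t)
    vals = [bspline(knots, i, p, t) for i in range(1, len(ctrl) + 1)]
    den = sum(w * b for w, b in zip(weights, vals))
    return tuple(sum(w * c[d] * b for w, c, b in zip(weights, ctrl, vals)) / den
                 for d in (0, 1))

knots = [0.0, 0.0, 0.0, 1.0, 1.0, 1.0]             # clamped, degree p = 2
ctrl = [(1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]        # control points
weights = [1.0, 1.0 / math.sqrt(2.0), 1.0]         # circle weights
x, y = nurbs_curve(0.5, knots, ctrl, weights, 2)   # point on the unit circle
```

Every point of the arc has distance 1 from the origin, which a plain polynomial spline curve of degree 2 cannot achieve; this exact representation of conic sections is precisely why CAD, and hence IGABEM, works with rational splines.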
2.3. Rational splines on Γ. Let p ∈ N_0 be an arbitrary but fixed polynomial degree. Let K_0 = (t_{0,i})_{i=1}^{N_0} be a sequence of initial knots with

  a < t_{0,1} ≤ t_{0,2} ≤ · · · ≤ t_{0,N_0} = b

and multiplicity #_0 t_{0,i} ≤ p + 1 for i ∈ {1, ..., N_0 − p}. Let

  N̂_0 := { x_{0,1}, ..., x_{0,n_0} } := { t_{0,1}, ..., t_{0,N_0} }

denote the corresponding set of nodes with x_{0,j−1} < x_{0,j} for j ∈ {2, ..., n_0}. In order to apply standard quadrature rules, we assume that N_γ ⊆ N̂_0. Moreover, let W_0 = (w_{0,i})_{i=1−p}^{N_0−p} be given positive weights. We extend the knots and weights as in Section 2.2 and define the weight function

  W_0 := Σ_{k∈Z} w_{0,k} B_{0,k,p} on [a, b),

where the B_{0,k,p} are the standard B-splines defined in Section 2.1. We define W_0 : [a, b] → R by continuously extending this function at b. Now, let K_ℓ = (t_{ℓ,i})_{i=1}^{N_ℓ} be a finer knot vector, i.e., K_0 is a subsequence of K_ℓ, which satisfies the same properties as K_0. Outside the interval (a, b], we extend K_ℓ as K_0. Let

  N̂_ℓ := { x_{ℓ,1}, ..., x_{ℓ,n_ℓ} } := { t_{ℓ,1}, ..., t_{ℓ,N_ℓ} }

denote the corresponding set of nodes on [a, b] with x_{ℓ,j−1} < x_{ℓ,j} for j ∈ {2, ..., n_ℓ}, and let N_ℓ := { γ(x_{ℓ,j}) : j ∈ {1, ..., n_ℓ} } denote the corresponding nodes on Γ. Note that N_ℓ does not contain the node x_{ℓ,0} = a, which is natural for the case Γ = ∂Ω, since then γ(x_{ℓ,0}) = γ(x_{ℓ,n_ℓ}). We define the induced mesh on [a, b],

  Q̂_ℓ := { [x_{ℓ,j−1}, x_{ℓ,j}] : j ∈ {1, ..., n_ℓ} }, where we set x_{ℓ,0} := a,

and the induced mesh on Γ,

  Q_ℓ := { γ(Q̂) : Q̂ ∈ Q̂_ℓ }.

The corresponding weights W_ℓ = (w_{ℓ,i})_{i=1−p}^{N_ℓ−p} are chosen such that the associated weight function satisfies Σ_{k∈Z} w_{ℓ,k} B_{ℓ,k,p} = W_0 on [a, b). Choosing these weights, we ensure that the denominator of the rational splines does not change. These weights are just convex combinations of the initial weights W_0 and can be computed via the knot insertion procedure (11). Finally, we extend W_ℓ arbitrarily to (w_{ℓ,i})_{i∈Z} with w_{ℓ,i} > 0 and define the space of (transformed) rational splines on Γ as

  S_p(K_ℓ, W_ℓ) := { (S / W_ℓ) ∘ γ^{−1} : S ∈ S_p(K̂_ℓ) },

where S_p(K̂_ℓ) denotes the space of all right-continuous piecewise polynomials of degree lower or equal p with breakpoints N̂_ℓ on [a, b) which are, at each knot t_{ℓ,i}, p − #_ℓ t_{ℓ,i} times continuously differentiable if p − #_ℓ t_{ℓ,i} ≥ 0; see also Section 2.1. Here, we extend each quotient S / W_ℓ left-continuously at b. The locality (8) of B-splines shows that this definition does not depend on how the knots and the weights are precisely extended.
With the standard NURBS R_{ℓ,i,p} from Section 2.1, a basis of S_p(K_ℓ, W_ℓ) is given by

  { R_{ℓ,i,p} ∘ γ^{−1} : i ∈ {1 − p, ..., N_ℓ − p} },

where the functions R_{ℓ,i,p} are again left-continuously extended at b.
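The knot insertion procedure (11), by which the weights W_ℓ are computed from W_0, can be sketched as follows (Python, ours; in the library this is part of refineBoundaryMesh.m and increaseMult.m). The example inserts a knot into a quadratic spline with Greville-type coefficients, for which the spline equals the identity, and the values before and after insertion coincide:

```python
def bspline(knots, i, p, t):
    # Cox-de Boor recursion with the convention x/0 := 0 (cf. Section 2.1).
    if p == 0:
        return 1.0 if knots[i - 1] <= t < knots[i] else 0.0
    val, d1, d2 = 0.0, knots[i + p - 1] - knots[i - 1], knots[i + p] - knots[i]
    if d1 > 0:
        val += (t - knots[i - 1]) / d1 * bspline(knots, i, p - 1, t)
    if d2 > 0:
        val += (knots[i + p] - t) / d2 * bspline(knots, i + 1, p - 1, t)
    return val

def insert_knot(knots, coeffs, p, tnew):
    """Insert a single knot tnew, preserving the spline (cf. (11)).

    coeffs[i-1] belongs to B_{i,p}; the new coefficients are convex
    combinations of the old ones (Boehm's algorithm)."""
    j = next(k for k in range(1, len(knots)) if knots[k - 1] < tnew <= knots[k])
    new_knots = knots[:j] + [tnew] + knots[j:]
    new_coeffs = []
    for i in range(1, len(coeffs) + 2):
        if i <= j - p:
            new_coeffs.append(coeffs[i - 1])
        elif i <= j:
            a = (tnew - knots[i - 1]) / (knots[i + p - 1] - knots[i - 1])
            new_coeffs.append(a * coeffs[i - 1] + (1.0 - a) * coeffs[i - 2])
        else:
            new_coeffs.append(coeffs[i - 2])
    return new_knots, new_coeffs

knots = [0.0, 0.0, 0.0, 1.0, 2.0, 3.0, 3.0, 3.0]   # clamped, p = 2
coeffs = [0.0, 0.5, 1.5, 2.5, 3.0]                 # Greville coefficients: s(t) = t
new_knots, new_coeffs = insert_knot(knots, coeffs, 2, 1.5)
pts = (0.3, 1.4, 2.6)
vals_old = [sum(coeffs[i - 1] * bspline(knots, i, 2, t)
                for i in range(1, 6)) for t in pts]
vals_new = [sum(new_coeffs[i - 1] * bspline(new_knots, i, 2, t)
                for i in range(1, 7)) for t in pts]
```

Since the new coefficients are convex combinations of the old ones, applying this procedure to the weights W_0 yields positive weights W_ℓ, as required in Section 2.3.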
All details and equivalent definitions of the Sobolev spaces are, for instance, found in the monographs [McL00,HW08,Ste08,SS11].

3.3. Galerkin IGABEM.
Let p ∈ N_0 be a fixed polynomial degree. Moreover, let K_ℓ and W_ℓ be knots and weights as in Section 2.3. We introduce the ansatz space

  X_ℓ := S_p(K_ℓ, W_ℓ).

Recall that X_ℓ = span{ R_{ℓ,i,p} ∘ γ^{−1} : i ∈ {1 − p, ..., N_ℓ − p} }, where the set even forms a basis. We define the Galerkin approximation Φ_ℓ ∈ X_ℓ of φ by

  ⟨VΦ_ℓ, Ψ_ℓ⟩_Γ = ⟨f, Ψ_ℓ⟩_Γ for all Ψ_ℓ ∈ X_ℓ.    (35)

Note that (35) is equivalent to solving the finite-dimensional linear system

  Σ_{i′=1−p}^{N_ℓ−p} ⟨V R_{ℓ,i′,p}, R_{ℓ,i,p}⟩_Γ c_{ℓ,i′} = ⟨f, R_{ℓ,i,p}⟩_Γ for all i ∈ {1 − p, ..., N_ℓ − p},    (36)

where Φ_ℓ = Σ_{i′=1−p}^{N_ℓ−p} c_{ℓ,i′} R_{ℓ,i′,p}. The computation of the matrix and the right-hand side vector in (36) is realized in VMatrix.c and RHSVectorWeak.c and can be called in Matlab via buildVMatrix and buildRHSVectorWeak; see Appendix A. For details on the implementation, we refer to Sections 5.1 and 5.2.

3.4. A posteriori error estimation. Let K_ℓ be a knot vector as in Section 2.3 with corresponding ansatz space X_ℓ. We define the local mesh-width function h_ℓ ∈ L^∞(Γ) by h_ℓ|_Q := |Q| for all Q ∈ Q_ℓ, where |Q| denotes the arclength of Q. We consider the following three different node-based estimators: the (h − h/2)-estimator

  η_ℓ(x) := || h_ℓ^{1/2} (Φ_+ − Φ_ℓ) ||_{L²(ω_ℓ(x))},

where K_+ are the uniformly refined knots obtained from K_ℓ by adding the midpoint of each element Q̂ ∈ Q̂_ℓ with multiplicity one (see Algorithm 3.1), Φ_+ is the corresponding Galerkin approximation, and ω_ℓ(x) := ⋃{Q ∈ Q_ℓ : x ∈ Q} is the node patch; the Faermann estimator

  η_ℓ(x) := | f − VΦ_ℓ |_{H^{1/2}(ω_ℓ(x))};

and the weighted-residual estimator

  η_ℓ(x) := || h_ℓ^{1/2} ∂_Γ (f − VΦ_ℓ) ||_{L²(ω_ℓ(x))},    (40)

which requires the additional regularity f ∈ H¹(Γ) to ensure that f − VΦ_ℓ ∈ H¹(Γ) and hence (40) is well-defined. If f = (K + 1/2)u stems from a Dirichlet problem as in Section 1.4, Section 3.2 shows that this is satisfied for u ∈ H¹(Γ). The computation of the estimators is implemented in HHEstWeak.c, FaerEstWeak.c, and ResEstWeak.c and can be called in Matlab via buildHHEstWeak, buildFaerEstWeak, and buildResEstWeak; see Appendix A. The residual estimators require the evaluation of the boundary integral operators V and K applied to some function, which is implemented in ResidualWeak.c. The evaluations can be used in Matlab via evalV and evalRHSWeak; see Appendix A. The implementation of the estimators and the evaluations is discussed in Section 5.

3.5. Adaptive IGABEM algorithm.
To refine a given ansatz space X_ℓ, we compute one of the corresponding error estimators η_ℓ and determine a set of marked nodes with large error indicators η_ℓ(x). By default, we then apply the following refinement algorithm, which uses h-refinement controlling the maximal mesh-ratio in the parameter domain as well as multiplicity increase.
Algorithm 3.1. Input: Polynomial degree p ∈ N_0, initial maximal mesh-ratio κ_0, knot vector K_ℓ, and set of marked nodes M_ℓ ⊆ N_ℓ. (iii) For all other nodes in M_ℓ, increase the multiplicity if it is lower or equal p. Otherwise, mark the elements that contain one of these nodes by adding them to the set of marked elements. (iv) Bisect all marked elements Q in the parameter domain by inserting the midpoint of γ^{−1}(Q) with multiplicity one into the current knot vector. Use a minimal number of further bisections to guarantee that the new knot vector K_+ satisfies κ_+ ≤ 2κ_0.
The marked elements and the nodes whose multiplicity should be increased are determined in markNodesElements.m. The multiplicity increase and the computation of the new weights are realized in increaseMult.m. An optimal 1D bisection algorithm as in (iv) is discussed and analyzed in [AFF+13]. Together with the computation of the new weights, it is realized in refineBoundaryMesh.m. We stress that we have also implemented two further relevant strategies that rely on h-refinement only: Replace (iii) by adding all elements Q ∈ Q_ℓ containing one of the other nodes in M_ℓ to the set of marked elements; or replace (iii) by adding all elements Q ∈ Q_ℓ containing one of the other nodes in M_ℓ to the set of marked elements and insert the midpoints in (iv) with multiplicity p + 1. The first strategy leads to refined splines of full regularity, whereas the second one leads to lowest regularity.
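A much simplified stand-in for the 1D h-refinement of step (iv) can be sketched as follows (Python, ours; the library's refineBoundaryMesh.m implements the optimal algorithm of [AFF+13] and additionally updates knots, weights, and multiplicities). It bisects the marked elements and then enforces, by further bisections of the larger neighbor, that adjacent elements differ in length at most by the factor κ = 2:

```python
def bisect_adaptive(points, marked, kappa=2.0):
    """h-refine a 1D partition given by sorted breakpoints.

    points : breakpoints [x_0, ..., x_n] of the partition of [x_0, x_n]
    marked : indices j of marked elements [x_{j-1}, x_j]
    Returns refined breakpoints whose local mesh-ratio is <= kappa.
    """
    pts = sorted(points)
    # Step 1: bisect all marked elements by inserting their midpoints.
    pts = sorted(pts + [(pts[j - 1] + pts[j]) / 2.0 for j in sorted(marked)])
    # Step 2 (closure): while some neighboring pair violates the mesh-ratio,
    # bisect the larger of the two elements.
    changed = True
    while changed:
        changed = False
        lens = [b - a for a, b in zip(pts, pts[1:])]
        for j in range(len(lens) - 1):
            if max(lens[j], lens[j + 1]) > kappa * min(lens[j], lens[j + 1]) + 1e-14:
                k = j if lens[j] >= lens[j + 1] else j + 1
                pts = sorted(pts + [(pts[k] + pts[k + 1]) / 2.0])
                changed = True
                break
    return pts

# Bisecting the first element of [0, 0.1, 0.2, 1] triggers two closure bisections.
refined = bisect_adaptive([0.0, 0.1, 0.2, 1.0], {1})
```

Unlike this greedy loop, the algorithm of [AFF+13] guarantees that the number of generated elements stays proportional to the number of marked ones, which is essential for the rate-optimality results cited in Section 1.3.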
(ii) Compute refinement indicators η_ℓ(x) for all nodes x ∈ N_ℓ.
(iii) Determine a minimal set of nodes M_ℓ ⊆ N_ℓ such that

  θ Σ_{x∈N_ℓ} η_ℓ(x)² ≤ Σ_{x∈M_ℓ} η_ℓ(x)².

(iv) Generate the refined knot vector K_{ℓ+1} via Algorithm 3.1.
Output: Approximate solutions Φ_ℓ and estimators η_ℓ for all ℓ ∈ N_0.
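Step (iii) is the so-called Dörfler marking. A minimal set can be found greedily by collecting the largest indicators first (Python sketch, ours; in the library this is part of markNodesElements.m):

```python
def doerfler_marking(eta, theta):
    """Return (indices of) a minimal set M with
       theta * sum_j eta[j]**2 <= sum_{j in M} eta[j]**2,
    obtained by collecting the largest indicators first."""
    total = sum(e * e for e in eta)
    marked, acc = [], 0.0
    for j in sorted(range(len(eta)), key=lambda j: -eta[j]):
        if acc >= theta * total:
            break
        marked.append(j)
        acc += eta[j] * eta[j]
    return marked

marked = doerfler_marking([3.0, 1.0, 2.0, 4.0], 0.5)   # -> [3]
```

For θ = 1, all nodes are marked and the adaptive loop degenerates to uniform refinement, which is exactly the comparison made in the numerical experiments below.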
The adaptive algorithm is realized in the main function IGABEMWeak.m. The considered problem as well as the used parameters can be changed there by the user. In Section 5, we discuss how the arising (singular) boundary integrals are computed. We also mention that Algorithm 3.2 (iii) is realized in markNodesElements.m, which also determines the corresponding set of marked elements and the nodes whose multiplicity should be increased from Algorithm 3.1 (ii)-(iii).
3.6. Numerical experiments. In this section, we empirically investigate the performance of Algorithm 3.2 for different geometries, ansatz spaces, and error estimators. Figure 1 shows the different geometries, namely a pacman geometry and a slit. The boundary of the pacman geometry can be parametrized via rational splines of degree 2, while the slit can be parametrized by splines of degree 1. As initial ansatz space, we either consider the same (rational) splines, i.e., K_0 = K_γ and W_0 = W_γ, splines of degree p and smoothness C^{p−1}, i.e., initial knots of multiplicity one and W_0 = (1, ..., 1), or piecewise polynomials of degree p, i.e., initial knots of multiplicity p + 1 and W_0 = (1, ..., 1). In the latter case, we always consider h-refinement with new knots having multiplicity p + 1 as explained after Algorithm 3.1.
The prescribed exact solution has a generic singularity at the origin.
Figure 2 shows the knot distribution (i.e., the relative number of knots lower or equal than the parameter points on the x-axis) as well as a histogram of the knots in the parameter domain [a, b] for the first step of Algorithm 3.2 where the residual error estimator is below 10^{−6}. We compare pure h-refinement (top) to the proposed refinement strategy including multiplicity increase (bottom), i.e., Algorithm 3.1, where the knots with full multiplicity p + 1 are plotted in red. Splines of order p = 2 are used as ansatz spaces, and the weighted-residual estimator with θ = 0.75 is used to steer the adaptive refinement. It can be seen that Algorithm 3.2 heavily refines the mesh towards the reentrant corner (0, 0) ∈ Γ, which corresponds to γ(1/2) and where φ has a singularity. The solution φ additionally has jumps at γ(1/3) and γ(2/3). Since we use splines of polynomial degree p = 2 and there are no knots with multiplicity p + 1 in (0, 1) for Algorithm 3.2 without multiplicity increase (top), the resulting approximation is at least continuous at each knot, cf. Section 2.1, and can thus not resolve these jumps appropriately. As a result, the algorithm uses h-refinement at these jump points to reduce the error. Since Algorithm 3.2 with multiplicity increase (bottom) allows for knots with full multiplicity p + 1, i.e., for jumps in the resulting approximation, the jumps at γ(1/3) and γ(2/3) are simply resolved by knots with full multiplicity.
In Figure 3, we compare uniform refinement with θ = 1 and adaptive refinement with θ = 0.75, different ansatz spaces, and different estimators for Algorithm 3.2. First (top), for fixed p = 2, we compare (an approximation of) the error for the different ansatz spaces with the algorithm steered either by the Faermann estimator (top, left) or the weighted-residual estimator (top, right). Instead of the exact error ||φ − Φ_ℓ||_V, we compute ||Φ_+ − Φ_ℓ||_V, where K_+ are the uniformly refined knots, which result from K_ℓ by adding the midpoint of each element Q̂ ∈ Q̂_ℓ with multiplicity one (see also Algorithm 3.1). We get similar plots, where uniform mesh refinement leads to the suboptimal rate of convergence O(N^{−4/7}) because the solution lacks regularity. However, for all adaptive cases, Algorithm 3.2 regains the optimal rate of convergence O(N^{−3/2−p}), where splines and NURBS exhibit a significantly better multiplicative constant than piecewise polynomials. Next (middle), we consider the estimators for piecewise polynomials. For both the Faermann estimator (middle, left) and the weighted-residual estimator (middle, right), we compare different orders p ∈ {0, 1, 2, 3} of the ansatz spaces. Again, uniform mesh refinement leads to the suboptimal rate O(N^{−4/7}) (only displayed for p = 0), whereas the adaptive strategy regains the optimal order of convergence O(N^{−3/2−p}). Lastly, we get similar results for splines (bottom).
In Figure 4, the knot distribution (i.e., the relative number of knots lower or equal than the parameter points on the x-axis) as well as a histogram of the knots in the parameter domain [a, b] are plotted for the first step of Algorithm 3.2 where the residual error estimator is below 10^{−6}. For splines of degree p = 2 and the weighted-residual estimator with θ = 0.75, we compare Algorithm 3.2 without (top) and with (bottom) multiplicity increase. We see that, for this example, the difference between the two approaches is smaller than for the example of Section 3.6.1. Since the solution φ is continuous in (a, b), the knot distribution looks quite similar in both cases. However, we can see that multiplicity increase takes place nonetheless (bottom) and that Algorithm 3.2 refines towards the two tips x = ±1, where the solution has singularities.

Pacman geometry
In Figure 5, we compare uniform refinement with θ = 1 and adaptive refinement with θ = 0.75, different ansatz spaces, and different estimators for Algorithm 3.2. First (top), for fixed p = 1, we compare (an approximation of) the error for the different ansatz spaces with the algorithm steered either by the Faermann estimator (top, left) or the weighted-residual estimator (top, right). Instead of the exact error ||φ − Φ_ℓ||_V, we compute ||Φ_+ − Φ_ℓ||_V, where K_+ are the uniformly refined knots, which result from K_ℓ by adding the midpoint of each element Q̂ ∈ Q̂_ℓ with multiplicity one (see also Algorithm 3.1). We get similar plots, where uniform mesh refinement leads to the suboptimal rate of convergence O(N^{−1/2}) because the solution lacks regularity at the tips x = ±1. However, for all adaptive cases, Algorithm 3.2 regains the optimal rate of convergence O(N^{−3/2−p}), where splines exhibit a better multiplicative constant than piecewise polynomials. Next (middle), we consider the estimators for piecewise polynomials. For both the Faermann estimator (middle, left) and the weighted-residual estimator (middle, right), we compare different orders p ∈ {0, 1, 2, 3} of the ansatz spaces. Again, uniform mesh refinement leads to the suboptimal rate O(N^{−1/2}) (only displayed for p = 0), whereas the adaptive strategy regains the optimal order of convergence O(N^{−3/2−p}). Lastly, we get similar results for splines (bottom).

4.2. Hyper-singular integral equation. For σ ∈ {0, 1/2, 1}, the hyper-singular integral operator W : H^σ(Γ) → H^{σ−1}(Γ) and the adjoint double-layer operator K′ : H^{σ−1}(Γ) → H^{σ−1}(Γ) are well-defined, linear, and continuous. Moreover, K′ is indeed the adjoint operator of K.
We introduce the ansatz space

  Y_ℓ := S_p(K_ℓ, W_ℓ) ∩ C(Γ).

According to Section 2.1, this space is spanned by the transformed NURBS, where, in the case of closed Γ = ∂Ω, the first and the last basis function have to be glued together to ensure continuity at γ(a) = γ(b). In each case, the corresponding sets even form a basis of Y_ℓ. We abbreviate the (continuous) basis functions R^c_{ℓ,i,p} := R_{ℓ,i,p} for i = 2 − p, ..., N_ℓ − p − 1 and R^c_{ℓ,i,p} := R_{ℓ,1−p,p} + R_{ℓ,N_ℓ−p,p} for i = 1 − p.
We assume the additional regularity φ ∈ L²(Γ) and approximate φ by φ_ℓ := Π_ℓ φ, where Π_ℓ is the L²-orthogonal projection onto the space of transformed piecewise polynomials of degree p on Q_ℓ. Note that ⟨φ, 1⟩_∂Ω = 0 in the case of Γ = ∂Ω also yields that ⟨φ_ℓ, 1⟩_∂Ω = 0. With g_ℓ := φ_ℓ resp. g_ℓ := (1/2 − K′)φ_ℓ, the corresponding Galerkin approximation U_ℓ ∈ Y_ℓ reads

  ⟨WU_ℓ, V_ℓ⟩_Γ = ⟨g_ℓ, V_ℓ⟩_Γ for all V_ℓ ∈ Y_ℓ.    (50)

At least for the direct method, we empirically observed that working with g instead of g_ℓ leads to strong implementational instabilities, since a possible singularity of g meets the singularity of the integral kernels of the boundary integral operators. Note that (50) is equivalent to solving the finite-dimensional linear system

  Σ_{i′=k−p}^{N_ℓ−p} ⟨W R^c_{ℓ,i′,p}, R^c_{ℓ,i,p}⟩_Γ c_{ℓ,i′} = ⟨g_ℓ, R^c_{ℓ,i,p}⟩_Γ for all i ∈ {k − p, ..., N_ℓ − p},    (51)

where k = 1 for Γ = ∂Ω and k = 2 for Γ ⊊ ∂Ω. Then, U_ℓ = Σ_{i′=k−p}^{N_ℓ−p} c_{ℓ,i′} R^c_{ℓ,i′,p}. The computation of the matrix and the right-hand side vector in (51) is realized in WMatrix.c and RHSVectorHyp.c, where the required projection Π_ℓ is realized in PhiApprox.c. They can be called in Matlab via buildWMatrix, buildRHSVectorHyp, and buildPhiApprox; see Appendix A. For details on the implementation, we refer to Sections 5.1 and 5.2.
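Since the space of (discontinuous) piecewise polynomials has no coupling across elements, the projection Π_ℓ can be computed elementwise. The following Python sketch (ours; it works on the parameter domain and ignores the transformation to Γ, which PhiApprox.c additionally takes into account) uses an elementwise Legendre basis, which is L²-orthogonal on each element:

```python
import numpy as np

def l2_project(f, mesh, p, nq=8):
    """Elementwise L2-orthogonal projection of f onto discontinuous piecewise
    polynomials of degree <= p on the partition given by `mesh`.
    Returns a callable evaluating the projection."""
    xq, wq = np.polynomial.legendre.leggauss(nq)        # Gauss rule on [-1, 1]
    basis = [np.polynomial.legendre.Legendre.basis(k) for k in range(p + 1)]
    coeffs = []
    for a, b in zip(mesh, mesh[1:]):
        fx = np.array([f(0.5 * (b - a) * s + 0.5 * (a + b)) for s in xq])
        # c_k = <f, P_k> / <P_k, P_k> on the reference element [-1, 1]
        coeffs.append([np.dot(wq * basis[k](xq), fx) / (2.0 / (2 * k + 1))
                       for k in range(p + 1)])

    def proj(t):
        j = min(max(np.searchsorted(mesh, t, side="right") - 1, 0), len(mesh) - 2)
        s = 2.0 * (t - mesh[j]) / (mesh[j + 1] - mesh[j]) - 1.0
        return sum(c * basis[k](s) for k, c in enumerate(coeffs[j]))
    return proj

# Projecting a quadratic onto piecewise quadratics reproduces it exactly.
proj = l2_project(lambda t: t * t + t, [0.0, 0.5, 1.0], p=2)
```

Exact reproduction of polynomials of degree lower or equal p is the defining property used here as a test; for general f, the projection is the elementwise best L² approximation.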

4.4. A posteriori error estimation. In the hyper-singular case, we consider the following two different node-based estimators: the (h − h/2)-estimator

  η_ℓ(x) := || h_ℓ^{1/2} ∂_Γ (U_+ − U_ℓ) ||_{L²(ω_ℓ(x))},

where K_+ are the uniformly refined knots, which result from K_ℓ by adding the midpoint of each element Q̂ ∈ Q̂_ℓ with multiplicity one (see also Algorithm 3.1), U_+ is the corresponding Galerkin approximation, and ω_ℓ(x) is the node patch as in Section 3.4; and the weighted-residual estimator

  η_ℓ(x) := || h_ℓ^{1/2} (g_ℓ − WU_ℓ) ||_{L²(ω_ℓ(x))},

where the additional regularity φ ∈ L²(Γ) ensures that g_ℓ − WU_ℓ ∈ L²(Γ); see Section 4.2.
To account for the approximation of g by g_ℓ, we additionally consider the oscillations

  osc_ℓ(x) := || h_ℓ^{1/2} (g − g_ℓ) ||_{L²(ω_ℓ(x))}.

The computation of the estimators and oscillations is implemented in HHEstHyp.c, ResEstHyp.c, and OscHyp.c. They can be called in Matlab via buildHHEstHyp, buildResEstHyp, and buildOscHyp. The residual estimators require the evaluation of the boundary integral operators W and K′ applied to some function, which is implemented in ResidualHyp.c. The evaluations can be used in Matlab via evalW and evalRHSHyp; see Appendix A. The implementation of the estimators and the evaluations is discussed in Section 5.

4.5. Adaptive IGABEM algorithm.
To enrich a given ansatz space Y_ℓ, we compute one of the error estimators η_ℓ and determine a set of marked nodes with large indicators η_ℓ(x). By default, we then apply an adapted version of the refinement Algorithm 3.1 with the following two obvious modifications: First, we only consider p ∈ N instead of p ∈ N_0 as input. Second, in (iii), we only increase the multiplicity if it is lower or equal p − 1 to ensure that the basis functions are at least continuous. As before, the required marking is realized in markNodesElements.m. Together with the computation of the new weights, the h-refinement is realized in refineBoundaryMesh.m, while the multiplicity increase is realized in increaseMult.m. We stress that we have also implemented two further relevant strategies that rely on h-refinement only: Replace (iii) by adding all elements Q ∈ Q_ℓ containing one of the other nodes in M_ℓ to the set of marked elements; or replace (iii) by adding all elements Q ∈ Q_ℓ containing one of the other nodes in M_ℓ to the set of marked elements and insert the midpoints in (iv) with multiplicity p. The first strategy leads to refined splines of full regularity, whereas the second one leads to the lowest possible regularity, i.e., mere continuity.
(iv) Generate the refined knot vector K_{ℓ+1} via the adapted version of Algorithm 3.1.
Output: Approximate solutions U_ℓ and estimators η_ℓ for all ℓ ∈ N_0.
The adaptive algorithm is realized in IGABEMHyp.m. The considered problem as well as the used parameters can be changed there as desired. In Section 5, we discuss how the arising (singular) boundary integrals are computed. We also mention that Algorithm 4.1 (iii) is realized in markNodesElements.m, which also determines the corresponding set of marked elements and the nodes whose multiplicity should be increased from Algorithm 3.1 (ii)-(iii).

4.6. Numerical experiments.
In this section, we empirically investigate the performance of Algorithm 4.1 for different ansatz spaces and error estimators. Figure 6 shows the different geometries, namely a circle and a heart geometry, whose boundaries can be parametrized via rational splines of degree 2. As initial ansatz space, we either consider the same rational splines, i.e., K_0 = K_γ and W_0 = W_γ, splines of degree p and smoothness C^{p−1}, i.e., initial knots of multiplicity one and W_0 = (1, ..., 1), or continuous piecewise polynomials of degree p, i.e., initial knots of multiplicity p and W_0 = (1, ..., 1). In the latter case, we always consider h-refinement with new knots having multiplicity p as explained before Algorithm 4.1.

4.6.1. Singular problem on heart geometry. Similarly to Section 3.6.1, we prescribe an exact solution P of the Laplace problem in polar coordinates (x_1, x_2) = r(cos β, sin β) on the heart geometry, cf. Figure 6. The solution u of the hyper-singular integral equation (6) with g = (1/2 − K′)(∂P/∂ν) is just the restriction of P onto ∂Ω, which has a generic singularity at the origin. In Figure 7, the knot distribution (i.e., the relative number of knots lower or equal than the parameter points on the x-axis) as well as a histogram of the knots in the parameter domain [a, b] are plotted for the first step of Algorithm 4.1 where the residual error estimator is below 10^{−6}. For splines of degree p = 3 and the weighted-residual estimator with θ = 0.5, Algorithm 4.1 without multiplicity increase (top) and Algorithm 4.1 with multiplicity increase (bottom) heavily refine the mesh towards γ(1/2) = (0, 0), where u has a generic singularity. For the latter, we can see that multiplicity increase leads to a smoother knot distribution with less h-refinement away from the singularity.
In Figure 8, we compare uniform refinement with θ = 1 and adaptive refinement with θ = 0.5, different ansatz spaces, and different estimators for Algorithm 4.1. First (top), for fixed p = 2, we compare (an approximation of) the error for the different ansatz spaces with the algorithm steered either by the (h − h/2)-estimator (top, left) or the weighted-residual estimator (top, right). Instead of the exact error ||u − U_ℓ||_W, we compute ||U_+ − U_ℓ||_W, where K_+ are the uniformly refined knots, which result from K_ℓ by adding the midpoint of each element Q̂ ∈ Q̂_ℓ with multiplicity one (see also Algorithm 3.1). We get similar plots, where uniform mesh refinement leads to the suboptimal rate of convergence O(N^{−2/3}) because the solution lacks regularity. However, for all adaptive cases, Algorithm 4.1 regains the optimal rate of convergence O(N^{−1/2−p}), where splines exhibit a better multiplicative constant than piecewise polynomials. Next (middle), we consider the estimators for continuous piecewise polynomials. For both the (h − h/2)-estimator (middle, left) and the weighted-residual estimator (middle, right), we compare different orders p ∈ {1, 2, 3, 4} of the ansatz spaces. Again, uniform mesh refinement leads to the suboptimal rate O(N^{−2/3}) (only displayed for p = 1), whereas the adaptive strategy regains the optimal order of convergence O(N^{−1/2−p}). Lastly, we get similar results for splines (bottom).

5. Implementational aspects
5.1. Computation of Galerkin matrices. The computation of the Galerkin matrices of (36) and (51) is realized in VMatrix.c and WMatrix.c, and can be called in Matlab via buildVMatrix and buildWMatrix.
which can be approximated by standard tensor-Gauss quadrature. For the term ⟨WR^c_{ℓ,i′,p}, R^c_{ℓ,i,p}⟩_Γ, we employ Maue's formula

  ⟨WR^c_{ℓ,i′,p}, R^c_{ℓ,i,p}⟩_Γ = ⟨V ∂_Γ R^c_{ℓ,i′,p}, ∂_Γ R^c_{ℓ,i,p}⟩_Γ;

see [Mau49]. Note that ∂_Γ R^c_{ℓ,i′,p} and ∂_Γ R^c_{ℓ,i,p} can be easily calculated via (10) and (15). Thus, it is sufficient to explain how to compute ⟨φ, ψ⟩_V for (at least) Q_ℓ-piecewise continuous functions φ, ψ ∈ H^{−1/2}(Γ).
Case 1 (no intersection): We assume that Q ∩ Q′ = ∅. In this case, the integrand is (at least) continuous, and we can use standard tensor-Gauss quadrature to approximate the integral.
Case 2 (identical elements): We assume that Q = Q′, which also implies that j = j′. Note that Taylor expansion together with the fact that γ is piecewise C² shows that the kernel remains bounded as σ → τ; see, e.g., [Gan14, Equation (5.4)] for a detailed calculation. Since γ is even piecewise C^∞, this argument even yields that (σ, τ) ↦ G_ν(γ_j(σ), γ_j(τ)) is smooth. Thus, in principle, one can use standard tensor-Gauss quadrature to approximate the integral. However, to avoid the computation of G_ν(γ_j(σ), γ_j(τ)) in the limit case σ = τ, we make the same steps as in Case 2 of Section 5.1, i.e., we split the integral into two summands, use Fubini's theorem as well as the reflection (σ, τ) ↦ (τ, σ) for the second one, and apply the Duffy transformation (σ, τ) ↦ (σ, στ) with Jacobi determinant σ (see Figure 9 for a visualization). In particular, this shifts the limit case to the boundary σ = 0 ∨ τ = 1, while tensor-Gauss quadrature only evaluates the integrand in the interior of the unit square.
Remark 5.1. We mention that the smoothness of (σ, τ) ↦ G_ν(γ_j(σ), γ_j(τ)) hinges on the considered Laplace equation. It is also satisfied for the Helmholtz equation, but fails, for instance, for the Lamé equation from linear elasticity. Nevertheless, if the Lamé equation is considered, the same transformations as above yield a final integrand that is (at least) continuous.
Case 3 (common node): We assume that Q ∩ Q′ contains only one point. Without loss of generality, we further assume that the singularity is at (σ, τ) = (0, 1), i.e., x_{ℓ,j−1} = x_{ℓ,j′} or x_{ℓ,j−1} = a ∧ x_{ℓ,j′} = b if Γ = ∂Ω. We make the same steps as in Case 3 of Section 5.1, i.e., we first rotate the integration domain by π/2 via (σ, τ) ↦ (τ, 1 − σ), which transforms the singularity to (σ, τ) = (0, 0), and then employ the same transformations as in Case 2 (see also Figure 9). One can show with the fundamental theorem of calculus and the fact that γ is piecewise C¹ with |γ′| > 0 that the resulting terms are (at least) continuous. Thus, one can use standard tensor-Gauss quadrature to approximate the integral.
5.3. Evaluation of single-layer operator V. Let K be a knot vector, x ∈ Γ, and s ∈ [a, b] with γ(s) = x. For φ ∈ H^{−1/2}(Γ) (at least) Q_ℓ-piecewise continuous, we want to evaluate [Vφ](x). These values are required to compute the Faermann estimator in Section 5.8 and the weighted-residual estimator in Section 5.9. We also mention that they can be used to implement a collocation method instead of a Galerkin method to approximate the solution of (5). The evaluation is realized in ResidualWeak.c and can be called in MATLAB via evalV.
We start by splitting [Vφ](x) into the contributions of the single elements. For all Q′ ∈ Q_ℓ with Q′ ≠ Q, the integrands are (at least) continuous, and we can use standard tensor-Gauss quadrature after transforming the integration domain to the unit square. Recall that G(x, y) = −(1/2π) log |x − y|. With the fundamental theorem of calculus and the fact that γ is piecewise smooth with |γ′| > 0, it is easy to see that τ ↦ |γ(s) − γ(x_{ℓ,j−1} + τ(s − x_{ℓ,j−1}))|/τ and τ ↦ |γ(s) − γ(s + τ(x_{ℓ,j} − s))|/τ are smooth and (uniformly) larger than 0; see [Gan14, Lemma 5.6] for details. Thus, we can use Gauss quadrature with weight function 1 and log(τ) to approximate the final integrals.

5.4. Evaluation of double-layer operator K. Under the assumptions of Section 5.3, we want to evaluate [Ku](x). These values are required to compute the Faermann estimator in Section 5.8 and the weighted-residual estimator in Section 5.9. We also mention that they can be used to implement a collocation method instead of a Galerkin method to approximate the solution of (5). The evaluation is realized in ResidualWeak.c and can be called in MATLAB via evalRHSWeak.
Recall the abbreviation G_ν(x, y) from (58). With the fundamental theorem of calculus and the fact that γ is piecewise smooth with |γ′| > 0, it is easy to see that τ ↦ |γ(s) − γ(x_{ℓ,j−1} + τ(s − x_{ℓ,j−1}))|/τ and τ ↦ |γ(s) − γ(s + τ(x_{ℓ,j} − s))|/τ are smooth and (uniformly) larger than 0; see [Gan14, Lemma 5.6] for details. Based on (59), one can show that the final two integrands are (at least) continuous; see [Gan14, Lemma 5.7] for details. Thus, we can use standard Gauss quadrature to approximate the integrals.

5.5. Evaluation of hyper-singular integral operator W. Under the assumptions of Section 5.4, we want to evaluate [Wu](x) for x ∈ Γ \ N_γ. These values are required to compute the weighted-residual estimators in Section 5.9. We also mention that they can be used to implement a collocation method instead of a Galerkin method to approximate the solution of (6). The evaluation is realized in ResidualHyp.c and can be called in MATLAB via evalW. Maue's formula (56), V∂_Γ u ∈ H¹(Γ) (see Section 3.2), and integration by parts show that ⟨Wu, v⟩_Γ = −⟨∂_Γ(V∂_Γ u), v⟩_Γ for all v ∈ H¹(Γ). By density, we see that Wu = −∂_Γ(V∂_Γ u). To approximate [Wu](x), we replace (V∂_Γ u) ∘ γ|_Q by an interpolation polynomial, where the required point evaluations have been discussed in Section 5.3. The arclength derivatives can be computed with (10) and (15).
5.6. Evaluation of adjoint double-layer operator K′. Under the assumptions of Section 5.3, we want to evaluate [K′φ](x) for x ∈ Γ \ N_γ. These values are required to compute the weighted-residual estimators in Section 5.9. We also mention that they can be used to implement a collocation method instead of a Galerkin method to approximate the solution of (6). The evaluation is realized in ResidualHyp.c and can be called in MATLAB via evalRHSHyp. To approximate [K′φ](x), one can proceed along the lines of Section 5.4; see [Sch16, Section 6.4] for details.

5.7. Computation of h-h/2 error estimators. Let K be a knot vector with uniformly refined knot vector K^+ (see Section 3.4). The computation of the error indicators η_{V,hh2,ℓ}(x) and η_{W,hh2,ℓ}(x), x ∈ N_ℓ, of (38) and (52) is realized in HHEstWeak.c and HHEstHyp.c and can be called in MATLAB via buildHHEstWeak and buildHHEstHyp (which expect, among others, the coefficients of the Galerkin approximations on the coarse and on the fine mesh). Clearly, η_{V,hh2,ℓ}(x) can be approximated by standard Gauss quadrature. With (10) and (15), also the indicators η_{W,hh2,ℓ}(x) can be approximated by standard Gauss quadrature.

5.8. Computation of Faermann estimator. Let K be a knot vector. The computation of the error indicators η_{V,fae,ℓ}(x), x ∈ N_ℓ, of (39) is realized in FaerEstWeak.c and can be called in MATLAB via buildFaerEstWeak. Let Q, Q′ ∈ Q_ℓ with Q ∪ Q′ = ω_ℓ(x). We assume that Q ≠ Q′, i.e., x is not at the boundary of Γ ⊊ ∂Ω. The case Q = Q′ works analogously. We abbreviate the residual r_ℓ := f − VΦ_ℓ, which gives that η_{V,fae,ℓ}(x)² splits into the seminorms |r_ℓ|²_{H^{1/2}(Q)} and |r_ℓ|²_{H^{1/2}(Q′)} plus a double integral over Q × Q′. Given the evaluation procedures of Sections 5.3 and 5.4, the terms |r_ℓ|²_{H^{1/2}(Q)} and |r_ℓ|²_{H^{1/2}(Q′)} can be approximated exactly as in Case 2 of Section 5.2, where one can additionally exploit the symmetry of the integrand. The remaining integral can be approximated exactly as in Case 3 of Section 5.2.
5.9. Computation of weighted-residual error estimators. Let K be a knot vector. The computation of the error indicators η_{V,res,ℓ}(x) and η_{W,res,ℓ}(x), x ∈ N_ℓ, of (40) and (53) is realized in ResEstWeak.c and ResEstHyp.c, where the required projection Π is realized in PhiApprox.c. They can be called in MATLAB via buildResEstWeak, buildResEstHyp, and buildPhiApprox. To approximate the error indicators η_{V,res,ℓ}(x), x ∈ N_ℓ, of (40), we replace (f − VΦ_ℓ) ∘ γ|_Q, Q ∈ Q_ℓ with γ(Q) ⊆ ω_ℓ(x), by an interpolation polynomial, where the required point evaluations have been discussed in Sections 5.3-5.4. The arclength derivatives can be computed with (10) and (15). Finally, we use standard Gauss quadrature. To approximate the error indicators η_{W,res,ℓ}(x), x ∈ N_ℓ, of (53), we can use standard Gauss quadrature. The required point evaluations have been discussed in Sections 5.5-5.6.

5.10. Computation of data oscillations. Let K be a knot vector. Clearly, the oscillations osc_{W,ℓ}(x), x ∈ N_ℓ, of (54) can be approximated by standard Gauss quadrature.

Appendix A. Overview of functions provided by IGABEM2D
Root folder igabem2d/. The folder contains the three subfolders source/, MEX-files/, and functions/. With mexIGABEM.m, all required paths are added, the C files of source/ are compiled (via the MATLAB-C interface mex), and the resulting MEX-files are stored in MEX-files/. The Dörfler marking (55) together with the marking of Algorithm 3.1 is implemented in markNodesElements.m. The actual refinement steps of Algorithm 3.1 are implemented in increaseMult.m, which increases the multiplicity of marked nodes, and refineBoundaryMesh.m, which bisects at least all marked elements. Other auxiliary functions are BasisTrafo.m (to transform coefficients corresponding to a coarse basis to coefficients corresponding to a finer basis), ChebyNodes.m (to compute Chebyshev nodes), GaussData.m (to compute standard Gauss nodes and weights), IntMean.m (to compute the integral mean over Γ), LagrDerivMatrix.m (to compute derivatives of Lagrange polynomials), LogGaussData.m (to compute Gauss nodes and weights for the logarithmic weight function), and ShapeRegularity.m (to compute κ_ℓ).

Figure 1. Geometries and initial nodes for the examples from Section 3.6.

Figure 2. Example from Section 3.6.1: Distribution and histogram of the knots of the first step of Algorithm 3.2 where the error estimator is below 10^{−6}, without multiplicity increase (top) as well as with multiplicity increase (bottom), for splines of order p = 2 and the weighted-residual estimator with θ = 0.75.

Figure 4. Example from Section 3.6.2: Distribution and histogram of the knots of the first step of Algorithm 3.2 where the error estimator is below 10^{−6}, without multiplicity increase (top) as well as with multiplicity increase (bottom), for splines of order p = 2 and the weighted-residual estimator with θ = 0.75.

Figure 6. Geometry and initial nodes for the examples from Section 4.6.

Figure 7. Example from Section 4.6.1: Distribution and histogram of the knots of the first step of Algorithm 4.1 where the error estimator is below 10^{−6}, without multiplicity increase (top) as well as with multiplicity increase (bottom), for splines of order p = 3 and the weighted-residual estimator with θ = 0.5.

Figure 9. Visualization of the integral transformations from Cases 2 and 3 in Section 5.1: the reflection (σ, τ) ↦ (τ, σ) for the upper triangle and the (inverse) Duffy transformation (σ, τ) ↦ (σ, τ/σ) for both triangles. The colors indicate the sets onto which each of the colored points and lines is mapped.