Open Access (CC BY 4.0). Published online by De Gruyter, July 29, 2023.

A Convenient Inclusion of Inhomogeneous Boundary Conditions in Minimal Residual Methods

Rob Stevenson

Abstract

Inhomogeneous essential boundary conditions can be appended to a well-posed PDE to lead to a combined variational formulation. The domain of the corresponding operator is a Sobolev space on the domain Ω on which the PDE is posed, whereas the codomain is a Cartesian product of spaces, among them fractional Sobolev spaces of functions on ∂Ω. In this paper, easily implementable minimal residual discretizations are constructed which yield quasi-optimal approximations from the employed trial space and in which the evaluation of fractional Sobolev norms is fully avoided.

MSC 2010: 35B35; 35B45; 65N30

1 Introduction

1.1 MINRES Methods for Handling Inhomogeneous (Essential) Boundary Conditions

The possibility to handle inhomogeneous boundary conditions when solving PDEs is often mentioned as an advantage of minimal residual (MINRES) discretizations (e.g. [3, Chapter 12]). In most cases, however, it is not so clear how this can lead to satisfactory results.

Considering linear elliptic PDEs of second order, one option is to write the boundary value problem in an ultra-weak first-order system variational formulation, which renders all boundary conditions natural. The resulting “practical” MINRES method [16, §§ 4.4 & 5] yields a quasi-best approximation from the trial space to all variables with respect to L₂-norms.

If one is interested in approximation with respect to other norms, then one can resort to first-order system weak or mild-weak variational formulations, or to the standard second-order variational formulation. In those cases, Dirichlet, or Dirichlet and Neumann, boundary conditions are essential ones. In the case that homogeneous versions of those boundary conditions lead to a well-posed variational problem, inhomogeneous ones can be appended by enforcing them weakly, introducing additional test spaces of functions defined on the boundary ∂Ω of the domain Ω on which the PDE is posed. Then the combined variational formulation of the PDE and the boundary conditions can be shown to be well-posed. Indeed, for some Hilbert spaces X, Y₁ and Y₂, let F : X → Y₁ and T : X → Y₂ be bounded, and, with X₀ := ker T, let F : X₀ → Y₁ be boundedly invertible. In applications, F and T are operators corresponding to a weak imposition of the PDE and the boundary condition(s), so X and Y₁ are spaces of functions on Ω, and Y₂ is a (product of) space(s) of functions on ∂Ω. Then, assuming T is surjective, G := (F, T) : X → Y := Y₁ × Y₂ is boundedly invertible [12, Lemma 2.7].

Given a finite-dimensional “trial” subspace X^δ ⊂ X (“δ” stands for “discrete”), and given a forcing term and essential boundary data, the solution u ∈ X of the well-posed problem can be approximated by the minimizer u^δ ∈ X^δ of the residual in the norm of Y. A complication is that some coordinate spaces of Y will be negative-order or, in particular for function spaces on ∂Ω, fractional-order Sobolev spaces, all with norms that cannot be evaluated. Several solutions for this problem have been proposed, e.g. [4] for the case of negative-order Sobolev norms on Ω, and [18] for fractional-order Sobolev norms on ∂Ω. The resulting MINRES methods, however, are not guaranteed to produce approximate solutions that are quasi-best.

Viewing all negative- or fractional-order Sobolev norms as dual norms, so writing e.g. H^{1/2}(∂Ω) as the dual of H^{-1/2}(∂Ω), our approach in [16] is to discretize dual norms by replacing the involved suprema over infinite-dimensional spaces by suprema over a finite-dimensional (product) test space Y^δ ⊂ Y. Under an inf-sup condition on the pair (X^δ, Y^δ), the resulting MINRES approximation u^δ ∈ X^δ was shown to be quasi-best. The evaluation of the so-discretized dual norms is implemented by introducing the Riesz lift of the residual, viewed as a functional on Y^δ, as an extra variable λ^δ ∈ Y^δ, which results in a saddle-point system for (λ^δ, u^δ) ∈ Y^δ × X^δ.

In the case that the norms of one or more components of Y are fractional Sobolev norms, a remaining issue is the evaluation of the scalar product between functions from Y^δ. Without compromising quasi-optimality, it can be solved by replacing this scalar product on Y^δ by an equivalent scalar product defined in terms of the inverse of an optimal preconditioner of linear complexity for the corresponding stiffness matrix. This construction then permits eliminating the extra variable from the system, after which a symmetric positive definite system on X^δ remains, with a system matrix that can be applied in linear complexity. Optimal preconditioners of linear complexity for fractional Sobolev spaces are available, e.g. BPX or multigrid for positive orders and [11, 19] for negative orders.

Finally, by applying an optimal preconditioner of linear complexity on the trial space X^δ, the symmetric positive definite system can be solved iteratively, within a tolerance of the order of the discretization error, in linear complexity.

1.2 Current Paper

A disadvantage of the method from [16] is that its implementation is rather demanding, in particular because of the incorporation of the preconditioner(s) for fractional Sobolev norms on the boundary. In the current paper, we introduce an alternative approach, which can be expected to be computationally more costly, but which can be implemented very easily with a finite element package like NGSolve [17].

As in our approach from [16], given a trial space X^δ ⊂ X, we select a test space Y^δ ⊂ Y such that (X^δ, Y^δ) is (uniformly) inf-sup stable, replace the dual norm of the residual by the discretized norm associated to Y^δ, and introduce the Riesz lift of the residual in Y^δ as an extra variable, resulting in a saddle-point problem.

We proceed differently to deal with the problem that also the scalar product on Y might not be evaluable. Recalling that G : X → Y′ and thus G′ : Y → X′ are boundedly invertible, we equip Y with the equivalent norm ‖G′·‖_{X′}, known as the optimal test norm. The resulting dual norm on Y′ then equals ‖G^{-1}·‖_X, so that the (unfeasible) exact residual minimization in this norm would yield the best approximation from X^δ with respect to ‖·‖_X. Since also the new scalar product ⟨G′·, G′·⟩_{X′} on Y cannot be evaluated, for some finite-dimensional X̂^δ ⊂ X, we replace it by a discretized version ⟨G′·, G′·⟩_{(X̂^δ)′}. Assuming that also (Y^δ, X̂^δ) satisfies a (uniform) inf-sup condition, the resulting MINRES approximation is quasi-best. To make this approximation computable, we introduce the Riesz lift θ^δ ∈ X̂^δ of G′λ^δ, viewed as an element of (X̂^δ)′, as a third variable, and so saddle the system once more.

The resulting 3 × 3 block system for the triple (θ^δ, λ^δ, u^δ) ∈ X̂^δ × Y^δ × X^δ has one block that corresponds to the stiffness matrix of X̂^δ with respect to the “easy” scalar product ⟨·,·⟩_X, whereas all four other non-zero blocks correspond to system matrices of the bilinear form (G·)(·) with respect to X^δ × Y^δ or X̂^δ × Y^δ, or to transposes of those blocks. So the “difficult” scalar product ⟨·,·⟩_Y has vanished completely from the system, and the implementation is simple.

For suitable Y^δ and X̂^δ, we show that ‖θ^δ‖_X is an efficient and asymptotically reliable a posteriori estimator for ‖u − u^δ‖_X. We will use local X-norms of θ^δ to drive an adaptive finite element method.

1.3 A Nonintrusive Approach

Although it is not the subject of this paper, for completeness and comparison we recall, in our abstract framework, an alternative classical approach to dealing with inhomogeneous essential boundary data (e.g. [10]).

Recall the setting of T : X → Y₂ being bounded and surjective, F : X₀ := ker T → Y₁ being boundedly invertible, and G := (F, T) : X → Y := Y₁ × Y₂ being boundedly invertible. We will use that T has a bounded right inverse E : Y₂ → X (e.g. E := g ↦ G^{-1}(0, g)).

Given (f, g) ∈ Y₁ × Y₂, consider the problem Gu = (f, g). Let X^δ be a finite-dimensional subspace of X, and X₀^δ := X^δ ∩ X₀. Suppose that one has a method available that for g = 0 (i.e., “homogeneous essential boundary data”) produces a u^δ ∈ X^δ that is quasi-best, i.e., for some constant C > 0, ‖u − u^δ‖_X ≤ C inf_{w^δ ∈ X₀^δ} ‖u − w^δ‖_X. Using this method, one can approximate the solution u for general g as follows. First, construct g^δ ∈ ran T|_{X^δ}, i.e., g^δ = Tv^δ for some v^δ ∈ X^δ, that approximates g; and second, approximate with the aforementioned method the solution z ∈ X₀ of Gz = (f − Fv^δ, 0) by z^δ ∈ X₀^δ. Then

(1.1)
$$\begin{aligned}
\|u-(z^\delta+v^\delta)\|_X &\le \|u-(z+v^\delta)\|_X+\|z-z^\delta\|_X\\
&\le \|G^{-1}\|_{\mathcal L(Y,X)}\,\|g-g^\delta\|_{Y_2}+C\inf\{\|z+v^\delta-w^\delta\|_X:w^\delta\in X^\delta,\ Tw^\delta=g^\delta\}\\
&\le C\inf\{\|u-w^\delta\|_X:w^\delta\in X^\delta,\ Tw^\delta=g^\delta\}+(1+C)\|G^{-1}\|_{\mathcal L(Y,X)}\,\|g-g^\delta\|_{Y_2}.
\end{aligned}$$

Furthermore, following [1, § 3.3], let us consider the case that there exists a uniformly bounded projector J^δ : X → X^δ that preserves “homogeneous essential boundary conditions”, i.e., TJ^δ(Id − ET) = 0. Then

$$T\bigl(J^\delta u-J^\delta E(g-g^\delta)\bigr)=T\bigl(J^\delta(u-ETu)+J^\delta ETv^\delta\bigr)=Tv^\delta=g^\delta,$$

and so

(1.2)
$$\inf\{\|u-w^\delta\|_X:w^\delta\in X^\delta,\ Tw^\delta=g^\delta\}\le\|u-J^\delta u\|_X+\|J^\delta E(g-g^\delta)\|_X\le\|J^\delta\|_{\mathcal L(X,X)}\Bigl(\inf_{w^\delta\in X^\delta}\|u-w^\delta\|_X+\|E\|_{\mathcal L(Y_2,X)}\|g-g^\delta\|_{Y_2}\Bigr).$$

Finally, if g^δ := P^δ g for some uniformly bounded projector P^δ onto ran T|_{X^δ}, then

(1.3)
$$\|g-g^\delta\|_{Y_2}\le\|P^\delta\|_{\mathcal L(Y_2,Y_2)}\inf_{w^\delta\in X^\delta}\|Tu-Tw^\delta\|_{Y_2}\le\|P^\delta\|_{\mathcal L(Y_2,Y_2)}\|T\|_{\mathcal L(X,Y_2)}\inf_{w^\delta\in X^\delta}\|u-w^\delta\|_X,$$

and by combining (1.1), (1.2) and (1.3), we conclude that z δ + v δ is a quasi-best approximation from X δ to 𝑢.

An a posteriori estimator of the error u − (z^δ + v^δ) must control both the error z − z^δ in X and the error g − g^δ in Y₂. Reliable error estimators for the latter term have been developed in [14] (for Dirichlet data) and [5] (for Neumann data), but they require additional smoothness of g beyond membership in Y₂ (Dirichlet and Neumann data are required to be in H¹- or L₂-spaces on the boundary).

1.4 Outline

In Section 2, we recall the principle of a MINRES discretization and recall approaches to deal with the situation that the residual is measured in a dual norm ‖·‖_{Y′} and that possibly also the norm on Y cannot be evaluated. In Section 3, we present an alternative solution for the second problem by equipping Y with the optimal test norm. In addition, we discuss a posteriori error estimation. The method from Section 3 requires two uniform inf-sup conditions to be valid. In Section 4, we discuss them for four different well-posed variational formulations of Poisson’s problem with (generally inhomogeneous) Dirichlet and/or Neumann boundary conditions. Numerical results are presented in Section 5.

2 Least Squares or Minimal Residual Discretizations

2.1 Well-Posed Operator Equation

For some Hilbert spaces X and Y (for convenience over ℝ), an operator G ∈ L_is(X, Y′) and an f ∈ Y′, we consider the equation

(2.1) Gu = f.

With the notation G ∈ L_is(X, Y′), we mean that G is a boundedly invertible linear operator X → Y′, i.e., G ∈ L(X, Y′) and G^{-1} ∈ L(Y′, X). In the following, u will always denote the solution of (2.1).

For any closed, in applications finite-dimensional, subspace {0} ≠ X^δ ⊂ X from a family {X^δ}_δ of such subspaces[1], consider the minimal residual or least squares approximation

(2.2) $u^\delta:=\operatorname*{argmin}_{w\in X^\delta}\tfrac12\|Gw-f\|_{Y'}^2.$

2.2 Discretizing the Norm on Y and Saddle-Point Formulation

Unless Y is such that the Riesz map R_Y : Y′ → Y can be efficiently evaluated, i.e., Y is a (Cartesian product of) L₂-space(s), the minimizer u^δ from (2.2) is not computable.

Therefore, for a closed, in applications finite-dimensional, subspace {0} ≠ Y^δ = Y^δ(X^δ) ⊂ Y with

(2.3) $\gamma^\delta:=\inf_{0\neq w\in X^\delta}\sup_{0\neq v\in Y^\delta}\frac{|(Gw)(v)|}{\|v\|_Y\,\|Gw\|_{Y'}}>0,$

we replace the above u^δ by

(2.4) $u^\delta:=\operatorname*{argmin}_{w\in X^\delta}\tfrac12\sup_{0\neq v\in Y^\delta}\frac{|(Gw-f)(v)|^2}{\|v\|_Y^2}.$

Assuming inf_δ γ^δ > 0, the following theorem shows that u^δ is a quasi-best approximation to u from X^δ.

Theorem 2.1 ([16, Theorem 3.3])

Setting |||·|||_X := ‖G·‖_{Y′} on X, for u^δ from (2.4), it holds that

$$\inf_{u\in X\setminus X^\delta}\frac{\inf_{w\in X^\delta}|||u-w|||_X}{|||u-u^\delta|||_X}=\gamma^\delta.$$
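In particular (an immediate consequence of the theorem, made explicit here), the discretized residual minimizer is quasi-best in the |||·|||_X-norm:

$$|||u-u^\delta|||_X\le\frac{1}{\gamma^\delta}\,\inf_{w\in X^\delta}|||u-w|||_X.$$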

The solution u ∈ X of Gu = f is equal to argmin_{w∈X} ½‖Gw − f‖²_{Y′}, and so it solves the corresponding Euler–Lagrange equations

$$\langle f-Gu,G\tilde u\rangle_{Y'}=0\qquad(\tilde u\in X).$$

Introducing λ := R_Y(f − Gu), the pair (λ, u) ∈ Y × X is the unique solution of

$$\langle\lambda,\tilde\lambda\rangle_Y+(Gu)(\tilde\lambda)+(G\tilde u)(\lambda)=f(\tilde\lambda)\qquad((\tilde\lambda,\tilde u)\in Y\times X).$$

Completely analogously, the minimal residual approximation u^δ from (2.4) can be computed as the second component of the pair (λ^δ, u^δ) ∈ Y^δ × X^δ that solves

(2.5) $\langle\lambda^\delta,\tilde\lambda^\delta\rangle_Y+(Gu^\delta)(\tilde\lambda^\delta)+(G\tilde u^\delta)(\lambda^\delta)=f(\tilde\lambda^\delta)\qquad((\tilde\lambda^\delta,\tilde u^\delta)\in Y^\delta\times X^\delta).$
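In terms of bases for X^δ and Y^δ, (2.5) is a symmetric linear saddle-point system. The following sketch (a hypothetical illustration with generic scipy matrices, not code from the paper or from [16]) shows its block structure and how u^δ is extracted; here A denotes the Gram matrix of ⟨·,·⟩_Y on Y^δ, B the matrix of (Gw)(v), and F the load vector of f on Y^δ.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def minres_saddle_solve(A, B, F):
    """Solve the saddle-point system corresponding to (2.5):
        [ A   B ] [lam]   [F]
        [ B^T 0 ] [ u ] = [0],
    where A[i, j] = <psi_j, psi_i>_Y for a basis {psi_i} of Y^delta,
    B[i, j] = (G phi_j)(psi_i) for a basis {phi_j} of X^delta,
    and F[i] = f(psi_i)."""
    nY, nX = B.shape
    K = sp.bmat([[A, B], [B.T, None]], format="csc")
    rhs = np.concatenate([F, np.zeros(nX)])
    sol = spla.spsolve(K, rhs)
    return sol[:nY], sol[nY:]          # lam^delta, u^delta

# hypothetical tiny example with a random SPD Gram matrix and injective B
rng = np.random.default_rng(0)
nY, nX = 8, 3
R = rng.standard_normal((nY, nY))
A = sp.csc_matrix(R @ R.T + nY * np.eye(nY))
B = sp.csc_matrix(rng.standard_normal((nY, nX)))
F = rng.standard_normal(nY)
lam, u = minres_saddle_solve(A, B, F)
# eliminating lam shows u solves the normal equations B^T A^{-1} (B u - F) = 0
print(np.linalg.norm(B.T @ spla.spsolve(A, B @ u - F)))   # ~ 0
```

Eliminating λ^δ shows that the computed u^δ indeed minimizes the discretized dual norm of the residual from (2.4).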

2.3 Changing the Norm on Y δ

In several applications, not only ‖·‖_Y but also ⟨·,·⟩_Y cannot be efficiently evaluated. This occurs, for example, when Y is a Cartesian product of spaces with one or more of them being fractional Sobolev spaces. Therefore, let ⟨·,·⟩_{Y^δ} be an (efficiently evaluable) scalar product on Y^δ, whose corresponding norm ‖·‖_{Y^δ} satisfies, for some 0 < m^δ ≤ M^δ < ∞,

(2.6) $m^\delta\|\cdot\|_{Y^\delta}^2\le\|\cdot\|_Y^2\le M^\delta\|\cdot\|_{Y^\delta}^2\quad\text{on }Y^\delta.$

Now we replace u^δ from (2.4) by

$$u^\delta:=\operatorname*{argmin}_{w\in X^\delta}\tfrac12\sup_{0\neq v\in Y^\delta}\frac{|(Gw-f)(v)|^2}{\|v\|_{Y^\delta}^2},$$

being the second component of the pair (λ^δ, u^δ) ∈ Y^δ × X^δ that solves

(2.7) $\langle\lambda^\delta,\tilde\lambda^\delta\rangle_{Y^\delta}+(Gu^\delta)(\tilde\lambda^\delta)+(G\tilde u^\delta)(\lambda^\delta)=f(\tilde\lambda^\delta)\qquad((\tilde\lambda^\delta,\tilde u^\delta)\in Y^\delta\times X^\delta).$

Assuming inf_δ γ^δ > 0 and sup_δ M^δ/m^δ < ∞, the next result shows that this u^δ is a quasi-best approximation to u from X^δ.

Proposition 2.2 ([16, Theorem 3.6])

For the solution (λ^δ, u^δ) ∈ Y^δ × X^δ of (2.7), it holds that

$$|||u-u^\delta|||_X\le\frac{M^\delta}{\gamma^\delta m^\delta}\inf_{w\in X^\delta}|||u-w|||_X.$$

Concerning the selection of ⟨·,·⟩_{Y^δ}, let K^δ = (K^δ)′ ∈ L_is((Y^δ)′, Y^δ) be an operator whose application can be computed efficiently. Such an operator is called a preconditioner for A^δ ∈ L_is(Y^δ, (Y^δ)′) defined by (A^δ v)(ṽ) := ⟨v, ṽ⟩_Y. Setting

$$\langle v,\tilde v\rangle_{Y^\delta}:=\bigl((K^\delta)^{-1}v\bigr)(\tilde v)\qquad(v,\tilde v\in Y^\delta),$$

the corresponding norm ‖·‖_{Y^δ} satisfies (2.6) with m^δ = λ_min(K^δA^δ) and M^δ = λ_max(K^δA^δ). By substituting the above choice for ⟨·,·⟩_{Y^δ} in (2.7) and subsequently eliminating λ^δ from the saddle-point system, one infers that u^δ ∈ X^δ can be computed as the solution of the symmetric positive definite system

$$(G\tilde u^\delta)\bigl(K^\delta(Gu^\delta-f)\bigr)=0\qquad(\tilde u^\delta\in X^\delta).$$
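In matrix terms, this elimination yields the normal equations B^T K (Bu − F) = 0, with K the matrix representation of K^δ. The sketch below is a hypothetical illustration of this step with generic scipy objects (here the “ideal” choice K = A^{-1} stands in for a BPX/multigrid-type preconditioner); it is not code from the paper.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def eliminate_and_solve(B, K_apply, F):
    """Solve (G u~)(K^delta (G u^delta - f)) = 0 for all u~ in X^delta, i.e.
    the SPD normal equations  B^T K B u = B^T K F,  by conjugate gradients.
    B[i, j] = (G phi_j)(psi_i); K_apply(r) applies the preconditioner K^delta."""
    nY, nX = B.shape
    S = spla.LinearOperator((nX, nX), matvec=lambda u: B.T @ K_apply(B @ u))
    u, info = spla.cg(S, B.T @ K_apply(F))
    assert info == 0
    return u

# hypothetical illustration: random Gram matrix A of Y^delta, inf-sup stable B,
# and the exact inverse of A used as the preconditioner K
rng = np.random.default_rng(1)
nY, nX = 12, 4
R = rng.standard_normal((nY, nY))
A = sp.csc_matrix(R @ R.T + nY * np.eye(nY))
B = sp.csc_matrix(rng.standard_normal((nY, nX)))
F = rng.standard_normal(nY)
u = eliminate_and_solve(B, spla.factorized(A), F)
```

Since K^δ is symmetric positive definite and, by the inf-sup condition, B has full column rank, the matrix B^T K B is symmetric positive definite, so conjugate gradients applies.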

3 Equipping 𝑌 with the Optimal Test Norm

3.1 Optimal Test Norm

In applications, the solution proposed in Section 2.3 to circumvent the evaluation of a “difficult” scalar product ⟨·,·⟩_Y by means of the introduction of a preconditioner may require considerable coding effort. This holds true when Y is a Cartesian product of spaces with one or more of them being fractional Sobolev spaces on a manifold. In the current section, we give an alternative to this approach by replacing the canonical norm on Y by the so-called optimal test norm.

As a consequence of G ∈ L_is(X, Y′), it holds that G′ ∈ L_is(Y, X′). From here on, we replace the canonical norm on Y by the equivalent norm ‖G′·‖_{X′} and, correspondingly, equip Y′ with the resulting dual norm: for f ∈ Y′,

$$\sup_{0\neq v\in Y}\frac{|f(v)|}{\|G'v\|_{X'}}=\sup_{0\neq g\in X'}\frac{|g(G^{-1}f)|}{\|g\|_{X'}}=\|G^{-1}f\|_X.$$

We conclude that, with respect to these new norms on Y and Y′, G′ : Y → X′ and G : X → Y′ are isometries. For this reason, ‖G′·‖_{X′} is known as the optimal test norm on Y (see [2, 23, 8, 6]).
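To make explicit why this norm is called optimal (a one-line verification added here, restating the observation from Section 1.2): measuring the residual in the new dual norm turns residual minimization into best approximation with respect to ‖·‖_X,

$$\operatorname*{argmin}_{w\in X^\delta}\|Gw-f\|_{Y'}=\operatorname*{argmin}_{w\in X^\delta}\|G^{-1}(Gw-f)\|_X=\operatorname*{argmin}_{w\in X^\delta}\|w-u\|_X.$$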

3.2 Discretizing the Norm on Y and Saddle-Point Formulation

Also with the new norm on Y′, the minimizer in (2.2) cannot be computed. Following Section 2.2, writing this norm in dual form sup_{0≠v∈Y} |·(v)|/‖G′v‖_{X′}, we discretize it by supremizing over Y^δ ∖ {0} and rewrite the resulting least-squares problem in saddle-point form.

So we consider

$$u^\delta:=\operatorname*{argmin}_{w\in X^\delta}\tfrac12\sup_{0\neq v\in Y^\delta}\frac{|(Gw-f)(v)|^2}{\|G'v\|_{X'}^2},$$

being the second component of (λ^δ, u^δ) ∈ Y^δ × X^δ that solves

(3.1) $\langle G'\lambda^\delta,G'\tilde\lambda^\delta\rangle_{X'}+(Gu^\delta)(\tilde\lambda^\delta)+(G\tilde u^\delta)(\lambda^\delta)=f(\tilde\lambda^\delta)\qquad((\tilde\lambda^\delta,\tilde u^\delta)\in Y^\delta\times X^\delta).$

Theorem 2.1 applies, where now |||·|||_X reads as the canonical norm ‖·‖_X, i.e.,

$$\inf_{u\in X\setminus X^\delta}\frac{\inf_{w\in X^\delta}\|u-w\|_X}{\|u-u^\delta\|_X}=\gamma^\delta,$$

with the definition of γ^δ reading as

(3.2) $\gamma^\delta:=\inf_{0\neq w\in X^\delta}\sup_{0\neq y\in Y^\delta}\frac{|(Gw)(y)|}{\|G'y\|_{X'}\,\|w\|_X}>0.$

3.3 Discretizing the Norm on X and Saddling the System Once More

Unless X is such that the Riesz map R_X : X′ → X can be efficiently evaluated, i.e., X is a (Cartesian product of) L₂-space(s), as in Section 2.3 we are in the situation that the norm on Y^δ, here ‖G′·‖_{X′}, cannot be evaluated, so that (3.1) does not correspond to an implementable method. Similarly to Section 2.3, we will therefore replace this norm by an equivalent one.

Let {0} ≠ X̂^δ = X̂^δ(Y^δ) ⊂ X be a finite-dimensional subspace for which[2]

(3.3) $\mu^\delta:=\inf_{0\neq v\in Y^\delta}\sup_{0\neq w\in\hat X^\delta}\frac{|(G'v)(w)|}{\|w\|_X\,\|G'v\|_{X'}}>0.$

Setting, for functionals on X̂^δ ⊂ X, $\|\cdot\|_{(\hat X^\delta)'}:=\sup_{0\neq w\in\hat X^\delta}\frac{|\cdot(w)|}{\|w\|_X}$ and denoting with ⟨·,·⟩_{(X̂^δ)′} the corresponding bilinear form, we have

(3.4) $\|G'\cdot\|_{(\hat X^\delta)'}^2\le\|G'\cdot\|_{X'}^2\le\frac{1}{(\mu^\delta)^2}\|G'\cdot\|_{(\hat X^\delta)'}^2\quad\text{on }Y^\delta,$

being the counterpart of (2.6). As in Section 2.3, we now replace (3.1) by the problem of finding (λ^δ, u^δ) ∈ Y^δ × X^δ that solves

(3.5) $\langle G'\lambda^\delta,G'\tilde\lambda^\delta\rangle_{(\hat X^\delta)'}+(Gu^\delta)(\tilde\lambda^\delta)+(G\tilde u^\delta)(\lambda^\delta)=f(\tilde\lambda^\delta)\qquad((\tilde\lambda^\delta,\tilde u^\delta)\in Y^\delta\times X^\delta).$

Recalling that, with our current norm ‖G′·‖_{X′} on Y, the norm |||·|||_X reads as ‖·‖_X, an application of Proposition 2.2 shows the following.

Proposition 3.1

With γ^δ and μ^δ as defined in (3.2) and (3.3), for u^δ ∈ X^δ solving (3.5), it holds that

$$\|u-u^\delta\|_X\le\frac{1}{\gamma^\delta\mu^\delta}\inf_{w\in X^\delta}\|u-w\|_X.$$

To turn (3.5) into an equivalent evaluable system, we introduce an additional variable. With the Riesz map R_{X̂^δ} : (X̂^δ)′ → X̂^δ, let θ^δ := R_{X̂^δ}G′λ^δ, i.e.,

$$(G'\lambda^\delta)(\tilde\theta^\delta)=\langle\theta^\delta,\tilde\theta^\delta\rangle_X\qquad(\tilde\theta^\delta\in\hat X^\delta).$$

Then

$$\langle G'\lambda^\delta,G'\tilde\lambda^\delta\rangle_{(\hat X^\delta)'}=\langle R_{\hat X^\delta}G'\lambda^\delta,R_{\hat X^\delta}G'\tilde\lambda^\delta\rangle_X=(G'\tilde\lambda^\delta)(\theta^\delta)\qquad(\tilde\lambda^\delta\in Y^\delta).$$

By substituting the latter relation into (3.5) and by adding the preceding equation that defines θ^δ, we arrive at the equivalent problem of finding (θ^δ, λ^δ, u^δ) ∈ X̂^δ × Y^δ × X^δ that satisfies

$$\langle\theta^\delta,\tilde\theta^\delta\rangle_X-(G\tilde\theta^\delta)(\lambda^\delta)-(G\theta^\delta)(\tilde\lambda^\delta)-(Gu^\delta)(\tilde\lambda^\delta)-(G\tilde u^\delta)(\lambda^\delta)=-f(\tilde\lambda^\delta)$$

for all (θ̃^δ, λ̃^δ, ũ^δ) ∈ X̂^δ × Y^δ × X^δ, or, in equivalent block form,

(3.6) $\begin{cases}\langle\theta^\delta,\tilde\theta^\delta\rangle_X-(G\tilde\theta^\delta)(\lambda^\delta)=0&(\tilde\theta^\delta\in\hat X^\delta),\\-(G\theta^\delta)(\tilde\lambda^\delta)-(Gu^\delta)(\tilde\lambda^\delta)=-f(\tilde\lambda^\delta)&(\tilde\lambda^\delta\in Y^\delta),\\-(G\tilde u^\delta)(\lambda^\delta)=0&(\tilde u^\delta\in X^\delta).\end{cases}$

Notice that, in comparison to (2.5), the “difficult” scalar product ⟨·,·⟩_Y has completely disappeared from the system and that, apart from the usually “easy” scalar product ⟨·,·⟩_X, only the bilinear form (G·)(·) has to be implemented. To satisfy the inf-sup conditions γ^δ > 0 and μ^δ > 0, the auxiliary spaces Y^δ and X̂^δ have to be sufficiently large (in any case, dim X̂^δ ≥ dim Y^δ ≥ dim X^δ is needed), which makes solving (3.6) computationally relatively expensive. On the other hand, the implementation of the method is quite simple, whereas Proposition 3.1 shows that, for “large” Y^δ and X̂^δ, the obtained solution will be close to the best approximation from X^δ. Indeed, notice that, for Y^δ = Y and X̂^δ = X, u^δ is the best approximation to u from X^δ with respect to ‖·‖_X, whereas the second line in (3.6) shows that then θ^δ = u − u^δ.
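In terms of bases for X̂^δ, Y^δ and X^δ, (3.6) is a symmetric 3 × 3 block system that involves only the ⟨·,·⟩_X Gram matrix M on X̂^δ and the two system matrices B̂ and B of (G·)(·). The sketch below (a hypothetical illustration with generic scipy matrices, not the author's NGSolve implementation) assembles and solves it with a sparse direct solver and also returns ‖θ^δ‖_X, the a posteriori error estimator of Section 3.4.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def solve_block_system(M, Bhat, B, F):
    """Solve the 3x3 block form of (3.6):
        [  M     -Bhat^T   0 ] [theta]   [  0 ]
        [ -Bhat    0      -B ] [ lam ] = [ -F ]
        [  0      -B^T     0 ] [  u  ]   [  0 ],
    where M[i, j] = <hatphi_j, hatphi_i>_X on hat-X^delta,
    Bhat[i, j] = (G hatphi_j)(psi_i), B[i, j] = (G phi_j)(psi_i),
    and F[i] = f(psi_i). Returns theta, lam, u and the estimator
    ||theta||_X = sqrt(theta^T M theta)."""
    nXhat, (nY, nX) = M.shape[0], B.shape
    K = sp.bmat([[M, -Bhat.T, None],
                 [-Bhat, None, -B],
                 [None, -B.T, None]], format="csc")
    rhs = np.concatenate([np.zeros(nXhat), -F, np.zeros(nX)])
    sol = spla.spsolve(K, rhs)
    theta, lam, u = sol[:nXhat], sol[nXhat:nXhat + nY], sol[nXhat + nY:]
    return theta, lam, u, float(np.sqrt(theta @ (M @ theta)))

# hypothetical tiny example respecting dim hat-X^delta >= dim Y^delta >= dim X^delta
rng = np.random.default_rng(2)
nXhat, nY, nX = 10, 6, 3
R = rng.standard_normal((nXhat, nXhat))
M = sp.csc_matrix(R @ R.T + nXhat * np.eye(nXhat))
Bhat = sp.csc_matrix(rng.standard_normal((nY, nXhat)))
B = sp.csc_matrix(rng.standard_normal((nY, nX)))
F = rng.standard_normal(nY)
theta, lam, u, estimator = solve_block_system(M, Bhat, B, F)
```

For the model problems of Section 4, all three matrices are sparse finite element matrices; the direct factorization above merely keeps the illustration short, and in practice an iterative solver may be preferred.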

3.4 A Posteriori Error Estimation

As is well known, γ^δ > 0 is guaranteed by the existence of a Π^δ ∈ L(Y, Y^δ) with

(3.7) $(GX^\delta)\bigl(\operatorname{ran}(\mathrm{Id}-\Pi^\delta)\bigr)=0,$

where γ^δ ≥ ‖G′Π^δ(G′)^{-1}‖_{L(X′,X′)}^{-1}, and conversely, γ^δ > 0 guarantees the existence of such a “Fortin interpolator”, being even a projector onto Y^δ, with ‖G′Π^δ(G′)^{-1}‖_{L(X′,X′)} = 1/γ^δ (e.g. [9, Lemma 26.9] or [20, Proposition 5.1]).

In the following, let Π^δ be a Fortin interpolator with ‖G′Π^δ(G′)^{-1}‖_{L(X′,X′)} ≂ 1/γ^δ (so that inf_δ γ^δ > 0 is equivalent to sup_δ ‖G′Π^δ(G′)^{-1}‖_{L(X′,X′)} < ∞).

Proposition 3.2

With (θ^δ, λ^δ, u^δ) ∈ X̂^δ × Y^δ × X^δ being the solution of (3.6), and osc^δ(f) := ‖G^{-1}(Id − Π^δ)′f‖_X, it holds that

(3.8) $\mu^\delta\|\theta^\delta\|_X\le\|u-u^\delta\|_X\le\|G'\Pi^\delta(G')^{-1}\|_{\mathcal L(X',X')}\,\|\theta^\delta\|_X+\operatorname{osc}^\delta(f).$

Proof

Thanks to θ^δ = R_{X̂^δ}G′λ^δ, we have

(3.9) $\|\theta^\delta\|_X=\sup_{0\neq v^\delta\in Y^\delta}\frac{|(G'v^\delta)(\theta^\delta)|}{\|G'v^\delta\|_{(\hat X^\delta)'}}=\sup_{0\neq v^\delta\in Y^\delta}\frac{|(f-Gu^\delta)(v^\delta)|}{\|G'v^\delta\|_{(\hat X^\delta)'}}.$

Using (3.7) and the left inequality in (3.4), we find that, for v ∈ Y,

$$\begin{aligned}
|(f-Gu^\delta)(v)|&\le|(f-Gu^\delta)(\Pi^\delta v)|+|f((\mathrm{Id}-\Pi^\delta)v)|\\
&\le\|G'\Pi^\delta v\|_{X'}\sup_{0\neq v^\delta\in Y^\delta}\frac{|(f-Gu^\delta)(v^\delta)|}{\|G'v^\delta\|_{X'}}+|(G'v)(G^{-1}(\mathrm{Id}-\Pi^\delta)'f)|\\
&\le\|G'\Pi^\delta v\|_{X'}\sup_{0\neq v^\delta\in Y^\delta}\frac{|(f-Gu^\delta)(v^\delta)|}{\|G'v^\delta\|_{(\hat X^\delta)'}}+|(G'v)(G^{-1}(\mathrm{Id}-\Pi^\delta)'f)|\\
&\le\bigl(\|G'\Pi^\delta(G')^{-1}\|_{\mathcal L(X',X')}\|\theta^\delta\|_X+\|G^{-1}(\mathrm{Id}-\Pi^\delta)'f\|_X\bigr)\|G'v\|_{X'}.
\end{aligned}$$

From

$$\sup_{0\neq v\in Y}\frac{|(f-Gu^\delta)(v)|}{\|G'v\|_{X'}}=\|u-u^\delta\|_X,$$

we conclude the right inequality in (3.8).

The left inequality in (3.8) follows from the right inequality in (3.4) and again (3.9). ∎

Remark 3.3 (Bounding the Oscillation Term)

By taking Π^δ to be the Fortin projector with

$$\|G'\Pi^\delta(G')^{-1}\|_{\mathcal L(X',X')}=\frac{1}{\gamma^\delta},$$

we find that

$$\operatorname{osc}^\delta(f)=\sup_{0\neq v\in Y}\frac{|((\mathrm{Id}-\Pi^\delta)'f)(v)|}{\|G'v\|_{X'}}=\sup_{0\neq v\in Y}\frac{|f((\mathrm{Id}-\Pi^\delta)v)|}{\|G'v\|_{X'}}=\sup_{0\neq v\in Y}\inf_{w^\delta\in X^\delta}\frac{|(G(u-w^\delta))((\mathrm{Id}-\Pi^\delta)v)|}{\|G'v\|_{X'}}=\sup_{0\neq v\in Y}\inf_{w^\delta\in X^\delta}\frac{|(G'(\mathrm{Id}-\Pi^\delta)v)(u-w^\delta)|}{\|G'v\|_{X'}},$$

and so $\operatorname{osc}^\delta(f)\le\frac{1}{\gamma^\delta}\inf_{w^\delta\in X^\delta}\|u-w^\delta\|_X$.

Even better is when Y^δ is selected such that it allows for the construction of a (uniformly bounded) Fortin interpolator for which osc^δ(f) is of higher order than inf_{w^δ∈X^δ} ‖u − w^δ‖_X. Then, in any case asymptotically, besides being efficient, the estimator ‖θ^δ‖_X is also reliable.

The derivation of the a posteriori error estimator in this subsection is similar to that in [7], specialized to the use of the optimal test norm on Y. Modifications were needed because of the replacement on Y^δ of the optimal test norm ‖G′·‖_{X′} by ‖G′·‖_{(X̂^δ)′} and the introduction of the extra variable θ^δ. In [7], the Y-norm of the approximate Riesz lift λ^δ ∈ Y^δ of the residual f − Gu^δ was used as the a posteriori error estimator, whereas we use the X-norm of the approximate error θ^δ ∈ X̂^δ. Local X-norms of θ^δ ∈ X̂^δ will turn out to be effective for driving an adaptive finite element method.

4 Applications

4.1 Model Elliptic Second-Order Boundary Value Problem

On a bounded Lipschitz domain Ω ⊂ ℝ^d, where d ≥ 2, and closed Γ_D, Γ_N ⊂ ∂Ω with

$$\Gamma_D\cup\Gamma_N=\partial\Omega\quad\text{and}\quad|\Gamma_D\cap\Gamma_N|=0,$$

consider the following elliptic second-order boundary value problem:

(4.1) $\begin{cases}-\operatorname{div}A\nabla u+Bu=g&\text{on }\Omega,\\ u=h_D&\text{on }\Gamma_D,\\ \vec n\cdot A\nabla u=h_N&\text{on }\Gamma_N,\end{cases}$

where B ∈ L(H¹(Ω), L₂(Ω)) and A(·) = A(·)^⊤ ∈ L_∞(Ω)^{d×d} satisfies ξ^⊤A(·)ξ ≂ ‖ξ‖² (ξ ∈ ℝ^d). We assume that the matrix A and the first-order operator B are such that[3]

$$w\mapsto\Bigl(v\mapsto\int_\Omega A\nabla w\cdot\nabla v+Bw\,v\,dx\Bigr)\in\mathcal L_{\mathrm{is}}\bigl(H^1_{0,\Gamma_D}(\Omega),H^1_{0,\Gamma_D}(\Omega)'\bigr).$$

4.2 Well-Posed Variational Formulations

In [21, 16], the following variational formulations (i)–(iv) of (4.1) have been shown to be well-posed. From the formulations given, one easily derives the expressions for G, u, f, X and Y. Implicitly we will assume that the data g, h_D and h_N are such that f is in the dual of Y. In the case of homogeneous essential boundary conditions, formulations (i)–(iii) can be simplified by incorporating such conditions in the definition of the domain X, an option we do not consider here.

  1. (Second-order weak formulation) Find u ∈ H¹(Ω) such that

     $$\int_\Omega A\nabla u\cdot\nabla v_1+Bu\,v_1\,dx+\int_{\Gamma_D}u\,v_2\,ds=g(v_1)+\int_{\Gamma_N}h_N v_1\,ds+\int_{\Gamma_D}h_D v_2\,ds$$

     for all (v₁, v₂) ∈ H¹_{0,Γ_D}(Ω) × H̃^{-1/2}(Γ_D).

  2. (First-order mild formulation) Find (p⃗, u) ∈ H(div; Ω) × H¹(Ω) such that

     $$\int_\Omega(\vec p-A\nabla u)\cdot\vec v_1+(Bu-\operatorname{div}\vec p)v_2\,dx+\int_{\Gamma_D}u\,v_3\,ds+\int_{\Gamma_N}\vec p\cdot\vec n\,v_4\,ds=\int_\Omega g\,v_2\,dx+\int_{\Gamma_D}h_D v_3\,ds+\int_{\Gamma_N}h_N v_4\,ds$$

     for all (v₁, v₂, v₃, v₄) ∈ L₂(Ω)^d × L₂(Ω) × H̃^{-1/2}(Γ_D) × H^{1/2}_{00}(Γ_N).

  3. (First-order mild-weak formulation) Find (p⃗, u) ∈ L₂(Ω)^d × H¹(Ω) such that

     $$\int_\Omega(\vec p-A\nabla u)\cdot\vec v_1+\vec p\cdot\nabla v_2+Bu\,v_2\,dx+\int_{\Gamma_D}u\,v_3\,ds=g(v_2)+\int_{\Gamma_N}h_N v_2\,ds+\int_{\Gamma_D}h_D v_3\,ds$$

     for all (v₁, v₂, v₃) ∈ L₂(Ω)^d × H¹_{0,Γ_D}(Ω) × H̃^{-1/2}(Γ_D).

  4. (First-order ultra-weak formulation) Assuming Bw = b⃗·∇w + cw for some b⃗ ∈ L_∞(Ω)^d and c ∈ L_∞(Ω), find (p⃗, u) ∈ L₂(Ω)^d × L₂(Ω) such that

     $$\int_\Omega A^{-1}\vec p\cdot\vec v_1+u\operatorname{div}\vec v_1+(\vec b\cdot A^{-1}\vec p+cu)v_2+\vec p\cdot\nabla v_2\,dx=\int_{\Gamma_D}h_D\,\vec v_1\cdot\vec n\,ds+g(v_2)+\int_{\Gamma_N}h_N v_2\,ds$$

     for all (v₁, v₂) ∈ H_{0,Γ_N}(div; Ω) × H¹_{0,Γ_D}(Ω).

4.3 Finite Element Discretizations and Verification of the Uniform inf-sup Conditions inf_δ γ^δ > 0 and inf_δ μ^δ > 0

We assume that Ω ⊂ ℝ^d is a polytope and let {T_δ}_δ be a family of conforming, uniformly shape-regular partitions of Ω into (closed) d-simplices. With F(T_δ), we denote the set of (closed) facets of the K ∈ T_δ. We assume that Γ_D, and thus Γ_N, is the union of some e ∈ F(T_δ). We set F_D^δ := {e ∈ F(T_δ) : e ⊂ Γ_D}, with a similar definition of F_N^δ.

For p ∈ ℕ₀ and k ∈ ℕ₀ ∪ {−1}, with S_p^k(T_δ) we denote the space of all C^k piecewise polynomials of degree p with respect to T_δ. Spaces S_p^k(F_D^δ) and S_p^k(F_N^δ) are defined analogously.

We take A = Id, although the arguments given below apply equally well when A is piecewise constant with respect to T_δ. For convenience, we take B = 0, but the case of B being a first-order PDO with piecewise constant coefficients with respect to T_δ poses no additional difficulties.

For the examples (i)–(iv) from Section 4.2 and d ≥ 2, we discuss below the choice of the spaces X^δ, Y^δ and X̂^δ for the uniform inf-sup conditions to be satisfied.

  1. (Second-order weak formulation) As shown in [16, § 4.1], for p ≥ 1 and X^δ := S_p^0(T_δ), the choice

     Y^δ := (S_{p+d-1}^0(T_δ) ∩ H¹_{0,Γ_D}(Ω)) × S_p^{-1}(F_D^δ)

     gives inf_δ γ^δ > 0. Arguments analogous to those used to prove inf_δ γ^δ > 0 in [16, Proposition 4.1] show that X̂^δ := S_{p+2d-2}^0(T_δ) ensures inf_δ μ^δ > 0. To guarantee that data oscillation is of higher order, in the definition of Y^δ one has to replace S_{p+d-1}^0(T_δ) by S_{p+d}^0(T_δ), and consequently take X̂^δ := S_{p+2d-1}^0(T_δ).

  2. (First-order mild formulation) As follows from [16, § 4.2], for p ≥ 1 and X^δ := RT_{p-1}(T_δ) × S_p^0(T_δ), the choice Y^δ := S_p^{-1}(T_δ)^d × S_{p-1}^{-1}(T_δ) × S_p^{-1}(F_D^δ) × (S_{d+p-1}^0(F_N^δ) ∩ H¹_0(Γ_N)) gives inf_δ γ^δ > 0, where data oscillation is of higher order. Based on our empirical investigations in [16] of the inf-sup constant γ^δ for the first-order ultra-weak formulation, we conjecture that, for d = 2, with X̂^δ := RT_{p+d-1}(T_δ) × S_{p+d-1}^0(T_δ), it holds that inf_δ μ^δ > 0.

  3. (First-order mild-weak formulation) As can be seen from [16, § 4.3], for p ≥ 1 and X^δ := S_{p-1}^{-1}(T_δ)^d × S_p^0(T_δ), the choice Y^δ := S_{p-1}^{-1}(T_δ)^d × (S_{p+d-1}^0(T_δ) ∩ H¹_{0,Γ_D}(Ω)) × S_p^{-1}(F_D^δ) gives inf_δ γ^δ > 0 and a data oscillation that is of higher order. Now, by taking X̂^δ := S_{p+d-2}^{-1}(T_δ)^d × S_{p+d}^0(T_δ), one guarantees inf_δ μ^δ > 0.

  4. (First-order ultra-weak formulation) As shown in [16, § 4.4], for p = 0, the choice X^δ := S_p^{-1}(T_δ)^d × S_p^{-1}(T_δ) and Y^δ := (RT_p(T_δ) × S_{d+p}^0(T_δ)) ∩ Y, where Y = H_{0,Γ_N}(div; Ω) × H¹_{0,Γ_D}(Ω), gives inf_δ γ^δ > 0. Numerical experiments indicate that the same holds true for p ∈ {1, 2, 3, 4} and d = 2 (a generic way to compute such discrete inf-sup constants is sketched after this list). This choice of Y^δ does not guarantee that data oscillation is of higher order, and it appeared that the a posteriori error estimator is not reliable. This problem was solved by taking Y^δ := (RT_{p+1}(T_δ) × S_{d+p+1}^0(T_δ)) ∩ Y. Since, for this formulation, X = L₂(Ω)^d × L₂(Ω), there is no need to introduce the additional variable θ^δ and so to select a space X̂^δ. The pair (λ^δ, u^δ) ∈ Y^δ × X^δ can be efficiently solved from (3.1). Also, for this example, a remaining difference with [16] is that we use the optimal test norm on Y instead of the canonical norm.
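For a fixed partition, a discrete inf-sup constant of a bilinear form with respect to two evaluable Hilbert norms (as, e.g., for the ultra-weak formulation, where X and Y carry L₂-, H(div)- and H¹-norms) can be computed from assembled matrices via a generalized eigenvalue problem. The sketch below is a generic, hypothetical illustration of this standard computation, not the script used for the reported experiments.

```python
import numpy as np
from scipy.linalg import cholesky, eigh, solve_triangular

def inf_sup_constant(B, gram_V, gram_W):
    """Discrete inf-sup constant
        inf_{0 != w in W^delta} sup_{0 != v in V^delta} |b(w, v)| / (||v||_V ||w||_W),
    computed from B[i, j] = b(phi_j, psi_i) and the Gram matrices of the two
    norms: it equals sqrt(lambda_min) of the generalized eigenvalue problem
        B^T gram_V^{-1} B x = lambda gram_W x."""
    L = cholesky(gram_V, lower=True)        # gram_V = L L^T
    C = solve_triangular(L, B, lower=True)  # C = L^{-1} B
    lam = eigh(C.T @ C, gram_W, eigvals_only=True)
    return float(np.sqrt(max(lam[0], 0.0)))

# hypothetical sanity check: for V^delta = W^delta and b(w, v) = <w, v>_V,
# Cauchy-Schwarz gives inf-sup constant 1
rng = np.random.default_rng(3)
n = 5
R = rng.standard_normal((n, n))
gram = R @ R.T + n * np.eye(n)
print(inf_sup_constant(gram, gram, gram))   # ~ 1.0
```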

5 Numerical Experiments

On the rectangular domain Ω = (−1, 1) × (0, 1) with Neumann and Dirichlet boundaries Γ_N = [−1, 0] × {0} and Γ_D being the closure of ∂Ω ∖ Γ_N, for g ∈ H¹_{0,Γ_D}(Ω)′, h_D ∈ H^{1/2}(Γ_D) and h_N ∈ H^{-1/2}(Γ_N), we consider the Poisson problem of finding u ∈ H¹(Ω) that satisfies

$$\begin{cases}-\Delta u=g&\text{on }\Omega,\\ u=h_D&\text{on }\Gamma_D,\\ \tfrac{\partial u}{\partial n}=h_N&\text{on }\Gamma_N.\end{cases}$$

We prescribe the solution u(r, θ) := r^{1/2} sin(θ/2) in polar coordinates and determine the data correspondingly. Then g = 0, h_N = 0, and h_D = 0 on [0, 1] × {0}, but h_D ≠ 0 on the remaining part of Γ_D. It is known that u ∈ H^{3/2−ε}(Ω) for all ε > 0, but u ∉ H^{3/2}(Ω) (see [13]).
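For computing reference errors and effectivity indices one needs this solution in Cartesian coordinates; a minimal numpy sketch (a hypothetical helper, not code from the paper), with the polar angle measured from the positive x-axis around the origin:

```python
import numpy as np

def u_exact(x, y):
    """Prescribed singular solution u(r, theta) = r**(1/2) * sin(theta / 2)
    around the origin; for the domain (-1, 1) x (0, 1) the angle
    theta = arctan2(y, x) lies in [0, pi], so u vanishes on [0, 1] x {0}
    and its normal derivative vanishes on Gamma_N = [-1, 0] x {0}."""
    r = np.hypot(x, y)
    theta = np.arctan2(y, x)
    return np.sqrt(r) * np.sin(theta / 2)
```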

We consider the above problem in the first-order mild-weak formulation. Compared to the second-order formulation, this first-order formulation does not require additional smoothness conditions on the data. Furthermore, the norms for the solution and flux variables are balanced, in the sense that, given some regularity of the solution, both variables can qualitatively be equally well approximated by finite element functions. For this formulation, the Neumann boundary condition is a natural one, but the Dirichlet boundary condition is essential and is therefore imposed by an additional variational equation.

We consider a family of conforming triangulations {T_δ}_δ of Ω, where each triangulation is created using newest vertex bisection starting from an initial triangulation that consists of eight triangles, created by first cutting Ω along the y-axis and then cutting the two resulting unit squares along their diagonals. In each of the two squares, the interior vertex of the initial triangulation is labelled as the “newest vertex” of all four triangles of that square. Following Section 4.3, given some polynomial degree p ≥ 1, we set

$$X^\delta:=S_{p-1}^{-1}(\mathcal T_\delta)^2\times S_p^0(\mathcal T_\delta),\quad Y^\delta:=S_{p-1}^{-1}(\mathcal T_\delta)^2\times\bigl(S_{p+1}^0(\mathcal T_\delta)\cap H^1_{0,\Gamma_D}(\Omega)\bigr)\times S_p^{-1}(\mathcal F_D^\delta),\quad\hat X^\delta:=S_p^{-1}(\mathcal T_\delta)^2\times S_{p+2}^0(\mathcal T_\delta).$$

With

$$(G(\vec p,u))(\vec v_1,v_2,v_3):=\int_\Omega(\vec p-\nabla u)\cdot\vec v_1+\vec p\cdot\nabla v_2\,dx+\int_{\Gamma_D}u\,v_3\,ds,$$

our practical MINRES method computes (θ₁^δ, θ₂^δ, λ₁^δ, λ₂^δ, λ₃^δ, p⃗^δ, u^δ) ∈ X̂^δ × Y^δ × X^δ such that

$$\begin{aligned}
&\langle(\theta_1^\delta,\theta_2^\delta),(\tilde\theta_1^\delta,\tilde\theta_2^\delta)\rangle_{L_2(\Omega)^2\times H^1(\Omega)}-(G(\tilde\theta_1^\delta,\tilde\theta_2^\delta))(\lambda_1^\delta,\lambda_2^\delta,\lambda_3^\delta)-(G(\theta_1^\delta,\theta_2^\delta))(\tilde\lambda_1^\delta,\tilde\lambda_2^\delta,\tilde\lambda_3^\delta)\\
&\qquad-(G(\vec p^\delta,u^\delta))(\tilde\lambda_1^\delta,\tilde\lambda_2^\delta,\tilde\lambda_3^\delta)-(G(\tilde{\vec p}^\delta,\tilde u^\delta))(\lambda_1^\delta,\lambda_2^\delta,\lambda_3^\delta)\\
&\quad=-g(\tilde\lambda_2^\delta)-\int_{\Gamma_N}h_N\tilde\lambda_2^\delta\,ds-\int_{\Gamma_D}h_D\tilde\lambda_3^\delta\,ds=:-f(\tilde\lambda_2^\delta,\tilde\lambda_3^\delta)
\end{aligned}$$

for all (θ̃₁^δ, θ̃₂^δ, λ̃₁^δ, λ̃₂^δ, λ̃₃^δ, p̃^δ, ũ^δ) ∈ X̂^δ × Y^δ × X^δ.

The method comes with a built-in a posteriori error estimator

$$\mathcal E(\vec p^\delta,u^\delta,f):=\sqrt{\sum_{T\in\mathcal T_\delta}\|\theta_1^\delta\|_{L_2(T)^2}^2+\|\theta_2^\delta\|_{H^1(T)}^2}$$

(see Section 3.4), which is efficient and, because the data-oscillation term is of higher order than the best approximation error, in any case asymptotically reliable.

For p { 1 , 2 , 3 } , we performed numerical experiments with uniform and adaptively refined triangulations. Concerning the latter, we have used the element-wise error indicators

$$\|\theta_1^\delta\|_{L_2(T)^2}^2+\|\theta_2^\delta\|_{H^1(T)}^2$$

to drive an AFEM with Dörfler marking, using marking parameter θ = 0.6. The results given in Figure 1 show that, for uniform refinements, increasing p does not improve the order of convergence, because the rate is limited by the low regularity of the solution in the Hilbertian Sobolev scale. However, we observe that the adaptive routine attains the best possible rates allowed by the order of approximation of X^δ.
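The marking step is the standard bulk criterion; a minimal sketch (hypothetical, not the author's NGSolve driver), operating on the array of squared element indicators defined above:

```python
import numpy as np

def doerfler_mark(eta_squared, theta=0.6):
    """Doerfler (bulk) marking: return the indices of a near-minimal set of
    elements whose squared indicators sum to at least theta times the total,
    where eta_squared[T] = ||theta_1||_{L2(T)^2}^2 + ||theta_2||_{H1(T)}^2."""
    order = np.argsort(eta_squared)[::-1]            # largest indicators first
    cumsum = np.cumsum(eta_squared[order])
    n_marked = int(np.searchsorted(cumsum, theta * cumsum[-1])) + 1
    return order[:n_marked]

# hypothetical usage inside the adaptive loop:
# marked = doerfler_mark(eta_squared, theta=0.6); refine the marked elements
# by newest vertex bisection, then repeat solve -> estimate -> mark -> refine.
```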

Figure 1

Number of DoFs in X δ vs. E ( p δ , u δ , f ) : left-upper: p = 1 , right-upper: p = 2 , left-bottom: p = 3 . Right-bottom: number of DoFs in X δ vs. effectivity index E ( p δ , u δ , f ) / p p δ L 2 ( Ω ) 2 2 + u u δ H 1 ( Ω ) 2 .
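
As a minimal illustration (not the implementation used for the experiments), the effectivity index in the lower-right panel can be computed once the estimator and the two true error components are at hand; the array names (estimator, err_p_L2, err_u_H1) and the sample values below are purely hypothetical placeholders.

    import numpy as np

    # Hypothetical per-refinement-level values of the a posteriori estimator
    # E(p^delta, u^delta, f) and of the true errors ||p - p^delta||_{L2(Omega)^2}
    # and ||u - u^delta||_{H1(Omega)} (placeholder numbers, not computed results).
    estimator = np.array([1.0e-1, 5.0e-2, 2.5e-2])
    err_p_L2 = np.array([6.0e-2, 3.0e-2, 1.5e-2])
    err_u_H1 = np.array([7.0e-2, 3.5e-2, 1.8e-2])

    # Effectivity index = estimator / sqrt(||p - p^delta||^2 + ||u - u^delta||^2),
    # as in the caption of Figure 1; values close to 1 indicate a sharp estimator.
    effectivity = estimator / np.sqrt(err_p_L2**2 + err_u_H1**2)
    print(effectivity)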

Funding statement: This research has been supported by the NSF Grant DMS ID 1720297.

Acknowledgements

The author is indebted to Harald Monsuur for the computation of the numerical results.

References

[1] M. Aurada, M. Feischl, J. Kemetmüller, M. Page and D. Praetorius, Each $H^{1/2}$-stable projection yields convergence and quasi-optimality of adaptive FEM with inhomogeneous Dirichlet data in $\mathbb{R}^{d}$, ESAIM Math. Model. Numer. Anal. 47 (2013), no. 4, 1207–1235. DOI: 10.1051/m2an/2013069.

[2] J. W. Barrett and K. W. Morton, Approximate symmetrization and Petrov–Galerkin methods for diffusion-convection problems, Comput. Methods Appl. Mech. Engrg. 45 (1984), no. 1–3, 97–122. DOI: 10.1016/0045-7825(84)90152-X.

[3] P. B. Bochev and M. D. Gunzburger, Least-Squares Finite Element Methods, Appl. Math. Sci. 166, Springer, New York, 2009. DOI: 10.1007/b13382.

[4] J. H. Bramble, R. D. Lazarov and J. E. Pasciak, Least-squares for second-order elliptic problems, Comput. Methods Appl. Mech. Engrg. 152 (1998), no. 1–2, 195–210. DOI: 10.1016/S0045-7825(97)00189-8.

[5] P. Bringmann, C. Carstensen and G. Starke, An adaptive least-squares FEM for linear elasticity with optimal convergence rates, SIAM J. Numer. Anal. 56 (2018), no. 1, 428–447. DOI: 10.1137/16M1083797.

[6] D. Broersen and R. Stevenson, A robust Petrov–Galerkin discretisation of convection-diffusion equations, Comput. Math. Appl. 68 (2014), no. 11, 1605–1618. DOI: 10.1016/j.camwa.2014.06.019.

[7] C. Carstensen, L. Demkowicz and J. Gopalakrishnan, A posteriori error control for DPG methods, SIAM J. Numer. Anal. 52 (2014), no. 3, 1335–1353. DOI: 10.1137/130924913.

[8] W. Dahmen, C. Huang, C. Schwab and G. Welper, Adaptive Petrov–Galerkin methods for first order transport equations, SIAM J. Numer. Anal. 50 (2012), no. 5, 2420–2445. DOI: 10.1137/110823158.

[9] A. Ern and J.-L. Guermond, Finite Elements II—Galerkin Approximation, Elliptic and Mixed PDEs, Texts Appl. Math. 73, Springer, Cham, 2021. DOI: 10.1007/978-3-030-56923-5.

[10] G. J. Fix, M. D. Gunzburger and J. S. Peterson, On finite element approximations of problems having inhomogeneous essential boundary conditions, Comput. Math. Appl. 9 (1983), no. 5, 687–700. DOI: 10.1016/0898-1221(83)90126-8.

[11] T. Führer, Multilevel decompositions and norms for negative order Sobolev spaces, Math. Comp. 91 (2021), no. 333, 183–218. DOI: 10.1090/mcom/3674.

[12] G. Gantner and R. Stevenson, Further results on a space-time FOSLS formulation of parabolic PDEs, ESAIM Math. Model. Numer. Anal. 55 (2021), no. 1, 283–299. DOI: 10.1051/m2an/2020084.

[13] P. Grisvard, Elliptic Problems in Nonsmooth Domains, Monogr. Stud. Math. 24, Pitman, Boston, 1985.

[14] M. Karkulik, G. Of and D. Praetorius, Convergence of adaptive 3D BEM for weakly singular integral equations based on isotropic mesh-refinement, Numer. Methods Partial Differential Equations 29 (2013), no. 6, 2081–2106. DOI: 10.1002/num.21792.

[15] T. Kato, Estimation of iterated matrices, with application to the von Neumann condition, Numer. Math. 2 (1960), 22–29. DOI: 10.1007/BF01386205.

[16] H. Monsuur, R. P. Stevenson and J. Storn, Minimal residual methods in negative or fractional Sobolev norms, preprint (2023), https://arxiv.org/abs/2301.10484.

[17] J. Schöberl, C++11 implementation of finite elements in NGSolve, Technical report, Institute for Analysis and Scientific Computing, Vienna University of Technology, Vienna, 2014.

[18] G. Starke, Multilevel boundary functionals for least-squares mixed finite element methods, SIAM J. Numer. Anal. 36 (1999), no. 4, 1065–1077. DOI: 10.1137/S0036142997329803.

[19] R. Stevenson and R. van Venetië, Uniform preconditioners of linear complexity for problems of negative order, Comput. Methods Appl. Math. 21 (2021), no. 2, 469–478. DOI: 10.1515/cmam-2020-0052.

[20] R. Stevenson and J. Westerdiep, Minimal residual space-time discretizations of parabolic equations: Asymmetric spatial operators, Comput. Math. Appl. 101 (2021), 107–118. DOI: 10.1016/j.camwa.2021.09.014.

[21] R. P. Stevenson, First-order system least squares with inhomogeneous boundary conditions, IMA J. Numer. Anal. 34 (2014), no. 3, 863–878. DOI: 10.1093/imanum/drt042.

[22] J. Xu and L. Zikatanov, Some observations on Babuška and Brezzi theories, Numer. Math. 94 (2003), no. 1, 195–202. DOI: 10.1007/s002110100308.

[23] J. Zitelli, I. Muga, L. Demkowicz, J. Gopalakrishnan, D. Pardo and V. M. Calo, A class of discontinuous Petrov–Galerkin methods. Part IV: The optimal test norm and time-harmonic wave propagation in 1D, J. Comput. Phys. 230 (2011), no. 7, 2406–2432. DOI: 10.1016/j.jcp.2010.12.001.

Received: 2023-03-22
Revised: 2023-05-26
Accepted: 2023-07-06
Published Online: 2023-07-29

© 2023 the author(s), published by De Gruyter

This work is licensed under the Creative Commons Attribution 4.0 International License.
