Eigenvalue Problems for Exponential-Type Kernels

Difeng Cai (ORCID: https://orcid.org/0000-0001-9482-6425), Department of Mathematics, Purdue University, West Lafayette, IN 47907-2067, USA

Panayot S. Vassilevski, Fariborz Maseeh Department of Mathematics and Statistics, Portland State University, Portland, OR 97207, USA; and Center for Applied Scientific Computing, Lawrence Livermore National Laboratory, Livermore, CA 94550, USA

Abstract

We study approximations of eigenvalue problems for integral operators associated with kernel functions of exponential type. We show the convergence rate $|\lambda_k - \lambda_{k,h}| \le C_k h^2$ in the case of lowest order approximation for both the Galerkin and the Nyström methods, where $h$ is the mesh size and $\lambda_k$ and $\lambda_{k,h}$ are the exact and approximate $k$th largest eigenvalues, respectively. We prove that the two methods are numerically equivalent in the sense that $|\lambda_{k,h}^{(G)} - \lambda_{k,h}^{(N)}| \le C h^2$, where $\lambda_{k,h}^{(G)}$ and $\lambda_{k,h}^{(N)}$ denote the $k$th largest eigenvalues computed by the Galerkin and Nyström methods, respectively, and $C$ is an eigenvalue-independent constant. The theoretical results are accompanied by a series of numerical experiments.

1 Introduction

In this paper we are interested in the eigenvalue problem associated with integral operators

$$Af := \int_D K(x,y)\, f(y)\,dy$$

defined from kernel functions $K(x,y)$ of exponential type (cf. (1.1)), where $D$ is a bounded Lipschitz domain in $\mathbb{R}^d$.

Our approach is general, but driven by practical applications we focus on kernels $K(x,y)$, $x, y \in \mathbb{R}^d$, of the following particular (exponential) form:

$$K(x,y) = e^{-\rho(x-y)} \quad\text{with } \rho(x) := \left(\frac{|x_1|^s}{\omega_1^s} + \cdots + \frac{|x_d|^s}{\omega_d^s}\right)^{\gamma}, \tag{1.1}$$

where $s \in \{1, 2\}$, $\gamma = 1$ or $\tfrac{1}{s}$, and $\omega_i > 0$, $i = 1, \ldots, d$. Examples of such kernel functions include $e^{-|x-y|^2}$, $e^{-|x-y|}$, etc. The kernel defines the integral operator

$$Af(x) := \int_D K(x,y)\, f(y)\,dy, \quad x \in D. \tag{1.2}$$

Our main interest is the numerical approximation of the eigenvalue problem associated with $A$, namely, $A\phi = \lambda\phi$ for some scalar $\lambda$ and nonzero function $\phi$.
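As a point of reference for the experiments later in the paper, the following minimal sketch (our own illustration, not part of the original text; the function name and defaults are hypothetical) shows how the kernel family (1.1) can be evaluated numerically.

```python
import numpy as np

def exp_kernel(x, y, omega, s=2, gamma=0.5):
    """Evaluate K(x,y) = exp(-rho(x-y)) from (1.1), where
    rho(z) = (sum_i |z_i|^s / omega_i^s)^gamma.
    x, y: arrays of shape (..., d); omega: array of shape (d,)."""
    z = np.abs(np.asarray(x, dtype=float) - np.asarray(y, dtype=float)) / np.asarray(omega, dtype=float)
    rho = np.sum(z**s, axis=-1) ** gamma
    return np.exp(-rho)

# s=2, gamma=0.5 gives exp(-weighted Euclidean distance),
# s=1, gamma=1   gives exp(-weighted l1 distance).
```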

Eigenproblems of the above type arise frequently in various research areas such as geology [8, 7], uncertainty quantification [3, 10, 19], and machine learning [16]. The analysis of the underlying eigenvalue problem is beneficial for error control, algorithm design, and overall numerical practice.

Mathematically, the problem is usually formulated in either the space of continuous functions or the space of L2-integrable functions. The corresponding discretizations are the Nyström method and Galerkin method, respectively.

For the Nyström method, though various error estimates were derived, e.g., in [12, 25, 2, 18, 21, 22], it is not known if they are consistent with actual numerical results, especially when the kernel function is not smooth enough, for example, not continuously differentiable. Additionally, the proofs require the mesh size to be sufficiently small.

For the Galerkin method, which generally requires a quadrature rule to evaluate the double integrals when assembling the matrix, the impact of the quadrature error on the computed eigenvalues is of practical interest and requires special investigation, especially when the integrand is not sufficiently smooth (for example, when it has unbounded derivatives). This is the case for the kernels of the form (1.1) studied in the present paper.

1.1 Contributions

Our aim is to present a comprehensive study of the eigenvalue problems for integral operators associated with kernel functions of exponential type as defined in (1.1). Those kernel functions are not necessarily smooth, i.e., they may not have continuous (partial) derivatives. Theoretically, we focus on the analysis of two formulations of the operator eigenvalue problem, in terms of $L^2$-integrable functions and of continuous functions. In the first case we use the Galerkin method, whereas in the second case the Nyström method is used. We utilize piecewise constant approximation in the Galerkin method and the midpoint rule in the Nyström method. Numerical experiments are conducted to illustrate and sometimes to complement the theoretical results.

The contributions are listed below (see Section 4 for details).

Firstly, we present a new framework to analyze the Nyström discretization. To obtain the Nyström discretization error, we show that it is numerically equivalent to the Galerkin discretization, and thus the error estimate for the Galerkin discretization immediately carries over, which reads

$$|\lambda_k - \lambda_{k,h}| \le C_k h^2,$$

where $h$ is the mesh size and $\lambda_k$ and $\lambda_{k,h}$ denote the $k$th largest exact and approximate eigenvalues (counted with multiplicity), respectively. To the best of our knowledge, this is the first result that captures the $O(h^2)$ convergence rate of the Nyström method when the kernel function is not continuously differentiable, while existing results (cf. [12, 25, 2, 18, 21, 22]) can only yield an $O(h)$ convergence rate (see Section 5.3). For the nonsmooth kernel functions considered in (1.1), for example when $\rho(x-y) = |x-y|$, the $O(h^2)$ convergence rate is known as superconvergence in [5]. Moreover, unlike existing results, our proof does not require the mesh size to be sufficiently small.

Secondly, to the best of our knowledge, we prove for the first time that the Galerkin method and the Nyström method are numerically equivalent in the sense that

$$|\lambda_{k,h}^{(G)} - \lambda_{k,h}^{(N)}| \le C h^2,$$

where $\lambda_{k,h}^{(G)}$ and $\lambda_{k,h}^{(N)}$ denote the $k$th largest eigenvalues (counted with multiplicity) computed by the Galerkin and Nyström discretizations, respectively, and $C$ is a constant independent of any eigenvalue. The estimate indicates that the convergence rates of the two methods are the same up to a generic constant independent of the eigenvalues. Also, the result guarantees that the error induced by numerical integration in the practical implementation of the Galerkin method does not affect the final convergence rate, and it provides a theoretical foundation for the use of the more implementation-friendly Nyström discretization while maintaining the same rate of convergence. Numerical results are presented to confirm the claim.

Thirdly, we perform several numerical experiments to examine various theoretical estimates, including the convergence rate, the dependence of the asymptotic constant on $\lambda$, and the approximation of eigenfunctions. Our numerical results indicate that the eigenvalue convergence rate is quadratic with respect to the mesh size and that, across different eigenvalues, the approximation error is roughly independent of the eigenvalue magnitude. A detailed discussion relating our estimates to the ones from [12, 2, 18, 21] is presented.

1.2 Outline

The rest of the paper is organized as follows. In Section 2, we study the integral operator associated with kernel functions of exponential type and state the positive (semi-)definiteness of the operator as well as the related matrices. Section 3 presents abstract estimates for the Galerkin approximation to the underlying eigenvalue problem. The main results are presented in Section 4, including convergence rates of Galerkin and Nyström discretizations, the equivalence between the two discretizations, etc. Section 5 provides a numerical study of various theoretical results in Section 4 and in existing literature [12, 2, 18, 21]. The proof for the positive (semi-)definiteness of the operator and the related matrices is given in the appendix (Section 7).

2 Integral Operators with Kernel Functions of Exponential Type

For notational convenience, for any given bounded Lipschitz domain in $\mathbb{R}^d$, when working with Sobolev spaces $L^2(D)$, $H^1(D) := \{f \in L^2(D) : \nabla f \in L^2(D)^d\}$, etc., we assume $D$ is open; while for $C(D)$, the Banach space of continuous functions with the supremum norm $\|f\|_{\sup} := \sup_{x \in D} |f(x)|$, $D$ is assumed to be closed. In this section, unless otherwise stated, we use $\|\cdot\|$ without a subscript to denote the usual $L^2$ norm.

2.1 Some Auxiliary Estimates

The following result is immediate from a straightforward calculation.

Proposition 2.1.

The kernel function K(x,y) defined in (1.1) satisfies

$$|K(x,y) - K(x,y')| \le C_K |y - y'| \quad\text{with } C_K = \left(\sum_{i=1}^d \omega_i^{-2}\right)^{1/2}.$$

Consequently, if $A$ is the integral operator defined in (1.2), then for each $f \in L^1(D)$, $Af$ is Lipschitz continuous over $\mathbb{R}^d$ with $|Af(x) - Af(x')| \le C_K \|f\|_{L^1(D)} |x - x'|$. In particular, $Af \in H^1(D)$ and $\|\nabla Af\| \le \|\nabla_x K\|_{L^2(D\times D)} \|f\|$ for all $f \in L^2(D)$.

Next we estimate the second derivatives of the kernels of our interest, which is needed in the error analysis that we provide later on. Let

$$K(x,y) = e^{-\rho(x-y)} \quad\text{with } \rho(x) = \left(\sum_{i=1}^d \frac{x_i^2}{\omega_i^2}\right)^{1/2}.$$

A direct calculation shows that the second order partial derivatives of K are unbounded at x=y. More specifically, we have

$$\frac{\partial^2}{\partial x_i^2} K(x,y) = -\frac{\rho^2(x-y) - (x_i - y_i)^2/\omega_i^2}{\omega_i^2\,\rho^3(x-y)}\, K(x,y) + \frac{(x_i - y_i)^2}{\omega_i^4\,\rho^2(x-y)}\, K(x,y),$$
$$\frac{\partial^2}{\partial x_j \partial x_i} K(x,y) = \frac{(x_i - y_i)(x_j - y_j)}{\omega_i^2 \omega_j^2\,\rho^3(x-y)}\, K(x,y) + \frac{(x_i - y_i)(x_j - y_j)}{\omega_i^2 \omega_j^2\,\rho^2(x-y)}\, K(x,y), \quad i \ne j,$$

and

$$|\partial^\alpha K(x,y)| \le C \max\left\{1, \frac{1}{\rho(x-y)}\right\}, \quad |\alpha| = 2, \tag{2.1}$$

where $\alpha$ is a multi-index and $C$ is a generic constant depending only on the $\omega_i$.

2.2 Mapping Properties

The following well-known mapping properties of integral operators associated with continuous kernel functions are collected below (e.g., [14]).

Proposition 2.2.

Let A be defined in (1.2). Then:

  (1) $A: L^2(D) \to L^2(D)$ is compact,
  (2) $A: C(D) \to C(D)$ is compact,
  (3) $A: L^2(D) \to C(D)$ is compact.

The above proposition ensures that the theoretical results presented in Section 3 apply to our particular case of kernels of exponential type.

2.3 Positive Definiteness

Define $\Phi(x) = e^{-\rho(x)}$ with $\rho$ as in (1.1). Then

$$K(x,y) = \Phi(x-y).$$

The main result in Theorem 2.1 asserts that the function Φ(x) is positive definite in the sense below (cf. [26]).

Definition 2.1 (Positive Definite Functions).

A continuous function $\Phi: \mathbb{R}^d \to \mathbb{R}$ is positive (semi-)definite if for any $n$ distinct points $x_1, \ldots, x_n \in \mathbb{R}^d$ ($n = 1, 2, \ldots$), the matrix $[a_{i,j}] = [\Phi(x_i - x_j)]$ is positive (semi-)definite.

For bounded continuous functions, the positive semi-definiteness is equivalent to that of the associated integral operator (cf. [26]).

Proposition 2.3.

A bounded continuous function $\Phi: \mathbb{R}^d \to \mathbb{R}$ is positive semi-definite if and only if

$$\int_{\mathbb{R}^d}\int_{\mathbb{R}^d} \Phi(x-y)\, v(x)\, v(y)\,dx\,dy \ge 0$$

for all functions $v$ in the Schwartz space

$$\left\{v \in C^\infty(\mathbb{R}^d) : \sup_{x \in \mathbb{R}^d} (1 + |x|)^M \sum_{|\alpha| \le m} |\partial^\alpha v(x)| < \infty \ \text{for any integers } m, M \ge 0\right\}.$$

Theorem 2.1.

For $\omega_i > 0$ ($i = 1, \ldots, d$) and $x \in \mathbb{R}^d$, let $\rho$ take one of the following forms:

$$\rho(x) = \sum_{i=1}^d \frac{|x_i|}{\omega_i}, \qquad \rho(x) = \sum_{i=1}^d \frac{x_i^2}{\omega_i^2}, \qquad \rho(x) = \left(\sum_{i=1}^d \frac{x_i^2}{\omega_i^2}\right)^{1/2}.$$

Then $\Phi(x) = e^{-\rho(x)}$ is positive definite. Namely, for any distinct points $x_1, \ldots, x_n \in \mathbb{R}^d$, the matrix $[a_{i,j}] = [\Phi(x_i - x_j)]$ is positive definite.

The proof of Theorem 2.1 is given in the appendix (see Section 7). The result below follows immediately from Proposition 2.3 and Theorem 2.1.

Corollary 2.1.

Let K(x,y) be the kernel function defined in (1.1) and let A be the corresponding integral operator defined in (1.2). Then

$$(Av, v) \ge 0$$

for all $v \in L^2(D)$.
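To make Theorem 2.1 and Corollary 2.1 concrete, here is a small numerical sanity check (our own illustration, not part of the original text): for randomly drawn distinct points, the kernel matrix $[\Phi(x_i - x_j)]$ should have strictly positive eigenvalues.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 50, 2
pts = rng.random((n, d))                         # n distinct points in [0,1]^2
diff = pts[:, None, :] - pts[None, :, :]
Phi = np.exp(-np.linalg.norm(diff, axis=-1))     # Phi(x_i - x_j) = exp(-||x_i - x_j||_2)
eigs = np.linalg.eigvalsh(Phi)                   # matrix is symmetric
print(eigs.min() > 0.0)                          # expected: True (positive definite)
```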

3 Abstract Results

In this section, $\mathcal{V}$ will be assumed to be a complex Hilbert space with inner product denoted by $(\cdot, \cdot)$. The theoretical results in this section are developed for the Galerkin discretization. We use boldface symbols to denote matrices. The norm on $\mathcal{V}$ is denoted by $\|\cdot\|$, and $\|\boldsymbol{M}\|_2$ denotes the $\ell^2$ operator norm of a matrix $\boldsymbol{M}$. We use $\operatorname{Ker}$ and $\operatorname{Ran}$ to denote the kernel (or nullspace) and the range of an operator, respectively.

We use A to denote a positive compact operator on 𝒱, where the definition of a positive operator (cf. [20, 15]) is given below.

Definition 3.1.

An operator $A$ on a Hilbert space $\mathcal{V}$ is called positive if $(Av, v) \ge 0$ for all $v \in \mathcal{V}$.

Note that a positive operator is necessarily self-adjoint, i.e., A=A* (cf. [20, Theorem 12.32]).

A crucial tool we use in the estimate of eigenvalues is the Courant-Fischer min-max (or max-min) principle (cf. [15]).

Theorem 3.1 (Min-Max Principle).

Let $A$ be a compact, self-adjoint operator on $\mathcal{V}$ with nonnegative eigenvalues listed in decreasing order (counted with multiplicity), i.e., $\lambda_1 \ge \cdots \ge \lambda_k \ge \cdots \ge 0$. Then

$$\lambda_k = \max_{S_k} \min_{\substack{v \in S_k \\ \|v\| = 1}} (Av, v),$$

where $S_k$ ranges over the linear subspaces of $\mathcal{V}$ of dimension $k$.

In this paper, we are interested in positive eigenvalues of A, and the eigenvalue problem is to find (λ,ϕ) such that

$$\lambda \in \mathbb{R}^+, \quad \phi \in \mathcal{V} \setminus \{0\}, \quad A\phi = \lambda\phi. \tag{3.1}$$

Let Vh be a finite-dimensional subspace of 𝒱. The Galerkin method for (3.1) is to find (λh,ϕh) such that

$$\lambda_h \in \mathbb{R}^+, \quad \phi_h \in V_h \setminus \{0\}, \quad (A\phi_h, v_h) = \lambda_h (\phi_h, v_h) \quad\text{for all } v_h \in V_h. \tag{3.2}$$

Let $P_h: \mathcal{V} \to V_h$ be the projection from $\mathcal{V}$ onto $V_h$. Then (3.2) is essentially the eigenvalue problem of the operator $P_h A P_h$ on $\mathcal{V}$. Since $A$ is a positive compact operator, so is $P_h A P_h$. We can then list the eigenvalues of the Galerkin approximation in (3.2) in decreasing order (counted with multiplicity): $\lambda_{1,h} \ge \cdots \ge \lambda_{n,h}$. Theorem 3.1 implies that the eigenvalues have the following characterization:

$$\lambda_{k,h} = \max_{S_k} \min_{\substack{v \in S_k \\ \|v\| = 1}} (P_h A P_h v, v) = \max_{S_{k,h}} \min_{\substack{v \in S_{k,h} \\ \|v\| = 1}} (P_h A P_h v, v) = \max_{S_{k,h}} \min_{\substack{v \in S_{k,h} \\ \|v\| = 1}} (A v, v),$$

where $S_k$ and $S_{k,h}$ are $k$-dimensional subspaces of $\mathcal{V}$ and $V_h$, respectively. The min-max characterizations of $\lambda_k$ and $\lambda_{k,h}$ immediately yield the following.

Proposition 3.1.

We have $\lambda_{k,h} \le \lambda_k$. Consequently, $0 \le \lambda_k - \lambda_{k,h} \le \lambda_k$.

Eigenvalue approximations with Galerkin methods have been studied over the past few decades (cf. [5, 13]). The following result can be easily derived using the min-max principle (cf. [24]).

Theorem 3.2.

Let $\phi_1, \ldots, \phi_k$ be orthonormal eigenfunctions associated with eigenvalues $\lambda_1, \ldots, \lambda_k$, respectively. Then

$$|\lambda_k - \lambda_{k,h}| \le 2 \max_{\substack{v \in \mathcal{W}_k \\ \|v\| = 1}} \|(I - P_h) A v\|\, \|(I - P_h) v\|, \tag{3.3}$$

where $\mathcal{W}_k := \operatorname{span}\{\phi_1\} \oplus \cdots \oplus \operatorname{span}\{\phi_k\}$.

Corollary 3.1.

Under the assumptions in Theorem 3.2,

$$|\lambda_k - \lambda_{k,h}| \le 2 \left(\sum_{i=1}^k \|(I - P_h)\phi_i\|^2\right)^{1/2} \left(\sum_{i=1}^k \lambda_i^2 \|(I - P_h)\phi_i\|^2\right)^{1/2}.$$

Proof.

By writing $v = \sum_{i=1}^k \alpha_i \phi_i \in \mathcal{W}_k$ in (3.3), we can obtain the estimate above via the triangle inequality and the Cauchy–Schwarz inequality. ∎
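In more detail (our own expansion of the one-line argument above): for $v = \sum_{i=1}^k \alpha_i \phi_i$ with $\|v\| = 1$, so that $\sum_{i=1}^k |\alpha_i|^2 = 1$ and $Av = \sum_{i=1}^k \alpha_i \lambda_i \phi_i$,

$$\|(I - P_h) A v\| \le \sum_{i=1}^k |\alpha_i|\, \lambda_i \|(I - P_h)\phi_i\| \le \left(\sum_{i=1}^k \lambda_i^2 \|(I - P_h)\phi_i\|^2\right)^{1/2},$$

$$\|(I - P_h) v\| \le \sum_{i=1}^k |\alpha_i|\, \|(I - P_h)\phi_i\| \le \left(\sum_{i=1}^k \|(I - P_h)\phi_i\|^2\right)^{1/2},$$

and substituting both bounds into (3.3) gives the estimate in Corollary 3.1.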

The scaling of $|\lambda_k - \lambda_{k,h}|$ and $\|(I - P_h)\phi_k\|$ will be investigated via numerical experiments in Section 5.

Remark 3.1.

Note that if the multiplicity of λk is greater than 1, then the subspace 𝒲k may be different for a different choice/ordering of basis functions in Ker(A-λkI).

4 Eigenvalue Problems

For the integral operator $A$ defined in (1.2), we present two formulations of its eigenvalue problem, based on $\mathcal{V} = L^2(D)$ and $\mathcal{V} = C(D)$, respectively. It will be seen below that the two formulations are actually equivalent. Corresponding to the two formulations at the continuous level, two discretizations are discussed, and it is shown later (in Section 4.4) that the two discretizations are also numerically equivalent.

4.1 Two Formulations: 𝒱=L2(D) and 𝒱=C(D)

Recall that A is compact on both 𝒱=L2(D) and 𝒱=C(D). For the Hilbert space 𝒱=L2(D), the eigenvalue problem reads:

Find $(\lambda, \phi)$ such that $A\phi = \lambda\phi$, $\phi \in L^2(D) \setminus \{0\}$. (4.1)

For the Banach space 𝒱=C(D) with supremum norm, the eigenvalue problem reads:

Find $(\lambda, \phi)$ such that $A\phi = \lambda\phi$, $\phi \in C(D) \setminus \{0\}$. (4.2)

From the mapping property of A in Proposition 2.2, it is easy to see that the two formulations in (4.1) and (4.2) are equivalent in the sense below.

Proposition 4.1.

An eigenpair (λ,ϕ) satisfies (4.1) if and only if it satisfies (4.2).

In addition to being compact on L2(D), it was shown in Corollary 2.1 that A is a positive operator on L2(D). Therefore, we know from Proposition 4.1 and the spectral theory of self-adjoint compact operators that:

Proposition 4.2.

The eigenvalues of A in (4.1) or (4.2) are nonnegative and can be listed in decreasing order (counted with multiplicity)

$$\lambda_1 \ge \cdots \ge \lambda_k \ge \cdots \ge 0 \quad\text{with } \lim_{k \to +\infty} \lambda_k = 0,$$

where the algebraic multiplicity is equal to the geometric multiplicity for any $\lambda_k > 0$.

4.2 Galerkin Discretization for 𝒱=L2(D)

In this subsection, we consider the Galerkin discretization of the eigenvalue problem in (4.1). Let $\mathcal{T} = \{\tau_i\}_{i=1}^n$ be a subdivision of $D$ of maximum mesh size $h := \max_{\tau \in \mathcal{T}} \operatorname{diam}(\tau)$, where $\operatorname{diam}(\tau)$ denotes the diameter of $\tau$. Introduce the space of piecewise constant functions

$$V_h := \{v \in L^2(D) : v|_\tau \text{ is a constant for all } \tau \in \mathcal{T}\}$$

and let the projection $P_h: L^2(D) \to V_h$ be given by

$$P_h f|_\tau = \frac{1}{|\tau|} \int_\tau f\,dx \quad\text{for all } f \in L^2(D). \tag{4.3}$$

The result below is standard.

Proposition 4.3.

Let $P_h$ be the projection defined in (4.3). Then

$$\|(I - P_h) f\| \le C_P h \|\nabla f\| \quad\text{for all } f \in H^1(D),$$
$$\|(I - P_h) A f\| \le C_P \|\nabla_x K\|_{L^2(D\times D)}\, h \|f\| \quad\text{for all } f \in L^2(D),$$

where $C_P$ comes from the Poincaré constant and depends only on the shape regularity of $\mathcal{T}$. In particular, if $(\lambda, \phi)$ is an eigenpair of $A$ with $\lambda > 0$, then

$$\|(I - P_h)\phi\| \le C_P \|\nabla_x K\|_{L^2(D\times D)}\, \lambda^{-1} h \|\phi\|.$$
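Although the proposition is standard, the following sketch (our own summary of the usual argument) shows where each constant enters. Since $f - P_h f$ has zero mean on every element, the element-wise Poincaré inequality gives

$$\|(I - P_h) f\|^2 = \sum_{\tau \in \mathcal{T}} \|f - P_h f\|_{L^2(\tau)}^2 \le C_P^2 \sum_{\tau \in \mathcal{T}} \operatorname{diam}(\tau)^2 \|\nabla f\|_{L^2(\tau)}^2 \le C_P^2 h^2 \|\nabla f\|^2.$$

Taking $f = Av$ and using $\|\nabla A v\| \le \|\nabla_x K\|_{L^2(D\times D)} \|v\|$ from Proposition 2.1 yields the second estimate, and for an eigenpair $(\lambda, \phi)$ with $\lambda > 0$ the identity $\phi = \lambda^{-1} A\phi$ turns the second estimate into the third.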

Applying Proposition 4.3 to Corollary 3.1 yields the following estimate of the eigenvalue convergence rate with respect to the mesh size h.

Theorem 4.1.

Assume $P_h$ is defined in (4.3) and $A_h = P_h A P_h$. Let $\lambda_k$ and $\lambda_{k,h}$ be the $k$th largest positive eigenvalues of $A$ and $A_h$ (counted with multiplicity), respectively. Then

$$|\lambda_k - \lambda_{k,h}| \le 2 C_P^2 \|\nabla_x K\|_{L^2(D\times D)}^2 C_k h^2, \tag{4.4}$$

where $C_k = \sqrt{k}\left(\sum_{i=1}^k \lambda_i^{-2}\right)^{1/2}$ and $C_P$ is the constant in Proposition 4.3, depending only on the shape regularity of $\mathcal{T}$.

Remark 4.1.

In addition to (4.4), an $O(h^2)$ error bound can also be found in [5, Chapter 7], but it is an asymptotic estimate valid only for small enough mesh size $h$. In [13, Section 18] a non-asymptotic estimate is provided, but it results in an $O(h)$ error bound.

4.2.1 The Matrix Eigenvalue Problem

Given a subdivision $\mathcal{T} = \{\tau_i\}_{i=1}^n$, let $\chi_{\tau_i}(x)$ denote the characteristic function of $\tau_i$. The Galerkin method seeks a nonzero function $\phi_h(x) = \sum_{i=1}^n c_i \chi_{\tau_i}(x) \in V_h$ such that

$$(A\phi_h, v_h) = (\lambda_h^{(G)} \phi_h, v_h) \quad\text{for all } v_h \in V_h$$

for some $\lambda_h^{(G)} > 0$. This yields the matrix eigenvalue problem

$$\boldsymbol{M}_h^{(G)} \boldsymbol{c} = \lambda_h^{(G)} \boldsymbol{D}_h \boldsymbol{c}, \tag{4.5}$$

where

$$\boldsymbol{M}_h^{(G)} = \left[\int_{\tau_i}\int_{\tau_j} K(x,y)\,dy\,dx\right]_{i,j=1}^n, \qquad \boldsymbol{D}_h = \operatorname{diag}(|\tau_1|, \ldots, |\tau_n|), \qquad \boldsymbol{c} = [c_1, \ldots, c_n]^T.$$

Here $\operatorname{diag}(\cdots)$ denotes a diagonal matrix with the given diagonal entries. By introducing $\boldsymbol{q} = \boldsymbol{D}_h^{1/2} \boldsymbol{c}$ and multiplying both sides of (4.5) by $\boldsymbol{D}_h^{-1/2}$ on the left, we convert (4.5) into the standard eigenvalue problem

$$\boldsymbol{A}_h^{(G)} \boldsymbol{q} = \lambda_h^{(G)} \boldsymbol{q} \quad\text{with } \boldsymbol{A}_h^{(G)} = \boldsymbol{D}_h^{-1/2} \boldsymbol{M}_h^{(G)} \boldsymbol{D}_h^{-1/2}.$$

In practice, for ease of implementation, we use a quadrature rule to compute the double integral of the kernel function $K(x,y)$. For example, since $K(x,y) \in C(\mathbb{R}^d \times \mathbb{R}^d)$, we may simply use the midpoint rule to evaluate the integral on each element, i.e.,

$$\int_{\tau_i}\int_{\tau_j} K(x,y)\,dy\,dx \approx K(x_i, x_j)\, |\tau_i|\, |\tau_j|, \tag{4.6}$$

where $x_i$ is the centroid of $\tau_i$. The resulting matrix eigenvalue problem for $c_1, \ldots, c_n$ reads

$$\boldsymbol{M}_h^{(N)} \boldsymbol{c} = \lambda_h^{(N)} \boldsymbol{D}_h \boldsymbol{c} \quad\text{with } \boldsymbol{M}_h^{(N)} = \left[K(x_i, x_j)\, |\tau_i|\, |\tau_j|\right]_{i,j=1}^n. \tag{4.7}$$

Again, using $\boldsymbol{q} = \boldsymbol{D}_h^{1/2} \boldsymbol{c}$, system (4.7) can be transformed into

$$\boldsymbol{A}_h^{(N)} \boldsymbol{q} = \lambda_h^{(N)} \boldsymbol{q} \quad\text{with } \boldsymbol{A}_h^{(N)} = \boldsymbol{D}_h^{-1/2} \boldsymbol{M}_h^{(N)} \boldsymbol{D}_h^{-1/2}. \tag{4.8}$$

The approximation error introduced by the quadrature in (4.6) will be analyzed in Section 4.4.
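As an implementation aid, the following sketch (our own code; the function name and the 1D example are hypothetical) assembles $\boldsymbol{A}_h^{(N)} = \boldsymbol{D}_h^{-1/2} \boldsymbol{M}_h^{(N)} \boldsymbol{D}_h^{-1/2}$ from (4.8) and computes its eigenvalues once the element centroids and volumes are available.

```python
import numpy as np

def nystrom_eigenvalues(centroids, volumes, kernel):
    """Assemble A_h^(N) = D^{-1/2} M^(N) D^{-1/2} from (4.8) and return its
    eigenvalues in decreasing order.
    centroids: (n, d) array of centroids x_i; volumes: (n,) array of |tau_i|;
    kernel: callable acting on pairwise differences of shape (n, n, d)."""
    diff = centroids[:, None, :] - centroids[None, :, :]
    K = kernel(diff)                           # K(x_i, x_j)
    w = np.sqrt(volumes)
    A = K * np.outer(w, w)                     # entries K(x_i, x_j) |tau_i|^{1/2} |tau_j|^{1/2}
    return np.sort(np.linalg.eigvalsh(A))[::-1]

# Example: uniform mesh of (0,1) with n elements and K(x,y) = exp(-|x-y|).
n = 1000
h = 1.0 / n
x = ((np.arange(n) + 0.5) * h)[:, None]
lam_h = nystrom_eigenvalues(x, np.full(n, h), lambda z: np.exp(-np.abs(z[..., 0])))
```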

4.3 Nyström Discretization for 𝒱=C(D)

In this subsection, we consider the Nyström discretization for the eigenvalue problem in (4.2) with $\mathcal{V} = C(D)$. Based on the midpoint rule applied to the integral in (1.2), we define the finite-rank operator

$$A_h\phi(x) := \sum_{j=1}^n K(x, x_j)\, |\tau_j|\, \phi(x_j), \quad x \in D, \ \phi \in C(D), \tag{4.9}$$

where $x_j$ is the centroid of $\tau_j$. The eigenvalue problem for $A_h$ is to find $\lambda_h^{(N)}$ and $\phi_h \in C(D) \setminus \{0\}$ such that $A_h\phi_h = \lambda_h^{(N)}\phi_h$. Evaluating the equation at the quadrature nodes $x_1, \ldots, x_n$ yields the following equivalent matrix eigenvalue problem (cf. [17, 2]), which is identical to (4.7):

$$\boldsymbol{D}_h^{-1} \boldsymbol{M}_h^{(N)} \boldsymbol{\phi}_h = \lambda_h^{(N)} \boldsymbol{\phi}_h.$$

The eigenfunction can then be recovered from the nodal values $\boldsymbol{\phi}_h = (\phi_h(x_1), \ldots, \phi_h(x_n))$ by using (4.9).
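A minimal sketch of this recovery step (our own illustration; the names are hypothetical) reads:

```python
import numpy as np

def nystrom_extend(x_eval, lam_h, phi_nodes, centroids, volumes, kernel):
    """Evaluate the Nystrom eigenfunction at arbitrary points via (4.9):
    phi_h(x) = (1/lam_h) * sum_j K(x, x_j) |tau_j| phi_h(x_j).
    phi_nodes are the nodal values phi_h(x_j), i.e. D_h^{-1/2} q for an
    eigenvector q of A_h^(N) defined in (4.8)."""
    K = kernel(x_eval[:, None, :] - centroids[None, :, :])   # K(x, x_j)
    return (K * (volumes * phi_nodes)).sum(axis=1) / lam_h
```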

The substitution

$$\boldsymbol{q} = \boldsymbol{D}_h^{1/2} \boldsymbol{\phi}_h$$

transforms the above matrix problem into a standard symmetric eigenvalue problem identical to (4.8):

$$\boldsymbol{A}_h^{(N)} \boldsymbol{q} = \lambda_h^{(N)} \boldsymbol{q}.$$

Therefore, we see that the Galerkin method coincides with the Nyström method up to the quadrature errors from (4.6). We will show in Section 4.4 (see Theorem 4.3) that the quadrature error does not dominate the discretization error in the eigenvalue computation. Therefore, the convergence result for the Nyström method below can be obtained with the help of Theorem 4.1 for the Galerkin method.

Theorem 4.2.

Let

$$K(x,y) = \exp\!\left(-\left(\frac{|x_1 - y_1|^2}{\omega_1^2} + \cdots + \frac{|x_d - y_d|^2}{\omega_d^2}\right)^{\gamma}\right) \quad (\gamma = \tfrac{1}{2} \text{ or } 1)$$

and let $\mathcal{T}$ be a quasi-uniform subdivision of $D \subset \mathbb{R}^d$ ($d = 1, 2, 3$) with maximum mesh size $h$. With the quadrature approximation $A_h$ defined in (4.9), let $\lambda_k$ and $\lambda_{k,h}$ denote the $k$th largest positive eigenvalues of $A$ and $A_h$ (counted with multiplicity), respectively. Then

$$|\lambda_k - \lambda_{k,h}| \le C \|\nabla_x K\|_{L^2(D\times D)}^2 C_k h^2, \tag{4.10}$$

where $C$ is a constant that only depends on the shape parameter of the mesh and $C_k$ is the constant defined in Theorem 4.1.

4.4 Equivalence of Galerkin and Nyström Discretizations

We have shown in Proposition 4.1 that at the continuous level the two formulations in (4.1) and (4.2) are equivalent. In this subsection, we build the discrete counterpart of such an equivalence. Namely, we estimate the difference between the computed eigenvalues of the two discretizations discussed in Section 4.2 and Section 4.3. The main result is stated below.

Theorem 4.3.

Let

$$K(x,y) = \exp\!\left(-\left(\frac{|x_1 - y_1|^2}{\omega_1^2} + \cdots + \frac{|x_d - y_d|^2}{\omega_d^2}\right)^{\gamma}\right) \quad (\gamma = \tfrac{1}{2} \text{ or } 1)$$

and let $\mathcal{T} = \{\tau_i\}_{i=1}^n$ be a quasi-uniform subdivision of $D \subset \mathbb{R}^d$ ($d = 1, 2, 3$) with maximum mesh size $h$. Let $\lambda_{i,h}^{(G)}$ and $\lambda_{i,h}^{(N)}$ denote the $i$th largest eigenvalues (counted with multiplicity) of $\boldsymbol{A}_h^{(G)}$ and $\boldsymbol{A}_h^{(N)}$, respectively. Then

$$|\lambda_{i,h}^{(G)} - \lambda_{i,h}^{(N)}| \le C h^2,$$

where the constant C is independent of any eigenvalue.

To prove Theorem 4.3, we first analyze the quadrature error in (4.6).

Lemma 4.1.

Let $\mathcal{T} = \{\tau_i\}_{i=1}^n$ be a quasi-uniform mesh with maximum mesh size $h$.

  1. If $K(x,y) \in C^2(\tau_i \times \tau_j)$, then
     $$\left|\int_{\tau_i}\int_{\tau_j} K(x,y)\,dy\,dx - K(x^*, y^*)\,|\tau_i|\,|\tau_j|\right| \le C_1 |\tau_i|\,|\tau_j|\, h^2 \max_{|\alpha| = 2} \max_{\tau_i \times \tau_j} |\partial^\alpha K|. \tag{4.11}$$
  2. If $K(x,y) \in C(D \times D)$ is Lipschitz continuous, then
     $$\left|\int_{\tau_i}\int_{\tau_j} K(x,y)\,dy\,dx - K(x^*, y^*)\,|\tau_i|\,|\tau_j|\right| \le C_2 |\tau_i|\,|\tau_j|\, h. \tag{4.12}$$

Here $\alpha$ is a multi-index, $C_1, C_2$ are generic constants independent of $i, j$, and $x^*$ and $y^*$ are the centroids of $\tau_i$ and $\tau_j$, i.e., $|\tau_i| x^* = \int_{\tau_i} x\,dx$ and $|\tau_j| y^* = \int_{\tau_j} y\,dy$.

Proof.

The Taylor expansion of $K(x,y)$ over $\tau_i \times \tau_j$ reads

$$K(x,y) = K(x^*, y^*) + \nabla_x K(x^*, y^*)\cdot(x - x^*) + \nabla_y K(x^*, y^*)\cdot(y - y^*) + R(x,y), \tag{4.13}$$

where the remainder satisfies

$$|R(x,y)| \le C_1 h^2 \max_{|\alpha| = 2} \max_{(x,y) \in \tau_i \times \tau_j} |\partial^\alpha K(x,y)|.$$

Since $x^*, y^*$ are centroids, it follows that

$$\int_{\tau_i} \nabla_x K(x^*, y^*)\cdot(x - x^*)\,dx = 0 \quad\text{and}\quad \int_{\tau_j} \nabla_y K(x^*, y^*)\cdot(y - y^*)\,dy = 0.$$

Hence (4.11) can be obtained by taking double integrals of the equation in (4.13) over $\tau_i \times \tau_j$. Inequality (4.12) can be proved similarly by integrating

$$K(x,y) = K(x^*, y^*) + \big[(K(x,y) - K(x^*, y)) + (K(x^*, y) - K(x^*, y^*))\big],$$

where the summands in the bracket are estimated using the Lipschitz condition. ∎

The error $\|\boldsymbol{A}_h^{(G)} - \boldsymbol{A}_h^{(N)}\|_2$ can be estimated as follows.

Proposition 4.4.

Under the assumptions in Theorem 4.3, $\boldsymbol{E}_h := \boldsymbol{A}_h^{(G)} - \boldsymbol{A}_h^{(N)}$ satisfies

$$\|\boldsymbol{E}_h\|_2 = O(h^2), \quad d = 1, 2, 3. \tag{4.14}$$

Proof.

Without loss of generality, assume $D = [0,1]^d$. It suffices to prove (4.14) for the following three cases:

  (1) $d \ge 2$,
  (2) $d = 1$ and $K(x,y) = e^{-(x-y)^2/\omega^2}$,
  (3) $d = 1$ and $K(x,y) = e^{-|x-y|/\omega}$.

Case (1). In this case, we illustrate the proof for a uniform rectangular mesh; the same idea applies to the general case. For $\boldsymbol{E}_h = [e_{i,j}]_{i,j}$ with

$$e_{i,j} = |\tau_i|^{-1/2} |\tau_j|^{-1/2} \left(\int_{\tau_i}\int_{\tau_j} K(x,y)\,dy\,dx - K(x_i, x_j)\,|\tau_i|\,|\tau_j|\right),$$

we estimate for each fixed $i$ the quantity $\sum_{j=1}^n |e_{i,j}|$. By partitioning the elements into consecutive layers centered at $\tau_i$, we can evaluate the contribution layer by layer.

The $0$th layer is $\tau_i$ itself. The first layer contains elements that share a vertex with $\tau_i$. In general, the $k$th ($k \ge 1$) layer is composed of elements outside layer $k-1$ that share a vertex with layer $k-1$. See Figure 1 for an illustration in one and two dimensions.

Figure 1: Partition of $D$ into layers with respect to $\tau_i$ (left: 1D; right: 2D).

Next we estimate $|e_{i,j}|$ layer by layer. We use $C$ to denote a generic constant independent of $i, j$. The assumption on $\mathcal{T} = \{\tau_i\}_{i=1}^n$ yields

$$n = O(h^{-d}) \quad\text{and}\quad |\tau| = O(h^d) \quad\text{for all } \tau \in \mathcal{T}. \tag{4.15}$$

Note that for each $\tau_j$ in layer $k \ge 2$, $K(x,y) \in C^2(\tau_i \times \tau_j)$, and (2.1) implies that

$$\max_{|\alpha| = 2} \max_{\tau_i \times \tau_j} |\partial^\alpha K| \le C (kh)^{-1} \quad\text{for all } \tau_j \text{ in layer } k \ge 2.$$

Together with (4.15) and Lemma 4.1, it can be deduced that

$$|e_{i,j}| \le C h^{d+1} \quad\text{for all } \tau_j \text{ in layers } 0, 1,$$
$$|e_{i,j}| \le C k^{-1} h^{d+1} \quad\text{for all } \tau_j \text{ in layer } k \ge 2.$$

The number of elements in layer $k \ge 1$ is $(2k+1)^d - (2k-1)^d \le 26 k^{d-1}$. Therefore,

$$\sum_{j=1}^n |e_{i,j}| \le C h^{d+1} + C \sum_{k=1}^{L} 26 k^{d-1}\, k^{-1} h^{d+1} = O(h^2), \quad d = 2, 3,$$

where $L$ denotes the maximal number of layers and obviously $L \le \frac{1}{h}$.

Since $\boldsymbol{E}_h$ is symmetric, it follows from Gershgorin's circle theorem that

$$\|\boldsymbol{E}_h\|_2 = \max_i |\lambda_i(\boldsymbol{E}_h)| \le \max_i \sum_{j=1}^n |e_{i,j}| = O(h^2),$$

which completes the proof of Case (1). The inequality above can also be shown via the following argument: since $\boldsymbol{E}_h$ is symmetric, $\|\boldsymbol{E}_h\|_2$ is equal to its spectral radius, which is bounded by any matrix norm (cf. [11, Theorem 5.6.9]), and the quantity $\max_i \sum_{j=1}^n |e_{i,j}|$ is the $\ell^\infty$ matrix norm of $\boldsymbol{E}_h$.

Case (2). In this case, $K \in C^\infty(D \times D)$ and there exists a constant $C$ such that

$$\max_{|\alpha| = 2} \|\partial^\alpha K\|_{L^\infty(D \times D)} \le C.$$

Hence Lemma 4.1 implies

$$|e_{i,j}| \le C h^3 \quad\text{for all } i, j.$$

Then the same argument as in Case (1) yields the desired estimate:

$$\|\boldsymbol{E}_h\|_2 \le \max_i \sum_{j=1}^n |e_{i,j}| = O(h^2).$$

Case (3). In this case, $K(x,y) = e^{-|x-y|/\omega}$, $x, y \in [0,1]$. Let $h_i$ denote the length of the $i$th interval $\tau_i = [t_{i-1}, t_i]$ and recall that $x_i$ is the center of $\tau_i$. We estimate $|e_{i,j}|$ as follows. When $i = j$, it can be computed that

$$\int_{t_{i-1}}^{t_i}\int_{t_{j-1}}^{t_j} e^{-|x-y|/\omega}\,dy\,dx = \omega\left(2h_i + 2\omega e^{-h_i/\omega} - 2\omega\right) = h_i^2 + O(h^3),$$

where the last identity follows from the Taylor expansion

$$e^{-h_i/\omega} = 1 - \frac{h_i}{\omega} + \frac{h_i^2}{2\omega^2} + O(h^3).$$

Then

$$|e_{i,i}| = h_i^{-1}\left|h_i^2 + O(h^3) - K(x_i, x_i)\,|\tau_i|^2\right| = O(h^2). \tag{4.16}$$

When $i \ne j$, since $K(x,y) = K(y,x)$, assume without loss of generality that $t_{i-1} \ge t_j$. Then $K(x,y) = e^{(y-x)/\omega}$ for $(x,y) \in \tau_i \times \tau_j$ and we deduce that

$$\begin{aligned}
\int_{t_{i-1}}^{t_i}\int_{t_{j-1}}^{t_j} e^{(y-x)/\omega}\,dy\,dx &= -\omega^2\left(e^{h_i/\omega} - 1\right)\left(e^{-h_j/\omega} - 1\right)e^{(t_j - t_i)/\omega} \\
&= -\omega^2\left(\frac{h_i}{\omega} + \frac{h_i^2}{2\omega^2} + O(h^3)\right)\left(-\frac{h_j}{\omega} + \frac{h_j^2}{2\omega^2} + O(h^3)\right)e^{(t_j - t_i)/\omega} \\
&= h_i h_j\left(1 + \frac{h_i - h_j}{2\omega}\right)e^{(t_j - t_i)/\omega} + O(h^4)
\end{aligned}$$

and

$$K(x_i, x_j)\,|\tau_i|\,|\tau_j| = h_i h_j\, e^{\frac{h_i - h_j}{2\omega}}\, e^{\frac{t_j - t_i}{\omega}} = h_i h_j\left(1 + \frac{h_i - h_j}{2\omega}\right)e^{(t_j - t_i)/\omega} + O(h^4).$$

Therefore,

$$|e_{i,j}| = h_i^{-1/2} h_j^{-1/2}\, O(h^4) = O(h^3) \quad\text{for all } i \ne j. \tag{4.17}$$

We conclude from (4.16) and (4.17) that

$$\|\boldsymbol{E}_h\|_2 \le \max_i \sum_{j=1}^n |e_{i,j}| = O(h^2).$$

The proof of the proposition is complete. ∎

Theorem 4.3 follows readily from Proposition 4.4 and Weyl’s inequality [27, 4, 23].

Lemma 4.2 (Weyl’s Inequality).

Let $\boldsymbol{A}$ and $\boldsymbol{B}$ be $n$-by-$n$ Hermitian matrices with eigenvalues $\lambda_1(\boldsymbol{A}) \ge \cdots \ge \lambda_n(\boldsymbol{A})$ and $\lambda_1(\boldsymbol{B}) \ge \cdots \ge \lambda_n(\boldsymbol{B})$, respectively. Then

$$\max_{i=1,\ldots,n} |\lambda_i(\boldsymbol{A}) - \lambda_i(\boldsymbol{B})| \le \|\boldsymbol{A} - \boldsymbol{B}\|_2.$$
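Weyl's inequality is easy to check numerically; the following quick sanity check (our own, not from the paper) perturbs a random symmetric matrix and compares the eigenvalue shift with the spectral norm of the perturbation.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
A = rng.standard_normal((n, n)); A = 0.5 * (A + A.T)          # symmetric A
E = 1e-3 * rng.standard_normal((n, n)); E = 0.5 * (E + E.T)   # symmetric perturbation
lam_A = np.sort(np.linalg.eigvalsh(A))[::-1]
lam_B = np.sort(np.linalg.eigvalsh(A + E))[::-1]
print(np.max(np.abs(lam_A - lam_B)) <= np.linalg.norm(E, 2))  # expected: True (up to rounding)
```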

4.4.1 Numerical Illustration

To show that the $O(h^2)$ error bounds in Proposition 4.4 and Theorem 4.3 are attainable, we perform a numerical experiment with $K(x,y) = e^{-|x-y|}$, $x, y \in D = (0,1)$, so the integrals in the Galerkin matrix $\boldsymbol{A}_h^{(G)}$ can be evaluated exactly. Uniform meshes are used, and by varying the mesh size $h$ we compute the corresponding eigenvalue errors measured by $\max_{k \le 1000} |\lambda_{k,h}^{(G)} - \lambda_{k,h}^{(N)}|$. The matrix errors $\|\boldsymbol{A}_h^{(G)} - \boldsymbol{A}_h^{(N)}\|_2$ are also computed. It can be seen from Figure 2 that both errors are $O(h^2)$.
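A sketch reproducing this comparison (our own code; the function name and the choice of $n$ are hypothetical) is given below. For $K(x,y) = e^{-|x-y|}$ on a uniform mesh of $(0,1)$, the double integrals in $\boldsymbol{M}_h^{(G)}$ have the closed forms used in Case (3) of the proof of Proposition 4.4.

```python
import numpy as np

def galerkin_vs_nystrom(n):
    """Return max_k |lam_k^(G) - lam_k^(N)| and ||A^(G) - A^(N)||_2 for
    K(x,y) = exp(-|x-y|) on a uniform mesh of (0,1) with n elements."""
    h = 1.0 / n
    t = np.linspace(0.0, 1.0, n + 1)
    x = 0.5 * (t[:-1] + t[1:])                                   # centroids
    A_N = np.exp(-np.abs(x[:, None] - x[None, :])) * h           # D^{-1/2} M^(N) D^{-1/2}
    # Exact double integrals: for tau_i to the right of tau_j the integrand is e^{y-x}.
    a = np.exp(-t[:-1]) - np.exp(-t[1:])                         # int_{tau_i} e^{-x} dx
    b = np.exp(t[1:]) - np.exp(t[:-1])                           # int_{tau_j} e^{y}  dy
    P = np.outer(a, b)                                           # valid when i > j
    idx = np.arange(n)
    I = np.where(idx[:, None] > idx[None, :], P, P.T)            # use symmetry for i < j
    np.fill_diagonal(I, 2.0 * (h + np.exp(-h) - 1.0))            # diagonal elements
    A_G = I / h                                                  # D^{-1/2} M^(G) D^{-1/2}
    lam_G = np.sort(np.linalg.eigvalsh(A_G))[::-1]
    lam_N = np.sort(np.linalg.eigvalsh(A_N))[::-1]
    m = min(1000, n)
    return np.max(np.abs(lam_G[:m] - lam_N[:m])), np.linalg.norm(A_G - A_N, 2)

for n in (100, 200, 400):
    print(n, galerkin_vs_nystrom(n))   # both errors should shrink roughly like h^2
```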

Figure 2: The errors $\max_{k \le 1000} |\lambda_{k,h}^{(G)} - \lambda_{k,h}^{(N)}|$ (blue line) and $\|\boldsymbol{A}_h^{(G)} - \boldsymbol{A}_h^{(N)}\|_2$ (red line) vs. $h$.

5 Numerical Experiments

We perform various numerical tests for the integral operator $Af := \int_D K(x,y) f(y)\,dy$. We use piecewise constant approximation in the Galerkin method and the midpoint rule in the Nyström method. Also, uniform triangular meshes are used in the two-dimensional case. In Section 5.1, the actual eigenvalue convergence rates computed by the Nyström method are shown. Section 5.2 investigates the eigenfunction approximation. Section 5.3 presents a comparison of our error bounds with the ones from [12, 2, 18, 21].

Example 1.

We first consider an example with known eigenpairs from [9]

$$K(x,y) = e^{-\|x - y\|_1}, \quad x, y \in D = (0,1)^d \ (d = 1, 2).$$

If $d = 1$, the exact eigenpairs of the integral operator $A$ are given by

$$\lambda_k = \frac{2}{w_k^2 + 1}, \qquad \phi_k(x) = B_k\left(\sin(w_k x) + w_k \cos(w_k x)\right), \tag{5.1}$$

where $w_k$ ($k = 1, 2, \ldots$) are the positive solutions of the equation $\tan(w) = \frac{2w}{w^2 - 1}$ and $B_k$ is chosen such that

$$\|\phi_k\|_{L^2(D)} = 1.$$

The decay rate of the eigenvalues is known to be $\lambda_k = O(k^{-2})$. If $d = 2$, the exact eigenvalues/eigenfunctions are the tensor products of the eigenvalues/eigenfunctions in one dimension, i.e.,

$$\lambda_{k_1,k_2} = \lambda_{k_1}\lambda_{k_2}, \qquad \phi_{k_1,k_2}(x) = \phi_{k_1}(x_1)\,\phi_{k_2}(x_2), \quad x = (x_1, x_2) \in \mathbb{R}^2.$$
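For reference, the transcendental equation for $w_k$ can be solved numerically; the following sketch (our own; the bracketing intervals follow from the sign pattern of the rewritten equation) computes the exact eigenvalues used below.

```python
import numpy as np
from scipy.optimize import brentq

def exact_eigenvalues_1d(m):
    """First m eigenvalues lambda_k = 2/(w_k^2 + 1) of A for K(x,y) = exp(-|x-y|)
    on (0,1), where w_k are the positive roots of tan(w) = 2w/(w^2 - 1),
    rewritten as f(w) = (w^2 - 1) sin(w) - 2 w cos(w) = 0."""
    f = lambda w: (w**2 - 1.0) * np.sin(w) - 2.0 * w * np.cos(w)
    roots = [brentq(f, 0.5, np.pi)]                     # first root, near 1.3
    for k in range(1, m):
        roots.append(brentq(f, k * np.pi, (k + 1) * np.pi))
    w = np.array(roots)
    return 2.0 / (w**2 + 1.0)

print(exact_eigenvalues_1d(5))   # leading eigenvalue is about 0.739
```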

Example 2.

We consider in this example the kernel function associated with the $L^2$ norm in two dimensions,

$$K(x,y) = e^{-\|x - y\|_2}, \quad x, y \in D = (0,1)^2.$$

Since the exact eigenvalues are not known, we use the computed eigenvalues over a finer mesh with mesh size $h = \frac{\sqrt{2}}{200}$ as reference eigenvalues to evaluate the errors of the approximate eigenvalues derived from much coarser meshes ($h = \frac{\sqrt{2}}{25}, \frac{\sqrt{2}}{50}$).

5.1 Rate of Convergence

The results for Example 1 are shown in Figures 3 and 4. The results for Example 2 are shown in Figures 5 and 6, which are similar to those in Example 1.

The log-log plots in Figure 3 and Figure 6 indicate the convergence rate

$$|\lambda - \lambda_h| = O(h^2).$$

From Figure 4 and Figure 5 (with fixed mesh size in each plot), we see that (for the leading eigenvalues) the error $|\lambda - \lambda_h|$ is roughly independent of $\lambda$. Hence we deduce that there is a constant $C$ independent of $\lambda$ such that

$$\frac{|\lambda - \lambda_h|}{h^2} \le C \quad\text{for all } \lambda. \tag{5.2}$$

We then examine the magnitude of the constant $C$. For the four problems shown in Figure 4 and Figure 5, the maximal approximation errors $\max_k |\lambda_k - \lambda_{k,h}|$ are bounded by $5 \times 10^{-8}$, $3 \times 10^{-4}$, $3 \times 10^{-5}$, $2 \times 10^{-4}$, respectively. Hence it can be computed that the constant $C \approx 0.1$. That is to say, we have $|\lambda - \lambda_h| \lesssim 0.1\, h^2$ for all $\lambda$ in the above four experiments.

Figure 3: Example 1: Rate of convergence. Left: 1D; right: 2D.

Figure 4: Example 1: $\lambda_k$ and $|\lambda_k - \lambda_{k,h}|$ for $1 \le k \le m$. Left: 1D, $m = 1000$, $h = \frac{1}{2000}$; right: 2D, $m = 500$, $h = \frac{\sqrt{2}}{25}$.

Figure 5: Example 2: $\lambda_k$ and $|\lambda_k - \lambda_{k,h}|$ for $1 \le k \le 500$. Left: $h = \frac{\sqrt{2}}{50}$; right: $h = \frac{\sqrt{2}}{25}$.

Figure 6: Example 2: Rate of convergence.

5.2 Eigenfunction Approximation

Since all theoretical error bounds for eigenvalues are expressed in terms of certain approximation errors of eigenfunctions, we compute the actual approximation error of the eigenfunctions in this subsection using Example 1 with d=1. In Section 5.3, we insert it into the eigenvalue estimates (in [2, 18] and (4.4)) to investigate the scalings with respect to λ (exact eigenvalue) and h (mesh size).

Figure 7: $\|\phi_k - P_h\phi_k\|$ with respect to $k$ (top) and $h$ (bottom).

Numerical Observations.

Equation (5.1) and Figure 7 imply that

$$\|\phi_k''\|_{\sup} = O(\lambda_k^{-1}), \quad \|\phi_k''\| = O(\lambda_k^{-1}) \quad\text{and}\quad \|(I - P_h)\phi_k\| = O(kh) = O(\lambda_k^{-1/2} h). \tag{5.3}$$

Theoretical Estimates.

Proposition 4.3 implies that $\|(I - P_h)\phi_k\| \le C \lambda_k^{-1} h$, which differs from the numerical observation in (5.3). This may indicate that using Poincaré's inequality to estimate the approximation error $\|(I - P_h)\phi\|$ is not accurate enough.
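The projection errors in Figure 7 can be reproduced with a short script; the sketch below (our own, with a hypothetical helper name) computes $\|(I - P_h)\phi_k\| / \|\phi_k\|$ for the exact eigenfunctions by per-element Gauss quadrature, using the orthogonality $\|(I - P_h)\phi\|^2 = \|\phi\|^2 - \|P_h\phi\|^2$. Combined with the roots $w_k$ from the sketch after Example 1, it reproduces the scaling $O(kh)$ reported in (5.3).

```python
import numpy as np

def projection_error(w, h, quad_pts=20):
    """|| (I - P_h) phi || / ||phi|| on (0,1) for phi(x) = sin(w x) + w cos(w x),
    with P_h the cell-average projection (4.3) on a uniform mesh of size h."""
    n = int(round(1.0 / h))
    xi, wt = np.polynomial.legendre.leggauss(quad_pts)   # nodes/weights on [-1, 1]
    edges = np.linspace(0.0, 1.0, n + 1)
    mid = 0.5 * (edges[:-1] + edges[1:])
    x = mid[:, None] + 0.5 * h * xi[None, :]             # quadrature nodes per cell
    phi = np.sin(w * x) + w * np.cos(w * x)
    norm2 = 0.5 * h * np.sum(wt * phi**2)                # ||phi||^2
    cell_avg = 0.5 * np.sum(wt * phi, axis=1)            # (1/h) * integral of phi per cell
    proj2 = h * np.sum(cell_avg**2)                      # ||P_h phi||^2
    return np.sqrt(max(norm2 - proj2, 0.0) / norm2)
```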

5.3 Comparison of Existing Theoretical Estimates

Using the exact eigenpairs in Example 1, we compare different error estimates, e.g., in [12, 2, 18, 21] and (4.10), with the true errors in the eigenvalue computations. It will be seen that all theoretical error bounds overestimate the true error to varying degrees, and that the error bound in (4.10) is the more accurate one.

Estimates in (4.10).

We compute the error bound in Theorem 4.2 (or Theorem 4.1). With $\lambda_k = O(k^{-2})$, it can be computed that the constant $C_k$ in Theorem 4.1 is $C_k = O(k^3) = O(\lambda_k^{-3/2})$. Hence the error estimate is

$$|\lambda - \lambda_h| = O(\lambda^{-3/2} h^2),$$

where the scaling $h^2$ is correct while the factor $\lambda^{-3/2}$ is redundant compared to (5.2).

Estimates from [12, 2, 18, 21].

Existing estimates for the Nyström discretization are all asymptotic and more or less of the form $|\lambda - \lambda_h| = O(\text{quadrature error})$. For example, the estimates in [12, 2, 18] roughly say that

$$|\lambda - \lambda_h| \le C_* \max_{\phi} \|A\phi - A_h\phi\|_{\sup} \quad\text{if } h \text{ is sufficiently small},$$

where $A_h$ is the quadrature operator in (4.9), $\phi \in \operatorname{Ker}(A - \lambda I)$ is an eigenfunction of unit length, and $C_*$ is a constant that may (in [2, 18]) or may not (in [12]) depend on $\lambda$. The quadrature error $A - A_h$ corresponds to the operator $Q_n$ in [21] and was used in [21] to obtain the convergence rate. In Example 1, we deduce that $\|A\phi - A_h\phi\|_{\sup} = O(\|\phi''\|_{\sup} h) = O(\lambda^{-1} h)$. Hence those estimates give rise to the convergence rate

$$|\lambda - \lambda_h| \le C_* \lambda^{-1} h \quad\text{if } h \text{ is sufficiently small},$$

which is inconsistent with the $O(h^2)$ scaling observed in Section 5.1. Moreover, in [2, 18], due to the use of the spectral projection operator and an estimate of the approximate resolvent in [1, Theorem 1], it can be deduced for Example 1 that $C_* = O(\lambda^{-7/2})$, which gives $|\lambda - \lambda_h| = O(\lambda^{-9/2} h)$.

Remark 5.1.

For smooth kernel functions like $e^{-|x-y|^2}$ and the piecewise constant Galerkin discretization, the numerical results in [24] indicate that $|\lambda - \lambda_h| = O(\lambda h^2)$.

5.4 A Conjecture of a Sharp Bound

Following the investigation in Section 5.2 on the actual approximation error of the eigenfunctions, we derive a similar estimate in two dimensions and then propose a conjecture concerning the actual convergence rate. With $\phi_{k_1,k_2}(x_1, x_2) = \phi_{k_1}(x_1)\phi_{k_2}(x_2)$ as in Example 1, for simplicity, we consider a tensor product mesh in $[0,1]^2$. Let $P_h^{1D}$ and $P_h^{2D}$ denote the projections defined in (4.3) over $[0,1]$ and $[0,1]^2$, respectively. It follows that

$$P_h^{2D}\phi_{k_1,k_2} = \left(P_h^{1D}\phi_{k_1}\right)\left(P_h^{1D}\phi_{k_2}\right)$$

and

$$\|P_h^{2D}\phi_{k_1,k_2}\|_{[0,1]^2}^2 = \|P_h^{1D}\phi_{k_1}\|_{[0,1]}^2\, \|P_h^{1D}\phi_{k_2}\|_{[0,1]}^2.$$

Using the one-dimensional result in (5.3), we deduce that

$$\begin{aligned}
\|(I - P_h^{2D})\phi_{k_1,k_2}\|_{[0,1]^2}^2 &= \|\phi_{k_1}\|_{[0,1]}^2\, \|\phi_{k_2}\|_{[0,1]}^2 - \|P_h^{1D}\phi_{k_1}\|_{[0,1]}^2\, \|P_h^{1D}\phi_{k_2}\|_{[0,1]}^2 \\
&= \|(I - P_h^{1D})\phi_{k_1}\|^2\, \|\phi_{k_2}\|^2 + \|P_h^{1D}\phi_{k_1}\|^2\, \|(I - P_h^{1D})\phi_{k_2}\|^2 \\
&= O\big((\lambda_{k_1}^{-1} + \lambda_{k_2}^{-1})\, h^2\big) \le C \lambda_{k_1,k_2}^{-1} h^2
\end{aligned}$$

in accordance with the one-dimensional counterpart in (5.3).

The numerical results lead us to the following conjecture:

Conjecture.

Let $\lambda$ and $\lambda_h$ denote the exact and approximate $k$th largest eigenvalue, respectively. Then

$$\|(I - P_h)\phi\| \le C_1 \lambda^{-1/2} h \quad\text{for all } \phi \in \operatorname{Ker}(A - \lambda I),\ \|\phi\| = 1,$$

and

$$|\lambda - \lambda_h| \le C_2\, \lambda \max_{\substack{\phi \in \operatorname{Ker}(A - \lambda I) \\ \|\phi\| = 1}} \|(I - P_h)\phi\|^2,$$

where the constants $C_1, C_2$ are independent of $\lambda$ and $h$.

6 Conclusion

We obtain eigenvalue error estimates of second order for the lowest order Galerkin and Nyström discretizations. The equivalence between the two discretizations is established, which makes the analysis of the Nyström method a consequence of the Galerkin one. The resulting estimates appear more accurate than the previously available ones. Numerical experiments illustrate and complement the new and previously existing theoretical results.

7 Appendix

To prove Theorem 2.1, some technical tools are needed. The Schoenberg interpolation theorem below relates positive definiteness to completely monotone functions (cf. [6]).

Definition 7.1 (Completely Monotone Functions).

A function $f$ is called completely monotone on $[0, \infty)$ if $f \in C[0, +\infty) \cap C^\infty(0, +\infty)$ and $(-1)^k f^{(k)}(t) \ge 0$ for all $t > 0$, $k = 0, 1, \ldots$.

Theorem 7.1 (Schoenberg Interpolation Theorem).

Let $\|\cdot\|$ denote a norm induced by an inner product on $\mathbb{R}^d$. If $f$ is completely monotone but not constant on $[0, +\infty)$, then for any $n$ distinct points $x_1, \ldots, x_n \in \mathbb{R}^d$, the matrix $[a_{i,j}] = [f(\|x_i - x_j\|^2)]$ is symmetric positive definite.

The Bernstein–Widder theorem shows that the Laplace transform of a nonnegative $L^1(\mathbb{R}^+)$ function is completely monotone (cf. [6]).

Theorem 7.2 (Bernstein–Widder Theorem).

A function $f: [0, +\infty) \to [0, +\infty)$ is completely monotone if and only if there is a nondecreasing bounded function $\xi$ such that

$$f(t) = \int_0^{+\infty} e^{-st}\,d\xi(s).$$

Proposition 7.1 lists two completely monotone functions that are needed in the proof.

Proposition 7.1.

The following two functions are completely monotone on $[0, \infty)$:

  (1) $f(t) = e^{-t}$,
  (2) $f(t) = e^{-\sqrt{t}}$.

Proof.

The function $f(t) = e^{-t}$ is completely monotone directly from the definition. For $f(t) = e^{-\sqrt{t}}$, we show that $f$ satisfies the assumption in Theorem 7.2 with $\xi(s) = -\operatorname{erf}\!\left(\frac{1}{2\sqrt{s}}\right)$, where $\operatorname{erf}(x)$ denotes the error function. In fact, it can be computed that

$$\int_0^\infty e^{-st}\,d\xi(s) = e^{-\sqrt{t}} = f(t).$$

Hence $f$ is completely monotone according to Theorem 7.2 and the proof is complete. ∎
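The identity used above can be checked numerically: differentiating $\xi(s) = -\operatorname{erf}\!\big(\tfrac{1}{2\sqrt{s}}\big)$ gives the density $\frac{d\xi}{ds} = \frac{e^{-1/(4s)}}{2\sqrt{\pi}\, s^{3/2}}$, and the sketch below (our own verification, not part of the original text) compares the resulting integral with $e^{-\sqrt{t}}$.

```python
import numpy as np
from scipy.integrate import quad

def bernstein_widder_lhs(t):
    """Compute int_0^inf e^{-s t} dxi(s) with dxi/ds = e^{-1/(4s)} / (2 sqrt(pi) s^{3/2})."""
    integrand = lambda s: np.exp(-s * t - 1.0 / (4.0 * s)) / (2.0 * np.sqrt(np.pi) * s**1.5)
    val, _ = quad(integrand, 0.0, np.inf)
    return val

for t in (0.25, 1.0, 4.0):
    print(t, bernstein_widder_lhs(t), np.exp(-np.sqrt(t)))   # the last two columns agree
```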

Now we are in a position to carry out the proof of Theorem 2.1.

Proof of Theorem 2.1.

If $\rho$ is the weighted $\ell^1$ norm, then $\Phi(x) = e^{-\rho(x)}$ can be written as the inverse Fourier transform of a positive function in $L^1(\mathbb{R}^d)$. In fact, we have

$$\Phi(x) = C \int_{\mathbb{R}^d} \left(\prod_{k=1}^d \frac{1}{\omega_k y_k^2 + \omega_k^{-1}}\right) e^{i x \cdot y}\,dy = C \int_{\mathbb{R}^d} \hat{\Phi}(y)\, e^{i x \cdot y}\,dy,$$

where $C > 0$ and $\hat{\Phi}(y) = \prod_{k=1}^d \frac{1}{\omega_k y_k^2 + \omega_k^{-1}}$. For $n$ points $x_1, \ldots, x_n$ in $\mathbb{R}^d$ and a nonzero vector $(c_1, \ldots, c_n)$, we have

$$\sum_{k=1}^n \sum_{j=1}^n c_j \overline{c_k}\, \Phi(x_j - x_k) = C \int_{\mathbb{R}^d} \hat{\Phi}(y) \left|\sum_{j=1}^n c_j e^{i x_j \cdot y}\right|^2 dy > 0.$$

Thus the matrix $[a_{i,j}] = [\Phi(x_i - x_j)]$ is positive definite.

For the remaining two forms of $\rho$, the result follows from the Schoenberg interpolation theorem and Proposition 7.1. ∎

References

  • [1] K. E. Atkinson, The numerical solutions of the eigenvalue problem for compact integral operators, Trans. Amer. Math. Soc. 129 (1967), 458–465.
  • [2] K. E. Atkinson, Convergence rates for approximate eigenvalues of compact integral operators, SIAM J. Numer. Anal. 12 (1975), 213–222.
  • [3] I. Babuška, F. Nobile and R. Tempone, A stochastic collocation method for elliptic partial differential equations with random input data, SIAM Rev. 52 (2010), no. 2, 317–355.
  • [4] R. Bhatia, Perturbation Bounds for Matrix Eigenvalues, Class. Appl. Math. 53, Society for Industrial and Applied Mathematics (SIAM), Philadelphia, 2007.
  • [5] F. Chatelin, Spectral Approximation of Linear Operators, Class. Appl. Math. 65, Society for Industrial and Applied Mathematics (SIAM), Philadelphia, 2011.
  • [6] W. Cheney and W. Light, A Course in Approximation Theory, Grad. Stud. Math. 101, American Mathematical Society, Providence, 2009.
  • [7] G. Christakos, Modern Spatiotemporal Geostatistics. Vol. 6, Oxford University Press, Oxford, 2000.
  • [8] L. W. Gelhar, Stochastic subsurface hydrology from theory to applications, Water Resources Res. 22 (1986), 1–9.
  • [9] R. G. Ghanem and P. D. Spanos, Stochastic Finite Elements: A Spectral Approach, Springer, New York, 1991.
  • [10] M. D. Gunzburger, C. G. Webster and G. Zhang, Stochastic finite element methods for partial differential equations with random input data, Acta Numer. 23 (2014), 521–650.
  • [11] R. A. Horn and C. R. Johnson, Matrix Analysis, Cambridge University Press, Cambridge, 1990.
  • [12] H. B. Keller, On the accuracy of finite difference approximations to the eigenvalues of differential and integral operators, Numer. Math. 7 (1965), 412–419.
  • [13] M. A. Krasnosel'skii, G. M. Vainikko, R. P. Zabreyko, Y. B. Ruticki and V. Y. Stetsenko, Approximate Solution of Operator Equations, Springer, Cham, 2012.
  • [14] R. Kress, Linear Integral Equations, 3rd ed., Appl. Math. Sci. 82, Springer, New York, 2014.
  • [15] P. D. Lax, Functional Analysis, Pure Appl. Math. (New York), Wiley-Interscience, New York, 2002.
  • [16] N. M. Nasrabadi, Pattern recognition and machine learning, J. Electron. Imag. 16 (2007), no. 4, Article ID 049901.
  • [17] E. J. Nyström, Über die praktische Auflösung von Integralgleichungen mit Anwendungen auf Randwertaufgaben, Acta Math. 54 (1930), no. 1, 185–204.
  • [18] J. E. Osborn, Spectral approximation for compact operators, Math. Comput. 29 (1975), 712–725.
  • [19] S. Osborn, P. S. Vassilevski and U. Villa, A multilevel, hierarchical sampling technique for spatially correlated random fields, SIAM J. Sci. Comput. 39 (2017), no. 5, S543–S562.
  • [20] W. Rudin, Functional Analysis, 2nd ed., Int. Ser. Pure Appl. Math., McGraw-Hill, New York, 1991.
  • [21] A. Spence, Error bounds and estimates for eigenvalues of integral equations, Numer. Math. 29 (1977/78), no. 2, 133–147.
  • [22] A. Spence, Error bounds and estimates for eigenvalues of integral equations. II, Numer. Math. 32 (1979), no. 2, 139–146.
  • [23] G. W. Stewart, Matrix Algorithms. Vol. II. Eigensystems, Society for Industrial and Applied Mathematics (SIAM), Philadelphia, 2001.
  • [24] R. A. Todor, Robust eigenvalue computation for smoothing operators, SIAM J. Numer. Anal. 44 (2006), no. 2, 865–878.
  • [25] G. M. Vainikko, On the speed of convergence of approximate methods in the eigenvalue problem, Zh. Vychisl. Mat. Mat. Fiz. 7 (1967), 977–987.
  • [26] H. Wendland, Scattered Data Approximation, Cambridge Monogr. Appl. Comput. Math. 17, Cambridge University Press, Cambridge, 2005.
  • [27] H. Weyl, Das asymptotische Verteilungsgesetz der Eigenwerte linearer partieller Differentialgleichungen (mit einer Anwendung auf die Theorie der Hohlraumstrahlung), Math. Ann. 71 (1912), no. 4, 441–479.
