
Computational Methods in Applied Mathematics

Volume 18, Issue 2


Spectral Analysis, Properties and Nonsingular Preconditioners for Singular Saddle Point Problems

Na Huang, Institute of Computational Mathematics and Scientific/Engineering Computing, Academy of Mathematics and Systems Science, Chinese Academy of Sciences, Beijing 100190, P. R. China
Chang-Feng Ma
Jun Zou (corresponding author), Department of Mathematics, The Chinese University of Hong Kong, Shatin, N.T., Hong Kong, P. R. China
Published Online: 2017-05-31 | DOI: https://doi.org/10.1515/cmam-2017-0006

Abstract

We first derive some explicit bounds on the spectra of generalized non-symmetric singular or nonsingular saddle point matrices. Then we propose two new nonsingular preconditioners for solving generalized singular saddle point problems, and show that GMRES determines a solution without breakdown when applied to the resulting preconditioned systems with any initial guess. Furthermore, the detailed spectral properties of the preconditioned systems are analyzed. The nonsingular preconditioners are also applied to solve the singular finite element saddle point systems arising from the discretization of the Stokes problems to test their performance.

Keywords: Singular Saddle Point Problems; Preconditioners; Spectral Estimates

MSC 2010: 65F10; 65F50

1 Introduction

In this work we shall mainly study the spectral behavior and preconditioners of the generalized singular or nonsingular saddle point problems of the form

\begin{pmatrix} A & B^T \\ -B & C \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} f \\ -g \end{pmatrix},   (1.1)

where A ∈ ℝ^{n×n} is a symmetric positive definite matrix, C ∈ ℝ^{m×m} is a symmetric positive semi-definite matrix, and B is a general matrix in ℝ^{m×n}, with m ≤ n (possibly m ≪ n); f ∈ ℝ^n and g ∈ ℝ^m are two given vectors.
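To make the block structure concrete, the following NumPy sketch (our own illustrative example with randomly generated blocks; the sizes and names are assumptions, not from the paper) assembles a matrix of form (1.1) and checks that its symmetric part is positive semi-definite, so every eigenvalue has non-negative real part:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 6, 3

# Symmetric positive definite A and symmetric positive semi-definite C.
M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)
L = rng.standard_normal((m, m - 1))
C = L @ L.T                     # rank m-1, so C is singular
B = rng.standard_normal((m, n))

# Non-symmetric saddle point matrix of form (1.1)/(1.2).
Amat = np.block([[A, B.T], [-B, C]])

# z^T Amat z = x^T A x + y^T C y >= 0: the symmetric part of Amat is
# blkdiag(A, C) >= 0, hence all eigenvalues satisfy Re(lambda) >= 0.
sym_part = (Amat + Amat.T) / 2
print(np.linalg.eigvalsh(sym_part).min() >= -1e-10)   # True
print(np.linalg.eigvals(Amat).real.min() >= -1e-10)   # True
```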

Systems of the form (1.1) arise from a variety of scientific and engineering applications, such as computational fluid dynamics, optimal control problems, constrained optimizations, weighted least-squares formulations, mixed or hybrid finite element approximations of partial differential equations, electronic networks and computer graphics, boundary element discretization based on domain decomposition, and so on; see, e.g., [14, 11, 29, 20, 23, 21, 10, 22, 6] and the references therein.

Although the linear system (1.1) can be equivalently and easily transformed into a symmetric linear system, the coefficient matrix of the non-symmetric form (1.1) is semi-positive real and positive semi-stable (see [5] for more details). So the use of the non-symmetric form (1.1) can be advantageous with certain Krylov subspace methods, like restarted GMRES [5], and this has motivated many studies on the numerical solution of generalized saddle point systems of the form (1.1); see, e.g., [18, 7, 8, 2, 5, 3, 31, 32, 36]. This will also be the central focus of the current work: we shall mainly study the spectral properties of the generalized non-symmetric saddle point system (1.1), especially for the case that it is singular, and propose some effective nonsingular preconditioners for the singular systems and analyze the spectral behavior of the preconditioned systems. When the generalized saddle point systems of the form (1.1) are large and sparse, iterative methods appear more attractive than direct methods. However, most existing iterative methods, including Krylov subspace-type methods, converge quite slowly, or even fail to converge, if they are not effectively preconditioned. Therefore, great attention has been paid to preconditioned iterative methods and to the analysis of their convergence behavior in the last decade; see, e.g., [16, 4, 30, 17, 25, 26, 27]. Many preconditioners have been proposed and studied recently for saddle point systems of the form (1.1), such as symmetric indefinite preconditioners [7, 8], inexact constraint preconditioners [8, 28, 3, 36], block triangular preconditioners [13, 34, 21, 33], shift-splitting preconditioners [5, 15, 35] and primal-based penalty preconditioners [19].
In particular, when the saddle point system (1.1) is generated by special discretizations, e.g., finite element or boundary element methods based on non-overlapping domain decomposition, effective preconditioners can be constructed by making full use of the special structure of the discretizations [14, 9]. But we emphasize that most of the above studies were done for the cases when the saddle point systems (1.1) are nonsingular, and few analytical results on preconditioning effects are available for the cases when the saddle point systems (1.1) are singular, which have wide applications in many areas, as mentioned earlier. This will be the central task of our current work, and several new analysis tools and techniques are introduced for this purpose. As is widely known, an effective and efficient preconditioner is expected to yield clustered eigenvalues for the resulting preconditioned system. In order to see such clustering behavior, one needs careful and sharp estimates of the spectral bounds for the preconditioned system. For non-symmetric generalized saddle point matrices of the form

𝒜 = \begin{pmatrix} A & B^T \\ -B & C \end{pmatrix}   (1.2)

in (1.1), there are already many investigations available in the literature; see [7, 8, 32, 34, 2, 3, 24]. Interesting estimates were established in [7] on spectral bounds for the complex eigenvalues of 𝒜, and further improved in [32] from the previous lower bound (ρ1+μ1)/2 to the lower bound max{(ρ1+μ1)/2, μ1 − √δm}, where ρ1 and μ1 are the minimum eigenvalues of A and C, respectively, and δm is the maximum eigenvalue of BB^T. But the matrix C is often singular, or even vanishes, in many applications, so we have μ1 = 0 and the improved estimates of [32] reduce to the original ones in [7]. The first result of this work improves the estimates in [32] for the important cases when the matrix C is singular or zero. With the help of those improved results, we shall derive some explicit and sharp bounds on the spectra of generalized non-symmetric singular saddle point matrices of the form (1.1). Then we propose new nonsingular preconditioners for solving generalized singular saddle point problems, and establish the spectral bounds of the preconditioned systems.

The rest of this work is organized as follows. We first present improved estimates of the spectra of the generalized singular or nonsingular saddle point matrix 𝒜 in Section 2, and then study more specific spectral properties of singular matrices 𝒜 in Section 3. We propose a new nonsingular preconditioner for the singular saddle point system (1.1) in Section 4, and derive the spectral bounds of the preconditioned systems in Section 5. Finally, in Section 6, the nonsingular preconditioners are applied to solve the singular finite element saddle point systems arising from the discretization of the Stokes problem, to demonstrate their effectiveness and efficiency.

Notation.

We introduce some notation that will be used in the subsequent analysis. For any matrix H ∈ ℝ^{l×l}, we shall often write its inverse, transpose, minimum and maximum eigenvalues, rank and null space as H^{-1}, H^T, λ_min[H], λ_max[H], rank[H] and null[H], respectively. Moreover, we write H1 ∼ H2 if H1 is similar to H2 (i.e., there exists an invertible matrix T ∈ ℝ^{l×l} such that H1 = TH2T^{-1}), and H > 0 (resp. H ≥ 0) if H is symmetric positive definite (resp. positive semi-definite). In addition, we use I_l to denote the identity matrix of order l, and Re(λ) and Im(λ) the real and imaginary parts of any λ ∈ ℂ. We shall denote the eigenvalues of the matrices A, C, BB^T and S respectively by ρ1 ≤ ρ2 ≤ ⋯ ≤ ρn, μ1 ≤ μ2 ≤ ⋯ ≤ μm, δ1 ≤ δ2 ≤ ⋯ ≤ δm, and ζ1 ≤ ζ2 ≤ ⋯ ≤ ζm, where S = C + BA^{-1}B^T is the Schur complement matrix.

2 Estimates of Complex Eigenvalues of Singular or Nonsingular Saddle Point Matrices

In this section, we improve the following estimates of the eigenvalues of the matrix 𝒜 established in [32], which will be important for our subsequent studies of the spectral distributions of 𝒜 and of its preconditioned system.

Theorem 2.1 ([32]).

Let 𝒜 be the saddle point matrix of form (1.2), where A is symmetric positive definite and C is symmetric positive semi-definite. Then any complex eigenvalue λ of 𝒜 with Im(λ) ≠ 0 meets the following estimates:

max{(ρ1+μ1)/2, μ1 − √δm} ≤ Re(λ) ≤ min{(ρn+μm)/2, μm + √δm},   (2.1)

|Im(λ)| ≤ √(δm − (ρ1+μ1−2μm)²/4)   if μm < (ρ1+μ1)/2,
|Im(λ)| ≤ √(δm − (ρn+μm−2μ1)²/4)   if μ1 > (ρn+μm)/2,
|Im(λ)| ≤ √δm                       otherwise.   (2.2)

It is often encountered in applications that the matrix C is singular. In this case, Theorem 2.1 implies that the lower bound on the real part of any complex eigenvalue of 𝒜 reduces to (ρ1+μ1)/2, i.e., it deteriorates to the earlier result of [7]. In the next theorem we present estimates that improve those of Theorem 2.1 for the important case that the matrix C is singular. There are no existing analytical tools that apply to this singular case. Instead, we consider the eigensystem of the saddle point equation (1.1) directly: we solve for the variable x in terms of the variable y and substitute it into the second equation of the eigensystem. Then we separate the real and imaginary parts of the quadratic form of the resulting equation, and obtain the desired estimates by using the relation between the real and imaginary parts.

Theorem 2.2.

Let 𝒜 be the saddle point matrix of form (1.2), where A is symmetric positive definite and C is symmetric positive semi-definite. Then any complex eigenvalue λ of 𝒜 with Im(λ) ≠ 0 meets the following estimates:

max{(ρ1+μ1)/2, ρ1 − √δm} ≤ Re(λ) ≤ min{(ρn+μm)/2, ρn + √δm},

|Im(λ)| ≤ √(δm − (ρ1+μ1−2ρn)²/4)   if ρn < (ρ1+μ1)/2,
|Im(λ)| ≤ √(δm − (ρn+μm−2ρ1)²/4)   if ρ1 > (ρn+μm)/2,
|Im(λ)| ≤ √δm                       otherwise.

Proof.

Let λ = a + bi with a, b ∈ ℝ, b ≠ 0 and i = √(−1), and let z = (x^T, y^T)^T be an eigenvector corresponding to λ. Then 𝒜z = λz, or in other words,

Ax + B^T y = λx,   −Bx + Cy = λy.   (2.3)

Clearly, y ∉ null[B^T]: otherwise, the first equality in (2.3) shows Ax = λx, so λ is a (real) eigenvalue of A if x ≠ 0; and if x = 0, then y ≠ 0 and the second equality in (2.3) leads to Cy = λy, so λ is a (real) eigenvalue of C. Therefore λ ∈ ℝ if y ∈ null[B^T], which contradicts b ≠ 0. By the first equality in (2.3), we have

x = (λI − A)^{-1} B^T y.

Together with the second equality in (2.3), this implies

−B(λI − A)^{-1} B^T y + Cy = λy.   (2.4)

Let A = PDP^T be the eigenvalue decomposition of A, where P ∈ ℝ^{n×n} is an orthogonal matrix and D = diag(ρ1, ρ2, …, ρn). Then (2.4) results in

−BP(λI − D)^{-1} P^T B^T y + Cy = λy.

Multiplying the above equality from the left with y* leads to

−y*BP(λI − D)^{-1} P^T B^T y + y*Cy = λ y*y.   (2.5)

For any 1 ≤ j ≤ n, we have

(λ − ρj)^{-1} = 1/(a + bi − ρj) = (a − ρj − bi)/((a − ρj)² + b²).

This leads to

(λI − D)^{-1} = diag((a − ρ1 − bi)/((a − ρ1)² + b²), …, (a − ρn − bi)/((a − ρn)² + b²)).

The combination with (2.5), separating real and imaginary parts and using b ≠ 0, leads to

−y*BP diag((a − ρ1)/((a − ρ1)² + b²), …, (a − ρn)/((a − ρn)² + b²)) P^T B^T y + y*Cy = a y*y

and

y*BP diag(1/((a − ρ1)² + b²), …, 1/((a − ρn)² + b²)) P^T B^T y = y*y.   (2.6)

Then we have

a y*y ≤ (ρn − a) y*BP diag(1/((a − ρ1)² + b²), …, 1/((a − ρn)² + b²)) P^T B^T y + y*Cy = (ρn − a) y*y + y*Cy,

which leads to

a ≤ (ρn + y*Cy/(y*y))/2 ≤ (ρn + μm)/2.   (2.7)

Similarly, we deduce

a ≥ (ρ1 + μ1)/2.   (2.8)

On the other hand, from (2.6), we have

min_{1≤j≤n} (a − ρj)² < y*BB^T y/(y*y) ≤ δm,

which implies

ρ1 − √δm < a < ρn + √δm.

The combination with (2.7)–(2.8) concludes the proof of the first result.

Next, we show the upper bounds for |Im(λ)|. It follows from (2.6) that

min_{1≤j≤n} (a − ρj)² + b² ≤ y*BB^T y/(y*y) ≤ δm,

which implies

|b| ≤ √(δm − min_{1≤j≤n} (a − ρj)²) ≤ √δm.   (2.9)

If (ρ1 + μ1)/2 > ρn, then, for any 1 ≤ j ≤ n, the first result implies

a − ρj ≥ (ρ1 + μ1)/2 − ρj ≥ (ρ1 + μ1)/2 − ρn > 0.

Then

min_{1≤j≤n} (a − ρj)² ≥ ((ρ1 + μ1)/2 − ρn)² = (ρ1 + μ1 − 2ρn)²/4.

Along with (2.9), this shows that

|b| ≤ √(δm − (ρ1 + μ1 − 2ρn)²/4).   (2.10)

If (ρn + μm)/2 < ρ1, then, for any 1 ≤ j ≤ n, the first result implies

a − ρj ≤ (ρn + μm)/2 − ρj ≤ (ρn + μm)/2 − ρ1 < 0.

Then

min_{1≤j≤n} (a − ρj)² ≥ ((ρn + μm)/2 − ρ1)² = (ρn + μm − 2ρ1)²/4.

Along with (2.9), this shows that

|b| ≤ √(δm − (ρn + μm − 2ρ1)²/4).   (2.11)

The combination of (2.9), (2.10), and (2.11) concludes the proof of the second result. ∎

Since A is symmetric and positive definite, we have ρ1 > 0. Theorem 2.2 thus improves the previous estimates of Theorem 2.1 for the case that C is singular, provided ρ1 − √δm > 0. The combination of Theorem 2.2 and Theorem 2.1 leads to the following estimates.

Theorem 2.3.

Under the same settings and conditions as in Theorem 2.2, any complex eigenvalue λ of 𝒜 with Im(λ) ≠ 0 meets the following estimates:

max{(ρ1+μ1)/2, ρ1 − √δm, μ1 − √δm} ≤ Re(λ) ≤ min{(ρn+μm)/2, ρn + √δm, μm + √δm},   (2.12)

|Im(λ)| ≤ √(δm − (ρ1+μ1−2min{ρn,μm})²/4)   if min{ρn,μm} < (ρ1+μ1)/2,
|Im(λ)| ≤ √(δm − (ρn+μm−2max{ρ1,μ1})²/4)   if max{ρ1,μ1} > (ρn+μm)/2,
|Im(λ)| ≤ √δm                               otherwise.   (2.13)

Proof.

The estimates of the real part of eigenvalues of 𝒜 follow directly from Theorems 2.1 and 2.2. The upper bound of |Im(λ)| follows in four different cases.

Case I: max{ρn, μm} < (ρ1+μ1)/2. Then ρn < (ρ1+μ1)/2 and μm < (ρ1+μ1)/2. It follows from Theorem 2.2 and Theorem 2.1 that

|Im(λ)| ≤ √(δm − (ρ1+μ1−2ρn)²/4)   and   |Im(λ)| ≤ √(δm − (ρ1+μ1−2μm)²/4).

This shows that

|Im(λ)| ≤ min{√(δm − (ρ1+μ1−2ρn)²/4), √(δm − (ρ1+μ1−2μm)²/4)} = √(δm − max{(ρ1+μ1−2ρn)², (ρ1+μ1−2μm)²}/4) = √(δm − (ρ1+μ1−2min{ρn,μm})²/4).

Case II: min{ρn, μm} < (ρ1+μ1)/2 ≤ max{ρn, μm}. If ρn > μm, then μm < (ρ1+μ1)/2, and Theorem 2.1 implies

|Im(λ)| ≤ √(δm − (ρ1+μ1−2μm)²/4) = √(δm − (ρ1+μ1−2min{ρn,μm})²/4).

If ρn ≤ μm, then ρn < (ρ1+μ1)/2, and Theorem 2.2 implies

|Im(λ)| ≤ √(δm − (ρ1+μ1−2ρn)²/4) = √(δm − (ρ1+μ1−2min{ρn,μm})²/4).

The combination of Case I and Case II leads to

|Im(λ)| ≤ √(δm − (ρ1+μ1−2min{ρn,μm})²/4)   if min{ρn,μm} < (ρ1+μ1)/2.

Case III: min{ρ1, μ1} > (ρn+μm)/2. Then ρ1 > (ρn+μm)/2 and μ1 > (ρn+μm)/2. Theorem 2.2 and Theorem 2.1 imply

|Im(λ)| ≤ √(δm − (ρn+μm−2ρ1)²/4)   and   |Im(λ)| ≤ √(δm − (ρn+μm−2μ1)²/4).

This is rewritten as

|Im(λ)| ≤ √(δm − (ρn+μm−2max{ρ1,μ1})²/4).

Case IV: max{ρ1, μ1} > (ρn+μm)/2 ≥ min{ρ1, μ1}. By an analogous proof, we obtain the same result as that of Case III. The combination with Case III leads to

|Im(λ)| ≤ √(δm − (ρn+μm−2max{ρ1,μ1})²/4)   if max{ρ1,μ1} > (ρn+μm)/2.

Collecting the results of the four cases above, we complete the proof. ∎

Remark 2.4.

In many cases, the estimates in (2.12) and (2.13) of Theorem 2.3 are sharper than the estimates in (2.1) and (2.2) of Theorem 2.1, respectively. Consider the following example of a non-symmetric saddle point matrix:

𝒜 = \begin{pmatrix} 6 & 0 & 0 & 1 & 0 \\ 0 & 6 & 0 & 0 & 1 \\ 0 & 0 & 6 & 0 & 0 \\ -1 & 0 & 0 & 5 & 0 \\ 0 & -1 & 0 & 0 & 0 \end{pmatrix}.

In this example, we have m=2 and n=3, and we can compute that ρ1=ρn=6, μ1=0, μm=5 and δm=1. Using MATLAB, we can find all the eigenvalues of 𝒜 as follows:

5.5000+0.8660i,5.5000-0.8660i,0.1716,5.8284,6.0000.

But the bounds (2.1)–(2.2) give the following estimates for an eigenvalue λ of 𝒜 with Im(λ) ≠ 0:

3 ≤ Re(λ) ≤ 5.5,   |Im(λ)| ≤ 1,

while the bounds (2.12)–(2.13) indeed provide clearly sharper estimates:

5 ≤ Re(λ) ≤ 5.5,   |Im(λ)| ≤ 0.8660.
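These numbers are easy to reproduce; the following sketch (our own illustrative NumPy translation of the MATLAB computation, with the blocks A = 6I₃, B and C read off from the matrix above) verifies the eigenvalues against the bounds (2.12)–(2.13):

```python
import numpy as np

# Blocks of the 5x5 example: A = 6*I_3, B = [[1,0,0],[0,1,0]], C = diag(5,0).
A = 6 * np.eye(3)
B = np.array([[1., 0., 0.], [0., 1., 0.]])
C = np.diag([5., 0.])
Amat = np.block([[A, B.T], [-B, C]])

rho1 = rhon = 6.0                            # eigenvalues of A
mu1, mum = 0.0, 5.0                          # eigenvalues of C
deltam = np.linalg.eigvalsh(B @ B.T).max()   # = 1

lam = np.linalg.eigvals(Amat)
cplx = lam[np.abs(lam.imag) > 1e-12]         # the pair 5.5 +/- 0.8660i

# Bounds (2.12)-(2.13): 5 <= Re(lambda) <= 5.5 and |Im(lambda)| <= 0.8660.
lo = max((rho1 + mu1) / 2, rho1 - np.sqrt(deltam), mu1 - np.sqrt(deltam))
hi = min((rhon + mum) / 2, rhon + np.sqrt(deltam), mum + np.sqrt(deltam))
im = np.sqrt(deltam - (rhon + mum - 2 * rho1) ** 2 / 4)  # max{rho1,mu1} > (rhon+mum)/2
print(np.all((cplx.real >= lo - 1e-9) & (cplx.real <= hi + 1e-9)))   # True
print(np.all(np.abs(cplx.imag) <= im + 1e-9))                        # True
```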

3 Spectral Behavior of Singular Saddle Point Matrices

In recent years, several studies have been carried out to analyze the spectral properties of the saddle point matrix 𝒜 of form (1.2) when 𝒜 is nonsingular; see, e.g., [7, 8, 1]. But it is often the case in applications that the matrix 𝒜 is singular. To the best of our knowledge, little effort has been made in the literature to study this important singular case, and it is the major focus of this section. We shall first present some estimates of the rank of the singular saddle point matrix 𝒜, which help to determine the multiplicity of the eigenvalue 0. Then we will establish some important estimates of both the real and imaginary parts of all eigenvalues of the singular matrix 𝒜. Those estimates are the essential tools for our analysis in Sections 4 and 5 of the spectral behavior of the preconditioned systems for singular saddle point matrices with nonsingular preconditioners.

Theorem 3.1.

Let 𝒜 be a saddle point matrix of form (1.2), where A is symmetric positive definite, C is symmetric positive semi-definite, rank[B] = p and rank[C] = r. Then the rank of the matrix 𝒜 can be estimated by

n + max{r, p} ≤ rank[𝒜] ≤ n + min{m, p + r}.

Proof.

A direct calculation verifies the identity

𝒜 = \begin{pmatrix} A & B^T \\ -B & C \end{pmatrix} = \begin{pmatrix} I_n & 0 \\ -BA^{-1} & I_m \end{pmatrix} \begin{pmatrix} A & 0 \\ 0 & S \end{pmatrix} \begin{pmatrix} I_n & A^{-1}B^T \\ 0 & I_m \end{pmatrix}.

This shows that

rank[𝒜] = rank[\begin{pmatrix} A & 0 \\ 0 & S \end{pmatrix}] = rank[A] + rank[S] = n + rank[S].   (3.1)

Since S ≥ C ≥ 0 and S ≥ BA^{-1}B^T ≥ 0, we have

rank[S] ≥ max{rank[C], rank[BA^{-1}B^T]} = max{r, p}.

Since A is symmetric positive definite, it follows that

rank[S] = rank[C + BA^{-1}B^T] ≤ rank[C] + rank[BA^{-1}B^T] = r + p.

This and (3.1) complete the proof. ∎

By Theorem 3.1, we know that rank[𝒜] = n + m if rank[C] = m or rank[B] = m. This immediately implies that if 𝒜 is singular, then rank[C] < m (hence μ1 = 0) and rank[B] < m. This leads to the following estimate of the multiplicity of the eigenvalue 0 of 𝒜.

Theorem 3.2.

Under the same settings for the matrices 𝒜, A and C as in Theorem 3.1, except that 𝒜 is now singular, the multiplicity of the eigenvalue 0 of 𝒜 can be estimated by

max{1, m − p − r} ≤ multiplicity of 0 ≤ m − max{r, p}.
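The bounds in Theorems 3.1 and 3.2 are easy to check numerically. The sketch below (an illustrative random example of our own, with rank[B] = p = 2 and rank[C] = r = 1 so that the saddle point matrix is singular) verifies both; by (3.1), the multiplicity of the eigenvalue 0 equals m − rank[S]:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 6, 4

M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)              # symmetric positive definite
B = np.vstack([rng.standard_normal((2, n)),
               np.zeros((2, n))])        # rank[B] = p = 2 < m
c = rng.standard_normal((m, 1))
C = c @ c.T                              # rank[C] = r = 1, positive semi-definite

Amat = np.block([[A, B.T], [-B, C]])
p = np.linalg.matrix_rank(B)
r = np.linalg.matrix_rank(C)
rk = np.linalg.matrix_rank(Amat)

# Theorem 3.1: n + max{r,p} <= rank[A] <= n + min{m, p+r}.
print(n + max(r, p) <= rk <= n + min(m, p + r))            # True

# (3.1): rank[A] = n + rank[S] with S = C + B A^{-1} B^T, so the eigenvalue 0
# has multiplicity m - rank[S], which lies within the bounds of Theorem 3.2.
S = C + B @ np.linalg.solve(A, B.T)
mult = m - np.linalg.matrix_rank(S)
print(max(1, m - p - r) <= mult <= m - max(r, p))          # True
```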

For the real eigenvalues of the singular saddle point matrix 𝒜, we can get the following results by a similar argument as that used in [1, Proposition 2.2] and [31, Theorem 2.1].

Theorem 3.3.

Let 𝒜, A and C be matrices as in Theorem 3.1, and let S = C + BA^{-1}B^T be the Schur complement, with 0 < q = rank[S] < m. Then any non-zero real eigenvalue λ of 𝒜 can be estimated by

min{ρ1, ζ_{m−q+1}} ≤ λ ≤ max{ρn, μm}.

The following results are direct consequences of Theorems 3.2 and 3.3.

Corollary 3.4.

Let 𝒜 be the singular saddle point matrix of form (1.2), where A is symmetric positive definite and C is symmetric positive semi-definite. Assume that rank[B] = p, rank[C] = r and 0 < q = rank[S]. Then we have the following estimates of the eigenvalues λ of 𝒜:

  • (1)

    𝒜 has at least max{1, m−p−r} and at most m − max{r, p} eigenvalues equal to 0.

  • (2)

    If λ > 0, then it has the bounds

    min{ρ1, ζ_{m−q+1}} ≤ λ ≤ max{ρn, μm}.

  • (3)

    If Im(λ) ≠ 0, then

    max{ρ1/2, ρ1 − √δm} ≤ Re(λ) ≤ min{(ρn+μm)/2, ρn + √δm, μm + √δm},

    |Im(λ)| ≤ √(δm − (ρ1−2μm)²/4)     if 2μm < ρ1,
    |Im(λ)| ≤ √(δm − (ρn+μm−2ρ1)²/4)  if 2ρ1 > ρn + μm,
    |Im(λ)| ≤ √δm                      otherwise.

If C=0, then Corollary 3.4 can be much simplified as stated in the following corollary.

Corollary 3.5.

Under the same assumptions as in Corollary 3.4, except that C = 0, the following estimates of the eigenvalues λ of 𝒜 hold:

  • (1)

    𝒜 has m − p eigenvalues equal to 0.

  • (2)

    If λ > 0 and we set δ_{m+1} = +∞, then

    min{ρ1, δ_{m−p+1}/ρn} ≤ λ ≤ ρn.

  • (3)

    If Im(λ) ≠ 0, then

    |Im(λ)| ≤ √(δm − ρ1²/4)   and   max{ρ1/2, ρ1 − √δm} ≤ Re(λ) ≤ min{ρn/2, √δm}.

4 Nonsingular Preconditioners for Singular Saddle Point Matrices

Intensive efforts have been made to study preconditioners for nonsingular saddle point systems of form (1.1); see, e.g., [7, 8, 28, 3, 13, 34, 21, 33, 5, 35, 19]. On the other hand, few results are available on the analysis of preconditioners for singular saddle point systems; we are only aware of the results in [36], which considered the simple case C = 0 in (1.1). Moreover, only singular preconditioners were analyzed in [36] for the singular saddle point system (1.1), and these are not always reliable, as the preconditioned system may not be equivalent to the original system (1.1) if a singular preconditioner is used. In this section, we present and study some nonsingular preconditioners 𝒦 for the saddle point matrix 𝒜 in (1.1) when 𝒜 is singular. We shall consider preconditioners of the general form

𝒦 = \begin{pmatrix} P_A & B^T \\ -B & P_C \end{pmatrix},   (4.1)

where PA and PC are symmetric positive definite approximations of A and C, respectively. Then we shall solve the following preconditioned system:

𝒦^{-1}𝒜 z = 𝒦^{-1} b,   (4.2)

or

𝒜𝒦^{-1} ẑ = b,   ẑ = 𝒦 z,   (4.3)

where z=(xT,yT)T and b=(fT,-gT)T. Since

𝒦^{-1} = \begin{pmatrix} I_n & -P_A^{-1}B^T(P_C + BP_A^{-1}B^T)^{-1} \\ 0 & (P_C + BP_A^{-1}B^T)^{-1} \end{pmatrix} \begin{pmatrix} P_A^{-1} & 0 \\ BP_A^{-1} & I_m \end{pmatrix},

we only need to solve the two subsystems P_A x = r_x and (P_C + BP_A^{-1}B^T) y = r_y when applying the GMRES method coupled with the preconditioner 𝒦 to the saddle point problem (1.1). In addition, we may easily see that B is not of full row rank when 𝒜 is singular, so it is natural to require a symmetric positive definite P_C in (4.1). By Theorem 3.1, we know that 𝒦 is nonsingular. Some simple choices of P_A and P_C are, e.g., P_A = diag(A) and P_C = αI with a positive parameter α.
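The two-subsystem application of 𝒦^{-1} can be sketched as follows (an illustrative NumPy implementation of our own, using the simple choices P_A = diag(A) and P_C = αI mentioned above, with G := P_C + BP_A^{-1}B^T):

```python
import numpy as np

rng = np.random.default_rng(2)
n, m, alpha = 6, 3, 1.0

M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)
B = rng.standard_normal((m, n))

PA = np.diag(np.diag(A))                 # simple SPD approximation of A
PC = alpha * np.eye(m)                   # P_C = alpha*I
G = PC + B @ np.linalg.solve(PA, B.T)    # G = P_C + B P_A^{-1} B^T

def apply_K_inv(rx, ry):
    """Apply K^{-1} to (rx, ry) using two solves with PA and one with G."""
    u = np.linalg.solve(PA, rx)
    y = np.linalg.solve(G, ry + B @ u)
    x = u - np.linalg.solve(PA, B.T @ y)
    return x, y

# Check against the explicitly assembled K = [[PA, B^T], [-B, PC]].
K = np.block([[PA, B.T], [-B, PC]])
rx, ry = rng.standard_normal(n), rng.standard_normal(m)
x, y = apply_K_inv(rx, ry)
print(np.allclose(K @ np.concatenate([x, y]), np.concatenate([rx, ry])))  # True
```

In practice P_A and G would of course be factorized once and reused across GMRES iterations; the dense solves here are only for illustration.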

The preconditioners of form (4.1) were considered in [8] for the case when 𝒜 is nonsingular, and PC was taken to be the special matrix PC=BPA-1BT-PS, where PS is a symmetric positive definite approximation of the Schur complement S. But we would emphasize that the singular case considered here is much more technical than the nonsingular system. One main technical difficulty one may encounter in solving a singular system by GMRES is that the iteration process may break down at some step. Some conditions were derived in [12] under which the GMRES iterates converge safely to a solution of a singular system. For our subsequent study of the convergence of the GMRES method for solving the preconditioned system (4.2), we introduce the following convergence result from [12, Theorem 2.4].

Theorem 4.1 ([12]).

GMRES determines a least-squares solution of the singular linear system Az = b without breakdown for all b and all initial guesses z0 if and only if null[A] = null[A^T]. Furthermore, if Az = b is consistent and z0 ∈ R(A), then the least-squares solution is the pseudoinverse solution, i.e., the minimizer in the Euclidean norm ‖·‖₂.

Using the above convergence theory, we can derive the following result when GMRES is applied for solving the singular saddle point problem (1.1).

Theorem 4.2.

Let 𝒦 be the preconditioner defined in (4.1) and let the saddle point problem (1.1) be consistent. If null[B^T] ∩ null[C] is an invariant subspace of P_C, then GMRES applied to the preconditioned system (4.2) with any initial guess z0 determines a solution of the saddle point problem (1.1) without breakdown. Furthermore, if z0 ∈ R(𝒦^{-1}𝒜), then the GMRES solution is the pseudoinverse solution.

Proof.

Firstly, we prove that null[𝒦^{-1}𝒜] = null[𝒜^T𝒦^{-T}]. Since the preconditioner 𝒦 is nonsingular, we have null[𝒦^{-1}𝒜] = null[𝒜]. For any (x^T, y^T)^T ∈ null[𝒜], it holds that

Ax + B^T y = 0,   −Bx + Cy = 0.   (4.4)

This shows x = −A^{-1}B^T y. Together with the second equality in (4.4), this implies y^T BA^{-1}B^T y + y^T Cy = 0. Since both of the matrices BA^{-1}B^T and C are symmetric positive semi-definite, we have B^T y = 0 and Cy = 0. Since x = −A^{-1}B^T y = 0, the null space of 𝒜 can be expressed as

null[𝒜] = {(0^T, y^T)^T : y ∈ null[B^T] ∩ null[C]}.   (4.5)

On the other hand, for any z = (x^T, y^T)^T ∈ null[𝒜^T𝒦^{-T}], it follows that 𝒜^T𝒦^{-T}z = 0. This shows that 𝒦^{-T}z ∈ null[𝒜^T]. Using the same argument as that for deriving null[𝒜] in (4.5), we deduce

null[𝒜^T] = {(0^T, y^T)^T : y ∈ null[B^T] ∩ null[C]}.

Then there exists a vector ŷ ∈ null[B^T] ∩ null[C] such that 𝒦^{-T}z = (0^T, ŷ^T)^T; in other words, z = 𝒦^T(0^T, ŷ^T)^T. Noticing the form (4.1) and the condition that null[B^T] ∩ null[C] is an invariant subspace of P_C, we deduce

\begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} P_A & -B^T \\ B & P_C \end{pmatrix} \begin{pmatrix} 0 \\ \hat{y} \end{pmatrix} = \begin{pmatrix} -B^T\hat{y} \\ P_C\hat{y} \end{pmatrix} = \begin{pmatrix} 0 \\ P_C\hat{y} \end{pmatrix} ∈ null[𝒜].

This implies null[𝒜^T𝒦^{-T}] ⊆ null[𝒜]. Since rank[𝒜^T𝒦^{-T}] = rank[𝒜], we derive

null[𝒜^T𝒦^{-T}] = null[𝒜] = null[𝒦^{-1}𝒜].

Together with Theorem 4.1 and the fact that (1.1) is consistent, this yields that GMRES determines a solution z* ∈ ℝ^{n+m} of the preconditioned system (4.2) without breakdown for all b and z0. It is easy to see that z* is also a solution of the saddle point problem (1.1). Furthermore, if the initial vector z0 ∈ R(𝒦^{-1}𝒜), then z* is the pseudoinverse solution, which completes the proof of Theorem 4.2. ∎

Remark 4.3.

The condition that null[B^T] ∩ null[C] is an invariant subspace of P_C is not restrictive. In fact, if we take P_C to be a scalar matrix, i.e., P_C = αI_m with α > 0, then this condition holds automatically.
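The null-space condition of Theorem 4.2 can be checked numerically. The sketch below (an illustrative example of our own with C = 0, a rank-deficient B, and P_C = αI_m, so the condition of this remark is satisfied) verifies that null[𝒦^{-1}𝒜] and null[𝒜^T𝒦^{-T}] span the same one-dimensional subspace:

```python
import numpy as np

rng = np.random.default_rng(3)
n, m, alpha = 5, 3, 2.0

M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)              # symmetric positive definite
B2 = rng.standard_normal((2, n))
B = np.vstack([B2, B2[0] + B2[1]])       # rank 2 < m = 3, so null[B^T] != {0}
C = np.zeros((m, m))                     # C = 0: null[B^T] cap null[C] = null[B^T]

Amat = np.block([[A, B.T], [-B, C]])     # singular saddle point matrix
K = np.block([[np.diag(np.diag(A)), B.T],
              [-B, alpha * np.eye(m)]])  # P_A = diag(A), P_C = alpha*I

def null_basis(X):
    """Orthonormal basis of the null space of X via the SVD."""
    _, s, Vt = np.linalg.svd(X)
    return Vt[np.sum(s > 1e-10 * s.max()):].T

N1 = null_basis(np.linalg.solve(K, Amat))      # null[K^{-1} A]
N2 = null_basis(np.linalg.solve(K.T, Amat.T))  # null[A^T K^{-T}]

# Both null spaces are one-dimensional and coincide (|cos angle| = 1), so
# GMRES applied to the preconditioned system cannot break down (Theorem 4.1).
print(N1.shape[1] == N2.shape[1] == 1)          # True
print(abs(np.abs(N1.T @ N2)[0, 0] - 1) < 1e-8)  # True
```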

Similarly to Theorem 4.2, we have the following convergence result for GMRES applied to the preconditioned system (4.3).

Theorem 4.4.

Let 𝒦 be the preconditioner defined in (4.1). If null[B^T] ∩ null[C] is an invariant subspace of P_C, then GMRES applied to the preconditioned system (4.3) with any initial guess z0 determines a least-squares solution of the saddle point problem (1.1) without breakdown.

Proof.

Using a similar argument as that for Theorem 4.2, we can prove null[𝒜𝒦^{-1}] = null[𝒦^{-T}𝒜^T]. Then we know from Theorem 4.1 that GMRES determines a least-squares solution ẑ* ∈ ℝ^{n+m} of the preconditioned system (4.3) without breakdown for all b and z0. Since

0 = (𝒜𝒦^{-1})^T (b − 𝒜𝒦^{-1}ẑ*) = 𝒦^{-T}𝒜^T (b − 𝒜𝒦^{-1}ẑ*),

we see 𝒜^T (b − 𝒜𝒦^{-1}ẑ*) = 0. This implies that z* = 𝒦^{-1}ẑ* is a least-squares solution of the original system (1.1). ∎

Remark 4.5.

Unlike Theorem 4.2, Theorem 4.4 can be applied also to the case where system (1.1) is inconsistent.

Although Theorems 4.2 and 4.4 guarantee the convergence of GMRES applied to the preconditioned systems (4.2) and (4.3), respectively, the convergence rate depends strongly on the spectral properties of the preconditioned matrix. Studying these spectral properties will be the central task of the next section. For these studies, we remark that the general case C ≠ 0 considered here is much more technical than the simple case C = 0 studied in [36], and the analysis of [36] clearly does not apply to the current case with C ≠ 0. Moreover, unlike the results in [36], which gave only estimates of the real parts of the eigenvalues of the preconditioned system, we shall derive lower and upper bounds for both the real and imaginary parts of the eigenvalues of the preconditioned matrix.

5 Spectral Estimates of the Preconditioned Matrix

There are two major analysis techniques that enable the study of the spectral behavior of the preconditioned matrix 𝒜𝒦^{-1} in (4.3). First, we work out two effective similarity transformations, which transform the preconditioned matrix 𝒜𝒦^{-1} into a matrix (see (5.2) below) that has exactly the same block form as the original saddle point matrix 𝒜 in (1.2). Then we can apply the results of Section 3 for the saddle point matrix 𝒜 to derive lower and upper bounds for the real and complex eigenvalues of the preconditioned system 𝒜𝒦^{-1}. But this application is not straightforward: we have to establish spectral estimates for each of the matrices QQ^T, H and the Schur complement W := H + QA_p^{-1}Q^T (see (5.2)–(5.4) below) associated with the new saddle point matrix in (5.2), and several delicate new techniques are introduced for this purpose.

In the remainder of this section, we will investigate the spectral properties of 𝒜𝒦-1 by using the results we developed in Section 3. To this end, we first define a matrix

𝒫 = \begin{pmatrix} P_A^{1/2} & 0 \\ 0 & P_C^{1/2} \end{pmatrix}.

We can directly verify that 𝒜𝒦^{-1} is similar to (𝒫^{-1}𝒜𝒫^{-1})(𝒫𝒦^{-1}𝒫), i.e., to

\begin{pmatrix} P_A^{-1/2}AP_A^{-1/2} & P_A^{-1/2}B^TP_C^{-1/2} \\ -P_C^{-1/2}BP_A^{-1/2} & P_C^{-1/2}CP_C^{-1/2} \end{pmatrix} \begin{pmatrix} I_n & P_A^{-1/2}B^TP_C^{-1/2} \\ -P_C^{-1/2}BP_A^{-1/2} & I_m \end{pmatrix}^{-1}.

With the definitions

A_p = P_A^{-1/2}AP_A^{-1/2},   C_p = P_C^{-1/2}CP_C^{-1/2},   R = P_C^{-1/2}BP_A^{-1/2},   (5.1)

the above observation is recast as

𝒜𝒦^{-1} ∼ \begin{pmatrix} A_p & R^T \\ -R & C_p \end{pmatrix} \begin{pmatrix} I_n & R^T \\ -R & I_m \end{pmatrix}^{-1}.

This and the factorization

\begin{pmatrix} I_n & R^T \\ -R & I_m \end{pmatrix} = \begin{pmatrix} I_n & 0 \\ -R & I_m + RR^T \end{pmatrix} \begin{pmatrix} I_n & R^T \\ 0 & I_m \end{pmatrix}

lead to

𝒜𝒦^{-1} ∼ \begin{pmatrix} A_p & R^T \\ -R & C_p \end{pmatrix} \begin{pmatrix} I_n & -R^T \\ 0 & I_m \end{pmatrix} \begin{pmatrix} I_n & 0 \\ (I_m+RR^T)^{-1}R & (I_m+RR^T)^{-1} \end{pmatrix}

∼ \begin{pmatrix} I_n & 0 \\ (I_m+RR^T)^{-1}R & (I_m+RR^T)^{-1} \end{pmatrix} \begin{pmatrix} A_p & R^T \\ -R & C_p \end{pmatrix} \begin{pmatrix} I_n & -R^T \\ 0 & I_m \end{pmatrix}

= \begin{pmatrix} A_p & (I_n-A_p)R^T \\ -(I_m+RR^T)^{-1}R(I_n-A_p) & (I_m+RR^T)^{-1}(2RR^T-RA_pR^T+C_p) \end{pmatrix}

∼ \begin{pmatrix} A_p & Q^T \\ -Q & H \end{pmatrix} =: ℳ,   (5.2)

where Q and H are given by

Q = (I_m+RR^T)^{-1/2} R (I_n − A_p),   (5.3)
H = (I_m+RR^T)^{-1/2} (2RR^T − RA_pR^T + C_p) (I_m+RR^T)^{-1/2}.   (5.4)

Since 𝒜𝒦^{-1} and ℳ have the same eigenvalues, it suffices to estimate the eigenvalues of ℳ. As ℳ has the same structure as the original saddle point matrix 𝒜 in (1.2), we can apply the results developed in Section 3 to investigate the spectral properties of ℳ.
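The similarity (5.2) can be verified numerically. The following sketch (illustrative random blocks of our own; matrix inverse square roots computed via eigendecomposition) builds A_p, C_p, R as in (5.1), then Q and H as in (5.3)–(5.4), and checks that ℳ and 𝒜𝒦^{-1} have the same spectrum:

```python
import numpy as np

rng = np.random.default_rng(5)
n, m = 5, 3

def spd(k):
    M = rng.standard_normal((k, k))
    return M @ M.T + k * np.eye(k)

def inv_sqrt(X):
    """Inverse square root of a symmetric positive definite matrix."""
    w, V = np.linalg.eigh(X)
    return V @ np.diag(w ** -0.5) @ V.T

A, C = spd(n), spd(m)
B = rng.standard_normal((m, n))
PA, PC = spd(n), spd(m)

PAh, PCh = inv_sqrt(PA), inv_sqrt(PC)
Ap, Cp, R = PAh @ A @ PAh, PCh @ C @ PCh, PCh @ B @ PAh   # (5.1)

T = inv_sqrt(np.eye(m) + R @ R.T)      # (I_m + R R^T)^{-1/2}
Q = T @ R @ (np.eye(n) - Ap)                              # (5.3)
H = T @ (2 * R @ R.T - R @ Ap @ R.T + Cp) @ T             # (5.4)
Mcal = np.block([[Ap, Q.T], [-Q, H]])                     # the matrix in (5.2)

AK = np.block([[A, B.T], [-B, C]]) @ np.linalg.inv(
     np.block([[PA, B.T], [-B, PC]]))

# Every eigenvalue of A K^{-1} matches one of Mcal, and vice versa.
l1, l2 = np.linalg.eigvals(AK), np.linalg.eigvals(Mcal)
gap = np.abs(l1[:, None] - l2[None, :])
print(max(gap.min(axis=0).max(), gap.min(axis=1).max()) < 1e-8)   # True
```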

To do so, we write the eigenvalues of A_p, C_p, RR^T and QQ^T, respectively, as

0 < ρ̂1 ≤ ρ̂2 ≤ ⋯ ≤ ρ̂n   and   0 ≤ μ̂1 ≤ μ̂2 ≤ ⋯ ≤ μ̂m,
0 ≤ ζ̂1 ≤ ζ̂2 ≤ ⋯ ≤ ζ̂m   and   0 ≤ δ̂1 ≤ δ̂2 ≤ ⋯ ≤ δ̂m.

Next, we derive spectral estimates for each of the matrices QQ^T, H and W, so that we can apply the spectral bounds of Section 3 to the new saddle point matrix ℳ in (5.2).

Without loss of generality, we assume that ρ̂n < 1; if ρ̂n ≥ 1, a proper scaling of P_A brings us back to the case ρ̂n < 1. Under this assumption, I_n − A_p is symmetric positive definite. Since R(2I_n − A_p)R^T ≥ 0 and C_p ≥ 0, we have

(I_m+RR^T)^{1/2} H (I_m+RR^T)^{1/2} ≥ C_p ≥ 0

and

(I_m+RR^T)^{1/2} H (I_m+RR^T)^{1/2} ≥ R(2I_n − A_p)R^T ≥ 0.

This shows

rank[H] ≥ max{rank[R(2I_n − A_p)R^T], rank[C_p]} = max{rank[B], rank[C]}.

Hence H ≠ 0 if B ≠ 0 or C ≠ 0.

It is easy to verify that

rank[Q] = rank[P_C^{-1/2}BP_A^{-1/2}] = rank[B].   (5.5)

The combination of (5.1), (5.3) and (5.4) leads to

W = (I_m+RR^T)^{-1/2} [2RR^T − RA_pR^T + C_p + R(I_n−A_p)A_p^{-1}(I_n−A_p)R^T] (I_m+RR^T)^{-1/2} = (I_m+RR^T)^{-1/2} (C_p + RA_p^{-1}R^T) (I_m+RR^T)^{-1/2}

and

C_p + RA_p^{-1}R^T = P_C^{-1/2}(C + BA^{-1}B^T)P_C^{-1/2} = P_C^{-1/2}SP_C^{-1/2}.

This implies

rank[W] = rank[C_p + RA_p^{-1}R^T] = rank[S].

Together with (5.5), we are ready to establish the first major result in this section.

Lemma 5.1.

Let 𝒜 be the singular saddle point matrix of form (1.2), where A is symmetric positive definite and C is symmetric positive semi-definite, and let Q and H be the two matrices given in (5.3) and (5.4), respectively. Assume that rank[B] = p, rank[C] = r and 0 < q = rank[S]. Then rank[Q] = p, max{p, r} ≤ rank[W] = q ≤ p + r, and the following estimates hold:

0 < ζ̂_{m−p+1}(1−ρ̂n)²/(1+ζ̂_{m−p+1}) ≤ δ̂_{m−p+1} ≤ δ̂m ≤ ζ̂m(1−ρ̂1)²/(1+ζ̂m),

λ_max[H] ≤ min{ζ̂m(2−ρ̂1)/(1+ζ̂m) + μ̂m, max{2−ρ̂1, μ̂m}},

λ_{m−q+1}[W] ≥ (1/(1+ζ̂m)) λ_{m−q+1}[C_p + RA_p^{-1}R^T] > 0.

In particular, if C = 0, then rank[W] = p and

λ_{m−p+1}[W] ≥ ζ̂_{m−p+1}/(ρ̂n(1+ζ̂_{m−p+1})).

Proof.

It is easy to verify that the eigenvalues of (I_m+RR^T)^{-1/2}RR^T(I_m+RR^T)^{-1/2} are

ζ̂j/(1+ζ̂j),   j = 1, 2, …, m.

Since the function x/(1+x) is monotonically increasing for x ≥ 0, we have

λ_max[(I_m+RR^T)^{-1/2}RR^T(I_m+RR^T)^{-1/2}] = ζ̂m/(1+ζ̂m).   (5.6)

This, ρ̂n < 1 and (5.3) lead to

QQ^T = (I_m+RR^T)^{-1/2} R(I_n−A_p)² R^T (I_m+RR^T)^{-1/2} ≤ (1−ρ̂1)² (I_m+RR^T)^{-1/2}RR^T(I_m+RR^T)^{-1/2} ≤ [ζ̂m(1−ρ̂1)²/(1+ζ̂m)] I_m.   (5.7)

Therefore,

δ̂m ≤ ζ̂m(1−ρ̂1)²/(1+ζ̂m).

On the other hand, (5.7) implies

QQ^T ≥ (1−ρ̂n)² (I_m+RR^T)^{-1/2}RR^T(I_m+RR^T)^{-1/2} ≥ 0.

This gives an estimate of the smallest positive eigenvalue of QQ^T, namely

δ̂_{m−p+1} ≥ (1−ρ̂n)² λ_{m−p+1}[(I_m+RR^T)^{-1/2}RR^T(I_m+RR^T)^{-1/2}].

Together with the fact that

λ_{m−p+1}[(I_m+RR^T)^{-1/2}RR^T(I_m+RR^T)^{-1/2}] = ζ̂_{m−p+1}/(1+ζ̂_{m−p+1}) > 0,

this leads to

δ̂_{m−p+1} ≥ ζ̂_{m−p+1}(1−ρ̂n)²/(1+ζ̂_{m−p+1}).

We next prove the second result. From (5.4) and (5.6) we obtain

H = (I_m+RR^T)^{-1/2}R(2I_n−A_p)R^T(I_m+RR^T)^{-1/2} + (I_m+RR^T)^{-1/2}C_p(I_m+RR^T)^{-1/2} ≤ (2−ρ̂1)(I_m+RR^T)^{-1/2}RR^T(I_m+RR^T)^{-1/2} + μ̂m(I_m+RR^T)^{-1} ≤ [ζ̂m(2−ρ̂1)/(1+ζ̂m) + μ̂m] I_m

and

H ≤ max{2−ρ̂1, μ̂m} [(I_m+RR^T)^{-1/2}RR^T(I_m+RR^T)^{-1/2} + (I_m+RR^T)^{-1}] = max{2−ρ̂1, μ̂m} I_m.

This implies

λ_max[H] ≤ min{ζ̂m(2−ρ̂1)/(1+ζ̂m) + μ̂m, max{2−ρ̂1, μ̂m}}.

For any matrix L ∈ ℝ^{m×m}, we have rank[LL^T] = rank[L^TL], and LL^T and L^TL have the same eigenvalues. This leads to

λ_{m−q+1}[W] = λ_{m−q+1}[(I_m+RR^T)^{-1/2}(C_p+RA_p^{-1}R^T)(I_m+RR^T)^{-1/2}] = λ_{m−q+1}[(C_p+RA_p^{-1}R^T)^{1/2}(I_m+RR^T)^{-1}(C_p+RA_p^{-1}R^T)^{1/2}] ≥ (1/(1+ζ̂m)) λ_{m−q+1}[C_p+RA_p^{-1}R^T],

which gives our desired estimate.

which gives our desired estimate.

In particular, if C = 0, then C_p = 0 leads to

W = (I_m+RR^T)^{-1/2}RA_p^{-1}R^T(I_m+RR^T)^{-1/2} ≥ (1/ρ̂n)(I_m+RR^T)^{-1/2}RR^T(I_m+RR^T)^{-1/2}.

Therefore,

λ_{m−p+1}[W] ≥ (1/ρ̂n) λ_{m−p+1}[(I_m+RR^T)^{-1/2}RR^T(I_m+RR^T)^{-1/2}] = ζ̂_{m−p+1}/(ρ̂n(1+ζ̂_{m−p+1})),

which completes the proof. ∎

which completes the proof. ∎

Introduce the two constants

ξm = [ζ̂m(1−ρ̂1)²/(1+ζ̂m)]^{1/2}   and   ϑm = min{ζ̂m(2−ρ̂1)/(1+ζ̂m) + μ̂m, max{2−ρ̂1, μ̂m}},

so that, by Lemma 5.1, √δ̂m ≤ ξm and λ_max[H] ≤ ϑm; since ξm < 1, we also have δ̂m ≤ ξm² ≤ ξm.

The factorization

ℳ = \begin{pmatrix} I_n & 0 \\ -QA_p^{-1} & I_m \end{pmatrix} \begin{pmatrix} A_p & 0 \\ 0 & W \end{pmatrix} \begin{pmatrix} I_n & A_p^{-1}Q^T \\ 0 & I_m \end{pmatrix}

leads to the following estimates directly from Corollary 3.4 and Lemma 5.1.

Theorem 5.2.

Let 𝒜 and 𝒦 be the two saddle point matrices of the forms (1.2) and (4.1), respectively, where A is symmetric positive definite and C is symmetric positive semi-definite, with rank[B] = p and rank[C] = r. Then any eigenvalue λ of the preconditioned matrix 𝒜𝒦^{-1} fulfills the following estimates:

  • (1)

    𝒜𝒦^{-1} has at least max{1, m−p−r} and at most m − max{p, r} eigenvalues equal to 0.

  • (2)

    If λ > 0 and η̂ is the minimum non-zero eigenvalue of C_p + RA_p^{-1}R^T, then

    min{ρ̂1, η̂/(1+ζ̂m)} ≤ λ ≤ max{ρ̂n, ϑm}.

  • (3)

    If Im(λ) ≠ 0, then

    max{ρ̂1/2, ρ̂1 − ξm} ≤ Re(λ) ≤ min{(ρ̂n+ϑm)/2, ρ̂n + ξm, ϑm + ξm},

    |Im(λ)| ≤ (ξm² − (ρ̂1−2ϑm)²/4)^{1/2}       if 2ϑm < ρ̂1,
    |Im(λ)| ≤ (ξm² − (ρ̂n+ϑm−2ρ̂1)²/4)^{1/2}    if 2ρ̂1 > ρ̂n + ϑm,
    |Im(λ)| ≤ ξm                                otherwise.

The two simple inequalities

ϑm = min{ζ̂m(2−ρ̂1)/(1+ζ̂m) + μ̂m, max{2−ρ̂1, μ̂m}} ≤ max{2−ρ̂1, μ̂m}

and

2 − ρ̂1 > ρ̂n

simplify the estimates of Theorem 5.2 as below.

Corollary 5.3.

Under the conditions of Theorem 5.2 on the singular saddle point matrix 𝒜 and its nonsingular preconditioner 𝒦, any eigenvalue λ of the preconditioned matrix 𝒜𝒦^{-1} fulfills the following estimates:

  • (1)

    If λ > 0 and η̂ is the minimum non-zero eigenvalue of C_p + RA_p^{-1}R^T, then

    min{ρ̂1, η̂/(1+ζ̂m)} ≤ λ ≤ max{2−ρ̂1, μ̂m}.

  • (2)

    If Im(λ) ≠ 0, then |Im(λ)| ≤ 1 − ρ̂1 and

    max{ρ̂1/2, 2ρ̂1 − 1} ≤ Re(λ) ≤ min{(ρ̂n + max{2−ρ̂1, μ̂m})/2, 1 + ρ̂n − ρ̂1}.

Corollary 5.3 shows that all eigenvalues of 𝒜𝒦-1 are located in a square with a side length of 2 if PC is chosen such that μ^m2.

Moreover, if we have C = 0, then C_p = 0, and accordingly μ̂_m = 0 and r = 0. In this case,

\[
\vartheta_m = \min\Bigl\{\frac{\hat\zeta_m(2-\hat\rho_1)}{1+\hat\zeta_m},\;\max\{2-\hat\rho_1,\,0\}\Bigr\}
= \min\Bigl\{\frac{\hat\zeta_m(2-\hat\rho_1)}{1+\hat\zeta_m},\;2-\hat\rho_1\Bigr\}
= \frac{\hat\zeta_m(2-\hat\rho_1)}{1+\hat\zeta_m}.
\]

An application of this expression of ϑ_m in Theorem 5.2 verifies the following corollary.

Corollary 5.4.

Under the same conditions of Theorem 5.2 on the singular saddle point matrix 𝒜 and its nonsingular preconditioner 𝒦, except that we now assume C = 0, the following holds:

  • (1)

    𝒜𝒦^{-1} has m-p zero eigenvalues.

Moreover, any eigenvalue λ of the preconditioned matrix 𝒜𝒦^{-1} fulfills the following estimates:

  • (2)

    If λ > 0, then

    \[
    \min\Bigl\{\hat\rho_1,\;\frac{\hat\zeta_{m-p+1}}{\hat\rho_n(1+\hat\zeta_{m-p+1})}\Bigr\}
    \le \lambda \le
    \max\Bigl\{\hat\rho_n,\;\frac{\hat\zeta_m(2-\hat\rho_1)}{1+\hat\zeta_m}\Bigr\}.
    \]

  • (3)

    If Im(λ) ≠ 0, then

    \[
    \max\Bigl\{\frac{\hat\rho_1}{2},\;\hat\rho_1-\xi_m\Bigr\}
    \le \operatorname{Re}(\lambda) \le
    \min\Bigl\{\frac{1}{2}\Bigl(\hat\rho_n+\frac{\hat\zeta_m(2-\hat\rho_1)}{1+\hat\zeta_m}\Bigr),\;\hat\rho_n+\xi_m,\;\frac{\hat\zeta_m(2-\hat\rho_1)}{1+\hat\zeta_m}+\xi_m\Bigr\},
    \]
    \[
    2\,|\operatorname{Im}(\lambda)| \le
    \begin{cases}
    \sqrt{4\xi_m^2-\Bigl(\hat\rho_1-\dfrac{2\hat\zeta_m(2-\hat\rho_1)}{1+\hat\zeta_m}\Bigr)^2} & \text{if } \hat\rho_1>\dfrac{4\hat\zeta_m}{1+3\hat\zeta_m},\\[2ex]
    \sqrt{4\xi_m^2-\Bigl(\hat\rho_n+\dfrac{\hat\zeta_m(2-\hat\rho_1)}{1+\hat\zeta_m}-2\hat\rho_1\Bigr)^2} & \text{if } \dfrac{2\hat\rho_1-\hat\rho_n}{2-\hat\rho_1}>\dfrac{\hat\zeta_m}{1+\hat\zeta_m},\\[2ex]
    2\xi_m & \text{otherwise}.
    \end{cases}
    \]

It can be seen from Corollary 5.4 that the smaller the maximum eigenvalue ζ̂_m of RR^T is, the more clustered the eigenvalues of the preconditioned matrix 𝒜𝒦^{-1} are. When C = 0, we can choose P_C = αI_m with α > 0; then RR^T = (1/α) B P_A^{-1} B^T, which shows that ζ̂_m will be small for sufficiently large α. On the other hand, we can infer from Corollary 5.3 that the eigenvalues of 𝒜𝒦^{-1} remain clustered even for large ζ̂_m.
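This 1/α scaling is easy to observe numerically. The sketch below uses a randomly generated P_A and B (illustrative placeholders, not the matrices of the experiments below) and checks that the largest eigenvalue ζ̂_m of RR^T = (1/α) B P_A^{-1} B^T decays proportionally to 1/α:

```python
import numpy as np

rng = np.random.default_rng(2)
n, m = 12, 5
M = rng.standard_normal((n, n))
P_A = M @ M.T + n * np.eye(n)            # a symmetric positive definite P_A
B = rng.standard_normal((m, n))
S = B @ np.linalg.solve(P_A, B.T)        # S = B P_A^{-1} B^T

# With P_C = alpha * I_m we have R R^T = S / alpha, so its largest
# eigenvalue zeta_m shrinks proportionally to 1/alpha.
for alpha in (0.1, 1.0, 10.0):
    zeta_m = np.linalg.eigvalsh(S / alpha)[-1]
    print(f"alpha = {alpha:5.1f}:  zeta_m = {zeta_m:.3e}")
```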

6 Numerical Experiments

In this section, we present a few numerical examples to test the performance of some nonsingular preconditioners in combination with the preconditioned restarted GMRES(50) method for solving singular saddle point problems of the form (1.1). All experiments were run in MATLAB R2013b on a PC with an Intel(R) Core(TM) i5-5257U CPU @ 2.70 GHz and 8.00 GB RAM. For each example, we report the number of iterations, the CPU time and the relative residual, denoted by “Iter”, “CPU” and “RES”, respectively. Let z_k be the k-th approximate solution of the saddle point problem (1.1); then “RES” is defined as the relative residual

\[
\mathrm{RES} := \frac{\|b-\mathcal{A}z_k\|_2}{\|b\|_2}.
\]

The initial guess z_0 is always set to 0, and the iteration is terminated when RES ≤ 10^{-6}.
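For reference, a restarted GMRES iteration with exactly this stopping rule can be sketched in a few lines of NumPy. This is a minimal illustration only (the experiments themselves use MATLAB; the helper name `gmres_restarted` and the breakdown tolerance are ours):

```python
import numpy as np

def gmres_restarted(apply_A, b, restart=50, tol=1e-6, max_restarts=100):
    """Minimal restarted GMRES: stop when ||b - A z_k||_2 / ||b||_2 <= tol."""
    n = b.size
    x = np.zeros(n)                        # initial guess z_0 = 0
    bnorm = np.linalg.norm(b)
    for _ in range(max_restarts):
        r = b - apply_A(x)
        beta = np.linalg.norm(r)
        if beta / bnorm <= tol:            # the RES stopping criterion
            break
        m = restart
        Q = np.zeros((n, m + 1))
        H = np.zeros((m + 1, m))
        Q[:, 0] = r / beta
        for j in range(m):                 # Arnoldi process
            w = apply_A(Q[:, j])
            for i in range(j + 1):
                H[i, j] = Q[:, i] @ w
                w -= H[i, j] * Q[:, i]
            H[j + 1, j] = np.linalg.norm(w)
            if H[j + 1, j] < 1e-14:        # (happy) breakdown: truncate the basis
                m = j + 1
                break
            Q[:, j + 1] = w / H[j + 1, j]
        # Least-squares problem min ||beta*e_1 - H y|| for the correction
        e1 = np.zeros(m + 1)
        e1[0] = beta
        y, *_ = np.linalg.lstsq(H[: m + 1, :m], e1, rcond=None)
        x = x + Q[:, :m] @ y
    return x
```

For the singular systems considered here, GMRES is known to determine a solution without breakdown (cf. [12]); the sketch measures the unpreconditioned residual, matching the definition of RES above.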

Let L = ichol(A) be the incomplete Cholesky factor of the matrix A with droptol = 1e-02, and set P_A = LL^T. Then it is easy to see that

\[
\mathcal{K} =
\begin{pmatrix} L & 0 \\ -BL^{-T} & I_m \end{pmatrix}
\begin{pmatrix} L^T & L^{-1}B^T \\ 0 & P_C + BL^{-T}L^{-1}B^T \end{pmatrix}.
\]

Since L is a sparse lower triangular matrix, each preconditioning step only requires solving a few sparse triangular systems. In our computations, we compare the performance of the restarted GMRES(50) method, denoted by GMRES, with that of the preconditioned restarted GMRES(50) method coupled with the nonsingular preconditioner 𝒦 defined in (4.1) with P_C = I_m/10, I_m and 10I_m, denoted by I_m/10, I_m and 10I_m, respectively.
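Given this factorization, one application of 𝒦^{-1} amounts to two triangular solves with L, two with L^T, and one solve with the Schur complement P_C + BL^{-T}L^{-1}B^T. The dense SciPy sketch below illustrates the apply, assuming 𝒦 has the block form [[LL^T, B^T], [-B, P_C]] implied by the product above; in practice L would be kept sparse and the Schur solve could itself be approximated:

```python
import numpy as np
from scipy.linalg import solve_triangular, cho_factor, cho_solve

def make_K_apply_inverse(L, B, P_C):
    """Return a function applying K^{-1}, K = [[L L^T, B^T], [-B, P_C]],
    using the block triangular factorization of K."""
    W = solve_triangular(L, B.T, lower=True)   # W = L^{-1} B^T
    S = P_C + W.T @ W                          # Schur complement P_C + B P_A^{-1} B^T
    S_fac = cho_factor(S)                      # S is SPD: factor it once

    def apply(r1, r2):
        y1 = solve_triangular(L, r1, lower=True)                      # L y1 = r1
        y2 = r2 + B @ solve_triangular(L, y1, lower=True, trans='T')  # y2 = r2 + B L^{-T} y1
        z2 = cho_solve(S_fac, y2)                                     # S z2 = y2
        z1 = solve_triangular(L, y1 - W @ z2, lower=True, trans='T')  # L^T z1 = y1 - W z2
        return z1, z2

    return apply
```

Multiplying the two block factors confirms that this two-stage solve returns 𝒦^{-1}r exactly for the assumed block form.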

Example 6.1.

This example arises from the discretization of the Stokes problem (see [37, Example 4.1]), where the block matrices in the saddle point system (1.1) take the form

\[
A = \begin{pmatrix} I\otimes T + T\otimes I & 0 \\ 0 & I\otimes T + T\otimes I \end{pmatrix} \in \mathbb{R}^{2l^2\times 2l^2},
\quad
B = (E,\,b_1,\,b_2)^T \in \mathbb{R}^{(l^2+2)\times 2l^2},
\quad
C = 0 \in \mathbb{R}^{(l^2+2)\times(l^2+2)},
\]

where the matrices T and E and the vectors b_1 and b_2 are given by

\[
T = \frac{1}{h^2}\operatorname{tridiag}(-1,2,-1) \in \mathbb{R}^{l\times l},
\quad
E = \begin{pmatrix} I\otimes F \\ F\otimes I \end{pmatrix} \in \mathbb{R}^{2l^2\times l^2},
\quad
b_1 = E\begin{pmatrix} e \\ 0 \end{pmatrix},
\quad
b_2 = E\begin{pmatrix} 0 \\ e \end{pmatrix},
\quad
e = (1,1,\dots,1)^T \in \mathbb{R}^{l^2/2},
\]

with F = \frac{1}{h}\operatorname{tridiag}(-1,1,0) \in \mathbb{R}^{l\times l}, h = 1/(l+1), p = 2l^2, q = l^2+2 and n = p+q = 3l^2+2. As the matrix B is not of full row rank, the saddle point matrix 𝒜 in (1.2) is singular.
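These blocks are straightforward to assemble with Kronecker products. The dense NumPy sketch below (our own helper, for small even l) also confirms the rank deficiency of B: since b_1 and b_2 lie in the column space of E, rank[B] ≤ l² < l²+2:

```python
import numpy as np

def example61_blocks(l):
    """Assemble the A and B blocks of Example 6.1 (dense, small even l)."""
    h = 1.0 / (l + 1)
    I = np.eye(l)
    T = (2 * np.eye(l) - np.eye(l, k=1) - np.eye(l, k=-1)) / h**2  # tridiag(-1,2,-1)/h^2
    F = (np.eye(l) - np.eye(l, k=-1)) / h                          # tridiag(-1,1,0)/h
    A1 = np.kron(I, T) + np.kron(T, I)
    A = np.block([[A1, np.zeros_like(A1)], [np.zeros_like(A1), A1]])
    E = np.vstack([np.kron(I, F), np.kron(F, I)])                  # 2l^2 x l^2
    e = np.ones(l * l // 2)
    b1 = E @ np.concatenate([e, np.zeros_like(e)])
    b2 = E @ np.concatenate([np.zeros_like(e), e])
    B = np.vstack([E.T, b1, b2])                                   # (l^2 + 2) x 2l^2
    return A, B

A, B = example61_blocks(4)
q = B.shape[0]                      # q = l^2 + 2 = 18
# b1, b2 are combinations of columns of E, so rank(B) = rank(E) = l^2 < q
print(np.linalg.matrix_rank(B), q)
```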

In this example, we choose the right-hand-side vector of the saddle point problem (1.1) so that its exact solution is the vector (1,1,…,1)^T ∈ ℝ^{n+m}. The numerical results for Example 6.1 are listed in Table 1.

Table 1

Numerical results for Example 6.1.

Example 6.2 ([20]).

Consider the following Stokes system:

\[
-\Delta u + \nabla p = f \quad \text{in } \Omega, \tag{6.1}
\]
\[
\nabla\cdot u = 0 \quad \text{in } \Omega, \tag{6.2}
\]

where Ω is the square domain (-1,1)×(-1,1) in ℝ², the vector field u represents the velocity in Ω, p represents the pressure, and the scalar ν > 0 is the kinematic viscosity. A Dirichlet no-flow condition is applied on the side and bottom boundaries, and a non-zero horizontal velocity is imposed on the lid, namely u_x = 1 on [-1,1]×{1}.

This example is a model of the flow in a square cavity with the lid moving from left to right. With this non-zero horizontal velocity, the test problem is a leaky two-dimensional lid-driven cavity problem on the square domain. We discretize the Stokes system (6.1)–(6.2) by the Q2-P1 and Q2-Q1 finite elements on uniform grids and apply the IFISS software package developed by Elman, Ramage, and Silvester [20] to generate the discretized linear systems for meshes of size 16×16, 32×32, 64×64 and 128×128. The resulting numerical results are listed in Tables 2 and 3.

Table 2

Numerical results for Example 6.2 discretized by the Q2-P1 finite elements.

Table 3

Numerical results for Example 6.2 discretized by the Q2-Q1 finite elements.

Example 6.3 ([20]).

This example still considers the Stokes system (6.1)–(6.2) as in Example 6.2, but now the non-zero horizontal velocity is set on the top part of the domain, namely u_x = 1 on (-1,1)×{1}. This test problem describes a watertight cavity.

In this example, we again discretize the Stokes system (6.1)–(6.2) by the Q2-P1 and Q2-Q1 finite elements on uniform grids and apply the IFISS software package [20] to generate the discretized linear systems for meshes of size 16×16, 32×32, 64×64 and 128×128. The resulting numerical results are listed in Tables 4 and 5.

Table 4

Numerical results for Example 6.3 discretized by the Q2-P1 finite elements.

Table 5

Numerical results for Example 6.3 discretized by the Q2-Q1 finite elements.

Example 6.4 ([20]).

This example still considers the Stokes system (6.1)–(6.2) as in Example 6.2, but with the non-zero horizontal velocity on the top part of the domain given by u_x = 1 - x⁴ on [-1,1]×{1}. This test problem is a regularized cavity.

In this example, we again discretize the Stokes system (6.1)–(6.2) by the Q2-P1 and Q2-Q1 finite elements on uniform grids and apply the IFISS software package [20] to generate the discretized linear systems for meshes of size 16×16, 32×32, 64×64 and 128×128. The resulting numerical results are listed in Tables 6 and 7.

Table 6

Numerical results for Example 6.4 discretized by the Q2-P1 finite elements.

Table 7

Numerical results for Example 6.4 discretized by the Q2-Q1 finite elements.

We may observe from the numerical results in Tables 1–7 that our new preconditioner 𝒦 in (4.1) is effective and stable. In our computations, we set the symmetric positive definite matrix P_C to be a scalar matrix, i.e., P_C = αI_m with α = 1/10, 1, 10. Comparing the numerical results of the three cases, we find that the performance of the preconditioner 𝒦 is not sensitive to the choice of the parameter α, or even of the matrix P_C. Furthermore, as the problem dimension increases, the advantage in CPU time of the preconditioned GMRES(50) coupled with 𝒦 becomes increasingly prominent.

References

  • [1]

    O. Axelsson, Unified analysis of preconditioning methods for saddle point matrices, Numer. Linear Algebra Appl. 22 (2015), 233–253.

  • [2]

    O. Axelsson and M. G. Neytcheva, Eigenvalue estimates for preconditioned saddle point matrices, Numer. Linear Algebra Appl. 13 (2006), 339–360.

  • [3]

    Z.-Z. Bai, M. K. Ng and Z.-Q. Wang, Constraint preconditioners for symmetric indefinite matrices, SIAM J. Matrix Anal. Appl. 31 (2009), 410–433.

  • [4]

    R. E. Bank, B. D. Welfert and H. Yserentant, A class of iterative methods for solving saddle point problems, Numer. Math. 56 (1990), 645–666.

  • [5]

    M. Benzi and G. H. Golub, A preconditioner for generalized saddle point problems, SIAM J. Matrix Anal. Appl. 26 (2004), 20–41.

  • [6]

    M. Benzi, G. H. Golub and J. Liesen, Numerical solution of saddle point problems, Acta Numer. 14 (2005), 1–137.

  • [7]

    M. Benzi and V. Simoncini, On the eigenvalues of a class of saddle point matrices, Numer. Math. 103 (2006), 173–196.

  • [8]

    L. Bergamaschi, On eigenvalue distribution of constraint-preconditioned symmetric saddle point matrices, Numer. Linear Algebra Appl. 19 (2012), 754–772.

  • [9]

    S. Beuchler, T. Eibner and U. Langer, Primal and dual interface concentrated iterative substructuring methods, SIAM J. Numer. Anal. 46 (2008), 2818–2842.

  • [10]

    A. Björck, Numerical Methods for Least Squares Problems, SIAM, Philadelphia, 1996.

  • [11]

    F. Brezzi and M. Fortin, Mixed and Hybrid Finite Element Methods, Springer, New York, 1991.

  • [12]

    P. N. Brown and H. F. Walker, GMRES on (nearly) singular systems, SIAM J. Matrix Anal. Appl. 18 (1997), 37–51.

  • [13]

    Z.-H. Cao, Positive stable block triangular preconditioners for symmetric saddle point problems, Appl. Numer. Math. 57 (2007), 899–910.

  • [14]

    C. Carstensen, M. Kuhn and U. Langer, Fast parallel solvers for symmetric boundary element domain decomposition equations, Numer. Math. 79 (1998), 321–347.

  • [15]

    C.-R. Chen and C.-F. Ma, A generalized shift-splitting preconditioner for saddle point problems, Appl. Math. Lett. 43 (2015), 49–55.

  • [16]

    J. Chen, Y. Xu and J. Zou, An adaptive edge element method and its convergence for a saddle-point problem from magnetostatics, Numer. Methods Partial Differential Equations 28 (2012), 1643–1666.

  • [17]

    X. Chen, On preconditioned Uzawa methods and SOR methods for saddle-point problems, J. Comput. Appl. Math. 100 (1998), 207–224.

  • [18]

    P. Ciarlet, Jr., J. Huang and J. Zou, Some observations on generalized saddle-point problems, SIAM J. Matrix Anal. Appl. 25 (2003), 224–236.

  • [19]

    C. R. Dohrmann and R. B. Lehoucq, A primal-based penalty preconditioner for elliptic saddle point systems, SIAM J. Numer. Anal. 44 (2006), 270–282.

  • [20]

    H. C. Elman, A. Ramage and D. J. Silvester, Algorithm 866: IFISS, a MATLAB toolbox for modelling incompressible flow, ACM Trans. Math. Software 33 (2007), 1–18.

  • [21]

    H. C. Elman, D. J. Silvester and A. J. Wathen, Performance and analysis of saddle point preconditioners for the discrete steady-state Navier–Stokes equations, Numer. Math. 90 (2002), 665–688.

  • [22]

    V. Girault and P. A. Raviart, Finite Element Approximation of the Navier–Stokes Equations, Springer, New York, 1979.

  • [23]

    R. Glowinski, Numerical Methods for Nonlinear Variational Problems, Springer, New York, 1984.

  • [24]

    N. I. M. Gould and V. Simoncini, Spectral analysis of saddle point matrices with indefinite leading blocks, SIAM J. Matrix Anal. Appl. 31 (2009), 1152–1171.

  • [25]

    Q. Hu and J. Zou, An iterative method with variable relaxation parameters for saddle-point problems, SIAM J. Matrix Anal. Appl. 23 (2001), 317–338.

  • [26]

    Q. Hu and J. Zou, Two new variants of nonlinear inexact Uzawa algorithms for saddle-point problems, Numer. Math. 93 (2002), 333–359.

  • [27]

    Q. Hu and J. Zou, Nonlinear inexact Uzawa algorithms for linear and nonlinear saddle-point problems, SIAM J. Optim. 16 (2006), 798–825.

  • [28]

    C. Keller, N. I. M. Gould and A. J. Wathen, Constraint preconditioning for indefinite linear systems, SIAM J. Matrix Anal. Appl. 21 (2000), 1300–1317.

  • [29]

    I. Perugia and V. Simoncini, Block-diagonal and indefinite symmetric preconditioners for mixed finite element formulations, Numer. Linear Algebra Appl. 7 (2000), 585–616.

  • [30]

    T. Rusten and R. Winther, A preconditioned iterative method for saddle point problems, SIAM J. Matrix Anal. Appl. 13 (1992), 887–904.

  • [31]

    S.-Q. Shen, T.-Z. Huang and J. Yu, Eigenvalue estimates for preconditioned nonsymmetric saddle point matrices, SIAM J. Matrix Anal. Appl. 31 (2010), 2453–2476.

  • [32]

    S.-Q. Shen, L. Jian, W.-D. Bao and T.-Z. Huang, On the eigenvalue distribution of preconditioned nonsymmetric saddle point matrices, Numer. Linear Algebra Appl. 21 (2014), 557–568.

  • [33]

    D. Silvester, H. Elman, D. Kay and A. Wathen, Efficient preconditioning of the linearized Navier–Stokes equations for incompressible flow, J. Comput. Appl. Math. 128 (2001), 261–279.

  • [34]

    D. J. Silvester and A. J. Wathen, Fast iterative solution of stabilised Stokes systems, Part II: Using general block preconditioners, SIAM J. Numer. Anal. 31 (1994), 1352–1367.

  • [35]

    J.-L. Zhang, C.-Q. Gu and K. Zhang, A relaxed positive-definite and skew-Hermitian splitting preconditioner for saddle point problems, Appl. Math. Comput. 249 (2014), 468–479.

  • [36]

    N.-M. Zhang and P. Shen, Constraint preconditioners for solving singular saddle point problems, J. Comput. Appl. Math. 238 (2013), 116–125.

  • [37]

    B. Zheng, Z.-Z. Bai and X. Yang, On semi-convergence of parameterized Uzawa methods for singular saddle point problems, Linear Algebra Appl. 431 (2009), 808–817.

About the article

Received: 2017-03-30

Revised: 2017-04-01

Accepted: 2017-04-04

Published Online: 2017-05-31

Published in Print: 2018-04-01


Funding Source: National Natural Science Foundation of China

Award identifier / Grant number: 11071041

Award identifier / Grant number: N_CUHK437/16

The work of the first author was supported by National Postdoctoral Program for Innovative Talents (Grant No. BX201600182), China Postdoctoral Science Foundation (Grant No. 2016M600141) and partially by a Direct Grant for Research from CUHK. The work of the second author was supported by National Natural Science Foundation of China (Grant No. 11071041), Fujian Natural Science Foundation (Grant No. 2016J01005). The work of the third author was substantially supported by Hong Kong RGC grant (project 14306814) and National Natural Science Foundation (NSFC)/Research Grants Council (RGC) Joint Research Scheme 2016/17 (project N_CUHK437/16).


Citation Information: Computational Methods in Applied Mathematics, Volume 18, Issue 2, Pages 237–256, ISSN (Online) 1609-9389, ISSN (Print) 1609-4840, DOI: https://doi.org/10.1515/cmam-2017-0006.

© 2018 Walter de Gruyter GmbH, Berlin/Boston.
