
# Open Mathematics

### formerly Central European Journal of Mathematics

Editor-in-Chief: Gianazza, Ugo / Vespri, Vincenzo


Open Access. Online ISSN: 2391-5455.
Volume 16, Issue 1

# A relaxed block splitting preconditioner for complex symmetric indefinite linear systems

Yunying Huang
• School of Mathematical Sciences, Shanghai Key Laboratory of Pure Mathematics and Mathematical Practice, East China Normal University, Dongchuan Rd. 500, Shanghai 200241, China

Guoliang Chen (corresponding author)
• School of Mathematical Sciences, Shanghai Key Laboratory of Pure Mathematics and Mathematical Practice, East China Normal University, Dongchuan Rd. 500, Shanghai 200241, China
Published Online: 2018-06-07 | DOI: https://doi.org/10.1515/math-2018-0051

## Abstract

In this paper, we propose a relaxed block splitting preconditioner for a class of complex symmetric indefinite linear systems in order to accelerate the convergence rate of Krylov subspace iteration methods; the relaxed preconditioner is much closer to the original block two-by-two coefficient matrix. We study the spectral properties and the eigenvector distribution of the corresponding preconditioned matrix. In addition, the degree of the minimal polynomial of the preconditioned matrix is derived. Finally, numerical experiments are presented to illustrate the effectiveness of the relaxed splitting preconditioner.

MSC 2010: 65F10; 65F50

## 1 Introduction

Consider the following large sparse nonsingular complex symmetric linear system

$Au=b,\qquad A\in\mathbb{C}^{n\times n},\quad u,\,b\in\mathbb{C}^{n},$ (1)

where A = W + iT, with W, T ∈ ℝn×n symmetric matrices and $\mathrm{i}=\sqrt{-1}$ the imaginary unit. Linear systems of the form (1) arise in many areas of scientific computing and engineering applications, such as the FFT-based solution of certain time-dependent PDEs [1], distributed control problems [2], diffuse optical tomography [3], PDE-constrained optimization problems [4], quantum mechanics [5] and so on; see [6,7,8] and the references therein for further applications.

Let u = x + iy and b = f + ig with x, y, f, g ∈ ℝn. Then the complex linear system (1) can be rewritten equivalently as the real block two-by-two linear system

$\tilde{\mathcal{A}}\tilde{u}=\begin{pmatrix} W & -T \\ T & W \end{pmatrix}\begin{pmatrix} x \\ y \end{pmatrix}=\begin{pmatrix} f \\ g \end{pmatrix}=\tilde{b},$ (2)

or as the equivalent two-by-two block real formulation

$\mathcal{A}u=\begin{pmatrix} T & -W \\ W & T \end{pmatrix}\begin{pmatrix} x \\ -y \end{pmatrix}=\begin{pmatrix} g \\ f \end{pmatrix}=b.$ (3)

It can be seen that the linear systems (2) and (3) both avoid complex arithmetic, but their coefficient matrices are doubled in size. Because iterative methods are more attractive than direct methods for solving (1) or (2), many efficient iterative methods, together with their numerical properties, have been studied in the literature; see [9,10]. When both W and T are symmetric positive semi-definite with at least one of them positive definite, several efficient iterative methods have been proposed. Recently, based on the Hermitian and skew-Hermitian splitting (HSS) method [11], Bai et al. [12] developed the modified HSS (MHSS) iteration method for the complex linear system (1), which is convergent for any positive constant α. To accelerate the convergence of the MHSS iteration method, Bai et al. [13] introduced the preconditioned MHSS (PMHSS) method, which is unconditionally convergent and shows h-independent convergence behavior. In [14], Bai further analyzed algebraic and convergence properties of the PMHSS iteration method for solving complex linear systems. Moreover, there are also other effective iterative methods, such as the lopsided PMHSS (LPMHSS) iteration method, the accelerated PMHSS (APMHSS) iteration method, the preconditioned generalized SOR (PGSOR) iterative method, the skew-normal splitting (SNS) method and the double-step scale splitting (DSS) iteration method [15,16,17,18,19].

If the matrices W and T are symmetric positive semi-definite, some efficient preconditioners have also been presented. Bai [20] proposed the rotated block triangular (RBT) preconditioners, which are based on the PMHSS preconditioning matrix [21], for the linear system (2). Lang and Ren [22] established the inexact RBT (IRBT) preconditioners. For the linear system (2), Liang and Zhang [23] developed the symmetric SOR (SSOR) method.

However, when W ∈ ℝn×n is symmetric indefinite and T ∈ ℝn×n is symmetric positive definite, the matrices αI + W, αV + W and αW + T2 are indefinite or singular, which may lead to slow convergence of the MHSS, PMHSS and SNS methods. To overcome these problems, Wu [24] introduced the Hermitian normal splitting (HNS) method and its variant, the simplified HNS (SHNS) method, for the complex symmetric linear system (1). Zhang and Dai [25] presented a preconditioned SHNS (PSHNS) iteration method and constructed a new preconditioner. In this paper, we solve the linear system (3) with W symmetric indefinite and T symmetric positive definite. For the real formulation (3), Zhang and Dai [26] established a new block preconditioner and analyzed the spectral properties of the corresponding preconditioned matrix. An improved block (IB) splitting preconditioner was proposed by Zhang et al. [27], who also gave the convergence properties of the corresponding iteration method under suitable conditions.

In this paper, by using a relaxation technique, we construct a relaxed block splitting preconditioner for the real block two-by-two linear system (3). The relaxed splitting preconditioner is much closer to the original coefficient matrix than the block preconditioner in [26]. We analyze the spectral properties of the corresponding preconditioned matrix and show that it has the eigenvalue 1 with algebraic multiplicity at least n. The structure of the eigenvectors and the dimension of the Krylov subspace for the preconditioned matrix are also derived.

The organization of the paper is as follows. In Section 2, a relaxed block splitting preconditioner is presented. In Section 3, we study the distribution of eigenvalues, the form of the eigenvectors and the dimension of the Krylov subspace of the corresponding preconditioned matrix. In Section 4, some numerical examples are presented to compare the new preconditioner with the PSHNS preconditioner and the HSS preconditioner. Finally, we give some brief concluding remarks in Section 5.

## 2 A new relaxed splitting preconditioner

In this section, by employing the relaxation techniques, we establish a relaxed block splitting preconditioner for solving the linear system (3).

Pan et al. [28] employed the positive-definite and skew-Hermitian splitting (PSS) iteration method in [29] to develop a PSS preconditioner for saddle point problems. For the linear system (3), the coefficient matrix 𝓐 can be split as follows

$\mathcal{A}=\mathcal{J}+\mathcal{K},$ (4)

where $\mathcal{J}=\begin{pmatrix} T & 0 \\ 0 & 0 \end{pmatrix}$ and $\mathcal{K}=\begin{pmatrix} 0 & -W \\ W & T \end{pmatrix}$. Recently, based on the PSS preconditioner, Zhang and Dai [26] presented the preconditioner

$\mathcal{P}_1=\frac{1}{\alpha}(\alpha I+\mathcal{K})(\alpha I+\mathcal{J})=\frac{1}{\alpha}\begin{pmatrix} \alpha I & -W \\ W & \alpha I+T \end{pmatrix}\begin{pmatrix} \alpha I+T & 0 \\ 0 & \alpha I \end{pmatrix}=\begin{pmatrix} \alpha I+T & -W \\ W(I+\frac{1}{\alpha}T) & \alpha I+T \end{pmatrix}.$ (5)

Then the difference between the preconditioner 𝓟1 and the coefficient matrix 𝓐 is given as follows

$\mathcal{R}_1=\mathcal{P}_1-\mathcal{A}=\begin{pmatrix} \alpha I & 0 \\ \frac{1}{\alpha}WT & \alpha I \end{pmatrix}.$ (6)

A general criterion for an efficient preconditioner is that it should be as close as possible to the coefficient matrix 𝓐, so that the preconditioned matrix has a clustered spectrum, while remaining easy to implement. Inspired by the relaxation techniques in [30,31,32,33], we propose the relaxed splitting preconditioner

$\mathcal{P}_2=\frac{1}{\alpha}\begin{pmatrix} \alpha I & -W \\ W & T \end{pmatrix}\begin{pmatrix} T & 0 \\ 0 & \alpha I \end{pmatrix}=\begin{pmatrix} T & -W \\ \frac{1}{\alpha}WT & T \end{pmatrix}.$ (7)

From (3) and (7), we get the difference between the preconditioner 𝓟2 and the coefficient matrix 𝓐

$\mathcal{R}_2=\mathcal{P}_2-\mathcal{A}=\begin{pmatrix} 0 & 0 \\ \frac{1}{\alpha}WT-W & 0 \end{pmatrix}.$ (8)

It is easy to see that the (2,1)-block in (8) differs from that in (6), and that the (1,1)- and (2,2)-blocks in (8) vanish, which indicates that the preconditioner 𝓟2 is much closer to the coefficient matrix 𝓐 than the preconditioner 𝓟1.

From (8), we know that the preconditioner 𝓟2 can be established by the following splitting of the coefficient matrix 𝓐

$\mathcal{A}=\mathcal{P}_2-\mathcal{R}_2=\begin{pmatrix} T & -W \\ \frac{1}{\alpha}WT & T \end{pmatrix}-\begin{pmatrix} 0 & 0 \\ \frac{1}{\alpha}WT-W & 0 \end{pmatrix}.$ (9)

In the following section, we analyze the spectral properties of the preconditioned matrix $\mathcal{P}_2^{-1}\mathcal{A}$.

## 3 Spectral analysis of the preconditioned matrix

In this section, we mainly study the spectral properties of the preconditioned matrix $\mathcal{P}_2^{-1}\mathcal{A}$ and give an upper bound on the dimension of the Krylov subspace for this preconditioned matrix. To this end, we first need the inverse of the matrix 𝓟2; its explicit form is given in the following lemma, whose proof consists of straightforward calculations and is omitted here.

#### Lemma 3.1

Let

$\hat{\mathcal{P}}=\begin{pmatrix} \alpha I & -W \\ W & T \end{pmatrix}\quad\text{and}\quad\check{\mathcal{P}}=\begin{pmatrix} T & 0 \\ 0 & \alpha I \end{pmatrix},\qquad\text{so that }\mathcal{P}_2=\frac{1}{\alpha}\hat{\mathcal{P}}\check{\mathcal{P}}.$

Then $\hat{\mathcal{P}}$ admits the block-triangular factorization

$\hat{\mathcal{P}}=\begin{pmatrix} I & 0 \\ \frac{1}{\alpha}W & I \end{pmatrix}\begin{pmatrix} \alpha I & 0 \\ 0 & T+\frac{1}{\alpha}W^2 \end{pmatrix}\begin{pmatrix} I & -\frac{1}{\alpha}W \\ 0 & I \end{pmatrix},$

and we obtain

$\mathcal{P}_2^{-1}=\alpha\,\check{\mathcal{P}}^{-1}\hat{\mathcal{P}}^{-1}=\begin{pmatrix} T^{-1}-\frac{1}{\alpha}T^{-1}W\big(T+\frac{1}{\alpha}W^2\big)^{-1}W & T^{-1}W\big(T+\frac{1}{\alpha}W^2\big)^{-1} \\ -\frac{1}{\alpha}\big(T+\frac{1}{\alpha}W^2\big)^{-1}W & \big(T+\frac{1}{\alpha}W^2\big)^{-1} \end{pmatrix}.$ (10)

In the following, we analyze the eigenvalue distribution of the preconditioned matrix $\mathcal{P}_2^{-1}\mathcal{A}$.

#### Theorem 3.2

Assume that the coefficient matrix 𝓐 is nonsingular, W ∈ ℝn×n is symmetric indefinite and T ∈ ℝn×n is symmetric positive definite. Let α be a real positive constant and U = WT−1W, and let the preconditioner 𝓟2 be defined as in (7). Then the preconditioned matrix $\mathcal{P}_2^{-1}\mathcal{A}$ has the eigenvalue 1 with multiplicity at least n, and the remaining eigenvalues are λi, i = 1, 2, ⋯, n, where the λi are the eigenvalues of the matrix αT−1(αI + U)−1(T + U).

#### Proof

From (9) and Lemma 3.1, we have

$\mathcal{P}_2^{-1}\mathcal{A}=I-\mathcal{P}_2^{-1}\mathcal{R}_2=I-\begin{pmatrix} T^{-1}W\big(T+\frac{1}{\alpha}W^2\big)^{-1}W\big(\frac{1}{\alpha}T-I\big) & 0 \\ \big(T+\frac{1}{\alpha}W^2\big)^{-1}W\big(\frac{1}{\alpha}T-I\big) & 0 \end{pmatrix}=\begin{pmatrix} I-T^{-1}W\big(T+\frac{1}{\alpha}W^2\big)^{-1}W\big(\frac{1}{\alpha}T-I\big) & 0 \\ -\big(T+\frac{1}{\alpha}W^2\big)^{-1}W\big(\frac{1}{\alpha}T-I\big) & I \end{pmatrix}.$

It follows from U = WT−1W that

$\begin{aligned} I-T^{-1}W\big(T+\tfrac{1}{\alpha}W^2\big)^{-1}W\big(\tfrac{1}{\alpha}T-I\big) &= T^{-1}\Big(T-W\big(T+\tfrac{1}{\alpha}W^2\big)^{-1}W\big(\tfrac{1}{\alpha}T-I\big)\Big)\\ &= T^{-1}\Big(T-W\big(W(W^{-1}TW^{-1}+\tfrac{1}{\alpha}I)W\big)^{-1}W\big(\tfrac{1}{\alpha}T-I\big)\Big)\\ &= T^{-1}\Big(T-\big(\tfrac{1}{\alpha}I+W^{-1}TW^{-1}\big)^{-1}\big(\tfrac{1}{\alpha}T-I\big)\Big)\\ &= T^{-1}\Big(T-\big(\tfrac{1}{\alpha}U+I\big)^{-1}U\big(\tfrac{1}{\alpha}T-I\big)\Big)\\ &= \alpha T^{-1}(\alpha I+U)^{-1}(T+U). \end{aligned}$

Hence, we get

$\mathcal{P}_2^{-1}\mathcal{A}=\begin{pmatrix} \alpha T^{-1}(\alpha I+U)^{-1}(T+U) & 0 \\ -\big(T+\frac{1}{\alpha}W^2\big)^{-1}W\big(\frac{1}{\alpha}T-I\big) & I \end{pmatrix}.$ (11)

The results then follow directly from (11). □

The detailed implementation of the preconditioning process is as follows. When the preconditioner 𝓟2 is applied within a Krylov subspace method, the following system has to be solved at each step:

$\mathcal{P}_2 z=\frac{1}{\alpha}\begin{pmatrix} \alpha I & -W \\ W & T \end{pmatrix}\begin{pmatrix} T & 0 \\ 0 & \alpha I \end{pmatrix}z=r,$ (12)

where $z=[z_1^{\mathrm{T}},\,z_2^{\mathrm{T}}]^{\mathrm{T}}$ and $r=[r_1^{\mathrm{T}},\,r_2^{\mathrm{T}}]^{\mathrm{T}}$ are the generalized residual vector and the current residual vector, respectively, with z1, z2, r1, r2 ∈ ℝn. Based on Lemma 3.1 and (12), we have

$\begin{pmatrix} z_1 \\ z_2 \end{pmatrix}=\alpha\begin{pmatrix} T^{-1} & 0 \\ 0 & \frac{1}{\alpha}I \end{pmatrix}\begin{pmatrix} I & \frac{1}{\alpha}W \\ 0 & I \end{pmatrix}\begin{pmatrix} \frac{1}{\alpha}I & 0 \\ 0 & \big(T+\frac{1}{\alpha}W^2\big)^{-1} \end{pmatrix}\begin{pmatrix} I & 0 \\ -\frac{1}{\alpha}W & I \end{pmatrix}\begin{pmatrix} r_1 \\ r_2 \end{pmatrix}.$ (13)

Then the implementation of the preconditioner 𝓟2 can be derived as follows.

#### Algorithm 3.3

(The preconditioner 𝓟2). For a given vector $r=[r_1^{\mathrm{T}},\,r_2^{\mathrm{T}}]^{\mathrm{T}}$, the vector $z=[z_1^{\mathrm{T}},\,z_2^{\mathrm{T}}]^{\mathrm{T}}$ can be computed by (13) through the following steps:

1. Compute $t_1=r_2-\frac{1}{\alpha}Wr_1$;

2. Solve $\big(T+\frac{1}{\alpha}W^2\big)z_2=t_1$;

3. Compute $t_2=r_1+Wz_2$;

4. Solve $Tz_1=t_2$;

5. Set the generalized residual vector $z=[z_1^{\mathrm{T}},\,z_2^{\mathrm{T}}]^{\mathrm{T}}$.

It can be seen from Algorithm 3.3 that we need to solve two sub-linear systems with coefficient matrices $T+\frac{1}{\alpha}W^2$ and T at each iteration. Under the assumptions on W and T in Section 1, both $T+\frac{1}{\alpha}W^2$ and T are symmetric positive definite. Hence, the two sub-systems $\big(T+\frac{1}{\alpha}W^2\big)z_2=t_1$ and $Tz_1=t_2$ can be solved by the sparse Cholesky factorization, the conjugate gradient (CG) method or a preconditioned CG method. To demonstrate the effectiveness of the preconditioner 𝓟2, we compare it with the PSHNS preconditioner and the HSS preconditioner, which are given as follows:

$\mathcal{P}_{\mathrm{PSHNS}}=\frac{1}{2\alpha}(\alpha V+\mathrm{i}W)W^{-1}(\alpha T+WV^{-1}W) \quad\text{and}\quad \mathcal{P}_{\mathrm{HSS}}=\frac{1}{\alpha}\begin{pmatrix} \alpha I+T & 0 \\ 0 & \alpha I+T \end{pmatrix}\begin{pmatrix} \alpha I & -W \\ W & \alpha I \end{pmatrix},$

where V is the symmetric positive definite matrix used in the PSHNS method [25].

The implementation of the PSHNS preconditioner and of the HSS preconditioner within GMRES is described in the following two algorithms, respectively.

#### Algorithm 3.4

(The PSHNS Preconditioner). For a given vector r, the vector z can be computed from the following steps:

1. Solve $(\alpha V+\mathrm{i}W)d=2\alpha r$;

2. Solve $(\alpha T+WV^{-1}W)z=Wd$.

#### Algorithm 3.5

(The HSS Preconditioner). For a given vector $r=[r_1^{\mathrm{T}},\,r_2^{\mathrm{T}}]^{\mathrm{T}}$, the vector $z=[z_1^{\mathrm{T}},\,z_2^{\mathrm{T}}]^{\mathrm{T}}$ can be computed through the following steps:

1. Solve $(\alpha I+T)\nu_1=r_1$;

2. Solve $(\alpha I+T)\nu_2=r_2$;

3. Compute $\mu_1=\alpha\nu_2-W\nu_1$;

4. Solve $\big(\alpha I+\frac{1}{\alpha}W^2\big)z_2=\mu_1$;

5. Compute $z_1=\nu_1+\frac{1}{\alpha}Wz_2$;

6. Set the generalized residual vector $z=[z_1^{\mathrm{T}},\,z_2^{\mathrm{T}}]^{\mathrm{T}}$.

From Algorithm 3.4, we can see that each application of the PSHNS preconditioner involves the complex coefficient matrix αV + iW, whose corresponding linear subsystem may be hard to solve. Meanwhile, it can be seen from Algorithm 3.5 that the implementation of the HSS preconditioner requires the solution of three symmetric positive definite linear subsystems.

In the following, we discuss the eigenvector distribution of the preconditioned matrix $\mathcal{P}_2^{-1}\mathcal{A}$ in detail.

#### Theorem 3.6

Let the preconditioner 𝓟2 be defined as in (7). Then the preconditioned matrix $\mathcal{P}_2^{-1}\mathcal{A}$ has n + i + j (0 ≤ i + j ≤ n) linearly independent eigenvectors, which are described as follows:

1. n eigenvectors $\begin{pmatrix} 0 \\ v_k \end{pmatrix}$ (k = 1, 2, ⋯, n) corresponding to the eigenvalue 1, where the $v_k$ are arbitrary linearly independent vectors.

2. i (0 ≤ i ≤ n) eigenvectors $\begin{pmatrix} u_k^1 \\ v_k^1 \end{pmatrix}$ (1 ≤ k ≤ i) corresponding to the eigenvalue 1, where $u_k^1\neq 0$, $Tu_k^1=\alpha u_k^1$ and the $v_k^1$ are arbitrary vectors.

3. j (0 ≤ j ≤ n) eigenvectors $\begin{pmatrix} u_k^2 \\ v_k^2 \end{pmatrix}$ (1 ≤ k ≤ j) corresponding to eigenvalues $\lambda_k\neq 1$, where $\alpha(T+U)u_k^2=\lambda_k(\alpha I+U)Tu_k^2$, $u_k^2\neq 0$ and $v_k^2=\frac{1}{1-\lambda_k}\big(T+\frac{1}{\alpha}W^2\big)^{-1}\big(\frac{1}{\alpha}WT-W\big)u_k^2$.

#### Proof

Let λ be an eigenvalue of the preconditioned matrix $\mathcal{P}_2^{-1}\mathcal{A}$ and $\begin{pmatrix} u \\ v \end{pmatrix}$ a corresponding eigenvector. Then

$\begin{pmatrix} \alpha T^{-1}(\alpha I+U)^{-1}(T+U) & 0 \\ -\big(T+\frac{1}{\alpha}W^2\big)^{-1}W\big(\frac{1}{\alpha}T-I\big) & I \end{pmatrix}\begin{pmatrix} u \\ v \end{pmatrix}=\lambda\begin{pmatrix} u \\ v \end{pmatrix}.$

By simple calculation, we have

$\alpha(T+U)u=\lambda(\alpha I+U)Tu,\qquad \big(T+\tfrac{1}{\alpha}W^2\big)^{-1}W\big(\tfrac{1}{\alpha}T-I\big)u=(1-\lambda)v.$ (14)

If λ = 1, the equations in (14) become

$\alpha(T+U)u=(\alpha I+U)Tu,\qquad \big(T+\tfrac{1}{\alpha}W^2\big)^{-1}W\big(\tfrac{1}{\alpha}T-I\big)u=0.$ (15)

Then we have the following two situations.

1. When u = 0, the equations in (15) always hold. Hence, there are n linearly independent eigenvectors $\begin{pmatrix} 0 \\ v_k \end{pmatrix}$ (k = 1, 2, ⋯, n) corresponding to the eigenvalue 1, where the vk are arbitrary linearly independent vectors.

2. When u ≠ 0, a simple calculation from (15) yields Tu = αu. Then there are i (0 ≤ i ≤ n) linearly independent eigenvectors $\begin{pmatrix} u_k^1 \\ v_k^1 \end{pmatrix}$ (1 ≤ k ≤ i) corresponding to the eigenvalue 1, where $u_k^1\neq 0$, $Tu_k^1=\alpha u_k^1$ and the $v_k^1$ are arbitrary vectors.

Next, we consider the case λ ≠ 1. It can be seen from (14) that the results in case (3) hold.

Finally, we show that the n + i + j eigenvectors are linearly independent. Let $c=[c_1,c_2,\cdots,c_n]^{\mathrm{T}}$, $c^1=[c_1^1,c_2^1,\cdots,c_i^1]^{\mathrm{T}}$ and $c^2=[c_1^2,c_2^2,\cdots,c_j^2]^{\mathrm{T}}$ be three vectors with 0 ≤ i, j ≤ n. Then we have to show that

$\begin{pmatrix} 0 & \cdots & 0 \\ v_1 & \cdots & v_n \end{pmatrix}\begin{pmatrix} c_1 \\ \vdots \\ c_n \end{pmatrix}+\begin{pmatrix} u_1^1 & \cdots & u_i^1 \\ v_1^1 & \cdots & v_i^1 \end{pmatrix}\begin{pmatrix} c_1^1 \\ \vdots \\ c_i^1 \end{pmatrix}+\begin{pmatrix} u_1^2 & \cdots & u_j^2 \\ v_1^2 & \cdots & v_j^2 \end{pmatrix}\begin{pmatrix} c_1^2 \\ \vdots \\ c_j^2 \end{pmatrix}=\begin{pmatrix} 0 \\ \vdots \\ 0 \end{pmatrix}$ (16)

holds if and only if the vectors $c$, $c^1$ and $c^2$ are all zero, where the first matrix consists of the eigenvectors corresponding to the eigenvalue 1 in case (1), the second consists of those in case (2), and the third consists of the eigenvectors corresponding to λ ≠ 1 in case (3). Multiplying both sides of Eq. (16) on the left by the matrix $\mathcal{P}_2^{-1}\mathcal{A}$, we get

$\begin{pmatrix} 0 & \cdots & 0 \\ v_1 & \cdots & v_n \end{pmatrix}\begin{pmatrix} c_1 \\ \vdots \\ c_n \end{pmatrix}+\begin{pmatrix} u_1^1 & \cdots & u_i^1 \\ v_1^1 & \cdots & v_i^1 \end{pmatrix}\begin{pmatrix} c_1^1 \\ \vdots \\ c_i^1 \end{pmatrix}+\begin{pmatrix} u_1^2 & \cdots & u_j^2 \\ v_1^2 & \cdots & v_j^2 \end{pmatrix}\begin{pmatrix} \lambda_1 c_1^2 \\ \vdots \\ \lambda_j c_j^2 \end{pmatrix}=\begin{pmatrix} 0 \\ \vdots \\ 0 \end{pmatrix}.$ (17)

Subtracting (16) from (17), we obtain

$\begin{pmatrix} u_1^2 & \cdots & u_j^2 \\ v_1^2 & \cdots & v_j^2 \end{pmatrix}\begin{pmatrix} (\lambda_1-1)c_1^2 \\ \vdots \\ (\lambda_j-1)c_j^2 \end{pmatrix}=\begin{pmatrix} 0 \\ \vdots \\ 0 \end{pmatrix}.$

Because the eigenvalues λk ≠ 1 and the vectors $\begin{pmatrix} u_k^2 \\ v_k^2 \end{pmatrix}$ (k = 1, ⋯, j) are linearly independent, we know that $c_k^2=0$ (k = 1, ⋯, j). Thus, Eq. (16) reduces to

$\begin{pmatrix} 0 & \cdots & 0 \\ v_1 & \cdots & v_n \end{pmatrix}\begin{pmatrix} c_1 \\ \vdots \\ c_n \end{pmatrix}+\begin{pmatrix} u_1^1 & \cdots & u_i^1 \\ v_1^1 & \cdots & v_i^1 \end{pmatrix}\begin{pmatrix} c_1^1 \\ \vdots \\ c_i^1 \end{pmatrix}=\begin{pmatrix} 0 \\ \vdots \\ 0 \end{pmatrix}.$

As the vectors $u_k^1$ (k = 1, ⋯, i) are also linearly independent, we have $c_k^1=0$ (k = 1, ⋯, i). Hence, the above equation becomes

$\begin{pmatrix} 0 & \cdots & 0 \\ v_1 & \cdots & v_n \end{pmatrix}\begin{pmatrix} c_1 \\ \vdots \\ c_n \end{pmatrix}=\begin{pmatrix} 0 \\ \vdots \\ 0 \end{pmatrix}.$

Since the vk (k = 1, ⋯, n) are linearly independent, we have ck = 0 (k = 1, ⋯, n). Therefore, the n + i + j eigenvectors are linearly independent. The proof of this theorem is completed. □

Based on Krylov subspace theory, an iterative method with an optimality property (such as the GMRES method [34]) terminates as soon as the degree of the minimal polynomial of the iteration matrix is attained. Next, we give an upper bound on the degree of the minimal polynomial of the preconditioned matrix $\mathcal{P}_2^{-1}\mathcal{A}$.

#### Theorem 3.7

Assume that the conditions of Theorem 3.2 are satisfied and let the preconditioner 𝓟2 be defined as in (7). Then the degree of the minimal polynomial of the preconditioned matrix $\mathcal{P}_2^{-1}\mathcal{A}$ is at most n + 1. Thus, the dimension of the Krylov subspace 𝓚($\mathcal{P}_2^{-1}\mathcal{A}$, b) is at most n + 1.

#### Proof

From (11), we know that the preconditioned matrix can be expressed as

$\mathcal{P}_2^{-1}\mathcal{A}=\begin{pmatrix} \theta_1 & 0 \\ \theta_2 & I \end{pmatrix},$

where $\theta_1=\alpha T^{-1}(\alpha I+U)^{-1}(T+U)$ and $\theta_2=-\big(T+\frac{1}{\alpha}W^2\big)^{-1}W\big(\frac{1}{\alpha}T-I\big)$. Let μi (i = 1, ⋯, n) be the eigenvalues of the matrix θ1. The characteristic polynomial of the preconditioned matrix $\mathcal{P}_2^{-1}\mathcal{A}$ is

$\big(\mathcal{P}_2^{-1}\mathcal{A}-I\big)^n\prod_{i=1}^{n}\big(\mathcal{P}_2^{-1}\mathcal{A}-\mu_i I\big).$

Then

$\big(\mathcal{P}_2^{-1}\mathcal{A}-I\big)\prod_{i=1}^{n}\big(\mathcal{P}_2^{-1}\mathcal{A}-\mu_i I\big)=\begin{pmatrix} (\theta_1-I)\prod_{i=1}^{n}(\theta_1-\mu_i I) & 0 \\ \theta_2\prod_{i=1}^{n}(\theta_1-\mu_i I) & 0 \end{pmatrix}.$

Because the μi (i = 1, ⋯, n) are the eigenvalues of θ1, the Cayley–Hamilton theorem yields

$\prod_{i=1}^{n}(\theta_1-\mu_i I)=0.$

Therefore, the degree of the minimal polynomial of the preconditioned matrix $\mathcal{P}_2^{-1}\mathcal{A}$ is at most n + 1. Since the degree of the minimal polynomial equals the dimension of the corresponding Krylov subspace, the dimension of the Krylov subspace 𝓚($\mathcal{P}_2^{-1}\mathcal{A}$, b) is also at most n + 1. Thus, the proof of this theorem is completed. □

## 4 Numerical experiments

In this section, some numerical experiments are presented to illustrate the effectiveness of the preconditioner 𝓟2 for the linear system (3). We compare the performance of the preconditioner 𝓟2 with the PSHNS preconditioner and the HSS preconditioner. We denote the number of iteration steps by "Iter", the elapsed CPU time in seconds by "CPU", and the relative residual norm by "RES". Unless otherwise specified, we use left preconditioning with the GMRES method [34]. All computations are implemented in MATLAB on a PC with an Intel(R) Core(TM) i5 CPU at 2.50 GHz and 4.00 GB of memory.

In our implementations, the zero vector is adopted as the initial vector and the iteration stops when

$\mathrm{RES}:=\sqrt{\frac{\|g-Tx_k-Wy_k\|_2^2+\|f-Wx_k+Ty_k\|_2^2}{\|g\|_2^2+\|f\|_2^2}}<10^{-6}.$

The maximum number of iteration steps is set to 3000 for the unpreconditioned GMRES method and to 500 for the preconditioned GMRES methods. It should be emphasized that the sub-linear systems arising from the application of the preconditioners are solved by direct methods; in MATLAB, these sub-systems are solved through sparse Cholesky or LU factorization combined with AMD or column AMD reordering. The symbol "-" indicates that the method fails to reach the required stopping criterion within the maximum number of iterations or runs out of memory. We test three values of the parameter α, namely α = 0.001, α = 0.01 and α = 1.

#### Example 1

We consider the following complex symmetric linear system which comes from [13, 24, 26]

$\big[(-\omega^2 M+K)+\mathrm{i}(\omega C_v+C_H)\big]x=b,$

where M and K are the inertia and stiffness matrices, Cv and CH are the viscous and hysteretic damping matrices, respectively, and ω is the driving circular frequency. Here $K=I\otimes V_m+V_m\otimes I$, where $V_m=h^{-2}\,\mathrm{tridiag}(-1,2,-1)\in\mathbb{R}^{m\times m}$ is a tridiagonal matrix, ω = 2π, $h=\frac{1}{m+1}$, $C_v=\frac{1}{2}M$ and $C_H=\mu K$ with μ = 0.02 being a damping coefficient. We choose $W=h^2(-\omega^2 M+K)$, $T=h^2(\omega C_v+C_H)$ and test M = 10I, 15I, 25I, 35I, 50I. In this example, we set m = 32 and the right-hand side b = 𝓐 * ones(2m2, 1).

By choosing different α and M, we list the numerical results of the unpreconditioned GMRES method and of the 𝓟2, PSHNS and HSS preconditioned GMRES methods in Table 1. Table 1 indicates that, with suitably chosen parameters, the 𝓟2 preconditioned GMRES method requires fewer iteration steps and less CPU time than the other preconditioned GMRES methods. From Table 1, we also find that the PSHNS preconditioner and the HSS preconditioner are both more sensitive to the parameter α than the preconditioner 𝓟2. Figure 1 depicts the eigenvalue distributions of the original coefficient matrix 𝓐, the preconditioned matrices $\mathcal{P}_{\mathrm{HSS}}^{-1}\mathcal{A}$ (for α = 0.06) and $\mathcal{P}_{\mathrm{PSHNS}}^{-1}\mathcal{A}$ (for α = 0.06), and the preconditioned matrix $\mathcal{P}_2^{-1}\mathcal{A}$ (for α = 0.06 and α = 0.006) with M = 35I. From Figure 1, we observe that the eigenvalue distribution of the preconditioned matrix $\mathcal{P}_2^{-1}\mathcal{A}$ is much better than those of the other preconditioned matrices. Moreover, a smaller parameter α generally makes the eigenvalues of the preconditioned matrix $\mathcal{P}_2^{-1}\mathcal{A}$ more clustered.

Fig. 1

The eigenvalue distributions of the unpreconditioned matrix and of the preconditioned matrices $\mathcal{P}_{\mathrm{HSS}}^{-1}\mathcal{A}$, $\mathcal{P}_{\mathrm{PSHNS}}^{-1}\mathcal{A}$ and $\mathcal{P}_2^{-1}\mathcal{A}$ for Example 1 with M = 35I.

Table 1

Numerical results of GMRES and Preconditioned GMRES methods for Example 1

#### Example 2

We consider the following complex symmetric linear system [25, 26]

$\big[\big(T_m\otimes I_m+I_m\otimes T_m-k^2h^2(I_m\otimes I_m)\big)+\mathrm{i}\,\sigma_2(I_m\otimes I_m)\big]x=b,$

where $T_m=\mathrm{tridiag}(-1,2,-1)$ is a tridiagonal matrix of order m, k denotes the wavenumber, σ2 = 0.1, $h=\frac{1}{m+1}$ and b = 𝓐 * ones(2m2, 1). Here, we choose $W=T_m\otimes I_m+I_m\otimes T_m-k^2h^2(I_m\otimes I_m)$ and $T=\sigma_2(I_m\otimes I_m)$.

Table 2 shows that all three preconditioned GMRES methods exhibit much better convergence behavior than the unpreconditioned GMRES method. Meanwhile, the 𝓟2 preconditioned GMRES method gives better numerical results than the PSHNS and HSS preconditioned GMRES methods for the different parameters α, in terms of both iteration steps and CPU time. From Figure 2, we find that the eigenvalue distribution of the preconditioned matrix $\mathcal{P}_2^{-1}\mathcal{A}$ is much better than those of the other preconditioned matrices, and that the eigenvalues of $\mathcal{P}_2^{-1}\mathcal{A}$ become more clustered as α decreases.

Fig. 2

The eigenvalue distributions of the unpreconditioned matrix and of the preconditioned matrices $\mathcal{P}_{\mathrm{HSS}}^{-1}\mathcal{A}$, $\mathcal{P}_{\mathrm{PSHNS}}^{-1}\mathcal{A}$ and $\mathcal{P}_2^{-1}\mathcal{A}$ for Example 2 with k = 20, m = 32.

Table 2

Numerical results of GMRES and Preconditioned GMRES methods for Example 2

## 5 Conclusions

In this paper, we have proposed a relaxed block splitting preconditioner 𝓟2 for the real block two-by-two linear system (3) and analyzed the eigenvalue distribution of the corresponding preconditioned matrix. Meanwhile, the structure of the eigenvectors and an upper bound on the degree of the minimal polynomial of the preconditioned matrix $\mathcal{P}_2^{-1}\mathcal{A}$ were also derived. Numerical experiments illustrate the effectiveness of the preconditioner 𝓟2 for solving the linear system (3).

## Acknowledgement

The authors would like to thank the referees for their very detailed comments and suggestions which improved the presentation of this paper greatly.

This work was supported by the National Natural Science Foundation of China (No. 11471122) and in part by the Science and Technology Commission of Shanghai Municipality (No. 18dz2271000).

## References

• [1]

Betts J.-T., Kolmanovsky I., Practical methods for optimal control using nonlinear programming, Appl. Mech. Rev., 2002, 55, 68

• [2]

Lass O., Vallejos M., Borzi A., Douglas C. C., Implementation and analysis of multigrid schemes with finite elements for elliptic optimal control problems, Computing, 2009, 84, 27-48

• [3]

Arridge S. R., Optical tomography in medical imaging, Inverse Problems, 1999, 15, R41-R93

• [4]

Zheng Z., Zhang G.-F., Zhu M.-Z., A note on preconditioners for complex linear systems arising from PDE-constrained optimization problems, Appl. Math. Lett., 2016, 61, 114-121

• [5]

van Dijk W., Toyama F. M., Accurate numerical solutions of the time-dependent Schrödinger equation, Phys. Rev. E, 2007, 75, 036707-1-036707-10

• [6]

Benzi M., Bertaccini D., Block preconditioning of real-valued iterative algorithms for complex linear systems, IMA J. Numer. Anal., 2008, 28, 598-618

• [7]

Feriani A., Perotti F., Simoncini V., Iterative system solvers for the frequency analysis of linear mechanical systems, Comput. Methods Appl. Mech. Engrg., 2000, 190, 1719-1739

• [8]

Bao G., Sun W.-W., A fast algorithm for the electromagnetic scattering from a large cavity, SIAM J. Sci. Comput., 2005, 27, 553-574

• [9]

Benzi M., Golub G. H., Liesen J., Numerical solution of saddle point problems, Acta Numer., 2005, 14, 1-137

• [10]

Saad Y., Iterative Methods for Sparse Linear Systems, SIAM, 2003

• [11]

Bai Z.-Z., Golub G. H., Ng M. K., Hermitian and skew-Hermitian splitting methods for non-Hermitian positive definite linear systems, SIAM J. Matrix Anal. Appl., 2003, 24, 603-626

• [12]

Bai Z.-Z., Benzi M., Chen F., Modified HSS iteration methods for a class of complex symmetric linear systems, Computing, 2010, 87, 93-111

• [13]

Bai Z.-Z., Benzi M., Chen F., On preconditioned MHSS iteration methods for complex symmetric linear systems, Numer. Algorithms., 2011, 56, 297-317

• [14]

Bai Z.-Z., On preconditioned iteration methods for complex linear systems, J. Eng. Math., 2015, 93, 41-60

• [15]

Li X., Yang A.-L., Wu Y.-J., Lopsided PMHSS iteration method for a class of complex symmetric linear systems, Numer. Algorithms., 2014, 66, 555-568

• [16]

Zheng Q.-Q., Ma C.-F., Accelerated PMHSS iteration methods for complex symmetric linear systems, Numer. Algorithms., 2016, 73, 501-516

• [17]

Hezari D., Edalatpour V., Salkuyeh D. K., Preconditioned GSOR iterative method for a class of complex symmetric system of linear equations, Numer. Linear Algebra Appl., 2014, 22, 761-776

• [18]

Bai Z.-Z., Several splittings for non-Hermitian linear systems, Sci. China Ser. A: Math., 2008, 51, 1339-1348

• [19]

Zheng Z., Huang F.-L., Peng Y.-C., Double-step scale splitting iteration method for a class of complex symmetric linear systems, Appl. Math. Lett., 2017, 73, 91-97

• [20]

Bai Z.-Z., Rotated block triangular preconditioning based on PMHSS, Sci. China Math. 2013, 56, 2523-2538

• [21]

Bai Z.-Z., Benzi M., Chen F., Wang Z.-Q., Preconditioned MHSS iteration methods for a class of block two-by-two linear systems with applications to distributed control problems, IMA J. Numer. Anal., 2013, 33, 343-369

• [22]

Lang C., Ren Z.-R., Inexact rotated block triangular preconditioners for a class of block two-by-two matrices, J. Eng. Math., 2015, 93, 87-98

• [23]

Liang Z.-Z., Zhang G.-F., On SSOR iteration method for a class of block two-by-two linear systems, Numer. Algorithms., 2016, 71, 655-671

• [24]

Wu S.-L., Several variants of the Hermitian and skew-Hermitian splitting method for a class of complex symmetric linear systems, Numer. Linear Algebra Appl., 2015, 22, 338-356

• [25]

Zhang J.-H., Dai H., A new splitting preconditioner for the iterative solution of complex symmetric indefinite linear systems, Appl. Math. Lett., 2015, 49, 100-106

• [26]

Zhang J.-H., Dai H., A new block preconditioner for complex symmetric indefinite linear systems, Numer. Algorithms., 2017, 74, 889-903

• [27]

Zhang J.-L., Fan H.-T., Gu C.-Q., An improved block splitting preconditioner for complex symmetric indefinite linear systems, Numer. Algorithms., 2018, 77, 451-478

• [28]

Pan J.-Y., Ng M. K., Bai Z.-Z., New preconditioners for saddle point problems, Appl. Math. Comput., 2006, 172, 762-771

• [29]

Bai Z.-Z., Golub G. H., Lu L.-Z., Yin J.-F., Block triangular and skew-Hermitian splitting method for positive definite linear systems, SIAM J. Sci. Comput., 2005, 26, 844-863

• [30]

Cao Y., Dong J.-L., Wang Y.-M., A relaxed deteriorated PSS preconditioner for nonsymmetric saddle point problems from the steady Navier-Stokes equation, J. Comput. Appl. Math., 2015, 273, 41-60

• [31]

Fan H.-T., Zheng B., Zhu X.-Y., A relaxed positive semi-definite and skew-Hermitian splitting preconditioner for non-Hermitian generalized saddle point problems, Numer. Algorithms., 2016, 72, 813-834

• [32]

Cao Y., Yao L.-Q., Jiang M.-Q., Niu Q., A relaxed HSS preconditioner for saddle point problems from meshfree discretization, J. Comput. Math., 2013, 31, 398-421

• [33]

Zhang K., Zhang J.-L., Gu C.-Q., A new relaxed PSS preconditioner for nonsymmetric saddle point problems, Appl. Math. Comput., 2017, 308, 115-129

• [34]

Saad Y., Schultz M. H., GMRES: A generalized minimal residual algorithm for solving nonsymmetric linear systems, SIAM J. Sci. Stat. Comput., 1986, 7, 856-869

Accepted: 2018-03-29

Published Online: 2018-06-07

Citation Information: Open Mathematics, Volume 16, Issue 1, Pages 561–573, ISSN (Online) 2391-5455,
