Open Mathematics

formerly Central European Journal of Mathematics

Volume 16, Issue 1


A relaxed block splitting preconditioner for complex symmetric indefinite linear systems

Yunying Huang
School of Mathematical Sciences, Shanghai Key Laboratory of Pure Mathematics and Mathematical Practice, East China Normal University, Dongchuan RD 500, Shanghai 200241, China

Guoliang Chen (corresponding author)
School of Mathematical Sciences, Shanghai Key Laboratory of Pure Mathematics and Mathematical Practice, East China Normal University, Dongchuan RD 500, Shanghai 200241, China
Published Online: 2018-06-07 | DOI: https://doi.org/10.1515/math-2018-0051

Abstract

In this paper, we propose a relaxed block splitting preconditioner for a class of complex symmetric indefinite linear systems to accelerate the convergence of Krylov subspace iteration methods; the relaxed preconditioner is much closer to the original block two-by-two coefficient matrix than existing block splitting preconditioners. We study the spectral properties and the eigenvector distribution of the corresponding preconditioned matrix. In addition, the degree of the minimal polynomial of the preconditioned matrix is derived. Finally, numerical experiments are presented to illustrate the effectiveness of the relaxed splitting preconditioner.

Keywords: Complex symmetric linear system; Preconditioner; Krylov subspace method; GMRES

MSC 2010: 65F10; 65F50

1 Introduction

Consider the following large sparse nonsingular complex symmetric linear system

$$Au = b, \qquad A \in \mathbb{C}^{n\times n}, \quad u, b \in \mathbb{C}^{n}, \tag{1}$$

where $A = W + iT$, $W, T \in \mathbb{R}^{n\times n}$ are symmetric matrices and $i = \sqrt{-1}$ denotes the imaginary unit. Complex linear systems of the form (1) arise in many areas of scientific computing and engineering applications, such as FFT-based solution of certain time-dependent PDEs [1], distributed control problems [2], diffuse optical tomography [3], PDE-constrained optimization problems [4], quantum mechanics [5] and so on; see [6,7,8] and the references therein for other applications.

Let $u = x + iy$ and $b = f + ig$ with $x, y, f, g \in \mathbb{R}^{n}$; then the complex linear system (1) can be equivalently rewritten as the real block two-by-two linear system

$$\tilde{\mathcal{A}}u = \begin{bmatrix} W & -T \\ T & W \end{bmatrix}\begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} f \\ g \end{bmatrix} = b, \tag{2}$$

or the following two-by-two block real equivalent formulation

$$\mathcal{A}u = \begin{bmatrix} T & W \\ W & -T \end{bmatrix}\begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} g \\ f \end{bmatrix} = b. \tag{3}$$
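To make the equivalence concrete, the following small MATLAB snippet (our own illustration, not from the paper; all variable names are ours) checks that solving the real system (3) recovers the solution of the complex system (1):

```matlab
% Our sketch: verify that the real form (3) reproduces the solution of (1).
n = 5; rng(0);
W = randn(n); W = (W + W')/2;        % symmetric (generically indefinite)
T = randn(n); T = T'*T + eye(n);     % symmetric positive definite
A = W + 1i*T;                        % complex matrix of system (1)
u = randn(n,1) + 1i*randn(n,1);
b = A*u; f = real(b); g = imag(b);
Ablk = [T, W; W, -T];                % coefficient matrix of system (3)
xy = Ablk \ [g; f];                  % solve (3)
disp(norm(xy(1:n) + 1i*xy(n+1:end) - u))   % ~ 1e-14: x + iy recovers u
```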

It can be seen that the linear systems (2) and (3) both avoid complex arithmetic, but the coefficient matrices in (2) and (3) are doubled in size. Because it is more attractive to use iterative methods rather than direct methods for solving (1) or (2), many efficient iterative methods, together with their numerical properties, have been studied in the literature; see [9,10]. When both W and T are symmetric positive semi-definite with at least one of them positive definite, several efficient iterative methods have been proposed. Recently, based on the Hermitian and skew-Hermitian splitting (HSS) method [11], Bai et al. [12] developed the modified HSS (MHSS) iteration method for the complex linear system (1), which is convergent for any positive constant α. To accelerate the convergence of the MHSS iteration method, Bai et al. [13] introduced the preconditioned MHSS (PMHSS) method, which is unconditionally convergent and shows h-independent convergence behavior. In [14], Bai further analyzed algebraic and convergence properties of the PMHSS iteration method for solving complex linear systems. Moreover, there are also other effective iterative methods, such as the lopsided PMHSS (LPMHSS) iteration method, the accelerated PMHSS (APMHSS) iteration method, the preconditioned generalized SOR (PGSOR) iterative method, the skew-normal splitting (SNS) method, the double-step scale splitting (DSS) iteration method [15,16,17,18,19], and so on.

If the matrices W and T are symmetric positive semi-definite, some efficient preconditioners have been presented. Bai [20] proposed the rotated block triangular (RBT) preconditioners, which are based on the PMHSS preconditioning matrix [21], for the linear system (2). Lang and Ren [22] established the inexact RBT (IRBT) preconditioners. For solving the linear system (2), Liang and Zhang [23] developed the symmetric SOR (SSOR) method.

However, when W ∈ ℝ^{n×n} is symmetric indefinite and T ∈ ℝ^{n×n} is symmetric positive definite, the matrices $\alpha I + W$, $\alpha V + W$ and $\alpha W + T^2$ may be indefinite or singular, which can lead to slow convergence of the MHSS method, the PMHSS method and the SNS method. To overcome these problems, the Hermitian normal splitting (HNS) method and its variant, the simplified HNS (SHNS) method, were introduced by Wu [24] for solving the complex symmetric linear system (1). Zhang and Dai [25] presented a preconditioned SHNS (PSHNS) iteration method and constructed a new preconditioner. In this paper, we solve the linear system (3) with W symmetric indefinite and T symmetric positive definite. For the real formulation (3), Zhang and Dai [26] established a new block preconditioner and analyzed the spectral properties of the corresponding preconditioned matrix. An improved block (IB) splitting preconditioner was proposed by Zhang et al. [27], who gave the convergence property of the corresponding iteration method under suitable conditions.

In this paper, by using the relaxing technique, we construct a relaxed block splitting preconditioner for the real block two-by-two linear system (3). The relaxed splitting preconditioner is much closer to the original coefficient matrix than the block preconditioner in [26]. We analyze the spectral properties of the corresponding preconditioned matrix and show that it has the eigenvalue 1 with algebraic multiplicity at least n. The structure of the eigenvectors and the dimension of the Krylov subspace for the preconditioned matrix are also derived.

The organization of the paper is as follows. In Section 2, a relaxed block splitting preconditioner is presented. In Section 3, we study the distribution of eigenvalues, the form of the eigenvectors and the dimension of the Krylov subspace of the corresponding preconditioned matrix. In Section 4, some numerical examples are presented to compare the new preconditioner with the PSHNS preconditioner and the HSS preconditioner. Finally, we give some brief concluding remarks in Section 5.

2 A new relaxed splitting preconditioner

In this section, by employing a relaxation technique, we establish a relaxed block splitting preconditioner for solving the linear system (3).

Pan et al. [28] employed the positive-definite and skew-Hermitian splitting (PSS) iteration method of [29] to develop a PSS preconditioner for saddle point problems. For the linear system (3), the coefficient matrix 𝓐 can be split as

$$\mathcal{A} = \mathcal{J} + \mathcal{K}, \tag{4}$$

where $\mathcal{J} = \begin{bmatrix} T & 0 \\ 0 & 0 \end{bmatrix}$ and $\mathcal{K} = \begin{bmatrix} 0 & W \\ W & -T \end{bmatrix}$. Recently, based on the PSS preconditioner, Zhang and Dai [26] presented the preconditioner

$$\mathcal{P}_1 = \frac{1}{\alpha}(\alpha I + \mathcal{K})(\alpha I + \mathcal{J}) = \frac{1}{\alpha}\begin{bmatrix} \alpha I & W \\ W & \alpha I - T \end{bmatrix}\begin{bmatrix} \alpha I + T & 0 \\ 0 & \alpha I \end{bmatrix} = \begin{bmatrix} \alpha I + T & W \\ W\big(I + \frac{1}{\alpha}T\big) & \alpha I - T \end{bmatrix}. \tag{5}$$

Then the difference between the preconditioner 𝓟1 and the coefficient matrix 𝓐 is

$$\mathcal{R}_1 = \mathcal{P}_1 - \mathcal{A} = \begin{bmatrix} \alpha I & 0 \\ \frac{1}{\alpha}WT & \alpha I \end{bmatrix}. \tag{6}$$

A general criterion for an efficient preconditioner is that it should be as close as possible to the coefficient matrix 𝓐, so that the preconditioned matrix has a clustered spectrum, while remaining easy to implement. Inspired by the relaxation technique of [30,31,32,33], we propose the relaxed splitting preconditioner

$$\mathcal{P}_2 = \frac{1}{\alpha}\begin{bmatrix} \alpha I & W \\ W & -T \end{bmatrix}\begin{bmatrix} T & 0 \\ 0 & \alpha I \end{bmatrix} = \begin{bmatrix} T & W \\ \frac{1}{\alpha}WT & -T \end{bmatrix}. \tag{7}$$

From (3) and (7), the difference between the preconditioner 𝓟2 and the coefficient matrix 𝓐 is

$$\mathcal{R}_2 = \mathcal{P}_2 - \mathcal{A} = \begin{bmatrix} 0 & 0 \\ \frac{1}{\alpha}WT - W & 0 \end{bmatrix}. \tag{8}$$

It is easy to see that the (2,1)-block in (8) differs from that in (6), and that the (1,1)- and (2,2)-blocks in (8) now vanish, which indicates that the preconditioner 𝓟2 is much closer to the coefficient matrix 𝓐 than the preconditioner 𝓟1.

From (8), we know that the preconditioner 𝓟2 corresponds to the following splitting of the coefficient matrix 𝓐:

$$\mathcal{A} = \mathcal{P}_2 - \mathcal{R}_2 = \begin{bmatrix} T & W \\ \frac{1}{\alpha}WT & -T \end{bmatrix} - \begin{bmatrix} 0 & 0 \\ \frac{1}{\alpha}WT - W & 0 \end{bmatrix}. \tag{9}$$

In the following section, we analyze the spectral properties of the preconditioned matrix $\mathcal{P}_2^{-1}\mathcal{A}$.

3 Spectral analysis of the preconditioned matrix

In this section, we study the spectral properties of the preconditioned matrix $\mathcal{P}_2^{-1}\mathcal{A}$ and give an upper bound on the dimension of the Krylov subspace for this preconditioned matrix. We first need the inverse of 𝓟2; the explicit form of $\mathcal{P}_2^{-1}$ is given in the following lemma. The proof consists of straightforward calculations and is omitted here.

Lemma 3.1

Let

$$\hat{\mathcal{P}} = \begin{bmatrix} \alpha I & W \\ W & -T \end{bmatrix}, \qquad \tilde{\mathcal{P}} = \begin{bmatrix} T & 0 \\ 0 & \alpha I \end{bmatrix},$$

so that $\mathcal{P}_2 = \frac{1}{\alpha}\hat{\mathcal{P}}\tilde{\mathcal{P}}$. Here $\hat{\mathcal{P}}$ admits the block-triangular factorization

$$\hat{\mathcal{P}} = \begin{bmatrix} I & 0 \\ \frac{1}{\alpha}W & I \end{bmatrix}\begin{bmatrix} \alpha I & 0 \\ 0 & -\big(T + \frac{1}{\alpha}W^2\big) \end{bmatrix}\begin{bmatrix} I & \frac{1}{\alpha}W \\ 0 & I \end{bmatrix}.$$

Then we obtain

$$\mathcal{P}_2^{-1} = \alpha\tilde{\mathcal{P}}^{-1}\hat{\mathcal{P}}^{-1} = \begin{bmatrix} T^{-1} - \frac{1}{\alpha}T^{-1}W\big(T + \frac{1}{\alpha}W^2\big)^{-1}W & T^{-1}W\big(T + \frac{1}{\alpha}W^2\big)^{-1} \\ \frac{1}{\alpha}\big(T + \frac{1}{\alpha}W^2\big)^{-1}W & -\big(T + \frac{1}{\alpha}W^2\big)^{-1} \end{bmatrix}. \tag{10}$$
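Formula (10) is easy to check numerically; the following MATLAB fragment (our own illustration, not from the paper) compares the explicit inverse with a direct inverse on a small random example:

```matlab
% Our check of the explicit inverse (10) on a small random example.
n = 4; alpha = 0.3; rng(2);
W = randn(n); W = (W + W')/2;            % symmetric indefinite
T = randn(n); T = T'*T + eye(n);         % symmetric positive definite
S  = T + (1/alpha)*(W*W);                % the block T + (1/alpha)*W^2
P2 = [T, W; (1/alpha)*W*T, -T];          % the preconditioner (7)
P2inv = [inv(T) - (1/alpha)*(T\W)*(S\W), (T\W)/S;
         (1/alpha)*(S\W),                -inv(S)];
disp(norm(P2inv - inv(P2)))              % ~ machine precision
```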

In the following, we analyze the eigenvalue distribution of the preconditioned matrix $\mathcal{P}_2^{-1}\mathcal{A}$.

Theorem 3.2

Assume that the coefficient matrix 𝓐 is nonsingular, W ∈ ℝ^{n×n} is symmetric indefinite and nonsingular, and T ∈ ℝ^{n×n} is symmetric positive definite. Let α be a positive constant and $U = WT^{-1}W$, and let the preconditioner 𝓟2 be defined as in (7). Then the preconditioned matrix $\mathcal{P}_2^{-1}\mathcal{A}$ has the eigenvalue 1 with algebraic multiplicity at least n, and the remaining eigenvalues are λ_i, i = 1, 2, ⋯, n, where λ_i (i = 1, 2, ⋯, n) are the eigenvalues of the matrix $\alpha T^{-1}(\alpha I + U)^{-1}(T + U)$.

Proof

From (9) and Lemma 3.1, we have

$$\mathcal{P}_2^{-1}\mathcal{A} = I - \mathcal{P}_2^{-1}\mathcal{R}_2 = I - \begin{bmatrix} T^{-1}W\big(T+\frac{1}{\alpha}W^2\big)^{-1}W\big(\frac{1}{\alpha}T - I\big) & 0 \\ -\big(T+\frac{1}{\alpha}W^2\big)^{-1}W\big(\frac{1}{\alpha}T - I\big) & 0 \end{bmatrix} = \begin{bmatrix} I - T^{-1}W\big(T+\frac{1}{\alpha}W^2\big)^{-1}W\big(\frac{1}{\alpha}T - I\big) & 0 \\ \big(T+\frac{1}{\alpha}W^2\big)^{-1}W\big(\frac{1}{\alpha}T - I\big) & I \end{bmatrix}.$$

It follows from $U = WT^{-1}W$ and the nonsingularity of W that

$$\begin{aligned} I - T^{-1}W\Big(T+\frac{1}{\alpha}W^2\Big)^{-1}W\Big(\frac{1}{\alpha}T - I\Big) &= T^{-1}\Big(T - W\Big(T+\frac{1}{\alpha}W^2\Big)^{-1}W\Big(\frac{1}{\alpha}T - I\Big)\Big) \\ &= T^{-1}\Big(T - W\Big(W\Big(W^{-1}TW^{-1}+\frac{1}{\alpha}I\Big)W\Big)^{-1}W\Big(\frac{1}{\alpha}T - I\Big)\Big) \\ &= T^{-1}\Big(T - \Big(\frac{1}{\alpha}I + W^{-1}TW^{-1}\Big)^{-1}\Big(\frac{1}{\alpha}T - I\Big)\Big) \\ &= T^{-1}\Big(T - \Big(\frac{1}{\alpha}U + I\Big)^{-1}U\Big(\frac{1}{\alpha}T - I\Big)\Big) \\ &= \alpha T^{-1}(\alpha I + U)^{-1}(T + U). \end{aligned}$$

Hence, we get

$$\mathcal{P}_2^{-1}\mathcal{A} = \begin{bmatrix} \alpha T^{-1}(\alpha I + U)^{-1}(T + U) & 0 \\ \big(T+\frac{1}{\alpha}W^2\big)^{-1}W\big(\frac{1}{\alpha}T - I\big) & I \end{bmatrix}. \tag{11}$$

The results now follow immediately from the block-triangular structure of (11). □
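The eigenvalue statement of Theorem 3.2 can also be illustrated numerically; in the following MATLAB fragment (our own sketch, not from the paper), half of the eigenvalues of $\mathcal{P}_2^{-1}\mathcal{A}$ equal 1 and the rest coincide with the eigenvalues of $\alpha T^{-1}(\alpha I + U)^{-1}(T + U)$:

```matlab
% Our numerical illustration of Theorem 3.2.
n = 6; alpha = 0.5; rng(1);
W = randn(n); W = (W + W')/2;                    % symmetric indefinite
T = randn(n); T = T'*T + eye(n);                 % symmetric positive definite
A  = [T, W; W, -T];
P2 = [T, W; (1/alpha)*W*T, -T];                  % relaxed preconditioner (7)
U  = W*(T\W);
ev  = sort(eig(P2\A));                           % n eigenvalues equal 1
lam = sort(eig(alpha*(T\((alpha*eye(n) + U)\(T + U)))));  % the remaining n
disp([sum(abs(ev - 1) < 1e-10), n])              % first entry equals n
```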

The detailed implementation of the preconditioning process is as follows. When the preconditioner 𝓟2 is applied within a Krylov subspace method, the following system must be solved at each step:

$$\mathcal{P}_2 z = \frac{1}{\alpha}\begin{bmatrix} \alpha I & W \\ W & -T \end{bmatrix}\begin{bmatrix} T & 0 \\ 0 & \alpha I \end{bmatrix} z = r, \tag{12}$$

where $r = [r_1^T, r_2^T]^T$ is the current residual vector and $z = [z_1^T, z_2^T]^T$ is the generalized residual vector, with $z_1, z_2, r_1, r_2 \in \mathbb{R}^n$. Based on Lemma 3.1 and (12), we have

$$\begin{bmatrix} z_1 \\ z_2 \end{bmatrix} = \alpha\begin{bmatrix} T^{-1} & 0 \\ 0 & \frac{1}{\alpha}I \end{bmatrix}\begin{bmatrix} I & -\frac{1}{\alpha}W \\ 0 & I \end{bmatrix}\begin{bmatrix} \frac{1}{\alpha}I & 0 \\ 0 & -\big(T+\frac{1}{\alpha}W^2\big)^{-1} \end{bmatrix}\begin{bmatrix} I & 0 \\ -\frac{1}{\alpha}W & I \end{bmatrix}\begin{bmatrix} r_1 \\ r_2 \end{bmatrix}. \tag{13}$$

The implementation of the preconditioner 𝓟2 then proceeds as follows.

Algorithm 3.3

(The preconditioner 𝓟2). For a given vector $r = [r_1^T, r_2^T]^T$, the vector $z = [z_1^T, z_2^T]^T$ can be computed by (13) from the following steps:

  1. Compute $t_1 = \frac{1}{\alpha}Wr_1 - r_2$;

  2. Solve $\big(T + \frac{1}{\alpha}W^2\big)z_2 = t_1$;

  3. Compute $t_2 = r_1 - Wz_2$;

  4. Solve $Tz_1 = t_2$;

  5. Set the generalized residual vector $z = [z_1^T, z_2^T]^T$.
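A compact MATLAB realization of Algorithm 3.3 might look as follows (a minimal sketch; the function name and interface are ours, not from the paper). In practice one would factor $T + \frac{1}{\alpha}W^2$ and T once, outside the GMRES loop, and reuse the factors:

```matlab
% Our sketch of Algorithm 3.3: apply z = P2^{-1} * r.
function z = applyP2(W, T, alpha, r)
    n  = size(T, 1);
    r1 = r(1:n); r2 = r(n+1:end);
    t1 = (1/alpha)*(W*r1) - r2;            % step 1
    z2 = (T + (1/alpha)*(W*W)) \ t1;       % step 2 (SPD solve)
    t2 = r1 - W*z2;                        % step 3
    z1 = T \ t2;                           % step 4 (SPD solve)
    z  = [z1; z2];                         % step 5
end
```

With MATLAB's built-in gmres, the preconditioner can then be passed as a function handle, e.g. gmres(A, b, [], 1e-6, 500, @(r) applyP2(W, T, alpha, r)).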

It can be seen from Algorithm 3.3 that we need to solve two sub-linear systems, with coefficient matrices $T + \frac{1}{\alpha}W^2$ and T, at each iteration. Based on the assumptions on the matrices W and T in Section 1, the two matrices $T + \frac{1}{\alpha}W^2$ and T are both symmetric positive definite. Hence, we can solve the two sub-linear systems $\big(T + \frac{1}{\alpha}W^2\big)z_2 = t_1$ and $Tz_1 = t_2$ by the sparse Cholesky factorization, the conjugate gradient (CG) method or the preconditioned CG method. To demonstrate the effectiveness of the preconditioner 𝓟2, we compare it with the PSHNS preconditioner and the HSS preconditioner, which are given as follows:

$$\mathcal{P}_{\rm PSHNS} = \frac{1}{2\alpha}(\alpha V + iW)W^{-1}(\alpha T + WV^{-1}W) \quad\text{and}\quad \mathcal{P}_{\rm HSS} = \frac{1}{\alpha}\begin{bmatrix} \alpha I + T & 0 \\ 0 & \alpha I + T \end{bmatrix}\begin{bmatrix} \alpha I & -W \\ W & \alpha I \end{bmatrix},$$

where $V \in \mathbb{R}^{n\times n}$ is a prescribed symmetric positive definite matrix.

The application of the PSHNS preconditioner and the HSS preconditioner within GMRES is described in Algorithms 3.4 and 3.5, respectively.

Algorithm 3.4

(The PSHNS Preconditioner). For a given vector r, the vector z can be computed from the following steps:

  1. Solve (αV + iW)d = 2αr;

  2. Solve $(\alpha T + WV^{-1}W)z = Wd$.

Algorithm 3.5

(The HSS Preconditioner). For a given vector $r = [r_1^T, r_2^T]^T$, the vector $z = [z_1^T, z_2^T]^T$ can be computed from the following steps:

  1. Solve $(\alpha I + T)\nu_1 = r_1$;

  2. Solve $(\alpha I + T)\nu_2 = r_2$;

  3. Compute $\mu_1 = \alpha\nu_2 - W\nu_1$;

  4. Solve $\big(\alpha I + \frac{1}{\alpha}W^2\big)z_2 = \mu_1$;

  5. Compute $z_1 = \nu_1 + \frac{1}{\alpha}Wz_2$;

  6. Set the generalized residual vector $z = [z_1^T, z_2^T]^T$.

From Algorithm 3.4, we see that applying the PSHNS preconditioner involves the complex coefficient matrix αV + iW, whose corresponding linear subsystem may be difficult to solve. Meanwhile, it can be seen from Algorithm 3.5 that the implementation of the HSS preconditioner requires the solution of three symmetric positive definite linear subsystems.
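For comparison, a matching MATLAB sketch of Algorithm 3.5 (again with our own naming, and dense solves for brevity) reads:

```matlab
% Our sketch of Algorithm 3.5: apply the HSS preconditioner.
function z = applyHSS(W, T, alpha, r)
    n   = size(T, 1);
    r1  = r(1:n); r2 = r(n+1:end);
    I   = eye(n);
    nu1 = (alpha*I + T) \ r1;                    % step 1
    nu2 = (alpha*I + T) \ r2;                    % step 2
    mu1 = alpha*nu2 - W*nu1;                     % step 3
    z2  = (alpha*I + (1/alpha)*(W*W)) \ mu1;     % step 4
    z1  = nu1 + (1/alpha)*(W*z2);                % step 5
    z   = [z1; z2];                              % step 6
end
```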

In the following, we discuss the eigenvector distribution of the preconditioned matrix $\mathcal{P}_2^{-1}\mathcal{A}$ in detail.

Theorem 3.6

Let the preconditioner 𝓟2 be defined as in (7). Then the preconditioned matrix $\mathcal{P}_2^{-1}\mathcal{A}$ has n + i + j (0 ≤ i + j ≤ n) linearly independent eigenvectors, which are described as follows:

  1. n eigenvectors $\begin{bmatrix} 0 \\ v_k \end{bmatrix}$ (k = 1, 2, ⋯, n) that correspond to the eigenvalue 1, where $v_k$ (k = 1, 2, ⋯, n) are arbitrary linearly independent vectors.

  2. i (0 ≤ i ≤ n) eigenvectors $\begin{bmatrix} u_k^1 \\ v_k^1 \end{bmatrix}$ (1 ≤ k ≤ i) that correspond to the eigenvalue 1, where $u_k^1 \neq 0$, $Tu_k^1 = \alpha u_k^1$ and $v_k^1$ are arbitrary vectors.

  3. j (0 ≤ j ≤ n) eigenvectors $\begin{bmatrix} u_k^2 \\ v_k^2 \end{bmatrix}$ (1 ≤ k ≤ j) that correspond to the eigenvalues $\lambda_k \neq 1$, where $\alpha(T+U)u_k^2 = \lambda_k(\alpha I + U)Tu_k^2$, $u_k^2 \neq 0$ and $v_k^2 = \frac{1}{\lambda_k - 1}\big(T + \frac{1}{\alpha}W^2\big)^{-1}\big(\frac{1}{\alpha}WT - W\big)u_k^2$.

Proof

Let λ be an eigenvalue of the preconditioned matrix $\mathcal{P}_2^{-1}\mathcal{A}$ and $\begin{bmatrix} u \\ v \end{bmatrix}$ be a corresponding eigenvector. Then

$$\begin{bmatrix} \alpha T^{-1}(\alpha I + U)^{-1}(T+U) & 0 \\ \big(T+\frac{1}{\alpha}W^2\big)^{-1}W\big(\frac{1}{\alpha}T - I\big) & I \end{bmatrix}\begin{bmatrix} u \\ v \end{bmatrix} = \lambda\begin{bmatrix} u \\ v \end{bmatrix}.$$

By a simple calculation, we have

$$\begin{cases} \alpha(T+U)u = \lambda(\alpha I + U)Tu, \\ \big(T+\frac{1}{\alpha}W^2\big)^{-1}W\big(\frac{1}{\alpha}T - I\big)u = (\lambda - 1)v. \end{cases} \tag{14}$$

If λ = 1, Eq. (14) becomes

$$\begin{cases} \alpha(T+U)u = (\alpha I + U)Tu, \\ \big(T+\frac{1}{\alpha}W^2\big)^{-1}W\big(\frac{1}{\alpha}T - I\big)u = 0. \end{cases} \tag{15}$$

Then we have the following two situations.

  1. When u = 0, Eq. (15) always holds. Hence, there are n linearly independent eigenvectors $\begin{bmatrix} 0 \\ v_k \end{bmatrix}$ (k = 1, 2, ⋯, n) corresponding to the eigenvalue 1, where $v_k$ (k = 1, 2, ⋯, n) are arbitrary linearly independent vectors.

  2. When u ≠ 0, a simple calculation with (15) (using the nonsingularity of W) gives Tu = αu. Then there are i (0 ≤ i ≤ n) linearly independent eigenvectors $\begin{bmatrix} u_k^1 \\ v_k^1 \end{bmatrix}$ (1 ≤ k ≤ i) that correspond to the eigenvalue 1, where $u_k^1 \neq 0$, $Tu_k^1 = \alpha u_k^1$ and $v_k^1$ are arbitrary vectors.

Next, we consider the case λ ≠ 1. It can be seen from (14) that the results of case 3 hold.

Finally, we show that the n + i + j eigenvectors are linearly independent. Let $c = [c_1, c_2, \cdots, c_n]^T$, $c^1 = [c_1^1, c_2^1, \cdots, c_i^1]^T$ and $c^2 = [c_1^2, c_2^2, \cdots, c_j^2]^T$ be three vectors with 0 ≤ i, j ≤ n. Then we have to show that

$$\begin{bmatrix} 0 & \cdots & 0 \\ v_1 & \cdots & v_n \end{bmatrix}\begin{bmatrix} c_1 \\ \vdots \\ c_n \end{bmatrix} + \begin{bmatrix} u_1^1 & \cdots & u_i^1 \\ v_1^1 & \cdots & v_i^1 \end{bmatrix}\begin{bmatrix} c_1^1 \\ \vdots \\ c_i^1 \end{bmatrix} + \begin{bmatrix} u_1^2 & \cdots & u_j^2 \\ v_1^2 & \cdots & v_j^2 \end{bmatrix}\begin{bmatrix} c_1^2 \\ \vdots \\ c_j^2 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix} \tag{16}$$

holds if and only if the vectors c, c¹ and c² are all zero, where the first matrix consists of the eigenvectors corresponding to the eigenvalue 1 in case (1), the second matrix consists of those in case (2), and the third matrix consists of the eigenvectors corresponding to λ ≠ 1 in case (3). Multiplying both sides of Eq. (16) from the left by the matrix $\mathcal{P}_2^{-1}\mathcal{A}$, we get

$$\begin{bmatrix} 0 & \cdots & 0 \\ v_1 & \cdots & v_n \end{bmatrix}\begin{bmatrix} c_1 \\ \vdots \\ c_n \end{bmatrix} + \begin{bmatrix} u_1^1 & \cdots & u_i^1 \\ v_1^1 & \cdots & v_i^1 \end{bmatrix}\begin{bmatrix} c_1^1 \\ \vdots \\ c_i^1 \end{bmatrix} + \begin{bmatrix} u_1^2 & \cdots & u_j^2 \\ v_1^2 & \cdots & v_j^2 \end{bmatrix}\begin{bmatrix} \lambda_1 c_1^2 \\ \vdots \\ \lambda_j c_j^2 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix}. \tag{17}$$

Subtracting (16) from (17), we obtain

$$\begin{bmatrix} u_1^2 & \cdots & u_j^2 \\ v_1^2 & \cdots & v_j^2 \end{bmatrix}\begin{bmatrix} (\lambda_1 - 1)c_1^2 \\ \vdots \\ (\lambda_j - 1)c_j^2 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix}.$$

Because the eigenvalues satisfy $\lambda_k \neq 1$ and the vectors $\begin{bmatrix} u_k^2 \\ v_k^2 \end{bmatrix}$ (k = 1, ⋯, j) are linearly independent, we know that $c_k^2 = 0$ (k = 1, ⋯, j). Thus, Eq. (16) reduces to

$$\begin{bmatrix} 0 & \cdots & 0 \\ v_1 & \cdots & v_n \end{bmatrix}\begin{bmatrix} c_1 \\ \vdots \\ c_n \end{bmatrix} + \begin{bmatrix} u_1^1 & \cdots & u_i^1 \\ v_1^1 & \cdots & v_i^1 \end{bmatrix}\begin{bmatrix} c_1^1 \\ \vdots \\ c_i^1 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix}.$$

As the vectors $u_k^1$ (k = 1, ⋯, i) are also linearly independent, we have $c_k^1 = 0$ (k = 1, ⋯, i). Hence, the above equation becomes

$$\begin{bmatrix} 0 & \cdots & 0 \\ v_1 & \cdots & v_n \end{bmatrix}\begin{bmatrix} c_1 \\ \vdots \\ c_n \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix}.$$

Since $v_k$ (k = 1, ⋯, n) are linearly independent, we have $c_k = 0$ (k = 1, ⋯, n). Therefore, the n + i + j eigenvectors are linearly independent. The proof of this theorem is complete. □

Based on Krylov subspace theory, an iterative method with an optimality property (such as the GMRES method [34]) terminates as soon as the number of iterations reaches the degree of the minimal polynomial of the preconditioned matrix. Next, we give an upper bound for the degree of the minimal polynomial of the preconditioned matrix $\mathcal{P}_2^{-1}\mathcal{A}$.

Theorem 3.7

Assume that the conditions of Theorem 3.2 are satisfied and let the preconditioner 𝓟2 be defined as in (7). Then the degree of the minimal polynomial of the preconditioned matrix $\mathcal{P}_2^{-1}\mathcal{A}$ is at most n + 1. Thus, the dimension of the Krylov subspace $\mathcal{K}(\mathcal{P}_2^{-1}\mathcal{A}, b)$ is at most n + 1.

Proof

From (11), we know that the preconditioned matrix can be expressed as

$$\mathcal{P}_2^{-1}\mathcal{A} = \begin{bmatrix} \theta_1 & 0 \\ \theta_2 & I \end{bmatrix},$$

where $\theta_1 = \alpha T^{-1}(\alpha I + U)^{-1}(T + U)$ and $\theta_2 = \big(T+\frac{1}{\alpha}W^2\big)^{-1}W\big(\frac{1}{\alpha}T - I\big)$. Let $\mu_i$ (i = 1, ⋯, n) be the eigenvalues of the matrix $\theta_1$. The characteristic polynomial of the preconditioned matrix $\mathcal{P}_2^{-1}\mathcal{A}$ is

$$\big(\mathcal{P}_2^{-1}\mathcal{A} - I\big)^n\prod_{i=1}^{n}\big(\mathcal{P}_2^{-1}\mathcal{A} - \mu_i I\big).$$

Then

$$\big(\mathcal{P}_2^{-1}\mathcal{A} - I\big)\prod_{i=1}^{n}\big(\mathcal{P}_2^{-1}\mathcal{A} - \mu_i I\big) = \begin{bmatrix} (\theta_1 - I)\prod_{i=1}^{n}(\theta_1 - \mu_i I) & 0 \\ \theta_2\prod_{i=1}^{n}(\theta_1 - \mu_i I) & 0 \end{bmatrix}.$$

Because $\mu_i$ (i = 1, ⋯, n) are the eigenvalues of $\theta_1$, the Cayley-Hamilton theorem gives

$$\prod_{i=1}^{n}(\theta_1 - \mu_i I) = 0.$$

Therefore, we obtain that the degree of the minimal polynomial of the preconditioned matrix $\mathcal{P}_2^{-1}\mathcal{A}$ is at most n + 1. It is well known that the degree of the minimal polynomial equals the maximum dimension of the corresponding Krylov subspace. So the dimension of the Krylov subspace $\mathcal{K}(\mathcal{P}_2^{-1}\mathcal{A}, b)$ is also at most n + 1. Thus, the proof of this theorem is complete. □
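In exact arithmetic, Theorem 3.7 thus implies that GMRES preconditioned by 𝓟2 terminates within n + 1 iterations; the following MATLAB check (our own sketch, not from the paper) illustrates this on a small example:

```matlab
% Our check: P2-preconditioned GMRES terminates within n+1 steps.
n = 8; alpha = 0.4; rng(3);
W = randn(n); W = (W + W')/2;            % symmetric indefinite
T = randn(n); T = T'*T + eye(n);         % symmetric positive definite
A  = [T, W; W, -T]; b = randn(2*n, 1);
P2 = [T, W; (1/alpha)*W*T, -T];          % preconditioner (7), passed as M
[x, flag, relres, iter] = gmres(A, b, [], 1e-12, 2*n, P2);
disp(iter)   % iter(2), the inner iteration count, is at most n+1
```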

4 Numerical experiments

In this section, some numerical experiments are presented to illustrate the effectiveness of the preconditioner 𝓟2 for the linear system (3). We compare the performance of the preconditioner 𝓟2 with the PSHNS preconditioner and the HSS preconditioner. We denote the number of iteration steps by "Iter", the elapsed CPU time in seconds by "CPU", and the relative residual norm by "RES". Unless otherwise specified, we use left preconditioning with the GMRES method [34]. All computations are implemented in MATLAB on a PC with an Intel(R) Core(TM) i5 CPU at 2.50 GHz and 4.00 GB of memory.

In our implementations, the zero vector is adopted as the initial vector and the iteration stops when

$$\mathrm{RES} := \sqrt{\frac{\|g - Tx_k - Wy_k\|_2^2 + \|f - Wx_k + Ty_k\|_2^2}{\|g\|_2^2 + \|f\|_2^2}} < 10^{-6},$$

where $[x_k^T, y_k^T]^T$ denotes the k-th iterate.
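In MATLAB the stopping criterion can be evaluated directly, e.g. (our helper lines, matching the formula above):

```matlab
% Our one-line evaluation of RES for the current iterate (xk, yk).
res = sqrt((norm(g - T*xk - W*yk)^2 + norm(f - W*xk + T*yk)^2) ...
           / (norm(g)^2 + norm(f)^2));
```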

The maximum number of iteration steps is set to 3000 for the unpreconditioned GMRES method and to 500 for the preconditioned GMRES methods. It should be emphasized that the sub-linear systems arising from the application of the preconditioners are solved by direct methods. In MATLAB, these sub-systems of linear equations are solved through sparse Cholesky or LU factorization, in combination with AMD or column AMD reordering. The symbol "-" indicates that the method does not reach the required stopping criterion within the maximum number of iterations or runs out of memory. We test three values of the parameter α, namely α = 0.001, α = 0.01 and α = 1.

Example 1

We consider the following complex symmetric linear system, which comes from [13, 24, 26]:

$$[(-\omega^2 M + K) + i(\omega C_V + C_H)]x = b,$$

where M and K are the inertia and stiffness matrices, and $C_V$ and $C_H$ are the viscous and hysteretic damping matrices, respectively; ω is the driving circular frequency, $K = I \otimes V_m + V_m \otimes I$, $V_m = h^{-2}\,\mathrm{tridiag}(-1, 2, -1) \in \mathbb{R}^{m\times m}$ is a tridiagonal matrix, ω = 2π, $h = \frac{1}{m+1}$, $C_V = \frac{1}{2}M$, and $C_H = \mu K$ with μ = 0.02 being a damping coefficient. We choose $W = h^2(-\omega^2 M + K)$, $T = h^2(\omega C_V + C_H)$ and test M = 10I, 15I, 25I, 35I, 50I. In this example, we set m = 32 and the right-hand side $b = \mathcal{A} \ast \mathrm{ones}(2m^2, 1)$.
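The matrices of Example 1 can be assembled in MATLAB as follows (our sketch of the construction described above, using the text's choice $C_V = \frac{1}{2}M$):

```matlab
% Our sketch: assemble W, T and the block system of Example 1.
m  = 32; h = 1/(m+1); omega = 2*pi; mu = 0.02;
I  = speye(m);
Vm = (1/h^2)*spdiags(ones(m,1)*[-1 2 -1], -1:1, m, m);
K  = kron(I, Vm) + kron(Vm, I);
M  = 10*speye(m^2);               % also tested: 15I, 25I, 35I, 50I
Cv = M/2; Ch = mu*K;
W  = h^2*(-omega^2*M + K);
T  = h^2*(omega*Cv + Ch);
A  = [T, W; W, -T];
b  = A*ones(2*m^2, 1);
```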

By choosing different α and M, we list the numerical results of the unpreconditioned GMRES method, the 𝓟2-preconditioned GMRES method, the PSHNS-preconditioned GMRES method and the HSS-preconditioned GMRES method in Table 1. Table 1 indicates that, with suitably chosen parameters, the 𝓟2-preconditioned GMRES method requires fewer iteration steps and less CPU time than the other preconditioned GMRES methods. From Table 1, we also find that the PSHNS preconditioner and the HSS preconditioner are both more sensitive to the parameter α than the preconditioner 𝓟2. Figure 1 shows the eigenvalue distributions of the original coefficient matrix 𝓐, the preconditioned matrix $\mathcal{P}_{\rm HSS}^{-1}\mathcal{A}$ (for α = 0.06), the preconditioned matrix $\mathcal{P}_{\rm PSHNS}^{-1}\mathcal{A}$ (for α = 0.06) and the preconditioned matrix $\mathcal{P}_2^{-1}\mathcal{A}$ (for α = 0.06 and α = 0.006) with M = 35I. From Figure 1, we observe that the eigenvalue distribution of the preconditioned matrix $\mathcal{P}_2^{-1}\mathcal{A}$ is much better than those of the other preconditioned matrices. Moreover, a smaller parameter α generally makes the eigenvalues of the preconditioned matrix $\mathcal{P}_2^{-1}\mathcal{A}$ more clustered.

Fig. 1. The eigenvalue distributions of the unpreconditioned matrix and the preconditioned matrices $\mathcal{P}_{\rm HSS}^{-1}\mathcal{A}$, $\mathcal{P}_{\rm PSHNS}^{-1}\mathcal{A}$ and $\mathcal{P}_2^{-1}\mathcal{A}$ for Example 1 with M = 35I.

Table 1

Numerical results of GMRES and Preconditioned GMRES methods for Example 1

Example 2

We consider the following complex symmetric linear system [25, 26]:

$$\big[\big(T_m \otimes I_m + I_m \otimes T_m - k^2h^2(I_m \otimes I_m)\big) + i\sigma_2(I_m \otimes I_m)\big]x = b,$$

where $T_m = \mathrm{tridiag}(-1, 2, -1)$ is a tridiagonal matrix of order m, k denotes the wavenumber, $\sigma_2 = 0.1$, $h = \frac{1}{m+1}$ and $b = \mathcal{A} \ast \mathrm{ones}(2m^2, 1)$. Here, we choose $W = T_m \otimes I_m + I_m \otimes T_m - k^2h^2(I_m \otimes I_m)$ and $T = \sigma_2(I_m \otimes I_m)$.
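Similarly, Example 2 can be set up as follows (again our own sketch of the construction described above):

```matlab
% Our sketch: assemble W, T and the block system of Example 2.
m  = 32; h = 1/(m+1); k = 20; sigma2 = 0.1;
I  = speye(m);
Tm = spdiags(ones(m,1)*[-1 2 -1], -1:1, m, m);
W  = kron(Tm, I) + kron(I, Tm) - k^2*h^2*speye(m^2);
T  = sigma2*speye(m^2);
A  = [T, W; W, -T];
b  = A*ones(2*m^2, 1);
```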

Table 2 shows that all three preconditioned GMRES methods exhibit much better convergence behavior than the unpreconditioned GMRES method. Meanwhile, the 𝓟2-preconditioned GMRES method gives better numerical results than the PSHNS-preconditioned GMRES method and the HSS-preconditioned GMRES method for the different parameters α, in terms of both iteration steps and CPU time. From Figure 2, we find that the eigenvalue distribution of the preconditioned matrix $\mathcal{P}_2^{-1}\mathcal{A}$ is much better than those of the other preconditioned matrices, and the eigenvalues of the preconditioned matrix $\mathcal{P}_2^{-1}\mathcal{A}$ become more clustered as α decreases.

Fig. 2. The eigenvalue distributions of the unpreconditioned matrix and the preconditioned matrices $\mathcal{P}_{\rm HSS}^{-1}\mathcal{A}$, $\mathcal{P}_{\rm PSHNS}^{-1}\mathcal{A}$ and $\mathcal{P}_2^{-1}\mathcal{A}$ for Example 2 with k = 20, m = 32.

Table 2

Numerical results of GMRES and Preconditioned GMRES methods for Example 2

5 Conclusions

In this paper, we have proposed a relaxed block splitting preconditioner 𝓟2 for the real block two-by-two linear system (3) and analyzed the eigenvalue distribution of the corresponding preconditioned matrix. Meanwhile, the structure of the eigenvectors and an upper bound for the degree of the minimal polynomial of the preconditioned matrix $\mathcal{P}_2^{-1}\mathcal{A}$ were also derived. Numerical experiments illustrate the effectiveness of the preconditioner 𝓟2 for solving the linear system (3).

Acknowledgement

The authors would like to thank the referees for their very detailed comments and suggestions which improved the presentation of this paper greatly.

This work is supported by the National Natural Science Foundation of China No. 11471122 and supported in part by Science and Technology Commission of Shanghai Municipality (No. 18dz2271000).

References

[1] Betts J.-T., Kolmanovsky I., Practical methods for optimal control using nonlinear programming, Appl. Mech. Rev., 2002, 55, 68
[2] Lass O., Vallejos M., Borzi A., Douglas C.C., Implementation and analysis of multigrid schemes with finite elements for elliptic optimal control problems, Computing, 2009, 84, 27-48
[3] Arridge S.R., Optical tomography in medical imaging, Inverse Problems, 1999, 15, R41-R93
[4] Zheng Z., Zhang G.-F., Zhu M.-Z., A note on preconditioners for complex linear systems arising from PDE-constrained optimization problems, Appl. Math. Lett., 2016, 61, 114-121
[5] van Dijk W., Toyama F.M., Accurate numerical solutions of the time-dependent Schrödinger equation, Phys. Rev. E, 2007, 75, 036707-1-036707-10
[6] Benzi M., Bertaccini D., Block preconditioning of real-valued iterative algorithms for complex linear systems, IMA J. Numer. Anal., 2008, 28, 598-618
[7] Feriani A., Perotti F., Simoncini V., Iterative system solvers for the frequency analysis of linear mechanical systems, Comput. Methods Appl. Mech. Engrg., 2000, 190, 1719-1739
[8] Bao G., Sun W.-W., A fast algorithm for the electromagnetic scattering from a large cavity, SIAM J. Sci. Comput., 2005, 27, 553-574
[9] Benzi M., Golub G.H., Liesen J., Numerical solution of saddle point problems, Acta Numer., 2005, 14, 1-137
[10] Saad Y., Iterative Methods for Sparse Linear Systems, SIAM, 2003
[11] Bai Z.-Z., Golub G.H., Ng M.K., Hermitian and skew-Hermitian splitting methods for non-Hermitian positive definite linear systems, SIAM J. Matrix Anal. Appl., 2003, 24, 603-626
[12] Bai Z.-Z., Benzi M., Chen F., Modified HSS iteration methods for a class of complex symmetric linear systems, Computing, 2010, 87, 93-111
[13] Bai Z.-Z., Benzi M., Chen F., On preconditioned MHSS iteration methods for complex symmetric linear systems, Numer. Algorithms, 2011, 56, 297-317
[14] Bai Z.-Z., On preconditioned iteration methods for complex linear systems, J. Eng. Math., 2015, 93, 41-60
[15] Li X., Yang A.-L., Wu Y.-J., Lopsided PMHSS iteration method for a class of complex symmetric linear systems, Numer. Algorithms, 2014, 66, 555-568
[16] Zheng Q.-Q., Ma C.-F., Accelerated PMHSS iteration methods for complex symmetric linear systems, Numer. Algorithms, 2016, 73, 501-516
[17] Hezari D., Edalatpour V., Salkuyeh D.K., Preconditioned GSOR iterative method for a class of complex symmetric system of linear equations, Numer. Linear Algebra Appl., 2014, 22, 761-776
[18] Bai Z.-Z., Several splittings for non-Hermitian linear systems, Sci. China Ser. A: Math., 2008, 51, 1339-1348
[19] Zheng Z., Huang F.-L., Peng Y.-C., Double-step scale splitting iteration method for a class of complex symmetric linear systems, Appl. Math. Lett., 2017, 73, 91-97
[20] Bai Z.-Z., Rotated block triangular preconditioning based on PMHSS, Sci. China Math., 2013, 56, 2523-2538
[21] Bai Z.-Z., Benzi M., Chen F., Wang Z.-Q., Preconditioned MHSS iteration methods for a class of block two-by-two linear systems with applications to distributed control problems, IMA J. Numer. Anal., 2013, 33, 343-369
[22] Lang C., Ren Z.-R., Inexact rotated block triangular preconditioners for a class of block two-by-two matrices, J. Eng. Math., 2015, 93, 87-98
[23] Liang Z.-Z., Zhang G.-F., On SSOR iteration method for a class of block two-by-two linear systems, Numer. Algorithms, 2016, 71, 655-671
[24] Wu S.-L., Several variants of the Hermitian and skew-Hermitian splitting method for a class of complex symmetric linear systems, Numer. Linear Algebra Appl., 2015, 22, 338-356
[25] Zhang J.-H., Dai H., A new splitting preconditioner for the iterative solution of complex symmetric indefinite linear systems, Appl. Math. Lett., 2015, 49, 100-106
[26] Zhang J.-H., Dai H., A new block preconditioner for complex symmetric indefinite linear systems, Numer. Algorithms, 2017, 74, 889-903
[27] Zhang J.-L., Fan H.-T., Gu C.-Q., An improved block splitting preconditioner for complex symmetric indefinite linear systems, Numer. Algorithms, 2018, 77, 451-478
[28] Pan J.-Y., Ng M.K., Bai Z.-Z., New preconditioners for saddle point problems, Appl. Math. Comput., 2006, 172, 762-771
[29] Bai Z.-Z., Golub G.H., Lu L.-Z., Yin J.-F., Block triangular and skew-Hermitian splitting methods for positive-definite linear systems, SIAM J. Sci. Comput., 2005, 26, 844-863
[30] Cao Y., Dong J.-L., Wang Y.-M., A relaxed deteriorated PSS preconditioner for nonsymmetric saddle point problems from the steady Navier-Stokes equation, J. Comput. Appl. Math., 2015, 273, 41-60
[31] Fan H.-T., Zheng B., Zhu X.-Y., A relaxed positive semi-definite and skew-Hermitian splitting preconditioner for non-Hermitian generalized saddle point problems, Numer. Algorithms, 2016, 72, 813-834
[32] Cao Y., Yao L.-Q., Jiang M.-Q., Niu Q., A relaxed HSS preconditioner for saddle point problems from meshfree discretization, J. Comput. Math., 2013, 31, 398-421
[33] Zhang K., Zhang J.-L., Gu C.-Q., A new relaxed PSS preconditioner for nonsymmetric saddle point problems, Appl. Math. Comput., 2017, 308, 115-129
[34] Saad Y., Schultz M.H., GMRES: A generalized minimal residual algorithm for solving nonsymmetric linear systems, SIAM J. Sci. Stat. Comput., 1986, 7, 856-869

About the article

Received: 2017-10-08

Accepted: 2018-03-29

Published Online: 2018-06-07


Citation Information: Open Mathematics, Volume 16, Issue 1, Pages 561–573, ISSN (Online) 2391-5455, DOI: https://doi.org/10.1515/math-2018-0051.


© 2018 Huang and Chen, published by De Gruyter. This work is licensed under the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 License (CC BY-NC-ND 4.0).
