1 Introduction
Consider the following large sparse nonsingular complex symmetric linear system
where A = W + iT, W, T ∈ ℝn×n are symmetric matrices and i = √−1 denotes the imaginary unit. Complex linear systems of the form (1) arise in many areas of scientific computing and engineering, such as the FFT-based solution of certain time-dependent PDEs, distributed control problems, diffuse optical tomography, PDE-constrained optimization problems, quantum mechanics, and so on; see [6,7,8] and the references therein for further applications.
Let u = x + iy and b = f + ig with x, y, f, g ∈ ℝn. Then the complex linear system (1) can be rewritten equivalently as the following real block two-by-two linear system
or as the following equivalent two-by-two block real formulation
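Since the explicit forms of (2) and (3) are not reproduced above, the following sketch uses the standard real equivalent formulation with blocks W, −T, T, W as an assumed stand-in and checks numerically that it recovers the complex solution of (1); the matrices here are arbitrary test data.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5

# Symmetric W and symmetric positive definite T (assumed test data).
W = rng.standard_normal((n, n)); W = (W + W.T) / 2
T = rng.standard_normal((n, n)); T = T @ T.T + n * np.eye(n)

f = rng.standard_normal(n)
g = rng.standard_normal(n)

# Complex symmetric system (1): (W + iT) u = f + ig.
u = np.linalg.solve(W + 1j * T, f + 1j * g)

# One standard real equivalent formulation (assumed form):
# [ W  -T ] [x]   [f]
# [ T   W ] [y] = [g]
A = np.block([[W, -T], [T, W]])
xy = np.linalg.solve(A, np.concatenate([f, g]))
x, y = xy[:n], xy[n:]

# The real solution blocks recover the real/imaginary parts of u.
assert np.allclose(x, u.real) and np.allclose(y, u.imag)
```

Matching real and imaginary parts of (W + iT)(x + iy) = f + ig gives exactly the two block rows Wx − Ty = f and Tx + Wy = g, which is why the real system doubles in size but avoids complex arithmetic.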
Both linear systems (2) and (3) avoid complex arithmetic, but their coefficient matrices are twice the size of A. Since iterative methods are more attractive than direct methods for solving (1) or (2), many efficient iterative methods and their numerical properties have been studied in the literature; see [9,10]. When both W and T are symmetric positive semi-definite and at least one of them is positive definite, several efficient iterative methods have been proposed. Recently, based on the Hermitian and skew-Hermitian splitting (HSS) method, Bai et al. developed the modified HSS (MHSS) iteration method for the complex linear system (1), which converges for any positive constant α. To accelerate the convergence of the MHSS iteration method, Bai et al. introduced the preconditioned MHSS (PMHSS) method, which is unconditionally convergent and exhibits h-independent convergence behavior. Bai further analyzed the algebraic and convergence properties of the PMHSS iteration method for solving complex linear systems. Moreover, other effective iterative methods exist, such as the lopsided PMHSS (LPMHSS) iteration method, the accelerated PMHSS (APMHSS) iteration method, the preconditioned generalized SOR (PGSOR) iterative method, the skew-normal splitting (SNS) method, and the double-step scale splitting (DSS) iteration method [15,16,17,18,19].
If the matrices W and T are symmetric positive semi-definite, several efficient preconditioners have been presented. Bai proposed the rotated block triangular (RBT) preconditioners, based on the PMHSS preconditioning matrix, for the linear system (2). Lang and Ren established the inexact RBT (IRBT) preconditioners. For solving the linear system (2), Liang and Zhang developed the symmetric SOR (SSOR) method.
However, when W ∈ ℝn×n is symmetric indefinite and T ∈ ℝn×n is symmetric positive definite, the matrices αI + W, αV + W and αW + T² may be indefinite or singular, which can lead to slow convergence of the MHSS method, the PMHSS method and the SNS method. To overcome these problems, Wu introduced the Hermitian normal splitting (HNS) method and its simplified variant, the SHNS method, for solving the complex symmetric linear system (1). Zhang and Dai presented a preconditioned SHNS (PSHNS) iteration method and constructed a new preconditioner. In this paper, we solve the linear system (3) with W symmetric indefinite and T symmetric positive definite. For the real formulation (3), Zhang and Dai established a new block preconditioner and analyzed the spectral properties of the corresponding preconditioned matrix. An improved block (IB) splitting preconditioner was proposed by Zhang et al., who gave the convergence property of the corresponding iteration method under suitable conditions.
In this paper, using a relaxation technique, we construct a relaxed block splitting preconditioner for the real block two-by-two linear system (3). The relaxed splitting preconditioner is much closer to the original coefficient matrix than the existing block preconditioners. We analyze the spectral properties of the corresponding preconditioned matrix and show that it has the eigenvalue 1 with algebraic multiplicity at least n. The structure of the eigenvectors and the dimension of the Krylov subspace for the preconditioned matrix are also derived.
The organization of the paper is as follows. In Section 2, a relaxed block splitting preconditioner is presented. In Section 3, we study the distribution of eigenvalues, the form of the eigenvectors and the dimension of the Krylov subspace of the corresponding preconditioned matrix. In Section 4, some numerical examples are presented to compare the new preconditioner with the PSHNS preconditioner and the HSS preconditioner. Finally, we give some brief concluding remarks in Section 5.
2 A new relaxed splitting preconditioner
In this section, by employing a relaxation technique, we establish a relaxed block splitting preconditioner for solving the linear system (3).
Pan et al. employed the positive-definite and skew-Hermitian splitting (PSS) iteration method to develop a PSS preconditioner for saddle point problems. For the linear system (3), the coefficient matrix 𝓐 can be split as follows
Recently, based on the PSS preconditioner, Zhang and Dai presented the following preconditioner
Then the difference between the preconditioner 𝓟1 and the coefficient matrix 𝓐 is given as follows
A general criterion for an efficient preconditioner is that it should be as close as possible to the coefficient matrix 𝓐, so that the preconditioned matrix has a clustered spectrum, while remaining easy to implement. Inspired by the relaxation techniques in [30,31,32,33], we propose the following relaxed splitting preconditioner
It is easy to see that the (2, 1)-block in (8) differs from that in (6), and that the (1, 1)- and (2, 2)-blocks in (8) now vanish, which indicates that the preconditioner 𝓟2 is much closer to the coefficient matrix 𝓐 than the preconditioner 𝓟1.
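The explicit forms of 𝓟1 and 𝓟2 in (6)-(8) are not reproduced above, so the following generic sketch only illustrates the stated criterion: the closer a preconditioner P is to 𝓐, the more tightly the spectrum of P−1𝓐 clusters around 1. The matrices here are arbitrary test data, not the matrices of this paper.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50

# A generic diagonally dominant test matrix (illustration only).
A = rng.standard_normal((n, n)) + n * np.eye(n)

# A "rough" preconditioner: only the diagonal of A.
P_rough = np.diag(np.diag(A))
# A "close" preconditioner: A plus a small perturbation,
# mimicking a splitting whose remainder is small.
P_close = A + 0.01 * rng.standard_normal((n, n))

def spread(P):
    """Maximum distance of the eigenvalues of P^{-1} A from 1."""
    eigs = np.linalg.eigvals(np.linalg.solve(P, A))
    return np.max(np.abs(eigs - 1.0))

# The closer preconditioner clusters the spectrum much more tightly.
assert spread(P_close) < spread(P_rough)
```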
From (8), we know that the preconditioner 𝓟2 can be established by the following splitting of the coefficient matrix 𝓐
In the following section, we analyze the spectral properties of the preconditioned matrix 𝓟2−1𝓐.
3 Spectral analysis of the preconditioned matrix
In this section, we mainly study the spectral properties of the preconditioned matrix 𝓟2−1𝓐 and give an upper bound on the dimension of the Krylov subspace for this preconditioned matrix. To this end, we first need the inverse of the matrix 𝓟2; its explicit form is given in the following lemma, whose proof consists of straightforward calculations and is omitted here.
Here 𝓟2 has the following block-triangular factorization
Then we obtain
In the following, we analyze the eigenvalue distribution of the preconditioned matrix 𝓟2−1𝓐.
Assume that the coefficient matrix 𝓐 is nonsingular, W ∈ ℝn×n is symmetric indefinite and T ∈ ℝn×n is symmetric positive definite. Let α be a positive real constant and U = WT−1W, and let the preconditioner 𝓟2 be defined as in (7). Then the preconditioned matrix 𝓟2−1𝓐 has the eigenvalue 1 with multiplicity at least n, and the remaining eigenvalues are λi, i = 1, 2, ⋯, n, where λi (i = 1, 2, ⋯, n) are the eigenvalues of the matrix αT−1(αI + U)−1(T + U).
It follows from U = WT−1W that
Hence, we get
The results then follow immediately from (11). □
The detailed implementation of the preconditioning process is described as follows. When applying the preconditioner 𝓟2 within a Krylov subspace method, the following system must be solved at each step
Then, we can derive the implementing process of the preconditioner 𝓟2 as follows.
(The preconditioner 𝓟2). For a given vector r, the vector z can be computed by (13) from the following steps:
Compute t2 = r1 + Wz2;
Solve Tz1 = t2;
Set the generalized residual vector
It can be seen from Algorithm 3.3 that two sub-linear systems, with coefficient matrices T + W² and T, need to be solved at each iteration. Under the assumptions on the matrices W and T in Section 1, both T + W² and T are symmetric positive definite. Hence, the two sub-linear systems (T + W²)z2 = t1 and Tz1 = t2 can be solved by sparse Cholesky decomposition, the conjugate gradient (CG) method, or the preconditioned CG method. To demonstrate the effectiveness of the preconditioner 𝓟2, we compare it with the PSHNS preconditioner and the HSS preconditioner, which are given as follows:
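A minimal sketch of the two subsystem solves named above, with assumed tridiagonal test matrices (W symmetric indefinite, T symmetric positive definite) and scipy's sparse LU standing in for the sparse Cholesky factorization, which scipy does not provide directly:

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

rng = np.random.default_rng(2)
n = 200

# T symmetric positive definite, W symmetric indefinite (assumed test data).
T = sp.diags([-1.0, 4.0, -1.0], [-1, 0, 1], shape=(n, n), format="csc")
W = sp.diags([1.0, -0.5, 1.0], [-1, 0, 1], shape=(n, n), format="csc")

# Both subsystem matrices are symmetric positive definite here:
# T + W^2 (W symmetric, so W^2 is positive semi-definite) and T.
TW2 = (T + W @ W).tocsc()

t1 = rng.standard_normal(n)

# Option 1: factor once with a sparse direct method, reuse per application.
lu = spla.splu(TW2)
z2_direct = lu.solve(t1)

# Option 2: the conjugate gradient method, matrix-free friendly.
z2_cg, info = spla.cg(TW2, t1)
assert info == 0
assert np.allclose(z2_direct, z2_cg, atol=1e-3)

# Second subsystem with T, following t2 = r1 + W z2 from Algorithm 3.3.
r1 = rng.standard_normal(n)
t2 = r1 + W @ z2_direct
z1 = spla.splu(T).solve(t2)
assert np.allclose(T @ z1, t2)
```

Factoring T + W² and T once and reusing the factors at every GMRES step is what makes the direct-solve variant attractive in practice.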
Then, the implementations of the PSHNS preconditioner and the HSS preconditioner with GMRES are described as follows.
(The PSHNS Preconditioner). For a given vector r, the vector z can be computed from the following steps:
Solve (αV + iW)d = 2αr;
Solve (αT + WV–1W)z = Wd.
(The HSS Preconditioner). For a given vector r, the vector z can be computed from the following steps:
Solve (αI + T)ν1 = r1;
Solve (αI + T)ν2 = r2;
Compute μ1 = αν2 – Wν1;
Compute z1 =
Set the generalized residual vector
From Algorithm 3.4, we see that each application of the PSHNS preconditioner involves the complex coefficient matrix αV + iW, whose corresponding linear subsystem may be difficult to solve. Meanwhile, Algorithm 3.5 shows that each application of the HSS preconditioner requires the solution of three symmetric positive definite linear subsystems.
In the following, we discuss the eigenvector distribution of the preconditioned matrix 𝓟2−1𝓐 in detail.
Let the preconditioner 𝓟2 be defined as in (7). Then the preconditioned matrix 𝓟2−1𝓐 has n + i + j (0 ≤ i + j ≤ n) linearly independent eigenvectors, described as follows:
n eigenvectors that correspond to the eigenvalue 1, where vk (k = 1, 2, ⋯, n) are arbitrary linearly independent vectors.
i (0 ≤ i ≤ n) eigenvectors that correspond to the eigenvalue 1, where the remaining components are arbitrary vectors.
j (0 ≤ j ≤ n) eigenvectors that correspond to the eigenvalues λk ≠ 1, where
Let λ be an eigenvalue of the preconditioned matrix 𝓟2−1𝓐 and let the corresponding eigenvector be partitioned conformally. Then
By simple calculation, we have
If λ = 1, Eq. (14) becomes
Then we have the following two situations.
When u = 0, Eq. (15) holds trivially. Hence, there are n linearly independent eigenvectors (k = 1, 2, ⋯, n) corresponding to the eigenvalue 1, where vk (k = 1, 2, ⋯, n) are arbitrary linearly independent vectors.
When u ≠ 0, a simple calculation with (15) gives Tu = αu. Then there are i (0 ≤ i ≤ n) linearly independent eigenvectors corresponding to the eigenvalue 1, where the remaining components are arbitrary vectors.
Next, we consider the case λ ≠ 1. It follows from (14) that the results in case (3) hold.
Finally, we show that the n + i + j eigenvectors are linearly independent. Let c = [c1, c2, ⋯, cn]ᵀ, c1 = [c11, c12, ⋯, c1i]ᵀ and c2 = [c21, c22, ⋯, c2j]ᵀ be three vectors with 0 ≤ i, j ≤ n. Then, we have to show that
holds if and only if the vectors c, c1 and c2 are all zero, where the first matrix consists of the eigenvectors corresponding to the eigenvalue 1 in case (1), the second consists of those in case (2), and the third consists of the eigenvectors corresponding to λ ≠ 1 in case (3). Multiplying both sides of Eq. (16) on the left by the matrix 𝓟2−1𝓐 − I, we get
Because λk ≠ 1 and the corresponding eigenvectors are linearly independent, we know that c2k = 0 (k = 1, ⋯, j). Thus, Eq. (16) reduces to
As the vectors in case (2) (k = 1, ⋯, i) are also linearly independent, we have c1k = 0 (k = 1, ⋯, i). Hence, the above equation becomes
Since vk (k = 1, ⋯, n) are linearly independent, we have ck = 0 (k = 1, ⋯, n). Therefore, the n + i + j eigenvectors are linearly independent. The proof of this theorem is completed. □
Based on Krylov subspace theory, an iterative method with an optimality property (such as the GMRES method) terminates as soon as the degree of the minimal polynomial of the preconditioned matrix is attained. Next, we give an upper bound for the degree of the minimal polynomial of the preconditioned matrix 𝓟2−1𝓐.
Assume that the conditions of Theorem 3.2 are satisfied and let the preconditioner 𝓟2 be defined as in (7). Then the degree of the minimal polynomial of the preconditioned matrix 𝓟2−1𝓐 is at most n + 1. Thus, the dimension of the Krylov subspace 𝓚(𝓟2−1𝓐, b) is at most n + 1.
From (11), we know that the preconditioned matrix 𝓟2−1𝓐 can be expressed as
where θ1 = αT−1(αI + U)−1(T + U). Let μi (i = 1, ⋯, n) be the eigenvalues of the matrix θ1. The characteristic polynomial of the preconditioned matrix is
Because μi (i = 1, ⋯, n) are the eigenvalues of θ1, the Cayley-Hamilton theorem gives
Therefore, the degree of the minimal polynomial of the preconditioned matrix 𝓟2−1𝓐 is at most n + 1. Since the degree of the minimal polynomial equals the dimension of the corresponding Krylov subspace, the dimension of 𝓚(𝓟2−1𝓐, b) is also at most n + 1. This completes the proof. □
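The Cayley-Hamilton argument above can be checked numerically. The sketch below assumes the block upper-triangular shape suggested by (11), with random data standing in for θ1 and the off-diagonal block, and verifies that (M − I) ∏i (M − μiI) vanishes, so the minimal polynomial of M has degree at most n + 1.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 4

# Mimic the block-triangular structure of the preconditioned matrix:
# an identity block of size n and a general n-by-n block theta1
# (random data, purely for illustration).
theta1 = rng.standard_normal((n, n))
theta2 = rng.standard_normal((n, n))
M = np.block([[np.eye(n), theta2],
              [np.zeros((n, n)), theta1]])

# (M - I) * prod_i (M - mu_i I) = 0, where mu_i are the eigenvalues
# of theta1, so the minimal polynomial of M has degree at most n + 1.
mus = np.linalg.eigvals(theta1)
P = M - np.eye(2 * n)
for mu in mus:
    P = P @ (M - mu * np.eye(2 * n))

assert np.linalg.norm(P) < 1e-8
```

In exact arithmetic the product is exactly zero; the small tolerance only absorbs floating-point roundoff.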
4 Numerical experiments
In this section, some numerical experiments are presented to illustrate the effectiveness of the preconditioner 𝓟2 for the linear system (3). We compare the performance of the preconditioner 𝓟2 with that of the PSHNS preconditioner and the HSS preconditioner. We denote the number of iteration steps by "Iter", the elapsed CPU time in seconds by "CPU", and the relative residual norm by "RES". Unless otherwise specified, we use left preconditioning with the GMRES method. All computations are implemented in MATLAB on a PC with an Intel(R) Core(TM) i5 CPU at 2.50 GHz and 4.00 GB of memory.
In our implementations, the zero vector is adopted as the initial vector and the iteration stops when
The maximum number of iteration steps is set to 3000 for the unpreconditioned GMRES method and to 500 for the preconditioned GMRES methods. It should be emphasized that the sub-linear systems arising from the application of the preconditioners are solved by direct methods. In MATLAB, these sub-systems are solved by sparse Cholesky or LU factorization combined with AMD or column AMD reordering. The symbol "-" indicates that the method does not reach the required stopping criterion within the maximum number of iterations or runs out of memory. We test three values of the parameter α, namely α = 0.001, α = 0.01 and α = 1.
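A minimal sketch of such a preconditioned GMRES run (scipy instead of MATLAB; the tridiagonal test matrix and the incomplete-LU factorization are illustrative stand-ins for the systems and factorizations described above):

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

rng = np.random.default_rng(4)
n = 400

# A generic sparse SPD test system standing in for (3) (illustration only).
A = sp.diags([-1.0, 4.0, -1.0], [-1, 0, 1], shape=(n, n), format="csc")
b = rng.standard_normal(n)

# Factor the preconditioner once (incomplete LU here, playing the role
# of the sparse Cholesky/LU solves in the text) and hand it to GMRES
# as a LinearOperator applying the preconditioner at each step.
ilu = spla.spilu(A, drop_tol=1e-4)
M = spla.LinearOperator((n, n), matvec=ilu.solve)

x, info = spla.gmres(A, b, M=M, restart=50)

assert info == 0
# The returned solution has a small relative residual.
assert np.linalg.norm(b - A @ x) / np.linalg.norm(b) < 1e-4
```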
Here M and K are the inertia and stiffness matrices, and Cv and CH are the viscous and hysteretic damping matrices, respectively; ω is the driving circular frequency, K = I ⊗ Vm + Vm ⊗ I, Vm = h−2 tridiag(−1, 2, −1) ∈ ℝm×m is a tridiagonal matrix, ω = 2π, and μ = 0.02 is a damping coefficient. We choose W = h2(−ω2M + K) and T = h2(ωCv + CH), and test M = 10I, 15I, 25I, 35I, 50I. In this example, we set m = 32 and the right-hand side b = 𝓐 * ones(2m2, 1).
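To make the construction concrete, the following sketch assembles W and T for this example with scipy. The mesh size h = 1/(m + 1) and the choices Cv = 10I and CH = μK are assumptions (they are not specified in the text above), used only so the sketch is runnable; the sanity checks confirm the regime of the paper, namely W symmetric indefinite and T symmetric positive definite.

```python
import numpy as np
import scipy.sparse as sp

m = 32
h = 1.0 / (m + 1)      # assumed mesh size (not stated explicitly)
omega = 2 * np.pi
mu = 0.02

I = sp.eye(m, format="csr")
Vm = (1.0 / h**2) * sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(m, m))
K = sp.kron(I, Vm) + sp.kron(Vm, I)       # K = I (x) Vm + Vm (x) I
In2 = sp.eye(m * m, format="csr")

M = 10.0 * In2                            # one of the tested choices of M
Cv = 10.0 * In2                           # assumed viscous damping matrix
CH = mu * K                               # assumed hysteretic damping matrix

W = h**2 * (-omega**2 * M + K)
T = h**2 * (omega * Cv + CH)

# Sanity checks: W is symmetric indefinite, T is symmetric positive definite.
w_eigs = np.linalg.eigvalsh(W.toarray())
assert w_eigs[0] < 0 < w_eigs[-1]
assert np.linalg.eigvalsh(T.toarray())[0] > 0
```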
By choosing different α and M, we list in Table 1 the numerical results of the unpreconditioned GMRES method, the 𝓟2-preconditioned GMRES method, the PSHNS-preconditioned GMRES method and the HSS-preconditioned GMRES method. Table 1 indicates that, with suitable parameters, the 𝓟2-preconditioned GMRES method requires fewer iteration steps and less CPU time than the other preconditioned GMRES methods. We also find from Table 1 that the PSHNS preconditioner and the HSS preconditioner are both more sensitive to the parameter α than the preconditioner 𝓟2. Figure 1 shows the eigenvalue distributions of the original coefficient matrix 𝓐 and of the 𝓟2-, PSHNS- and HSS-preconditioned matrices (for α = 0.06 and α = 0.006) with M = 35I. From Figure 1, we observe that the eigenvalue distribution of the 𝓟2-preconditioned matrix is much better than those of the other preconditioned matrices. Moreover, a smaller parameter α generally makes the eigenvalues of the preconditioned matrix more clustered.
where Tm = tridiag(−1, 2, −1) is a tridiagonal matrix of order m, k denotes the wavenumber, σ2 = 0.1, and b = 𝓐 * ones(2m2, 1). Here, we choose W = Tm ⊗ Im + Im ⊗ Tm − k2h2(Im ⊗ Im) and T = σ2(Im ⊗ Im).
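As in the previous example, this construction can be sketched directly with Kronecker products. The mesh size h = 1/(m + 1) and the wavenumber k = 10 are assumed values, since neither is fixed in the text above; the check confirms that this choice of k makes W symmetric indefinite while T is trivially symmetric positive definite.

```python
import numpy as np
import scipy.sparse as sp

m = 32
h = 1.0 / (m + 1)    # assumed mesh size (not stated)
k = 10.0             # assumed wavenumber (not stated)
sigma2 = 0.1

Im = sp.eye(m, format="csr")
Tm = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(m, m))

# W = Tm (x) Im + Im (x) Tm - k^2 h^2 (Im (x) Im), T = sigma^2 (Im (x) Im).
W = sp.kron(Tm, Im) + sp.kron(Im, Tm) - k**2 * h**2 * sp.eye(m * m)
T = sigma2 * sp.eye(m * m)

# For this k, the shift k^2 h^2 pushes the smallest eigenvalue below zero.
evals = np.linalg.eigvalsh(W.toarray())
assert evals[0] < 0 < evals[-1]
```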
Table 2 shows that all three preconditioned GMRES methods exhibit much better convergence behavior than the unpreconditioned GMRES method. Meanwhile, the 𝓟2-preconditioned GMRES method gives better numerical results than the PSHNS-preconditioned and HSS-preconditioned GMRES methods for the different parameters α, in terms of both iteration steps and CPU time. From Figure 2, we find that the eigenvalue distribution of the 𝓟2-preconditioned matrix is much better than those of the other preconditioned matrices, and that its eigenvalues become more clustered as α decreases.
5 Concluding remarks
In this paper, we have proposed a relaxed block splitting preconditioner 𝓟2 for the real block two-by-two linear system (3) and analyzed the eigenvalue distribution of the corresponding preconditioned matrix. Meanwhile, the structure of the eigenvectors and an upper bound for the degree of the minimal polynomial of the preconditioned matrix were also derived. Some numerical experiments were given to illustrate the effectiveness of the preconditioner 𝓟2 for solving the linear system (3).
The authors would like to thank the referees for their very detailed comments and suggestions which improved the presentation of this paper greatly.
This work is supported by the National Natural Science Foundation of China No. 11471122 and supported in part by Science and Technology Commission of Shanghai Municipality (No. 18dz2271000).
Lass O., Vallejos M., Borzi A., Douglas C. C., Implementation and analysis of multigrid schemes with finite elements for elliptic optimal control problems, Computing, 2009, 84, 27-48
Zheng Z., Zhang G.-F., Zhu M.-Z., A note on preconditioners for complex linear systems arising from PDE-constrained optimization problems, Appl. Math. Lett., 2016, 61, 114-121
van Dijk W., Toyama F. M., Accurate numerical solutions of the time-dependent Schrödinger equation, Phys. Rev. E, 2007, 75, 036707-1-036707-10
Saad Y., Iterative Methods for Sparse Linear Systems, SIAM, 2003
Bai Z.-Z., Golub G. H., Ng M. K., Hermitian and skew-Hermitian splitting methods for non-Hermitian positive definite linear systems, SIAM J. Matrix Anal. Appl., 2003, 24, 603-626
Hezari D., Edalatpour V., Salkuyeh D. K., Preconditioned GSOR iterative method for a class of complex symmetric system of linear equations, Numer. Linear Algebra Appl., 2014, 22, 761-776
Zheng Z., Huang F.-L., Peng Y.-C., Double-step scale splitting iteration method for a class of complex symmetric linear systems, Appl. Math. Lett., 2017, 73, 91-97
Bai Z.-Z., Benzi M., Chen F., Wang Z.-Q., Preconditioned MHSS iteration methods for a class of block two-by-two linear systems with applications to distributed control problems, IMA J. Numer. Anal., 2013, 33, 343-369
Wu S.-L., Several variants of the Hermitian and skew-Hermitian splitting method for a class of complex symmetric linear systems, Numer. Linear Algebra Appl., 2015, 22, 338-356
Zhang J.-H., Dai H., A new splitting preconditioner for the iterative solution of complex symmetric indefinite linear systems, Appl. Math. Lett., 2015, 49, 100-106
Zhang J.-L., Fan H.-T., Gu C.-Q., An improved block splitting preconditioner for complex symmetric indefinite linear systems, Numer. Algorithms, 2018, 77, 451-478
Cao Y., Dong J.-L., Wang Y.-M., A relaxed deteriorated PSS preconditioner for nonsymmetric saddle point problems from the steady Navier-Stokes equation, J. Comput. Appl. Math., 2015, 273, 41-60
Fan H.-T., Zheng B., Zhu X.-Y., A relaxed positive semi-definite and skew-Hermitian splitting preconditioner for non-Hermitian generalized saddle point problems, Numer. Algor., 2016, 72, 813-834
Published Online: 2018-06-07
Citation Information: Open Mathematics, Volume 16, Issue 1, Pages 561–573, ISSN (Online) 2391-5455, DOI: https://doi.org/10.1515/math-2018-0051.
© 2018 Huang and Chen, published by De Gruyter. This work is licensed under the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 License (CC BY-NC-ND 4.0).