A relaxed block splitting preconditioner for complex symmetric indefinite linear systems

In this paper, we propose a relaxed block splitting preconditioner for a class of complex symmetric indefinite linear systems to accelerate the convergence rate of the Krylov subspace iteration method; the relaxed preconditioner is much closer to the original block two-by-two coefficient matrix. We study the spectral properties and the eigenvector distribution of the corresponding preconditioned matrix. In addition, the degree of the minimal polynomial of the preconditioned matrix is derived. Finally, some numerical experiments are presented to illustrate the effectiveness of the relaxed splitting preconditioner.


Introduction
Consider the following large sparse nonsingular complex symmetric linear system

Au = b,   (1)

where A = W + iT, W, T ∈ R^{n×n} are symmetric matrices and i = √−1 denotes the imaginary unit. Complex linear systems of the form (1) arise in many areas of scientific computing and engineering applications, such as FFT-based solution of certain time-dependent PDEs [1], distributed control problems [2], diffuse optical tomography [3], PDE-constrained optimization problems [4], quantum mechanics [5], and so on; see [6][7][8] and the references therein for other applications.
Let u = x + iy and b = f + ig with x, y, f, g ∈ R^n. Then the complex linear system (1) can be equivalently rewritten as the real block two-by-two linear system

[ W  −T ; T  W ] [ x ; y ] = [ f ; g ],   (2)

or as the following two-by-two block real equivalent formulation

[ T  −W ; W  T ] [ y ; x ] = [ −f ; g ].   (3)

The linear systems (2) and (3) both avoid complex arithmetic, but their coefficient matrices are doubled in size. Because it is more attractive to use iterative methods rather than direct methods for solving (1), (2) or (3), many efficient iterative methods, together with their numerical properties, have been studied in the literature; see [9,10]. When both W and T are symmetric positive semi-definite with at least one of them positive definite, several efficient iterative methods have been proposed. Recently, based on the Hermitian and skew-Hermitian splitting (HSS) method [11], Bai et al. [12] developed the modified HSS (MHSS) iteration method for the complex linear system (1), which is convergent for any positive constant α. To accelerate the convergence rate of the MHSS iteration method, Bai et al. [13] introduced the preconditioned MHSS (PMHSS) method, which is unconditionally convergent and shows h-independent convergence behavior. In [14], Bai further analyzed algebraic and convergence properties of the PMHSS iteration method for solving complex linear systems. Moreover, there are also other effective iterative methods, such as the lopsided PMHSS (LPMHSS) iteration method, the accelerated PMHSS (APMHSS) iteration method, the preconditioned generalized SOR (PGSOR) iterative method, the skew-normal splitting (SNS) method, and the double-step scale splitting (DSS) iteration method [15][16][17][18][19].
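As a quick sanity check (not part of the paper), the equivalence between the complex form (1) and a real block two-by-two form can be verified numerically. The matrices W, T and the vectors f, g below are random stand-ins, with T made symmetric positive definite:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
# Symmetric stand-ins for W and T (purely illustrative).
W = rng.standard_normal((n, n)); W = (W + W.T) / 2
T = rng.standard_normal((n, n)); T = T @ T.T + np.eye(n)  # symmetric positive definite

f = rng.standard_normal(n); g = rng.standard_normal(n)

# Complex system (W + iT)(x + iy) = f + ig.
u = np.linalg.solve(W + 1j * T, f + 1j * g)
x, y = u.real, u.imag

# Real block two-by-two form: [[W, -T], [T, W]] [x; y] = [f; g].
A2 = np.block([[W, -T], [T, W]])
xy = np.linalg.solve(A2, np.concatenate([f, g]))

assert np.allclose(xy, np.concatenate([x, y]))
```

Expanding (W + iT)(x + iy) = (Wx − Ty) + i(Tx + Wy) gives exactly the two block rows of the real system, which is what the assertion checks.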
If the matrices W and T are symmetric positive semi-definite, several efficient preconditioners have been presented. Bai [20] proposed the rotated block triangular (RBT) preconditioners, which are based on the PMHSS preconditioning matrix [21], for the linear system (2). Lang and Ren [22] established the inexact RBT (IRBT) preconditioners. For solving the linear system (2), Liang and Zhang [23] developed the symmetric SOR (SSOR) method.
However, when W ∈ R^{n×n} is symmetric indefinite and T ∈ R^{n×n} is symmetric positive definite, the matrices αI + W, αV + W and αW + T are indefinite or singular, which may lead to slow convergence of the MHSS method, the PMHSS method and the SNS method. To overcome these problems, the Hermitian normal splitting (HNS) method and its variant, the simplified HNS (SHNS) method, were introduced by Wu in [24] for solving the complex symmetric linear system (1). Zhang and Dai [25] presented a preconditioned SHNS (PSHNS) iteration method and constructed a new preconditioner. In this paper, we solve the linear system (3) with W symmetric indefinite and T symmetric positive definite. For the real formulation (3), Zhang and Dai [26] established a new block preconditioner and analyzed the spectral properties of the corresponding preconditioned matrix. An improved block (IB) splitting preconditioner was proposed by Zhang et al. in [27], who also gave the convergence property of the corresponding iteration method under suitable conditions. In this paper, by using a relaxation technique, we construct a relaxed block splitting preconditioner for the real block two-by-two linear system (3). The relaxed splitting preconditioner is much closer to the original coefficient matrix than the block preconditioner in [26]. We analyze the spectral properties of the corresponding preconditioned matrix and show that it has the eigenvalue 1 with algebraic multiplicity at least n. The structure of the eigenvectors and the dimension of the Krylov subspace for the preconditioned matrix are also derived.
The organization of the paper is as follows. In Section 2, a relaxed block splitting preconditioner is presented. In Section 3, we study the distribution of eigenvalues, the form of the eigenvectors and the dimension of the Krylov subspace of the corresponding preconditioned matrix. In Section 4, some numerical examples are presented to compare the new preconditioner with the PSHNS preconditioner and the HSS preconditioner. Finally, we give some brief concluding remarks in Section 5.

A new relaxed splitting preconditioner
In this section, by employing a relaxation technique, we establish a relaxed block splitting preconditioner for solving the linear system (3).
Pan et al. [28] employed the positive-definite and skew-Hermitian splitting (PSS) iteration method of [29] to develop a PSS preconditioner for saddle point problems. For the linear system (3), the coefficient matrix A can be split as

A = J + K,   where   J = [ T  0 ; 0  T ]   and   K = [ 0  −W ; W  0 ].   (4)

Recently, based on the PSS preconditioner, Zhang and Dai presented a block preconditioner for (3) in [26], given in (5); the difference between this preconditioner and the coefficient matrix A is given in (6). A general criterion for an efficient preconditioner is that it should be as close as possible to the coefficient matrix A, so that the preconditioned matrix has a clustered spectrum, while remaining easy to implement. Inspired by the relaxation techniques in [30][31][32][33], we propose a relaxed splitting preconditioner P, defined in (7). From (3) and (7), the difference between the preconditioner P and the coefficient matrix A is given in (8). It is easy to see that the (2,1)-block in (8) differs from that in (6), while the (1,1)- and (2,2)-blocks in (8) vanish, which indicates that the preconditioner P is much closer to the coefficient matrix A than the preconditioner of [26]. From (8), the preconditioner P can also be obtained from the splitting (9) of the coefficient matrix A. In the following section, we analyze the spectral properties of the preconditioned matrix P^{-1}A.
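The PSS-type splitting can be illustrated numerically. The block form of A used below is a reconstruction consistent with this splitting (an assumption, since the displayed formulation is not reproduced here), and the matrices are random stand-ins:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4
W = rng.standard_normal((n, n)); W = (W + W.T) / 2          # symmetric (generically indefinite)
T = rng.standard_normal((n, n)); T = T @ T.T + np.eye(n)    # symmetric positive definite

# Assumed real block form of A with the PSS-type splitting A = J + K.
Z = np.zeros((n, n))
A = np.block([[T, -W], [W, T]])
J = np.block([[T, Z], [Z, T]])   # symmetric positive definite part
K = A - J                        # skew-symmetric part [[0, -W], [W, 0]]

assert np.allclose(A, J + K)
assert np.allclose(K, -K.T)               # K is skew-symmetric
assert np.all(np.linalg.eigvalsh(J) > 0)  # J inherits positive definiteness from T
```

The assertions confirm the two defining properties of the splitting: J is symmetric positive definite and K is skew-symmetric.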

Spectral analysis of the preconditioned matrix
In this section, we mainly study the spectral properties of the preconditioned matrix P^{-1}A and give an upper bound on the dimension of the Krylov subspace for this preconditioned matrix. To this end, we first need the inverse of the matrix P; the explicit form of P^{-1} is given in the following lemma, whose proof consists of straightforward calculations and is omitted here.
Lemma 3.1. The preconditioner P defined in (7) admits a block-triangular factorization, from which the explicit form of P^{-1} follows.

In the following, we analyze the eigenvalue distribution of the preconditioned matrix P^{-1}A.

Theorem 3.2. Assume that the coefficient matrix A is nonsingular, W ∈ R^{n×n} is symmetric indefinite and T ∈ R^{n×n} is symmetric positive definite. Let α be a positive real constant and U = WT^{-1}W, and let the preconditioner P be defined as in (7). Then the preconditioned matrix P^{-1}A has the eigenvalue 1 with algebraic multiplicity at least n, and the remaining eigenvalues are λ_i, i = 1, 2, ⋯, n, where the λ_i are the eigenvalues of the n × n matrix appearing in (11).

Proof. From (9) and Lemma 3.1, a direct computation gives the expression (11) for the preconditioned matrix P^{-1}A. The assertions then follow immediately from (11). □
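The clustering statement of Theorem 3.2 can be checked numerically on a small example. Since the exact expression (7) is not reproduced here, the sketch below uses a hypothetical relaxed preconditioner P = A + [0 0; αW 0], an assumed stand-in that shares the key structural property that P − A has only a nonzero (2,1) block:

```python
import numpy as np

rng = np.random.default_rng(0)
n, alpha = 20, 0.1
W = rng.standard_normal((n, n)); W = (W + W.T) / 2        # symmetric, generically indefinite
T = rng.standard_normal((n, n)); T = T @ T.T + np.eye(n)  # symmetric positive definite

A = np.block([[T, -W], [W, T]])
# Hypothetical relaxed preconditioner: P - A = [[0, 0], [alpha*W, 0]] has only a
# nonzero (2,1) block, mimicking the structure described for (7)-(8).
P = np.block([[T, -W], [(1 + alpha) * W, T]])

evals = np.linalg.eigvals(np.linalg.solve(P, A))
# P - A has rank at most n, so at least n eigenvalues of P^{-1}A equal 1.
assert np.sum(np.abs(evals - 1) < 1e-6) >= n
```

Because P^{-1}A = I − P^{-1}(P − A) and the correction has rank at most n, at least n eigenvalues sit exactly at 1, as the assertion verifies.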
The detailed implementation of the preconditioning process is described as follows. When the preconditioner P is applied within a Krylov subspace method, the linear system (12), Pz = r, has to be solved at each step, where z = [z_1^T, z_2^T]^T and r = [r_1^T, r_2^T]^T are the current and generalized residual vectors, respectively, with z_1, z_2, r_1, r_2 ∈ R^n. Based on Lemma 3.1 and (12), we obtain (13), from which the implementation of the preconditioner P follows as Algorithm 3.3. It can be seen from Algorithm 3.3 that two sub-linear systems have to be solved at each iteration: one with coefficient matrix T, and one whose coefficient matrix combines T, W and α as in (13). Based on the assumptions on the matrices W and T in Section 1, both coefficient matrices are symmetric positive definite. Hence, the two sub-linear systems can be solved by the sparse Cholesky factorization, the conjugate gradient (CG) method, or the preconditioned CG method. To demonstrate the effectiveness of the preconditioner P, we compare it with the PSHNS preconditioner and the HSS preconditioner; their implementations within GMRES are described in Algorithms 3.4 and 3.5, respectively. From Algorithm 3.4, we can see that each application of the PSHNS preconditioner involves the complex coefficient matrix αV + iW, whose corresponding linear subsystem may be hard to solve. Meanwhile, it can be seen from Algorithm 3.5 that the implementation of the HSS preconditioner needs to solve three symmetric positive definite linear subsystems, for instance systems with coefficient matrix αI + T. In the following, we discuss the eigenvector distribution of the preconditioned matrix P^{-1}A in detail.
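A sketch of the preconditioning step in the spirit of Algorithm 3.3: solving Pz = r reduces to two symmetric positive definite sub-solves. The preconditioner used here, P = A + [0 0; αW 0], is a hypothetical stand-in with the same block structure, not the paper's exact (7); its Schur complement S = T + (1 + α)WT^{-1}W is SPD because WT^{-1}W is symmetric positive semi-definite:

```python
import numpy as np

rng = np.random.default_rng(1)
n, alpha = 6, 0.1
W = rng.standard_normal((n, n)); W = (W + W.T) / 2
T = rng.standard_normal((n, n)); T = T @ T.T + np.eye(n)

# Hypothetical relaxed preconditioner (assumed form, not the paper's exact (7)).
P = np.block([[T, -W], [(1 + alpha) * W, T]])
r = rng.standard_normal(2 * n)
r1, r2 = r[:n], r[n:]

# Applying P^{-1} via the block-triangular factorization needs only SPD solves
# with T and with the Schur complement S = T + (1 + alpha) * W T^{-1} W.
S = T + (1 + alpha) * (W @ np.linalg.solve(T, W))
z2 = np.linalg.solve(S, r2 - (1 + alpha) * (W @ np.linalg.solve(T, r1)))
z1 = np.linalg.solve(T, r1 + W @ z2)

assert np.all(np.linalg.eigvalsh((S + S.T) / 2) > 0)   # S is SPD
assert np.allclose(P @ np.concatenate([z1, z2]), r)    # P z = r is solved
```

In practice the dense `np.linalg.solve` calls would be replaced by a sparse Cholesky factorization or (preconditioned) CG, exactly as the text suggests.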
Theorem 3.6. Let the preconditioner P be defined as in (7). Then the preconditioned matrix P^{-1}A has n + i + j (0 ≤ i + j ≤ n) linearly independent eigenvectors, which are described as follows:
(1) n eigenvectors [0^T, v_k^T]^T (k = 1, 2, ⋯, n) that correspond to the eigenvalue 1, where the v_k (k = 1, 2, ⋯, n) are arbitrary linearly independent vectors;
(2) i (0 ≤ i ≤ n) further eigenvectors corresponding to the eigenvalue 1;
(3) j (0 ≤ j ≤ n) eigenvectors corresponding to the eigenvalues λ ≠ 1, in which the v_k components are arbitrary vectors.
Proof. Let λ be an eigenvalue of the preconditioned matrix P^{-1}A and [u^T, v^T]^T be the corresponding eigenvector. Writing out the eigenvalue equation and simplifying, we obtain (14). If λ = 1, Eq. (14) becomes (15). Then we have the following two situations, corresponding to cases (1) and (2) of the theorem.
For the linear independence, it suffices to show that a vanishing linear combination (16) of these eigenvectors holds if and only if the coefficient vectors c^(1), c^(2) and c^(3) are all zero, where the first group consists of the eigenvectors corresponding to the eigenvalue 1 in case (1), the second group consists of those in case (2), and the third group consists of the eigenvectors corresponding to λ ≠ 1 in case (3). Multiplying both sides of Eq. (16) on the left by the matrix P^{-1}A, we get (17). From the difference between (17) and (16), and because the eigenvalues λ_k ≠ 1 and the eigenvectors [u_k^T, v_k^T]^T (k = 1, ⋯, j) are linearly independent, we conclude that c_k^(3) = 0 (k = 1, ⋯, j). Thus, Eq. (16) reduces to a combination of the first two groups. As the vectors u_k (k = 1, ⋯, i) are also linearly independent, we then have c_k^(2) = 0 (k = 1, ⋯, i). Since the v_k (k = 1, ⋯, n) are linearly independent, it finally follows that c_k^(1) = 0 (k = 1, ⋯, n). Therefore, the n + i + j eigenvectors are linearly independent, and the proof is completed. □
Based on Krylov subspace theory, an iterative method with an optimality property (such as the GMRES method [34]) terminates as soon as the degree of the minimal polynomial of the iteration matrix is attained. Next, we give an upper bound for the degree of the minimal polynomial of the preconditioned matrix P^{-1}A.
Theorem 3.7. Assume that the conditions of Theorem 3.2 are satisfied and let the preconditioner P be defined as in (7). Then the degree of the minimal polynomial of the preconditioned matrix P^{-1}A is at most n + 1. Thus, the dimension of the Krylov subspace K(P^{-1}A, b) is at most n + 1.
Proof. From (11), we know that the preconditioned matrix P^{-1}A can be expressed as the identity plus a matrix whose only nontrivial diagonal block is Θ. Let µ_i (i = 1, ⋯, n) be the eigenvalues of the matrix Θ. The characteristic polynomial of the preconditioned matrix P^{-1}A is then (λ − 1)^n ∏_{i=1}^{n} (λ − µ_i). Because the µ_i (i = 1, ⋯, n) are the eigenvalues of Θ, the Cayley–Hamilton theorem gives ∏_{i=1}^{n} (Θ − µ_i I) = 0.
Therefore, we obtain that the degree of the minimal polynomial of the preconditioned matrix P^{-1}A is at most n + 1. It is well known that the degree of the minimal polynomial equals the maximal dimension of the corresponding Krylov subspace. So the dimension of the Krylov subspace K(P^{-1}A, b) is also at most n + 1. Thus, the proof of this theorem is completed. □
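The mechanism behind Theorem 3.7, namely that an "identity plus a low-rank term" matrix has a minimal polynomial of low degree so that full GMRES terminates early, can be demonstrated on a generic example (the matrix below is synthetic, unrelated to the paper's P^{-1}A):

```python
import numpy as np
from scipy.sparse.linalg import gmres

rng = np.random.default_rng(2)
N, k = 40, 5
# I + (rank-k term): the minimal polynomial has degree at most k + 1, so full
# GMRES converges within k + 1 iterations, mirroring the n + 1 bound above.
A = np.eye(N) + 0.1 * rng.standard_normal((N, k)) @ rng.standard_normal((k, N))
b = rng.standard_normal(N)

# A single restart cycle of length k + 1 suffices for convergence.
x, info = gmres(A, b, restart=k + 1, maxiter=1, atol=0.0)
assert info == 0
assert np.linalg.norm(A @ x - b) <= 1e-4 * np.linalg.norm(b)
```

With only k distinct nonunit eigenvalues (and eigenvalue 1 semisimple), the Krylov subspace stops growing after k + 1 steps, which is why one restart cycle of that length is enough.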

Numerical experiments
In this section, some numerical experiments are presented to illustrate the effectiveness of the preconditioner P for the linear system (3). We compare the performance of the preconditioner P with that of the PSHNS preconditioner and the HSS preconditioner. We denote the number of iteration steps by "Iter", the elapsed CPU time in seconds by "CPU", and the relative residual norm by "RES". Unless otherwise specified, we use left preconditioning with the GMRES method [34]. All computations are implemented in MATLAB on a PC with an Intel(R) Core(TM) i5 CPU at 2.50 GHz and 4.00 GB of memory. In our implementations, the zero vector is adopted as the initial guess, and the iteration stops when the relative residual RES falls below the prescribed tolerance. A maximum number of iteration steps is imposed, separately, for the unpreconditioned and the preconditioned GMRES methods. It should be emphasized that the sub-linear systems arising from the application of the preconditioners are solved by direct methods. In MATLAB, these sub-systems of linear equations are solved by sparse Cholesky or LU factorization combined with AMD or column AMD reordering. The symbol "-" indicates that the method does not reach the required stopping criterion within the maximum number of iterations or runs out of memory. We test three values of the parameter α.
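A minimal sketch of this setup in SciPy rather than MATLAB: the preconditioner enters left-preconditioned GMRES only through its action z = P^{-1}r, supplied as a LinearOperator. Here an exact sparse LU factorization of a synthetic test matrix (not one of the paper's examples) plays the role of the preconditioner solve:

```python
import numpy as np
from scipy.sparse import identity, random as sparse_random
from scipy.sparse.linalg import LinearOperator, gmres, splu

rng = np.random.default_rng(3)
n = 100
# Illustrative sparse nonsingular test matrix (a synthetic stand-in).
B = sparse_random(n, n, density=0.05, random_state=3)
A = (B @ B.T + identity(n)).tocsc()
b = rng.standard_normal(n)

# The preconditioner is used only through its solve action z = P^{-1} r;
# here a sparse LU factorization of A itself stands in for P.
lu = splu(A)
M = LinearOperator((n, n), matvec=lu.solve)

x, info = gmres(A, b, M=M, atol=0.0)
assert info == 0
assert np.linalg.norm(A @ x - b) <= 1e-4 * np.linalg.norm(b)
```

In an actual implementation of Algorithm 3.3, `matvec` would perform the two SPD sub-solves instead of a single LU solve with A.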
Example 1. We consider the complex symmetric linear system from [13,24,26], where M and K are the inertia and stiffness matrices, C_v and C_H are the viscous and hysteretic damping matrices, respectively, and ω is the driving circular frequency. For different choices of α and M, we list the numerical results of the unpreconditioned GMRES method, the P-preconditioned GMRES method, the PSHNS-preconditioned GMRES method and the HSS-preconditioned GMRES method in Table 1. Table 1 indicates that, with suitably chosen parameters, the P-preconditioned GMRES method requires fewer iteration steps and less CPU time than the other preconditioned GMRES methods. From Table 1, we also find that the PSHNS preconditioner and the HSS preconditioner are both more sensitive to the parameter α than the preconditioner P. Figure 1 shows the eigenvalue distributions of the original coefficient matrix A, the preconditioned matrices P_HSS^{-1}A and P_PSHNS^{-1}A, and the preconditioned matrix P^{-1}A for two values of α, with M = I.
From Figure 1, we observe that the eigenvalue distribution of the preconditioned matrix P^{-1}A is much better than those of the other preconditioned matrices. Moreover, a smaller parameter α generally makes the eigenvalues of the preconditioned matrix P^{-1}A more clustered.
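For reference, coefficient matrices of the Example 1 type can be assembled along the following lines. The concrete choices below (grid size, ω = 2, C_v = 10I, C_H = 0.02K, M = I) are assumptions patterned after the cited literature, not necessarily the values used in the paper:

```python
import numpy as np

m = 8                      # grid size (assumed value)
n = m * m
omega, mu = 2.0, 0.02      # assumed driving frequency and hysteretic damping coefficient

# One-dimensional discrete Laplacian and its 2D Kronecker-sum version.
V = 2 * np.eye(m) - np.eye(m, k=1) - np.eye(m, k=-1)
K = np.kron(np.eye(m), V) + np.kron(V, np.eye(m))    # stiffness matrix
M = np.eye(n)                                        # inertia matrix (M = I case)
Cv = 10 * np.eye(n)                                  # viscous damping (assumed C_v = 10I)
CH = mu * K                                          # hysteretic damping (assumed C_H = mu*K)

W = K - omega**2 * M       # symmetric, and indefinite for this omega
T = omega * Cv + CH        # symmetric positive definite

ew = np.linalg.eigvalsh(W)
assert ew.min() < 0 < ew.max()             # W is indefinite
assert np.all(np.linalg.eigvalsh(T) > 0)   # T is SPD
```

The assertions check that this parameter choice actually lands in the regime of the paper: W symmetric indefinite and T symmetric positive definite.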

Example 2.
We consider the complex symmetric linear system from [25,26].

Fig. 2. The eigenvalue distributions of the unpreconditioned matrix and of the preconditioned matrices P_HSS^{-1}A, P_PSHNS^{-1}A and P^{-1}A for Example 2.

Table 2 shows that all three preconditioned GMRES methods exhibit much better convergence behavior than the unpreconditioned GMRES method. Meanwhile, the P-preconditioned GMRES method gives better numerical results than the PSHNS-preconditioned and HSS-preconditioned GMRES methods for the different parameters α, in terms of both iteration steps and CPU time. From Figure 2, we find that the eigenvalue distribution of the preconditioned matrix P^{-1}A is much better than those of the other preconditioned matrices, and that the eigenvalues of P^{-1}A become more clustered as α decreases.

Conclusions
In this paper, we have proposed a relaxed block splitting preconditioner P for the real block two-by-two linear system (3) and analyzed the eigenvalue distribution of the corresponding preconditioned matrix. Moreover, the structure of the eigenvectors and an upper bound for the degree of the minimal polynomial of the preconditioned matrix P^{-1}A were also derived. Numerical experiments were given to illustrate the effectiveness of the preconditioner P for solving the linear system (3).