In this section, we study the spectral properties of the preconditioned matrix $\mathcal{P}_2^{-1}\mathcal{A}$ and give an upper bound on the dimension of the Krylov subspace generated by this preconditioned matrix. To this end, we first need the inverse of the matrix 𝓟_{2}; the explicit form of $\mathcal{P}_2^{-1}$ is given in the following lemma. The proof consists of straightforward calculations and is omitted here.

#### Lemma 3.1

*Let*

$$\mathcal{P}=\begin{pmatrix}\alpha I & -W\\ W & T\end{pmatrix}\quad\text{and}\quad\hat{\mathcal{P}}=\begin{pmatrix}T & 0\\ 0 & \alpha I\end{pmatrix}.$$

*Then* 𝓟 *has the block-triangular factorization*

$$\mathcal{P}=\begin{pmatrix}I & 0\\ \frac{1}{\alpha}W & I\end{pmatrix}\begin{pmatrix}\alpha I & 0\\ 0 & T+\frac{1}{\alpha}W^{2}\end{pmatrix}\begin{pmatrix}I & -\frac{1}{\alpha}W\\ 0 & I\end{pmatrix},$$

*and we obtain*

$$\mathcal{P}_2^{-1}=\alpha\hat{\mathcal{P}}^{-1}\mathcal{P}^{-1}=\begin{pmatrix}T^{-1}-\frac{1}{\alpha}T^{-1}W(T+\frac{1}{\alpha}W^{2})^{-1}W & T^{-1}W(T+\frac{1}{\alpha}W^{2})^{-1}\\ -\frac{1}{\alpha}(T+\frac{1}{\alpha}W^{2})^{-1}W & (T+\frac{1}{\alpha}W^{2})^{-1}\end{pmatrix}.$$(10)
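Formula (10) can be verified numerically. The sketch below is only an illustrative sanity check with randomly generated data satisfying the standing assumptions (T symmetric positive definite, W symmetric, α > 0), using 𝓟_{2} = (1/α)𝓟𝓟̂ as in (12).

```python
import numpy as np

# Illustrative check of formula (10) with random data (not part of the proof).
rng = np.random.default_rng(0)
n, alpha = 5, 2.0

M = rng.standard_normal((n, n))
T = M @ M.T + n * np.eye(n)            # symmetric positive definite
W = rng.standard_normal((n, n))
W = (W + W.T) / 2                      # symmetric (indefinite in general)
I, Z = np.eye(n), np.zeros((n, n))

P = np.block([[alpha * I, -W], [W, T]])
P_hat = np.block([[T, Z], [Z, alpha * I]])
P2 = P @ P_hat / alpha                 # P_2 = (1/alpha) P P_hat, cf. (12)

S = T + W @ W / alpha                  # T + (1/alpha) W^2
Tinv, Sinv = np.linalg.inv(T), np.linalg.inv(S)
P2_inv = np.block([                    # right-hand side of (10)
    [Tinv - Tinv @ W @ Sinv @ W / alpha, Tinv @ W @ Sinv],
    [-Sinv @ W / alpha,                  Sinv],
])

err = np.linalg.norm(P2_inv @ P2 - np.eye(2 * n))
print(err < 1e-8)  # → True
```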

In the following, we analyze the eigenvalue distribution of the preconditioned matrix $\mathcal{P}_2^{-1}\mathcal{A}$.

#### Theorem 3.2

*Assume that the coefficient matrix* 𝓐 *is nonsingular*, *W* ∈ ℝ^{n×n} *is symmetric indefinite and* *T* ∈ ℝ^{n×n} *is symmetric positive definite*. *Let* *α* *be a real positive constant*, *U* = *WT*^{−1}*W*, *and let the preconditioner* 𝓟_{2} *be defined as in (7)*. *Then the preconditioned matrix* $\mathcal{P}_2^{-1}\mathcal{A}$ *has the eigenvalue* 1 *with multiplicity at least* *n*, *and the remaining eigenvalues are* λ_{i}, *i* = 1, 2, ⋯, *n*, *where* λ_{i} (*i* = 1, 2, ⋯, *n*) *are the eigenvalues of the matrix* *αT*^{−1}(*αI* + *U*)^{−1}(*T* + *U*).

#### Proof

From (9) and Lemma 3.1, we have

$$\begin{aligned}\mathcal{P}_2^{-1}\mathcal{A}=I-\mathcal{P}_2^{-1}\mathcal{R}_2&=I-\begin{pmatrix}T^{-1}W(T+\frac{1}{\alpha}W^{2})^{-1}W(\frac{1}{\alpha}T-I) & 0\\ (T+\frac{1}{\alpha}W^{2})^{-1}W(\frac{1}{\alpha}T-I) & 0\end{pmatrix}\\ &=\begin{pmatrix}I-T^{-1}W(T+\frac{1}{\alpha}W^{2})^{-1}W(\frac{1}{\alpha}T-I) & 0\\ -(T+\frac{1}{\alpha}W^{2})^{-1}W(\frac{1}{\alpha}T-I) & I\end{pmatrix}.\end{aligned}$$

It follows from *U* = *WT*^{−1}*W* that

$$\begin{aligned}I-T^{-1}W(T+\tfrac{1}{\alpha}W^{2})^{-1}W(\tfrac{1}{\alpha}T-I)&=T^{-1}\bigl(T-W(T+\tfrac{1}{\alpha}W^{2})^{-1}W(\tfrac{1}{\alpha}T-I)\bigr)\\ &=T^{-1}\bigl(T-W\bigl(W(W^{-1}TW^{-1}+\tfrac{1}{\alpha}I)W\bigr)^{-1}W(\tfrac{1}{\alpha}T-I)\bigr)\\ &=T^{-1}\bigl(T-(\tfrac{1}{\alpha}I+W^{-1}TW^{-1})^{-1}(\tfrac{1}{\alpha}T-I)\bigr)\\ &=T^{-1}\bigl(T-(\tfrac{1}{\alpha}U+I)^{-1}U(\tfrac{1}{\alpha}T-I)\bigr)\\ &=\alpha T^{-1}(\alpha I+U)^{-1}(T+U).\end{aligned}$$

Hence, we get

$$\mathcal{P}_2^{-1}\mathcal{A}=\begin{pmatrix}\alpha T^{-1}(\alpha I+U)^{-1}(T+U) & 0\\ -(T+\frac{1}{\alpha}W^{2})^{-1}W(\frac{1}{\alpha}T-I) & I\end{pmatrix}.$$(11)

The stated results then follow immediately from the block-triangular structure of (11). □
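The block-triangular form (11), and with it the eigenvalue distribution of Theorem 3.2, can be illustrated numerically. The sketch below uses random data satisfying the assumptions and assumes the block coefficient matrix 𝓐 = (T, −W; W, T), which is consistent with the residual matrix 𝓡_{2} used in the derivation above.

```python
import numpy as np

rng = np.random.default_rng(1)
n, alpha = 4, 1.5
M = rng.standard_normal((n, n))
T = M @ M.T + n * np.eye(n)        # symmetric positive definite
W = rng.standard_normal((n, n))
W = (W + W.T) / 2                  # symmetric
I, Z = np.eye(n), np.zeros((n, n))

# Assumed block form of the coefficient matrix (consistent with R_2 above)
A = np.block([[T, -W], [W, T]])
P2 = np.block([[alpha * I, -W], [W, T]]) @ np.block([[T, Z], [Z, alpha * I]]) / alpha

U = W @ np.linalg.solve(T, W)      # U = W T^{-1} W
S = T + W @ W / alpha
theta1 = alpha * np.linalg.solve(T, np.linalg.solve(alpha * I + U, T + U))
theta2 = -np.linalg.solve(S, W @ (T / alpha - I))

lhs = np.linalg.solve(P2, A)                 # P2^{-1} A
rhs = np.block([[theta1, Z], [theta2, I]])   # block form (11)
print(np.allclose(lhs, rhs))  # → True
```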

The detailed implementation of the preconditioning process is described as follows. When the preconditioner 𝓟_{2} is applied within a Krylov subspace method, the following system must be solved at each step

$$\mathcal{P}_2 z=\frac{1}{\alpha}\begin{pmatrix}\alpha I & -W\\ W & T\end{pmatrix}\begin{pmatrix}T & 0\\ 0 & \alpha I\end{pmatrix}z=r,$$(12)

where $z=[z_1^{\mathrm{T}},\,z_2^{\mathrm{T}}]^{\mathrm{T}}$ is the generalized residual vector and $r=[r_1^{\mathrm{T}},\,r_2^{\mathrm{T}}]^{\mathrm{T}}$ is the current residual vector, with *z*_{1}, *z*_{2}, *r*_{1}, *r*_{2} ∈ ℝ^{n}. Based on Lemma 3.1 and (12), we have

$$\begin{pmatrix}z_1\\ z_2\end{pmatrix}=\alpha\begin{pmatrix}T^{-1} & 0\\ 0 & \frac{1}{\alpha}I\end{pmatrix}\begin{pmatrix}I & \frac{1}{\alpha}W\\ 0 & I\end{pmatrix}\begin{pmatrix}\frac{1}{\alpha}I & 0\\ 0 & (T+\frac{1}{\alpha}W^{2})^{-1}\end{pmatrix}\begin{pmatrix}I & 0\\ -\frac{1}{\alpha}W & I\end{pmatrix}\begin{pmatrix}r_1\\ r_2\end{pmatrix}.$$(13)

This yields the following procedure for applying the preconditioner 𝓟_{2}.

#### Algorithm 3.3

(The preconditioner 𝓟_{2}). *For a given vector* $r=[r_1^{\mathrm{T}},\,r_2^{\mathrm{T}}]^{\mathrm{T}}$, *the vector* $z=[z_1^{\mathrm{T}},\,z_2^{\mathrm{T}}]^{\mathrm{T}}$ *can be computed by (13) from the following steps*:

*Compute* $t_1=r_2-\frac{1}{\alpha}Wr_1$;

*Solve* $(T+\frac{1}{\alpha}W^{2})z_2=t_1$;

*Compute* $t_2=r_1+Wz_2$;

*Solve* $Tz_1=t_2$;

*Set the generalized residual vector* $z=[z_1^{\mathrm{T}},\,z_2^{\mathrm{T}}]^{\mathrm{T}}$.

It can be seen from Algorithm 3.3 that two sub-linear systems with coefficient matrices $T+\frac{1}{\alpha}W^{2}$ and *T* must be solved at each iteration. Based on the assumptions on the matrices *W* and *T* in Section 1, the two matrices $T+\frac{1}{\alpha}W^{2}$ and *T* are both symmetric positive definite. Hence, the two sub-linear systems $(T+\frac{1}{\alpha}W^{2})z_2=t_1$ and $Tz_1=t_2$ can be solved by the sparse Cholesky decomposition, the conjugate gradient (CG) method, or the preconditioned CG method. To demonstrate the effectiveness of the preconditioner 𝓟_{2}, we compare it with the PSHNS preconditioner and the HSS preconditioner, which are given as follows:
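Algorithm 3.3 can be sketched in a few lines. The version below uses SciPy's dense Cholesky routines for the two SPD solves (in practice one would factor once and reuse, or use a sparse Cholesky or CG as noted above); the function name `apply_P2` and the random test data are our own illustrative choices.

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

def apply_P2(r1, r2, T, W, alpha):
    """Algorithm 3.3: compute z = P2^{-1} r via two SPD solves."""
    S = T + (W @ W) / alpha              # T + (1/alpha) W^2, SPD
    t1 = r2 - (W @ r1) / alpha           # step 1
    z2 = cho_solve(cho_factor(S), t1)    # step 2: (T + (1/alpha)W^2) z2 = t1
    t2 = r1 + W @ z2                     # step 3
    z1 = cho_solve(cho_factor(T), t2)    # step 4: T z1 = t2
    return z1, z2                        # step 5: z = [z1; z2]

# Check against a dense solve with P2 = (1/alpha) * P * P_hat, cf. (12)
rng = np.random.default_rng(2)
n, alpha = 6, 0.8
M = rng.standard_normal((n, n))
T = M @ M.T + n * np.eye(n)              # symmetric positive definite
W = rng.standard_normal((n, n))
W = (W + W.T) / 2                        # symmetric
I, Z = np.eye(n), np.zeros((n, n))
P2 = np.block([[alpha * I, -W], [W, T]]) @ np.block([[T, Z], [Z, alpha * I]]) / alpha

r = rng.standard_normal(2 * n)
z1, z2 = apply_P2(r[:n], r[n:], T, W, alpha)
print(np.allclose(np.concatenate([z1, z2]), np.linalg.solve(P2, r)))  # → True
```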

$$\mathcal{P}_{\mathrm{PSHNS}}=\frac{1}{2\alpha}(\alpha W+iI)(\alpha T+I)\quad\text{and}\quad\mathcal{P}_{\mathrm{HSS}}=\frac{1}{2\alpha}\begin{pmatrix}\alpha I+T & 0\\ 0 & \alpha I+T\end{pmatrix}\begin{pmatrix}\alpha I & -W\\ W & \alpha I\end{pmatrix}.$$

The implementation of the PSHNS preconditioner and of the HSS preconditioner within GMRES is described in the following two algorithms, respectively.

#### Algorithm 3.4

(The PSHNS Preconditioner). *For a given vector r*, *the vector z can be computed from the following steps*:

*Solve* (*αV* + *iW*)*d* = 2*αr*;

*Solve* (*αT* + *WV*^{–1}*W*)*z* = *Wd*.

#### Algorithm 3.5

(The HSS Preconditioner). *For a given vector* $r=[r_1^{\mathrm{T}},\,r_2^{\mathrm{T}}]^{\mathrm{T}}$, *the vector* $z=[z_1^{\mathrm{T}},\,z_2^{\mathrm{T}}]^{\mathrm{T}}$ *can be computed from the following steps*:

*Solve* $(\alpha I+T)\nu_1=r_1$;

*Solve* $(\alpha I+T)\nu_2=r_2$;

*Compute* $\mu_1=\alpha\nu_2-W\nu_1$;

*Solve* $(\alpha I+\frac{1}{\alpha}W^{2})z_2=\mu_1$;

*Compute* $z_1=\nu_1+\frac{1}{\alpha}Wz_2$;

*Set the generalized residual vector* $z=[z_1^{\mathrm{T}},\,z_2^{\mathrm{T}}]^{\mathrm{T}}$.
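Algorithm 3.5 can likewise be sketched directly. The function name `apply_HSS` and the random test data are our own illustrative choices; note that the algorithm computes z up to the constant prefactor of 𝓟_{HSS}, which only rescales z and does not affect the preconditioned Krylov iteration. The check below verifies that, with 𝓗 = blkdiag(T, T) and 𝓢 the skew block (0, −W; W, 0), the steps realize z = α(αI + 𝓢)^{−1}(αI + 𝓗)^{−1}r.

```python
import numpy as np

def apply_HSS(r1, r2, T, W, alpha):
    """Algorithm 3.5: apply the HSS preconditioner via three SPD solves."""
    n = T.shape[0]
    I = np.eye(n)
    v1 = np.linalg.solve(alpha * I + T, r1)              # (alpha I + T) v1 = r1
    v2 = np.linalg.solve(alpha * I + T, r2)              # (alpha I + T) v2 = r2
    mu1 = alpha * v2 - W @ v1
    z2 = np.linalg.solve(alpha * I + (W @ W) / alpha, mu1)
    z1 = v1 + (W @ z2) / alpha
    return z1, z2

rng = np.random.default_rng(3)
n, alpha = 5, 1.1
M = rng.standard_normal((n, n))
T = M @ M.T + n * np.eye(n)                              # symmetric positive definite
W = rng.standard_normal((n, n))
W = (W + W.T) / 2                                        # symmetric
Z = np.zeros((n, n))
r = rng.standard_normal(2 * n)

z1, z2 = apply_HSS(r[:n], r[n:], T, W, alpha)

# Equivalent dense computation: z = alpha (alpha I + S)^{-1} (alpha I + H)^{-1} r
H = np.block([[T, Z], [Z, T]])
Skew = np.block([[Z, -W], [W, Z]])
z_ref = alpha * np.linalg.solve(alpha * np.eye(2 * n) + Skew,
                                np.linalg.solve(alpha * np.eye(2 * n) + H, r))
print(np.allclose(np.concatenate([z1, z2]), z_ref))  # → True
```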

From Algorithm 3.4, we can see that the PSHNS preconditioner involves the complex coefficient matrix *αV* + *iW*, whose corresponding linear subsystem may be difficult to solve. Meanwhile, it can be seen from Algorithm 3.5 that the implementation of the HSS preconditioner requires solving three symmetric positive definite linear subsystems.

In the following, we discuss the eigenvector distribution of the preconditioned matrix $\mathcal{P}_2^{-1}\mathcal{A}$ in detail.

#### Theorem 3.6

*Let the preconditioner* 𝓟_{2} *be defined as in (7)*. *Then the preconditioned matrix* $\mathcal{P}_2^{-1}\mathcal{A}$ *has* *n* + *i* + *j* (0 ≤ *i* + *j* ≤ *n*) *linearly independent eigenvectors*, *described as follows*:

(1) *n* eigenvectors $\begin{pmatrix}0\\ v_k\end{pmatrix}$ (*k* = 1, 2, ⋯, *n*) that correspond to the eigenvalue 1, where *v*_{k} (*k* = 1, 2, ⋯, *n*) are arbitrary linearly independent vectors;

(2) *i* (0 ≤ *i* ≤ *n*) eigenvectors $\begin{pmatrix}u_k^1\\ v_k^1\end{pmatrix}$ (1 ≤ *k* ≤ *i*) that correspond to the eigenvalue 1, where $u_k^1\ne 0$, $Tu_k^1=\alpha u_k^1$ and $v_k^1$ are arbitrary vectors;

(3) *j* (0 ≤ *j* ≤ *n*) eigenvectors $\begin{pmatrix}u_k^2\\ v_k^2\end{pmatrix}$ (1 ≤ *k* ≤ *j*) that correspond to eigenvalues λ_{k} ≠ 1, where $\alpha(T+U)u_k^2=\lambda_k(\alpha I+U)Tu_k^2$, $u_k^2\ne 0$ and $v_k^2=\frac{1}{1-\lambda_k}(T+\frac{1}{\alpha}W^{2})^{-1}(\frac{1}{\alpha}WT-W)u_k^2$.
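The eigenpairs of case (3) can be checked numerically: an eigenpair of the generalized problem α(T + U)u = λ(αI + U)Tu, combined with the stated v, should form an eigenvector of $\mathcal{P}_2^{-1}\mathcal{A}$. The sketch below uses random data and the assumed block form 𝓐 = (T, −W; W, T) consistent with the derivation of (11).

```python
import numpy as np
from scipy.linalg import eig

rng = np.random.default_rng(4)
n, alpha = 4, 1.3
M = rng.standard_normal((n, n))
T = M @ M.T + n * np.eye(n)        # symmetric positive definite
W = rng.standard_normal((n, n))
W = (W + W.T) / 2                  # symmetric
I, Z = np.eye(n), np.zeros((n, n))

A = np.block([[T, -W], [W, T]])    # assumed block coefficient matrix
P2 = np.block([[alpha * I, -W], [W, T]]) @ np.block([[T, Z], [Z, alpha * I]]) / alpha
U = W @ np.linalg.solve(T, W)
S = T + W @ W / alpha

# Generalized eigenproblem alpha (T + U) u = lambda (alpha I + U) T u
lams, us = eig(alpha * (T + U), (alpha * I + U) @ T)
k = int(np.argmax(np.abs(lams - 1)))       # pick an eigenvalue farthest from 1
lam, u = lams[k], us[:, k]

# v from case (3): (1/(1-lambda)) S^{-1} ((1/alpha) W T - W) u
v = np.linalg.solve(S, (W @ T / alpha - W) @ u) / (1 - lam)
x = np.concatenate([u, v])
resid = np.linalg.norm(np.linalg.solve(P2, A) @ x - lam * x)
print(resid < 1e-8)  # → True
```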

#### Proof

Let λ be an eigenvalue of the preconditioned matrix $\mathcal{P}_2^{-1}\mathcal{A}$ and $\begin{pmatrix}u\\ v\end{pmatrix}$ be the corresponding eigenvector. Then

$$\begin{pmatrix}\alpha T^{-1}(\alpha I+U)^{-1}(T+U) & 0\\ -(T+\frac{1}{\alpha}W^{2})^{-1}W(\frac{1}{\alpha}T-I) & I\end{pmatrix}\begin{pmatrix}u\\ v\end{pmatrix}=\lambda\begin{pmatrix}u\\ v\end{pmatrix}.$$

By simple calculation, we have

$$\left\{\begin{aligned}&\alpha(T+U)u=\lambda(\alpha I+U)Tu,\\ &(T+\tfrac{1}{\alpha}W^{2})^{-1}W(\tfrac{1}{\alpha}T-I)u=(1-\lambda)v.\end{aligned}\right.$$(14)

If λ = 1, Eq. (14) becomes

$$\left\{\begin{aligned}&\alpha(T+U)u=(\alpha I+U)Tu,\\ &(T+\tfrac{1}{\alpha}W^{2})^{-1}W(\tfrac{1}{\alpha}T-I)u=0.\end{aligned}\right.$$(15)

Then we have the following two situations.

When *u* = 0, Eq. (15) always holds. Hence, there are *n* linearly independent eigenvectors $\begin{pmatrix}0\\ v_k\end{pmatrix}$ (*k* = 1, 2, ⋯, *n*) corresponding to the eigenvalue 1, where *v*_{k} (*k* = 1, 2, ⋯, *n*) are arbitrary linearly independent vectors.

When *u* ≠ 0, a simple calculation for (15) gives *Tu* = *αu*. Then there are *i* (0 ≤ *i* ≤ *n*) linearly independent eigenvectors $\begin{pmatrix}u_k^1\\ v_k^1\end{pmatrix}$ (1 ≤ *k* ≤ *i*) corresponding to the eigenvalue 1, where $u_k^1\ne 0$, $Tu_k^1=\alpha u_k^1$ and $v_k^1$ are arbitrary vectors.

Next, we consider the case λ ≠ 1. It follows from (14) that the results in case (3) hold.

Finally, we show that the *n* + *i* + *j* eigenvectors are linearly independent. Let *c* = [*c*_{1}, *c*_{2}, ⋯, *c*_{n}]^{T}, $c^1=[c_1^1,c_2^1,\cdots,c_i^1]^{\mathrm{T}}$ and $c^2=[c_1^2,c_2^2,\cdots,c_j^2]^{\mathrm{T}}$ be three vectors with 0 ≤ *i*, *j* ≤ *n*. Then we have to show that

$$\begin{pmatrix}0 & \cdots & 0\\ v_1 & \cdots & v_n\end{pmatrix}\begin{pmatrix}c_1\\ \vdots\\ c_n\end{pmatrix}+\begin{pmatrix}u_1^1 & \cdots & u_i^1\\ v_1^1 & \cdots & v_i^1\end{pmatrix}\begin{pmatrix}c_1^1\\ \vdots\\ c_i^1\end{pmatrix}+\begin{pmatrix}u_1^2 & \cdots & u_j^2\\ v_1^2 & \cdots & v_j^2\end{pmatrix}\begin{pmatrix}c_1^2\\ \vdots\\ c_j^2\end{pmatrix}=\begin{pmatrix}0\\ \vdots\\ 0\end{pmatrix}$$(16)

holds if and only if the vectors *c*, *c*^{1} and *c*^{2} are all zero vectors, where the first matrix consists of the eigenvectors corresponding to the eigenvalue 1 for the case (1), the second matrix consists of those for the case (2), and the third matrix consists of the eigenvectors corresponding to λ ≠ 1 for the case (3). Multiplying both sides of Eq. (16) from the left by the matrix $\mathcal{P}_2^{-1}\mathcal{A}$, we get

$$\begin{pmatrix}0 & \cdots & 0\\ v_1 & \cdots & v_n\end{pmatrix}\begin{pmatrix}c_1\\ \vdots\\ c_n\end{pmatrix}+\begin{pmatrix}u_1^1 & \cdots & u_i^1\\ v_1^1 & \cdots & v_i^1\end{pmatrix}\begin{pmatrix}c_1^1\\ \vdots\\ c_i^1\end{pmatrix}+\begin{pmatrix}u_1^2 & \cdots & u_j^2\\ v_1^2 & \cdots & v_j^2\end{pmatrix}\begin{pmatrix}\lambda_1 c_1^2\\ \vdots\\ \lambda_j c_j^2\end{pmatrix}=\begin{pmatrix}0\\ \vdots\\ 0\end{pmatrix}.$$(17)

Subtracting (16) from (17), we obtain

$$\begin{pmatrix}u_1^2 & \cdots & u_j^2\\ v_1^2 & \cdots & v_j^2\end{pmatrix}\begin{pmatrix}(\lambda_1-1)c_1^2\\ \vdots\\ (\lambda_j-1)c_j^2\end{pmatrix}=\begin{pmatrix}0\\ \vdots\\ 0\end{pmatrix}.$$

Because the eigenvalues λ_{k} ≠ 1 and the vectors $\begin{pmatrix}u_k^2\\ v_k^2\end{pmatrix}$ (*k* = 1, ⋯, *j*) are linearly independent, we know that $c_k^2=0$ (*k* = 1, ⋯, *j*). Thus, Eq. (16) reduces to

$$\begin{pmatrix}0 & \cdots & 0\\ v_1 & \cdots & v_n\end{pmatrix}\begin{pmatrix}c_1\\ \vdots\\ c_n\end{pmatrix}+\begin{pmatrix}u_1^1 & \cdots & u_i^1\\ v_1^1 & \cdots & v_i^1\end{pmatrix}\begin{pmatrix}c_1^1\\ \vdots\\ c_i^1\end{pmatrix}=\begin{pmatrix}0\\ \vdots\\ 0\end{pmatrix}.$$

As the vectors $u_k^1$ (*k* = 1, ⋯, *i*) are also linearly independent, we have $c_k^1=0$ (*k* = 1, ⋯, *i*). Hence, the above equation becomes

$$\begin{pmatrix}0 & \cdots & 0\\ v_1 & \cdots & v_n\end{pmatrix}\begin{pmatrix}c_1\\ \vdots\\ c_n\end{pmatrix}=\begin{pmatrix}0\\ \vdots\\ 0\end{pmatrix}.$$

Since *v*_{k} (*k* = 1, ⋯, *n*) are linearly independent, we have *c*_{k} = 0 (*k* = 1, ⋯, *n*). Therefore, the *n* + *i* + *j* eigenvectors are linearly independent. The proof of this theorem is completed. □

Based on the Krylov subspace theory, we know that an iterative method with an optimality property (such as the GMRES method [34]) terminates once the degree of the minimal polynomial of the preconditioned matrix is attained. Next, we give an upper bound on the degree of the minimal polynomial of the preconditioned matrix $\mathcal{P}_2^{-1}\mathcal{A}$.

#### Theorem 3.7

*Assume that the conditions of Theorem 3.2 are satisfied and let the preconditioner* 𝓟_{2} *be defined as in* (7). *Then the degree of the minimal polynomial of the preconditioned matrix* $\mathcal{P}_2^{-1}\mathcal{A}$ *is at most* *n* + 1. *Thus*, *the dimension of the Krylov subspace* $\mathcal{K}(\mathcal{P}_2^{-1}\mathcal{A},\,b)$ *is at most* *n* + 1.

#### Proof

From (11), we know that the preconditioned matrix can be expressed as

$$\mathcal{P}_2^{-1}\mathcal{A}=\begin{pmatrix}\theta_1 & 0\\ \theta_2 & I\end{pmatrix},$$

where *θ*_{1} = *αT*^{−1}(*αI* + *U*)^{−1}(*T* + *U*) and $\theta_2=-(T+\frac{1}{\alpha}W^{2})^{-1}W(\frac{1}{\alpha}T-I)$. Let *μ*_{i} (*i* = 1, ⋯, *n*) be the eigenvalues of the matrix *θ*_{1}. The characteristic polynomial of the preconditioned matrix $\mathcal{P}_2^{-1}\mathcal{A}$ is $(\lambda-1)^{n}\prod_{i=1}^{n}(\lambda-\mu_{i})$.

Consider instead the polynomial $(\lambda-1)\prod_{i=1}^{n}(\lambda-\mu_{i})$ of degree *n* + 1. Evaluating it at $\mathcal{P}_2^{-1}\mathcal{A}$ gives

$$(\mathcal{P}_2^{-1}\mathcal{A}-I)\prod_{i=1}^{n}(\mathcal{P}_2^{-1}\mathcal{A}-\mu_{i}I)=\begin{pmatrix}(\theta_1-I)\prod_{i=1}^{n}(\theta_1-\mu_{i}I) & 0\\ \theta_2\prod_{i=1}^{n}(\theta_1-\mu_{i}I) & 0\end{pmatrix}.$$

Because *μ*_{i} (*i* = 1, ⋯, *n*) are the eigenvalues of *θ*_{1}, the Cayley-Hamilton theorem gives

$$\prod_{i=1}^{n}(\theta_1-\mu_{i}I)=0.$$

Therefore, the degree of the minimal polynomial of the preconditioned matrix $\mathcal{P}_2^{-1}\mathcal{A}$ is at most *n* + 1. Since the degree of the minimal polynomial equals the dimension of the corresponding Krylov subspace, the dimension of the Krylov subspace $\mathcal{K}(\mathcal{P}_2^{-1}\mathcal{A},\,b)$ is also at most *n* + 1. Thus, the proof of this theorem is completed. □
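The bound of Theorem 3.7 can be observed numerically: the rank of the Krylov matrix [b, Mb, M²b, ⋯] with M = $\mathcal{P}_2^{-1}\mathcal{A}$ stalls at *n* + 1 even though M is of order 2*n*. The sketch below uses random data and the assumed block form 𝓐 = (T, −W; W, T); columns are normalized to keep the rank computation stable.

```python
import numpy as np

rng = np.random.default_rng(5)
n, alpha = 4, 1.2
M0 = rng.standard_normal((n, n))
T = M0 @ M0.T + n * np.eye(n)      # symmetric positive definite
W = rng.standard_normal((n, n))
W = (W + W.T) / 2                  # symmetric
I, Z = np.eye(n), np.zeros((n, n))

A = np.block([[T, -W], [W, T]])    # assumed block coefficient matrix
P2 = np.block([[alpha * I, -W], [W, T]]) @ np.block([[T, Z], [Z, alpha * I]]) / alpha
M = np.linalg.solve(P2, A)         # preconditioned matrix P2^{-1} A

b = rng.standard_normal(2 * n)
cols, v = [], b
for _ in range(2 * n):             # Krylov vectors b, Mb, M^2 b, ...
    cols.append(v / np.linalg.norm(v))
    v = M @ v
K = np.column_stack(cols)

rank = np.linalg.matrix_rank(K, tol=1e-8)
print(rank <= n + 1)  # → True
```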
