## 1 Introduction

Consider the system of *n* linear equations

*Ax* = *b*, (1)

where *A* = (*a _{ij}*) ∈ ℂ^{n×n} is nonsingular, *b*, *x* ∈ ℂ^{n} and *x* is unknown. The coefficient matrix *A* = (*a _{ij}*) ∈ ℂ^{n×n} is split into

*A* = *M* − *N*, (2)

where *M* ∈ ℂ^{n×n} is nonsingular and *N* ∈ ℂ^{n×n}. Then, the general form of iterative methods for (1) can be described as follows:

*x*^{(k+1)} = *M*^{−1}*Nx*^{(k)} + *M*^{−1}*b*, *k* = 0, 1, 2, .... (3)

The matrix *H* = *M*^{−1}*N* is called the iterative matrix of the iteration (3). It is well-known that (3) converges for any given *x*^{(0)} if and only if *ρ*(H) < 1 (see [1–5]), where *ρ*(H) denotes the spectral radius of the matrix *H*. Thus, to establish the convergence results of iterative methods, we mainly study the spectral radius of the iteration matrix in iteration (3).
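As a hedged illustration of iteration (3), the sketch below applies the splitting idea with the Jacobi choice *M* = diag(*A*), *N* = *M* − *A* to a small made-up system; the matrix, right-hand side and iteration count are all illustrative, not taken from the paper.

```python
# One step of iteration (3), x^(k+1) = M^{-1}(N x^(k) + b), with the Jacobi
# splitting M = diag(A), N = M - A; the 2x2 system below is illustrative.

def jacobi_step(A, b, x):
    """Solve M x_new = N x + b with M = diag(A)."""
    n = len(A)
    return [(b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
            for i in range(n)]

A = [[4.0, 1.0],
     [2.0, 5.0]]      # strictly diagonally dominant, so rho(H) < 1
b = [6.0, 12.0]       # chosen so that the exact solution is x = (1, 2)

x = [0.0, 0.0]
for _ in range(60):
    x = jacobi_step(A, b, x)

print(x)  # close to [1.0, 2.0]
```

Because the iteration matrix *H* of this splitting has spectral radius below 1 for this strictly diagonally dominant matrix, the iterates contract toward the unique solution regardless of the starting vector.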

For the coefficient matrix *A* of (1), consider the standard decomposition

*A* = *I* − *L* − *U*, (4)

where *I* is the identity matrix, and *L* and *U* are strictly lower and strictly upper triangular, respectively. According to the standard decomposition (4), the iteration matrices for the SOR iterative methods of (1) are listed in the following:

*H*_{FSOR(ω)} = (*I* − *ωL*)^{−1}[(1 − *ω*)*I* + *ωU*], (5)

*H*_{BSOR(ω)} = (*I* − *ωU*)^{−1}[(1 − *ω*)*I* + *ωL*], (6)

*H*_{SSOR(ω)} = (*I* − *ωU*)^{−1}[(1 − *ω*)*I* + *ωL*](*I* − *ωL*)^{−1}[(1 − *ω*)*I* + *ωU*], (7)

where *ω* ∈ (0, 1) is the overrelaxation parameter.
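The FSOR scheme (5) never needs *H*_{FSOR(ω)} explicitly: since *I* − *ωL* is unit lower triangular, one application of the scheme is a single forward sweep. A minimal sketch, assuming a made-up strictly diagonally dominant 3 × 3 system (not one of the paper's examples):

```python
# One forward SOR sweep: (I - wL) x_new = [(1-w)I + wU] x + wb reduces to an
# in-place loop over the rows. The test matrix is an illustrative strictly
# diagonally dominant example.

def fsor_sweep(A, b, x, w):
    """One forward SOR sweep; x is updated in place and returned."""
    n = len(A)
    for i in range(n):
        residual = b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)
        x[i] = (1.0 - w) * x[i] + w * residual / A[i][i]
    return x

A = [[1.0, -0.3, -0.2],
     [-0.4, 1.0, -0.1],
     [-0.2, -0.2, 1.0]]
xstar = [1.0, 2.0, 3.0]
b = [sum(A[i][j] * xstar[j] for j in range(3)) for i in range(3)]

x = [0.0, 0.0, 0.0]
for _ in range(200):
    fsor_sweep(A, b, x, w=0.9)

print(x)  # close to [1.0, 2.0, 3.0]
```

A backward (BSOR) sweep is the same loop run over the rows in reverse order, and one SSOR application is a forward sweep followed by a backward sweep.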

Recently, the class of strong *H*-matrices, including strictly or irreducibly diagonally dominant matrices (whose comparison matrices are nonsingular *M*-matrices), has been extended to encompass a wider set, known as the set of *general H-matrices* (whose comparison matrices are *M*-matrices). A partition of the *n* × *n* general *H*-matrix set *H _{n}* was obtained in [6–8]. Here, we use a different partition: the general *H*-matrix set *H _{n}* is partitioned into two mutually exclusive classes: the strong class, in which the comparison matrices are nonsingular and hence all *H*-matrices are nonsingular, and the weak class, in which the comparison matrices are singular and in which singular and nonsingular *H*-matrices coexist. As is shown in [1–5, 9–27], classical iterative methods such as the Jacobi, Gauss-Seidel, symmetric Gauss-Seidel, JOR and SOR (SSOR) iterative methods for linear systems, whose coefficient matrices are strong *H*-matrices including strictly or irreducibly diagonally dominant matrices, converge to the unique solution of (1) for any choice of the initial guess *x*^{(0)}. However, the same need not be true for some iterative methods for linear systems with weak *H*-matrices. Let us investigate the following example.

*Assume that either A or B is the coefficient matrix of linear system* (1), *where*

*It is verified that both A and B are weak H-matrices and not strong H-matrices, since their comparison matrices are both singular M-matrices. How can we establish the convergence of the FSOR-, BSOR- and SSOR-iterative methods for linear systems with this class of matrices without direct computations?*

In recent years, Zhang et al. in [28] and Zhang et al. in [29] studied the convergence of the Jacobi, Gauss-Seidel and symmetric Gauss-Seidel iterative methods for linear systems with nonstrictly diagonally dominant matrices and general *H*-matrices, and established some significant results. In this paper, the convergence of the FSOR-, BSOR- and SSOR-iterative methods will be studied for linear systems with weak *H*-matrices, and some necessary and sufficient conditions are proposed such that these iterative methods are convergent for linear systems with this class of matrices. Then some numerical examples are given to demonstrate the convergence results obtained in this paper.

The paper is organized as follows. Some notations and preliminary results about general *H*-matrices are given in Section 2. The main results of this paper are given in Section 3, where we give some necessary and sufficient conditions on convergence for the FSOR-, BSOR- and SSOR-iterative methods of linear systems with weak *H*-matrices. In Section 4, some numerical examples are given to demonstrate the convergence results obtained in this paper. Future work is discussed in Section 5.

## 2 Preliminaries

In this section we give some notations and preliminary results relating to the special matrices that are used in this paper.

ℂ^{m×n} (ℝ^{m×n}) will be used to denote the set of all *m* × *n* complex (real) matrices. ℤ denotes the set of all integers. Let *α* ⊆ 〈*n*〉 = {1, 2, ..., *n*} ⊂ ℤ. For nonempty index sets *α*, *β* ⊆ 〈*n*〉, *A*(*α*, *β*) is the submatrix of *A* ∈ ℂ^{n×n} with row indices in *α* and column indices in *β*. The submatrix *A*(*α*, *α*) is abbreviated to *A*(*α*).

*Let A* ∈ ℂ^{n×n}, *α* ⊂ 〈*n*〉 *and α′* = 〈*n*〉 − *α*. *If A*(*α*) *is nonsingular, the matrix*

*A*/*α* = *A*(*α′*) − *A*(*α′*, *α*)[*A*(*α*)]^{−1}*A*(*α*, *α′*)

*is called the Schur complement of A with respect to A*(*α*), *where the indices in both α and α′ are arranged in increasing order*. We shall confine ourselves to nonsingular *A*(*α*) as far as *A*/*α* is concerned.
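The construction above can be checked numerically. The sketch below forms *A*/*α* for a hypothetical 4 × 4 matrix with *α* = {1, 2} and verifies the classical determinant identity det *A* = det *A*(*α*) · det(*A*/*α*); the matrix entries are illustrative.

```python
# Form the Schur complement A/alpha = A(alpha') - A(alpha', alpha)
# [A(alpha)]^{-1} A(alpha, alpha') for alpha = {1, 2} and check
# det(A) = det(A(alpha)) * det(A/alpha).

def det(M):
    """Determinant by Laplace expansion (fine for tiny matrices)."""
    n = len(M)
    if n == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(n))

def inv2(M):
    """Inverse of a 2x2 matrix."""
    d = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    return [[M[1][1] / d, -M[0][1] / d], [-M[1][0] / d, M[0][0] / d]]

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

A = [[4.0, 1.0, 2.0, 0.0],
     [1.0, 3.0, 0.0, 1.0],
     [2.0, 0.0, 5.0, 1.0],
     [0.0, 1.0, 1.0, 4.0]]

E = [row[:2] for row in A[:2]]     # A(alpha), nonsingular here
B = [row[2:] for row in A[:2]]     # A(alpha, alpha')
C = [row[:2] for row in A[2:]]     # A(alpha', alpha)
F = [row[2:] for row in A[2:]]     # A(alpha')

CEiB = matmul(matmul(C, inv2(E)), B)
schur = [[F[i][j] - CEiB[i][j] for j in range(2)] for i in range(2)]

print(abs(det(A) - det(E) * det(schur)))  # ~ 0 up to rounding
```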

For *A* = (*a _{ij}*) ∈ ℂ^{m×n} and *B* = (*b _{ij}*) ∈ ℂ^{m×n}, *A* ∘ *B* = (*a _{ij}b _{ij}*) ∈ ℂ^{m×n} denotes *the Hadamard product* of the matrices *A* and *B*. A matrix *A* = (*a _{ij}*) ∈ ℝ^{n×n} is called *nonnegative* if *a _{ij}* ≥ 0 for all *i*, *j* ∈ 〈*n*〉.

*A matrix A* = (*a _{ij}*) ∈ ℝ^{n×n} *is called a Z-matrix if a _{ij}* ≤ 0 *for all i* ≠ *j*. We will use *Z _{n}* to denote the set of all *n* × *n* *Z*-matrices. *A matrix A* = (*a _{ij}*) ∈ *Z _{n}* *is called an M-matrix if A can be expressed in the form A* = *sI* − *B*, *where B* ≥ 0 *and s* ≥ *ρ*(*B*), *the spectral radius of B. If s* > *ρ*(*B*), *A is called a nonsingular M-matrix; if s* = *ρ*(*B*), *A is called a singular M-matrix*. *M _{n}* will be used to denote the set of all *n* × *n* *M*-matrices; the sets of all *n* × *n* nonsingular *M*-matrices and of all *n* × *n* singular *M*-matrices are denoted accordingly. It is easy to see that these two subsets are disjoint and that their union is *M _{n}*. (9)

*The comparison matrix of a given matrix A* = (*a _{ij}*) ∈ ℂ^{n×n}, *denoted by μ*(*A*) = (*μ _{ij}*), *is defined by*

*μ _{ij}* = |*a _{ii}*| *if i* = *j*, *and μ _{ij}* = −|*a _{ij}*| *if i* ≠ *j*.

Clearly, *μ*(*A*) ∈ *Z _{n}* for any matrix *A* ∈ ℂ^{n×n}. The set of *equimodular matrices* associated with *A* is denoted by *ω*(*A*) = {*B* ∈ ℂ^{n×n} : *μ*(*B*) = *μ*(*A*)}. Note that both *A* and *μ*(*A*) are in *ω*(*A*). A matrix *A* = (*a _{ij}*) ∈ ℂ^{n×n} is called a *general H-matrix* if *μ*(*A*) ∈ *M _{n}* (see [1]). If *μ*(*A*) is a nonsingular *M*-matrix, *A* is called a *strong H-matrix*; if *μ*(*A*) is a singular *M*-matrix, *A* is called a *weak H-matrix*.
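A small sketch of the comparison matrix: for the made-up diagonally equipotent matrix below, every row of *μ*(*A*) sums to zero, so *μ*(*A*) annihilates the all-ones vector; *μ*(*A*) is then a singular *M*-matrix and *A* is a weak *H*-matrix. The example matrix is illustrative only.

```python
# mu(A): diagonal entries |a_ii|, off-diagonal entries -|a_ij|. For a
# diagonally equipotent A, each row of mu(A) sums to zero, so mu(A) is a
# singular M-matrix (it maps the all-ones vector to zero).

def comparison_matrix(A):
    n = len(A)
    return [[abs(A[i][j]) if i == j else -abs(A[i][j]) for j in range(n)]
            for i in range(n)]

A = [[1.0, 0.5, -0.5],
     [-0.5, 1.0, -0.5],
     [-0.5, -0.5, 1.0]]   # each row: |a_ii| equals the off-diagonal sum

mu = comparison_matrix(A)
row_sums = [sum(row) for row in mu]
print(row_sums)  # [0.0, 0.0, 0.0]
```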

*H _{n}* will be used to denote the set of all *n* × *n* general *H*-matrices; the sets of all *n* × *n* strong *H*-matrices and of all *n* × *n* weak *H*-matrices are denoted accordingly. Similarly to equalities (9), the strong and weak classes are disjoint and their union is *H _{n}*.

For *n* ≥ 2, an *n* × *n* complex matrix *A* is *reducible* if there exists an *n* × *n* permutation matrix *P* such that *PAP*^{T} is block upper triangular with diagonal blocks *A*_{11} and *A*_{22}, where *A*_{11} is an *r* × *r* submatrix and *A*_{22} is an (*n* − *r*) × (*n* − *r*) submatrix, with 1 ≤ *r* < *n*. If no such permutation matrix exists, then *A* is called *irreducible*. If *A* is a 1 × 1 complex matrix, then *A* is irreducible if its single entry is nonzero, and reducible otherwise.

*A matrix A* ∈ ℂ^{n×n} *is called diagonally dominant by row if*

|*a _{ii}*| ≥ Σ_{j≠i} |*a _{ij}*| (13)

*holds for all i* ∈ 〈*n*〉. *If the inequality in* (13) *holds strictly for all i* ∈ 〈*n*〉, *A is called strictly diagonally dominant by row. If A is irreducible and the inequality in* (13) *holds strictly for at least one i* ∈ 〈*n*〉, *A is called irreducibly diagonally dominant by row. If* (13) *holds with equality for all i* ∈ 〈*n*〉, *A is called diagonally equipotent by row*.

*D _{n}* (*SD _{n}*, *ID _{n}*) and *DE _{n}* will be used to denote the sets of all *n* × *n* (strictly, irreducibly) diagonally dominant matrices and the set of all *n* × *n* diagonally equipotent matrices, respectively.
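A row-by-row check of inequality (13) can be sketched as follows; the labels and the example matrix are illustrative. A matrix is diagonally equipotent exactly when every row is labelled equipotent, and diagonally dominant when no row fails the test.

```python
# Classify each row of A against (13): 'strict' if |a_ii| exceeds the
# off-diagonal absolute row sum, 'equipotent' if equal, 'none' otherwise.

def classify_rows(A):
    labels = []
    for i, row in enumerate(A):
        off = sum(abs(v) for j, v in enumerate(row) if j != i)
        d = abs(row[i])
        if d > off:
            labels.append('strict')
        elif d == off:
            labels.append('equipotent')
        else:
            labels.append('none')
    return labels

A = [[3.0, 1.0, 1.0],    # 3 > 2  : strict
     [1.0, 2.0, 1.0],    # 2 == 2 : equipotent
     [2.0, 1.0, 1.0]]    # 1 < 3  : fails (13)

print(classify_rows(A))  # ['strict', 'equipotent', 'none']
```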

*A matrix A* ∈ ℂ^{n×n} *is called generalized diagonally dominant if there exist positive constants α _{i}*, *i* ∈ 〈*n*〉, *such that*

*α _{i}*|*a _{ii}*| ≥ Σ_{j≠i} *α _{j}*|*a _{ij}*| (14)

*holds for all i* ∈ 〈*n*〉. *If the inequality in* (14) *holds strictly for all i* ∈ 〈*n*〉, *A is called generalized strictly diagonally dominant. If* (14) *holds with equality for all i* ∈ 〈*n*〉, *A is called generalized diagonally equipotent*.

We denote the sets of all *n* × *n* generalized (strictly) diagonally dominant matrices and the set of all *n* × *n* generalized diagonally equipotent matrices by *GD _{n}* (*GSD _{n}*) and *GDE _{n}*, respectively.
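The weights *α _{i}* in (14) act as a diagonal scaling: *A* is generalized diagonally dominant exactly when *D*^{−1}*AD* is diagonally dominant by row for *D* = diag(*α*_{1}, ..., *α _{n}*). A minimal sketch with a made-up 2 × 2 matrix and hand-picked weights:

```python
# Check (14) for a given positive weight vector alpha. The matrix below is
# not diagonally dominant (row 0 gives 2 < 3), but alpha = (2, 1) makes it
# generalized diagonally dominant.

def is_gdd(A, alpha):
    n = len(A)
    return all(alpha[i] * abs(A[i][i]) >=
               sum(alpha[j] * abs(A[i][j]) for j in range(n) if j != i)
               for i in range(n))

A = [[2.0, 3.0],
     [1.0, 2.0]]
print(is_gdd(A, [1.0, 1.0]))  # False: row 0 gives 2 < 3
print(is_gdd(A, [2.0, 1.0]))  # True : 4 >= 3 and 2 >= 2
```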

(See [30–32]). *Let A* ∈ *D _{n}* (*GD _{n}*). *Then A is nonsingular if and only if A has no (generalized) diagonally equipotent principal submatrices. Furthermore, if A* ∈ *D _{n}* ∩ *Z _{n}* (*GD _{n}* ∩ *Z _{n}*), *then A is a nonsingular M-matrix if and only if A has no (generalized) diagonally equipotent principal submatrices*.

(See [31, 32]).

(See [6]). *GD _{n}* ⊂ *H _{n}*.

(See [6]). *Let A* ∈ ℂ^{n×n} *be irreducible. Then A* ∈ *H _{n}* *if and only if A* ∈ *GD _{n}*.

More importantly, under the condition of reducibility, we have the following conclusion.

*Let A* ∈ ℂ^{n×n} *be reducible. Then A* ∈ *H _{n}* *if and only if, in the Frobenius normal form* (15) *of A, each irreducible diagonal square block R _{ii}* = *A*(*α _{i}*) *is generalized diagonally dominant, where P is a permutation matrix, each R _{ii}* = *A*(*α _{i}*) *is either a* 1 × 1 *zero matrix or an irreducible square matrix*, *R _{ij}* = *A*(*α _{i}*, *α _{j}*), *i* ≠ *j*, *i*, *j* = 1, 2, ..., *s*; *further*, *α _{i}* ∩ *α _{j}* = ∅ *for i* ≠ *j*, *and the union of the α _{i} is* 〈*n*〉.

(See [6, 29]). *A matrix A* ∈ *H _{n} is a weak H-matrix if and only if in the Frobenius normal form* (15) *of A, each irreducible diagonal square block R _{ii} is generalized diagonally dominant and has at least one generalized diagonally equipotent principal submatrix*.

The following definitions and lemmas come from [28, 29].

*Let E*^{iθ} = (*e*^{iθ_{rs}}) ∈ ℂ^{n×n}, *where e*^{iθ_{rs}} = cos *θ _{rs}* + *i* sin *θ _{rs}*, *and θ _{rs}* ∈ ℝ *for all r*, *s* ∈ 〈*n*〉. *The matrix E*^{iθ} = (*e*^{iθ_{rs}}) ∈ ℂ^{n×n} *is called a π-ray pattern matrix if*

1. *θ _{rs}* + *θ _{sr}* = 2*kπ holds for all r*, *s* ∈ 〈*n*〉, *r* ≠ *s*, *where k* ∈ ℤ;

2. *θ _{rs}* − *θ _{rt}* = *θ _{ts}* + (2*k* + 1)*π holds for all r*, *s*, *t* ∈ 〈*n*〉 *and r* ≠ *s*, *r* ≠ *t*, *t* ≠ *s*, *where k* ∈ ℤ;

3. *θ _{rr}* = 0 *for all r* ∈ 〈*n*〉.

*Any complex matrix A* = (*a _{rs}*) ∈ ℂ^{n×n} *has the following form*:

*A* = *e*^{iη}(|*A*| ∘ *E*^{iθ}), (16)

*where η* ∈ ℝ, |*A*| = (|*a _{rs}*|) ∈ ℝ^{n×n} *and E*^{iθ} = (*e*^{iθ_{rs}}) ∈ ℂ^{n×n} *with θ _{rs}* ∈ ℝ *and θ _{rr}* = 0 *for r*, *s* ∈ 〈*n*〉. *The matrix E*^{iθ} *is called a ray pattern matrix of the matrix A. If the ray pattern matrix E*^{iθ} *of the matrix A given in* (16) *is a π-ray pattern matrix, then A is called a π-ray matrix*.

The set of all *n* × *n* *π*-ray matrices is denoted accordingly. Obviously, if a matrix *A* is a *π*-ray matrix, then so is *ξA* for any nonzero *ξ* ∈ ℂ.

*Let a matrix A* = *D _{A}* − *L _{A}* − *U _{A}* = (*a _{rs}*) ∈ ℂ^{n×n} *with D _{A}* = diag(*a*_{11}, *a*_{22}, ..., *a _{nn}*). *Then A is a π-ray matrix if and only if there exists an n* × *n unitary diagonal matrix D such that D*^{−1}*AD* = *e*^{iη}(|*D _{A}*| − |*L _{A}*| − |*U _{A}*|) *for some η* ∈ ℝ.

*A matrix Â* = *A*(*i*_{1}, *i*_{2}, ..., *i _{k}*), 1 < *k* ≤ *n*, *such that the corresponding equality holds, where D _{A_k}* = diag(*a*_{i_{1}i_{1}}, ..., *a*_{i_{k}i_{k}}).

## 3 Main results

In numerical linear algebra, the successive overrelaxation (SOR) iterative method, introduced simultaneously by Frankel (1950) [33] and Young (1950) [34], is a famous iterative method used to solve a linear system of equations. This iterative method was also called the *accelerated Liebmann method* by Frankel (1950) [33] and many subsequent researchers, while Kahan (1958) [35] called it the *extrapolated Gauss-Seidel method*; it is often called the method of *systematic overrelaxation* as well. Frankel showed that, for the numerical solution of the Dirichlet problem for a rectangle, the SOR iterative method with a suitably chosen relaxation factor gave substantially larger (by an order of magnitude) asymptotic rates of convergence than the point Jacobi and point Gauss-Seidel iterative methods. Young (1950) [34] and Young (1954) [36] showed that these conclusions hold more generally for matrices satisfying his definition of property A, and that these results can be rigorously applied to the iterative solution of matrix equations arising from discrete approximations to a large class of elliptic partial differential equations on general regions.

Later, this iterative method was developed into three iterative methods, i.e., the forward, backward and symmetric successive overrelaxation (FSOR-, BSOR- and SSOR-) iterative methods. Though these iterative methods can be applied to any matrix with nonzero diagonal entries, convergence is only guaranteed if the coefficient matrix is strictly or irreducibly diagonally dominant, Hermitian positive definite, a strong *H*-matrix, or a consistently ordered *p*-cyclic matrix. Some classic results on the convergence of SOR iterative methods are as follows:

(See [13–15, 23]). *Let A* ∈ *SD _{n}* ∪ *ID _{n}*. *Then ρ*(*H*_{FSOR}) < 1, *ρ*(*H*_{BSOR}) < 1 *and ρ*(*H*_{SSOR}) < 1, *where H*_{FSOR}, *H*_{BSOR} *and H*_{SSOR} *are defined in* (5), (6) *and* (7), *respectively, and therefore the sequence* {*x*^{(i)}} *generated by the FSOR-, BSOR- and SSOR-scheme* (3), *respectively, converges to the unique solution of* (1) *for any choice of the initial guess x*^{(0)}.

(See [37]). *Let A be a strong H-matrix and ω* ∈ (0, 1). *Then the sequence* {*x*^{(i)}} *generated by the FSOR-, BSOR- and SSOR-scheme* (3), *respectively, converges to the unique solution of* (1) *for any choice of the initial guess x*^{(0)}.

(See [4, 5, 38]). *Let A* ∈ ℂ^{n × n}*be a Hermitian positive definite matrix. Then the sequence* {*x*^{(i)}} *generated by the FSOR-, BSOR- and SSOR-scheme* (3) *, respectively, converges to the unique solution of* (1) *for any choice of the initial guess x*^{(0)}.

In this section, we mainly study convergence of SOR iterative methods for the linear systems with weak *H*-matrices.

*Let A* = *I* − *L* − *U* ∈ *DE _{n}* *be irreducible. Then for ω* ∈ (0, 1), *ρ*(*H*_{FSOR(ω)}) < 1 *and ρ*(*H*_{BSOR(ω)}) < 1, *i.e. the sequence* {*x*^{(i)}} *generated by the FSOR and BSOR iterative schemes* (5) *and* (6) *converges to the unique solution of* (1) *for any choice of the initial guess x*^{(0)}, *if and only if A is not a π-ray matrix*.

Suppose, on the contrary, that there exists an eigenvalue *λ* of *H*_{FSOR(ω)} such that |*λ*| ≥ 1. According to equality (5),

det[*λ*(*I* − *ωL*) − (1 − *ω*)*I* − *ωU*] = 0. (17)

Since |*λ*| ≥ 1 and *ω* ∈ (0, 1), *λ* − 1 + *ω* ≠ 0. Hence, equality (17) yields (18). Let *λ* = *μe*^{iθ} with *μ* ≥ 1 and *θ* ∈ ℝ. Then 1 − cos *θ* ≥ 0, *μ* − 1 ≥ 0 and *μ*^{2} − 1 ≥ 0. Again, *ω* ∈ (0, 1) shows 1 − *ω* > 0. Therefore, we have (19)–(22). Since *A* = *I* − *L* − *U* ∈ *DE _{n}* is irreducible, both *L* ≠ 0 and *U* ≠ 0. As a result, (19) and (22) indicate that *A*(*λ*, *ω*) ∈ *D _{n}* and is irreducible. Again, since *λ* is an eigenvalue of *H*_{FSOR(ω)}, *A*(*λ*, *ω*) is singular, and hence there exists a unitary diagonal matrix *D* such that (24) holds. Since *A* = *I* − *L* − *U* ∈ *DE _{n}*, (25) follows.

Because |*λ*| ≥ 1 and *ω* ∈ (0, 1), the latter equality of (25) implies |*λ*| = 1. As a result, (25) and (19) show *A*(*λ*, *ω*) = *I* − *L* − *U* = *A*. From (23), it is easy to see that *ρ*(*H*_{FSOR(ω)}) < 1, i.e. the FSOR method converges.

Conversely, suppose that *A* is a *π*-ray matrix. Then there exists an *n* × *n* unitary diagonal matrix *D* such that *A* = *I* − *L* − *U* = *I* − *D*|*L*|*D*^{−1} − *D*|*U*|*D*^{−1}, and hence (27) holds.

Since *A* = *I* − *L* − *U* ∈ *DE _{n}* and is irreducible, Lemma 2.4 shows that *I* − |*L*| − |*U*| is singular and hence *det*(*I* − |*L*| − |*U*|) = 0. Therefore, (27) yields *det*(*I* − *H*_{FSOR(ω)}) = 0, which shows that 1 is an eigenvalue of *H*_{FSOR(ω)}. Then, we have that *ρ*(*H*_{FSOR(ω)}) ≥ 1, i.e. the FSOR method does not converge. This is a contradiction. Thus, the assumption is incorrect and the conclusion follows.

In the same way, we can prove that for *ω* ∈ (0, 1), the BSOR method converges, i.e. *ρ*(*H*_{BSOR(ω)}) < 1, if and only if *A* is not a *π*-ray matrix. □
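A numerical illustration under the hypotheses above: the 3 × 3 matrix below is irreducible and diagonally equipotent with unit diagonal, and its determinant is 0.75 ≠ 0, so it cannot be a *π*-ray matrix; FSOR with *ω* ∈ (0, 1) is then expected to converge. The matrix and right-hand side are made up for illustration.

```python
# FSOR for A = I - L - U with unit diagonal: one forward sweep per iteration.
# A below is irreducible, diagonally equipotent (each off-diagonal absolute
# row sum equals 1) and nonsingular (det = 0.75), so not a pi-ray matrix.

def fsor_sweep(A, b, x, w):
    n = len(A)
    for i in range(n):
        residual = b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)
        x[i] = (1.0 - w) * x[i] + w * residual   # a_ii = 1, no division needed
    return x

A = [[1.0, 0.5, -0.5],
     [-0.5, 1.0, -0.5],
     [-0.5, -0.5, 1.0]]
xstar = [1.0, 2.0, 3.0]
b = [sum(A[i][j] * xstar[j] for j in range(3)) for i in range(3)]

x = [0.0, 0.0, 0.0]
for _ in range(400):
    fsor_sweep(A, b, x, w=0.5)

print(x)  # converges to [1.0, 2.0, 3.0]
```

Flipping the sign of the (1, 2) entry back to −0.5 would make *A* equal to *I* − |*L*| − |*U*|, a singular *π*-ray matrix, and the same sweep would no longer contract.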

*Let A* = *I* − *L* − *U* = (*a _{ij}*) ∈ *D _{n}* *with a _{ii}* ≠ 0 *for all i* ∈ 〈*n*〉. *Then for ω* ∈ (0, 1), *ρ*(*H*_{FSOR(ω)}) < 1 *and ρ*(*H*_{BSOR(ω)}) < 1, *i.e. the sequence* {*x*^{(i)}} *generated by the FSOR and BSOR iterative schemes* (5) *and* (6) *converges to the unique solution of* (1) *for any choice of the initial guess x*^{(0)}, *if and only if A is nonsingular*.

The conclusion of this theorem is not difficult to obtain from Lemma 2.12, Theorem 3.1 and Theorem 3.4. □

In what follows, we will propose the convergence result for the SSOR iterative method for linear systems with weak *H*-matrices, including nonstrictly diagonally dominant matrices. Firstly, the following lemma is given for the convenience of the proof.

([32]). *Let A be the* 2*n* × 2*n block matrix with blocks E and U in its first block row and L and F in its second, where E, F, L, U* ∈ ℂ^{n×n} *and E is nonsingular. Then A*/*E is nonsingular if and only if A is nonsingular, where A*/*E* = *F* − *LE*^{−1}*U is the Schur complement of A with respect to E*.

*Let A* = *I* − *L* − *U* ∈ *DE _{n}* *be irreducible. Then for ω* ∈ (0, 1), *ρ*(*H*_{SSOR(ω)}) < 1, *i.e. the sequence* {*x*^{(i)}} *generated by the SSOR iterative scheme* (7) *converges to the unique solution of* (1) *for any choice of the initial guess x*^{(0)}, *if and only if A is not a π-ray matrix*.

Suppose, on the contrary, that there exists an eigenvalue *λ* of *H*_{SSOR(ω)} such that |*λ*| ≥ 1. According to equalities (5), (6) and (7),

where *R* = *I* − *ωL*, *S* = *λI* − *ωU*, *T* = *λ*^{−1}[(1 − *ω*)*I* + *ωL*], *V* = (1 − *ω*)*I* + *ωU*, and *B*(*λ*, *ω*) = *ℬ*/*R* is the Schur complement of *ℬ* with respect to the principal submatrix *R*. Since *B*(*λ*, *ω*) = *ℬ*/*R* is singular, Lemma 3.6 shows that *ℬ* is also singular. Again, since *A* is irreducible, both *L* ≠ 0 and *U* ≠ 0. As a result, *ℬ* is also irreducible. Since *A* = *I* − *L* − *U* = (*a _{ij}*) ∈ *DE _{n}* with unit diagonal entries, *ω* ∈ (0, 1) and |*λ*| ≥ 1, the corresponding row-dominance inequalities hold for all *i* ∈ *N* = {1, 2, ..., *n*}. Immediately, we obtain *ℬ* ∈ *D*_{2n}. Again, *ℬ* is irreducible and singular, and hence Lemma 2.4 shows that |*λ*| = 1. Let *λ* = *e*^{iθ} with *θ* ∈ ℝ. Then there exists a 2*n* × 2*n* unitary diagonal matrix *D̃* = diag(*D*, *D*), with *D* an *n* × *n* unitary diagonal matrix, such that (34) holds.

The latter two equalities of (34) indicate that *θ* = 2*kπ*, where *k* is an integer, and thus *λ* = *e*^{i2kπ} = 1; moreover, there exists an *n* × *n* unitary diagonal matrix *D* such that *D*^{−1}*AD* = *I* − |*L*| − |*U*|, i.e. *A* is a *π*-ray matrix. Hence, if *A* is not a *π*-ray matrix, then *ρ*(*H*_{SSOR(ω)}) < 1, i.e. the SSOR method converges.

Conversely, suppose that *A* is a *π*-ray matrix. Then there exists an *n* × *n* unitary diagonal matrix *D* such that *A* = *I* − *L* − *U* = *I* − *D*|*L*|*D*^{−1} − *D*|*U*|*D*^{−1}.

Let *C*(*ω*) = (*I* − *ω*|*U*|) − [(1 − *ω*)*I* + *ω*|*L*|](*I* − *ω*|*L*|)^{−1}[(1 − *ω*)*I* + *ω*|*U*|], and let *R̂* = *I* − *ω*|*L*|, *Ŝ* = *I* − *ω*|*U*|, *T̂* = (1 − *ω*)*I* + *ω*|*L*|, *V̂* = (1 − *ω*)*I* + *ω*|*U*|, with *ℬ̂* built from these blocks as before. Then *C*(*ω*) = *ℬ̂*/*R̂* is the Schur complement of *ℬ̂* with respect to the principal submatrix *R̂*. (33) and (37) show that *ℬ̂* is singular. Therefore, Lemma 3.6 yields that *C*(*ω*) is singular, i.e. *det*(*I* − *H*_{SSOR(ω)}) = 0, which shows that 1 is an eigenvalue of *H*_{SSOR(ω)}. Then, we have that *ρ*(*H*_{SSOR(ω)}) ≥ 1, i.e. the SSOR method does not converge. This is a contradiction. Thus, the assumption is incorrect and the conclusion follows. □

*Let A* = *I* − *L* − *U* = (*a _{ij}*) ∈ *D _{n}* *with a _{ii}* ≠ 0 *for all i* ∈ 〈*n*〉. *Then for ω* ∈ (0, 1), *ρ*(*H*_{SSOR(ω)}) < 1, *i.e. the sequence* {*x*^{(i)}} *generated by the SSOR iterative scheme* (7) *converges to the unique solution of* (1) *for any choice of the initial guess x*^{(0)}, *if and only if A is nonsingular*.

The conclusion of this theorem is easy to obtain from Lemma 2.12, Theorem 3.1 and Theorem 3.7. □

*Let A* = *I* − *L* − *U* ∈ *GDE _{n}* *be irreducible. Then for ω* ∈ (0, 1), *ρ*(*H*_{FSOR(ω)}) < 1 and *ρ*(*H*_{BSOR(ω)}) < 1, *i.e. the sequence* {*x*^{(i)}} *generated by the FSOR and BSOR iterative schemes* (5) *and* (6) *converges to the unique solution of* (1) *for any choice of the initial guess x*^{(0)}, *if and only if A is not a π-ray matrix*.

Let *A* = *I* − *L* − *U* ∈ *GDE _{n}*. Then, from Definition 2.2, there exists a positive diagonal matrix *D* such that *Â* = *D*^{−1}*AD* = *I* − *D*^{−1}*LD* − *D*^{−1}*UD* = *I* − *L̂* − *Û* ∈ *DE _{n}* and is irreducible, where *L̂* = *D*^{−1}*LD* and *Û* = *D*^{−1}*UD*. Theorem 3.4 shows that for *ω* ∈ (0, 1), *ρ*(*Ĥ*_{FSOR(ω)}) < 1 and *ρ*(*Ĥ*_{BSOR(ω)}) < 1, i.e. the sequence {*x*^{(i)}} generated by the FSOR and BSOR iterative schemes (5) and (6) converges to the unique solution of (1) for any choice of the initial guess *x*^{(0)}, if and only if *Â* is not a *π*-ray matrix. Since *D* is a positive diagonal matrix, it follows from Definition 2.9 and Definition 2.10 that *Â* is a *π*-ray matrix if and only if *A* is. Consequently, for *ω* ∈ (0, 1), *ρ*(*H*_{FSOR(ω)}) = *ρ*(*Ĥ*_{FSOR(ω)}) < 1 and *ρ*(*H*_{BSOR(ω)}) = *ρ*(*Ĥ*_{BSOR(ω)}) < 1, i.e. the sequence {*x*^{(i)}} generated by the FSOR and BSOR iterative schemes (5) and (6) converges to the unique solution of (1) for any choice of the initial guess *x*^{(0)}, if and only if *A* is not a *π*-ray matrix. □

*Let A* = *I* − *L* − *U* ∈ *GDE _{n}* *be irreducible. Then for ω* ∈ (0, 1), *ρ*(*H*_{SSOR(ω)}) < 1, *i.e. the sequence* {*x*^{(i)}} *generated by the SSOR iterative scheme* (7) *converges to the unique solution of* (1) *for any choice of the initial guess x*^{(0)}, *if and only if A is not a π-ray matrix*.

Therefore, similarly as in the proof of Theorem 3.9, we have by Theorem 3.7 that for *ω* ∈ (0, 1), *ρ*(*H*_{SSOR(ω)}) = *ρ*(*Ĥ*_{SSOR(ω)}) < 1, i.e. the sequence {*x*^{(i)}} generated by the SSOR iterative scheme (7) converges to the unique solution of (1) for any choice of the initial guess *x*^{(0)}, if and only if *A* is not a *π*-ray matrix. □

*Let A* = (*a _{ij}*) ∈ ℂ^{n×n} *be a weak H-matrix with a _{ii}* ≠ 0 *for all i* ∈ 〈*n*〉. *Then for ω* ∈ (0, 1), *ρ*(*H*_{FSOR(ω)}) < 1 *and ρ*(*H*_{BSOR(ω)}) < 1, *i.e. the sequence* {*x*^{(i)}} *generated by the FSOR and BSOR iterative schemes* (5) *and* (6) *converges to the unique solution of* (1) *for any choice of the initial guess x*^{(0)}, *if and only if A is nonsingular*.

Since *a _{ii}* ≠ 0 for all *i* ∈ 〈*n*〉, *H*_{FSOR(ω)} and *H*_{BSOR(ω)} exist, and Theorem 2.7 shows that there exists some *i* ∈ 〈*n*〉 such that the diagonal square block *R _{ii}* in the Frobenius normal form (15) of *A* is irreducible and generalized diagonally equipotent. Consider such a block *R _{ii}*; direct computations give the corresponding FSOR and BSOR iteration matrices of *R _{ii}*. Since *R _{ii}* ∈ *GDE _{n}* is irreducible, Theorem 3.9 shows that for *ω* ∈ (0, 1), if the sequence {*x*^{(i)}} generated by the FSOR and BSOR iterative schemes (5) and (6) converges to the unique solution of (1) for any choice of the initial guess *x*^{(0)}, then *R _{ii}* ∈ *GDE _{n}* is not a *π*-ray matrix, so *R _{ii}* = *A*(*α _{i}*) is nonsingular. However, it is easy to obtain that every diagonal square block *R _{jj}* = *A*(*α _{j}*) that is not generalized diagonally equipotent is also nonsingular. Therefore, *A* is nonsingular. This completes the proof of the necessity.

Let us prove the sufficiency. Assume that *A* is nonsingular; then each diagonal square block *R _{ii}* in the Frobenius normal form (15) of *A* is nonsingular for all *i* ∈ 〈*n*〉. Some blocks *R _{ii}* in the Frobenius normal form (15) of *A* are irreducible and generalized diagonally equipotent, and each of the other diagonal square blocks *R _{jj}* is generalized strictly diagonally dominant or generalized irreducibly diagonally dominant. Again, each irreducible and generalized diagonally equipotent diagonal square block *R _{ii}* is nonsingular. Lemma 2.12 yields that the sequence {*x*^{(i)}} generated by the FSOR and BSOR iterative schemes (5) and (6) converges to the unique solution of (1) for any choice of the initial guess *x*^{(0)}. This completes the proof. □

*Let A* = (*a _{ij}*) ∈ ℂ^{n×n} *be a weak H-matrix with a _{ii}* ≠ 0 *for all i* ∈ 〈*n*〉. *Then for ω* ∈ (0, 1), *ρ*(*H*_{SSOR(ω)}) < 1, *i.e. the sequence* {*x*^{(i)}} *generated by the SSOR iterative scheme* (7) *converges to the unique solution of* (1) *for any choice of the initial guess x*^{(0)}, *if and only if A is nonsingular*.

Similar to the proof of Theorem 3.11, the conclusion of this theorem is easy to obtain from Theorem 3.1, Theorem 3.2 and Theorem 3.10. □

## 4 Numerical examples

In this section, some numerical examples are given to demonstrate the convergence results obtained in this paper.

*Let the coefficient matrix A of linear system* (1) *be given by the following n* × *n matrix*

*A _{n}* ∈ *DE _{n}* and is irreducible. Since *A _{n}* is not a *π*-ray matrix, *A _{n}* is nonsingular. Therefore, Theorem 3.4 and Theorem 3.7 show that for *ω* ∈ (0, 1), the sequence {*x*^{(i)}} generated by the FSOR-, BSOR- and SSOR-iterative schemes (5), (6) and (7) converges to the unique solution of (1) for any choice of the initial guess *x*^{(0)}.

In what follows, the spectral radii *ρ*_{1} = *ρ*(*H*_{FSOR(ω)}), *ρ*_{2} = *ρ*(*H*_{BSOR(ω)}) and *ρ*_{3} = *ρ*(*H*_{SSOR(ω)}) of the FSOR-, BSOR- and SSOR-iterative matrices for *A*_{100} were computed in Matlab 7.0 on a PC to verify that the results above are true. The computational results are shown in Table 1.

The comparison of spectral radii of FSOR-, BSOR- and SSOR-iterative matrices with different ω

| ω | 0 | 0.1 | 0.2 | 0.3 | 0.4 | 0.5 | 0.6 | 0.7 | 0.8 | 0.9 | 1 |
|---|---|---|---|---|---|---|---|---|---|---|---|
| ρ_{1} | 1 | 0.900 | 0.800 | 0.702 | 0.606 | 0.514 | 0.430 | 0.358 | 0.514 | 0.890 | 1.284 |
| ρ_{2} | 1 | 0.900 | 0.800 | 0.702 | 0.606 | 0.514 | 0.430 | 0.358 | 0.514 | 0.890 | 1.284 |
| ρ_{3} | 1 | 0.808 | 0.624 | 0.449 | 0.310 | 0.119 | 0.324 | 0.780 | 1.586 | 3.457 | 1.640 |

It is shown in Table 1 and Fig. 1 that: (i) the changes of *ρ*(*H*_{FSOR(ω)}) and *ρ*(*H*_{BSOR(ω)}) are identical as *ω* increases. They gradually decrease from 1 to 0.358 with *ω* increasing from 0 to 0.7, while they gradually increase from 0.358 to 1.284 with *ω* increasing from 0.7 to 1. This shows that the optimal value of *ω* should be *ω*_{opt} ∈ (0.50, 0.80) such that the SOR iterative method converges faster to the unique solution of (1) for any choice of the initial guess *x*^{(0)}.

(ii) *ρ*(*H*_{SSOR(ω)}) performs better than *ρ*(*H*_{FSOR(ω)}) and *ρ*(*H*_{BSOR(ω)}). It decreases quickly from 1 to 0.119 with *ω* increasing from 0 to 0.5, while it increases fast from 0.119 to 1.640 with *ω* increasing from 0.5 to 1. The optimal value of *ω* should be *ω*_{opt} ∈ (0.40, 0.60) such that the SSOR iterative method converges faster to the unique solution of (1) for any choice of the initial guess *x*^{(0)}. It follows from Table 1 and Fig. 1 that the SSOR iterative method is superior to the other two SOR iterative methods.
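Spectral radii like those in Table 1 can be estimated without forming *H*_{FSOR(ω)} explicitly: with *b* = 0 and unit diagonal, one forward sweep multiplies the current vector by *H*_{FSOR(ω)}, so the average per-sweep contraction factor approximates *ρ*(*H*_{FSOR(ω)}). The 3 × 3 diagonally equipotent matrix below is a made-up stand-in for the paper's *A*_{100}.

```python
# Estimate rho(H_FSOR(w)) for a small A = I - L - U (unit diagonal): iterate
# the homogeneous sweep x <- H_FSOR(w) x and take the geometric mean of the
# max-norm decay over many sweeps.

def fsor_sweep0(A, x, w):
    """One forward SOR sweep for A = I - L - U with b = 0 (in place)."""
    n = len(A)
    for i in range(n):
        residual = -sum(A[i][j] * x[j] for j in range(n) if j != i)
        x[i] = (1.0 - w) * x[i] + w * residual

def rho_fsor(A, w, iters=200):
    """Average per-sweep contraction factor, a proxy for rho(H_FSOR(w))."""
    x = [1.0, 0.7, -0.3]                      # arbitrary nonzero start vector
    n0 = max(abs(v) for v in x)
    for _ in range(iters):
        fsor_sweep0(A, x, w)
    n1 = max(abs(v) for v in x)
    return (n1 / n0) ** (1.0 / iters)

A = [[1.0, 0.5, -0.5],
     [-0.5, 1.0, -0.5],
     [-0.5, -0.5, 1.0]]   # irreducible, diagonally equipotent, nonsingular

for w in (0.1, 0.5, 0.9):
    print(w, rho_fsor(A, w))   # each estimate lies below 1
```

The same trick with a backward sweep (or a forward-then-backward pair) estimates *ρ*(*H*_{BSOR(ω)}) and *ρ*(*H*_{SSOR(ω)}), which is how a table like Table 1 could be reproduced for any test matrix.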

*Let the coefficient matrix A of linear system* (1) *be given by the following* 6 × 6 *matrix*

*A* ∈ *DE*_{6} is reducible, and there is no singular principal submatrix *A _{k}* (*k* < 6) in *A*, so *A* is nonsingular. Therefore, Theorem 3.5 and Theorem 3.8 show that for *ω* ∈ (0, 1), the sequence {*x*^{(i)}} generated by the FSOR-, BSOR- and SSOR-iterative schemes (5), (6) and (7) converges to the unique solution of (1) for any choice of the initial guess *x*^{(0)}.

The computations in Matlab 7.0 on a PC yield some comparison results on the spectral radii of the SOR iterative matrices; see Table 2.

The comparison of spectral radii of FSOR-, BSOR- and SSOR-iterative matrices with different ω

| ω | 0 | 0.1 | 0.2 | 0.3 | 0.4 | 0.5 | 0.6 | 0.7 | 0.8 | 0.9 | 1 |
|---|---|---|---|---|---|---|---|---|---|---|---|
| ρ_{1} | 1 | 0.924 | 0.846 | 0.765 | 0.679 | 0.620 | 0.573 | 0.527 | 0.510 | 0.788 | 1.092 |
| ρ_{2} | 1 | 0.924 | 0.849 | 0.774 | 0.700 | 0.628 | 0.561 | 0.498 | 0.449 | 0.712 | 1.000 |
| ρ_{3} | 1 | 0.849 | 0.697 | 0.543 | 0.417 | 0.332 | 0.351 | 0.342 | 0.443 | 0.532 | 0.585 |

It is shown in Table 2 and Fig. 2 that (i) the change of *ρ*(*H*_{FSOR(ω)}) is similar to that of *ρ*(*H*_{BSOR(ω)}). They gradually decrease to their minimal values and then gradually increase with *ω* increasing from 0 to 1. This shows that the optimal value of *ω* for the FSOR- and BSOR-iterative methods should be *ω*_{opt} ∈ (0.70, 0.90). But *ρ*(*H*_{FSOR(ω_{opt})}) > *ρ*(*H*_{BSOR(ω_{opt})}) shows that the BSOR iterative method converges much faster than the FSOR method to the unique solution of (1) for any choice of the initial guess *x*^{(0)}.

(ii) Similarly as in Example 1, *ρ*(*H*_{SSOR(ω)}) performs better than *ρ*(*H*_{FSOR(ω)}) and *ρ*(*H*_{BSOR(ω)}). It decreases quickly from 1 to 0.332 with *ω* increasing from 0 to 0.5, increases briefly near *ω* = 0.5, and then decreases gradually to 0.342 at *ω* = 0.7. Finally, it increases quickly from 0.342 to 0.585 with *ω* increasing from 0.7 to 1. The optimal value of *ω* should be *ω*_{opt} ∈ (0.40, 0.60) such that the SSOR iterative method converges faster to the unique solution of (1) for any choice of the initial guess *x*^{(0)}. It follows from Table 2 and Fig. 2 that the SSOR iterative method is superior to the other two SOR iterative methods.

## 5 Further work

In this paper, some necessary and sufficient conditions are proposed such that SOR iterative methods, including the FSOR, BSOR and SSOR iterative methods, are convergent for linear systems with weak *H*-matrices. The class of weak *H*-matrices with singular comparison matrices is a subclass of general *H*-matrices [29] and still raises some theoretical problems. In particular, the convergence of AOR iterative methods for this class of matrices is an open problem and is a focus of our further work.

This work is supported by the National Natural Science Foundations of China (Nos. 11201362, 11601409, 11271297), the Natural Science Foundation of Shaanxi Province of China (No. 2016JM1009) and the Science Foundation of the Education Department of Shaanxi Province of China (No. 14JK1305).

## References

- [1]↑
Berman A., Plemmons R.J.: Nonnegative Matrices in the Mathematical Sciences. Academic, New York, 1979.

- [2]
Demmel J.W.: Applied Numerical Linear Algebra. SIAM Press, 1997.

- [3]
Golub G.H., Van Loan C.F.: Matrix Computations, third ed. Johns Hopkins University Press, Baltimore, 1996.

- [6]↑
Bru R., Corral C., Gimenez I. and Mas J.: Classes of general H-matrices. Linear Algebra Appl., 429(2008), 2358-2366.

- [7]
Bru R., Corral C., Gimenez I. and Mas J.: Schur complements of general *H*-matrices. Numer. Linear Algebra Appl., 16(2009), 935-974. - [8]↑
Bru R., Gimenez I. and Hadjidimos A.: Is *A* ∈ ℂ^{n,n} a general *H*-matrix? Linear Algebra Appl., 436(2012), 364-380. - [9]↑
Cvetković L.J., Herceg D.: Convergence theory for AOR method. Journal of Computational Mathematics, 8(1990), 128-134. - [10]
Darvishi M.T., Hessari P.: On convergence of the generalized AOR method for linear systems with diagonally dominant coefficient matrices. Applied Mathematics and Computation, 176(2006), 128-133.

- [11]
Evans D.J., Martins M.M.: On the convergence of the extrapolated AOR method. Internat. J. Computer Math, 43(1992), 161-171.

- [12]
Gao Z.X., Huang T.Z.: Convergence of AOR method. Applied Mathematics and Computation, 176(2006), 134-140.

- [13]↑
Hadjidimos A.: Accelerated overrelaxation method. Mathematics of Computation. 32(1978), 149-157.

- [14]
James K.R., Riha W.: Convergence Criteria for Successive Overrelaxation. SIAM Journal on Numerical Analysis. 12(1975), 137-143.

- [15]↑
James K.R.: Convergence of Matrix Iterations Subject to Diagonal Dominance. SIAM Journal on Numerical Analysis. 10(1973), 478-484.

- [16]
Li W.: On nekrasov matrices. Linear Algebra Appl., 281(1998), 87-96.

- [17]
Martins M.M.: On an Accelerated Overrelaxation Iterative Method for Linear Systems With Strictly Diagonally Dominant Matrix. Mathematics of Computation, 35(1980), 1269-1273.

- [18]
Ortega J.M., Plemmons R.J.: Extension of the Ostrowski-Reich theorem for SOR iterations. Linear Algebra Appl., 28(1979), 177-191.

- [19]
Plemmons R.J.:

*M*–matrix characterization I: Nonsingular*M*–matrix. Linear Algebra Appl., 18(1977), 175-188. - [20]
Song Y.Z.: On the convergence of the MAOR method, Journal of Computational and Applied Mathematics, 79(1997), 299-317.

- [21]
Song Y.Z.: On the convergence of the generalized AOR method. Linear Algebra and its Applications, 256(1997), 199-218.

- [22]
Tian G.X., Huang T.Z., Cui S.Y.: Convergence of generalized AOR iterative method for linear systems with strictly diagonally dominant matrices. Journal of Computational and Applied Mathematics, 213(2008), 240-247.

- [24]
Wang X.M.: Convergence for the MSOR iterative method applied to H-matrices. Applied Numerical Mathematics, 21(1996), 469-479.

- [25]
Wang X.M.: Convergence theory for the general GAOR type iterative method and the MSOR iterative method applied to H-matrices. Linear Algebra and its Applications, 250(1997), 1-19.

- [26]
Xiang S.H., Zhang S.L.: A convergence analysis of block accelerated over-relaxation iterative methods for weak block H-matrices to partition *π*. Linear Algebra and its Applications, 418(2006), 20-32.

- [28]↑
Zhang C.Y., Xu F.M., Xu Z.B., Li J.C.: General H-matrices and their Schur complements. Frontiers of Mathematics in China, 9(2014), 1141-1168.

- [29]↑
Zhang C.Y., Ye D., Zhong C.L., Luo S.H.: Convergence on Gauss-Seidel iterative methods for linear systems with general H-matrices. Electronic Journal of Linear Algebra, 30(2015), 843-870.

- [30]↑
Zhang C.Y. and Li Y.T.: Diagonally Dominant Matrices and the Determination of H-matrices and M-matrices. Guangxi Sciences, 12(2005), 1161-164.

- [31]↑
Zhang C.Y., Xu C.X., Li Y.T.: The Eigenvalue Distribution on Schur Complements of H matrices, Linear Algebra Appl., 422(2007), 250-264.

- [32]↑
Zhang C.Y., Luo S.H., Xu C.X., Jiang H.Y.: Schur complements of generally diagonally dominant matrices and criterion for irreducibility of matrices. Electronic Journal of Linear Algebra, 18(2009), 69-87.

- [33]↑
Frankel S.P.: Convergence rates of iterative treatments of partial differential equations. Math. Tables Aids Comput., 4(1950), 65-75.

- [34]↑
Young D.M.: Iterative methods for solving partial differential equations of elliptic type. Doctoral Thesis, Harvard University, Cambridge, MA, 1950.

- [35]↑
Kahan W.: Gauss-Seidel methods of solving large systems of linear equations. Doctoral Thesis, University of Toronto, Toronto, Canada, 1958.

- [36]↑
Young D.M.: Iterative methods for solving partial differential equations of elliptic type. Trans. Amer. Math. Soc. 76(1954) 92-111.

- [37]↑
Neumaier A. and Varga R.S.: Exact convergence and divergence domains for the symmetric successive overrelaxation iterative (SSOR) method applied to H -matrices. Linear Algebra Appl., 58(1984), 261-272.

- [38]↑
Meurant G.: Computer Solution of Large Linear Systems. Studies in Mathematics and its Applications, Vol. 28, North-Holland Publishing Co., Amsterdam, 1999.