Jianxing Zhao and Caili Sang

# Some new bounds of the minimum eigenvalue for the Hadamard product of an M-matrix and an inverse M-matrix

De Gruyter | Published online: February 13, 2016

# Abstract

Some convergent sequences of lower bounds of the minimum eigenvalue for the Hadamard product of a nonsingular M-matrix B and the inverse of a nonsingular M-matrix A are given by using Brauer's theorem. It is proved that these sequences are monotone increasing, and numerical examples are given to show that they can reach the true value of the minimum eigenvalue in some cases. The results in this paper improve some known results.

MSC 2010: 15A06; 15A18; 15A42

## 1 Introduction

Throughout, for a positive integer n, N denotes the set {1, 2, …, n}, and $\mathbb{R}^{n\times n}$ ($\mathbb{C}^{n\times n}$) denotes the set of all n × n real (complex) matrices.

A matrix A = [aij] ∈ $\mathbb{R}^{n\times n}$ is called a nonsingular M-matrix if aij ≤ 0, i, j ∈ N, i ≠ j, A is nonsingular and A⁻¹ ≥ 0 (see [1, p.133]). Denote by Mn the set of all n × n nonsingular M-matrices.

If A is a nonsingular M-matrix, then A has a positive eigenvalue equal to τ(A) ≡ [ρ(A⁻¹)]⁻¹, where ρ(A⁻¹) is the Perron eigenvalue of the nonnegative matrix A⁻¹. It is easy to prove that τ(A) = min{|λ| : λ ∈ σ(A)}, where σ(A) denotes the spectrum of A (see [2, p.357]).
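These two characterizations of τ(A) are easy to check numerically. The following sketch (in Python with NumPy; the 3 × 3 matrix is an illustrative example of our own, not one from the paper) computes τ(A) both ways:

```python
import numpy as np

# Illustrative 3x3 nonsingular M-matrix (strictly diagonally dominant,
# non-positive off-diagonal entries); not a matrix from the paper.
A = np.array([[ 4.0, -1.0, -1.0],
              [-2.0,  5.0, -1.0],
              [-1.0, -1.0,  3.0]])

# tau(A) = [rho(A^{-1})]^{-1}, the reciprocal of the Perron root of A^{-1}
tau_via_perron = 1.0 / max(abs(np.linalg.eigvals(np.linalg.inv(A))))

# tau(A) = min{ |lam| : lam in sigma(A) }
tau_via_spectrum = min(abs(np.linalg.eigvals(A)))

print(tau_via_perron, tau_via_spectrum)  # the two characterizations agree
```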

A matrix A is called reducible if there exists a nonempty proper subset I ⊂ N such that aij = 0 for all i ∈ I, j ∉ I. If A is not reducible, then we call A irreducible (see [3, p.128]).

For two real matrices A = [aij] and B = [bij] of the same size, the Hadamard product of A and B is defined as the matrix A ∘ B = [aij bij]. If A and B are two nonsingular M-matrices, then it was proved in [4, Proposition 3] that A ∘ B⁻¹ is also a nonsingular M-matrix.
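This closure property can be verified numerically with the standard M-matrix test (non-positive off-diagonal entries and a nonnegative inverse). The two matrices below are illustrative examples of our own choosing, not matrices from the paper:

```python
import numpy as np

# Two illustrative nonsingular M-matrices (not from the paper).
A = np.array([[ 4.0, -1.0, -1.0],
              [-2.0,  5.0, -1.0],
              [-1.0, -1.0,  3.0]])
B = np.array([[ 3.0, -1.0,  0.0],
              [-1.0,  4.0, -2.0],
              [ 0.0, -1.0,  2.0]])

H = A * np.linalg.inv(B)  # Hadamard product A o B^{-1} (elementwise product)

n = A.shape[0]
offdiag_nonpositive = all(H[i, j] <= 1e-12
                          for i in range(n) for j in range(n) if i != j)
inverse_nonnegative = bool(np.all(np.linalg.inv(H) >= -1e-12))
print(offdiag_nonpositive, inverse_nonnegative)  # expected True by [4, Prop. 3]
```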

Let A = [aij] ∈ $\mathbb{R}^{n\times n}$ with aii ≠ 0. For i, j, k ∈ N, j ≠ i, denote

$$d_j=\frac{\sum_{k\neq j}|a_{jk}|}{|a_{jj}|},\qquad s_{ji}=\frac{|a_{ji}|+\sum_{k\neq j,i}|a_{jk}|d_k}{|a_{jj}|},\qquad s_i=\max_{j\neq i}\{s_{ij}\};$$

$$u_{ji}=\frac{|a_{ji}|+\sum_{k\neq j,i}|a_{jk}|s_{ki}}{|a_{jj}|},\qquad u_i=\max_{j\neq i}\{u_{ij}\}.$$
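In code, these quantities can be computed directly from the definitions. The sketch below (0-based indices in code versus the paper's 1-based indices; the matrix is an illustrative example of our own) also exhibits the chain 1 > dj ≥ sji ≥ uji ≥ 0 established in Lemma 2.1:

```python
import numpy as np

def quantities(A):
    """Compute d_j, s_{ji}, s_i, u_{ji}, u_i as defined above (0-based)."""
    n = A.shape[0]
    Aa = np.abs(A)
    d = np.array([(Aa[j].sum() - Aa[j, j]) / Aa[j, j] for j in range(n)])
    s = np.zeros((n, n))
    for j in range(n):
        for i in range(n):
            if i != j:
                s[j, i] = (Aa[j, i] + sum(Aa[j, k] * d[k]
                           for k in range(n) if k not in (j, i))) / Aa[j, j]
    u = np.zeros((n, n))
    for j in range(n):
        for i in range(n):
            if i != j:
                u[j, i] = (Aa[j, i] + sum(Aa[j, k] * s[k, i]
                           for k in range(n) if k not in (j, i))) / Aa[j, j]
    s_i = np.array([max(s[i, j] for j in range(n) if j != i) for i in range(n)])
    u_i = np.array([max(u[i, j] for j in range(n) if j != i) for i in range(n)])
    return d, s, u, s_i, u_i

# Illustrative strictly row diagonally dominant M-matrix (not from the paper).
A = np.array([[ 4.0, -1.0, -1.0],
              [-2.0,  5.0, -1.0],
              [-1.0, -1.0,  3.0]])
d, s, u, s_i, u_i = quantities(A)
print(d, s_i, u_i)
```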

Recently, some lower bounds for the minimum eigenvalue of the Hadamard product of an M-matrix and its inverse have been proposed. Let A = [aij] ∈ Mn. It was proved in [5] that

$$0<\tau(A\circ A^{-1})\leq 1.$$

Subsequently, Fiedler and Markham [4] gave a lower bound on τ(A ∘ A⁻¹),

$$\tau(A\circ A^{-1})\geq\frac{1}{n},$$

and conjectured that

$$\tau(A\circ A^{-1})\geq\frac{2}{n}.$$

Chen [6], Song [7] and Yong [8] independently proved this conjecture.

In 2007, Li et al. [9] improved the conjectured bound $\tau(A\circ A^{-1})\geq\frac{2}{n}$ in the case when A⁻¹ is a doubly stochastic matrix and gave the following result (i.e., [9, Theorem 3.1]): Let A = [aij] ∈ Mn and let A⁻¹ be a doubly stochastic matrix. Then

$$\tau(A\circ A^{-1})\geq\min_{i\in N}\frac{a_{ii}-s_i\sum_{j\neq i}|a_{ij}|}{1+\sum_{j\neq i}s_{ji}}.\tag{1}$$

In 2013, Zhou et al.  also obtained (1) under the same condition.

In 2015, Chen [11] gave the following result (i.e., [11, Theorem 3.2]): Let A = [aij] ∈ Mn and let A⁻¹ = [αij] be a doubly stochastic matrix. Then

$$\tau(A\circ A^{-1})\geq\min_{i\neq j}\frac12\Big\{a_{ii}\alpha_{ii}+a_{jj}\alpha_{jj}-\Big[(a_{ii}\alpha_{ii}-a_{jj}\alpha_{jj})^2+4u_iu_j\alpha_{ii}\alpha_{jj}\Big(\sum_{k\neq i}|a_{ki}|\Big)\Big(\sum_{k\neq j}|a_{kj}|\Big)\Big]^{\frac12}\Big\},\tag{2}$$

and it was proved that

$$\min_{i\neq j}\frac12\Big\{a_{ii}\alpha_{ii}+a_{jj}\alpha_{jj}-\Big[(a_{ii}\alpha_{ii}-a_{jj}\alpha_{jj})^2+4u_iu_j\alpha_{ii}\alpha_{jj}\Big(\sum_{k\neq i}|a_{ki}|\Big)\Big(\sum_{k\neq j}|a_{kj}|\Big)\Big]^{\frac12}\Big\}\geq\min_{i\in N}\frac{a_{ii}-s_i\sum_{j\neq i}|a_{ij}|}{1+\sum_{j\neq i}s_{ji}},$$

i.e., under this condition the bound in (2) is sharper than the bound in (1).
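As a concrete check of (2), the sketch below (based on our reading of the formula; not code from the paper) evaluates the bound for the n = 4 member of the circulant family used later in Example 4.5, whose inverse is doubly stochastic. In this case the bound (2) turns out to be tight:

```python
import numpy as np

# n = 4 member of the Example 4.5 family: 2 on the diagonal, -1 on the
# superdiagonal and in position (n, 1); its inverse is doubly stochastic.
n = 4
A = 2.0 * np.eye(n)
for i in range(n):
    A[i, (i + 1) % n] = -1.0

Aa = np.abs(A)
d = np.array([(Aa[j].sum() - Aa[j, j]) / Aa[j, j] for j in range(n)])
s = np.zeros((n, n))
for j in range(n):
    for i in range(n):
        if i != j:
            s[j, i] = (Aa[j, i] + sum(Aa[j, k] * d[k]
                       for k in range(n) if k not in (j, i))) / Aa[j, j]
u = np.zeros((n, n))
for j in range(n):
    for i in range(n):
        if i != j:
            u[j, i] = (Aa[j, i] + sum(Aa[j, k] * s[k, i]
                       for k in range(n) if k not in (j, i))) / Aa[j, j]
u_i = np.array([max(u[i, j] for j in range(n) if j != i) for i in range(n)])

alpha = np.linalg.inv(A)
col = np.array([Aa[:, i].sum() - Aa[i, i] for i in range(n)])  # sum_{k!=i}|a_ki|

# Right-hand side of (2)
bound = min(
    0.5 * (A[i, i] * alpha[i, i] + A[j, j] * alpha[j, j]
           - np.sqrt((A[i, i] * alpha[i, i] - A[j, j] * alpha[j, j]) ** 2
                     + 4 * u_i[i] * u_i[j] * alpha[i, i] * alpha[j, j]
                     * col[i] * col[j]))
    for i in range(n) for j in range(n) if i != j)

tau = min(abs(np.linalg.eigvals(A * alpha)))  # true tau(A o A^{-1})
print(bound, tau)  # both 0.8: the bound is attained for this matrix
```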

In this paper, we present some convergent sequences of lower bounds of τ(B ∘ A⁻¹) and τ(A ∘ A⁻¹), which improve (2). Numerical examples show that these sequences can reach the true value of τ(A ∘ A⁻¹) in some cases.

## 2 Some notations and lemmas

In this section, we first introduce some notation that will be used in the subsequent proofs.

Let A = [aij] ∈ $\mathbb{R}^{n\times n}$ with aii ≠ 0. For i, j, k ∈ N, j ≠ i, and t = 1, 2, …, denote

$$h_i=\max_{j\neq i}\left\{\frac{|a_{ji}|}{|a_{jj}|s_{ji}-\sum_{k\neq j,i}|a_{jk}|s_{ki}}\right\},\qquad \upsilon_{ji}^{(0)}=\frac{|a_{ji}|+\sum_{k\neq j,i}|a_{jk}|s_{ki}h_i}{|a_{jj}|},$$

$$p_{ji}^{(t)}=\frac{|a_{ji}|+\sum_{k\neq j,i}|a_{jk}|\upsilon_{ki}^{(t-1)}}{|a_{jj}|},\qquad p_i^{(t)}=\max_{j\neq i}\{p_{ij}^{(t)}\},$$

$$h_i^{(t)}=\max_{j\neq i}\left\{\frac{|a_{ji}|}{|a_{jj}|p_{ji}^{(t)}-\sum_{k\neq j,i}|a_{jk}|p_{ki}^{(t)}}\right\},\qquad \upsilon_{ji}^{(t)}=\frac{|a_{ji}|+\sum_{k\neq j,i}|a_{jk}|p_{ki}^{(t)}h_i^{(t)}}{|a_{jj}|}.$$
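The recursion above can be implemented directly. The sketch below (0-based indices, our own illustrative strictly row diagonally dominant M-matrix) computes $p_{ji}^{(t)}$ for t = 1, 2, 3; Lemma 2.1 predicts that these matrices decrease entrywise:

```python
import numpy as np

def recursion(A, T):
    """Compute p_{ji}^{(t)} for t = 1..T following the definitions above
    (0-based indices); a sketch for an illustrative matrix."""
    n = A.shape[0]
    Aa = np.abs(A)
    d = np.array([(Aa[j].sum() - Aa[j, j]) / Aa[j, j] for j in range(n)])
    s = np.zeros((n, n))
    for j in range(n):
        for i in range(n):
            if i != j:
                s[j, i] = (Aa[j, i] + sum(Aa[j, k] * d[k]
                           for k in range(n) if k not in (j, i))) / Aa[j, j]

    def offsum(M, j, i):  # sum_{k != j,i} |a_{jk}| * M[k, i]
        return sum(Aa[j, k] * M[k, i] for k in range(n) if k not in (j, i))

    h = np.array([max(Aa[j, i] / (Aa[j, j] * s[j, i] - offsum(s, j, i))
                      for j in range(n) if j != i) for i in range(n)])
    v = np.zeros((n, n))  # v = upsilon^{(0)}
    for j in range(n):
        for i in range(n):
            if i != j:
                v[j, i] = (Aa[j, i] + h[i] * offsum(s, j, i)) / Aa[j, j]

    ps = []
    for t in range(1, T + 1):
        p = np.zeros((n, n))
        for j in range(n):
            for i in range(n):
                if i != j:
                    p[j, i] = (Aa[j, i] + offsum(v, j, i)) / Aa[j, j]
        ht = np.array([max(Aa[j, i] / (Aa[j, j] * p[j, i] - offsum(p, j, i))
                           for j in range(n) if j != i) for i in range(n)])
        v = np.zeros((n, n))  # v = upsilon^{(t)}
        for j in range(n):
            for i in range(n):
                if i != j:
                    v[j, i] = (Aa[j, i] + ht[i] * offsum(p, j, i)) / Aa[j, j]
        ps.append(p)
    return ps

A = np.array([[ 4.0, -1.0, -1.0],
              [-2.0,  5.0, -1.0],
              [-1.0, -1.0,  3.0]])
p1, p2, p3 = recursion(A, 3)
print(p1, p2, p3)
```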
Lemma 2.1.

If A = [aij] ∈ $\mathbb{R}^{n\times n}$ is strictly row diagonally dominant, then, for all i, j ∈ N, j ≠ i, t = 1, 2, …,

(a) $1>d_j\geq s_{ji}\geq u_{ji}\geq\upsilon_{ji}^{(0)}\geq p_{ji}^{(1)}\geq\upsilon_{ji}^{(1)}\geq p_{ji}^{(2)}\geq\upsilon_{ji}^{(2)}\geq\cdots\geq p_{ji}^{(t)}\geq\upsilon_{ji}^{(t)}\geq\cdots\geq 0$;

(b) $1\geq h_i\geq 0$, $1\geq h_i^{(t)}\geq 0$.

Proof. Since A is a strictly row diagonally dominant matrix, that is, $|a_{jj}|>\sum_{k\neq j}|a_{jk}|$, j ∈ N, we obviously have 1 > dj ≥ 0, j ∈ N. By the definitions of dj and sji, we have 1 > dj ≥ sji ≥ 0, j, i ∈ N, j ≠ i. Then, by the definitions of sji and uji, we have sji ≥ uji, j, i ∈ N, j ≠ i. Hence,

$$\frac{|a_{ji}|}{|a_{jj}|s_{ji}-\sum_{k\neq j,i}|a_{jk}|s_{ki}}=\frac{|a_{jj}|s_{ji}-\sum_{k\neq j,i}|a_{jk}|d_k}{|a_{jj}|s_{ji}-\sum_{k\neq j,i}|a_{jk}|s_{ki}}\leq 1,$$

and from the definition of hi we have 0 ≤ hi ≤ 1, i ∈ N. Furthermore, by the definitions of uji and $\upsilon_{ji}^{(0)}$, we have $u_{ji}\geq\upsilon_{ji}^{(0)}$, j, i ∈ N, j ≠ i.

Since $h_i=\max\limits_{j\neq i}\left\{\dfrac{|a_{ji}|}{|a_{jj}|s_{ji}-\sum_{k\neq j,i}|a_{jk}|s_{ki}}\right\}$, i ∈ N, we have

$$h_i\geq\frac{|a_{ji}|}{|a_{jj}|s_{ji}-\sum_{k\neq j,i}|a_{jk}|s_{ki}},\quad\text{i.e.,}\quad s_{ji}h_i\geq\frac{|a_{ji}|+\sum_{k\neq j,i}|a_{jk}|s_{ki}h_i}{|a_{jj}|}=\upsilon_{ji}^{(0)},\quad j,i\in N,\ j\neq i.$$

From the definitions of $\upsilon_{ji}^{(0)}$ and $p_{ji}^{(1)}$, we have $\upsilon_{ji}^{(0)}\geq p_{ji}^{(1)}\geq 0$, j, i ∈ N, j ≠ i.

Hence,

$$\frac{|a_{ji}|}{|a_{jj}|p_{ji}^{(1)}-\sum_{k\neq j,i}|a_{jk}|p_{ki}^{(1)}}=\frac{|a_{jj}|p_{ji}^{(1)}-\sum_{k\neq j,i}|a_{jk}|\upsilon_{ki}^{(0)}}{|a_{jj}|p_{ji}^{(1)}-\sum_{k\neq j,i}|a_{jk}|p_{ki}^{(1)}}\leq 1,$$

and by the definition of $h_i^{(1)}$ we have $0\leq h_i^{(1)}\leq 1$, i ∈ N.

Since $h_i^{(1)}=\max\limits_{j\neq i}\left\{\dfrac{|a_{ji}|}{|a_{jj}|p_{ji}^{(1)}-\sum_{k\neq j,i}|a_{jk}|p_{ki}^{(1)}}\right\}$, i ∈ N, we have

$$h_i^{(1)}\geq\frac{|a_{ji}|}{|a_{jj}|p_{ji}^{(1)}-\sum_{k\neq j,i}|a_{jk}|p_{ki}^{(1)}},\quad\text{i.e.,}\quad p_{ji}^{(1)}h_i^{(1)}\geq\frac{|a_{ji}|+\sum_{k\neq j,i}|a_{jk}|p_{ki}^{(1)}h_i^{(1)}}{|a_{jj}|}=\upsilon_{ji}^{(1)},\quad j,i\in N,\ j\neq i.$$

By $0\leq h_i^{(1)}\leq 1$, i ∈ N, we have $p_{ji}^{(1)}\geq\upsilon_{ji}^{(1)}\geq 0$, j, i ∈ N, j ≠ i. From the definitions of $\upsilon_{ji}^{(1)}$ and $p_{ji}^{(2)}$, we obtain $\upsilon_{ji}^{(1)}\geq p_{ji}^{(2)}\geq 0$, j, i ∈ N, j ≠ i.

In the same way as above, we can also prove that

$$p_{ji}^{(2)}\geq\upsilon_{ji}^{(2)}\geq\cdots\geq p_{ji}^{(t)}\geq\upsilon_{ji}^{(t)}\geq\cdots\geq 0,\qquad 1\geq h_i^{(t)}\geq 0,\quad j,i\in N,\ j\neq i,\ t=2,3,\ldots$$

The proof is completed. □

Using the same technique as the proof of Lemma 2.2 in , we can obtain the following lemma.

Lemma 2.2.

If A = [aij] ∈ Mn is a strictly row diagonally dominant matrix, then A⁻¹ = [αij] exists, and

$$\alpha_{ji}\leq\frac{|a_{ji}|+\sum_{k\neq j,i}|a_{jk}|\upsilon_{ki}^{(t)}}{a_{jj}}\,\alpha_{ii}=p_{ji}^{(t+1)}\alpha_{ii},\quad j,i\in N,\ j\neq i,\ t=0,1,2,\ldots$$

Lemma 2.3.

Let A = [aij] ∈ $\mathbb{C}^{n\times n}$ and let x1, x2, …, xn be positive real numbers. Then all the eigenvalues of A lie in the region

$$\bigcup_{\substack{i,j=1\\ i\neq j}}^{n}\left\{z\in\mathbb{C}:|z-a_{ii}|\,|z-a_{jj}|\leq\Big(x_i\sum_{k\neq i}\frac{1}{x_k}|a_{ki}|\Big)\Big(x_j\sum_{k\neq j}\frac{1}{x_k}|a_{kj}|\Big)\right\}.$$
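Lemma 2.3 can be checked numerically. The sketch below uses x = (1, …, 1) and an illustrative matrix of our own, testing that every eigenvalue lies in at least one of the ovals:

```python
import numpy as np

# Numerical check of the Brauer-type region in Lemma 2.3 with all x_i = 1
# and an illustrative matrix (not from the paper).
A = np.array([[ 4.0, -1.0,  2.0],
              [-2.0,  5.0, -1.0],
              [ 1.0, -1.0,  3.0]])
n = A.shape[0]
x = np.ones(n)

# R_i = x_i * sum_{k != i} |a_{ki}| / x_k
R = np.array([x[i] * sum(abs(A[k, i]) / x[k] for k in range(n) if k != i)
              for i in range(n)])

covered = all(
    any(abs(z - A[i, i]) * abs(z - A[j, j]) <= R[i] * R[j] + 1e-9
        for i in range(n) for j in range(n) if i != j)
    for z in np.linalg.eigvals(A))
print(covered)  # every eigenvalue lies in the region
```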

## 3 Main results

In this section, we give several convergent sequences of lower bounds for τ(B ∘ A⁻¹) and τ(A ∘ A⁻¹).

Theorem 3.1.

Let A = [aij], B = [bij] ∈ Mn and A⁻¹ = [αij]. Then, for t = 1, 2, …,

$$\tau(B\circ A^{-1})\geq\min_{i\neq j}\frac12\Big\{\alpha_{ii}b_{ii}+\alpha_{jj}b_{jj}-\Big[(\alpha_{ii}b_{ii}-\alpha_{jj}b_{jj})^2+4p_i^{(t)}p_j^{(t)}\alpha_{ii}\alpha_{jj}\Big(\sum_{k\neq i}|b_{ki}|\Big)\Big(\sum_{k\neq j}|b_{kj}|\Big)\Big]^{\frac12}\Big\}=\Omega_t.\tag{3}$$

Proof. It is evident that the result holds with equality for n = 1.

We next assume that n ≥ 2. Since A ∈ Mn, there exists a positive diagonal matrix D such that D⁻¹AD is a strictly row diagonally dominant M-matrix, and

$$\tau(B\circ A^{-1})=\tau\big(D^{-1}(B\circ A^{-1})D\big)=\tau\big(B\circ(D^{-1}AD)^{-1}\big).$$

Therefore, for convenience and without loss of generality, we assume that A is a strictly row diagonally dominant matrix.

(a) First, we assume that A and B are irreducible matrices. Since A is irreducible, $0<p_i^{(t)}<1$ for any i ∈ N. Let τ(B ∘ A⁻¹) = λ. Since λ is an eigenvalue of B ∘ A⁻¹, we have $0<\lambda<b_{ii}\alpha_{ii}$. By Lemma 2.2 and Lemma 2.3, there is a pair (i, j) of positive integers with i ≠ j such that

$$\begin{aligned}|\lambda-b_{ii}\alpha_{ii}|\,|\lambda-b_{jj}\alpha_{jj}|&\leq\Big(p_i^{(t)}\sum_{k\neq i}\frac{1}{p_k^{(t)}}|b_{ki}\alpha_{ki}|\Big)\Big(p_j^{(t)}\sum_{k\neq j}\frac{1}{p_k^{(t)}}|b_{kj}\alpha_{kj}|\Big)\\&\leq\Big(p_i^{(t)}\sum_{k\neq i}\frac{1}{p_k^{(t)}}|b_{ki}|p_{ki}^{(t)}\alpha_{ii}\Big)\Big(p_j^{(t)}\sum_{k\neq j}\frac{1}{p_k^{(t)}}|b_{kj}|p_{kj}^{(t)}\alpha_{jj}\Big)\\&\leq\Big(p_i^{(t)}\sum_{k\neq i}\frac{1}{p_k^{(t)}}|b_{ki}|p_k^{(t)}\alpha_{ii}\Big)\Big(p_j^{(t)}\sum_{k\neq j}\frac{1}{p_k^{(t)}}|b_{kj}|p_k^{(t)}\alpha_{jj}\Big)\\&=\Big(p_i^{(t)}\alpha_{ii}\sum_{k\neq i}|b_{ki}|\Big)\Big(p_j^{(t)}\alpha_{jj}\sum_{k\neq j}|b_{kj}|\Big).\end{aligned}\tag{4}$$

From inequality (4), we have

$$(\lambda-b_{ii}\alpha_{ii})(\lambda-b_{jj}\alpha_{jj})\leq p_i^{(t)}p_j^{(t)}\alpha_{ii}\alpha_{jj}\Big(\sum_{k\neq i}|b_{ki}|\Big)\Big(\sum_{k\neq j}|b_{kj}|\Big).\tag{5}$$

Solving the quadratic inequality (5) for λ gives

$$\lambda\geq\frac12\Big\{\alpha_{ii}b_{ii}+\alpha_{jj}b_{jj}-\Big[(\alpha_{ii}b_{ii}-\alpha_{jj}b_{jj})^2+4p_i^{(t)}p_j^{(t)}\alpha_{ii}\alpha_{jj}\Big(\sum_{k\neq i}|b_{ki}|\Big)\Big(\sum_{k\neq j}|b_{kj}|\Big)\Big]^{\frac12}\Big\},$$

that is,

$$\begin{aligned}\tau(B\circ A^{-1})&\geq\frac12\Big\{\alpha_{ii}b_{ii}+\alpha_{jj}b_{jj}-\Big[(\alpha_{ii}b_{ii}-\alpha_{jj}b_{jj})^2+4p_i^{(t)}p_j^{(t)}\alpha_{ii}\alpha_{jj}\Big(\sum_{k\neq i}|b_{ki}|\Big)\Big(\sum_{k\neq j}|b_{kj}|\Big)\Big]^{\frac12}\Big\}\\&\geq\min_{i\neq j}\frac12\Big\{\alpha_{ii}b_{ii}+\alpha_{jj}b_{jj}-\Big[(\alpha_{ii}b_{ii}-\alpha_{jj}b_{jj})^2+4p_i^{(t)}p_j^{(t)}\alpha_{ii}\alpha_{jj}\Big(\sum_{k\neq i}|b_{ki}|\Big)\Big(\sum_{k\neq j}|b_{kj}|\Big)\Big]^{\frac12}\Big\}.\end{aligned}$$

(b) Now, assume that at least one of A and B is reducible. It is well known that a matrix in Zn = {A = [aij] ∈ $\mathbb{R}^{n\times n}$ : aij ≤ 0, i ≠ j} is a nonsingular M-matrix if and only if all its leading principal minors are positive (see condition (E17) of Theorem 6.2.3 of [1]). Denote by C = [cij] the n × n permutation matrix with c12 = c23 = ⋯ = cn−1,n = cn1 = 1 and all remaining cij zero. Then both A − ϵC and B − ϵC are irreducible nonsingular M-matrices for any positive real number ϵ small enough that all the leading principal minors of A − ϵC and B − ϵC remain positive. Substituting A − ϵC and B − ϵC for A and B in the previous case and letting ϵ → 0, the result follows by continuity. □

Theorem 3.2.

The sequence {Ωt}, t = 1, 2, …, obtained from Theorem 3.1 is monotone increasing with upper bound τ(B ∘ A⁻¹) and, consequently, is convergent.

Proof. By Lemma 2.1, we have $p_{ji}^{(t)}\geq p_{ji}^{(t+1)}\geq 0$, j, i ∈ N, j ≠ i, t = 1, 2, …, so by the definition of $p_i^{(t)}$ the sequence $\{p_i^{(t)}\}$ is monotone decreasing. Hence {Ωt} is a monotonically increasing sequence with upper bound τ(B ∘ A⁻¹), and is therefore convergent. □
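Theorems 3.1 and 3.2 can be checked numerically. The sketch below (our own illustrative M-matrices A and B; the formulas follow our reading of (3)) computes Ω₁, Ω₂, Ω₃ and compares them with the true value of τ(B ∘ A⁻¹):

```python
import numpy as np

# Numerical check of Theorems 3.1 and 3.2: a sketch based on our reading of
# the bound (3); A and B are illustrative M-matrices, not from the paper.
A = np.array([[ 4.0, -1.0, -1.0],
              [-2.0,  5.0, -1.0],
              [-1.0, -1.0,  3.0]])
B = np.array([[ 3.0, -1.0, -1.0],
              [-1.0,  4.0, -2.0],
              [-1.0, -1.0,  3.0]])
n = 3
Aa = np.abs(A)

def offsum(M, j, i):
    # sum_{k != j, i} |a_{jk}| * M[k, i]
    return sum(Aa[j, k] * M[k, i] for k in range(n) if k not in (j, i))

d = np.array([(Aa[j].sum() - Aa[j, j]) / Aa[j, j] for j in range(n)])
s = np.zeros((n, n))
for j in range(n):
    for i in range(n):
        if i != j:
            s[j, i] = (Aa[j, i] + sum(Aa[j, k] * d[k]
                       for k in range(n) if k not in (j, i))) / Aa[j, j]
h = np.array([max(Aa[j, i] / (Aa[j, j] * s[j, i] - offsum(s, j, i))
                  for j in range(n) if j != i) for i in range(n)])
v = np.zeros((n, n))  # upsilon^{(0)}
for j in range(n):
    for i in range(n):
        if i != j:
            v[j, i] = (Aa[j, i] + h[i] * offsum(s, j, i)) / Aa[j, j]

alpha = np.linalg.inv(A)
Ba = np.abs(B)
colB = np.array([Ba[:, i].sum() - Ba[i, i] for i in range(n)])

def omega(pi):
    # right-hand side of (3) for a given vector p_i^{(t)}
    return min(
        0.5 * (alpha[i, i] * B[i, i] + alpha[j, j] * B[j, j]
               - np.sqrt((alpha[i, i] * B[i, i] - alpha[j, j] * B[j, j]) ** 2
                         + 4 * pi[i] * pi[j] * alpha[i, i] * alpha[j, j]
                         * colB[i] * colB[j]))
        for i in range(n) for j in range(n) if i != j)

omegas = []
for t in range(1, 4):  # t = 1, 2, 3
    p = np.zeros((n, n))
    for j in range(n):
        for i in range(n):
            if i != j:
                p[j, i] = (Aa[j, i] + offsum(v, j, i)) / Aa[j, j]
    pi = np.array([max(p[i, j] for j in range(n) if j != i) for i in range(n)])
    omegas.append(omega(pi))
    ht = np.array([max(Aa[j, i] / (Aa[j, j] * p[j, i] - offsum(p, j, i))
                       for j in range(n) if j != i) for i in range(n)])
    for j in range(n):
        for i in range(n):
            if i != j:
                v[j, i] = (Aa[j, i] + ht[i] * offsum(p, j, i)) / Aa[j, j]

tau = min(abs(np.linalg.eigvals(B * alpha)))  # true tau(B o A^{-1})
print(omegas, tau)
```

The printed Ω values increase with t and stay below τ(B ∘ A⁻¹), as Theorems 3.1 and 3.2 predict.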

If B = A, according to Theorem 3.1, the following corollary is established.

Corollary 3.3.

Let A = [aij] ∈ Mn and A⁻¹ = [αij]. Then, for t = 1, 2, …,

$$\tau(A\circ A^{-1})\geq\min_{i\neq j}\frac12\Big\{a_{ii}\alpha_{ii}+a_{jj}\alpha_{jj}-\Big[(a_{ii}\alpha_{ii}-a_{jj}\alpha_{jj})^2+4p_i^{(t)}p_j^{(t)}\alpha_{ii}\alpha_{jj}\Big(\sum_{k\neq i}|a_{ki}|\Big)\Big(\sum_{k\neq j}|a_{kj}|\Big)\Big]^{\frac12}\Big\}=\Upsilon_t.\tag{6}$$

Remark 3.4.

We give a simple comparison between (2) and (6). According to Lemma 2.1, we know that $u_{ji}\geq p_{ji}^{(t)}$, j, i ∈ N, j ≠ i, t = 1, 2, …. Furthermore, by the definitions of $u_i$ and $p_i^{(t)}$, we have $u_i\geq p_i^{(t)}$, i ∈ N, t = 1, 2, …. Hence, for each t = 1, 2, …, the bound in (6) is at least as large as the bound in (2).

Similar to the proof of Theorem 3.1, Theorem 3.2 and Corollary 3.3, we can obtain Theorem 3.5, Theorem 3.6 and Corollary 3.7, respectively.

Theorem 3.5.

Let A = [aij], B = [bij] ∈ Mn and A⁻¹ = [αij]. Then, for t = 1, 2, …,

$$\tau(B\circ A^{-1})\geq\min_{i\neq j}\frac12\Big\{\alpha_{ii}b_{ii}+\alpha_{jj}b_{jj}-\Big[(\alpha_{ii}b_{ii}-\alpha_{jj}b_{jj})^2+4s_is_j\alpha_{ii}\alpha_{jj}\Big(\sum_{k\neq i}\frac{|b_{ki}|p_{ki}^{(t)}}{s_k}\Big)\Big(\sum_{k\neq j}\frac{|b_{kj}|p_{kj}^{(t)}}{s_k}\Big)\Big]^{\frac12}\Big\}=\Delta_t.$$

Theorem 3.6.

The sequence {Δt}, t = 1, 2, …, obtained from Theorem 3.5 is monotone increasing with upper bound τ(B ∘ A⁻¹) and, consequently, is convergent.

Corollary 3.7.

Let A = [aij] ∈ Mn and A⁻¹ = [αij]. Then, for t = 1, 2, …,

$$\tau(A\circ A^{-1})\geq\min_{i\neq j}\frac12\Big\{\alpha_{ii}a_{ii}+\alpha_{jj}a_{jj}-\Big[(\alpha_{ii}a_{ii}-\alpha_{jj}a_{jj})^2+4s_is_j\alpha_{ii}\alpha_{jj}\Big(\sum_{k\neq i}\frac{|a_{ki}|p_{ki}^{(t)}}{s_k}\Big)\Big(\sum_{k\neq j}\frac{|a_{kj}|p_{kj}^{(t)}}{s_k}\Big)\Big]^{\frac12}\Big\}=\Gamma_t.$$

Let $L_t=\max\{\Upsilon_t,\Gamma_t\}$. By Corollary 3.3 and Corollary 3.7, the following theorem is easily obtained.

Theorem 3.8.

Let A = [aij] ∈ Mn and A⁻¹ = [αij]. Then, for t = 1, 2, …,

$$\tau(A\circ A^{-1})\geq L_t.$$

## 4 Numerical examples

In this section, several numerical examples are given to verify the theoretical results.

Example 4.1.

Consider the following M-matrix:

$$A=\begin{bmatrix}
20&-1&-2&-3&-4&-1&-1&-3&-2&-2\\
-1&18&-3&-1&-1&-4&-2&-1&-3&-1\\
-2&-1&10&-1&-1&-1&0&-1&-1&-1\\
-3&-1&0&16&-4&-2&-1&-1&-1&-2\\
-1&-3&0&-2&15&-1&-1&-1&-2&-3\\
-3&-2&-1&-1&-1&12&-2&0&-1&0\\
-1&-3&-1&-1&0&-1&9&0&-1&0\\
-3&-1&-1&-4&-1&0&0&12&0&-1\\
-2&-4&-1&-1&-1&0&-1&-3&14&0\\
-3&-1&0&-1&-1&-1&0&-1&-2&11
\end{bmatrix}.$$

Since $Ae=e$ and $A^{\mathrm T}e=e$, where $e=[1,1,\ldots,1]^{\mathrm T}$, A⁻¹ is doubly stochastic. Numerical results are given in Table 1 for the total number of iterations T = 10. In fact, τ(A ∘ A⁻¹) = 0.9678.
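The doubly stochastic property and the reported value of τ(A ∘ A⁻¹) can be reproduced directly. In the sketch below the matrix is entered with the M-matrix sign pattern, i.e., non-positive off-diagonal entries:

```python
import numpy as np

# The 10x10 matrix of Example 4.1 (off-diagonal entries are non-positive).
A = np.array([
    [20, -1, -2, -3, -4, -1, -1, -3, -2, -2],
    [-1, 18, -3, -1, -1, -4, -2, -1, -3, -1],
    [-2, -1, 10, -1, -1, -1,  0, -1, -1, -1],
    [-3, -1,  0, 16, -4, -2, -1, -1, -1, -2],
    [-1, -3,  0, -2, 15, -1, -1, -1, -2, -3],
    [-3, -2, -1, -1, -1, 12, -2,  0, -1,  0],
    [-1, -3, -1, -1,  0, -1,  9,  0, -1,  0],
    [-3, -1, -1, -4, -1,  0,  0, 12,  0, -1],
    [-2, -4, -1, -1, -1,  0, -1, -3, 14,  0],
    [-3, -1,  0, -1, -1, -1,  0, -1, -2, 11]], dtype=float)

e = np.ones(10)
print(A @ e, A.T @ e)  # both all-ones vectors: A^{-1} is doubly stochastic
tau = min(abs(np.linalg.eigvals(A * np.linalg.inv(A))))
print(round(tau, 4))   # ~0.9678, as reported above
```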

Table 1.

Lower bounds of τ(A ∘ A⁻¹)

| Method | t | $L_t$ |
| --- | --- | --- |
| Corollary 2.5 of | – | 0.1401 |
| Conjecture of | – | 0.2000 |
| Theorem 3.1 of | – | 0.2519 |
| Theorem 3.2 of | – | 0.4125 |
| Theorem 3.1 of | – | 0.4471 |
| Theorem 3.2 of | – | 0.4732 |
| Corollary 3 of | – | 0.6064 |
| Theorem 3.8 | t = 1 | 0.7388 |
| | t = 2 | 0.8553 |
| | t = 3 | 0.9059 |
| | t = 4 | 0.9261 |
| | t = 5 | 0.9346 |
| | t = 6 | 0.9383 |
| | t = 7 | 0.9400 |
| | t = 8 | 0.9407 |
| | t = 9 | 0.9409 |
| | t = 10 | 0.9409 |

Remark 4.2.

Numerical results in Table 1 show that:

(a) The lower bounds obtained from Theorem 3.8 are larger than the corresponding bounds in [4, 9, 11, 13–16].

(b) The sequence obtained from Theorem 3.8 is monotone increasing.

(c) The sequence obtained from Theorem 3.8 approximates the true value of τ(A ∘ A⁻¹) effectively.

Example 4.3.

A nonsingular M-matrix A = [aij] ∈ $\mathbb{R}^{n\times n}$ whose inverse is doubly stochastic is randomly generated by Matlab 7.1 (with entries drawn from the uniform distribution on [0, 1]).

The numerical results obtained for T = 500 are listed in Table 2, where T is defined as in Example 4.1.

Remark 4.4.

Numerical results in Table 2 show that it is effective by Theorem 3.8 to estimate τ(AA−1) for large order matrices.

Table 2.

Lower bounds of τ(A ∘ A⁻¹)

| Method | t | $L_t$ (n = 200) | $L_t$ (n = 500) |
| --- | --- | --- | --- |
| Theorem 3.8 | t = 1 | 0.0319 | 0.0133 |
| | t = 30 | 0.3953 | 0.1972 |
| | t = 60 | 0.6065 | 0.3452 |
| | t = 90 | 0.7293 | 0.4647 |
| | t = 120 | 0.8016 | 0.5609 |
| | t = 150 | 0.8428 | 0.6384 |
| | t = 180 | 0.8647 | 0.7011 |
| | t = 210 | 0.8773 | 0.7519 |
| | t = 240 | 0.8844 | 0.7928 |
| | t = 270 | 0.8885 | 0.8255 |
| | t = 300 | 0.8909 | 0.8520 |
| | t = 330 | 0.8923 | 0.8734 |
| | t = 360 | 0.8930 | 0.8908 |
| | t = 390 | 0.8935 | 0.9049 |
| | t = 420 | 0.8937 | 0.9163 |
| | t = 450 | 0.8938 | 0.9249 |
| | t = 480 | 0.8939 | 0.9316 |
| | t = 500 | 0.8940 | 0.9352 |

Example 4.5.

Let A = [aij] Î ℝn×n, where a11 = a22 = · · · = an,n = 2, a12 = a23 = · · · = an–1,n = an,1 = −1, and aij = 0 elsewhere.

It is easy to see that A is a nonsingular M-matrix. Applying Theorem 3.8 with t = 1 for n = 10 and n = 100, we obtain τ(A ∘ A⁻¹) ≥ 0.7507 and τ(A ∘ A⁻¹) ≥ 0.7500, respectively. In fact, τ(A ∘ A⁻¹) = 0.7507 for n = 10 and τ(A ∘ A⁻¹) = 0.7500 for n = 100.
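For this family the value of τ(A ∘ A⁻¹) can be checked directly; the sketch below reproduces the n = 10 case:

```python
import numpy as np

# The matrix of Example 4.5 for n = 10: 2 on the diagonal, -1 on the
# superdiagonal and in position (n, 1).
n = 10
A = 2.0 * np.eye(n)
for i in range(n):
    A[i, (i + 1) % n] = -1.0

tau = min(abs(np.linalg.eigvals(A * np.linalg.inv(A))))
print(round(tau, 4))  # 0.7507, matching the value reported above
```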

Remark 4.6.

The numerical results in Example 4.5 show that the lower bound obtained from Theorem 3.8 can reach the true value of τ(A ∘ A⁻¹) in some cases.

## 5 Conclusions

In this paper, we present a convergent sequence {Lt}, t = 1, 2, …, that approximates τ(A ∘ A⁻¹). We do not give an error analysis, i.e., an estimate of how accurately these bounds can be computed; nevertheless, the numerical experiments of Example 4.3 show that the bounds continue to increase as the number of iterations grows. An error analysis appears difficult at present, and we leave this problem for future work.

# Acknowledgement

The authors are indebted to the reviewers for their valuable comments and corrections, which improved the original manuscript of this paper. This work is supported by the National Natural Science Foundation of China (Nos. 11361074, 11501141), the Foundation of the Guizhou Science and Technology Department (Grant No. 2073), the Scientific Research Foundation for the Introduction of Talents of Guizhou Minzu University (No. 15XRY003) and the Scientific Research Foundation of Guizhou Minzu University (No. 15XJS009).

### References

[1] Berman, A., Plemmons, R.J.: Nonnegative matrices in the mathematical sciences. Classics in Applied Mathematics, Vol. 9, SIAM, Philadelphia, 1994.

[2] Horn, R.A., Johnson, C.R.: Topics in matrix analysis. Cambridge University Press, 1991.

[3] Chen, J.L., Chen, X.H.: Special matrix. Tsinghua University Press, 2000.

[4] Fiedler, M., Markham, T.L.: An inequality for the Hadamard product of an M-matrix and inverse M-matrix. Linear Algebra Appl. 101 (1988), 1–8.

[5] Fiedler, M., Johnson, C.R., Markham, T.L., Neumann, M.: A trace inequality for M-matrix and the symmetrizability of a real matrix by a positive diagonal matrix. Linear Algebra Appl. 71 (1985), 81–94.

[6] Chen, S.C.: A lower bound for the minimum eigenvalue of the Hadamard product of matrices. Linear Algebra Appl. 378 (2004), 159–166.

[7] Song, Y.Z.: On an inequality for the Hadamard product of an M-matrix and its inverse. Linear Algebra Appl. 305 (2000), 99–105.

[8] Yong, X.R.: Proof of a conjecture of Fiedler and Markham. Linear Algebra Appl. 320 (2000), 167–171.

[9] Li, H.B., Huang, T.Z., Shen, S.Q., Li, H.: Lower bounds for the minimum eigenvalue of Hadamard product of an M-matrix and its inverse. Linear Algebra Appl. 420 (2007), 235–247.

[10] Zhou, D.M., Chen, G.L., Wu, G.X., Zhang, X.Y.: On some new bounds for eigenvalues of the Hadamard product and the Fan product of matrices. Linear Algebra Appl. 438 (2013), 1415–1426.

[11] Chen, F.B.: New inequalities for the Hadamard product of an M-matrix and its inverse. J. Inequal. Appl. 2015 (2015), 35.

[12] Horn, R.A., Johnson, C.R.: Matrix analysis. Cambridge University Press, 1985.

[13] Zhou, D.M., Chen, G.L., Wu, G.X., Zhang, X.Y.: Some inequalities for the Hadamard product of an M-matrix and an inverse M-matrix. J. Inequal. Appl. 2013 (2013), 16.

[14] Cheng, G.H., Tan, Q., Wang, Z.D.: Some inequalities for the minimum eigenvalue of the Hadamard product of an M-matrix and its inverse. J. Inequal. Appl. 2013 (2013), 65.

[15] Li, Y.T., Chen, F.B., Wang, D.F.: New lower bounds on eigenvalue of the Hadamard product of an M-matrix and its inverse. Linear Algebra Appl. 430 (2009), 1423–1431.

[16] Li, Y.T., Wang, F., Li, C.Q., Zhao, J.X.: Some new bounds for the minimum eigenvalue of the Hadamard product of an M-matrix and an inverse M-matrix. J. Inequal. Appl. 2013 (2013), 480.