Abstract
Using Brauer’s theorem, we derive convergent sequences of lower bounds for the minimum eigenvalue of the Hadamard product of a nonsingular M-matrix B and the inverse of a nonsingular M-matrix A. We prove that these sequences are monotone increasing, and we give numerical examples showing that they can reach the true value of the minimum eigenvalue in some cases. The results in this paper improve several known bounds.
1 Introduction
For a positive integer n, N denotes the set {1, 2, · · · , n}, and ℝn×n (ℂn×n) denotes the set of all n × n real (complex) matrices throughout.
A matrix A = [aij] ∈ ℝn×n is called a nonsingular M-matrix if aij ≤ 0, i, j ∈ N, i ≠ j, A is nonsingular and A−1 ≥ 0 (see [1, p.133]). Denote by Mn the set of all n × n nonsingular M-matrices.
If A is a nonsingular M-matrix, then there exists a positive eigenvalue of A equal to τ(A) ≡ [ρ(A−1)]−1, where ρ(A−1) is the Perron eigenvalue of the nonnegative matrix A−1. It is easy to prove that τ(A) = min{|λ| : λ ∈ σ(A)}, where σ(A) denotes the spectrum of A (see [2, p.357]).
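For readers who wish to experiment, the characterization τ(A) = [ρ(A−1)]−1 = min{|λ| : λ ∈ σ(A)} is easy to check numerically. A sketch in NumPy, where the 3 × 3 matrix is a hypothetical example (strictly diagonally dominant with nonpositive off-diagonal entries, hence a nonsingular M-matrix):

```python
import numpy as np

# Hypothetical nonsingular M-matrix: nonpositive off-diagonal entries,
# strictly row diagonally dominant positive diagonal.
A = np.array([[ 4.0, -1.0, -1.0],
              [-2.0,  5.0, -1.0],
              [-1.0, -1.0,  3.0]])

Ainv = np.linalg.inv(A)
assert (Ainv >= 0).all()          # inverse-nonnegativity: A is an M-matrix

rho = max(abs(np.linalg.eigvals(Ainv)))   # Perron eigenvalue of A^{-1}
tau_via_inverse = 1.0 / rho
tau_via_spectrum = min(abs(np.linalg.eigvals(A)))

# The two characterizations of tau(A) agree.
print(tau_via_inverse, tau_via_spectrum)
```

The agreement of the two values reflects the fact that the eigenvalues of A−1 are the reciprocals of those of A, so the largest modulus of the former corresponds to the smallest modulus of the latter.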
A matrix A is called reducible if there exists a nonempty proper subset I ⊂ N such that aij = 0 for all i ∈ I and all j ∉ I. If A is not reducible, then we call A irreducible (see [3, p.128]).
For two real matrices A = [aij] and B = [bij] of the same size, the Hadamard product of A and B is defined as the matrix A ⚬ B = [aijbij]. If A and B are two nonsingular M-matrices, then it was proved in [4, Proposition 3] that A ⚬ B−1 is also a nonsingular M-matrix.
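The closure property from [4, Proposition 3] can be spot-checked numerically. A minimal sketch, where the 2 × 2 matrices A and B are assumptions chosen for illustration:

```python
import numpy as np

# Two hypothetical nonsingular M-matrices (diagonally dominant Z-matrices).
A = np.array([[ 3.0, -1.0], [-1.0,  2.0]])
B = np.array([[ 5.0, -2.0], [-1.0,  4.0]])

H = B * np.linalg.inv(A)          # Hadamard (entrywise) product B o A^{-1}

# M-matrix checks: nonpositive off-diagonal entries, nonnegative inverse.
off = H - np.diag(np.diag(H))
assert (off <= 0).all()
assert (np.linalg.inv(H) >= 0).all()
```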
Let A = [aij] ∈ ℝn×n, aii ≠ 0. For i, j, k ∈ N, j ≠ i, denote
Recently, some lower bounds for the minimum eigenvalue of the Hadamard product of an M-matrix and its inverse have been proposed. Let A = [aij] ∈ Mn; it was proved in [5] that
Subsequently, Fiedler and Markham [4] gave a lower bound on τ(A ⚬ A−1),
and conjectured that
Chen [6], Song [7] and Yong [8] have independently proved this conjecture.
In 2007, Li et al. [9] improved the conjectured bound:
In 2013, Zhou et al. [10] also obtained (1) under the same condition.
In 2015, Chen gave the following result (i.e., [11, Theorem 3.2]): Let A = [aij] ∈ Mn and A−1 = [αij] be a doubly stochastic matrix. Then
and obtained
i.e., under this condition, the bound of (2) is better than the one of (1).
In this paper, we present some convergent sequences of the lower bounds of τ(B ⚬ A−1) and τ(A ⚬ A−1), which improve (2). Numerical examples show that these sequences could reach the true value of τ(A ⚬ A−1) in some cases.
2 Some notations and lemmas
In this section, we first introduce some notation that will be used in the subsequent proofs.
Let
Lemma 2.1. If A = [aij] ∈ ℝn×n is strictly row diagonally dominant, then, for all i, j ∈ N, j ≠ i, t = 1, 2, … ,
(a) 1 > dj ≥ sji ≥ uji ≥ υji(0) ≥ pji(1) ≥ υji(1) ≥ pji(2) ≥ υji(2) ≥ · · · ≥ pji(t) ≥ υji(t) ≥ · · · ≥ 0;
(b) 1 ≥ hi ≥ 0, 1 ≥ hi(t) ≥ 0.
Proof. Since A is a strictly row diagonally dominant matrix, that is,
from the definition of hi, we have 0 ≤ hi ≤ 1, i ∈ N. Furthermore, by the definitions of uji, υji(0), we have uji ≥ υji(0), j, i ∈ N, j ≠ i.
Since
From the definitions of υji(0), pji(1), we have υji(0) ≥ pji(1) ≥ 0, j, i ∈ N, j ≠ i.
Hence,
by the definition of hi(1), we have 0 ≤ hi(1) ≤ 1, i ∈ N.
Since
By 0 ≤ hi(1) ≤ 1, i ∈ N, we have pji(1) ≥ υji(1) ≥ 0, j, i ∈ N, j ≠ i. From the definitions of υji(1), pji(2), we obtain υji(1) ≥ pji(2) ≥ 0, j, i ∈ N, j ≠ i.
In the same way as above, we can also prove that
The proof is completed. □
Using the same technique as the proof of Lemma 2.2 in [11], we can obtain the following lemma.
Lemma 2.2. If A = [aij] ∈ Mn is a strictly row diagonally dominant matrix, then A−1 = [αij] exists, and
Lemma 2.3. ([12]) Let A = [aij] ∈ ℂn×n and x1, x2, · · · , xn be positive real numbers. Then all the eigenvalues of A lie in the region
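Brauer's theorem can be verified numerically in its unweighted form (weights xi = 1, an assumption made here for simplicity; the lemma allows arbitrary positive weights): every eigenvalue lies in some Cassini oval |z − aii||z − ajj| ≤ RiRj, where Ri is the i-th deleted absolute row sum. A sketch on a random matrix:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))      # arbitrary test matrix

# Deleted absolute row sums R_i = sum_{k != i} |a_ik|.
R = np.sum(np.abs(A), axis=1) - np.abs(np.diag(A))

for lam in np.linalg.eigvals(A):
    in_some_oval = any(
        abs(lam - A[i, i]) * abs(lam - A[j, j]) <= R[i] * R[j] + 1e-12
        for i in range(5) for j in range(5) if i != j)
    assert in_some_oval               # Brauer: union of ovals of Cassini
```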
3 Main results
In this section, we give several convergent sequences for τ(B ⚬ A−1) and τ(A ⚬ A−1).
Theorem 3.1. Let A = [aij], B = [bij] ∈ Mn and A−1 = [αij]. Then, for t = 1, 2, …,
Proof. It is evident that the result holds with equality for n = 1.
We next assume that n ≥ 2. Since A ∈ Mn, there exists a positive diagonal matrix D such that D−1AD is a strictly row diagonally dominant M-matrix, and
Therefore, for convenience and without loss of generality, we assume that A is a strictly row diagonally dominant matrix.
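This reduction can be checked numerically. The sketch below uses one standard choice, D = diag(A−1e) (an assumption; any positive diagonal D with the stated property works): since Ad = e > 0 for d = A−1e, the similarity D−1AD is strictly row diagonally dominant, while τ(·) is unchanged. The matrix A below is a hypothetical example whose second row is not diagonally dominant.

```python
import numpy as np

# Hypothetical M-matrix; row 2 is not strictly diagonally dominant (2 < 2.5).
A = np.array([[ 2.0, -1.0,  0.0],
              [-2.5,  2.0,  0.0],
              [-1.0, -1.0,  3.0]])

d = np.linalg.solve(A, np.ones(3))        # d = A^{-1} e > 0, so A d = e > 0
As = np.diag(1.0 / d) @ A @ np.diag(d)    # D^{-1} A D, a diagonal similarity

# Strict row diagonal dominance of the scaled matrix.
off_sums = np.sum(np.abs(As), axis=1) - np.abs(np.diag(As))
assert (np.diag(As) > off_sums).all()

# tau is a similarity invariant: spectra of A and D^{-1} A D coincide.
tau = lambda M: min(abs(np.linalg.eigvals(M)))
assert abs(tau(A) - tau(As)) < 1e-8
```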
(a) First, we assume that A and B are irreducible matrices. Since A is irreducible, 0 < pi(t) < 1 for any i ∈ N. Let τ(B ⚬ A−1) = λ. Since λ is an eigenvalue of B ⚬ A−1, we have 0 < λ < biiαii. By Lemma 2.2 and Lemma 2.3, there is a pair (i, j) of positive integers with i ≠ j such that
From inequality (4), we have
Thus, (5) is equivalent to
that is
(b) Now, assume that at least one of A and B is reducible. It is well known that a matrix in Zn = {A = [aij] ∈ ℝn×n : aij ≤ 0, i ≠ j} is a nonsingular M-matrix if and only if all its leading principal minors are positive (see condition (E17) of Theorem 6.2.3 of [1]). Denote by C = [cij] the n × n permutation matrix with c12 = c23 = · · · = cn–1,n = cn1 = 1 and all other entries zero. Then, for any positive real number ϵ small enough that all the leading principal minors of A − ϵC and B − ϵC remain positive, both A − ϵC and B − ϵC are irreducible nonsingular M-matrices. Substituting A − ϵC and B − ϵC for A and B in the previous case and letting ϵ → 0, the result follows by continuity. □
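The perturbation step can be illustrated numerically. The reducible matrix below is a hypothetical example, and irreducibility is tested via the standard criterion that an n × n matrix M is irreducible if and only if (I + |M|)^(n−1) is entrywise positive:

```python
import numpy as np

n = 4
A = np.diag([2.0, 2.0, 2.0, 2.0])
A[0, 1] = -1.0                            # reducible M-matrix (hypothetical)

C = np.roll(np.eye(n), 1, axis=1)         # c_{12}=...=c_{n-1,n}=c_{n,1}=1
eps = 1e-6
A_eps = A - eps * C                       # small perturbation along the cycle

def irreducible(M):
    # M irreducible  <=>  (I + |M|)^{n-1} > 0 entrywise.
    P = np.linalg.matrix_power(np.eye(len(M)) + np.abs(M), len(M) - 1)
    return (P > 0).all()

assert not irreducible(A)
assert irreducible(A_eps)                 # the cycle connects all indices
```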
Theorem 3.2. The sequence {Ωt}, t = 1, 2, … , obtained from Theorem 3.1 is monotone increasing with upper bound τ(B ⚬ A−1) and, consequently, is convergent.
Proof. By Lemma 2.1, we have pji(t) ≥ pji(t+1) ≥ 0, j, i ∈ N, j ≠ i, t = 1, 2, … , so by the definition of pi(t), it is easy to see that the sequence {pi(t)} is monotone decreasing. Then {Ωt} is a monotonically increasing sequence with upper bound τ(B ⚬ A−1). Hence, the sequence is convergent. □
If B = A, then Theorem 3.1 yields the following corollary.
Corollary 3.3. Let A = [aij] ∈ Mn and A−1 = [αij]. Then, for t = 1, 2, …,
We give a simple comparison between (2) and (6). According to Lemma 2.1, we know that uji ≥ pji(t), j, i ∈ N, j ≠ i, t = 1, 2, …. Furthermore, by the definitions of ui, pi(t), we have ui ≥ pi(t), i ∈ N, t = 1, 2, …. Obviously, for t = 1, 2, …, the bounds in (6) are larger than the bound in (2).
By arguments similar to the proofs of Theorem 3.1, Theorem 3.2 and Corollary 3.3, we can obtain Theorem 3.5, Theorem 3.6 and Corollary 3.7, respectively.
Theorem 3.5. Let A = [aij], B = [bij] ∈ Mn and A−1 = [αij]. Then, for t = 1, 2, …,
Theorem 3.6. The sequence {Δt}, t = 1, 2, … , obtained from Theorem 3.5 is monotone increasing with upper bound τ(B ⚬ A−1) and, consequently, is convergent.
Corollary 3.7. Let A = [aij] ∈ Mn and A−1 = [αij]. Then, for t = 1, 2, …,
Let Lt = max{ t, Γt}. By Corollary 3.3 and Corollary 3.7, the following theorem follows easily.
Theorem 3.8. Let A = [aij] ∈ Mn and A−1 = [αij]. Then, for t = 1, 2, …,
4 Numerical examples
In this section, several numerical examples are given to verify the theoretical results.
Example 4.1. Consider the following M-matrix:
Since Ae = e and ATe = e, e = [1, 1, · · · , 1]T, A−1 is doubly stochastic. Numerical results are given in Table 1 for the total number of iterations T = 10. In fact, τ(A ⚬ A−1) = 0.9678.
Method | t | Lt
Corollary 2.5 of [13] | – | 0.1401
Conjecture of [4] | – | 0.2000
Theorem 3.1 of [9] | – | 0.2519
Theorem 3.2 of [15] | – | 0.4125
Theorem 3.1 of [14] | – | 0.4471
Theorem 3.2 of [11] | – | 0.4732
Corollary 3 of [16] | – | 0.6064
Theorem 3.8 | t = 1 | 0.7388
 | t = 2 | 0.8553
 | t = 3 | 0.9059
 | t = 4 | 0.9261
 | t = 5 | 0.9346
 | t = 6 | 0.9383
 | t = 7 | 0.9400
 | t = 8 | 0.9407
 | t = 9 | 0.9409
 | t = 10 | 0.9409
Numerical results in Table 1 show that:
(a) The lower bounds obtained from Theorem 3.8 are larger than the corresponding bounds in [4,9,11,13-16].
(b) The sequence obtained from Theorem 3.8 is monotone increasing.
(c) The sequence obtained from Theorem 3.8 effectively approximates the true value of τ(A ⚬ A−1).
Example 4.3. A nonsingular M-matrix A = [aij] ∈ ℝn×n whose inverse is doubly stochastic is randomly generated by Matlab 7.1 (with entries drawn from the uniform distribution on [0, 1]).
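The authors' Matlab generator is not reproduced here; the following sketch shows one possible construction (an assumption, not the paper's method). Sinkhorn-Knopp scaling turns a random positive matrix with zero diagonal into a doubly stochastic S; then A = 2I − S is a strictly diagonally dominant nonsingular M-matrix with Ae = e and A^T e = e, so A−1 is doubly stochastic.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 6
S = rng.random((n, n))
np.fill_diagonal(S, 0.0)
for _ in range(500):                       # Sinkhorn-Knopp iterations
    S /= S.sum(axis=1, keepdims=True)      # normalize rows
    S /= S.sum(axis=0, keepdims=True)      # normalize columns

A = 2.0 * np.eye(n) - S                    # strictly diagonally dominant M-matrix
Ainv = np.linalg.inv(A)
e = np.ones(n)

# A e = e and A^T e = e imply A^{-1} is doubly stochastic (and >= 0).
assert np.allclose(Ainv @ e, e, atol=1e-8)
assert np.allclose(Ainv.T @ e, e, atol=1e-8)
assert (Ainv >= -1e-12).all()
```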
The numerical results obtained for T = 500 are listed in Table 2, where T is defined as in Example 4.1.
Numerical results in Table 2 show that Theorem 3.8 is effective for estimating τ(A ⚬ A−1) for matrices of large order.
Method | t | Lt (n = 200) | Lt (n = 500)
Theorem 3.8 | t = 1 | 0.0319 | 0.0133
 | t = 30 | 0.3953 | 0.1972
 | t = 60 | 0.6065 | 0.3452
 | t = 90 | 0.7293 | 0.4647
 | t = 120 | 0.8016 | 0.5609
 | t = 150 | 0.8428 | 0.6384
 | t = 180 | 0.8647 | 0.7011
 | t = 210 | 0.8773 | 0.7519
 | t = 240 | 0.8844 | 0.7928
 | t = 270 | 0.8885 | 0.8255
 | t = 300 | 0.8909 | 0.8520
 | t = 330 | 0.8923 | 0.8734
 | t = 360 | 0.8930 | 0.8908
 | t = 390 | 0.8935 | 0.9049
 | t = 420 | 0.8937 | 0.9163
 | t = 450 | 0.8938 | 0.9249
 | t = 480 | 0.8939 | 0.9316
 | t = 500 | 0.8940 | 0.9352
Example 4.5. Let A = [aij] ∈ ℝn×n, where a11 = a22 = · · · = ann = 2, a12 = a23 = · · · = an–1,n = an,1 = −1, and aij = 0 elsewhere.
It is easy to verify that A is a nonsingular M-matrix. Applying Theorem 3.8 with t = 1 for n = 10 and n = 100 gives τ(A ⚬ A−1) ≥ 0.7507 and τ(A ⚬ A−1) ≥ 0.7500, respectively. In fact, τ(A ⚬ A−1) = 0.7507 for n = 10 and τ(A ⚬ A−1) = 0.7500 for n = 100.
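The reported value for n = 10 is easy to reproduce numerically (a sketch in NumPy; A is a circulant-pattern matrix, so A ⚬ A−1 is circulant and its minimum-modulus eigenvalue can be computed directly):

```python
import numpy as np

n = 10
# a_{ii} = 2, a_{i,i+1} = a_{n,1} = -1, zeros elsewhere.
A = 2.0 * np.eye(n) - np.roll(np.eye(n), 1, axis=1)

H = A * np.linalg.inv(A)                  # Hadamard product A o A^{-1}
tau = min(abs(np.linalg.eigvals(H)))
print(round(tau, 4))                      # 0.7507
```

Repeating with n = 100 gives 0.7500, matching the values reported above.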
Numerical results in Example 4.5 show that the lower bound obtained from Theorem 3.8 could reach the true value of τ(A ⚬ A−1) in some cases.
5 Conclusions
In this paper, we present a convergent sequence {Lt}, t = 1, 2, … , that approximates τ(A ⚬ A−1). Although we do not give an error analysis, i.e., an estimate of how accurately these bounds can be computed, the numerical experiments of Example 4.3 show that the bounds continue to increase with the number of iterations. A rigorous error analysis appears difficult at present and is left for future work.
Acknowledgement
The authors are very indebted to the reviewers for their valuable comments and corrections, which improved the original manuscript of this paper. This work is supported by the National Natural Science Foundation of China (Nos. 11361074, 11501141), the Foundation of the Guizhou Science and Technology Department (Grant No. [2015]2073), the Scientific Research Foundation for the Introduction of Talents of Guizhou Minzu University (No. 15XRY003) and the Scientific Research Foundation of Guizhou Minzu University (No. 15XJS009).
References
[1] Berman, A., Plemmons, R.J.: Nonnegative matrices in the mathematical sciences. Classics in Applied Mathematics, Vol. 9, SIAM, Philadelphia, 1994. doi:10.1137/1.9781611971262
[2] Horn, R.A., Johnson, C.R.: Topics in matrix analysis. Cambridge University Press, 1991. doi:10.1017/CBO9780511840371
[3] Chen, J.L., Chen, X.H.: Special matrix. Tsinghua University Press, 2000.
[4] Fiedler, M., Markham, T.L.: An inequality for the Hadamard product of an M-matrix and inverse M-matrix. Linear Algebra Appl. 101 (1988), 1–8. doi:10.1016/0024-3795(88)90139-5
[5] Fiedler, M., Johnson, C.R., Markham, T.L., Neumann, M.: A trace inequality for M-matrices and the symmetrizability of a real matrix by a positive diagonal matrix. Linear Algebra Appl. 71 (1985), 81–94. doi:10.1016/0024-3795(85)90237-X
[6] Chen, S.C.: A lower bound for the minimum eigenvalue of the Hadamard product of matrices. Linear Algebra Appl. 378 (2004), 159–166. doi:10.1016/j.laa.2003.09.011
[7] Song, Y.Z.: On an inequality for the Hadamard product of an M-matrix and its inverse. Linear Algebra Appl. 305 (2000), 99–105. doi:10.1016/S0024-3795(99)00224-4
[8] Yong, X.R.: Proof of a conjecture of Fiedler and Markham. Linear Algebra Appl. 320 (2000), 167–171. doi:10.1016/S0024-3795(00)00211-1
[9] Li, H.B., Huang, T.Z., Shen, S.Q., Li, H.: Lower bounds for the minimum eigenvalue of Hadamard product of an M-matrix and its inverse. Linear Algebra Appl. 420 (2007), 235–247. doi:10.1016/j.laa.2006.07.008
[10] Zhou, D.M., Chen, G.L., Wu, G.X., Zhang, X.Y.: On some new bounds for eigenvalues of the Hadamard product and the Fan product of matrices. Linear Algebra Appl. 438 (2013), 1415–1426. doi:10.1016/j.laa.2012.09.013
[11] Chen, F.B.: New inequalities for the Hadamard product of an M-matrix and its inverse. J. Inequal. Appl. 2015 (2015), 35. doi:10.1186/s13660-015-0555-1
[12] Horn, R.A., Johnson, C.R.: Matrix analysis. Cambridge University Press, 1985. doi:10.1017/CBO9780511810817
[13] Zhou, D.M., Chen, G.L., Wu, G.X., Zhang, X.Y.: Some inequalities for the Hadamard product of an M-matrix and an inverse M-matrix. J. Inequal. Appl. 2013 (2013), 16. doi:10.1186/1029-242X-2013-16
[14] Cheng, G.H., Tan, Q., Wang, Z.D.: Some inequalities for the minimum eigenvalue of the Hadamard product of an M-matrix and its inverse. J. Inequal. Appl. 2013 (2013), 65. doi:10.1186/1029-242X-2013-65
[15] Li, Y.T., Chen, F.B., Wang, D.F.: New lower bounds on eigenvalue of the Hadamard product of an M-matrix and its inverse. Linear Algebra Appl. 430 (2009), 1423–1431. doi:10.1016/j.laa.2008.11.002
[16] Li, Y.T., Wang, F., Li, C.Q., Zhao, J.X.: Some new bounds for the minimum eigenvalue of the Hadamard product of an M-matrix and an inverse M-matrix. J. Inequal. Appl. 2013 (2013), 480. doi:10.1186/1029-242X-2013-480
© 2016 Zhao and Sang, published by De Gruyter Open.
This work is licensed under the Creative Commons Attribution-NonCommercial-NoDerivatives 3.0 License.