Some convergent sequences of lower bounds for the minimum eigenvalue of the Hadamard product of a nonsingular M-matrix B and the inverse of a nonsingular M-matrix A are given by using Brauer’s theorem. It is proved that these sequences are monotone increasing, and numerical examples are given to show that these sequences can reach the true value of the minimum eigenvalue in some cases. The results in this paper improve some known results.
For a positive integer n, N denotes the set {1, 2, …, n}, and ℝ^{n×n} (ℂ^{n×n}) denotes the set of all n × n real (complex) matrices throughout.
A matrix A = [a_{ij}] ∈ ℝ^{n×n} is called a nonsingular M-matrix if a_{ij} ≤ 0, i, j ∈ N, i ≠ j, A is nonsingular and A^{−1} ≥ 0 (see [1, p.133]). Denote by M_{n} the set of all n × n nonsingular M-matrices.
If A is a nonsingular M-matrix, then A has a positive eigenvalue equal to τ(A) ≡ [ρ(A^{−1})]^{−1}, where ρ(A^{−1}) is the Perron eigenvalue of the nonnegative matrix A^{−1}. It is easy to prove that τ(A) = min{|λ| : λ ∈ σ(A)}, where σ(A) denotes the spectrum of A (see [2, p.357]).
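As a numerical illustration of the two equivalent characterizations of τ(A), the following sketch computes [ρ(A^{−1})]^{−1} and min{|λ| : λ ∈ σ(A)} and confirms they agree; the 3 × 3 test matrix is our own choice, not from the paper.

```python
import numpy as np

# An illustrative nonsingular M-matrix: nonpositive off-diagonal
# entries and strict row diagonal dominance (our own test matrix).
A = np.array([[ 4.0, -1.0, -2.0],
              [-1.0,  5.0, -1.0],
              [-2.0, -1.0,  6.0]])

# tau(A) = [rho(A^{-1})]^{-1}, rho being the Perron (spectral) radius
# of the nonnegative matrix A^{-1}.
rho_inv = max(abs(np.linalg.eigvals(np.linalg.inv(A))))
tau_via_perron = 1.0 / rho_inv

# Equivalently, tau(A) = min{ |lam| : lam in sigma(A) }.
tau_via_spectrum = min(abs(np.linalg.eigvals(A)))

print(tau_via_perron, tau_via_spectrum)  # the two values coincide
```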
A matrix A is called reducible if there exists a nonempty proper subset I ⊂ N such that a_{ij} = 0 for all i ∈ I and all j ∉ I. If A is not reducible, then A is called irreducible (see [3, p.128]).
For two real matrices A = [a_{ij}] and B = [b_{ij}] of the same size, the Hadamard product of A and B is defined as the matrix A ∘ B = [a_{ij}b_{ij}]. If A and B are two nonsingular M-matrices, then it was proved in [4, Proposition 3] that A ∘ B^{−1} is also a nonsingular M-matrix.
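This closure property can be checked numerically; below is a minimal sketch with two small illustrative M-matrices of our own choosing (NumPy's elementwise `*` on arrays is exactly the Hadamard product).

```python
import numpy as np

# Two illustrative nonsingular M-matrices (strictly diagonally
# dominant with nonpositive off-diagonal entries).
A = np.array([[3.0, -1.0], [-1.0, 3.0]])
B = np.array([[4.0, -2.0], [-1.0, 5.0]])

# Hadamard (entrywise) product B ∘ A^{-1}.
H = B * np.linalg.inv(A)

# By [4, Proposition 3], H is again a nonsingular M-matrix:
# off-diagonal entries are <= 0 and H^{-1} is nonnegative.
off_diag_ok = bool(np.all(H - np.diag(np.diag(H)) <= 0))
inverse_nonneg = bool(np.all(np.linalg.inv(H) >= 0))
print(off_diag_ok, inverse_nonneg)  # True True
```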
Let A = [a_{ij}] ∈ ℝ^{n×n} with a_{ii} ≠ 0 for all i ∈ N. For i, j, k ∈ N, j ≠ i, denote
Recently, some lower bounds for the minimum eigenvalue of the Hadamard product of an M-matrix and its inverse have been proposed. For A = [a_{ij}] ∈ M_{n}, it was proved in [5] that
Subsequently, Fiedler and Markham [4] gave a lower bound on τ(A ∘ A^{−1}),
and conjectured that
Chen [6], Song [7] and Yong [8] have independently proved this conjecture.
In 2007, Li et al. [9] improved the conjectured bound:
In 2013, Zhou et al. [10] also obtained (1) under the same condition.
In 2015, Chen gave the following result (i.e., [11, Theorem 3.2]): Let A = [a_{ij}] ∈ M_{n} and A^{−1} = [α_{ij}] be a doubly stochastic matrix. Then
and obtained
i.e., under this condition, the bound in (2) is better than the one in (1).
In this paper, we present some convergent sequences of lower bounds for τ(B ∘ A^{−1}) and τ(A ∘ A^{−1}), which improve (2). Numerical examples show that these sequences can reach the true value of τ(A ∘ A^{−1}) in some cases.
In this section, we first give some notation that will be useful in the proofs below.
Let
If A = [a_{ij}] ∈ ℝ^{n×n} is strictly row diagonally dominant, then, for all i, j ∈ N, j ≠ i, t = 1, 2, …,
(a) 1 > d_{j} ≥ s_{ji} ≥ u_{ji} ≥ υ_{ji}^{(0)} ≥ p_{ji}^{(1)} ≥ υ_{ji}^{(1)} ≥ p_{ji}^{(2)} ≥ υ_{ji}^{(2)} ≥ ⋯ ≥ p_{ji}^{(t)} ≥ υ_{ji}^{(t)} ≥ ⋯ ≥ 0;
(b) 1 ≥ h_{i} ≥ 0, 1 ≥ h_{i}^{(t)} ≥ 0.
Proof. Since A is a strictly row diagonally dominant matrix, that is,
Since
Hence,
Since
In the same way as above, we can also prove that
The proof is completed. □
Using the same technique as the proof of Lemma 2.2 in [11], we can obtain the following lemma.
If A = [a_{ij}] ∈ M_{n} is a strictly row diagonally dominant matrix, then A^{−1} = [α_{ij}] exists, and
[12] Let A = [a_{ij}] ∈ ℂ^{n×n} and x_{1}, x_{2}, …, x_{n} be positive real numbers. Then all the eigenvalues of A lie in the region
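With x_{1} = ⋯ = x_{n} = 1, this lemma reduces to Brauer's classical theorem: every eigenvalue lies in the union of the ovals of Cassini {z : |z − a_{ii}||z − a_{jj}| ≤ R_{i}R_{j}}, i ≠ j, where R_{i} is the i-th deleted absolute row sum. A small numerical sanity check of that classical case, on a test matrix of our own:

```python
import numpy as np
from itertools import combinations

# Our own test matrix (any square matrix works here).
A = np.array([[ 4.0, -1.0, -2.0],
              [ 0.5,  5.0, -1.0],
              [-2.0,  1.0,  6.0]])
n = A.shape[0]
R = np.abs(A).sum(axis=1) - np.abs(np.diag(A))  # deleted absolute row sums

def in_some_oval(z):
    # z lies in at least one Cassini oval determined by a pair (i, j)
    return any(abs(z - A[i, i]) * abs(z - A[j, j]) <= R[i] * R[j] + 1e-9
               for i, j in combinations(range(n), 2))

all_covered = all(in_some_oval(z) for z in np.linalg.eigvals(A))
print(all_covered)  # True, as Brauer's theorem guarantees
```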
In this section, we give several convergent sequences of lower bounds for τ(B ∘ A^{−1}) and τ(A ∘ A^{−1}).
Let A = [a_{ij}], B = [b_{ij}] ∈ M_{n} and A^{−1} = [α_{ij}]. Then, for t = 1, 2, …,
Proof. It is evident that the result holds with equality for n = 1.
We next assume that n ≥ 2. Since A ∈ M_{n}, there exists a positive diagonal matrix D such that D^{−1}AD is a strictly row diagonally dominant M-matrix, and
Therefore, for convenience and without loss of generality, we assume that A is a strictly row diagonally dominant matrix.
(a) First, we assume that A and B are irreducible. Since A is irreducible, 0 < p_{i}^{(t)} < 1 for any i ∈ N. Let τ(B ∘ A^{−1}) = λ. Since λ is an eigenvalue of B ∘ A^{−1}, we have 0 < λ < b_{ii}α_{ii}. By Lemma 2.2 and Lemma 2.3, there is a pair (i, j) of positive integers with i ≠ j such that
From inequality (4), we have
Thus, (5) is equivalent to
(b) Now, assume that one of A and B is reducible. It is well known that a matrix in Z_{n} = {A = [a_{ij}] ∈ ℝ^{n×n} : a_{ij} ≤ 0, i ≠ j} is a nonsingular M-matrix if and only if all its leading principal minors are positive (see condition (E17) of Theorem 6.2.3 of [1]). Denote by C = [c_{ij}] the n × n permutation matrix with c_{12} = c_{23} = ⋯ = c_{n−1,n} = c_{n1} = 1 and the remaining c_{ij} zero. Then both A − ϵC and B − ϵC are irreducible nonsingular M-matrices for any positive real number ϵ small enough that all the leading principal minors of A − ϵC and B − ϵC are positive. Substituting A − ϵC and B − ϵC for A and B in the previous case and letting ϵ → 0, the result follows by continuity. □
The sequence {Ω_{t}}, t = 1, 2, …, obtained from Theorem 3.1 is monotone increasing with upper bound τ(B ∘ A^{−1}) and, consequently, is convergent.
Proof. By Lemma 2.1, we have p_{ji}^{(t)} ≥ p_{ji}^{(t+1)} ≥ 0 for j, i ∈ N, j ≠ i, t = 1, 2, …, so by the definition of p_{i}^{(t)} it is easy to see that the sequence {p_{i}^{(t)}} is monotone decreasing. Then {Ω_{t}} is a monotonically increasing sequence with upper bound τ(B ∘ A^{−1}). Hence, the sequence is convergent. □
Taking B = A in Theorem 3.1, the following corollary is established.
Let A = [a_{ij}] ∈ M_{n} and A^{−1} = [α_{ij}]. Then, for t = 1, 2, …,
We now give a simple comparison between (2) and (6). According to Lemma 2.1, u_{ji} ≥ p_{ji}^{(t)} for j, i ∈ N, j ≠ i, t = 1, 2, …. Furthermore, by the definitions of u_{i} and p_{i}^{(t)}, we have u_{i} ≥ p_{i}^{(t)}, i ∈ N, t = 1, 2, …. Hence, for t = 1, 2, …, the bounds in (6) are at least as large as the bound in (2).
Similar to the proofs of Theorem 3.1, Theorem 3.2 and Corollary 3.3, we can obtain Theorem 3.5, Theorem 3.6 and Corollary 3.7, respectively.
Let A = [a_{ij}], B = [b_{ij}] ∈ M_{n} and A^{−1} = [α_{ij}]. Then, for t = 1, 2, …,
The sequence {Δ_{t}}, t = 1, 2, …, obtained from Theorem 3.5 is monotone increasing with upper bound τ(B ∘ A^{−1}) and, consequently, is convergent.
Let A = [a_{ij}] ∈ M_{n} and A^{−1} = [α_{ij}]. Then, for t = 1, 2, …,
Let L_{t} = max{ _{t}, Γ_{t}}. By Corollary 3.3 and Corollary 3.7, the following theorem is easily obtained.
Let A = [a_{ij}] ∈ M_{n} and A^{−1} = [α_{ij}]. Then, for t = 1, 2, …,
In this section, several numerical examples are given to verify the theoretical results.
Consider the following M-matrix:
Since Ae = e and A^{T}e = e, where e = [1, 1, …, 1]^{T}, A^{−1} is doubly stochastic. Numerical results are given in Table 1 for the total number of iterations T = 10. In fact, τ(A ∘ A^{−1}) = 0.9678.
Method | t | L_{t}
Corollary 2.5 of [13] | – | 0.1401
Conjecture of [4] | – | 0.2000
Theorem 3.1 of [9] | – | 0.2519
Theorem 3.2 of [15] | – | 0.4125
Theorem 3.1 of [14] | – | 0.4471
Theorem 3.2 of [11] | – | 0.4732
Corollary 3 of [16] | – | 0.6064
Theorem 3.8 | t = 1 | 0.7388
Theorem 3.8 | t = 2 | 0.8553
Theorem 3.8 | t = 3 | 0.9059
Theorem 3.8 | t = 4 | 0.9261
Theorem 3.8 | t = 5 | 0.9346
Theorem 3.8 | t = 6 | 0.9383
Theorem 3.8 | t = 7 | 0.9400
Theorem 3.8 | t = 8 | 0.9407
Theorem 3.8 | t = 9 | 0.9409
Theorem 3.8 | t = 10 | 0.9409
Numerical results in Table 1 show that:
(a) The lower bounds obtained from Theorem 3.8 are larger than the corresponding bounds in [4, 9, 11, 13–16].
(b) The sequence obtained from Theorem 3.8 is monotone increasing.
(c) The sequence obtained from Theorem 3.8 approximates the true value of τ(A ∘ A^{−1}) effectively.
A nonsingular M-matrix A = [a_{ij}] ∈ ℝ^{n×n} whose inverse is doubly stochastic is randomly generated by Matlab 7.1 (with entries drawn from the uniform distribution on (0, 1)).
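The paper uses Matlab 7.1 for this; as a hypothetical alternative (the construction, the function name, and the constant c = 2 below are our own, not the authors' code), one can Sinkhorn-balance a random positive matrix to a doubly stochastic S and set A = (cI − S)/(c − 1): since ρ(S) = 1 < c, A is a nonsingular M-matrix, and Ae = e, A^{T}e = e force A^{−1} to be doubly stochastic.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_m_matrix_ds_inverse(n, c=2.0, iters=200):
    # Sinkhorn-balance a random positive matrix to a (numerically)
    # doubly stochastic S, then set A = (c*I - S)/(c - 1).
    # rho(S) = 1 < c makes A a nonsingular M-matrix, and
    # A e = e, A^T e = e give A^{-1} doubly stochastic.
    S = rng.random((n, n)) + 0.1          # strictly positive entries
    for _ in range(iters):                # alternate row/column scaling
        S /= S.sum(axis=1, keepdims=True)
        S /= S.sum(axis=0, keepdims=True)
    return (c * np.eye(n) - S) / (c - 1.0)

A = random_m_matrix_ds_inverse(6)
Ainv = np.linalg.inv(A)
# A^{-1} should be (numerically) nonnegative with unit row/column sums.
print(np.all(Ainv >= -1e-8),
      np.allclose(Ainv.sum(axis=1), 1.0),
      np.allclose(Ainv.sum(axis=0), 1.0))
```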
The numerical results obtained for T = 500 are listed in Table 2, where T is defined as in Example 4.1.
Numerical results in Table 2 show that Theorem 3.8 is effective for estimating τ(A ∘ A^{−1}) for matrices of large order.
Method | t | L_{t} (n = 200) | L_{t} (n = 500)
Theorem 3.8 | t = 1 | 0.0319 | 0.0133
Theorem 3.8 | t = 30 | 0.3953 | 0.1972
Theorem 3.8 | t = 60 | 0.6065 | 0.3452
Theorem 3.8 | t = 90 | 0.7293 | 0.4647
Theorem 3.8 | t = 120 | 0.8016 | 0.5609
Theorem 3.8 | t = 150 | 0.8428 | 0.6384
Theorem 3.8 | t = 180 | 0.8647 | 0.7011
Theorem 3.8 | t = 210 | 0.8773 | 0.7519
Theorem 3.8 | t = 240 | 0.8844 | 0.7928
Theorem 3.8 | t = 270 | 0.8885 | 0.8255
Theorem 3.8 | t = 300 | 0.8909 | 0.8520
Theorem 3.8 | t = 330 | 0.8923 | 0.8734
Theorem 3.8 | t = 360 | 0.8930 | 0.8908
Theorem 3.8 | t = 390 | 0.8935 | 0.9049
Theorem 3.8 | t = 420 | 0.8937 | 0.9163
Theorem 3.8 | t = 450 | 0.8938 | 0.9249
Theorem 3.8 | t = 480 | 0.8939 | 0.9316
Theorem 3.8 | t = 500 | 0.8940 | 0.9352
Let A = [a_{ij}] ∈ ℝ^{n×n}, where a_{11} = a_{22} = ⋯ = a_{nn} = 2, a_{12} = a_{23} = ⋯ = a_{n−1,n} = a_{n,1} = −1, and a_{ij} = 0 elsewhere.
It is easy to see that A is a nonsingular M-matrix. Applying Theorem 3.8 with t = 1 for n = 10 and n = 100, we obtain τ(A ∘ A^{−1}) ≥ 0.7507 and τ(A ∘ A^{−1}) ≥ 0.7500, respectively. In fact, τ(A ∘ A^{−1}) = 0.7507 for n = 10 and τ(A ∘ A^{−1}) = 0.7500 for n = 100.
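Since this circulant matrix is fully specified, the quoted values can be reproduced directly; a short sketch computing τ as the smallest eigenvalue modulus, per the characterization in the introduction:

```python
import numpy as np

def tau_hadamard(n):
    # Circulant matrix of Example 4.5: 2 on the diagonal, -1 on the
    # superdiagonal and in position (n, 1), zeros elsewhere.
    P = np.roll(np.eye(n), 1, axis=1)     # ones at (i, i+1) and (n, 1)
    A = 2.0 * np.eye(n) - P
    H = A * np.linalg.inv(A)              # Hadamard product A ∘ A^{-1}
    return min(abs(np.linalg.eigvals(H))) # tau = smallest eigenvalue modulus

print(round(tau_hadamard(10), 4))   # 0.7507
print(round(tau_hadamard(100), 4))  # 0.75
```

In closed form, τ(A ∘ A^{−1}) = (3/4)/(1 − 2^{−n}) here, which explains the value 0.7507 at n = 10 and 0.7500 at n = 100.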
Numerical results in Example 4.5 show that the lower bound obtained from Theorem 3.8 could reach the true value of τ(A ∘ A^{−1}) in some cases.
In this paper, we present a convergent sequence {L_{t}}, t = 1, 2, …, to approximate τ(A ∘ A^{−1}). Although we do not give an error analysis, i.e., how accurately these bounds can be computed, the numerical experiments of Example 4.3 show that the bounds continue to increase with the number of iterations. At present, it is very difficult for the authors to give such an error analysis; we will study this problem in future work.
The authors are very indebted to the reviewers for their valuable comments and corrections, which improved the original manuscript of this paper. This work is supported by the National Natural Science Foundation of China (Nos. 11361074, 11501141), the Foundation of the Guizhou Science and Technology Department (Grant No. [2015]2073), the Scientific Research Foundation for the Introduction of Talents of Guizhou Minzu University (No. 15XRY003) and the Scientific Research Foundation of Guizhou Minzu University (No. 15XJS009).
[1] Berman, A., Plemmons, R.J.: Nonnegative Matrices in the Mathematical Sciences. Classics in Applied Mathematics, Vol. 9, SIAM, Philadelphia, 1994.
[2] Horn, R.A., Johnson, C.R.: Topics in Matrix Analysis. Cambridge University Press, 1991.
[3] Chen, J.L., Chen, X.H.: Special Matrix. Tsinghua University Press, 2000.
[4] Fiedler, M., Markham, T.L.: An inequality for the Hadamard product of an M-matrix and inverse M-matrix. Linear Algebra Appl. 101 (1988), 1–8.
[5] Fiedler, M., Johnson, C.R., Markham, T.L., Neumann, M.: A trace inequality for M-matrices and the symmetrizability of a real matrix by a positive diagonal matrix. Linear Algebra Appl. 71 (1985), 81–94.
[6] Chen, S.C.: A lower bound for the minimum eigenvalue of the Hadamard product of matrices. Linear Algebra Appl. 378 (2004), 159–166.
[7] Song, Y.Z.: On an inequality for the Hadamard product of an M-matrix and its inverse. Linear Algebra Appl. 305 (2000), 99–105.
[8] Yong, X.R.: Proof of a conjecture of Fiedler and Markham. Linear Algebra Appl. 320 (2000), 167–171.
[9] Li, H.B., Huang, T.Z., Shen, S.Q., Li, H.: Lower bounds for the minimum eigenvalue of Hadamard product of an M-matrix and its inverse. Linear Algebra Appl. 420 (2007), 235–247.
[10] Zhou, D.M., Chen, G.L., Wu, G.X., Zhang, X.Y.: On some new bounds for eigenvalues of the Hadamard product and the Fan product of matrices. Linear Algebra Appl. 438 (2013), 1415–1426.
[11] Chen, F.B.: New inequalities for the Hadamard product of an M-matrix and its inverse. J. Inequal. Appl. 2015 (2015), 35.
[12] Horn, R.A., Johnson, C.R.: Matrix Analysis. Cambridge University Press, 1985.
[13] Zhou, D.M., Chen, G.L., Wu, G.X., Zhang, X.Y.: Some inequalities for the Hadamard product of an M-matrix and an inverse M-matrix. J. Inequal. Appl. 2013 (2013), 16.
[14] Cheng, G.H., Tan, Q., Wang, Z.D.: Some inequalities for the minimum eigenvalue of the Hadamard product of an M-matrix and its inverse. J. Inequal. Appl. 2013 (2013), 65.
[15] Li, Y.T., Chen, F.B., Wang, D.F.: New lower bounds on eigenvalue of the Hadamard product of an M-matrix and its inverse. Linear Algebra Appl. 430 (2009), 1423–1431.
[16] Li, Y.T., Wang, F., Li, C.Q., Zhao, J.X.: Some new bounds for the minimum eigenvalue of the Hadamard product of an M-matrix and an inverse M-matrix. J. Inequal. Appl. 2013 (2013), 480.
© 2016 Zhao and Sang, published by De Gruyter Open.
This work is licensed under the Creative Commons Attribution-NonCommercial-NoDerivatives 3.0 License.