In this section, we present our main results. First, we introduce some notation and lemmas.

Let *A* ≥ 0 and *D* = diag(*a*_{ii}). Denote $C=A-D,\ {\mathcal{J}}_{A}={D}_{1}^{-1}C,\ {D}_{1}=\mathrm{diag}({d}_{ii}),$ where $${d}_{ii}=\left\{\begin{array}{ll}1,&\text{if}\phantom{\rule{thinmathspace}{0ex}}{a}_{ii}=0,\\ {a}_{ii},&\text{if}\phantom{\rule{thinmathspace}{0ex}}{a}_{ii}\ne 0.\end{array}\right.$$

By the definition of *𝒥*_{A}, we obtain
$$\rho ({\mathcal{J}}_{{A}^{T}})=\rho ({D}_{1}^{-1}{C}^{T})=\rho (C{D}_{1}^{-1})=\rho ({D}_{1}^{-1}(C{D}_{1}^{-1}){D}_{1})=\rho ({D}_{1}^{-1}C)=\rho ({\mathcal{J}}_{A}).$$
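As a numerical illustration (not part of the argument), the identity $\rho({\mathcal{J}}_{{A}^{T}})=\rho({\mathcal{J}}_{A})$ can be checked with a short NumPy sketch; the helper names `jacobi_like` and `spectral_radius` are ours, not from the paper:

```python
import numpy as np

def jacobi_like(A):
    """J_A = D1^{-1} C, where C = A - diag(A) and d_ii = a_ii if a_ii != 0, else 1."""
    d = np.where(np.diag(A) != 0, np.diag(A), 1.0)
    C = A - np.diag(np.diag(A))
    return C / d[:, None]          # divide row i by d_ii

def spectral_radius(M):
    return max(abs(np.linalg.eigvals(M)))

rng = np.random.default_rng(0)
A = rng.random((5, 5))             # entrywise nonnegative A
rho_A = spectral_radius(jacobi_like(A))
rho_At = spectral_radius(jacobi_like(A.T))
print(abs(rho_A - rho_At) < 1e-9)  # True
```

The equality holds because ${D}_{1}^{-1}{C}^{T}$ and ${D}_{1}^{-1}C$ are related by a transpose and a diagonal similarity, neither of which changes the spectrum.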

**Lemma 2.1:** *([9]). Let A* ∈ *C*^{n×n}*, and let x*_{1}*, x*_{2}*, ..., x*_{n} *be positive real numbers. Then all the eigenvalues of A lie in the region*
$$\bigcup _{i\in N}\left\{z\in \mathbb{C}:|z-{a}_{ii}|\le {x}_{i}\sum _{j\ne i}\frac{1}{{x}_{j}}|{a}_{ji}|\right\}.$$
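Lemma 2.1 is a weighted, column-sum variant of the Gershgorin theorem. A quick numerical check (our own sketch, for illustration only) on a random complex matrix and an arbitrary positive weight vector:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((6, 6)) + 1j * rng.standard_normal((6, 6))
x = rng.random(6) + 0.1            # positive weights x_1, ..., x_n
n = A.shape[0]

def in_region(z, A, x):
    """Is z in the union of weighted column-Gershgorin disks of Lemma 2.1?"""
    return any(
        abs(z - A[i, i])
        <= x[i] * sum(abs(A[j, i]) / x[j] for j in range(n) if j != i) + 1e-9
        for i in range(n)
    )

eigs = np.linalg.eigvals(A)
print(all(in_region(z, A, x) for z in eigs))  # True
```

The small tolerance `1e-9` only guards against floating-point round-off on the disk boundaries.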

**Lemma 2.2:** *([3]). Let A* ∈ *C*^{n×n}*, and let x*_{1}*, x*_{2}*, ..., x*_{n} *be positive real numbers. Then all the eigenvalues of A lie in the region*
$$\bigcup _{i\ne j}\left\{z\in \mathbb{C}:|z-{a}_{ii}||z-{a}_{jj}|\le \left({x}_{i}\sum _{k\ne i}\frac{1}{{x}_{k}}|{a}_{ki}|\right)\left({x}_{j}\sum _{l\ne j}\frac{1}{{x}_{l}}|{a}_{lj}|\right)\right\}.$$

**Lemma 2.3:** *([3]). Let A, B* ∈ *R*^{n×n}*, and let X, Y* ∈ *R*^{n×n} *be diagonal matrices. Then*
$$X(A\circ B)Y=(XAY)\circ B=(XA)\circ (BY)=(AY)\circ (XB)=A\circ (XBY)\phantom{\rule{thickmathspace}{0ex}}.$$
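These four Hadamard-product identities follow from the entrywise identity $x_{i}a_{ij}b_{ij}y_{j}$, and are easy to confirm numerically (our illustration; `*` is the Hadamard product and `@` matrix multiplication):

```python
import numpy as np

rng = np.random.default_rng(3)
A, B = rng.random((4, 4)), rng.random((4, 4))
X = np.diag(rng.random(4) + 0.1)   # diagonal matrices
Y = np.diag(rng.random(4) + 0.1)

lhs = X @ (A * B) @ Y              # X (A ∘ B) Y
for rhs in [(X @ A @ Y) * B,
            (X @ A) * (B @ Y),
            (A @ Y) * (X @ B),
            A * (X @ B @ Y)]:
    assert np.allclose(lhs, rhs)
print("all four identities hold")
```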

**Lemma 2.4:** *([3]). Let A =* (*a*_{ij}) ∈ *M*_{n}*. Then there exists a positive diagonal matrix X such that X*^{-1}*AX is a strictly diagonally dominant M-matrix.*
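One standard construction behind Lemma 2.4 (not necessarily the proof given in [3]) takes *x* = *A*^{-1}*e* > 0, where *e* is the all-ones vector, since then the rows of *X*^{-1}*AX* sum to $1/x_{i}>0$. A sketch on a nonsingular M-matrix that is *not* already diagonally dominant:

```python
import numpy as np

# a nonsingular M-matrix (all leading principal minors positive);
# row 2 is not strictly diagonally dominant: 4 = |-3| + |-1|
A = np.array([[ 2., -1.,  0.],
              [-3.,  4., -1.],
              [ 0., -1.,  2.]])

x = np.linalg.solve(A, np.ones(3))      # x = A^{-1} e > 0 since A^{-1} >= 0
S = (A * x[None, :]) / x[:, None]       # S = X^{-1} A X with X = diag(x)

dominant = all(S[i, i] > sum(abs(S[i, j]) for j in range(3) if j != i)
               for i in range(3))
print((x > 0).all(), dominant)          # True True
```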

**Lemma 2.5:** *Let A =* (*a*_{ij}) ∈ *R*^{n×n} *be a strictly diagonally dominant matrix and let A*^{-1} = (*α*_{ij}). *Then*
$${\alpha}_{ij}\le {w}_{ji}{\alpha}_{jj}\le {w}_{j}{\alpha}_{jj},\phantom{\rule{thinmathspace}{0ex}}\phantom{\rule{thinmathspace}{0ex}}\phantom{\rule{thinmathspace}{0ex}}j,i\in N,\phantom{\rule{thinmathspace}{0ex}}\phantom{\rule{thinmathspace}{0ex}}\phantom{\rule{thinmathspace}{0ex}}j\ne i.$$

**Proof:** The proof is similar to that of Lemma 2.2 in [10].

**Theorem 2.6:** *Let A =* (*a*_{ij}) ≥ 0, *B* = (*b*_{ij}) ∈ *M*_{n}*, and let B*^{-1} = (*β*_{ij})*. Then*
$$\rho (A\circ {B}^{-1})\le \underset{1\le i\le n}{max}\{({a}_{ii}+{w}_{i}\rho ({\mathcal{J}}_{A}){d}_{ii}){\beta}_{ii}\}.$$(4)

**Proof:** It is evident that the result holds with equality for *n* = 1. We next assume that *n* ≥ 2.

(1) First, we assume that *A* and *B* are irreducible matrices. Since *B* is an M-matrix, by Lemma 2.4 there exists a positive diagonal matrix *X* such that *X*^{-1}*BX* is a strictly row diagonally dominant M-matrix, and
$$\rho (A\circ {B}^{-1})=\rho ({X}^{-1}(A\circ {B}^{-1})X)=\rho (A\circ ({X}^{-1}BX{)}^{-1}).$$Hence, for convenience and without loss of generality, we may assume that *B* is a strictly diagonally dominant matrix. On the other hand, since *A* is irreducible, so is ${\mathcal{J}}_{{A}^{T}}.$ Then there exists a positive vector *x* = (*x*_{i}) such that ${\mathcal{J}}_{{A}^{T}}x=\rho ({\mathcal{J}}_{{A}^{T}})x=\rho ({\mathcal{J}}_{A})x;$ thus, we obtain $\sum _{j\ne i}{a}_{ji}{x}_{j}=\rho ({\mathcal{J}}_{A}){d}_{ii}{x}_{i}.$
Let $\stackrel{~}{A}=({\stackrel{~}{a}}_{ij})=XA{X}^{-1},$ where *X* is the positive diagonal matrix *X* = diag(*x*_{1}, *x*_{2}, ..., *x*_{n}). Then, we have
$$\stackrel{~}{A}=({\stackrel{~}{a}}_{ij})=XA{X}^{-1}=\left(\begin{array}{cccc}{a}_{11}& \frac{{a}_{12}{x}_{1}}{{x}_{2}}& \cdots & \frac{{a}_{1n}{x}_{1}}{{x}_{n}}\\ \frac{{a}_{21}{x}_{2}}{{x}_{1}}& {a}_{22}& \cdots & \frac{{a}_{2n}{x}_{2}}{{x}_{n}}\\ \vdots & \vdots & \ddots & \vdots \\ \frac{{a}_{n1}{x}_{n}}{{x}_{1}}& \frac{{a}_{n2}{x}_{n}}{{x}_{2}}& \cdots & {a}_{nn}\end{array}\right).$$From Lemma 2.3, we have
$$\stackrel{~}{A}\circ {B}^{-1}=(XA{X}^{-1})\circ {B}^{-1}=X(A\circ {B}^{-1}){X}^{-1}.$$Thus, we obtain $\rho (\stackrel{~}{A}\circ {B}^{-1})=\rho (A\circ {B}^{-1}).$ Let $\lambda =\rho (\stackrel{~}{A}\circ {B}^{-1}),$ so that $\lambda \ge {a}_{ii}{\beta}_{ii}$ for all $i\in N.$ By Lemma 2.1, there exists *i*_{0} ∈ *N* such that
$$|\lambda -{a}_{{i}_{0}{i}_{0}}{\beta}_{{i}_{0}{i}_{0}}|\le {w}_{{i}_{0}}\sum _{t\ne {i}_{0}}\frac{1}{{w}_{t}}{\stackrel{~}{a}}_{t{i}_{0}}{\beta}_{t{i}_{0}}\le {w}_{{i}_{0}}\sum _{t\ne {i}_{0}}\frac{1}{{w}_{t}}{\stackrel{~}{a}}_{t{i}_{0}}{w}_{t{i}_{0}}{\beta}_{{i}_{0}{i}_{0}}$$
$$\le {w}_{{i}_{0}}\sum _{t\ne {i}_{0}}{\stackrel{~}{a}}_{t{i}_{0}}{\beta}_{{i}_{0}{i}_{0}}={w}_{{i}_{0}}{\beta}_{{i}_{0}{i}_{0}}\sum _{t\ne {i}_{0}}\frac{{a}_{t{i}_{0}}{x}_{t}}{{x}_{{i}_{0}}}={w}_{{i}_{0}}\rho ({\mathcal{J}}_{A}){d}_{{i}_{0}{i}_{0}}{\beta}_{{i}_{0}{i}_{0}}.$$Therefore, $$\lambda \le {a}_{{i}_{0}{i}_{0}}{\beta}_{{i}_{0}{i}_{0}}+{w}_{{i}_{0}}\rho ({\mathcal{J}}_{A}){d}_{{i}_{0}{i}_{0}}{\beta}_{{i}_{0}{i}_{0}}=({a}_{{i}_{0}{i}_{0}}+{w}_{{i}_{0}}\rho ({\mathcal{J}}_{A}){d}_{{i}_{0}{i}_{0}}){\beta}_{{i}_{0}{i}_{0}},$$ *i.e*., $$\rho (A\circ {B}^{-1})\le ({a}_{{i}_{0}{i}_{0}}+{w}_{{i}_{0}}\rho ({\mathcal{J}}_{A}){d}_{{i}_{0}{i}_{0}}){\beta}_{{i}_{0}{i}_{0}}\le \underset{1\le i\le n}{max}\{({a}_{ii}+{w}_{i}\rho ({\mathcal{J}}_{A}){d}_{ii}){\beta}_{ii}\}.$$

(2) Now, assume that one of *A* and *B* is reducible. It is well known that a matrix in *Z*_{n} is a nonsingular M-matrix if and only if all its leading principal minors are positive (see [1]). If we denote by *τ* = (*t*_{ij}) the *n* × *n* monomial matrix with *t*_{12} = *t*_{23} = · · · = *t*_{n-1,n} = *t*_{n1} = -1 and the remaining *t*_{ij} zero, then both *A* - *ετ* and *B* + *ετ* are irreducible matrices for any positive real number *ε* chosen sufficiently small that all the leading principal minors of *B* + *ετ* are positive. Now, we substitute *A* - *ετ* and *B* + *ετ* for *A* and *B*, respectively, in the previous case; letting *ε* → 0, the result follows by continuity.
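The Perron-vector identity $\sum _{j\ne i}{a}_{ji}{x}_{j}=\rho ({\mathcal{J}}_{A}){d}_{ii}{x}_{i}$ underlying this proof can be verified numerically (an illustration of ours, not part of the proof; the weights $w_i$ are defined earlier in the paper and do not enter here):

```python
import numpy as np

rng = np.random.default_rng(4)
A = rng.random((5, 5)) + 0.1        # entrywise positive, hence irreducible
d = np.diag(A)                      # here all a_ii != 0, so d_ii = a_ii
C = A - np.diag(d)
J_At = C.T / d[:, None]             # J_{A^T} = D1^{-1} C^T

# Perron eigenpair of the irreducible nonnegative matrix J_{A^T}:
# the Perron root is the eigenvalue of maximal real part
vals, vecs = np.linalg.eig(J_At)
k = np.argmax(vals.real)
rho, x = vals[k].real, np.abs(vecs[:, k])   # positive Perron vector

# the identity: sum_{j != i} a_{ji} x_j = rho(J_A) d_ii x_i for every i
lhs = C.T @ x                       # i-th entry is sum_{j != i} a_{ji} x_j
print(np.allclose(lhs, rho * d * x))  # True
```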

**Theorem 2.7:** *Let B =* (*b*_{ij}) ∈ *M*_{n} *and B*^{-1} = (*β*_{ij})*. Then*
$$\tau (B)\ge \frac{1}{\underset{1\le i\le n}{max}\{(1+{w}_{i}(n-1)){\beta}_{ii}\}}.$$(5)

**Proof:** Let all entries of *A* in (4) be 1. Then *a*_{ii} = 1 (∀*i* ∈ *N*) and $\rho ({\mathcal{J}}_{A})=n-1.$ Therefore, by (4), we have
$$\tau (B)=\frac{1}{\rho ({B}^{-1})}\ge \frac{1}{\underset{1\le i\le n}{max}\{(1+{w}_{i}(n-1)){\beta}_{ii}\}}.$$The proof is completed.
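Two ingredients of this proof can be checked directly: for the all-ones *A* one has $\rho ({\mathcal{J}}_{A})=n-1,$ and $\tau (B)=1/\rho ({B}^{-1})$ for a nonsingular M-matrix *B*. A sketch of ours (the test matrix *B* is an arbitrary choice; the weights $w_i$, defined earlier in the paper, are not needed for these two facts):

```python
import numpy as np

n = 4
A = np.ones((n, n))                      # a_ii = 1, so D1 = I and J_A = A - I
rho_JA = max(abs(np.linalg.eigvals(A - np.eye(n))))
print(np.isclose(rho_JA, n - 1))         # True

# a symmetric strictly diagonally dominant M-matrix (3I minus a 4-cycle)
B = np.array([[ 3., -1., -1.,  0.],
              [-1.,  3.,  0., -1.],
              [-1.,  0.,  3., -1.],
              [ 0., -1., -1.,  3.]])
tau = min(np.linalg.eigvals(B).real)     # tau(B): minimum eigenvalue of B
rho_Binv = max(abs(np.linalg.eigvals(np.linalg.inv(B))))
print(np.isclose(tau, 1.0 / rho_Binv))   # True
```

For this *B* the eigenvalues are 1, 3, 3, 5, so $\tau (B)=1.$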

**Theorem 2.8:** *Let A =* (*a*_{ij}) ≥ 0, *B =* (*b*_{ij}) ∈ *M*_{n}*, and let B*^{-1} = (*β*_{ij})*. Then* $$\rho (A\circ {B}^{-1})\le \frac{1}{2}\underset{i\ne j}{max}\left\{{a}_{ii}{\beta}_{ii}+{a}_{jj}{\beta}_{jj}+{\mathrm{\Delta}}_{ij}\right\},$$(6) *where* ${\mathrm{\Delta}}_{ij}=[({a}_{ii}{\beta}_{ii}-{a}_{jj}{\beta}_{jj}{)}^{2}+4{w}_{i}{w}_{j}{\rho}^{2}({\mathcal{J}}_{A}){d}_{ii}{d}_{jj}{\beta}_{ii}{\beta}_{jj}{]}^{\frac{1}{2}}.$

**Proof:** It is evident that the result holds with equality for *n* = 1. We next assume that *n* ≥ 2. For convenience and without loss of generality, we assume that *B* is a strictly row diagonally dominant matrix.

(i) First, we assume that *A* and *B* are irreducible matrices. Since *A* is irreducible, so is ${\mathcal{J}}_{{A}^{T}}.$ Then there exists a positive vector *y* = (*y*_{i}) such that ${\mathcal{J}}_{{A}^{T}}y=\rho ({\mathcal{J}}_{{A}^{T}})y=\rho ({\mathcal{J}}_{A})y;$ thus, we obtain
$$\sum _{k\ne i}{a}_{ki}{y}_{k}=\rho ({\mathcal{J}}_{A}){d}_{ii}{y}_{i},\phantom{\rule{thickmathspace}{0ex}}\sum _{k\ne j}{a}_{kj}{y}_{k}=\rho ({\mathcal{J}}_{A}){d}_{jj}{y}_{j}.$$Let $\hat{A}=({\hat{a}}_{ij})=YA{Y}^{-1},$ where *Y* is the positive diagonal matrix *Y* = diag(*y*_{1}, *y*_{2}, ..., *y*_{n}). Then, we have
$$\hat{A}=({\hat{a}}_{ij})=YA{Y}^{-1}=\phantom{\rule{thinmathspace}{0ex}}\left(\begin{array}{cccc}{a}_{11}& \frac{{a}_{12}{y}_{1}}{{y}_{2}}& \cdots & \frac{{a}_{1n}{y}_{1}}{{y}_{n}}\\ \frac{{a}_{21}{y}_{2}}{{y}_{1}}& {a}_{22}& \cdots & \frac{{a}_{2n}{y}_{2}}{{y}_{n}}\\ \vdots & \vdots & \ddots & \vdots \\ \frac{{a}_{n1}{y}_{n}}{{y}_{1}}& \frac{{a}_{n2}{y}_{n}}{{y}_{2}}& \cdots & {a}_{nn}\end{array}\right).$$From Lemma 2.3, we get $$\hat{A}\circ {B}^{-1}=(YA{Y}^{-1})\circ {B}^{-1}=Y(A\circ {B}^{-1}){Y}^{-1}.$$Thus, we obtain $\rho (\hat{A}\circ {B}^{-1})=\rho (A\circ {B}^{-1}).$
Let $\lambda =\rho (\hat{A}\circ {B}^{-1}),$ so that $\lambda \ge {a}_{ii}{\beta}_{ii}$ (∀*i* ∈ *N*). By Lemma 2.2, there exist *i*_{0}, *j*_{0} ∈ *N*, *i*_{0} ≠ *j*_{0}, such that
$$|\lambda -{a}_{{i}_{0}{i}_{0}}{\beta}_{{i}_{0}{i}_{0}}||\lambda -{a}_{{j}_{0}{j}_{0}}{\beta}_{{j}_{0}{j}_{0}}|\le ({w}_{{i}_{0}}\sum _{k\ne {i}_{0}}\frac{1}{{w}_{k}}{\hat{a}}_{k{i}_{0}}{\beta}_{k{i}_{0}})({w}_{{j}_{0}}\sum _{k\ne {j}_{0}}\frac{1}{{w}_{k}}{\hat{a}}_{k{j}_{0}}{\beta}_{k{j}_{0}}).$$Note that
$${w}_{{i}_{0}}\sum _{k\ne {i}_{0}}\frac{1}{{w}_{k}}{\hat{a}}_{k{i}_{0}}{\beta}_{k{i}_{0}}\le {w}_{{i}_{0}}\sum _{k\ne {i}_{0}}\frac{1}{{w}_{k}}{\hat{a}}_{k{i}_{0}}{w}_{k{i}_{0}}{\beta}_{{i}_{0}{i}_{0}}\le {w}_{{i}_{0}}{\beta}_{{i}_{0}{i}_{0}}\sum _{k\ne {i}_{0}}{\hat{a}}_{k{i}_{0}}={w}_{{i}_{0}}{\beta}_{{i}_{0}{i}_{0}}\rho ({\mathcal{J}}_{A}){d}_{{i}_{0}{i}_{0}}$$and$${w}_{{j}_{0}}\sum _{k\ne {j}_{0}}\frac{1}{{w}_{k}}{\hat{a}}_{k{j}_{0}}{\beta}_{k{j}_{0}}\le {w}_{{j}_{0}}\sum _{k\ne {j}_{0}}\frac{1}{{w}_{k}}{\hat{a}}_{k{j}_{0}}{w}_{k{j}_{0}}{\beta}_{{j}_{0}{j}_{0}}\le {w}_{{j}_{0}}{\beta}_{{j}_{0}{j}_{0}}\sum _{k\ne {j}_{0}}{\hat{a}}_{k{j}_{0}}={w}_{{j}_{0}}{\beta}_{{j}_{0}{j}_{0}}\rho ({\mathcal{J}}_{A}){d}_{{j}_{0}{j}_{0}}.$$Hence, we obtain
$$\lambda \le \frac{1}{2}({a}_{{i}_{0}{i}_{0}}{\beta}_{{i}_{0}{i}_{0}}+{a}_{{j}_{0}{j}_{0}}{\beta}_{{j}_{0}{j}_{0}}+{\mathrm{\Delta}}_{{i}_{0}{j}_{0}}),$$*i.e*.,
$$\rho (A\circ {B}^{-1})\le \frac{1}{2}({a}_{{i}_{0}{i}_{0}}{\beta}_{{i}_{0}{i}_{0}}+{a}_{{j}_{0}{j}_{0}}{\beta}_{{j}_{0}{j}_{0}}+{\mathrm{\Delta}}_{{i}_{0}{j}_{0}})\le \frac{1}{2}\underset{i\ne j}{max}\{{a}_{ii}{\beta}_{ii}+{a}_{jj}{\beta}_{jj}+{\mathrm{\Delta}}_{ij}\},$$where
${\mathrm{\Delta}}_{ij}=[({a}_{ii}{\beta}_{ii}-{a}_{jj}{\beta}_{jj}{)}^{2}+4{w}_{i}{w}_{j}{\rho}^{2}({\mathcal{J}}_{A}){d}_{ii}{d}_{jj}{\beta}_{ii}{\beta}_{jj}{]}^{\frac{1}{2}}.$

(ii) Now, assume that one of *A* and *B* is reducible. We substitute *A* - *ετ* and *B* + *ετ* for *A* and *B*, respectively, in the previous case (as in the proof of Theorem 2.6); letting *ε* → 0, the result follows by continuity.

**Theorem 2.9:** *Let B =* (*b*_{ij}) ∈ *M*_{n} *and B*^{-1} = (*β*_{ij})*. Then*
$$\tau (B)\ge \frac{2}{\underset{i\ne j}{max}\{{\beta}_{ii}+{\beta}_{jj}+{\mathrm{\Delta}}_{ij}\}},$$(7)*where* ${\mathrm{\Delta}}_{ij}=[({\beta}_{ii}-{\beta}_{jj}{)}^{2}+4(n-1{)}^{2}{w}_{i}{w}_{j}{\beta}_{ii}{\beta}_{jj}{]}^{\frac{1}{2}}.$

**Proof:** Let all entries of *A* in (6) be 1. Then
$${a}_{ii}=1(\mathrm{\forall}i\in N),\phantom{\rule{thinmathspace}{0ex}}\phantom{\rule{thinmathspace}{0ex}}\phantom{\rule{thinmathspace}{0ex}}\rho ({\mathcal{J}}_{A})=n-1,\phantom{\rule{thinmathspace}{0ex}}\phantom{\rule{thinmathspace}{0ex}}\phantom{\rule{thinmathspace}{0ex}}{\mathrm{\Delta}}_{ij}=[({\beta}_{ii}-{\beta}_{jj}{)}^{2}+4(n-1{)}^{2}{w}_{i}{w}_{j}{\beta}_{ii}{\beta}_{jj}{]}^{\frac{1}{2}}.$$Therefore, by (6), we have
$$\tau (B)=\frac{1}{\rho ({B}^{-1})}\ge \frac{2}{\underset{i\ne j}{max}\left\{{\beta}_{ii}+{\beta}_{jj}+{\mathrm{\Delta}}_{ij}\right\}}.$$The proof is completed.
