
# Open Mathematics

### formerly Central European Journal of Mathematics

Volume 16, Issue 1

# A Geršgorin-type eigenvalue localization set with n parameters for stochastic matrices

Xiaoxiao Wang, Chaoqian Li and Yaotang Li
Published Online: 2018-04-02 | DOI: https://doi.org/10.1515/math-2018-0030

## Abstract

A set in the complex plane which involves n parameters in [0, 1] is given to localize all eigenvalues different from 1 for stochastic matrices. As an application of this set, an upper bound for the moduli of the subdominant eigenvalues of a stochastic matrix is obtained. Lastly, we fix n parameters in [0, 1] to give a new set including all eigenvalues different from 1, which is tighter than those provided by Shen et al. (Linear Algebra Appl. 447 (2014) 74-87) and Li et al. (Linear and Multilinear Algebra 63(11) (2015) 2159-2170) for estimating the moduli of subdominant eigenvalues.

MSC 2010: 65F15; 15A18; 15A51

## 1 Introduction

Stochastic matrices and eigenvalue localization of stochastic matrices play key roles in many application fields, such as Computer Aided Geometric Design [1], Birth-Death Processes [2, 3, 4, 5], and Markov chains [6]. An entrywise nonnegative matrix A = [aij] ∈ ℝn×n is called row stochastic (or simply stochastic) if all its row sums are 1, that is,

$$\sum_{j=1}^{n} a_{ij} = 1, \quad \text{for each } i \in N = \{1, 2, \dots, n\}.$$

Let us denote the ith deleted column sum of the moduli of off-diagonal entries of A by

$$C_i(A) = \sum_{j \ne i} a_{ji}.$$

Obviously, 1 is an eigenvalue of a stochastic matrix with corresponding eigenvector e = [1, 1, …, 1]T. By the Perron-Frobenius Theorem [7], every eigenvalue λ of A, that is, λ ∈ σ(A), satisfies |λ| ≤ 1 [8]. We call λ a subdominant eigenvalue of a stochastic matrix A if 1 > |λ| ≥ |η| for every eigenvalue η different from 1 and λ [8, 9, 10].
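These definitions are easy to check numerically. The sketch below (pure Python, a 3×3 toy matrix of our own choosing, not one from the paper; 0-based indices) verifies the row-sum condition and computes the deleted column sums C_i(A):

```python
# A small 3x3 row-stochastic toy matrix (our own example, not from the paper).
A = [[0.5, 0.3, 0.2],
     [0.2, 0.5, 0.3],
     [0.3, 0.2, 0.5]]
n = len(A)

# Every row sums to 1, so A is (row) stochastic.
row_sums = [sum(row) for row in A]

def C(A, i):
    """Deleted column sum C_i(A) = sum over j != i of a_{ji} (0-based i)."""
    return sum(A[j][i] for j in range(len(A)) if j != i)

Ci = [C(A, i) for i in range(n)]
```

Here every column of the toy matrix has off-diagonal entries 0.2 and 0.3, so each C_i(A) equals 0.5.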

Since the subdominant eigenvalue of a stochastic matrix is crucial for bounding the convergence rate of stochastic processes [8, 11, 12, 13, 14], it is interesting to give a set to localize all eigenvalues different from 1, or an upper bound for the moduli of its subdominant eigenvalue [8, 15].

One can use the well-known Geršgorin circle set [16] to localize all eigenvalues of a stochastic matrix. However, this set always includes the trivial eigenvalue 1, and thus it is not always precise for capturing all eigenvalues different from 1 of a stochastic matrix. Therefore, several authors have tried to modify the Geršgorin circle set to localize more precisely all eigenvalues different from 1. In [8], Cvetković et al. gave the following set.

#### Theorem 1.1

([8, Theorem 3.4]). Let A = [aij] ∈ ℝn×n be a stochastic matrix. If λ ∈ σ(A)∖{1}, then

$$\lambda \in \Gamma(A) = \{z \in \mathbb{C} : |z - \gamma(A)| \le 1 - \mathrm{trace}(A) + (n-1)\gamma(A)\},$$

where $\gamma(A) = \max_{i \in N}(a_{ii} - l_i(A))$, $l_i(A) = \min_{j \ne i} a_{ji}$, and $\mathrm{trace}(A)$ is the trace of A.
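As a quick illustration (a sketch under our own assumptions, not code from the paper): for a 3×3 circulant stochastic matrix the full spectrum is known in closed form, λ_k = Σ_j c_j ω^{jk} with ω = e^{2πi/3}, so the localization of Theorem 1.1 can be checked directly. For this highly symmetric toy example the subdominant eigenvalues land exactly on the boundary of the disc, so the check uses the closed disc with a small tolerance.

```python
import cmath

# Toy 3x3 circulant stochastic matrix (not from the paper); row i is the
# cyclic right shift of the first row c, so A[i][j] = c[(j - i) % n].
c = [0.5, 0.3, 0.2]
A = [c, [c[2], c[0], c[1]], [c[1], c[2], c[0]]]
n = len(A)

l = [min(A[j][i] for j in range(n) if j != i) for i in range(n)]  # l_i(A)
gamma = max(A[i][i] - l[i] for i in range(n))                     # gamma(A)
trace = sum(A[i][i] for i in range(n))
radius = 1 - trace + (n - 1) * gamma                              # Thm 1.1 radius

# Closed-form circulant eigenvalues: lambda_k = sum_j c_j * omega**(j*k).
omega = cmath.exp(2j * cmath.pi / n)
eigs = [sum(c[j] * omega ** (j * k) for j in range(n)) for k in range(n)]
others = [lam for lam in eigs if abs(lam - 1) > 1e-9]  # drop the trivial 1
contained = all(abs(lam - gamma) <= radius + 1e-9 for lam in others)
```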

However, the set provided by Theorem 1.1 is not effective in some cases, such as, for the class of stochastic matrices

$$\mathcal{SM}_0 = \{A \in \mathbb{R}^{n \times n} : A \text{ is stochastic, and } a_{ii} = l_i = 0 \text{ for each } i \in N\};$$

for more details, see [15]. To overcome this drawback, Li and Li [15] provided another set as follows.

#### Theorem 1.2

([15, Theorem 6]). Let A = [aij] ∈ ℝn×n be a stochastic matrix. If λ ∈ σ(A)∖{1}, then

$$\lambda \in \tilde{\Gamma}(A) = \{z \in \mathbb{C} : |z + \tilde{\gamma}(A)| \le \mathrm{trace}(A) + (n-1)\tilde{\gamma}(A) - 1\},$$

where $\tilde{\gamma}(A) = \max_{i \in N}(L_i(A) - a_{ii})$ and $L_i(A) = \max_{j \ne i} a_{ji}$.

Recently, by taking respectively

$$l_i(A) = \min_{j \ne i} a_{ji}, \qquad v_i(A) = \max\left\{0,\ \frac{1}{2}\min_{k, m \ne i,\ k \ne m}\{a_{ki} + a_{mi}\}\right\} = \frac{1}{2}\min_{k, m \ne i,\ k \ne m}\{a_{ki} + a_{mi}\},$$

and

$$q_i(A) = \frac{1}{n-1}\sum_{j \ne i} a_{ji}$$

to modify the Geršgorin circle set, Shen et al. [12], and Li et al. [11] gave three sets to localize all eigenvalues different from 1.

#### Theorem 1.3

([11, 12]). Let A = [aij] ∈ ℝn×n be a stochastic matrix. If λ ∈ σ(A)∖{1}, then

$$\lambda \in \Gamma^{stol}(A) = \bigcup_{i \in N}\Gamma_i^{stol}(A) = \bigcup_{i \in N}\{z \in \mathbb{C} : |a_{ii} - z - l_i(A)| \le Cl_i(A)\},$$

$$\lambda \in \Gamma^{stov}(A) = \bigcup_{i \in N}\Gamma_i^{stov}(A) = \bigcup_{i \in N}\{z \in \mathbb{C} : |a_{ii} - z - v_i(A)| \le Cv_i(A)\},$$

and

$$\lambda \in \Gamma^{stoq}(A) = \bigcup_{i \in N}\Gamma_i^{stoq}(A) = \bigcup_{i \in N}\{z \in \mathbb{C} : |a_{ii} - z - q_i(A)| \le Cq_i(A)\},$$

where

$$Cl_i(A) = \sum_{j \ne i}|a_{ji} - l_i(A)| = \sum_{j \ne i} a_{ji} - \sum_{j \ne i} l_i(A) = C_i(A) - (n-1)l_i(A),$$
$$Cv_i(A) = \sum_{j \ne i}|a_{ji} - v_i(A)| = C_i(A) - (n-3)v_i(A) - 2l_i(A)$$

and $Cq_i(A) = \sum_{j \ne i}|a_{ji} - q_i(A)|$.

Remark here that Shen et al. [12] used these three sets to localize any real eigenvalue different from 1, which are generalized to localize all eigenvalues different from 1 by Li et al. [11].

Also in [11], Li et al. provided another two modifications of the Geršgorin circle set by taking respectively

$$L_i(A) = \max_{j \ne i} a_{ji}, \qquad V_i(A) = \frac{1}{2}\max_{k, m \ne i,\ k \ne m}\{a_{ki} + a_{mi}\}.$$

#### Theorem 1.4

([11, Theorems 3.3 and 3.8]). Let A = [aij] ∈ ℝn×n be a stochastic matrix. If λ ∈ σ(A)∖{1}, then

$$\lambda \in \Gamma^{stoL}(A) = \bigcup_{i \in N}\Gamma_i^{stoL}(A) = \bigcup_{i \in N}\{z \in \mathbb{C} : |L_i(A) - a_{ii} + z| \le CL_i(A)\}$$

and

$$\lambda \in \Gamma^{stoV}(A) = \bigcup_{i \in N}\Gamma_i^{stoV}(A) = \bigcup_{i \in N}\{z \in \mathbb{C} : |V_i(A) - a_{ii} + z| \le CV_i(A)\},$$

where

$$CL_i(A) = \sum_{j \ne i}|L_i(A) - a_{ji}| = (n-1)L_i(A) - C_i(A)$$

and

$$CV_i(A) = \sum_{j \ne i}|V_i(A) - a_{ji}| = (n-3)V_i(A) + 2L_i(A) - C_i(A).$$
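The five column statistics l_i, v_i, q_i, V_i, L_i and their radii can be cross-checked against the closed forms above. A minimal sketch (pure Python, our own 3×3 toy matrix, 0-based indices; not an example from the paper):

```python
from itertools import combinations

# Toy 3x3 stochastic matrix (our own example, not from the paper).
A = [[0.5, 0.3, 0.2],
     [0.2, 0.5, 0.3],
     [0.3, 0.2, 0.5]]
n = len(A)

def col(i):
    """Off-diagonal entries of column i."""
    return [A[j][i] for j in range(n) if j != i]

def quantities(i):
    """Return (l_i, v_i, q_i, V_i, L_i) for column i."""
    entries = col(i)
    li, Li = min(entries), max(entries)
    qi = sum(entries) / (n - 1)
    idx = [j for j in range(n) if j != i]
    pair_sums = [A[k][i] + A[m][i] for k, m in combinations(idx, 2)]
    vi = max(0.0, min(pair_sums) / 2)
    Vi = max(pair_sums) / 2
    return li, vi, qi, Vi, Li

Ci = [sum(col(i)) for i in range(n)]
# Direct sums agree with the closed-form radii of Theorems 1.3 and 1.4.
for i in range(n):
    li, vi, qi, Vi, Li = quantities(i)
    assert abs(sum(abs(a - li) for a in col(i)) - (Ci[i] - (n - 1) * li)) < 1e-12
    assert abs(sum(abs(a - Li) for a in col(i)) - ((n - 1) * Li - Ci[i])) < 1e-12
    assert abs(sum(abs(a - vi) for a in col(i)) - (Ci[i] - (n - 3) * vi - 2 * li)) < 1e-12
    assert abs(sum(abs(a - Vi) for a in col(i)) - ((n - 3) * Vi + 2 * Li - Ci[i])) < 1e-12
```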

Note that li(A), vi(A), qi(A), Vi(A) and Li(A) all lie in the interval $[\min_{j \ne i} a_{ji}, \max_{j \ne i} a_{ji}]$. So it is natural to ask whether there is an optimal value in $[\min_{j \ne i} a_{ji}, \max_{j \ne i} a_{ji}]$ such that the set obtained by using this value to modify the Geršgorin circle set captures all eigenvalues different from 1 of a stochastic matrix most precisely. To answer this question, we give a set in Section 2 with n parameters in [0, 1] to localize all eigenvalues different from 1 for a stochastic matrix, and show that this set reduces to Γstol(A), Γstov(A), Γstoq(A), ΓstoV(A) and ΓstoL(A) for suitable fixed parameters. In Section 3 we use this set to give an upper bound for the moduli of the subdominant eigenvalues of a stochastic matrix. In Section 4, by choosing special values of these n parameters in [0, 1] for the upper bound obtained in Section 3, we give a new set including all eigenvalues different from 1, which is better than Γstol(A) and ΓstoL(A) in the sense of estimating the moduli of subdominant eigenvalues.

## 2 A Geršgorin-type eigenvalue localization set with n parameters

We begin with an important lemma, which is used to give some modifications of the Geršgorin circle set.

#### Lemma 2.1

([8, 11, 12]). Let A = [aij] ∈ ℝn×n be a stochastic matrix. For any d = [d1, d2, …, dn]T ∈ ℝn, if μ ∈ σ(A)∖{1}, then −μ is an eigenvalue of the matrix

$$B = ed^{T} - A.$$

Lemma 2.1 shows that once an eigenvalue localization set for B = edT − A is given, we can get a set to localize all eigenvalues different from 1 of the stochastic matrix A [11]. Now we present the following choice of d:

$$d = L^{\alpha}(A), \qquad (1)$$

where $L^{\alpha}(A) = [L_1^{\alpha_1}(A), L_2^{\alpha_2}(A), \dots, L_n^{\alpha_n}(A)]^{T}$, $\alpha_i \in [0, 1]$ for $i \in N$ and

$$L_i^{\alpha_i}(A) = \alpha_i L_i(A) + (1-\alpha_i)l_i(A) = \alpha_i \max_{j \ne i,\ j \in N} a_{ji} + (1-\alpha_i)\min_{j \ne i,\ j \in N} a_{ji}, \quad i \in N.$$
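In other words, L_i^{α_i}(A) linearly interpolates between the off-diagonal column minimum (α_i = 0) and maximum (α_i = 1). A one-function sketch (our own toy matrix, 0-based indices):

```python
# L_i^{alpha}(A) interpolates between l_i(A) (alpha = 0) and L_i(A) (alpha = 1).
# Toy 3x3 stochastic matrix, not from the paper.
A = [[0.5, 0.3, 0.2],
     [0.2, 0.5, 0.3],
     [0.3, 0.2, 0.5]]
n = len(A)

def L_alpha(i, alpha):
    """alpha * L_i(A) + (1 - alpha) * l_i(A) over off-diagonal column entries."""
    entries = [A[j][i] for j in range(n) if j != i]
    return alpha * max(entries) + (1 - alpha) * min(entries)
```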

By Lemma 2.1 and (1), we can obtain the following set to localize all eigenvalues different from 1 of a stochastic matrix.

#### Theorem 2.2

Let A = [aij] ∈ ℝn×n be a stochastic matrix. If λ ∈ σ(A)∖{1}, then for any αi ∈ [0, 1], iN,

$$\lambda \in \Gamma^{stoL^{\alpha}}(A) = \bigcup_{i \in N}\Gamma_i^{stoL^{\alpha_i}}(A),$$

where

$$\Gamma_i^{stoL^{\alpha_i}}(A) = \{z \in \mathbb{C} : |\alpha_i L_i(A) + (1-\alpha_i)l_i(A) - a_{ii} + z| \le CL_i^{\alpha_i}(A)\}$$

and

$$CL_i^{\alpha_i}(A) = \sum_{j \ne i}|L_i^{\alpha_i}(A) - a_{ji}| = \sum_{j \ne i}|\alpha_i L_i(A) + (1-\alpha_i)l_i(A) - a_{ji}|. \qquad (2)$$

#### Proof

Let $B^{\alpha} = ed^{T} - A = [b_{ij}^{\alpha_i}]$, where $d = L^{\alpha}(A) = [L_1^{\alpha_1}(A), L_2^{\alpha_2}(A), \dots, L_n^{\alpha_n}(A)]^{T}$. By applying the Geršgorin circle theorem to $B^{\alpha}$, we have that for any $\hat{\lambda} \in \sigma(B^{\alpha})$,

$$\hat{\lambda} \in \bigcup_{i \in N}\{z \in \mathbb{C} : |b_{ii}^{\alpha_i} - z| \le C_i(B^{\alpha})\}.$$

By Lemma 2.1, if λ ∈ σ(A)∖{1}, then −λ ∈ σ(Bα), that is,

$$-\lambda \in \bigcup_{i \in N}\{z \in \mathbb{C} : |b_{ii}^{\alpha_i} - z| \le C_i(B^{\alpha})\}.$$

Furthermore, note that for any iN,

$$b_{ii}^{\alpha_i} = L_i^{\alpha_i}(A) - a_{ii} = \alpha_i L_i(A) + (1-\alpha_i)l_i(A) - a_{ii}$$

and

$$C_i(B^{\alpha}) = \sum_{j \ne i}|L_i^{\alpha_i}(A) - a_{ji}| = CL_i^{\alpha_i}(A).$$

Hence,

$$\lambda \in \Gamma^{stoL^{\alpha}}(A) = \bigcup_{i \in N}\Gamma_i^{stoL^{\alpha_i}}(A),$$

where $\Gamma_i^{stoL^{\alpha_i}}(A) = \{z \in \mathbb{C} : |\alpha_i L_i(A) + (1-\alpha_i)l_i(A) - a_{ii} + z| \le CL_i^{\alpha_i}(A)\}$. □
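The proof translates directly into a containment check. The sketch below (our own toy circulant, whose non-unit eigenvalues are known in closed form; not an example from the paper) builds the discs Γ_i^{stoL^{α_i}}(A) for α_i = 0.5 and verifies that the subdominant eigenvalues lie in their union:

```python
import cmath

# Toy 3x3 circulant stochastic matrix (not from the paper); its eigenvalues
# other than 1 are 0.25 +/- 0.05*sqrt(3)*i in closed form.
c = [0.5, 0.3, 0.2]
A = [c, [c[2], c[0], c[1]], [c[1], c[2], c[0]]]
n = len(A)

def disc(i, alpha):
    """Center and radius of Gamma_i^{stoL^{alpha_i}}(A):
    the condition |L_i^a - a_ii + z| <= CL_i^a is a disc centered at a_ii - L_i^a."""
    entries = [A[j][i] for j in range(n) if j != i]
    La = alpha * max(entries) + (1 - alpha) * min(entries)
    center = A[i][i] - La
    radius = sum(abs(La - a) for a in entries)
    return center, radius

omega = cmath.exp(2j * cmath.pi / n)
subdominant = [sum(c[j] * omega ** (j * k) for j in range(n)) for k in (1, 2)]

alpha = [0.5, 0.5, 0.5]
in_union = all(
    any(abs(lam - disc(i, alpha[i])[0]) <= disc(i, alpha[i])[1] + 1e-9
        for i in range(n))
    for lam in subdominant)
```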

#### Example 2.3

Consider the first 50 stochastic matrices generated by the MATLAB code

`k = 10; A = rand(k, k); A = inv(diag(sum(A'))) * A`

and take αi ∈ [0, 1] for i = 1, 2, …, 10 by the MATLAB code

`alpha = rand(1, k)`.

By drawing the sets ΓstoLα(A) in Theorem 2.2 and

$$\Gamma = \Gamma(A) \cap \tilde{\Gamma}(A)$$

in Theorems 1.1 and 1.2, it is not difficult to see that ΓstoLα(A) ⊂ Γ holds for 46 of the 50 matrices, that if 1 ∉ Γ, then 1 ∉ ΓstoLα(A), and that even when 1 ∈ Γ, the set ΓstoLα(A) may still not contain the trivial eigenvalue 1 (see also Table 1). So, by these examples, we conclude that the set in Theorem 2.2 captures all eigenvalues different from 1 of a stochastic matrix more precisely than the sets in Theorems 1.1 and 1.2 in some cases.

Table 1

Comparisons of ΓstoLα(A) and Γ = Γ(A) ⋂ Γ̃(A)
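For readers without MATLAB, the same experiment can be set up in any language: draw a nonnegative random matrix and divide each row by its sum, which is what `inv(diag(sum(A'))) * A` does. A Python sketch with a fixed seed (our own translation, not the paper's code):

```python
import random

# Python analogue of the MATLAB snippet in Example 2.3: a random nonnegative
# matrix whose rows are normalized to sum to 1 is row stochastic.
random.seed(0)
k = 10
raw = [[random.random() for _ in range(k)] for _ in range(k)]
A = [[x / sum(row) for x in row] for row in raw]
row_sums = [sum(row) for row in A]
```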

#### Remark 2.4

1. When $\alpha_i = 1$ for each $i \in N$, then $L_i^{\alpha_i}(A) = L_i(A)$ and $CL_i^{\alpha_i}(A) = CL_i(A)$ for any $i \in N$, which implies that ΓstoLα(A) reduces to ΓstoL(A) in Theorem 1.4;

2. When $\alpha_i = \frac{V_i(A) - l_i(A)}{L_i(A) - l_i(A)} \in [0, 1]$ and $L_i(A) > l_i(A)$ for each $i \in N$, then $L_i^{\alpha_i}(A) = V_i(A)$ and $CL_i^{\alpha_i}(A) = CV_i(A)$ for any $i \in N$. On the other hand, if $L_i(A) = l_i(A)$ for some $i \in N$, then for any $\alpha_i \in [0, 1]$ we also have $L_i^{\alpha_i}(A) = V_i(A)$ and $CL_i^{\alpha_i}(A) = CV_i(A)$. These imply that ΓstoLα(A) reduces to ΓstoV(A) in Theorem 1.4;

3. When $\alpha_i = \frac{q_i(A) - l_i(A)}{L_i(A) - l_i(A)} \in [0, 1]$ and $L_i(A) > l_i(A)$ for each $i \in N$, then $L_i^{\alpha_i}(A) = q_i(A)$ and $CL_i^{\alpha_i}(A) = Cq_i(A)$ for any $i \in N$. On the other hand, if $L_i(A) = l_i(A)$ for some $i \in N$, then for any $\alpha_i \in [0, 1]$ we also have $L_i^{\alpha_i}(A) = q_i(A)$ and $CL_i^{\alpha_i}(A) = Cq_i(A)$. These imply that ΓstoLα(A) reduces to Γstoq(A) in Theorem 1.3;

4. When $\alpha_i = \frac{v_i(A) - l_i(A)}{L_i(A) - l_i(A)} \in [0, 1]$ and $L_i(A) > l_i(A)$ for each $i \in N$, then $L_i^{\alpha_i}(A) = v_i(A)$ and $CL_i^{\alpha_i}(A) = Cv_i(A)$ for any $i \in N$. On the other hand, if $L_i(A) = l_i(A)$ for some $i \in N$, then for any $\alpha_i \in [0, 1]$ we also have $L_i^{\alpha_i}(A) = v_i(A)$ and $CL_i^{\alpha_i}(A) = Cv_i(A)$. These imply that ΓstoLα(A) reduces to Γstov(A) in Theorem 1.3;

5. When $\alpha_i = 0$ for each $i \in N$, then $L_i^{\alpha_i}(A) = l_i(A)$ and $CL_i^{\alpha_i}(A) = Cl_i(A)$ for any $i \in N$, which implies that ΓstoLα(A) reduces to Γstol(A) in Theorem 1.3.

Hence, we say that the set ΓstoLα(A) is a generalization of Γstol(A), Γstov(A) and Γstoq(A) in Theorem 1.3, and ΓstoV(A) and ΓstoL(A) in Theorem 1.4. Moreover, according to αi ∈ [0, 1] in Theorem 2.2, we can get the following result easily.

#### Remark 2.5

Let A = [aij] ∈ ℝn×n be a stochastic matrix. If λ ∈ σ(A)∖{1}, then

$$\lambda \in \Gamma^{[0,1]}(A) = \bigcap_{\alpha \in [0,1]}\Gamma^{stoL^{\alpha}}(A).$$

Furthermore, Γ[0,1] (A) ⊆ (ΓstoL(A) ⋂ ΓstoV(A) ⋂ Γstoq(A) ⋂ Γstov(A) ⋂ Γstol(A)).

The set Γ[0,1] (A) in Remark 2.5 is not of much practical use because it involves some parameters αi. In fact, we can take some special αi in practice, which is illustrated by the following example.

#### Example 2.6

Consider the third stochastic matrix A in Example 2.3. By Table 1, we have that

$$1 \in \Gamma^{stoL^{\alpha}}(A), \quad \Gamma^{stoL^{\alpha}}(A) \not\subseteq \Gamma, \quad \text{and} \quad \Gamma \not\subseteq \Gamma^{stoL^{\alpha}}(A),$$

which is shown in Figure 1, where ΓstoLα(A) is drawn slightly thicker than Γ.

Fig. 1

ΓstoLα(A) ⊈ Γ, and Γ ⊈ ΓstoLα(A)

Furthermore, we take the first 3 vectors

$$\alpha^{(j)} = [\alpha_1^{(j)}, \alpha_2^{(j)}, \dots, \alpha_{10}^{(j)}], \quad j = 1, 2, 3,$$

generated by the MATLAB code `alpha = rand(1, 10)`, that is,

$$\alpha^{(1)} = [0.8147, 0.9058, 0.1270, 0.9134, 0.6324, 0.0975, 0.2785, 0.5469, 0.9575, 0.9649],$$
$$\alpha^{(2)} = [0.1576, 0.9706, 0.9572, 0.4854, 0.8003, 0.1419, 0.4218, 0.9157, 0.7922, 0.9595],$$

and

$$\alpha^{(3)} = [0.6557, 0.0357, 0.8491, 0.9340, 0.6787, 0.7577, 0.7431, 0.3922, 0.6555, 0.1712].$$

By Remark 2.5, we have that for any λ ∈ σ(A)∖{1},

$$\lambda \in \left(\Gamma^{stoL^{\alpha^{(1)}}}(A) \cap \Gamma^{stoL^{\alpha^{(2)}}}(A) \cap \Gamma^{stoL^{\alpha^{(3)}}}(A)\right).$$

We draw this set in the complex plane, see Figure 2. It is easy to see

$$1 \notin \left(\Gamma^{stoL^{\alpha^{(1)}}}(A) \cap \Gamma^{stoL^{\alpha^{(2)}}}(A) \cap \Gamma^{stoL^{\alpha^{(3)}}}(A)\right)$$

and

$$\left(\Gamma^{stoL^{\alpha^{(1)}}}(A) \cap \Gamma^{stoL^{\alpha^{(2)}}}(A) \cap \Gamma^{stoL^{\alpha^{(3)}}}(A)\right) \subset \Gamma.$$

Fig. 2

(ΓstoLα(1)(A) ⋂ ΓstoLα(2)(A) ⋂ ΓstoLα(3)(A)) ⊂ Γ

This example shows that we can take some special αi to get a set which is tighter than the sets in Theorems 1.1 and 1.2.

It is well-known that an eigenvalue inclusion set leads to a sufficient condition for nonsingular matrices, and vice versa [12, 16]. Hence, from Theorem 2.2 or Remark 2.5, we can get a nonsingular condition for stochastic matrices.

#### Proposition 2.7

Let A = [aij] ∈ ℝn×n be a stochastic matrix. If there are ᾱi ∈ [0, 1], i ∈ N, such that

$$|\bar{\alpha}_i L_i(A) + (1-\bar{\alpha}_i)l_i(A) - a_{ii}| > CL_i^{\bar{\alpha}_i}(A), \quad i \in N, \qquad (3)$$

where $CL_i^{\bar{\alpha}_i}(A)$ is defined as in (2), then A is nonsingular.

#### Proof

Suppose that A is singular, that is, 0 ∈ σ (A). From Theorem 2.2, we have that for any αi ∈ [0, 1], iN,

$$0 \in \Gamma^{stoL^{\alpha}}(A) = \bigcup_{i \in N}\Gamma_i^{stoL^{\alpha_i}}(A).$$

In particular,

$$0 \in \Gamma^{stoL^{\bar{\alpha}}}(A) = \bigcup_{i \in N}\Gamma_i^{stoL^{\bar{\alpha}_i}}(A).$$

Hence, there is an index i0N such that

$$|\bar{\alpha}_{i_0}L_{i_0}(A) + (1-\bar{\alpha}_{i_0})l_{i_0}(A) - a_{i_0 i_0}| \le CL_{i_0}^{\bar{\alpha}_{i_0}}(A).$$

This contradicts (3). The conclusion follows.  □
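Proposition 2.7 can be used as a cheap sufficient test: exhibit one parameter vector ᾱ for which (3) holds at every index. A sketch on our own toy matrix (condition checked at ᾱ = 0; the 3×3 determinant is expanded directly as an independent sanity check):

```python
# Sufficient nonsingularity test from Proposition 2.7 on a toy 3x3 stochastic
# matrix (our own example, not from the paper).
A = [[0.5, 0.3, 0.2],
     [0.2, 0.5, 0.3],
     [0.3, 0.2, 0.5]]
n = len(A)

def condition_holds(alpha):
    """True if inequality (3) holds at every index for this alpha vector."""
    for i in range(n):
        entries = [A[j][i] for j in range(n) if j != i]
        La = alpha[i] * max(entries) + (1 - alpha[i]) * min(entries)
        if abs(La - A[i][i]) <= sum(abs(La - a) for a in entries):
            return False
    return True

nonsingular_certified = condition_holds([0.0, 0.0, 0.0])

# Independent check: expand the 3x3 determinant directly.
det = (A[0][0] * (A[1][1] * A[2][2] - A[1][2] * A[2][1])
       - A[0][1] * (A[1][0] * A[2][2] - A[1][2] * A[2][0])
       + A[0][2] * (A[1][0] * A[2][1] - A[1][1] * A[2][0]))
```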

## 3 An upper bound for the moduli of subdominant eigenvalues

By using the set ΓstoLα(A) in Theorem 2.2, we can give a bound to estimate the moduli of subdominant eigenvalues of a stochastic matrix.

#### Theorem 3.1

Let A = [aij] ∈ ℝn×n be a stochastic matrix. If λ ∈ σ(A)∖{1}, then
$$|\lambda| \le \rho^{[0,1]}, \qquad (4)$$

where $\rho^{[0,1]} = \max_{i \in N}\min_{\alpha_i \in [0,1]}\left\{\sum_{j=1}^{n}|\alpha_i L_i(A) + (1-\alpha_i)l_i(A) - a_{ji}|\right\}$.

#### Proof

Let
$$f_i(\alpha_i) = \sum_{j=1}^{n}|\alpha_i L_i(A) + (1-\alpha_i)l_i(A) - a_{ji}| = CL_i^{\alpha_i}(A) + |\alpha_i L_i(A) + (1-\alpha_i)l_i(A) - a_{ii}|, \quad \alpha_i \in [0,1],\ i \in N,$$

where
$$CL_i^{\alpha_i}(A) = \sum_{j \ne i}|\alpha_i L_i(A) + (1-\alpha_i)l_i(A) - a_{ji}|.$$

Each $f_i(\alpha_i)$, $i \in N$, is a continuous function of $\alpha_i \in [0,1]$, so there are $\tilde{\alpha}_i \in [0,1]$, $i \in N$, such that
$$f_i(\tilde{\alpha}_i) = \min_{\alpha_i \in [0,1]}\left\{CL_i^{\alpha_i}(A) + |\alpha_i L_i(A) + (1-\alpha_i)l_i(A) - a_{ii}|\right\}, \quad i \in N. \qquad (5)$$

For these $\tilde{\alpha}_i \in [0,1]$, $i \in N$, by Theorem 2.2 we have
$$\lambda \in \Gamma^{stoL^{\tilde{\alpha}}}(A) = \bigcup_{i \in N}\Gamma_i^{stoL^{\tilde{\alpha}_i}}(A).$$

Hence, there is an index $i_0 \in N$ such that
$$|\tilde{\alpha}_{i_0}L_{i_0}(A) + (1-\tilde{\alpha}_{i_0})l_{i_0}(A) - a_{i_0 i_0} + \lambda| \le CL_{i_0}^{\tilde{\alpha}_{i_0}}(A),$$

which gives
$$|\lambda| \le CL_{i_0}^{\tilde{\alpha}_{i_0}}(A) + |\tilde{\alpha}_{i_0}L_{i_0}(A) + (1-\tilde{\alpha}_{i_0})l_{i_0}(A) - a_{i_0 i_0}|. \qquad (6)$$

By (5) we have
$$|\lambda| \le \min_{\alpha_{i_0} \in [0,1]}\left\{CL_{i_0}^{\alpha_{i_0}}(A) + |\alpha_{i_0}L_{i_0}(A) + (1-\alpha_{i_0})l_{i_0}(A) - a_{i_0 i_0}|\right\},$$

which implies
$$|\lambda| \le \max_{i \in N}\min_{\alpha_i \in [0,1]}\left\{CL_i^{\alpha_i}(A) + |\alpha_i L_i(A) + (1-\alpha_i)l_i(A) - a_{ii}|\right\} = \max_{i \in N}\min_{\alpha_i \in [0,1]}\sum_{j=1}^{n}|\alpha_i L_i(A) + (1-\alpha_i)l_i(A) - a_{ji}| = \rho^{[0,1]}.$$

The conclusion follows.  □

As in the proof of Theorem 3.1, we can give another bound to estimate the moduli of subdominant eigenvalues by using the sets Γstol(A), Γstov(A) and Γstoq(A) in Theorem 1.3, ΓstoL(A) and ΓstoV(A) in Theorem 1.4, respectively.

#### Theorem 3.2

Let A = [aij] ∈ ℝn×n be a stochastic matrix. If λ ∈ σ(A)∖{1}, then
$$|\lambda| \le \min\{\rho^{L}, \rho^{V}, \rho^{q}, \rho^{v}, \rho^{l}\}, \qquad (7)$$

where
$$\rho^{L} = \max_{i \in N}\{a_{ii} + nL_i(A) - C_i(A)\}, \qquad \rho^{V} = \max_{i \in N}\{a_{ii} + (n-2)V_i(A) + 2L_i(A) - C_i(A)\},$$
$$\rho^{q} = \max_{i \in N}\sum_{j=1}^{n}|a_{ji} - q_i(A)|, \qquad \rho^{v} = \max_{i \in N}\{a_{ii} - (n-4)v_i(A) - 2l_i(A) + C_i(A)\}$$
and
$$\rho^{l} = \max_{i \in N}\{a_{ii} - (n-2)l_i(A) + C_i(A)\}.$$

#### Proof

We first prove |λ| ≤ ρL. From Theorem 1.4,
$$\lambda \in \Gamma^{stoL}(A) = \bigcup_{i \in N}\Gamma_i^{stoL}(A).$$

As in the proof of Theorem 3.1, there is an index $i_0 \in N$ such that
$$|L_{i_0}(A) - a_{i_0 i_0} + \lambda| \le CL_{i_0}(A),$$

and
$$|\lambda| \le |a_{i_0 i_0} - L_{i_0}(A)| + CL_{i_0}(A) \le a_{i_0 i_0} + L_{i_0}(A) + (n-1)L_{i_0}(A) - C_{i_0}(A) = a_{i_0 i_0} + nL_{i_0}(A) - C_{i_0}(A) \le \max_{i \in N}\{a_{ii} + nL_i(A) - C_i(A)\},$$

i.e., |λ| ≤ ρL. Similarly, by
$$\lambda \in \Gamma^{stoV}(A), \quad \lambda \in \Gamma^{stoq}(A), \quad \lambda \in \Gamma^{stov}(A), \quad \text{and} \quad \lambda \in \Gamma^{stol}(A),$$

we can get, respectively,
$$|\lambda| \le \rho^{V}, \quad |\lambda| \le \rho^{q}, \quad |\lambda| \le \rho^{v}, \quad \text{and} \quad |\lambda| \le \rho^{l}.$$

The conclusion follows.  □

By the choices of αi in Remark 2.4, it is easy to get the relationships between ρ[0, 1], ρL, ρV, ρq, ρv and ρl as follows.

#### Theorem 3.3

Let A = [aij] ∈ ℝn×n be a stochastic matrix. Then
$$\rho^{[0,1]} \le \min\{\rho^{L}, \rho^{V}, \rho^{q}, \rho^{v}, \rho^{l}\},$$

where ρ[0, 1], ρL, ρV, ρq, ρv, and ρl are defined in Theorem 3.1 and Theorem 3.2, respectively.
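Theorem 3.3 can be checked numerically: ρ[0,1] has no closed form here, but since each f_i is piecewise linear in α_i it is well approximated on a fine grid. A sketch on our own toy matrix (the grid search is our own approximation device, not the paper's method), compared against ρL, ρq and ρl from Theorem 3.2:

```python
# Compare rho^[0,1] (Theorem 3.1, grid-approximated) with bounds from
# Theorem 3.2 on a toy 3x3 stochastic matrix (not from the paper).
A = [[0.5, 0.3, 0.2],
     [0.2, 0.5, 0.3],
     [0.3, 0.2, 0.5]]
n = len(A)

def col(i):
    return [A[j][i] for j in range(n) if j != i]

def f(i, alpha):
    """f_i(alpha) = sum over all j of |alpha*L_i + (1-alpha)*l_i - a_ji|."""
    La = alpha * max(col(i)) + (1 - alpha) * min(col(i))
    return sum(abs(La - A[j][i]) for j in range(n))

grid = [k / 1000 for k in range(1001)]
rho_01 = max(min(f(i, a) for a in grid) for i in range(n))

Ci = [sum(col(i)) for i in range(n)]
rho_L = max(A[i][i] + n * max(col(i)) - Ci[i] for i in range(n))
rho_l = max(A[i][i] - (n - 2) * min(col(i)) + Ci[i] for i in range(n))
rho_q = max(sum(abs(A[j][i] - Ci[i] / (n - 1)) for j in range(n))
            for i in range(n))
```

For this matrix the grid minimum sits at α_i = 1, giving ρ[0,1] = 0.3, strictly below min{ρL, ρq, ρl} = 0.35, consistent with Theorem 3.3.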

As in the proof of Theorem 3.2, by Theorems 1.1 and 1.2 two upper bounds for the subdominant eigenvalue of a stochastic matrix are obtained easily.

#### Proposition 3.4

Let A = [aij] ∈ ℝn×n be a stochastic matrix and λ ∈ σ(A)∖{1} be its subdominant eigenvalue. Then
$$|\lambda| \le 1 - \mathrm{trace}(A) + n\gamma(A) \quad \text{and} \quad |\lambda| \le \mathrm{trace}(A) + n\tilde{\gamma}(A) - 1;$$
consequently,
$$|\lambda| \le \min\{1 - \mathrm{trace}(A) + n\gamma(A),\ \mathrm{trace}(A) + n\tilde{\gamma}(A) - 1\}. \qquad (8)$$

For the comparison of ρ[0,1] with the upper bound
$$\Lambda := \min\{1 - \mathrm{trace}(A) + n\gamma(A),\ \mathrm{trace}(A) + n\tilde{\gamma}(A) - 1\}$$
in (8), we remark that, by taking some special αi, an upper bound can be obtained which is better than Λ.

## 4 Special choices of αi for the set ΓstoLα(A)

In this section, we choose αi for the set ΓstoLα(A) to give a set, which is tighter than the sets Γstol(A) and ΓstoL(A) by determining the optimal value of αi for estimating the moduli of subdominant eigenvalues of a stochastic matrix.

For a given stochastic matrix A = [aij] ∈ ℝn×n, let
$$N^{+}(A) = \{i \in N : \Delta_i(A) \ge 0\}$$
and
$$N^{-}(A) = \{i \in N : \Delta_i(A) < 0\},$$
where $\Delta_i(A) = nL_i(A) + (n-2)l_i(A) - 2C_i(A)$. Obviously, N = N+(A) ⋃ N−(A).

#### Proposition 4.1

Let A = [aij] ∈ ℝn×n be a stochastic matrix. If λ ∈ σ(A)∖{1}, then
$$|\lambda| \le \rho^{0,1}, \qquad (9)$$
where
$$\rho^{0,1} = \max\left\{\max_{i \in N^{+}(A)}\{a_{ii} - (n-2)l_i(A) + C_i(A)\},\ \max_{i \in N^{-}(A)}\{a_{ii} + nL_i(A) - C_i(A)\}\right\}.$$

#### Proof

Note that
$$\begin{aligned} CL_i^{\alpha_i}(A) &+ |\alpha_i L_i(A) + (1-\alpha_i)l_i(A) - a_{ii}| \\ &= \sum_{j \ne i}|\alpha_i L_i(A) + (1-\alpha_i)l_i(A) - (\alpha_i a_{ji} + (1-\alpha_i)a_{ji})| + |\alpha_i L_i(A) + (1-\alpha_i)l_i(A) - a_{ii}| \\ &\le \alpha_i\sum_{j \ne i}|L_i(A) - a_{ji}| + (1-\alpha_i)\sum_{j \ne i}|l_i(A) - a_{ji}| + \alpha_i L_i(A) + (1-\alpha_i)l_i(A) + a_{ii} \\ &= (n-1)\alpha_i L_i(A) - (n-1)(1-\alpha_i)l_i(A) + (1-2\alpha_i)C_i(A) + \alpha_i L_i(A) + (1-\alpha_i)l_i(A) + a_{ii} \\ &= a_{ii} - (n-2)l_i(A) + C_i(A) + \alpha_i\left(nL_i(A) + (n-2)l_i(A) - 2C_i(A)\right) \\ &= a_{ii} - (n-2)l_i(A) + C_i(A) + \alpha_i\Delta_i(A). \end{aligned}$$

Hence, from Theorem 3.1, we have
$$\begin{aligned} |\lambda| &\le \max_{i \in N}\min_{\alpha_i \in [0,1]}\sum_{j=1}^{n}|\alpha_i L_i(A) + (1-\alpha_i)l_i(A) - a_{ji}| \\ &= \max_{i \in N}\min_{\alpha_i \in [0,1]}\left\{CL_i^{\alpha_i}(A) + |\alpha_i L_i(A) + (1-\alpha_i)l_i(A) - a_{ii}|\right\} \\ &\le \max_{i \in N}\min_{\alpha_i \in [0,1]}\left\{a_{ii} - (n-2)l_i(A) + C_i(A) + \alpha_i\Delta_i(A)\right\} \\ &= \max\left\{\max_{i \in N^{+}(A)}\min_{\alpha_i \in [0,1]}\left\{a_{ii} - (n-2)l_i(A) + C_i(A) + \alpha_i\Delta_i(A)\right\},\ \max_{i \in N^{-}(A)}\min_{\alpha_i \in [0,1]}\left\{a_{ii} - (n-2)l_i(A) + C_i(A) + \alpha_i\Delta_i(A)\right\}\right\}. \end{aligned} \qquad (10)$$

Furthermore, let
$$f(\alpha) = a_{ii} - (n-2)l_i(A) + C_i(A) + \alpha\Delta_i(A), \quad \alpha \in [0,1].$$

Then when $\Delta_i(A) \ge 0$, $f(\alpha)$ attains its minimum $a_{ii} - (n-2)l_i(A) + C_i(A)$ at $\alpha = 0$, and when $\Delta_i(A) < 0$, $f(\alpha)$ attains its minimum
$$a_{ii} - (n-2)l_i(A) + C_i(A) + \Delta_i(A) = a_{ii} + nL_i(A) - C_i(A)$$

at $\alpha = 1$. Therefore, Inequality (10) is equivalent to
$$|\lambda| \le \max\left\{\max_{i \in N^{+}(A)}\{a_{ii} - (n-2)l_i(A) + C_i(A)\},\ \max_{i \in N^{-}(A)}\{a_{ii} + nL_i(A) - C_i(A)\}\right\}.$$

The conclusion follows.  □

By the proof of Proposition 4.1, it is not difficult to see that the upper bound ρ0,1 may be larger than ρ[0,1] in Theorem 3.1, but ρ0,1 depends only on the entries of a stochastic matrix. Moreover, ρ0,1 ≤ ρL and ρ0,1 ≤ ρl, as the following proposition shows.

#### Proposition 4.2

Let A = [aij] ∈ ℝn×n be a stochastic matrix. Then
$$\rho^{0,1} \le \min\{\rho^{l}, \rho^{L}\},$$

where ρl, ρL and ρ0, 1 are defined in Theorem 3.2 and Proposition 4.1, respectively.

#### Proof

By the proof of Proposition 4.1, ρ0,1 is equal to the last expression in Inequality (10), that is,
$$\rho^{0,1} = \max\left\{\max_{i \in N^{+}(A)}\min_{\alpha_i \in [0,1]}\left\{a_{ii} - (n-2)l_i(A) + C_i(A) + \alpha_i\Delta_i(A)\right\},\ \max_{i \in N^{-}(A)}\min_{\alpha_i \in [0,1]}\left\{a_{ii} - (n-2)l_i(A) + C_i(A) + \alpha_i\Delta_i(A)\right\}\right\}.$$

Also let
$$f(\alpha) = a_{ii} - (n-2)l_i(A) + C_i(A) + \alpha\Delta_i(A), \quad \alpha \in [0,1].$$

Then when Δi(A) ≥ 0, f(α) is a monotonically increasing function of α, and when Δi(A) < 0, f(α) is a monotonically decreasing function of α.

For the case that $L_i(A) = \max_{j \ne i} a_{ji} > l_i(A) = \min_{j \ne i} a_{ji}$ for each $i \in N$, we will prove ρ0,1 ≤ ρL. Note that
$$f(0) = a_{ii} - (n-2)l_i(A) + C_i(A) \quad \text{and} \quad f(1) = a_{ii} + nL_i(A) - C_i(A).$$

Since $f(\alpha)$ is increasing when $\Delta_i(A) \ge 0$, we have
$$\begin{aligned} \max_{i \in N^{+}(A)}\{a_{ii} - (n-2)l_i(A) + C_i(A)\} &= \max_{i \in N^{+}(A)}\min_{\alpha_i \in [0,1]}\left\{a_{ii} - (n-2)l_i(A) + C_i(A) + \alpha_i\Delta_i(A)\right\} \\ &\le \max_{i \in N^{+}(A)}\left\{a_{ii} - (n-2)l_i(A) + C_i(A) + \Delta_i(A)\right\} = \max_{i \in N^{+}(A)}\{a_{ii} + nL_i(A) - C_i(A)\}, \end{aligned}$$
which implies
$$\rho^{0,1} \le \max_{i \in N}\{a_{ii} + nL_i(A) - C_i(A)\} = \rho^{L}.$$

Then, similarly to the proof of ρ0,1 ≤ ρL, we can easily obtain ρ0,1 ≤ ρl.

For the case that $L_i(A) = l_i(A)$ for some $i \in N$, we have $\Delta_i(A) = 0$ and
$$a_{ii} + nL_i(A) - C_i(A) = a_{ii} + (n-2)V_i(A) + 2L_i(A) - C_i(A) = a_{ii} - (n-4)v_i(A) - 2l_i(A) + C_i(A) = a_{ii} - (n-2)l_i(A) + C_i(A).$$

Similarly to the case $L_i(A) > l_i(A)$, $i \in N$, we can also obtain
$$\rho^{0,1} \le \rho^{L} \quad \text{and} \quad \rho^{0,1} \le \rho^{l}.$$

The conclusion follows.  □

By Propositions 4.1 and 4.2, we know that the optimal values of αi, i ∈ N, for the bound
$$\max_{i \in N}\min_{\alpha_i \in [0,1]}\left\{a_{ii} - (n-2)l_i(A) + C_i(A) + \alpha_i\Delta_i(A)\right\},$$
which is obtained by using the set ΓstoLα(A) in Theorem 2.2, are αi = 0 for i ∈ N+(A) and αi = 1 for i ∈ N−(A), so that
$$\rho^{0,1} = \max\left\{\max_{i \in N^{+}(A)}\{a_{ii} - (n-2)l_i(A) + C_i(A)\},\ \max_{i \in N^{-}(A)}\{a_{ii} + nL_i(A) - C_i(A)\}\right\}$$
is less than or equal to the bounds obtained by using the sets in Theorems 1.3 and 1.4, respectively. This provides a choice of αi, i ∈ N, for the set ΓstoLα(A) to localize all eigenvalues different from 1 of a stochastic matrix.

For a stochastic matrix A = [aij] ∈ ℝn×n and
$$d = L^{\alpha}(A) = [L_1^{\alpha_1}(A), L_2^{\alpha_2}(A), \dots, L_n^{\alpha_n}(A)]^{T}$$
defined as in (1), we take αi = 0 for i ∈ N+(A) and αi = 1 for i ∈ N−(A), that is,
$$L_i^{\alpha_i}(A) = \begin{cases} L_i^{0}(A) = l_i(A), & i \in N^{+}(A), \\ L_i^{1}(A) = L_i(A), & i \in N^{-}(A). \end{cases}$$
For this choice, the set ΓstoLα(A) reduces to
$$\Gamma^{stoL^{0,1}}(A) := \left(\bigcup_{i \in N^{+}(A)}\Gamma_i^{stoL^{0}}(A)\right)\bigcup\left(\bigcup_{i \in N^{-}(A)}\Gamma_i^{stoL^{1}}(A)\right),$$

where $\Gamma_i^{stoL^{0}}(A) = \Gamma_i^{stol}(A)$ and $\Gamma_i^{stoL^{1}}(A) = \Gamma_i^{stoL}(A)$. Hence, we have the following result.

#### Theorem 4.3

Let A = [aij] ∈ ℝn×n be a stochastic matrix. If λ ∈ σ(A)∖{1}, then
$$\lambda \in \Gamma^{stoL^{0,1}}(A) = \left(\bigcup_{i \in N^{+}(A)}\Gamma_i^{stol}(A)\right)\bigcup\left(\bigcup_{i \in N^{-}(A)}\Gamma_i^{stoL}(A)\right). \qquad (11)$$

#### Example 4.4

Consider the stochastic matrix
$$A = \begin{pmatrix} 0.2656 & 0.0471 & 0.1452 & 0.0758 & 0.2199 & 0.2463 \\ 0.2634 & 0.3368 & 0.0475 & 0.1143 & 0.1354 & 0.1026 \\ 0.0591 & 0.2002 & 0.1831 & 0.1916 & 0.1814 & 0.1846 \\ 0.2699 & 0.2753 & 0.1655 & 0.1941 & 0.0788 & 0.0165 \\ 0.1443 & 0.0598 & 0.1205 & 0.2582 & 0.2839 & 0.1332 \\ 0.2355 & 0.1027 & 0.1399 & 0.2358 & 0.2111 & 0.0750 \end{pmatrix}.$$

By computations, we have that N+(A) = {2, 4, 6}, N−(A) = {1, 3, 5}, and
$$\Gamma^{stoL^{0,1}}(A) = \Gamma_2^{stol}(A)\cup\Gamma_4^{stol}(A)\cup\Gamma_6^{stol}(A)\cup\Gamma_1^{stoL}(A)\cup\Gamma_3^{stoL}(A)\cup\Gamma_5^{stoL}(A).$$

By drawing the sets Γstol(A), ΓstoL(A) and ΓstoL0, 1(A) in the complex plane (see Figure 3), it is not difficult to see that for any λ ∈ σ(A)∖{1}, $λ∈ΓstoL0,1(A),$

and that although ΓstoL0, 1(A) ⊈ Γstol(A) and Γstol(A) ⊈ ΓstoL0, 1(A), the set ΓstoL0, 1(A) is better than Γstol(A) and ΓstoL(A) for estimating the moduli of subdominant eigenvalues.

Fig. 3

Γstol(A), ΓstoL(A) and ΓstoL0, 1(A)
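The index split of Example 4.4 is easy to reproduce: compute Δ_i(A) column by column and sort the indices by its sign. The sketch below (pure Python; indices reported 1-based to match the paper) recovers N+(A) = {2, 4, 6} and N−(A) = {1, 3, 5}, and evaluates ρ0,1 from the 4-digit data:

```python
# Recompute N+(A), N-(A) and rho^{0,1} for the 6x6 stochastic matrix of
# Example 4.4, using Delta_i(A) = n*L_i(A) + (n-2)*l_i(A) - 2*C_i(A).
A = [[0.2656, 0.0471, 0.1452, 0.0758, 0.2199, 0.2463],
     [0.2634, 0.3368, 0.0475, 0.1143, 0.1354, 0.1026],
     [0.0591, 0.2002, 0.1831, 0.1916, 0.1814, 0.1846],
     [0.2699, 0.2753, 0.1655, 0.1941, 0.0788, 0.0165],
     [0.1443, 0.0598, 0.1205, 0.2582, 0.2839, 0.1332],
     [0.2355, 0.1027, 0.1399, 0.2358, 0.2111, 0.0750]]
n = len(A)

def col(i):
    return [A[j][i] for j in range(n) if j != i]

Ci = [sum(col(i)) for i in range(n)]
Delta = [n * max(col(i)) + (n - 2) * min(col(i)) - 2 * Ci[i] for i in range(n)]
N_plus = [i + 1 for i in range(n) if Delta[i] >= 0]   # 1-based, as in the paper
N_minus = [i + 1 for i in range(n) if Delta[i] < 0]

rho_01 = max(
    max(A[i][i] - (n - 2) * min(col(i)) + Ci[i] for i in range(n) if Delta[i] >= 0),
    max(A[i][i] + n * max(col(i)) - Ci[i] for i in range(n) if Delta[i] < 0))
```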

## 5 Conclusions

In this paper, a set with n parameters in [0, 1] is given to localize all eigenvalues different from 1 for a stochastic matrix A, that is,
$$\sigma(A)\setminus\{1\} \subseteq \Gamma^{stoL^{\alpha}}(A) \quad \text{for any } \alpha_i \in [0,1],\ i \in N.$$

In particular, when αi = 0 for each i ∈ N, ΓstoLα(A) reduces to the set Γstol(A), which consists of the n sets $\Gamma_i^{stol}(A)$, and when αi = 1 for each i ∈ N, ΓstoLα(A) reduces to the set ΓstoL(A), which consists of the n sets $\Gamma_i^{stoL}(A)$. The sets Γstol(A) and ΓstoL(A) are used to estimate the moduli of subdominant eigenvalues, that is, for any λ ∈ σ(A)∖{1},
$$|\lambda| \le \rho^{l} = \max_{i \in N}\{a_{ii} - (n-2)l_i(A) + C_i(A)\}$$

and
$$|\lambda| \le \rho^{L} = \max_{i \in N}\{a_{ii} + nL_i(A) - C_i(A)\}.$$

Moreover, by taking αi = 0 for i ∈ N+(A) and αi = 1 for i ∈ N−(A), we give a set ΓstoL0,1(A), which consists of |N+(A)| sets $\Gamma_i^{stol}(A)$ and |N−(A)| sets $\Gamma_i^{stoL}(A)$, where |N+(A)| + |N−(A)| = n. By using ΓstoL0,1(A), we can get an upper bound for the moduli of subdominant eigenvalues which is better than ρl and ρL, i.e., for any λ ∈ σ(A)∖{1},
$$|\lambda| \le \rho^{0,1} \le \min\{\rho^{l}, \rho^{L}\},$$

where
$$\rho^{0,1} = \max\left\{\max_{i \in N^{+}(A)}\{a_{ii} - (n-2)l_i(A) + C_i(A)\},\ \max_{i \in N^{-}(A)}\{a_{ii} + nL_i(A) - C_i(A)\}\right\}.$$


## Acknowledgement

The authors are grateful to the referees for their useful and constructive suggestions. This work is supported by National Natural Science Foundations of China (11601473) and CAS “Light of West China” Program.

## References

• [1]

Peña J.M., Shape Preserving Representations in Computer-Aided Geometric Design, Nova Science Publishers, Hauppauge, NY, 1999

• [2]

Clayton A., Quasi-birth-and-death processes and matrix-valued orthogonal polynomials, SIAM J. Matrix Anal. Appl., 2010, 31, 2239-2260

• [3]

Karlin S., McGregor J., A characterization of birth and death processes, Proc. Natl. Acad. Sci. U.S.A., 1959, 45, 375-379

• [4]

Mitrofanov A.Yu., Stochastic Markov models of the formation and disintegration of binary complexes, Mat. Model., 2001, 13, 101-109

• [5]

Mitrofanov A.Yu., Sensitivity and convergence of uniformly ergodic Markov chains, J. Appl. Probab., 2005, 42, 1003-1014

• [6]

Seneta E., Non-negative Matrices and Markov Chains, Springer-Verlag, New York, 1981

• [7]

Berman A., Plemmons R.J., Nonnegative Matrices in the Mathematical Sciences, Classics in Applied Mathematics, SIAM, Philadelphia, 1994

• [8]

Cvetković L.J., Kostić V., Peña J.M., Eigenvalue Localization Refinements for Matrices Related to Positivity, SIAM J. Matrix Anal. Appl., 2011, 32, 771-784

• [9]

Kirkland S., A cycle-based bound for subdominant eigenvalues of stochastic matrices, Linear and Multilinear Algebra, 2009, 57, 247-266

• [10]

Kirkland S., Subdominant eigenvalues for stochastic matrices with given column sums, Electron. J. Linear Algebra, 2009, 18, 784-800

• [11]

Li C.Q., Liu Q.B., Li Y.T., Geršgorin-type and Brauer-type eigenvalue localization sets of stochastic matrices, Linear and Multilinear Algebra, 2015, 63(11), 2159-2170

• [12]

Shen S.Q., Yu J., Huang T.Z., Some classes of nonsingular matrices with applications to localize the real eigenvalues of real matrices, Linear Algebra Appl., 2014, 447, 74-87

• [13]

Wang Y.J., Liu W.Q., Caccetta L., Zhou G.L., Parameter selection for nonnegative matrix/tensor sparse decomposition, Operations Research Letters, 2015, 43, 423-426

• [14]

Zhou G., Wang G., Qi L.Q., Alqahtani M., A fast algorithm for the spectral radii of weakly reducible nonnegative tensors, Numer. Linear Algebra Appl., 2018

• [15]

Li C.Q., Li Y.T., A modification of eigenvalue localization for stochastic matrices, Linear Algebra Appl., 2014, 460, 231-241

• [16]

Varga R.S., Geršgorin and His Circles, Springer-Verlag, Berlin, 2004

## About the article

Accepted: 2018-02-13

Published Online: 2018-04-02

Citation Information: Open Mathematics, Volume 16, Issue 1, Pages 298–310, ISSN (Online) 2391-5455.
