A Geršgorin-type eigenvalue localization set with n parameters for stochastic matrices

Xiaoxiao Wang / Chaoqian Li / Yaotang Li
Published Online: 2018-04-02 | DOI: https://doi.org/10.1515/math-2018-0030

Abstract

A set in the complex plane involving n parameters in [0, 1] is given to localize all eigenvalues different from 1 of a stochastic matrix. As an application of this set, an upper bound for the moduli of the subdominant eigenvalues of a stochastic matrix is obtained. Lastly, we fix the n parameters in [0, 1] to give a new set including all eigenvalues different from 1, which is tighter than the sets provided by Shen et al. (Linear Algebra Appl. 447 (2014) 74-87) and Li et al. (Linear and Multilinear Algebra 63(11) (2015) 2159-2170) for estimating the moduli of subdominant eigenvalues.

Keywords: Stochastic Matrix; Geršgorin set; Subdominant eigenvalue

MSC 2010: 65F15; 15A18; 15A51

1 Introduction

Stochastic matrices and eigenvalue localization of stochastic matrices play key roles in many application fields, such as Computer Aided Geometric Design [1], Birth-Death Processes [2, 3, 4, 5], and Markov chains [6]. An entrywise nonnegative matrix A = [aij] ∈ ℝn×n is called row stochastic (or simply stochastic) if all its row sums are 1, that is,

$$\sum_{j=1}^{n} a_{ij} = 1, \quad \text{for each } i \in N = \{1, 2, \ldots, n\}.$$

Let us denote the ith deleted column sum of the moduli of off-diagonal entries of A by

$$C_i(A) = \sum_{j \neq i} a_{ji}.$$

Obviously, 1 is an eigenvalue of a stochastic matrix with corresponding eigenvector e = [1, 1, …, 1]^T. By the Perron-Frobenius Theorem [7], every eigenvalue λ of A, that is, λ ∈ σ(A), satisfies |λ| ≤ 1 [8]. Here we call λ a subdominant eigenvalue of a stochastic matrix A if 1 > |λ| > |η| for every eigenvalue η different from 1 and λ [8, 9, 10].
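As a quick illustration (our own sketch, not part of the paper), the following MATLAB code builds a random row-stochastic matrix, computes the deleted column sums Ci(A), and checks the Perron-Frobenius bound; it assumes a MATLAB version with implicit expansion (R2016b or later).

    % Our own sketch: a random stochastic matrix, its deleted column sums
    % C_i(A), and the Perron-Frobenius bound |lambda| <= 1.
    n = 5;
    A = rand(n); A = A ./ sum(A, 2);   % normalize rows, so A is stochastic
    C = sum(A, 1)' - diag(A);          % C_i(A) = sum_{j ~= i} a_{ji}
    lambda = eig(A);
    assert(max(abs(lambda)) <= 1 + 1e-12)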

Since the subdominant eigenvalue of a stochastic matrix is crucial for bounding the convergence rate of stochastic processes [8, 11, 12, 13, 14], it is interesting to give a set localizing all eigenvalues different from 1, or an upper bound for the modulus of its subdominant eigenvalue [8, 15].

One can use the well-known Geršgorin circle set [16] to localize all eigenvalues of a stochastic matrix. However, this set always includes the trivial eigenvalue 1, and thus it is not always precise for capturing all eigenvalues different from 1 of a stochastic matrix. Therefore, several authors have tried to modify the Geršgorin circle set so as to localize more precisely all eigenvalues different from 1. In [8], Cvetković et al. gave the following set.

Theorem 1.1

([8, Theorem 3.4]). Let A = [aij] ∈ ℝn×n be a stochastic matrix. If λ ∈ σ(A)∖{1}, then

$$\lambda \in \Gamma(A) = \{z \in \mathbb{C} : |z - \gamma(A)| < 1 - \mathrm{trace}(A) + (n-1)\gamma(A)\},$$

where $\gamma(A) = \max_{i \in N}\,(a_{ii} - l_i(A))$, $l_i(A) = \min_{j \neq i} a_{ji}$, and trace(A) is the trace of A.

However, the set provided by Theorem 1.1 is not effective in some cases, for instance for the class of stochastic matrices

$$SM_0 = \{A \in \mathbb{R}^{n \times n} : A \text{ is stochastic, and } a_{ii} = l_i(A) = 0 \text{ for each } i \in N\};$$

for more details, see [15]. To overcome this drawback, Li and Li [15] provided another set as follows.

Theorem 1.2

([15, Theorem 6]). Let A = [aij] ∈ ℝn×n be a stochastic matrix. If λ ∈ σ(A)∖{1}, then

$$\lambda \in \tilde{\Gamma}(A) = \{z \in \mathbb{C} : |z + \tilde{\gamma}(A)| < \mathrm{trace}(A) + (n-1)\tilde{\gamma}(A) - 1\},$$

where $\tilde{\gamma}(A) = \max_{i \in N}\,(L_i(A) - a_{ii})$ and $L_i(A) = \max_{j \neq i} a_{ji}$.
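For concreteness, here is a small MATLAB sketch (our own, under the paper's notation) that computes γ(A), γ̃(A) and the data of the discs in Theorems 1.1 and 1.2 for a given stochastic matrix A: Γ(A) is the disc centered at γ(A) with radius r1, and Γ̃(A) is the disc centered at −γ̃(A) with radius r2.

    % Sketch of the disc data of Theorems 1.1 and 1.2 (our own code).
    n = size(A, 1);
    l = zeros(n, 1); L = zeros(n, 1);
    for i = 1:n
        off = A([1:i-1, i+1:n], i);    % off-diagonal entries of column i
        l(i) = min(off); L(i) = max(off);
    end
    gam  = max(diag(A) - l);           % gamma(A)  = max_i (a_ii - l_i(A))
    gamt = max(L - diag(A));           % gamma~(A) = max_i (L_i(A) - a_ii)
    r1 = 1 - trace(A) + (n - 1)*gam;   % radius of Gamma(A),  center  gam
    r2 = trace(A) + (n - 1)*gamt - 1;  % radius of Gamma~(A), center -gamt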

Recently, by taking respectively

$$l_i(A) = \min_{j \neq i} a_{ji}, \qquad v_i(A) = \max\Big\{0,\ \tfrac{1}{2}\min_{\substack{k, m \neq i \\ k \neq m}}\{a_{ki} + a_{mi}\}\Big\} = \tfrac{1}{2}\min_{\substack{k, m \neq i \\ k \neq m}}\{a_{ki} + a_{mi}\},$$

and

$$q_i(A) = \frac{1}{n-1}\sum_{j \neq i} a_{ji}$$

to modify the Geršgorin circle set, Shen et al. [12], and Li et al. [11] gave three sets to localize all eigenvalues different from 1.

Theorem 1.3

([11, 12]). Let A = [aij] ∈ ℝn×n be a stochastic matrix. If λ ∈ σ(A)∖{1}, then

$$\lambda \in \Gamma^{sto_l}(A) = \bigcup_{i \in N}\Gamma_i^{sto_l}(A), \quad \Gamma_i^{sto_l}(A) = \{z \in \mathbb{C} : |a_{ii} - z - l_i(A)| < C_{l_i}(A)\},$$

$$\lambda \in \Gamma^{sto_v}(A) = \bigcup_{i \in N}\Gamma_i^{sto_v}(A), \quad \Gamma_i^{sto_v}(A) = \{z \in \mathbb{C} : |a_{ii} - z - v_i(A)| < C_{v_i}(A)\},$$

and

$$\lambda \in \Gamma^{sto_q}(A) = \bigcup_{i \in N}\Gamma_i^{sto_q}(A), \quad \Gamma_i^{sto_q}(A) = \{z \in \mathbb{C} : |a_{ii} - z - q_i(A)| < C_{q_i}(A)\},$$

where

$$C_{l_i}(A) = \sum_{j \neq i}|a_{ji} - l_i(A)| = \sum_{j \neq i}a_{ji} - \sum_{j \neq i}l_i(A) = C_i(A) - (n-1)l_i(A),$$

$$C_{v_i}(A) = \sum_{j \neq i}|a_{ji} - v_i(A)| = C_i(A) - (n-3)v_i(A) - 2l_i(A),$$

and $C_{q_i}(A) = \sum_{j \neq i}|a_{ji} - q_i(A)|$.

Note that Shen et al. [12] used these three sets to localize the real eigenvalues different from 1; Li et al. [11] generalized them so as to localize all eigenvalues different from 1.

Also in [11], Li et al. provided another two modifications of the Geršgorin circle set by taking respectively

$$L_i(A) = \max_{j \neq i} a_{ji} \quad \text{and} \quad V_i(A) = \tfrac{1}{2}\max_{\substack{k, m \neq i \\ k \neq m}}\{a_{ki} + a_{mi}\}.$$

Theorem 1.4

([11, Theorems 3.3 and 3.8]). Let A = [aij] ∈ ℝn×n be a stochastic matrix. If λ ∈ σ(A)∖{1}, then

$$\lambda \in \Gamma^{sto_L}(A) = \bigcup_{i \in N}\Gamma_i^{sto_L}(A), \quad \Gamma_i^{sto_L}(A) = \{z \in \mathbb{C} : |L_i(A) - a_{ii} + z| < C_{L_i}(A)\},$$

and

$$\lambda \in \Gamma^{sto_V}(A) = \bigcup_{i \in N}\Gamma_i^{sto_V}(A), \quad \Gamma_i^{sto_V}(A) = \{z \in \mathbb{C} : |V_i(A) - a_{ii} + z| < C_{V_i}(A)\},$$

where

$$C_{L_i}(A) = \sum_{j \neq i}|L_i(A) - a_{ji}| = (n-1)L_i(A) - C_i(A)$$

and

$$C_{V_i}(A) = \sum_{j \neq i}|V_i(A) - a_{ji}| = (n-3)V_i(A) + 2L_i(A) - C_i(A).$$
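The five parameter choices and the corresponding radii are all computable column by column. The following MATLAB sketch (ours, assuming n ≥ 3 so that the two smallest and two largest off-diagonal entries of each column exist) collects them for a stochastic matrix A.

    % Our sketch: the parameters l_i, v_i, q_i, L_i, V_i and the radii of
    % Theorems 1.3 and 1.4, computed column by column (needs n >= 3).
    n = size(A, 1);
    [l, v, q, L, V, Cl, Cv, Cq, CL, CV] = deal(zeros(n, 1));
    for i = 1:n
        off = A([1:i-1, i+1:n], i);    % off-diagonal entries of column i
        s = sort(off); Ci = sum(off);  % ascending order; C_i(A)
        l(i) = s(1);            L(i) = s(end);
        v(i) = (s(1) + s(2))/2; V(i) = (s(end) + s(end-1))/2;
        q(i) = Ci/(n - 1);
        Cl(i) = Ci - (n - 1)*l(i);
        Cv(i) = Ci - (n - 3)*v(i) - 2*l(i);
        Cq(i) = sum(abs(off - q(i)));
        CL(i) = (n - 1)*L(i) - Ci;
        CV(i) = (n - 3)*V(i) + 2*L(i) - Ci;
    end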

Note that li(A), vi(A), qi(A), Vi(A) and Li(A) all lie in the interval $[\min_{j \neq i} a_{ji},\ \max_{j \neq i} a_{ji}]$. So it is natural to ask whether there is an optimal value in this interval such that the set obtained by using it to modify the Geršgorin circle set captures all eigenvalues different from 1 of a stochastic matrix most precisely. To answer this question, in Section 2 we give a set with n parameters in [0, 1] to localize all eigenvalues different from 1 of a stochastic matrix, and show that this set reduces to Γstol(A), Γstov(A), Γstoq(A), ΓstoV(A) and ΓstoL(A) for suitable fixed parameters. In Section 3 we use this set to give an upper bound for the moduli of the subdominant eigenvalues of a stochastic matrix. In Section 4, by choosing special values of these n parameters in [0, 1] in the upper bound obtained in Section 3, we give a new set including all eigenvalues different from 1, which is better than Γstol(A) and ΓstoL(A) in the sense of estimating the moduli of subdominant eigenvalues.

2 A Geršgorin-type eigenvalue localization set with n parameters

We begin with an important lemma, which is used to derive several modifications of the Geršgorin circle set.

Lemma 2.1

([8, 11, 12]). Let A = [aij] ∈ ℝn×n be a stochastic matrix. For any d = [d1, d2, …, dn]T ∈ ℝn, if μ ∈ σ(A)∖{1}, then −μ is an eigenvalue of the matrix

$$B = ed^T - A.$$
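A quick numeric check of Lemma 2.1 (our own sketch, not from the paper): for a random stochastic A and an arbitrary d, every eigenvalue μ ≠ 1 of A reappears as −μ among the eigenvalues of ed^T − A, up to rounding.

    % Our sketch: numeric check of Lemma 2.1.
    n = 4;
    A = rand(n); A = A ./ sum(A, 2);     % random stochastic matrix
    d = rand(n, 1);
    B = ones(n, 1)*d' - A;               % B = e d^T - A
    mu = eig(A); mu = mu(abs(mu - 1) > 1e-10);
    for k = 1:numel(mu)
        assert(min(abs(eig(B) + mu(k))) < 1e-8)   % -mu is an eigenvalue of B
    end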

Lemma 2.1 shows that once an eigenvalue localization set for B = ed^T − A is given, we can get a set localizing all eigenvalues different from 1 of the stochastic matrix A [11]. Now we present the following choice of d:

$$d = L^{\alpha}(A), \tag{1}$$

where $L^{\alpha}(A) = [L_1^{\alpha_1}(A), L_2^{\alpha_2}(A), \ldots, L_n^{\alpha_n}(A)]^T$, $\alpha_i \in [0, 1]$ for $i \in N$, and

$$L_i^{\alpha_i}(A) = \alpha_i L_i(A) + (1 - \alpha_i) l_i(A) = \alpha_i \max_{j \neq i} a_{ji} + (1 - \alpha_i)\min_{j \neq i} a_{ji}, \quad i \in N.$$

By Lemma 2.1 and (1), we can obtain the following set to localize all eigenvalues different from 1 of a stochastic matrix.

Theorem 2.2

Let A = [aij] ∈ ℝn×n be a stochastic matrix. If λ ∈ σ(A)∖{1}, then for any αi ∈ [0, 1], i ∈ N,

$$\lambda \in \Gamma^{sto_{L^{\alpha}}}(A) = \bigcup_{i \in N}\Gamma_i^{sto_{L^{\alpha_i}}}(A),$$

where

$$\Gamma_i^{sto_{L^{\alpha_i}}}(A) = \{z \in \mathbb{C} : |\alpha_i L_i(A) + (1 - \alpha_i)l_i(A) - a_{ii} + z| \le C_{L_i^{\alpha_i}}(A)\}$$

and

$$C_{L_i^{\alpha_i}}(A) = \sum_{j \neq i}\big|L_i^{\alpha_i}(A) - a_{ji}\big| = \sum_{j \neq i}\big|\alpha_i L_i(A) + (1 - \alpha_i)l_i(A) - a_{ji}\big|. \tag{2}$$

Proof

Let $B^{\alpha} = ed^T - A = [b_{ij}^{\alpha}]$, where $d = L^{\alpha}(A) = [L_1^{\alpha_1}(A), L_2^{\alpha_2}(A), \ldots, L_n^{\alpha_n}(A)]^T$. By applying the Geršgorin circle theorem (in its column form) to $B^{\alpha}$, we have that for any $\hat{\lambda} \in \sigma(B^{\alpha})$,

$$\hat{\lambda} \in \bigcup_{i \in N}\{z \in \mathbb{C} : |b_{ii}^{\alpha} - z| \le C_i(B^{\alpha})\}.$$

By Lemma 2.1, if $\lambda \in \sigma(A)\setminus\{1\}$, then $-\lambda \in \sigma(B^{\alpha})$, that is,

$$-\lambda \in \bigcup_{i \in N}\{z \in \mathbb{C} : |b_{ii}^{\alpha} - z| \le C_i(B^{\alpha})\}.$$

Furthermore, note that for any $i \in N$,

$$b_{ii}^{\alpha} = L_i^{\alpha_i}(A) - a_{ii} = \alpha_i L_i(A) + (1 - \alpha_i)l_i(A) - a_{ii}$$

and

$$C_i(B^{\alpha}) = \sum_{j \neq i}\big|L_i^{\alpha_i}(A) - a_{ji}\big| = C_{L_i^{\alpha_i}}(A).$$

Hence,

$$\lambda \in \Gamma^{sto_{L^{\alpha}}}(A) = \bigcup_{i \in N}\Gamma_i^{sto_{L^{\alpha_i}}}(A),$$

where $\Gamma_i^{sto_{L^{\alpha_i}}}(A) = \{z \in \mathbb{C} : |\alpha_i L_i(A) + (1 - \alpha_i)l_i(A) - a_{ii} + z| \le C_{L_i^{\alpha_i}}(A)\}$. □
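As a practical companion to Theorem 2.2 (our own sketch, using the hypothetical helper name in_gamma_alpha, saved as in_gamma_alpha.m), the following MATLAB function tests, for a given parameter vector alpha, whether every eigenvalue of A different from 1 lies in some disc ΓistoLαi(A); by the theorem it should always return true.

    function inside = in_gamma_alpha(A, alpha)
    % Our sketch: true iff every eigenvalue of A different from 1 lies in
    % some disc of Theorem 2.2 for the given parameters alpha(i) in [0,1].
    n = size(A, 1);
    lam = eig(A); lam = lam(abs(lam - 1) > 1e-10);  % drop the eigenvalue 1
    inside = true;
    for k = 1:numel(lam)
        hit = false;
        for i = 1:n
            off = A([1:i-1, i+1:n], i);
            Lai = alpha(i)*max(off) + (1 - alpha(i))*min(off); % L_i^{alpha_i}(A)
            C   = sum(abs(Lai - off));                         % radius, cf. (2)
            hit = hit || abs(Lai - A(i,i) + lam(k)) <= C;
        end
        inside = inside && hit;
    end
    end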

Example 2.3

Consider the first 50 stochastic matrices generated by the MATLAB code

    k = 10; A = rand(k, k); A = inv(diag(sum(A, 2)))*A;

and take αi ∈ [0, 1] for i = 1, 2, …, 10 by the MATLAB code

    alpha = rand(1, k);

By drawing the sets ΓstoLα(A) in Theorem 2.2 and

$$\Gamma = \Gamma(A) \cap \tilde{\Gamma}(A)$$

in Theorems 1.1 and 1.2, it is not difficult to see that ΓstoLα(A) ⊂ Γ holds for 46 of the 50 matrices, that if 1 ∉ Γ, then 1 ∉ ΓstoLα(A), and that even if 1 ∈ Γ, the set ΓstoLα(A) may still exclude the trivial eigenvalue 1 (see also Table 1). So, by these examples, we conclude that the set in Theorem 2.2 captures all eigenvalues different from 1 of a stochastic matrix more precisely than the sets in Theorems 1.1 and 1.2 in some cases.

Table 1

Comparisons of ΓstoLα(A) and Γ = Γ(A) ⋂ Γ̃(A)
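Whether the trivial eigenvalue 1 is excluded can be tested directly (our own sketch, reusing A, alpha and k from Example 2.3): 1 belongs to the i-th disc if and only if |Liαi(A) − aii + 1| ≤ CLiαi(A), so 1 ∉ ΓstoLα(A) exactly when this fails for every i.

    % Our sketch: does Gamma^{sto_{L^alpha}}(A) contain z = 1?
    contains_one = false;
    for i = 1:k
        off = A([1:i-1, i+1:k], i);
        Lai = alpha(i)*max(off) + (1 - alpha(i))*min(off);
        C   = sum(abs(Lai - off));
        contains_one = contains_one || abs(Lai - A(i,i) + 1) <= C;
    end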

Remark 2.4

  1. When αi = 1 for each i ∈ N, then Liαi(A) = Li(A) and CLiαi(A) = CLi(A) for any i ∈ N, which implies that ΓstoLα(A) reduces to ΓstoL(A) in Theorem 1.4;

  2. When $\alpha_i = \frac{V_i(A) - l_i(A)}{L_i(A) - l_i(A)} \in [0, 1]$ and Li(A) > li(A) for each i ∈ N, then Liαi(A) = Vi(A) and CLiαi(A) = CVi(A) for any i ∈ N. On the other hand, if Li(A) = li(A) for some i ∈ N, then for any αi ∈ [0, 1] we also have Liαi(A) = Vi(A) and CLiαi(A) = CVi(A). These imply that ΓstoLα(A) reduces to ΓstoV(A) in Theorem 1.4;

  3. When $\alpha_i = \frac{q_i(A) - l_i(A)}{L_i(A) - l_i(A)} \in [0, 1]$ and Li(A) > li(A) for each i ∈ N, then Liαi(A) = qi(A) and CLiαi(A) = Cqi(A) for any i ∈ N. On the other hand, if Li(A) = li(A) for some i ∈ N, then for any αi ∈ [0, 1] we also have Liαi(A) = qi(A) and CLiαi(A) = Cqi(A). These imply that ΓstoLα(A) reduces to Γstoq(A) in Theorem 1.3;

  4. When $\alpha_i = \frac{v_i(A) - l_i(A)}{L_i(A) - l_i(A)} \in [0, 1]$ and Li(A) > li(A) for each i ∈ N, then Liαi(A) = vi(A) and CLiαi(A) = Cvi(A) for any i ∈ N. On the other hand, if Li(A) = li(A) for some i ∈ N, then for any αi ∈ [0, 1] we also have Liαi(A) = vi(A) and CLiαi(A) = Cvi(A). These imply that ΓstoLα(A) reduces to Γstov(A) in Theorem 1.3;

  5. When αi = 0 for each i ∈ N, then Liαi(A) = li(A) and CLiαi(A) = Cli(A) for any i ∈ N, which implies that ΓstoLα(A) reduces to Γstol(A) in Theorem 1.3.

Hence, the set ΓstoLα(A) is a generalization of Γstol(A), Γstov(A) and Γstoq(A) in Theorem 1.3, and of ΓstoV(A) and ΓstoL(A) in Theorem 1.4. Moreover, since Theorem 2.2 holds for every αi ∈ [0, 1], i ∈ N, we easily get the following result.

Remark 2.5

Let A = [aij] ∈ ℝn×n be a stochastic matrix. If λ ∈ σ(A)∖{1}, then

$$\lambda \in \Gamma_{[0,1]}(A) = \bigcap_{\alpha_i \in [0,1],\ i \in N}\Gamma^{sto_{L^{\alpha}}}(A).$$

Furthermore, Γ[0,1] (A) ⊆ (ΓstoL(A) ⋂ ΓstoV(A) ⋂ Γstoq(A) ⋂ Γstov(A) ⋂ Γstol(A)).

The set Γ[0,1] (A) in Remark 2.5 is not of much practical use because it involves some parameters αi. In fact, we can take some special αi in practice, which is illustrated by the following example.

Example 2.6

Consider the third stochastic matrix A in Example 2.3. By Table 1, we have that

$$1 \notin \Gamma^{sto_{L^{\alpha}}}(A), \qquad \Gamma^{sto_{L^{\alpha}}}(A) \not\subseteq \Gamma, \qquad \text{and} \qquad \Gamma \not\subseteq \Gamma^{sto_{L^{\alpha}}}(A),$$

which is shown in Figure 1, where ΓstoLα(A) is drawn slightly thicker than Γ.

Fig. 1: ΓstoLα(A) ⊈ Γ and Γ ⊈ ΓstoLα(A)

Furthermore, we take the first 3 vectors

$$\alpha^{(j)} = [\alpha_1^{(j)}, \alpha_2^{(j)}, \ldots, \alpha_{10}^{(j)}], \quad j = 1, 2, 3,$$

generated by the MATLAB code alpha = rand(1, 10), that is,

$$\alpha^{(1)} = [0.8147, 0.9058, 0.1270, 0.9134, 0.6324, 0.0975, 0.2785, 0.5469, 0.9575, 0.9649],$$
$$\alpha^{(2)} = [0.1576, 0.9706, 0.9572, 0.4854, 0.8003, 0.1419, 0.4218, 0.9157, 0.7922, 0.9595],$$

and

$$\alpha^{(3)} = [0.6557, 0.0357, 0.8491, 0.9340, 0.6787, 0.7577, 0.7431, 0.3922, 0.6555, 0.1712].$$

By Remark 2.5, we have that for any λ ∈ σ(A)∖{1},

$$\lambda \in \Big(\Gamma^{sto_{L^{\alpha^{(1)}}}}(A) \cap \Gamma^{sto_{L^{\alpha^{(2)}}}}(A) \cap \Gamma^{sto_{L^{\alpha^{(3)}}}}(A)\Big).$$

We draw this set in the complex plane, see Figure 2. It is easy to see that

$$1 \notin \Big(\Gamma^{sto_{L^{\alpha^{(1)}}}}(A) \cap \Gamma^{sto_{L^{\alpha^{(2)}}}}(A) \cap \Gamma^{sto_{L^{\alpha^{(3)}}}}(A)\Big)$$

and

$$\Big(\Gamma^{sto_{L^{\alpha^{(1)}}}}(A) \cap \Gamma^{sto_{L^{\alpha^{(2)}}}}(A) \cap \Gamma^{sto_{L^{\alpha^{(3)}}}}(A)\Big) \subset \Gamma.$$

Fig. 2: (ΓstoLα(1)(A) ⋂ ΓstoLα(2)(A) ⋂ ΓstoLα(3)(A)) ⊂ Γ

This example shows that we can take some special αi to get a set which is tighter than the sets in Theorems 1.1 and 1.2.

It is well-known that an eigenvalue inclusion set leads to a sufficient condition for nonsingular matrices, and vice versa [12, 16]. Hence, from Theorem 2.2 or Remark 2.5, we can get a nonsingular condition for stochastic matrices.

Proposition 2.7

Let A = [aij] ∈ ℝn×n be a stochastic matrix. If for some ᾱi ∈ [0, 1], i ∈ N,

$$|\bar{\alpha}_i L_i(A) + (1 - \bar{\alpha}_i)l_i(A) - a_{ii}| > C_{L_i^{\bar{\alpha}_i}}(A), \quad i \in N, \tag{3}$$

where $C_{L_i^{\bar{\alpha}_i}}(A)$ is defined as in (2), then A is nonsingular.

Proof

Suppose that A is singular, that is, 0 ∈ σ (A). From Theorem 2.2, we have that for any αi ∈ [0, 1], iN,

$$0 \in \Gamma^{sto_{L^{\alpha}}}(A) = \bigcup_{i \in N}\Gamma_i^{sto_{L^{\alpha_i}}}(A).$$

In particular,

$$0 \in \Gamma^{sto_{L^{\bar{\alpha}}}}(A) = \bigcup_{i \in N}\Gamma_i^{sto_{L^{\bar{\alpha}_i}}}(A).$$

Hence, there is an index i0N such that

$$|\bar{\alpha}_{i_0} L_{i_0}(A) + (1 - \bar{\alpha}_{i_0})l_{i_0}(A) - a_{i_0 i_0}| \le C_{L_{i_0}^{\bar{\alpha}_{i_0}}}(A).$$

This contradicts (3). The conclusion follows.  □
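Condition (3) is easy to test numerically. A minimal sketch (ours), assuming a given stochastic matrix A and a given parameter vector abar with entries in [0, 1]:

    % Our sketch: Proposition 2.7 as a nonsingularity test for a given abar.
    n = size(A, 1); nonsingular = true;
    for i = 1:n
        off = A([1:i-1, i+1:n], i);
        Lai = abar(i)*max(off) + (1 - abar(i))*min(off);
        nonsingular = nonsingular && abs(Lai - A(i,i)) > sum(abs(Lai - off));
    end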

3 An upper bound for the moduli of subdominant eigenvalues

By using the set ΓstoLα(A) in Theorem 2.2, we can give a bound to estimate the moduli of subdominant eigenvalues of a stochastic matrix.

Theorem 3.1

Let A = [aij] ∈ ℝn×n be a stochastic matrix. If λ ∈ σ(A)∖{1}, then

$$|\lambda| \le \rho_{[0,1]}, \tag{4}$$

where

$$\rho_{[0,1]} = \max_{i \in N}\min_{\alpha_i \in [0,1]}\sum_{j=1}^{n}\big|\alpha_i L_i(A) + (1 - \alpha_i)l_i(A) - a_{ji}\big|.$$

Proof

Let

$$f_i(\alpha_i) = \sum_{j=1}^{n}\big|\alpha_i L_i(A) + (1 - \alpha_i)l_i(A) - a_{ji}\big| = C_{L_i^{\alpha_i}}(A) + \big|\alpha_i L_i(A) + (1 - \alpha_i)l_i(A) - a_{ii}\big|, \quad \alpha_i \in [0,1],\ i \in N,$$

where $C_{L_i^{\alpha_i}}(A) = \sum_{j \neq i}|\alpha_i L_i(A) + (1 - \alpha_i)l_i(A) - a_{ji}|$.

Each $f_i(\alpha_i)$, $i \in N$, is a continuous function of $\alpha_i \in [0,1]$, so there are $\tilde{\alpha}_i \in [0,1]$, $i \in N$, such that

$$f_i(\tilde{\alpha}_i) = \min_{\alpha_i \in [0,1]}\Big\{C_{L_i^{\alpha_i}}(A) + \big|\alpha_i L_i(A) + (1 - \alpha_i)l_i(A) - a_{ii}\big|\Big\}, \quad i \in N. \tag{5}$$

For these $\tilde{\alpha}_i \in [0,1]$, $i \in N$, Theorem 2.2 gives

$$\lambda \in \Gamma^{sto_{L^{\tilde{\alpha}}}}(A) = \bigcup_{i \in N}\Gamma_i^{sto_{L^{\tilde{\alpha}_i}}}(A).$$

Hence, there is an index $i_0 \in N$ such that

$$\big|\tilde{\alpha}_{i_0} L_{i_0}(A) + (1 - \tilde{\alpha}_{i_0})l_{i_0}(A) - a_{i_0 i_0} + \lambda\big| \le C_{L_{i_0}^{\tilde{\alpha}_{i_0}}}(A),$$

which gives

$$|\lambda| \le C_{L_{i_0}^{\tilde{\alpha}_{i_0}}}(A) + \big|\tilde{\alpha}_{i_0} L_{i_0}(A) + (1 - \tilde{\alpha}_{i_0})l_{i_0}(A) - a_{i_0 i_0}\big|. \tag{6}$$

By (5) we have

$$|\lambda| \le \min_{\alpha_{i_0} \in [0,1]}\Big\{C_{L_{i_0}^{\alpha_{i_0}}}(A) + \big|\alpha_{i_0} L_{i_0}(A) + (1 - \alpha_{i_0})l_{i_0}(A) - a_{i_0 i_0}\big|\Big\},$$

which implies

$$|\lambda| \le \max_{i \in N}\min_{\alpha_i \in [0,1]}\Big\{C_{L_i^{\alpha_i}}(A) + \big|\alpha_i L_i(A) + (1 - \alpha_i)l_i(A) - a_{ii}\big|\Big\} = \max_{i \in N}\min_{\alpha_i \in [0,1]}\sum_{j=1}^{n}\big|\alpha_i L_i(A) + (1 - \alpha_i)l_i(A) - a_{ji}\big| = \rho_{[0,1]}.$$

The conclusion follows.  □
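Numerically, ρ[0,1] is cheap to evaluate: each fi in the preceding proof is a convex, piecewise linear function of αi, so its minimum over [0, 1] is attained at 0, at 1, or at a breakpoint where one of the absolute values vanishes. A MATLAB sketch of this observation (our own, not from the paper):

    % Our sketch: rho_[0,1] via the breakpoints of the convex piecewise
    % linear functions f_i(alpha_i).
    n = size(A, 1); rho = 0;
    for i = 1:n
        col = A(:, i); off = col([1:i-1, i+1:n]);
        li = min(off); Li = max(off);
        if Li > li
            cand = min(max((col - li)/(Li - li), 0), 1);  % clipped breakpoints
            cand = [0; cand; 1];
        else
            cand = 0;                  % f_i is constant in alpha_i
        end
        fi = arrayfun(@(a) sum(abs(a*Li + (1 - a)*li - col)), cand);
        rho = max(rho, min(fi));
    end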

As in the proof of Theorem 3.1, we can give another bound to estimate the moduli of subdominant eigenvalues by using the sets Γstol(A), Γstov(A) and Γstoq(A) in Theorem 1.3, ΓstoL(A) and ΓstoV(A) in Theorem 1.4, respectively.

Theorem 3.2

Let A = [aij] ∈ ℝn×n be a stochastic matrix. If λ ∈ σ(A)∖{1}, then

$$|\lambda| \le \min\{\rho_L, \rho_V, \rho_q, \rho_v, \rho_l\}, \tag{7}$$

where

$$\rho_L = \max_{i \in N}\{a_{ii} + nL_i(A) - C_i(A)\}, \qquad \rho_V = \max_{i \in N}\{a_{ii} + (n-2)V_i(A) + 2L_i(A) - C_i(A)\},$$

$$\rho_q = \max_{i \in N}\sum_{j=1}^{n}|a_{ji} - q_i(A)|, \qquad \rho_v = \max_{i \in N}\{a_{ii} - (n-4)v_i(A) - 2l_i(A) + C_i(A)\},$$

and

$$\rho_l = \max_{i \in N}\{a_{ii} - (n-2)l_i(A) + C_i(A)\}.$$

Proof

We first prove |λ| ≤ ρL. From Theorem 1.4,

$$\lambda \in \Gamma^{sto_L}(A) = \bigcup_{i \in N}\Gamma_i^{sto_L}(A).$$

As in the proof of Theorem 3.1, there is an index $i_0 \in N$ such that

$$|L_{i_0}(A) - a_{i_0 i_0} + \lambda| \le C_{L_{i_0}}(A),$$

and hence

$$|\lambda| \le |a_{i_0 i_0} - L_{i_0}(A)| + C_{L_{i_0}}(A) \le a_{i_0 i_0} + L_{i_0}(A) + (n-1)L_{i_0}(A) - C_{i_0}(A) = a_{i_0 i_0} + nL_{i_0}(A) - C_{i_0}(A) \le \max_{i \in N}\{a_{ii} + nL_i(A) - C_i(A)\},$$

i.e., |λ| ≤ ρL. Similarly, by

$$\lambda \in \Gamma^{sto_V}(A), \quad \lambda \in \Gamma^{sto_q}(A), \quad \lambda \in \Gamma^{sto_v}(A), \quad \text{and} \quad \lambda \in \Gamma^{sto_l}(A),$$

we get, respectively,

$$|\lambda| \le \rho_V, \quad |\lambda| \le \rho_q, \quad |\lambda| \le \rho_v, \quad \text{and} \quad |\lambda| \le \rho_l.$$

The conclusion follows.  □
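The five bounds of Theorem 3.2 only involve the column data of A; a sketch (ours, assuming n ≥ 3):

    % Our sketch: the five bounds of Theorem 3.2 (needs n >= 3).
    n = size(A, 1);
    [rL, rV, rq, rv, rl] = deal(-inf);
    for i = 1:n
        off = A([1:i-1, i+1:n], i); s = sort(off); Ci = sum(off);
        li = s(1); Li = s(end); vi = (s(1) + s(2))/2; Vi = (s(end) + s(end-1))/2;
        qi = Ci/(n - 1); aii = A(i,i);
        rL = max(rL, aii + n*Li - Ci);
        rV = max(rV, aii + (n - 2)*Vi + 2*Li - Ci);
        rq = max(rq, sum(abs(A(:, i) - qi)));
        rv = max(rv, aii - (n - 4)*vi - 2*li + Ci);
        rl = max(rl, aii - (n - 2)*li + Ci);
    end
    bound = min([rL, rV, rq, rv, rl]);   % right-hand side of (7)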

By the choices of αi in Remark 2.4, it is easy to get the relationships between ρ[0, 1], ρL, ρV, ρq, ρv and ρl as follows.

Theorem 3.3

Let A = [aij] ∈ ℝn×n be a stochastic matrix. Then

$$\rho_{[0,1]} \le \min\{\rho_L, \rho_V, \rho_q, \rho_v, \rho_l\},$$

where ρ[0, 1], ρL, ρV, ρq, ρv, and ρl are defined in Theorem 3.1 and Theorem 3.2, respectively.

As in the proof of Theorem 3.2, by Theorems 1.1 and 1.2 two upper bounds for the subdominant eigenvalue of a stochastic matrix are obtained easily.

Proposition 3.4

Let A = [aij] ∈ ℝn×n be a stochastic matrix and λ ∈ σ(A)∖{1} be its subdominant eigenvalue. Then

$$|\lambda| \le 1 - \mathrm{trace}(A) + n\gamma(A) \quad \text{and} \quad |\lambda| \le \mathrm{trace}(A) + n\tilde{\gamma}(A) - 1;$$

consequently,

$$|\lambda| \le \min\{1 - \mathrm{trace}(A) + n\gamma(A),\ \mathrm{trace}(A) + n\tilde{\gamma}(A) - 1\}. \tag{8}$$

For the comparison of ρ[0,1] with the upper bound

$$\Lambda := \min\{1 - \mathrm{trace}(A) + n\gamma(A),\ \mathrm{trace}(A) + n\tilde{\gamma}(A) - 1\}$$

in (8), we remark that, since Λ comes from Theorems 1.1 and 1.2, an upper bound better than Λ can be obtained by taking suitable special values of αi.

4 Special choices of αi for the set ΓstoLα(A)

In this section, we determine the optimal values of αi for the set ΓstoLα(A) with respect to estimating the moduli of subdominant eigenvalues of a stochastic matrix, and thereby obtain a set which is tighter than Γstol(A) and ΓstoL(A) for this purpose.

For a given stochastic matrix A = [aij] ∈ ℝn×n, let

$$N_{+}(A) = \{i \in N : \Delta_i(A) \ge 0\}$$

and

$$N_{-}(A) = \{i \in N : \Delta_i(A) < 0\},$$

where $\Delta_i(A) = nL_i(A) + (n-2)l_i(A) - 2C_i(A)$. Obviously, $N = N_{+}(A) \cup N_{-}(A)$.

Proposition 4.1

Let A = [aij] ∈ ℝn×n be a stochastic matrix. If λ ∈ σ(A)∖{1}, then

$$|\lambda| \le \rho_{0,1}, \tag{9}$$

where

$$\rho_{0,1} = \max\Big\{\max_{i \in N_{+}(A)}\{a_{ii} - (n-2)l_i(A) + C_i(A)\},\ \max_{i \in N_{-}(A)}\{a_{ii} + nL_i(A) - C_i(A)\}\Big\}.$$

Proof

Note that

$$\begin{aligned} C_{L_i^{\alpha_i}}(A) + \big|\alpha_i L_i(A) + (1-\alpha_i)l_i(A) - a_{ii}\big| &= \sum_{j \neq i}\big|\alpha_i L_i(A) + (1-\alpha_i)l_i(A) - (\alpha_i a_{ji} + (1-\alpha_i)a_{ji})\big| + \big|\alpha_i L_i(A) + (1-\alpha_i)l_i(A) - a_{ii}\big| \\ &\le \alpha_i\sum_{j \neq i}|L_i(A) - a_{ji}| + (1-\alpha_i)\sum_{j \neq i}|l_i(A) - a_{ji}| + \alpha_i L_i(A) + (1-\alpha_i)l_i(A) + a_{ii} \\ &= (n-1)\alpha_i L_i(A) - (n-1)(1-\alpha_i)l_i(A) + (1-2\alpha_i)C_i(A) + \alpha_i L_i(A) + (1-\alpha_i)l_i(A) + a_{ii} \\ &= a_{ii} - (n-2)l_i(A) + C_i(A) + \alpha_i\big(nL_i(A) + (n-2)l_i(A) - 2C_i(A)\big) \\ &= a_{ii} - (n-2)l_i(A) + C_i(A) + \alpha_i\Delta_i(A). \end{aligned}$$

Hence, from Theorem 3.1, we have

$$\begin{aligned} |\lambda| &\le \max_{i \in N}\min_{\alpha_i \in [0,1]}\sum_{j=1}^{n}\big|\alpha_i L_i(A) + (1-\alpha_i)l_i(A) - a_{ji}\big| = \max_{i \in N}\min_{\alpha_i \in [0,1]}\Big\{C_{L_i^{\alpha_i}}(A) + \big|\alpha_i L_i(A) + (1-\alpha_i)l_i(A) - a_{ii}\big|\Big\} \\ &\le \max_{i \in N}\min_{\alpha_i \in [0,1]}\big\{a_{ii} - (n-2)l_i(A) + C_i(A) + \alpha_i\Delta_i(A)\big\} \\ &= \max\Big\{\max_{i \in N_{+}(A)}\min_{\alpha_i \in [0,1]}\big\{a_{ii} - (n-2)l_i(A) + C_i(A) + \alpha_i\Delta_i(A)\big\},\ \max_{i \in N_{-}(A)}\min_{\alpha_i \in [0,1]}\big\{a_{ii} - (n-2)l_i(A) + C_i(A) + \alpha_i\Delta_i(A)\big\}\Big\}. \end{aligned} \tag{10}$$

Furthermore, let

$$f(\alpha) = a_{ii} - (n-2)l_i(A) + C_i(A) + \alpha\Delta_i(A), \quad \alpha \in [0,1].$$

When $\Delta_i(A) \ge 0$, $f(\alpha)$ attains its minimum $a_{ii} - (n-2)l_i(A) + C_i(A)$ at $\alpha = 0$, and when $\Delta_i(A) < 0$, $f(\alpha)$ attains its minimum

$$a_{ii} - (n-2)l_i(A) + C_i(A) + \Delta_i(A) = a_{ii} + nL_i(A) - C_i(A)$$

at $\alpha = 1$. Therefore, Inequality (10) is equivalent to

$$|\lambda| \le \max\Big\{\max_{i \in N_{+}(A)}\{a_{ii} - (n-2)l_i(A) + C_i(A)\},\ \max_{i \in N_{-}(A)}\{a_{ii} + nL_i(A) - C_i(A)\}\Big\}.$$

The conclusion follows.  □
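A sketch (ours) of ρ0,1 from Proposition 4.1, splitting N into N+(A) and N−(A) by the sign of Δi(A):

    % Our sketch: rho_{0,1} of Proposition 4.1.
    n = size(A, 1); rho01 = -inf;
    for i = 1:n
        off = A([1:i-1, i+1:n], i);
        li = min(off); Li = max(off); Ci = sum(off);
        Delta = n*Li + (n - 2)*li - 2*Ci;
        if Delta >= 0                    % i in N+(A): alpha_i = 0 is optimal
            rho01 = max(rho01, A(i,i) - (n - 2)*li + Ci);
        else                             % i in N-(A): alpha_i = 1 is optimal
            rho01 = max(rho01, A(i,i) + n*Li - Ci);
        end
    end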

By the proof of Proposition 4.1, it is not difficult to see that the upper bound ρ0,1 is not smaller than ρ[0,1] in Theorem 3.1, but ρ0,1 depends only on the entries of the stochastic matrix. Moreover, ρ0,1 ≤ ρL and ρ0,1 ≤ ρl, as shown next.

Proposition 4.2

Let A = [aij] ∈ ℝn×n be a stochastic matrix. Then

$$\rho_{0,1} \le \min\{\rho_l, \rho_L\},$$

where ρl, ρL and ρ0, 1 are defined in Theorem 3.2 and Proposition 4.1, respectively.

Proof

By the proof of Proposition 4.1, ρ0,1 equals the last expression in Inequality (10), that is,

$$\rho_{0,1} = \max\Big\{\max_{i \in N_{+}(A)}\min_{\alpha_i \in [0,1]}\big\{a_{ii} - (n-2)l_i(A) + C_i(A) + \alpha_i\Delta_i(A)\big\},\ \max_{i \in N_{-}(A)}\min_{\alpha_i \in [0,1]}\big\{a_{ii} - (n-2)l_i(A) + C_i(A) + \alpha_i\Delta_i(A)\big\}\Big\}.$$

Again let

$$f(\alpha) = a_{ii} - (n-2)l_i(A) + C_i(A) + \alpha\Delta_i(A), \quad \alpha \in [0,1].$$

When $\Delta_i(A) \ge 0$, $f(\alpha)$ is monotonically increasing in $\alpha$, and when $\Delta_i(A) < 0$, $f(\alpha)$ is monotonically decreasing in $\alpha$.

For the case that $L_i(A) = \max_{j \neq i} a_{ji} > l_i(A) = \min_{j \neq i} a_{ji}$ for all $i \in N$, we prove $\rho_{0,1} \le \rho_L$. Note that

$$f(0) = a_{ii} - (n-2)l_i(A) + C_i(A) \quad \text{and} \quad f(1) = a_{ii} + nL_i(A) - C_i(A).$$

Since $f(\alpha)$ is increasing when $\Delta_i(A) \ge 0$, we have

$$\max_{i \in N_{+}(A)}\{a_{ii} - (n-2)l_i(A) + C_i(A)\} = \max_{i \in N_{+}(A)}\min_{\alpha_i \in [0,1]}\big\{a_{ii} - (n-2)l_i(A) + C_i(A) + \alpha_i\Delta_i(A)\big\} \le \max_{i \in N_{+}(A)}\big\{a_{ii} - (n-2)l_i(A) + C_i(A) + \Delta_i(A)\big\} = \max_{i \in N_{+}(A)}\{a_{ii} + nL_i(A) - C_i(A)\},$$

which implies

$$\rho_{0,1} \le \max_{i \in N}\{a_{ii} + nL_i(A) - C_i(A)\} = \rho_L.$$

Similarly to the proof of $\rho_{0,1} \le \rho_L$, we easily obtain $\rho_{0,1} \le \rho_l$.

For the case that $L_i(A) = l_i(A)$ for some $i \in N$, we have $\Delta_i(A) = 0$ and

$$a_{ii} + nL_i(A) - C_i(A) = a_{ii} + (n-2)V_i(A) + 2L_i(A) - C_i(A) = \sum_{j=1}^{n}|a_{ji} - q_i(A)| = a_{ii} - (n-4)v_i(A) - 2l_i(A) + C_i(A) = a_{ii} - (n-2)l_i(A) + C_i(A).$$

Arguing as in the case $L_i(A) > l_i(A)$, $i \in N$, we again obtain

$$\rho_{0,1} \le \rho_L \quad \text{and} \quad \rho_{0,1} \le \rho_l.$$

The conclusion follows.  □

By Propositions 4.1 and 4.2, we know that the optimal values of αi, i ∈ N, for the bound

$$\max_{i \in N}\min_{\alpha_i \in [0,1]}\big\{a_{ii} - (n-2)l_i(A) + C_i(A) + \alpha_i\Delta_i(A)\big\},$$

which is obtained by using the set ΓstoLα(A) in Theorem 2.2, are αi = 0 for i ∈ N+(A) and αi = 1 for i ∈ N−(A), so that

$$\rho_{0,1} = \max\Big\{\max_{i \in N_{+}(A)}\{a_{ii} - (n-2)l_i(A) + C_i(A)\},\ \max_{i \in N_{-}(A)}\{a_{ii} + nL_i(A) - C_i(A)\}\Big\}$$

is less than or equal to the bounds ρl and ρL obtained from the sets in Theorems 1.3 and 1.4, respectively. This provides a choice of αi, i ∈ N, for the set ΓstoLα(A) to localize all eigenvalues different from 1 of a stochastic matrix.

For a stochastic matrix A = [aij] ∈ ℝn×n and

$$d = L^{\alpha}(A) = [L_1^{\alpha_1}(A), L_2^{\alpha_2}(A), \ldots, L_n^{\alpha_n}(A)]^T$$

defined as in (1), we take αi = 0 for i ∈ N+(A) and αi = 1 for i ∈ N−(A), that is,

$$L_i^{\alpha_i}(A) = \begin{cases} L_i^{0}(A) = l_i(A), & i \in N_{+}(A), \\ L_i^{1}(A) = L_i(A), & i \in N_{-}(A). \end{cases}$$

For this choice, the set ΓstoLα(A) reduces to

$$\Gamma^{sto_{L^{0,1}}}(A) := \Big(\bigcup_{i \in N_{+}(A)}\Gamma_i^{sto_{L^{0}}}(A)\Big)\cup\Big(\bigcup_{i \in N_{-}(A)}\Gamma_i^{sto_{L^{1}}}(A)\Big),$$

where $\Gamma_i^{sto_{L^{0}}}(A) = \Gamma_i^{sto_l}(A)$ and $\Gamma_i^{sto_{L^{1}}}(A) = \Gamma_i^{sto_L}(A)$. Hence, we have the following result.

Theorem 4.3

Let A = [aij] ∈ ℝn×n be a stochastic matrix. If λ ∈ σ(A)∖{1}, then

$$\lambda \in \Gamma^{sto_{L^{0,1}}}(A) = \Big(\bigcup_{i \in N_{+}(A)}\Gamma_i^{sto_l}(A)\Big)\cup\Big(\bigcup_{i \in N_{-}(A)}\Gamma_i^{sto_L}(A)\Big). \tag{11}$$

Example 4.4

Consider the stochastic matrix

$$A = \begin{bmatrix} 0.2656 & 0.0471 & 0.1452 & 0.0758 & 0.2199 & 0.2463 \\ 0.2634 & 0.3368 & 0.0475 & 0.1143 & 0.1354 & 0.1026 \\ 0.0591 & 0.2002 & 0.1831 & 0.1916 & 0.1814 & 0.1846 \\ 0.2699 & 0.2753 & 0.1655 & 0.1941 & 0.0788 & 0.0165 \\ 0.1443 & 0.0598 & 0.1205 & 0.2582 & 0.2839 & 0.1332 \\ 0.2355 & 0.1027 & 0.1399 & 0.2358 & 0.2111 & 0.0750 \end{bmatrix}.$$

By computations, we have N+(A) = {2, 4, 6}, N−(A) = {1, 3, 5}, and

$$\Gamma^{sto_{L^{0,1}}}(A) = \Gamma_2^{sto_l}(A)\cup\Gamma_4^{sto_l}(A)\cup\Gamma_6^{sto_l}(A)\cup\Gamma_1^{sto_L}(A)\cup\Gamma_3^{sto_L}(A)\cup\Gamma_5^{sto_L}(A).$$

By drawing the sets Γstol(A), ΓstoL(A) and ΓstoL0,1(A) in the complex plane (see Figure 3), it is not difficult to see that λ ∈ ΓstoL0,1(A) for any λ ∈ σ(A)∖{1}, and that although ΓstoL0,1(A) ⊈ Γstol(A) and Γstol(A) ⊈ ΓstoL0,1(A), the set ΓstoL0,1(A) is better than Γstol(A) and ΓstoL(A) for estimating the moduli of subdominant eigenvalues.

Fig. 3: Γstol(A), ΓstoL(A) and ΓstoL0,1(A)
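The computations in Example 4.4 can be reproduced directly; a sketch of ours, where the expected index sets are those reported above:

    % Our sketch verifying Example 4.4: Delta_i(A) splits N into
    % N+(A) = {2,4,6} and N-(A) = {1,3,5}.
    A = [0.2656 0.0471 0.1452 0.0758 0.2199 0.2463;
         0.2634 0.3368 0.0475 0.1143 0.1354 0.1026;
         0.0591 0.2002 0.1831 0.1916 0.1814 0.1846;
         0.2699 0.2753 0.1655 0.1941 0.0788 0.0165;
         0.1443 0.0598 0.1205 0.2582 0.2839 0.1332;
         0.2355 0.1027 0.1399 0.2358 0.2111 0.0750];
    n = 6; Delta = zeros(n, 1);
    for i = 1:n
        off = A([1:i-1, i+1:n], i);
        Delta(i) = n*max(off) + (n - 2)*min(off) - 2*sum(off);
    end
    Nplus  = find(Delta >= 0)'   % expected: [2 4 6]
    Nminus = find(Delta <  0)'   % expected: [1 3 5]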

5 Conclusions

In this paper, a set with n parameters in [0, 1] is given to localize all eigenvalues different from 1 of a stochastic matrix A, that is,

$$\sigma(A)\setminus\{1\} \subseteq \Gamma^{sto_{L^{\alpha}}}(A), \quad \text{for any } \alpha_i \in [0,1],\ i \in N.$$

In particular, when αi = 0 for each i ∈ N, ΓstoLα(A) reduces to the set Γstol(A), which consists of the n sets Γistol(A), and when αi = 1 for each i ∈ N, ΓstoLα(A) reduces to the set ΓstoL(A), which consists of the n sets ΓistoL(A). The sets Γstol(A) and ΓstoL(A) are used to estimate the moduli of subdominant eigenvalues, that is, for any λ ∈ σ(A)∖{1},

$$|\lambda| \le \rho_l = \max_{i \in N}\{a_{ii} - (n-2)l_i(A) + C_i(A)\}$$

and

$$|\lambda| \le \rho_L = \max_{i \in N}\{a_{ii} + nL_i(A) - C_i(A)\}.$$

Moreover, by taking αi = 0 for i ∈ N+(A) and αi = 1 for i ∈ N−(A), we give a set ΓstoL0,1(A), which consists of |N+(A)| sets Γistol(A) and |N−(A)| sets ΓistoL(A), where |N+(A)| + |N−(A)| = n. By using ΓstoL0,1(A), we get an upper bound for the moduli of subdominant eigenvalues which is better than ρl and ρL, i.e., for any λ ∈ σ(A)∖{1},

$$|\lambda| \le \rho_{0,1} \le \min\{\rho_l, \rho_L\},$$

where

$$\rho_{0,1} = \max\Big\{\max_{i \in N_{+}(A)}\{a_{ii} - (n-2)l_i(A) + C_i(A)\},\ \max_{i \in N_{-}(A)}\{a_{ii} + nL_i(A) - C_i(A)\}\Big\}.$$


Acknowledgement

The authors are grateful to the referees for their useful and constructive suggestions. This work is supported by National Natural Science Foundations of China (11601473) and CAS “Light of West China” Program.

References

[1] Peña J.M., Shape Preserving Representations in Computer-Aided Geometric Design, Nova Science Publishers, Hauppauge, NY, 1999

[2] Clayton A., Quasi-birth-and-death processes and matrix-valued orthogonal polynomials, SIAM J. Matrix Anal. Appl., 2010, 31, 2239-2260

[3] Karlin S., McGregor J., A characterization of birth and death processes, Proc. Natl. Acad. Sci. U.S.A., 1959, 45, 375-379

[4] Mitrofanov A.Yu., Stochastic Markov models of the formation and disintegration of binary complexes, Mat. Model., 2001, 13, 101-109

[5] Mitrofanov A.Yu., Sensitivity and convergence of uniformly ergodic Markov chains, J. Appl. Probab., 2005, 42, 1003-1014

[6] Seneta E., Non-negative Matrices and Markov Chains, Springer-Verlag, New York, 1981

[7] Berman A., Plemmons R.J., Nonnegative Matrices in the Mathematical Sciences, Classics in Applied Mathematics, SIAM, Philadelphia, 1994

[8] Cvetković L.J., Kostić V., Peña J.M., Eigenvalue localization refinements for matrices related to positivity, SIAM J. Matrix Anal. Appl., 2011, 32, 771-784

[9] Kirkland S., A cycle-based bound for subdominant eigenvalues of stochastic matrices, Linear and Multilinear Algebra, 2009, 57, 247-266

[10] Kirkland S., Subdominant eigenvalues for stochastic matrices with given column sums, Electron. J. Linear Algebra, 2009, 18, 784-800

[11] Li C.Q., Liu Q.B., Li Y.T., Geršgorin-type and Brauer-type eigenvalue localization sets of stochastic matrices, Linear and Multilinear Algebra, 2015, 63(11), 2159-2170

[12] Shen S.Q., Yu J., Huang T.Z., Some classes of nonsingular matrices with applications to localize the real eigenvalues of real matrices, Linear Algebra Appl., 2014, 447, 74-87

[13] Wang Y.J., Liu W.Q., Caccetta L., Zhou G.L., Parameter selection for nonnegative matrix/tensor sparse decomposition, Oper. Res. Lett., 2015, 43, 423-426

[14] Zhou G., Wang G., Qi L.Q., Alqahtani M., A fast algorithm for the spectral radii of weakly reducible nonnegative tensors, Numer. Linear Algebra Appl., 2018

[15] Li C.Q., Li Y.T., A modification of eigenvalue localization for stochastic matrices, Linear Algebra Appl., 2014, 460, 231-241

[16] Varga R.S., Geršgorin and His Circles, Springer-Verlag, Berlin, 2004

About the article

Received: 2017-09-29

Accepted: 2018-02-13

Published Online: 2018-04-02


Citation Information: Open Mathematics, Volume 16, Issue 1, Pages 298–310, ISSN (Online) 2391-5455, DOI: https://doi.org/10.1515/math-2018-0030.


© 2018 Wang et al., published by De Gruyter. This work is licensed under the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 License. BY-NC-ND 4.0
