Inclusion regions and bounds for the eigenvalues of matrices with a known eigenpair

Let (λ, v) be a known real eigenpair of a square real matrix A. In this paper it is shown how to locate the other eigenvalues of A in terms of the components of v. The obtained region is a union of Gershgorin discs of the second type, recently introduced by the authors in a previous paper. Two cases are considered, depending on whether or not some of the components of v are equal to zero. Upper bounds are obtained, in two different ways, for the largest eigenvalue in absolute value of A other than λ. Detailed examples are provided. Although nonnegative irreducible matrices are somewhat emphasized, the main results of this paper are valid for any square real matrix.


Introduction
In our recent work [7] and [8], we showed how to locate the eigenvalues of any real constant row-sum matrix within a region that is smaller than or, in the worst case, equal to the traditional Gershgorin region. What made such results possible is the fact that a constant row-sum matrix has a trivial eigenvalue equal to its constant row-sum, associated with a known eigenvector, the all 1's vector. If a real matrix A is nonnegative and irreducible, its Perron eigenvector v_p is known to have all positive components. A goal of this work is to show how to use this information to locate the non-Perron eigenvalues of A in terms of the components of v_p. The idea of using a known eigenpair to locate the spectrum of a given matrix was explored recently in the interesting paper [13] by A. Melman. Let (λ, v) be a known real eigenpair of an n × n real matrix A. In this paper it is shown how to locate the other eigenvalues of A in terms of the components of v. The obtained region is a union of Gershgorin discs of the second type, recently introduced by the authors in a previous paper. Two cases are considered, depending on whether or not some of the components of v are equal to zero. Upper bounds are obtained in different ways for the largest eigenvalue in absolute value of A other than λ. Detailed examples are provided. In Sections 2, 3 and 4, comparisons are made with the original Gershgorin region. In Sections 5 and 6, comparisons are made with the location sets and bounds obtained in [13]. Although nonnegative irreducible matrices are somewhat emphasized, the main results of this paper are valid for any n × n real matrix with n ≥ 3.

First case: v has no zero components
If A is a real constant row-sum or a real constant column-sum matrix, then a way to obtain an inclusion region for its eigenvalues is described in [7]. The obtained region is a union of the Gershgorin discs of the second type, which were defined for the first time in [6]. Further results on how to improve the location of the eigenvalues of real constant row-sum and constant column-sum matrices are obtained in [8]. We assume in this section that A is neither constant row-sum nor constant column-sum; however, our results apply to these matrices as well. If A is nonnegative and irreducible, we know that it has a positive eigenvalue λ_p equal to its spectral radius [5, Theorem 8.4.4]. This eigenvalue, called the Perron eigenvalue of A, is simple and is associated with an eigenvector v_p, called the Perron eigenvector of A, all of whose components are strictly positive. We show how to use v_p to obtain, for the non-Perron eigenvalues of A, an inclusion region that is a union of discs different from those given by the original Gershgorin theorem. To do this, we use the following theorem, which allows us to convert A to a matrix B that is constant row-sum and at the same time similar to A; the case where A is nonnegative irreducible can be found in [2].

Theorem 2.1. Let A be an n × n real matrix. Let (λ, v) be a real eigenpair of A with v = (v_1, v_2, . . . , v_n)^T. Suppose that v_i ≠ 0 for i = 1, 2, . . . , n. Let S = diag(v_1, v_2, . . . , v_n) and let B = [b_ij] be the matrix similar to A given by B = S^{-1} A S. Then B is a constant row-sum matrix with Be = λe, where e is the all 1's vector in R^n.
Proof. Straightforward by calculation of Be.
Remark 2.2. The elements of B are given by b_ij = (v_j / v_i) a_ij. Note that the elements of B remain unchanged if we scale the vector v by any positive number α, since (α v_j)/(α v_i) = v_j / v_i. Note also that the expression Be = λe means that the row sum of every row of B is equal to λ. As in our previous work, the eigenvalue λ is called the trivial eigenvalue of B, while any eigenvalue of B different from λ is called a non-trivial eigenvalue of B.
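The conversion in Theorem 2.1 is cheap to carry out numerically. Here is a minimal sketch; the 2 × 2 matrix and eigenpair below are illustrative, not taken from the paper.

```python
import numpy as np

# Sketch of Theorem 2.1: if Av = lam*v with all v_i nonzero and S = diag(v),
# then B = S^{-1} A S satisfies Be = lam*e, i.e. B is a constant row-sum
# matrix whose common row sum is lam.
def to_constant_row_sum(A, v):
    """Return B = S^{-1} A S for S = diag(v); elementwise, b_ij = (v_j/v_i) a_ij."""
    v = np.asarray(v, dtype=float)
    if np.any(v == 0):
        raise ValueError("v must have no zero components")
    return A * (v[None, :] / v[:, None])

# Illustrative eigenpair: A @ v = (10, 5)^T = 5 * v.
A = np.array([[4.0, 2.0], [1.0, 3.0]])
v = np.array([2.0, 1.0])
B = to_constant_row_sum(A, v)
print(B.sum(axis=1))    # each row sums to the eigenvalue 5
```

Note that, in agreement with Remark 2.2, scaling v leaves B unchanged.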
Since B is a constant row-sum matrix, we can locate its non-trivial eigenvalues by applying the following theorem, which can be found in our paper [7, Theorem 2.2].

Theorem 2.3. Let B be an n × n real constant row-sum matrix. All the non-trivial eigenvalues of B are located within the union of the Gershgorin discs of the second type of B^T.

This type of Gershgorin disc was introduced in [6].
Definition 2.4. A Gershgorin disc of the second type of A, denoted D̂_i(a_ii, r̂_i), satisfies the following conditions: 1. Its center a_ii is the diagonal element from the i-th row of A. 2. Its radius r̂_i is an algebraic sum of the off-diagonal elements of the corresponding row (or column), roughly the difference between the sum of the largest half and the sum of the smallest half of these elements; the exact expression, which distinguishes the cases n even and n odd, is given in [6].
The Gershgorin region of the second type of A is the union of all its Gershgorin discs of the second type.
Going back to Theorem 2.1, the matrix B is obtained from the real matrix A = [a_ij] according to (1) and has the same spectrum as A. Therefore, the combination of Theorems 2.1 and 2.3 gives an inclusion region for all the eigenvalues of A other than λ. We state this result as a theorem.
Theorem 2.5. Let A be a real matrix and suppose that Av = λv for some real number λ and real eigenvector v = (v_1, v_2, . . . , v_n)^T with no zero component. Let S = diag(v) and B = S^{-1} A S. If λ_2 is an eigenvalue of A different from λ, then λ_2 lies in the Gershgorin region of the second type of B^T.
This is important, as it means that the components of v are not needed to find the centers of the discs, which are simply the diagonal elements of A. The radii depend on the entries of v, but they are easy to calculate, as each of them is a simple algebraic sum of elements from the corresponding column of B (according to Definition 2.4).
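The radius computation just described can be sketched as follows. Since the exact expression of Definition 2.4 is given in [6], this sketch follows only the verbal description above (largest-half sum minus smallest-half sum of a column's off-diagonal entries); the precise split used for the two parity cases is an assumption here.

```python
import numpy as np

# Hedged sketch of a second-type radius: an algebraic sum of the off-diagonal
# entries of column j of B, taken as (sum of the largest half) minus
# (sum of the smallest half).  The size ceil((n-1)/2) chosen for the "large"
# half is an assumption of this sketch; the exact split is as in [6].
def second_type_radius(B, j):
    off = np.sort(np.delete(B[:, j], j))   # off-diagonal entries of column j
    k = (len(off) + 1) // 2                # assumed size of the large half
    return off[-k:].sum() - off[:len(off) - k].sum()

# Illustrative matrix (not from the paper).
B = np.array([[5., 1., 0., 2.],
              [3., 6., 1., 0.],
              [-1., 2., 7., 1.],
              [2., 0., 4., 8.]])
print(second_type_radius(B, 0))   # off-diagonals (3, -1, 2): (2 + 3) - (-1) = 6
```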
The Perron eigenvector of every nonnegative irreducible matrix is positive. Therefore we have the following corollary.
Corollary 2.7. Let A be an n × n nonnegative irreducible matrix and let v_p be its Perron eigenvector. Then every non-Perron eigenvalue of A is in the Gershgorin region of the second type of the matrix B^T, where B = S^{-1} A S and S = diag(v_p).

Example 2.8. The Perron eigenvalue of the 6 × 6 nonnegative irreducible matrix A of this example is λ_p = 24 and its Perron eigenvector is v_p = (2, 1, 1, 1, 1, 1)^T.
It can be easily verified that the union of the above discs forms an inclusion set for the non-Perron eigenvalues of A. It is also smaller than the Gershgorin region of A. The difference in size between these regions is illustrated in the figure below.
Figure 1: The region given by Corollary 2.7 is colored in blue. In the left-hand graph it is compared to the Gershgorin region of A^T, and in the right-hand graph it is compared to the Gershgorin region of A. The eigenvalues of A are represented by the small dark points. Observe that all the non-Perron eigenvalues of A are inside the blue region, in accordance with Corollary 2.7. Contrary to the Gershgorin theorem, Theorem 2.5 and Corollary 2.7 impose no constraints on the location of the Perron eigenvalue λ_p = 24. This explains why λ_p is outside the blue region for this particular matrix A.
Remark 2.9. Although nonnegative irreducible matrices are emphasized, the results in this section are valid for any n × n real matrix having an eigenvector with no zero entry. This is clearly understood from the statement of Theorem 2.5.

Example 2.10. Consider the singular matrix
with eigenvector v = (1, 1, 1, 2, 1, −1)^T corresponding to the eigenvalue λ = 0. The other eigenvalues of A are approximately −13.32, −3.71 ± 4.39i and 3.37 ± 2.12i. According to Theorem 2.5, all the nonzero eigenvalues of A are in the union of the Gershgorin discs of the second type of B^T, where B is the matrix given by the theorem. Observe that the Gershgorin region of the second type of B^T is a subset of the Gershgorin region of A^T, and all the eigenvalues of A other than λ = 0 belong to it. This is in accordance with Theorem 2.5. The eigenvalue λ = 0 also lies within this region for this particular matrix A, but this is not a consequence of Theorem 2.5.
This generalizes to any singular real matrix having an eigenvector with no zero entries corresponding to the eigenvalue 0.
In the previous two examples, the Gershgorin region of the second type of B^T is a subset of the Gershgorin region of A^T. Although this is what we expect for most matrices, it is not always the case, as shown by the following example. The 3 × 3 matrix of this example has an eigenvector v = (2, 1, 1)^T, which we use to construct the matrix B according to Corollary 2.7. It is easy to check that the Gershgorin region of the second type of B^T is larger than the Gershgorin region of A^T.

Figure 3: The Gershgorin region of the second type of B^T is the union of the two blue discs; the Gershgorin region of A^T is the union of the two gray discs. From the figure it is clear that the Gershgorin region of the second type of B^T is larger than the Gershgorin region of A^T. Since neither region is a subset of the other, their intersection provides a better inclusion set for the eigenvalues of A, namely 0, 5 and 10, which are represented by the small dark points.
In general, we expect the Gershgorin region of the second type of the matrix B^T obtained by applying Corollary 2.7 to a large dense nonnegative irreducible n × n matrix A to be smaller than, and contained within, the Gershgorin region of A^T, for the following reason: the radius of a Gershgorin disc is obtained by summing the absolute values of all off-diagonal elements in a given row or column, while the radius of a Gershgorin disc of the second type is obtained by taking the difference between the sum of the largest ≈ n/2 and the sum of the smallest ≈ n/2 off-diagonal elements in a given row or column (see Definition 2.4). In particular, if the eigenvector used in generating the matrix B is relatively flat (the components of v are close to each other), then the chance is even bigger for the inclusion set given by Corollary 2.7 to be significantly smaller than, and contained within, the Gershgorin region of A^T. Another reason we are interested in the inclusion regions given by Theorem 2.5 and Corollary 2.7 is that they can be improved further by applying some ideas from [8]. This is what we shall discuss in the next section.

Further refinement of the inclusion regions
Let A be an n × n real matrix with an eigenvector v having no zero component. The inclusion region given by Theorem 2.5 can be improved further by applying Theorem 3.10 and Theorem 3.15 from [8]. The first of these applies to constant row-sum matrices of even size and leads to the following theorem, whose statement incorporates a small official journal correction to the original version.
Theorem 3.1. Let n be an even integer and let B = [b_ij] be an n × n real constant row-sum matrix. Let λ_1 be its constant row-sum and let e be the all 1's vector with n components. Let L_j be the j-th column of B for j = 1, . . . , n, and construct the matrix F by adding to each column L_j a suitable multiple of e, as specified in [8, Theorem 3.10]. Then the Gershgorin region of the second type of F^T is a subset of that of B^T and contains every eigenvalue of B different from λ_1.
Proof. See [8, Theorem 3.10] and its proof.

Now we apply Theorem 3.1 to B to obtain the matrix F. Since A and B are similar to each other, it follows by Theorem 3.1 that the non-Perron eigenvalues of A are contained in the Gershgorin region of the second type of F^T, which is made up of the corresponding discs. In the case of matrices of odd size, the region given by Theorem 2.5 can be enhanced by using the following theorem and corollary, which are adapted versions of [8, Theorem 3.15 and Corollary 3.16].
Theorem 3.3. Let n be an odd integer with n ≥ 3, let B = [b_ij] be an n × n real constant row-sum matrix and let λ_1 be its constant row-sum. Let β_j and γ_j be, respectively, the opposite in sign of the ((n−1)/2)-th and the ((n+1)/2)-th largest entries of the j-th column L_j of B, and let F and G be the matrices obtained from B by adding β_j e and γ_j e, respectively, to L_j. Let S be the union over j of the intersections D̂_{F,j} ∩ D̂_{G,j}, where D̂_{F,j} and D̂_{G,j} are, respectively, the Gershgorin discs of the second type obtained from the j-th columns of F and G. Then the region S is contained within the Gershgorin region of the second type of B^T, and if λ is an eigenvalue of B different from λ_1, then λ ∈ S.
Proof. See [8, Theorem 3.15] and its proof.

Corollary 3.4. With the notation of Theorem 3.3, every eigenvalue of B different from λ_1 lies in the intersection of the Gershgorin regions of the second type of F^T and G^T.

Proof. The region S given by (3) is contained in this intersection, so the claim follows from Theorem 3.3.

The region given by Corollary 3.4 is larger than or equal to the one given by Theorem 3.3. However, it is considered for its relatively simple form: its graph can be obtained by graphing the entire regions of F^T and G^T and then taking their intersection.
The matrix B is similar to A and therefore has the same spectrum, with λ_p being equal to its constant row-sum. According to Corollary 2.7, all the non-Perron eigenvalues of A are in the Gershgorin region of the second type of B^T. This region can be refined by applying Theorem 3.3 to B to obtain the matrices F and G. According to Theorem 3.3, the non-Perron eigenvalues of A are also contained in the region S given by (3), which reduces, for this particular example, to the disc D̂_{F,1}(2, 15) with center 2 and radius 15.

Second case: v has zero components

Let (λ, v) be a known real eigenpair of the n × n real matrix A, where v has exactly k components equal to 0 for some integer k ≥ 1. There is an n × n permutation matrix P such that the first k components of Pv are zero and the last (n − k) components of Pv are nonzero. Let S be the matrix given by (5), where M is k × (n − k) with first column all 1's and all other entries equal to 0. Then A is similar to the matrix C = SPAP^T S^{−1}, which has an eigenvector w = SPv with no zero entries. Note that the last (n − k) components of v and w are the same. Moreover, S is nonsingular and no computation is needed to find S^{−1}, as it is given by (6). We arrive at the following result, which can be used together with the theorems stated in the preceding sections to locate the remaining eigenvalues of A.
Theorem 4.1. Let (λ, v) be a real eigenpair of the n × n real matrix A, where v has at least one zero component, and let the matrices P and S be as in (4) and (5). Then the matrix C = SPAP^T S^{−1} is similar to A and has eigenpair (λ, SPv), where every component of the vector SPv is real and nonzero.

Example 4.2. Let
which is a constant row-sum matrix, since Ce = λe = 0. To improve the location of the eigenvalues, we apply Theorem 3.1 to C to obtain the matrix F. Note that Theorem 4.1, together with Theorem 2.1, actually holds for complex matrices, so that every n × n complex matrix is similar to some constant row-sum matrix. In fact, there are other ways to obtain a constant row-sum matrix that is similar to a given n × n complex matrix A. If v is an eigenvector of A associated with some eigenvalue λ, then we can find infinitely many nonsingular matrices S that satisfy the equation Sv = e. It follows that C = SAS^{−1} is a constant row-sum matrix, since Ce = C(Sv) = λ(Sv) = λe. Some of the matrices that satisfy Sv = e can be easily found. However, in general, the inverse of S may be costly to compute, which makes Theorem 4.1 interesting, since it uses matrices P and S whose inverses are, respectively, P^T and the simply structured matrix S^{−1} given by (6).
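Since the displays (4)-(6) are not reproduced above, the construction can only be sketched under one plausible reading consistent with the stated properties: P moves the k zero components of v to the front, and S is block upper-triangular with identity diagonal blocks and the k × (n − k) block M (first column all 1's, the rest zero), so that S^{−1} has the same shape with M replaced by −M and requires no computation. Everything below rests on that assumed reading.

```python
import numpy as np

# Hedged sketch of the Theorem-4.1 construction under the assumed block
# structure of S described in the lead-in.
def deflate_zero_components(A, v):
    v = np.asarray(v, dtype=float)
    n = len(v)
    order = np.argsort(v != 0, kind="stable")    # zero components first
    P = np.eye(n)[order]
    k = int(np.sum(v == 0))
    M = np.zeros((k, n - k)); M[:, 0] = 1.0      # first column all 1's
    S = np.block([[np.eye(k), M], [np.zeros((n - k, k)), np.eye(n - k)]])
    Sinv = np.block([[np.eye(k), -M], [np.zeros((n - k, k)), np.eye(n - k)]])
    C = S @ P @ A @ P.T @ Sinv                   # similar to A
    w = S @ P @ v                                # eigenvector of C for lam
    return C, w

# Illustrative matrix with A @ v = 0 for v = (0, 0, 1, 1)^T (column 4 = -column 3).
A = np.array([[1., 2, 3, -3], [4, 5, 6, -6], [7, 8, 9, -9], [1, 1, 2, -2]])
v = np.array([0., 0., 1., 1.])
C, w = deflate_zero_components(A, v)
assert np.all(w != 0)                            # no zero components, as claimed
assert np.allclose(C @ w, 0.0 * w)               # (0, w) is an eigenpair of C
```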

Comparison with other types of inclusion sets
As mentioned in the introduction, the idea of using a known eigenpair to locate the spectrum of a given matrix was explored recently in [13] by A. Melman. A main result in [13] was achieved by combining Gershgorin's and Brauer's theorems, [3] and [1]. To allow a transparent comparison between the location sets obtained in our work and those obtained in [13], here is a brief summary of the ideas behind the location sets of [13].
If the spectrum of the n × n real matrix A (counting algebraic multiplicities) is {λ_1, λ_2, . . . , λ_n} and v = (v_1, v_2, . . . , v_n)^T is an eigenvector of A associated with λ_1, then by Brauer's theorem the matrix A − vz^T has spectrum {λ_1 − z^T v, λ_2, . . . , λ_n} for every z ∈ C^n. A main idea in [13] is to find a specific vector x that allows for the best Gershgorin region among all matrices in {A^T − zv^T : z ∈ C^n}. Fortunately, this is algebraically possible if the eigenpair (λ_1, v) is real, and there is an optimization theorem that allows for finding x, which is, by the way, a real vector in this case. (For more details, see [13].) We remind the reader that the eigenvalues of every n × n complex matrix M = [m_ij] lie in its Ostrowski-Brauer set. It is difficult to have a complete idea of how the location sets given by Theorems 2.5, 3.1 and 3.3 compare in size with those obtained in [13] for every matrix. Nevertheless, we do the comparison for two matrices used in [13].
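The Brauer deflation just described is easy to check numerically. The 2 × 2 matrix, eigenpair and vector z below are illustrative, not taken from [13].

```python
import numpy as np

# Brauer's theorem, the engine behind the approach of [13]: if A @ v = lam1*v,
# then A - v z^T has spectrum {lam1 - z^T v, lam2, ..., lamn} for every z.
A = np.array([[4., 2.],
              [1., 3.]])                 # spectrum {5, 2}, with A @ v = 5 v
v = np.array([2., 1.])
z = np.array([1., 0.])                   # z^T v = 2
B = A - np.outer(v, z)                   # [[2, 2], [0, 3]]
print(np.sort(np.linalg.eigvals(B).real))  # lam1 = 5 moved to 5 - 2 = 3; 2 stayed
```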
Example 5.1. We consider the 3 × 3 matrices A_1 and A_2 used in [13], and compare the resulting regions with those shown in [13, Figure 3]. Note that these conclusions hold for these particular matrices and cannot be considered general facts.
For the comparison related to Ostrowski-Brauer sets, we take into consideration that each of the matrices F and G from Theorems 3.1 and 3.3 has its own Ostrowski-Brauer set, and their intersection can be compared to the Ostrowski-Brauer sets obtained in [13]. However, we first need to show that λ_2, λ_3, . . . , λ_n are eigenvalues of F as well as of G.
Theorem 5.2. Let A be an n × n real matrix with spectrum {λ_1, λ_2, . . . , λ_n} (counting algebraic multiplicities). Suppose that λ_1 is real and associated with a real eigenvector v which has no zero components. Let B = S^{−1}AS, where S = diag(v). Let F and G be the matrices obtained from B in Theorems 3.1 and 3.3. Then, for i = 2, 3, . . . , n, we have λ_i ∈ Γ(F) ∩ Γ(F^T) if n is even, and λ_i ∈ Γ(F) ∩ Γ(F^T) ∩ Γ(G) ∩ Γ(G^T) if n is odd.

Proof. In the case where n is even, observe that F is obtained in Theorem 3.1 by subtracting from every column of B a real multiple of the vector e. That is, F = B − eα^T, where α = (α_1, α_2, . . . , α_n)^T ∈ R^n. It follows by Brauer's theorem that λ_2, λ_3, . . . , λ_n are eigenvalues of F. Hence, λ_i ∈ Γ(F) ∩ Γ(F^T) for i = 2, 3, . . . , n.
In the case where n is odd, we have F = B − eα^T and G = B − eβ^T for some α, β ∈ R^n (see Theorem 3.3 for how F and G are obtained). It follows by Brauer's theorem that λ_2, λ_3, . . . , λ_n are eigenvalues of each of F and G. Hence, they lie in each of Γ(F), Γ(F^T), Γ(G) and Γ(G^T), and consequently in their intersection.
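The step used in this proof can be checked on a small constant row-sum matrix; B and α below are illustrative, not from the paper.

```python
import numpy as np

# For a constant row-sum matrix B with Be = lam1*e, the matrix F = B - e a^T
# keeps the eigenvalues lam2, ..., lamn and moves lam1 to lam1 - a^T e
# (Brauer's theorem applied with the eigenvector e).
B = np.array([[2., 1., 1.],
              [1., 2., 1.],
              [1., 1., 2.]])             # Be = 4e; spectrum {4, 1, 1}
alpha = np.array([1., 0., 0.])
F = B - np.outer(np.ones(3), alpha)      # subtract alpha_j from column j
eigs = np.sort(np.linalg.eigvals(F).real)
# eigenvalues of F: 1, 1 kept; the trivial eigenvalue 4 moved to 4 - alpha @ e = 3
```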
Example 5.3. We look again at the matrices A_1 and A_2 of the previous example, which are also given in [13, Example 1] and [13, Example 2]. For the matrix A_1, by using (7) we find that Γ(F) ∩ Γ(F^T) = {−2, 0}, which is perfect since it consists of two points only. Note that all three eigenvalues of F are in this set, since 0 is an eigenvalue of F with multiplicity 2; in fact, these eigenvalues can be read off directly from F. For the matrix A_2, by using (7) and Theorem 5.2, we obtain a location set close in size to the Ostrowski-Brauer sets obtained in [13, Figure 3]. (In [13, Figure 3], the parts of the Ostrowski-Brauer set are each enclosed within a rectangle.)

At the end of this section we highlight the following points.

2. In terms of computations, we did not see much difference, since, for all types of discs obtained here and in [13], the calculation of the radii can be done with a number of operations of order O(n).
3. As mentioned earlier, we emphasize that the location sets given by Theorems 2.5, 3.1 and 3.3 allow for a significant reduction of the Gershgorin region of the transpose of a large matrix A if this matrix is dense and the elements in each row (or in each column, for A^T) are relatively close to each other. This can be seen clearly from Definition 2.4.
4. Which location set is best depends on the matrix being studied. One may actually consider the intersection of all of them.
6 Bounding the largest in absolute value of the remaining eigenvalues of A

Let A be an n × n real matrix with known real eigenpair (λ, v). In this section we derive some upper bounds for the largest in absolute value of the remaining eigenvalues of A. The results apply, in particular, to every nonnegative irreducible matrix with known Perron eigenpair (λ_p, v_p).

Some upper bounds derived from the preceding location theorems
It is natural that the farthest point of an inclusion set from the origin provides an upper bound for the absolute values of all the eigenvalues contained in this set. Each of the location sets obtained in the previous sections is a union of discs whose centers and radii are given explicitly in terms of the entries of the matrix. This allows for an easy derivation of the bounds. Let (λ_1, v) be a known real eigenpair of the real n × n matrix A = [a_ij]. Without loss of generality, we assume that every component of v is nonzero; if this is not the case, then we use Theorem 4.1. By Theorem 2.5, every eigenvalue λ of A different from λ_1 is in the Gershgorin region of the second type of the matrix B^T, where B = S^{−1}AS and S = diag(v). This region is made up of the discs {D̂_i(a_ii, r̂_i), i = 1, 2, . . . , n}, where the radius r̂_i is obtained from the i-th column of B according to Definition 2.4. Let λ_2 be a largest eigenvalue of A in absolute value such that λ_2 ≠ λ_1. Then there exists i ∈ {1, 2, . . . , n} such that λ_2 ∈ D̂_i(a_ii, r̂_i). That is, |λ_2 − a_ii| ≤ r̂_i. This implies that |λ_2| ≤ |a_ii| + r̂_i, from which we obtain the upper bound |λ_2| ≤ max_i (|a_ii| + r̂_i). If n is even, this upper bound can be improved by considering the matrix F = [f_ij] obtained from B by using Theorem 3.1. Following the same reasoning as above, we obtain |λ_2| ≤ max_i (|f_ii| + r̂_i(F^T)), where r̂_i(F^T) is the radius of the Gershgorin disc of the second type obtained from the i-th column of F. From this discussion, we state the following theorem, where two cases are considered depending on whether the size of A is even or odd.
Theorem 6.1. Let (λ_1, v) be a known real eigenpair of the n × n real matrix A = [a_ij]. Suppose that every component of v is nonzero. Let S = diag(v) and B = S^{−1}AS. If λ is an eigenvalue of A different from λ_1, then |λ| ≤ max_i (|a_ii| + r̂_i), (13) where r̂_i is the radius of the Gershgorin disc of the second type obtained from the i-th column of B. Moreover:

1. If n is even and F = [f_ij] is the matrix obtained from B by Theorem 3.1, then |λ| ≤ max_i (|f_ii| + r̂_i(F^T)), where r̂_i(F^T) is the radius of the Gershgorin disc of the second type obtained from the i-th column of F.
2. If n is odd and F = [f_ij] and G = [g_ij] are the matrices obtained from B by Theorem 3.3, then |λ| ≤ min( max_i (|f_ii| + r̂_i(F^T)), max_i (|g_ii| + r̂_i(G^T)) ), where r̂_i(F^T) and r̂_i(G^T) are, respectively, the radii of the Gershgorin discs of the second type obtained from the i-th columns of F and G.
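The base bound max_i(|a_ii| + r̂_i) from the derivation above can be sketched end to end. As before, the second-type radius below uses only the verbal description of Definition 2.4, with the split taken as an assumption, so the test here only checks the provable fact that this bound never exceeds the analogous classical Gershgorin bound.

```python
import numpy as np

# Sketch of the bound |lam| <= max_i(|b_ii| + r_i) for B = S^{-1} A S,
# with the assumed second-type radius (largest half minus smallest half of
# the off-diagonal entries of a column).  By the triangle inequality this
# never exceeds the classical column-Gershgorin bound.
def second_type_radius(col, j):
    off = np.sort(np.delete(col, j))
    k = (len(off) + 1) // 2                       # assumed split
    return off[-k:].sum() - off[:len(off) - k].sum()

def second_type_bound(A, v):
    v = np.asarray(v, dtype=float)
    B = A * (v[None, :] / v[:, None])             # B = S^{-1} A S
    return max(abs(B[i, i]) + second_type_radius(B[:, i], i)
               for i in range(len(v)))

def gershgorin_bound(A, v):
    v = np.asarray(v, dtype=float)
    B = A * (v[None, :] / v[:, None])
    return max(abs(B[i, i]) + np.abs(np.delete(B[:, i], i)).sum()
               for i in range(len(v)))

# Illustrative positive (hence irreducible) matrix and its Perron eigenvector.
rng = np.random.default_rng(2)
A = rng.random((6, 6))
lam, V = np.linalg.eig(A)
v = np.abs(V[:, np.argmax(lam.real)].real)        # Perron eigenvector, positive
assert second_type_bound(A, v) <= gershgorin_bound(A, v) + 1e-12
```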
This theorem applies, in particular, to every n × n nonnegative irreducible matrix A with known Perron eigenpair (λ_p, v_p). Note that, in this case, the Perron eigenvalue λ_p is itself an upper bound for |λ|. Therefore the upper bounds given by Theorem 6.1 are of interest when they are smaller than λ_p; in other words, the bound given by (13) improves on λ_p exactly when λ_p is outside the Gershgorin region of the second type of B. The same idea applies to F and G.

Example 6.2. The Perron eigenvalue of the matrix A of this example is λ_p = 24, associated with the Perron eigenvector v_p = (1, 2, 2, 1)^T. The remaining distinct eigenvalues of A are λ_2 = −6 and λ_3 = −2 (λ_3 has multiplicity 2). The second largest eigenvalue in absolute value, |λ_2| = 6, can be bounded as follows. We apply Corollary 2.7 to A to obtain the matrix B. To improve the resulting bound, we first apply Theorem 3.1 to B to obtain the matrix F, and then apply Theorem 6.1 to F to obtain the new upper bound m_F = 6 ≥ |λ_2| = 6.
In the preceding example, the bound m_F is perfect, as it is equal to |λ_2|. This is not always the case: looking at several nonnegative matrices, we observed that the gap between the second largest eigenvalue in absolute value and the bounds given by Theorem 6.1 can sometimes be relatively large. In general, the eigenvector v in Theorem 6.1 can be associated with any eigenvalue of A, not necessarily with its spectral radius. This implies that the bounds given by Theorem 6.1 could simply be upper bounds for the spectral radius itself and, therefore, should be compared to the many upper bounds for the spectral radius available in the literature. In general, if A has a known eigenpair (λ, v), then the bounds given by Theorem 6.1 bound the largest absolute value among the eigenvalues of A other than λ. The upper bounds of Theorem 6.1 are considered for their simple and explicit algebraic forms and for their potential theoretical applications. For numerical applications, we believe the bounds in the next subsection are even more interesting.
6.2 Bounding the eigenvalues of real matrices by using a class of matrix semi-norms

Some of the important upper bounds obtained for the second largest eigenvalue, in absolute value, of a nonnegative matrix in general, and of a stochastic matrix in particular, can be found in [12], [14], [16], [17], [18], the valuable book [15] and the interesting survey [11]. Note that an important class of bounds is obtained in [14] by using the technique of converting a nonnegative matrix to a nonnegative constant row-sum matrix via the idea in Theorem 2.1; see [14].
In general, if A is a nonnegative irreducible matrix having Perron eigenpair (λ_p, v_p) and S = diag(v_p), then every bound m that applies to stochastic matrices also applies to the matrix A, since the matrix B = (1/λ_p) S^{−1}AS is stochastic and its spectrum is proportional to that of A. Other bounds obtained by using the technique of matrix deflation are discussed in [4]. If A is any real matrix having an eigenvector v with no zero component and S = diag(v), then every bound M that applies to the general case of real constant row-sum matrices also applies to the matrix A, since the matrix B = S^{−1}AS is constant row-sum and has the same spectrum as A. Such upper bounds are discussed in our recent articles [9] and [10]. If the eigenvector v of A has some components equal to zero, then Theorem 4.1 can be used. Let B be an n × n real constant row-sum matrix such that Be = λ_t e, and let τ_p(B) be the nonnegative matrix function defined by (16), in terms of the l_p-norm ||x||_p of a vector x, with p ∈ N ∪ {∞}. If λ is an eigenvalue of B different from λ_t, then |λ| ≤ (τ_p(B^k))^{1/k} for every k ∈ N (17); see [10, Corollary 2.9].
The two functions τ_1(B) and τ_∞(B) are numerically applicable because of their known explicit forms; see [11] and the references included there for stochastic matrices, and [10] for the more general case of constant row-sum matrices, where the auxiliary quantity ρ̂(B) appearing in these forms is also defined.
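Since the explicit forms of τ_1 and τ_∞ are not reproduced above, here is a concrete illustration of an explicitly computable coefficient of this kind: the classical l_1 ergodicity coefficient of Dobrushin and Seneta, which bounds every non-trivial eigenvalue of a constant row-sum matrix. Whether it coincides with either of the paper's two functions is not assumed here.

```python
import numpy as np

# Classical l1 ergodicity coefficient: (1/2) max_{i,j} sum_k |b_ik - b_jk|.
# For a real constant row-sum matrix B with Be = lam_t*e, every eigenvalue
# lam != lam_t satisfies |lam| <= this coefficient (its left eigenvector u
# satisfies u^T e = 0, over which the coefficient is the induced l1 norm).
def ergodicity_coefficient(B):
    return 0.5 * np.abs(B[:, None, :] - B[None, :, :]).sum(axis=2).max()

B = np.array([[2., 1., 1.],
              [1., 2., 1.],
              [1., 1., 2.]])        # Be = 4e; spectrum {4, 1, 1}
tau = ergodicity_coefficient(B)     # any two rows differ by (1, -1, 0): tau = 1
print(tau)                          # 1.0, and indeed |lam| = 1 <= tau (tight here)
```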
For the case of a real matrix with a given real eigenpair (λ_1, v) such that v has no zero components, we have the following corollary, which is an immediate consequence of (17).
Corollary 6.5. Let A be an n × n real matrix having spectrum {λ_1, λ_2, . . . , λ_n} (counting multiplicities). Suppose that λ_1 is real and associated with a real eigenvector v with no zero components, and let D = diag(v). Then for i = 2, 3, . . . , n and for every k ∈ N we have |λ_i| ≤ (τ_p(B^k))^{1/k}, (21) where B is the constant row-sum matrix given by B = D^{−1}AD.
Remark 6.6. If some of the components of v are equal to zero, then we can use Theorem 4.1 to obtain the matrix C = SPAP^T S^{−1}. If C is constant row-sum, we apply (21) to it. If not, then C has an eigenvector SPv with no zero components, which allows us to obtain a constant row-sum matrix B by Theorem 2.1. Then (21) is applied to B.
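This pipeline can be sketched end to end, under the same assumed reading of the Theorem-4.1 matrices P and S as in the earlier sketch, and with the classical l_1 ergodicity coefficient standing in for the τ_p of [10] (an assumption; the exact τ_p is not reproduced in the text).

```python
import numpy as np

# End-to-end sketch of Remark 6.6: v has zeros, so deflate with the assumed
# Theorem-4.1 matrices, then bound the non-trivial eigenvalues with the
# classical l1 ergodicity coefficient (a stand-in for tau_p of [10]).
def ergodicity_coefficient(B):
    return 0.5 * np.abs(B[:, None, :] - B[None, :, :]).sum(axis=2).max()

def deflate(A, v):
    v = np.asarray(v, dtype=float); n = len(v)
    order = np.argsort(v != 0, kind="stable")     # zero components first
    P = np.eye(n)[order]
    k = int(np.sum(v == 0))
    M = np.zeros((k, n - k)); M[:, 0] = 1.0
    S = np.block([[np.eye(k), M], [np.zeros((n - k, k)), np.eye(n - k)]])
    Sinv = np.block([[np.eye(k), -M], [np.zeros((n - k, k)), np.eye(n - k)]])
    return S @ P @ A @ P.T @ Sinv, S @ P @ v

# Illustrative matrix with A @ v = 0 for v = (0, 0, 1, 1)^T.
A = np.array([[1., 2, 3, -3], [4, 5, 6, -6], [7, 8, 9, -9], [1, 1, 2, -2]])
v = np.array([0., 0., 1., 1.])
C, w = deflate(A, v)                    # here w = e, so C is constant row-sum
assert np.allclose(C @ np.ones(4), 0.0)
tau = ergodicity_coefficient(C)
# every eigenvalue of A other than the trivial one satisfies |mu| <= tau
assert np.max(np.abs(np.linalg.eigvals(A))) <= tau + 1e-9
```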
The inequality in (21) also provides an upper bound for the determinant of a real matrix.
Corollary 6.7. Let A be an n × n real matrix having real eigenpair (λ, v). Suppose that v has no zero components and let D = diag(v). Then for every k ∈ N we have |det A| ≤ |λ| (τ_p(B^k))^{(n−1)/k}, where B = D^{−1}AD. In particular, |det A| ≤ |λ| (τ_p(B))^{n−1}.

Proposition 6.8. Let A and B be as in Corollary 6.5 and let F and G be the n × n constant row-sum matrices obtained from B by Theorem 3.1 or Theorem 3.3. Then τ_p(B) = τ_p(F), if n is even, (24) and τ_p(B) = τ_p(F) = τ_p(G), if n is odd. (25)
Proof. Observe that F and G are constructed by adding to each column of B a multiple of the all 1's vector e. That is, F = B + [α_1 e  α_2 e  . . .  α_n e] with α_i ∈ R, and G has a similar form. Then (24) and (25) follow from the definition of τ_p given by (16).
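The mechanism behind this proof is visible numerically: shifting every column of B by a multiple of e leaves all row differences, and hence any coefficient built from them, unchanged. The check below uses the classical l_1 ergodicity coefficient as a representative of such coefficients (whether it equals the paper's τ_p is not assumed); B and α are illustrative.

```python
import numpy as np

# Column shifts by multiples of e: F = B + e alpha^T has the same row
# differences as B, so any row-difference coefficient is invariant.
def ergodicity_coefficient(B):
    return 0.5 * np.abs(B[:, None, :] - B[None, :, :]).sum(axis=2).max()

rng = np.random.default_rng(1)
B = rng.standard_normal((5, 5))
alpha = rng.standard_normal(5)
F = B + np.outer(np.ones(5), alpha)      # add alpha_j to every entry of column j
assert np.isclose(ergodicity_coefficient(B), ergodicity_coefficient(F))
```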
Remark 6.9. Proposition 6.8 may be useful for computational purposes. In the following examples, F has a simpler structure than B, since it contains more zeros and integers that are smaller in absolute value. That makes the calculation of τ_1(F) and τ_∞(F) faster, while, in the end, they have the same values as τ_1(B) and τ_∞(B).
Example 6.10. For the case where v has no zero component, we reconsider the matrix A of Example 6.2. The given eigenpair consists of λ_p = 24 and v_p = (1, 2, 2, 1)^T. By applying (21) to F, we obtain the following bounds for |λ_2| = 6.
                        F       F^2      F^3
(τ_∞(F^k))^{1/k}   ≈    6       6        6
(τ_1(F^k))^{1/k}   ≈    8       6.93     6.54

Note that the bound τ_∞(F) is equal to the bound obtained in Example 6.2 by applying Theorem 6.1 to F.

Example 6.11. For the case where v has some zero components, we go back to Example 4.2. The given eigenvalue is λ_1 = 0, associated with the eigenvector v = (0, 0, 1, 1)^T. The eigenvalue we need to bound is λ_2 = 2. Some bounds obtained by applying (21) to F are given in the following table.

Comparison to some existing bounds
Finally, we make comparisons between the bounds discussed in this section and those obtained in [13]. For that, we consider a matrix given in [13]. The given eigenpair is (24, e) and the other eigenvalues of A are λ_2 = −6 and λ_3 = 5. The eigenvalue to bound is λ_2. Two bounds obtained for this matrix in [13] are m_1 = 12 and m_2 ≈ 11.36. Hoffman's bound was also calculated for this matrix and found to be m_3 = 12. Note that A is a constant row-sum matrix, and F and G are the matrices obtained from it according to Theorem 3.3. The bound obtained by applying Theorem 6.1 is m_4 = 14. Applying (21) to G with τ_∞ gives the bounds m_5 = τ_∞(G) = 12 and m_6 = (τ_∞(G^2))^{1/2} ≈ 7.75. Using the matrix semi-norm τ_1, we obtain m_7 = τ_1(G) = 6 = |λ_2|, and no computation of the second or third power of G is needed. We observe that, for this particular example, the bound given by τ_1(G) is perfect, and the bound given by τ_∞ is relatively good when applied to G^2.
In general, at the cost of some matrix power computations, the bounds given by Corollary 6.5 can outperform the other bounds discussed here, since their convergence is ensured by (20).

Conclusion
In this work we have shown how the location of the eigenvalues of a real matrix can be improved if a real eigenpair of the matrix is known. A significant improvement over the original Gershgorin region of the transpose of the matrix is obtained if the matrix is large and dense and the elements in each row (or in each column, if A^T is considered) are close to each other. The ideas discussed give rise to some questions, such as: 1. Can we find similar location sets in the more general case of complex matrices?
2. If two or more independent eigenvectors are known to be associated with the same eigenvalue, then how does this affect the location of the other eigenvalues of the matrix?