The spectrum of two interesting stochastic matrices

The spectra of two interesting stochastic matrices appearing in an engineering paper are completely determined. As a result, an inequality conjectured in that paper, involving the two second largest eigenvalues, is easily proved.


Introduction
Two interesting matrices, A and B below, appear in the paper [1] by Roya Norouzi Kandalan et al. The paper investigates the leader-follower model in decision making, using matrix-analysis techniques associated with bipartite graphs. The two n × n matrices of interest are right stochastic with rational entries. The paper conjectures an inequality, labeled (1) there, between the two matrices' second largest eigenvalues, where λ denotes the second largest eigenvalue of the respective matrix. In this note we validate the inequality (1) via a complete determination of the spectra of A and B. Our approach is elementary, and succeeds thanks to the large ((n − 2)-dimensional) null-spaces exhibited by the two matrices.

The Matrices
For the purpose of introducing the two matrices A and B it is more economical to use block-matrix notation. Fix first two positive integers n and k, with 1 < k < n. Then define, in block-matrix notation, matrices A and B in R^{n×n} by

The Results
In preparation for stating our key lemma and main result, we denote the standard basis of R^n by e_1^n, e_2^n, . . . , e_n^n. We also denote a generic vector in R^n by x = (x_1, x_2, . . . , x_n)^T. Our key lemma asserts that the spectrum of A consists of 1, with multiplicity 1, the second largest eigenvalue λ, with multiplicity 1, and 0, with multiplicity n − 2.
That N(A) is cut out by two independent linear conditions, one of which is x_k + x_{k+1} + · · · + x_n = 0, is a routine verification, based on the actual expression of A. Therefore dim_R N(A) = n − 2, and so 0 is an eigenvalue of A with geometric multiplicity n − 2. As a stochastic matrix, A also possesses 1 as a (necessarily largest) eigenvalue, for which the all-ones vector u_n is an eigenvector. Notice now that we have the direct sum decomposition R^n = span{u_n} ⊕ span{e_1^n} ⊕ N(A). We seek the last eigenvector of A in the form v = α u_n + e_1^n + x, for suitable α ∈ R and x ∈ N(A).
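The eigenvalue equation for this ansatz can be spelled out using only facts already stated: A u_n = u_n, A x = 0 for x ∈ N(A), and A e_1^n = a_1^n, the first column of A. The display below is our reconstruction of this step:

```latex
\[
  Av \;=\; \alpha\,Au_n + Ae_1^n + Ax \;=\; \alpha\,u_n + a_1^n ,
\]
so the eigenvalue equation $Av = \lambda v$ reads
\[
  \alpha\,u_n + a_1^n \;=\; \lambda\alpha\,u_n + \lambda e_1^n + \lambda x ,
  \qquad\text{i.e.}\qquad
  \lambda x \;=\; \alpha(1-\lambda)\,u_n + a_1^n - \lambda e_1^n .
\]
```

Matching components on both sides then determines α and λ in terms of n and k, and the vector λx componentwise.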
Setting now y = λx, the components of y as given by Equations (4) follow immediately from (6) and the expressions of α and λ in terms of n and k. Notice that 0 < λ < 1; therefore λ is indeed the second largest eigenvalue of A.
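The entries of A are not reproduced in this excerpt, so the spectrum cannot be checked directly here. As an illustrative stand-in, the sketch below builds a hypothetical right stochastic matrix with only two distinct rows (a plausible analogue of the block structure): such a matrix has rank at most 2, hence an (n − 2)-dimensional null-space, and its spectrum consists of 1, one further eigenvalue, and 0 with multiplicity n − 2. All names and sizes here are our assumptions, not taken from [1].

```python
import numpy as np

n, k = 6, 3  # hypothetical sizes with 1 < k < n; the actual A of [1] is not reproduced here

# Two probability vectors; rows 1..k of M equal p, rows k+1..n equal q.
p = np.full(n, 1.0 / n)
q = np.zeros(n)
q[k:] = 1.0 / (n - k)

M = np.vstack([np.tile(p, (k, 1)), np.tile(q, (n - k, 1))])
assert np.allclose(M.sum(axis=1), 1.0)  # right stochastic: every row sums to 1

# Rank at most 2 forces at least n - 2 zero eigenvalues.
eigvals = np.sort_complex(np.linalg.eigvals(M))
num_zero = int(np.sum(np.abs(eigvals) < 1e-10))
print(num_zero)                # 4, i.e. n - 2
print(np.max(eigvals.real))    # largest eigenvalue is (close to) 1
```

The qualitative picture matches the lemma: the only eigenvalues besides 1 and the second largest one are zeros coming from the large null-space.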
b) The proof of b) can mimic exactly that of a), and we encourage the reader to carry it out. However, there is an easier way to conclude b) from a), via the observation that the matrices A and B are similar, with similarity matrix P given in column form by P = [ e_{k+1}^n , e_2^n , . . . , e_k^n , e_1^n , e_{k+2}^n , . . . , e_n^n ]. P is a nontrivial involution, P^2 = Id, and so P^{−1} = P. Right multiplication of any matrix by P swaps the first and the (k+1)st columns of the matrix, while leaving the other columns unchanged, and left multiplication does the same to the rows. This being said, notice that A is indeed obtained from B by first swapping columns 1 and k+1, and then swapping rows 1 and k+1 of the resulting matrix; that is, A = P B P = P B P^{−1}. Similar matrices share the same spectrum, so b) follows from a). (Recall from the proof of a) that Av = α u_n + a_1^n, where a_1^n is the first column of A; equivalently, λv = α u_n + a_1^n, from which λ and α are obtained as explicit rational expressions in n and k.)
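The involution P and the spectrum-preserving conjugation can be checked numerically. Since B is not reproduced in this excerpt, the sketch below applies the construction to a generic random right stochastic matrix M; the sizes n, k, the seed, and M itself are our stand-ins, not data from [1].

```python
import numpy as np

n, k = 6, 3  # stand-in sizes with 1 < k < n

# Columns of P: [e_{k+1}, e_2, ..., e_k, e_1, e_{k+2}, ..., e_n]
# (0-based: the identity with columns 0 and k swapped).
P = np.eye(n)
P[:, [0, k]] = P[:, [k, 0]]
assert np.allclose(P @ P, np.eye(n))  # nontrivial involution: P^2 = Id, so P^{-1} = P

# Stand-in right stochastic matrix (rows normalized to sum to 1).
rng = np.random.default_rng(0)
M = rng.random((n, n))
M /= M.sum(axis=1, keepdims=True)

# Conjugation by P swaps rows 1 and k+1, then columns 1 and k+1.
Mc = P @ M @ P
assert np.isclose(Mc[0, 0], M[k, k])  # entry (1,1) came from entry (k+1, k+1)

# Similar matrices share the same characteristic polynomial, hence the same spectrum.
assert np.allclose(np.poly(M), np.poly(Mc))
```

Comparing characteristic polynomials via `np.poly` sidesteps the ordering issues that arise when comparing lists of complex eigenvalues directly.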
However, BA continues to be diagonalizable in those cases too, and the eigenvalue 0 has geometric multiplicity n − 2.