# The complete positivity of symmetric tridiagonal and pentadiagonal matrices

Lei Cao, Darian McLaren and Sarah Plosker
From the journal Special Matrices

## Abstract

We provide a decomposition that is sufficient to show when a symmetric tridiagonal matrix A is completely positive. Our decomposition can be applied to a wide range of matrices. We give alternate proofs for a number of related results found in the literature in a simple, straightforward manner. We show that the cp-rank of any completely positive irreducible tridiagonal doubly stochastic matrix is equal to its rank. We then consider symmetric pentadiagonal matrices, proving some analogous results and providing two different decompositions sufficient for complete positivity. We illustrate our constructions with a number of examples.

MSC 2010: 05C38; 05C50; 15B51; 15B57

## 1 Preliminaries

All matrices herein are real valued and in particular are entrywise non-negative. Let $A$ be an $n \times n$ symmetric tridiagonal matrix:

$$A = \begin{pmatrix}
a_1 & b_1 & & & & \\
b_1 & a_2 & b_2 & & & \\
& \ddots & \ddots & \ddots & & \\
& & b_{n-3} & a_{n-2} & b_{n-2} & \\
& & & b_{n-2} & a_{n-1} & b_{n-1} \\
& & & & b_{n-1} & a_n
\end{pmatrix}.$$

We are often interested in the case where $A$ is also doubly stochastic, in which case we have $a_i = 1 - b_{i-1} - b_i$ for all $i = 1, 2, \ldots, n$, with the convention that $b_0 = b_n = 0$. It is easy to see that if a tridiagonal matrix is doubly stochastic, it must be symmetric, so the additional hypothesis of symmetry can be dropped in that case.
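To make the convention concrete, such a matrix can be assembled directly from its off-diagonal entries; a minimal sketch in NumPy (the helper name is ours):

```python
import numpy as np

def tridiagonal_doubly_stochastic(b):
    """Symmetric tridiagonal doubly stochastic matrix with off-diagonal
    entries b = (b_1, ..., b_{n-1}); the diagonal is forced to be
    a_i = 1 - b_{i-1} - b_i, with the convention b_0 = b_n = 0."""
    b = np.asarray(b, dtype=float)
    n = len(b) + 1
    bp = np.concatenate(([0.0], b, [0.0]))   # b_0, b_1, ..., b_n
    A = np.diag(1.0 - bp[:-1] - bp[1:])      # a_i = 1 - b_{i-1} - b_i
    A += np.diag(b, 1) + np.diag(b, -1)
    return A

A = tridiagonal_doubly_stochastic([0.25, 0.25, 0.25, 0.25])
print(A.sum(axis=0))  # every row and column sums to 1
```

With these off-diagonal entries the diagonal comes out as $(3/4, 1/2, 1/2, 1/2, 3/4)$, the matrix $A$ of Example 1 below.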

We are interested in positivity conditions for symmetric tridiagonal and pentadiagonal matrices. A stronger condition than positive semidefiniteness, known as complete positivity, has applications in a variety of areas of study, including block designs, maximin efficiency-robust tests, modelling DNA evolution, and more [8, Chapter 2], as well as recent use in mathematical optimisation and quantum information theory (see [24] and the references therein).

With this motivation in mind, we study the positivity (in various forms) of symmetric tridiagonal and pentadiagonal matrices, where we highlight the important case when the matrix is also doubly stochastic. Although it is NP-hard to determine if a given matrix is completely positive [17], in Section 2.2, we provide a construction that is sufficient to show that a given symmetric tridiagonal matrix is completely positive. We provide a number of examples illustrating the utility of this construction. The literature on completely positive matrices often considers the cp-rank, or the factorisation index, of a completely positive matrix, which is the minimal number of rank-one matrices in the decomposition showing complete positivity; e.g., Chapter 3 of [8] is devoted to this topic. We show that for irreducible tridiagonal doubly stochastic matrices, our decomposition is minimal. It should be noted that it is known that acyclic doubly non-negative matrices are completely positive [7], and this result has been generalised to bipartite doubly non-negative matrices [6]. Our Proposition 3 is an independent discovery of a special case of this result, using a simpler method of proof.

As a natural extension of the tridiagonal case, we generalize many of our results to symmetric pentadiagonal matrices in Section 3. While a construction analogous to that for the tridiagonal setting works in the pentadiagonal setting, we also provide an alternate, more involved, construction that works in many cases when the original construction does not. A characterisation used to determine the complete positivity of a matrix with a particular graph is given in [3], with respect to the complete positivity of smaller matrices; they consider a particular non-crossing cycle, which is the graph of a pentadiagonal matrix.

## 2 Tridiagonal matrices

### 2.1 Basic properties of tridiagonal doubly stochastic matrices

Tridiagonal doubly stochastic matrices arise in the literature in a number of areas, in particular with respect to the study of Markov chains and in majorisation theory. The facial structure of the set of all tridiagonal doubly stochastic matrices, which is a subpolytope of the Birkhoff polytope of $n \times n$ doubly stochastic matrices, is explored in [15] with a connection to majorisation. In [26], the author develops relations involving sums of Jensen functionals to compare tuples of vectors; a tridiagonal doubly stochastic matrix is used to demonstrate their results. In the study of mixing rates for Markov chains, the assumption of symmetry in the transition matrix is sometimes seen, as in [10]. Other times, the Markov chain is assumed to be a path [9,11], leading to a tridiagonal transition matrix. The properties of symmetric doubly stochastic matrices are explored in [27], where majorisation relations are given for the eigenvalues. Properties related to the facial structure of the polytope of tridiagonal doubly stochastic matrices can be found in [12,13,14]. In the former, alternating parity sequences are used to express the number of vertices of a given face, and in the latter, the number of $q$-faces of the polytope for arbitrary $n$ is determined for $q = 1, 2, 3$.

Many factorisation techniques for tridiagonal matrices have been proposed in the literature; for example, [19,20,22] are concerned with $LXL^T$ factorisations and pivoting algorithms, where $L$ is unit lower triangular and $X$ is block diagonal with $1 \times 1$ or $2 \times 2$ blocks, factorisation using parallel computers is studied in [1], and a factorisation method for a symmetric singular value decomposition (Takagi factorisation) is given for real and complex symmetric tridiagonal matrices in [18,30], respectively. Here, we present an algorithm to factor entrywise non-negative, symmetric tridiagonal matrices to show complete positivity, discussed later.

One can ask under what conditions a tridiagonal doubly stochastic matrix $A$ is positive semidefinite. It is known that a symmetric diagonally dominant matrix $A$ with non-negative diagonal entries is positive semidefinite. Thus, in our case, if

(1) $b_{i-1} + b_i \leq 0.5$

for all $i = 1, 2, \ldots, n$, with $b_0 = b_n = 0$, then $A$ is diagonally dominant, and hence, $A$ is positive semidefinite. So (1) is sufficient for positive semidefiniteness of a tridiagonal doubly stochastic matrix. However, the following matrix is a tridiagonal doubly stochastic matrix that is positive semidefinite, showing that (1) is not necessary:

$$\begin{pmatrix}
0.6 & 0.4 & 0 & 0 \\
0.4 & 13/30 & 1/6 & 0 \\
0 & 1/6 & 13/30 & 0.4 \\
0 & 0 & 0.4 & 0.6
\end{pmatrix}.$$
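A quick numerical check (ours) that this matrix is indeed positive semidefinite while failing (1) in its middle rows, where $b_1 + b_2 = 0.4 + 1/6 > 0.5$:

```python
import numpy as np

A = np.array([[0.6, 0.4, 0, 0],
              [0.4, 13/30, 1/6, 0],
              [0, 1/6, 13/30, 0.4],
              [0, 0, 0.4, 0.6]])
eigs = np.linalg.eigvalsh(A)
print(eigs.min() >= -1e-12)   # True: A is positive semidefinite
print(0.4 + 1/6 <= 0.5)       # False: condition (1) fails in row 2
```

In fact the smallest eigenvalue is exactly zero here (the determinant vanishes), so the matrix is positive semidefinite but not positive definite.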

We note that since tridiagonal doubly stochastic matrices are symmetric, their eigenvalues are real. Further, since the matrices are doubly stochastic, they always have $1$ as an eigenvalue (at least once), with corresponding eigenvector $\mathbf{1}$ (the all-ones vector). It is well known that if $\lambda \in \mathbb{C}$ is an eigenvalue of a stochastic matrix, then $|\lambda| \leq 1$. In our context, we note further that $-1 \leq \lambda \leq 1$ (in particular, $\lambda \in \mathbb{R}$); this follows immediately from the assumption that our matrix is symmetric. In fact, we can say something stronger, as in the following proposition.

### Proposition 1

Let $n \geq 2$. Then $\lambda$ is an eigenvalue of an $n \times n$ tridiagonal doubly stochastic matrix if and only if $\lambda \in [-1, 1]$.

### Proof

Suppose $\lambda \in [-1, 1]$ is arbitrary. The $2 \times 2$ tridiagonal doubly stochastic matrix $A = \begin{pmatrix} a & b \\ b & a \end{pmatrix}$ with $a + b = 1$, $a \in [0, 1]$, has eigenvalues $1$ and $2a - 1$. So choose $a$ such that $2a - 1 = \lambda$, i.e., $a = (\lambda + 1)/2$. Then $\lambda$ is an eigenvalue of the constructed matrix $A$. For $n > 2$, note that we can construct an $n \times n$ tridiagonal doubly stochastic matrix via $A \oplus B$, where $B$ is an $(n-2) \times (n-2)$ tridiagonal doubly stochastic matrix, and the constructed matrix $A \oplus B$ has $\lambda$ as an eigenvalue (if $v$ is an eigenvector corresponding to $\lambda$ for the matrix $A$, then $v \oplus 0_{n-2}$, where $0_{n-2}$ is the $(n-2)$-dimensional zero vector, is an eigenvector corresponding to $\lambda$ for $A \oplus B$). Thus, one can construct a tridiagonal doubly stochastic matrix of arbitrary size having the prescribed eigenvalue $\lambda$.

The converse follows from the discussion prior to this proposition: the eigenvalues of a tridiagonal doubly stochastic matrix $A$ all lie in $[-1, 1]$.□
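The proof is constructive and easy to carry out numerically; a sketch (the function name is ours) that pads the $2 \times 2$ block with an identity direct summand, the identity being itself a tridiagonal doubly stochastic matrix:

```python
import numpy as np

def with_eigenvalue(lam, n):
    """n x n tridiagonal doubly stochastic matrix having lam in [-1, 1] as an
    eigenvalue: the 2 x 2 block with a = (lam + 1)/2 has eigenvalues 1 and
    2a - 1 = lam, and the identity padding contributes only eigenvalue 1."""
    a = (lam + 1) / 2
    A = np.eye(n)
    A[:2, :2] = [[a, 1 - a], [1 - a, a]]
    return A

A = with_eigenvalue(-0.5, 4)
print(np.isclose(np.linalg.eigvalsh(A).min(), -0.5))  # True
```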

### 2.2 Complete positivity

### Definition 1

An $n \times n$ real matrix $A$ is completely positive if it can be decomposed as $A = V V^T$, where $V$ is an $n \times k$ entrywise non-negative matrix, for some $k$.

Equivalently, one can define $A$ to be completely positive provided $A = \sum_{i=1}^{k} v_i v_i^T$, where the $v_i$ are entrywise non-negative vectors (namely, the columns of $V$).

Completely positive matrices are positive semidefinite and symmetric entrywise non-negative; such matrices are called doubly non-negative. Doubly non-negative matrices are completely positive for $n \leq 4$, while doubly non-negative matrices that are not completely positive exist for all $n \geq 5$; see [4] and the references therein. In other words, the set of all completely positive matrices forms a strict subset of the set of all doubly non-negative matrices for $n \geq 5$.

We outline below a construction producing the completely positive decomposition $A = \sum_i v_i v_i^T$, which can be found by assuming that, since $A$ is tridiagonal, each $v_i$ should have only two non-zero entries (the $i$th and $(i+1)$th entries), and brute-force solving for these entries from the equation $A = V V^T$; these values can also be found somewhat indirectly, assuming our initial condition is zero, through a construction of pairwise completely positive matrices in [24, Theorem 4] by taking both matrices to be $A$.

For a given $n \times n$ symmetric tridiagonal matrix $A$, define the set $\{v_i\}_{i=0}^{n}$ of cardinality $n+1$, whose elements are $n$-dimensional vectors where the $j$th component of $v_i$, denoted $(v_i)_j$, is recursively defined by

(2) $(v_i)_j = \begin{cases} \sqrt{a_i - ((v_{i-1})_i)^2} & j = i \\ b_i / (v_i)_i & j = i + 1 \\ 0 & \text{otherwise} \end{cases}$

with initial condition $v_0 = (a_0\ \ 0\ \ \cdots\ \ 0)^T$. This construction yields

$$v_1 = \begin{pmatrix} \sqrt{a_1 - a_0^2} & \dfrac{b_1}{\sqrt{a_1 - a_0^2}} & 0 & \cdots & 0 \end{pmatrix}^T$$

$$v_2 = \begin{pmatrix} 0 & \sqrt{a_2 - \dfrac{b_1^2}{a_1 - a_0^2}} & \dfrac{b_2}{\sqrt{a_2 - \dfrac{b_1^2}{a_1 - a_0^2}}} & 0 & \cdots & 0 \end{pmatrix}^T$$

$$v_3 = \begin{pmatrix} 0 & 0 & \sqrt{a_3 - \dfrac{b_2^2}{a_2 - \dfrac{b_1^2}{a_1 - a_0^2}}} & \dfrac{b_3}{\sqrt{a_3 - \dfrac{b_2^2}{a_2 - \dfrac{b_1^2}{a_1 - a_0^2}}}} & 0 & \cdots & 0 \end{pmatrix}^T, \quad \text{etc.}$$

The constant $a_0$ must satisfy $a_0 \geq 0$; however, it is worth noting that certain values of $a_0$ (the most obvious case being $a_0^2 = a_1$) can lead to some of the $v_i$ vectors being ill defined.
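The recursion (2) translates directly into code; the following sketch (ours, in NumPy) builds the vectors $v_0, \ldots, v_n$ and raises when a radicand goes negative, i.e., when the construction is ill defined:

```python
import numpy as np

def tridiagonal_cp_vectors(A, a0=0.0):
    """Vectors v_0, ..., v_n of equation (2) for a symmetric tridiagonal A."""
    n = A.shape[0]
    a = np.diag(A)        # a_1, ..., a_n
    b = np.diag(A, 1)     # b_1, ..., b_{n-1}
    vs = [np.zeros(n)]
    vs[0][0] = a0                                # v_0 = (a_0, 0, ..., 0)^T
    for i in range(1, n + 1):
        v = np.zeros(n)
        rad = a[i - 1] - vs[i - 1][i - 1] ** 2   # a_i - ((v_{i-1})_i)^2
        if rad < 0:
            raise ValueError("construction ill defined")
        v[i - 1] = np.sqrt(rad)                  # (v_i)_i
        if i < n:
            v[i] = b[i - 1] / v[i - 1]           # (v_i)_{i+1} = b_i / (v_i)_i
        vs.append(v)
    return vs

A = np.array([[3/4, 1/4, 0], [1/4, 1/2, 1/4], [0, 1/4, 3/4]])
vs = tridiagonal_cp_vectors(A)
print(np.allclose(sum(np.outer(v, v) for v in vs), A))  # True
```

All of the computed entries are non-negative here, so the sketch also certifies that this particular $3 \times 3$ matrix is completely positive.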

### Proposition 2

Let $A$ be an $n \times n$ symmetric tridiagonal matrix and $a_0 \geq 0$. If the $v_i$ as defined in equation (2) are well defined, then $A = \sum_{i=0}^{n} v_i v_i^T$. If the entries of each $v_i$ are non-negative numbers, then $A$ is completely positive.

We note that if $A$ is entrywise non-negative, which includes the case of $A$ being doubly stochastic, and the entries of the $v_i$ are all real, then they are automatically non-negative.

### Proof

Consider a symmetric tridiagonal matrix $A$ such that the vectors in equation (2) are well defined. Let $V_i = v_i v_i^T$ for all $i = 0, 1, \ldots, n$ and $\tilde{A} = \sum_{i=0}^{n} V_i$. We wish to show that $\tilde{A} = A$. From the definition of the $v_i$ given in equation (2), each $V_i$ is tridiagonal with only up to four non-zero entries, and so $\tilde{A}$ itself is tridiagonal. Now, consider a component $\tilde{a}_{j,j+1}$ of $\tilde{A}$, where $j = 1, 2, \ldots, n-1$. The only $V_i$ that has a non-zero entry in the $(j, j+1)$th component is $V_j$, as $v_j$ is the only vector with both the $j$th and $(j+1)$th components being non-zero. The $(j, j+1)$th component of $V_j$ is in fact $b_j$, and so $\tilde{a}_{j,j+1} = b_j$. By symmetry, we also have $\tilde{a}_{j+1,j} = b_j$. Now consider a component on the diagonal of $\tilde{A}$: $\tilde{a}_{jj}$, where $j = 1, 2, \ldots, n$. The only $V_i$ that have non-zero entries in the $(j, j)$th component are $V_j$ and $V_{j-1}$, with respective values $a_j - ((v_{j-1})_j)^2$ and $((v_{j-1})_j)^2$. Clearly, then, $\tilde{a}_{jj} = a_j$ for all $j = 1, 2, \ldots, n$. Therefore, $A = \tilde{A} = \sum_{i=0}^{n} v_i v_i^T$. If the entries of each $v_i$ are non-negative numbers, then $A$ is completely positive.□

There are a number of other related algorithms for finding the decomposition of a completely positive matrix. Method 5.3 of [16] equates showing that a circular matrix $A$ is completely positive with finding a solution to an optimisation problem using a recurrence relation involving the $a_i$ and $b_i$, not unlike equation (1). The algorithm in [25], which uses vertex-edge incidence matrices, applies to non-negative diagonally dominant symmetric matrices. In the case of tridiagonal matrices, our algorithm is in fact more general than that of [25], as our algorithm applies to non-negative positive semidefinite (tridiagonal) matrices. Our algorithm also gives the minimal completely positive decomposition (i.e., it attains the cp-rank; see Corollary 3).

### Example 1

Consider the 5 × 5 case, which is the first (in terms of smallest dimension) non-trivial case. For the matrices

$$A = \begin{pmatrix}
3/4 & 1/4 & 0 & 0 & 0 \\
1/4 & 1/2 & 1/4 & 0 & 0 \\
0 & 1/4 & 1/2 & 1/4 & 0 \\
0 & 0 & 1/4 & 1/2 & 1/4 \\
0 & 0 & 0 & 1/4 & 3/4
\end{pmatrix} \quad \text{and} \quad B = \begin{pmatrix}
7/9 & 2/9 & 0 & 0 & 0 \\
2/9 & 5/9 & 2/9 & 0 & 0 \\
0 & 2/9 & 7/9 & 0 & 0 \\
0 & 0 & 0 & 8/9 & 1/9 \\
0 & 0 & 0 & 1/9 & 8/9
\end{pmatrix},$$

our construction with a 0 = 0 gives A = V V T and B = W W T , where

$$V = \begin{pmatrix}
\frac{\sqrt{3}}{2} & 0 & 0 & 0 & 0 \\
\frac{1}{2\sqrt{3}} & \frac{1}{2}\sqrt{\frac{5}{3}} & 0 & 0 & 0 \\
0 & \frac{1}{2}\sqrt{\frac{3}{5}} & \frac{1}{2}\sqrt{\frac{7}{5}} & 0 & 0 \\
0 & 0 & \frac{1}{2}\sqrt{\frac{5}{7}} & \frac{3}{2\sqrt{7}} & 0 \\
0 & 0 & 0 & \frac{\sqrt{7}}{6} & \frac{\sqrt{5}}{3}
\end{pmatrix} \quad \text{and} \quad W = \begin{pmatrix}
\frac{\sqrt{7}}{3} & 0 & 0 & 0 & 0 \\
\frac{2}{3\sqrt{7}} & \frac{1}{3}\sqrt{\frac{31}{7}} & 0 & 0 & 0 \\
0 & \frac{2}{3}\sqrt{\frac{7}{31}} & \sqrt{\frac{21}{31}} & 0 & 0 \\
0 & 0 & 0 & \frac{2}{3}\sqrt{2} & 0 \\
0 & 0 & 0 & \frac{1}{6\sqrt{2}} & \frac{1}{2}\sqrt{\frac{7}{2}}
\end{pmatrix}.$$

Therefore, the matrices $A$ and $B$ are completely positive. Note that $V$ and $W$ should be $5 \times 6$ matrices; however, the selection of $a_0 = 0$ forces $v_0$ to be the zero vector, and as such the first column of both $V$ and $W$ is all zeroes and so can be omitted. For this reason, choosing $a_0 = 0$ often leads to a much simpler decomposition.

It is important to emphasise here that a decomposition proving that a matrix $A$ is completely positive is in general not unique. In particular, different choices of $a_0$ can lead to different decompositions, assuming they are still well defined. For example, if we had instead chosen $a_0 = 3/4$, the matrix

$$\tilde{W} = \begin{pmatrix}
\frac{3}{4} & \frac{\sqrt{31}}{12} & 0 & 0 & 0 & 0 \\
0 & \frac{8}{3\sqrt{31}} & \frac{1}{3}\sqrt{\frac{91}{31}} & 0 & 0 & 0 \\
0 & 0 & \frac{2}{3}\sqrt{\frac{31}{91}} & \sqrt{\frac{57}{91}} & 0 & 0 \\
0 & 0 & 0 & 0 & \frac{2}{3}\sqrt{2} & 0 \\
0 & 0 & 0 & 0 & \frac{1}{6\sqrt{2}} & \frac{1}{2}\sqrt{\frac{7}{2}}
\end{pmatrix}$$

works in the decomposition of $B$.
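As a sanity check (ours), one can verify numerically that this alternate factor indeed reproduces $B$:

```python
import numpy as np

B = np.array([[7/9, 2/9, 0, 0, 0],
              [2/9, 5/9, 2/9, 0, 0],
              [0, 2/9, 7/9, 0, 0],
              [0, 0, 0, 8/9, 1/9],
              [0, 0, 0, 1/9, 8/9]])

# The factor obtained with a_0 = 3/4 (first column is v_0).
Wt = np.array([
    [3/4, np.sqrt(31)/12, 0, 0, 0, 0],
    [0, 8/(3*np.sqrt(31)), np.sqrt(91/31)/3, 0, 0, 0],
    [0, 0, (2/3)*np.sqrt(31/91), np.sqrt(57/91), 0, 0],
    [0, 0, 0, 0, (2/3)*np.sqrt(2), 0],
    [0, 0, 0, 0, 1/(6*np.sqrt(2)), np.sqrt(7/2)/2]])
print(np.allclose(Wt @ Wt.T, B))  # True
```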

If the given matrix is in block form but our decomposition does not work, we may employ the technique illustrated in the following example: treating each block separately.

### Example 2

Consider the matrix

$$C = \begin{pmatrix}
1 & 0 & 0 & 0 & 0 \\
0 & 1/2 & 1/2 & 0 & 0 \\
0 & 1/2 & 1/2 & 0 & 0 \\
0 & 0 & 0 & 1/2 & 1/2 \\
0 & 0 & 0 & 1/2 & 1/2
\end{pmatrix}.$$

Since we have $b_1 = 0$, this gives $(v_1)_2 = 0$. Therefore, we also have $(v_3)_3 = \sqrt{a_3 - b_2^2/a_2} = \sqrt{1/2 - 1/2} = 0$. Hence, regardless of our choice of $a_0$, the component $(v_3)_4$ is never well defined. To get around this fact, consider $C$ as the block matrix:

$$C = \begin{pmatrix} C_1 & 0_{3,2} \\ 0_{2,3} & C_2 \end{pmatrix},$$

where $0_{n,m}$ denotes the $n \times m$ all-zeros matrix and

$$C_1 = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1/2 & 1/2 \\ 0 & 1/2 & 1/2 \end{pmatrix} \quad \text{and} \quad C_2 = \begin{pmatrix} 1/2 & 1/2 \\ 1/2 & 1/2 \end{pmatrix}.$$

In the case of matrices C 1 and C 2 , we have no issues with decomposing. By choosing a 0 = 0 , we obtain

$$V_1 = \begin{pmatrix} 1 & 0 & 0 \\ 0 & \frac{1}{\sqrt{2}} & 0 \\ 0 & \frac{1}{\sqrt{2}} & 0 \end{pmatrix} \quad \text{and} \quad V_2 = \begin{pmatrix} \frac{1}{\sqrt{2}} & 0 \\ \frac{1}{\sqrt{2}} & 0 \end{pmatrix},$$

where C 1 = V 1 V 1 T and C 2 = V 2 V 2 T . Therefore,

$$V = \begin{pmatrix} V_1 & 0_{3,2} \\ 0_{2,3} & V_2 \end{pmatrix} = \begin{pmatrix}
1 & 0 & 0 & 0 & 0 \\
0 & \frac{1}{\sqrt{2}} & 0 & 0 & 0 \\
0 & \frac{1}{\sqrt{2}} & 0 & 0 & 0 \\
0 & 0 & 0 & \frac{1}{\sqrt{2}} & 0 \\
0 & 0 & 0 & \frac{1}{\sqrt{2}} & 0
\end{pmatrix},$$

where $C = V V^T$, and hence, $C$ is completely positive.
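The block technique is easy to verify numerically; a sketch (ours) assembling the direct sum of the two factors:

```python
import numpy as np

# Factors of the two diagonal blocks C_1 and C_2, as computed above.
V1 = np.array([[1, 0, 0],
               [0, 1/np.sqrt(2), 0],
               [0, 1/np.sqrt(2), 0]])
V2 = np.array([[1/np.sqrt(2), 0],
               [1/np.sqrt(2), 0]])

# Direct sum V = V_1 (+) V_2.
V = np.zeros((5, 5))
V[:3, :3], V[3:, 3:] = V1, V2

C = np.zeros((5, 5))
C[0, 0] = 1
C[1:3, 1:3] = C[3:5, 3:5] = 0.5
print(np.allclose(V @ V.T, C))  # True
```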

The decomposition given by equation (2) leads to the following result, which includes tridiagonal doubly stochastic positive definite matrices. Corollary 4.11 of [7] (doubly non-negative tridiagonal matrices are completely positive) encompasses this result, but its proof relies on deletion of a leaf from a connected acyclic graph; our method of proof is more direct, does not involve graph theory, and relies solely on equation (2) and observing the leading principal minors of the matrix.

### Proposition 3

If A is a symmetric tridiagonal positive definite entrywise non-negative matrix, then A is completely positive.

### Proof

Sylvester’s criterion tells us that for a real symmetric matrix A , positive definiteness is equivalent to all leading principal minors of A being positive.

Note that all of the square roots appearing in the entries of equation (2), taken with $a_0 = 0$, being well defined amounts precisely to all leading principal minors being positive. Indeed, taking $a_0 = 0$ in the construction of equation (2), we find the following. For $v_1$ to be well defined, we need $a_1 > 0$, which is the $1 \times 1$ leading principal minor.

For $v_2$ to be well defined, we need $a_2 - \dfrac{b_1^2}{a_1} > 0$, which is equivalent to $a_1 a_2 - b_1^2 > 0$, which is the $2 \times 2$ leading principal minor.

For $v_3$ to be well defined, we need $a_3 - \dfrac{b_2^2}{a_2 - \frac{b_1^2}{a_1}} > 0$, which is equivalent to $a_1 a_2 a_3 - a_3 b_1^2 - a_1 b_2^2 > 0$, which is the $3 \times 3$ leading principal minor.

Continuing in this manner, the result follows.□
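Sylvester's criterion is also easy to apply numerically; a small sketch (the helper name is ours):

```python
import numpy as np

def leading_principal_minors(A):
    """det(A_1), ..., det(A_n) for the upper-left i x i submatrices A_i."""
    return [np.linalg.det(A[:i, :i]) for i in range(1, A.shape[0] + 1)]

# A symmetric tridiagonal doubly stochastic matrix.
A = np.array([[3/4, 1/4, 0], [1/4, 1/2, 1/4], [0, 1/4, 3/4]])
print(all(m > 0 for m in leading_principal_minors(A)))  # True: positive definite
```

By the proposition above, the matrix in this sketch is therefore completely positive.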

The construction of [23] provides a method to construct a symmetric doubly stochastic matrix with prescribed eigenvalues; however, it becomes trivial (most eigenvalues need to be 1) if we further restrict to tridiagonal matrices. The positive definiteness of tridiagonal matrices was considered in [2], which also makes use of chain sequences:

### Definition 2

Let $k$ be a positive integer. A sequence $\{\alpha_k\}_{k>0}$ is called a (positive) chain sequence if there exists a parameter sequence $\{g_k\}_{k \geq 0}$ such that

$$\alpha_k = g_k (1 - g_{k-1}),$$

with $0 \leq g_0 < 1$ and $0 < g_k < 1$ for $k > 0$.

In Theorem 1, we take advantage of the doubly stochastic structure of the matrix $A$ to efficiently determine the existence of a unique tridiagonal doubly stochastic matrix with prescribed diagonal entries, rather than prescribed eigenvalues. We then use Sylvester's criterion, the Wall–Wetzel theorem ([2, Theorem 3.3], [29]), and our construction presented in equation (2) to show the equivalence of positive definiteness, complete positivity, and a set of inequalities involving the principal minors of the given matrix.

### Theorem 1

Let A be a tridiagonal doubly stochastic matrix of the form

$$A = \begin{pmatrix}
a_1 & b_1 & & & \\
b_1 & a_2 & b_2 & & \\
& \ddots & \ddots & \ddots & \\
& & b_{n-2} & a_{n-1} & b_{n-1} \\
& & & b_{n-1} & a_n
\end{pmatrix}.$$

Denote $a = (a_1, a_2, \ldots, a_{n-1}) \in \mathbb{R}^{n-1}$. Then $A$ is uniquely determined by the vector $a$ if and only if $a$ fulfills equations (3) and (4):

(3) $0 \leq a_i \leq 1$

for all $i = 1, 2, \ldots, n-1$, and

(4) $0 < (a_1 + a_3 + \cdots + a_{2k-1}) - (a_2 + a_4 + a_6 + \cdots + a_{2k}) < 1$

for all $k = 1, 2, \ldots, \lfloor n/2 \rfloor$.

Denote $b = (b_1, b_2, \ldots, b_{n-1}) \in \mathbb{R}^{n-1}$. Then $A$ is uniquely determined by the vector $b$ if and only if $b$ fulfills equation (5):

(5) $b_i = \begin{cases} 1 - a_1 + a_2 - a_3 + \cdots - a_i & \text{if } i \text{ is odd} \\ a_1 - a_2 + a_3 - \cdots - a_i & \text{if } i \text{ is even} \end{cases}$

for all $i = 1, 2, 3, \ldots, n-1$.

Moreover, the following are equivalent:

(a) $A$ is positive definite;

(b) $A$ is completely positive;

(c) $a_i > b_{i-1}^2 \dfrac{\det(A_{i-2})}{\det(A_{i-1})}$

for all $i = 2, 3, \ldots, n$, where $A_i$ is the upper-left $i \times i$ submatrix of $A$ (with $A_n = A$ and $\det(A_0) = 1$).

### Proof

An $n \times n$ symmetric tridiagonal doubly stochastic matrix is uniquely determined by its diagonal entries $a = (a_1, a_2, \ldots, a_{n-1})$: a given vector $a \in \mathbb{R}^{n-1}$ determines a tridiagonal doubly stochastic matrix if and only if both equations (3) and (4) hold. Indeed, since $b_i \geq 0$ for all $i = 1, 2, \ldots, n-1$, it can easily be seen that equation (5) is equivalent to equation (4), while equation (3) is a necessary condition for the matrix to be doubly stochastic.

Next, we note that the determinant of $A_n$ satisfies the three-term recurrence relation

(6) $\det(A_i) = a_i \det(A_{i-1}) - b_{i-1}^2 \det(A_{i-2})$

for all $i = 2, 3, \ldots, n$, where $\det(A_0)$ is defined to be $1$. Sylvester's criterion states that $A$ is positive definite if and only if $\det(A_i) > 0$ for all $i$, from which it follows via (6) that

(7) $a_i > b_{i-1}^2 \dfrac{\det(A_{i-2})}{\det(A_{i-1})}$

for $i = 2, 3, \ldots, n$ if and only if $A$ is positive definite.

We now consider the sequence

$$\alpha_k = \frac{b_k^2}{a_k a_{k+1}},$$

for $k = 1, 2, \ldots, n-1$, which, assuming entrywise non-negativity of $A$, is a chain sequence with $g_0 = 0$ and

$$g_k = \frac{b_k^2}{a_k a_{k+1}} \cdot \frac{1}{1 - g_{k-1}}$$

for $k = 1, 2, \ldots, n-1$. Note that $0 < g_k < 1$ for $k = 1, 2, \ldots, n-1$. Indeed, it is clear that $g_1, \ldots, g_{n-1}$ are positive, and to see that they are less than one, we note that the inequality $g_1 < 1$ is equivalent to $\det \begin{pmatrix} a_1 & b_1 \\ b_1 & a_2 \end{pmatrix} = a_1 a_2 - b_1^2 > 0$, the inequality $g_2 < 1$ is equivalent to $\det \begin{pmatrix} a_1 & b_1 & 0 \\ b_1 & a_2 & b_2 \\ 0 & b_2 & a_3 \end{pmatrix} = a_1 a_2 a_3 - a_1 b_2^2 - a_3 b_1^2 > 0$, and in general, $g_k < 1$ is equivalent to $\det(A_{k+1}) > 0$.

In our algorithm to construct vectors $v_i$ with components given by equation (2), we require all radicands to be positive. Letting $a_0 = 0$, the radicand in $(v_k)_k$ is

$$a_k - \frac{b_{k-1}^2}{a_{k-1}(1 - g_{k-2})} = a_k (1 - g_{k-1})$$

for $k = 2, 3, \ldots, n$. The Wall–Wetzel theorem [2,29] states that a real symmetric tridiagonal matrix with positive diagonal entries is positive definite if and only if $\left\{ \frac{b_k^2}{a_k a_{k+1}} \right\}_{k=1}^{n-1}$ is a chain sequence. Note that a matrix cannot be positive definite if at least one diagonal entry equals $0$. This implies that the radicand in $(v_k)_k$ is positive for all $k$. Our construction therefore provides well defined vectors $v_k$ and therefore shows that $A$ is completely positive. Thus, we have (a) $\Rightarrow$ (b); that is, positive definiteness implies complete positivity.

It is clear that (b) implies (a). Thus, the equivalence of (a), (b), and (c) follows.□
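The chain-sequence parameters $g_k$ are cheap to compute; a sketch (ours) for the matrix $A$ of Example 1, whose parameters all land in $(0, 1)$, consistent with positive definiteness:

```python
import numpy as np

def chain_parameters(a, b):
    """Parameters g_k for alpha_k = b_k^2/(a_k a_{k+1}), starting from g_0 = 0;
    the matrix is positive definite iff every g_k lies in (0, 1)."""
    g = [0.0]
    for k in range(len(b)):
        alpha = b[k] ** 2 / (a[k] * a[k + 1])
        g.append(alpha / (1 - g[-1]))   # g_k = alpha_k / (1 - g_{k-1})
    return g[1:]

a = [3/4, 1/2, 1/2, 1/2, 3/4]   # diagonal of Example 1's matrix A
b = [1/4, 1/4, 1/4, 1/4]        # off-diagonal of Example 1's matrix A
g = chain_parameters(a, b)
print(all(0 < gk < 1 for gk in g))  # True
```

For these entries the parameters work out to $g_1 = 1/6$, $g_2 = 3/10$, $g_3 = 5/14$, and $g_4 = 7/27$.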

Note that $A_i$ only contains $a_1, a_2, \ldots, a_i$ for $i = 1, 2, \ldots, n$, so (7) in the aforementioned proof gives a condition (namely, a lower bound) that each $a_i$ has to satisfy and that depends only on the entries $a_1, a_2, \ldots, a_{i-1}$. For example,

$$a_4 > b_3^2 \frac{\det(A_2)}{\det(A_3)} = (1 - a_1 + a_2 - a_3)^2 \frac{\det \begin{pmatrix} a_1 & 1 - a_1 \\ 1 - a_1 & a_2 \end{pmatrix}}{\det \begin{pmatrix} a_1 & 1 - a_1 & 0 \\ 1 - a_1 & a_2 & a_1 - a_2 \\ 0 & a_1 - a_2 & a_3 \end{pmatrix}},$$

which only contains $a_1$, $a_2$, and $a_3$ on the right-hand side.
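This lower bound can be evaluated directly; a sketch (the helper name is ours, with the $b_i$ obtained from equation (5)):

```python
import numpy as np

def a4_lower_bound(a1, a2, a3):
    """Right-hand side of (7) for i = 4, with b_i given by equation (5)."""
    b1, b2, b3 = 1 - a1, a1 - a2, 1 - a1 + a2 - a3
    A2 = np.array([[a1, b1], [b1, a2]])
    A3 = np.array([[a1, b1, 0], [b1, a2, b2], [0, b2, a3]])
    return b3 ** 2 * np.linalg.det(A2) / np.linalg.det(A3)

# First three diagonal entries of Example 1's matrix A: the bound is
# 5/28 ~ 0.179, which a_4 = 1/2 comfortably exceeds.
print(a4_lower_bound(3/4, 1/2, 1/2))
```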

### Definition 3

Let $A$ be an $n \times n$ matrix. $A$ is said to be reducible if it can be transformed, via simultaneous row and column permutations, to a block upper triangular matrix with square diagonal blocks of size smaller than $n$. $A$ is irreducible if it is not reducible.

Note that in the context of tridiagonal doubly stochastic matrices, irreducibility is equivalent to $b_i > 0$ for all $i = 1, \ldots, n-1$, i.e., to $A$ not being expressible as the direct sum of smaller tridiagonal doubly stochastic matrices. When considering whether a tridiagonal doubly stochastic matrix $A$ is completely positive, we may assume $A$ is irreducible. Indeed, if $A$ were a direct sum of smaller doubly stochastic matrices – implying that some $b_i = 0$ – we could consider these smaller doubly stochastic matrices separately. The $V$ corresponding to $A$ in the decomposition would have the same direct sum structure: it would be the direct sum of the $V$'s corresponding to the smaller doubly stochastic matrices. If some $b_i = 0$, then the corresponding $v_i$ has only one non-zero element. $B$ in Example 1 is a direct sum of two doubly stochastic matrices and $C$ in Example 2 is a direct sum of three doubly stochastic matrices.

It is clear that if a matrix A is completely positive, then it is automatically positive semidefinite. Taussky’s theorem [28, Theorem II] allows us to use the eigenvalues of a given tridiagonal doubly stochastic matrix to characterize a partial converse statement.

### Theorem 2

(Taussky’s theorem) Let A be an n × n irreducible matrix. An eigenvalue of A cannot lie on the boundary of a Gershgorin disk unless it lies on the boundary of every Gershgorin disk.

Equivalently, Taussky’s theorem states that if A is an irreducible, diagonally dominant matrix with at least one inequality of the diagonal dominance being strict (in the context of tridiagonal doubly stochastic matrices, this means that (1) holds with strict inequality for at least one i ), then A is non-singular. This is in fact the original formulation of the theorem in [28].

Recall that if a matrix A is symmetric, diagonally dominant, and all its diagonal entries are non-negative, then A is positive semidefinite.

### Proposition 4

Let $A$ be an $n \times n$ irreducible tridiagonal doubly stochastic matrix. If $n \geq 3$ and $A$ is diagonally dominant, then $A$ is positive definite (equivalently, $A$ is non-singular).

### Proof

If $A$ is diagonally dominant, then $b_{i-1} + b_i \leq 0.5$ for all $i = 1, 2, \ldots, n$, with $b_0 = b_n = 0$. Suppose $0$ is an eigenvalue of $A$. Then, by the Gershgorin circle theorem, there exists some $i$ such that $a_i = b_{i-1} + b_i$, i.e., $b_{i-1} + b_i = 0.5$, meaning that $0$ is an eigenvalue on the boundary of a disk, and so by Taussky's theorem, every disk must have boundary at $0$; that is, $b_{i-1} + b_i = 0.5$ for all $i$. Taking $i = 1$ gives $b_1 = 0.5$, and then taking $i = 2$ forces $b_2 = 0$, which contradicts the assumption that $A$ is irreducible.□

Note that the tridiagonal doubly stochastic matrix $\begin{pmatrix} 1/2 & 1/2 \\ 1/2 & 1/2 \end{pmatrix}$ is the only $2 \times 2$ tridiagonal doubly stochastic matrix that is positive semidefinite without being positive definite (i.e., it is the only $2 \times 2$ positive semidefinite tridiagonal doubly stochastic matrix with zero as an eigenvalue). One can see this from (1) and the equivalent formulation of Taussky's theorem. One can verify that it is completely positive with $V = \frac{1}{\sqrt{2}} (1\ \ 1)^T$.

A number of corollaries follow from Proposition 4.

### Corollary 1

Let $A$ be an $n \times n$ tridiagonal doubly stochastic matrix with $n \geq 3$. If $A$ is diagonally dominant, with zero as an eigenvalue, then $A$ must be reducible with at least one block of the form $\begin{pmatrix} 1/2 & 1/2 \\ 1/2 & 1/2 \end{pmatrix}$.

The following corollary is a more general statement than our previous Proposition 3. This result appears to be known (e.g., it is mentioned in [15, Section 3]), yet we are unaware of a proof in the literature. Given some subtleties in, and the length of, the proof, we have provided the details herein, which culminate in the following corollary. We note that [5, Example 2] states that all tridiagonal doubly stochastic matrices are completely positive, which is not true in general without the assumption of diagonal dominance.

### Corollary 2

Let A be an n × n tridiagonal doubly stochastic matrix. If A is diagonally dominant, then A is completely positive.

### Proof

From the discussion following Definition 3, we can assume that $A$ is irreducible. Proposition 4 in fact holds for $n \geq 2$ except for the special case of the $2 \times 2$ matrix $A = \begin{pmatrix} 1/2 & 1/2 \\ 1/2 & 1/2 \end{pmatrix}$. However, this matrix is completely positive with $V = \frac{1}{\sqrt{2}} (1\ \ 1)^T$. In all other cases, it follows from Proposition 4 that $A$ is positive definite, and hence, $A$ is completely positive due to Theorem 1.□

Proposition 3.2 of [8] states that the cp-rank of a matrix $A$ (i.e., the minimal number $k$ in Definition 1) is greater than or equal to the rank of $A$. We show that the cp-rank of any positive semidefinite irreducible tridiagonal doubly stochastic matrix is the same as its rank; equivalently, equation (2) provides a way to construct a minimal rank-one decomposition.

### Corollary 3

The cp-rank of any positive semidefinite irreducible tridiagonal doubly stochastic matrix is equal to its rank.

### Proof

Let $A$ be an $n \times n$ irreducible tridiagonal doubly stochastic matrix. We can let $a_0 = 0$ in the construction of the $v_i$ in equation (2) as long as the matrix $A$ is positive definite, by the proof of Proposition 3. Therefore, the number of non-zero summands in equation (2) is at most $n$.

An irreducible tridiagonal doubly stochastic matrix is singular if and only if it is the $2 \times 2$ matrix with all entries the same (equal to $1/2$). Indeed, singularity means that the rows are linearly dependent, but this is impossible for $n \geq 3$ since irreducibility of a tridiagonal doubly stochastic matrix is equivalent to $b_i > 0$ for all $i = 1, \ldots, n-1$ (and thus the rank of such a matrix is $n$). As previously stated, in the case of the $2 \times 2$ all-$1/2$ matrix (which has rank $1$), our decomposition gives a $2 \times 1$ matrix $V = \frac{1}{\sqrt{2}} (1\ \ 1)^T$ (equivalently, a single vector $v = \frac{1}{\sqrt{2}} (1\ \ 1)^T$). For $n \geq 3$, an irreducible tridiagonal doubly stochastic matrix has rank $n$, and our decomposition gives $n$ vectors $v_1, \ldots, v_n$.□
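As a numerical illustration (ours): the matrix $A$ of Example 1 is irreducible and positive definite, so its rank, and hence its cp-rank, equals $n = 5$, matching the five vectors produced by the construction with $a_0 = 0$:

```python
import numpy as np

# Example 1's matrix A: irreducible (all b_i > 0) and positive definite.
A = np.array([[3/4, 1/4, 0, 0, 0],
              [1/4, 1/2, 1/4, 0, 0],
              [0, 1/4, 1/2, 1/4, 0],
              [0, 0, 1/4, 1/2, 1/4],
              [0, 0, 0, 1/4, 3/4]])
print(np.linalg.matrix_rank(A))  # 5
```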

## 3 Pentadiagonal matrices

Let $A$ be an $n \times n$ symmetric pentadiagonal matrix:

$$A = \begin{pmatrix}
a_1 & b_1 & c_1 & & & & \\
b_1 & a_2 & b_2 & c_2 & & & \\
c_1 & b_2 & a_3 & b_3 & c_3 & & \\
& \ddots & \ddots & \ddots & \ddots & \ddots & \\
& c_{n-4} & b_{n-3} & a_{n-2} & b_{n-2} & c_{n-2} \\
& & c_{n-3} & b_{n-2} & a_{n-1} & b_{n-1} \\
& & & c_{n-2} & b_{n-1} & a_n
\end{pmatrix}.$$

We are interested again in the setting where $A$ is entrywise non-negative. If $A$ is also doubly stochastic, we have $a_i = 1 - (b_{i-1} + b_i + c_{i-2} + c_i)$ for $i = 1, 2, \ldots, n$, where $b_k = 0$ for $k \leq 0$ or $k \geq n$, and $c_k = 0$ for $k \leq 0$ or $k \geq n-1$.

Unlike in the tridiagonal matrix setting, the property of being doubly stochastic does not immediately imply symmetry, and thus, we assume this extra condition as a hypothesis.

### 3.1 Basic properties of symmetric pentadiagonal doubly stochastic matrices

Many of the arguments from Section 2.1 carry through into the pentadiagonal setting. As in the tridiagonal case, the eigenvalues of a symmetric pentadiagonal doubly stochastic matrix are bounded between $-1$ and $1$.

Again, as in the case of tridiagonal doubly stochastic matrices (Proposition 1), any value in $[-1, 1]$ can be realised as an eigenvalue of an $n \times n$ symmetric pentadiagonal doubly stochastic matrix. That is, the statement of Proposition 1 reads the same when "tridiagonal" is replaced with "symmetric pentadiagonal." One can, if desired, use the bona fide pentadiagonal matrix

$$A = \begin{pmatrix} a & b & b \\ b & a & b \\ b & b & a \end{pmatrix}$$

for the $n \geq 3$ cases.

### 3.2 Complete positivity

We now provide a construction similar to that for tridiagonal doubly stochastic matrices to provide a sufficient condition for when a symmetric pentadiagonal doubly stochastic matrix $A$ is completely positive. Define the set $\{v_i\}_{i=-1}^{n}$ of cardinality $n+2$, whose elements are $n$-dimensional vectors where the $j$th component of $v_i$, denoted $(v_i)_j$ (where $j = 1, \ldots, n$), is recursively defined by

(8) $(v_i)_j = \begin{cases} \sqrt{a_i - \left[ ((v_{i-1})_i)^2 + ((v_{i-2})_i)^2 \right]} & j = i \\[4pt] \dfrac{b_i - (v_{i-1})_j (v_{i-1})_{j-1}}{(v_i)_i} & j = i + 1 \\[4pt] c_i / (v_i)_i & j = i + 2 \\ 0 & \text{otherwise} \end{cases}$

with initial conditions $v_{-1} = (a_{-1}\ \ 0\ \ \cdots\ \ 0)^T$ and $v_0 = (a_0\ \ b_0\ \ 0\ \ \cdots\ \ 0)^T$. This construction yields

$$v_1 = \begin{pmatrix} \sqrt{a_1 - (a_0^2 + a_{-1}^2)} & \dfrac{b_1 - b_0 a_0}{\sqrt{a_1 - (a_0^2 + a_{-1}^2)}} & \dfrac{c_1}{\sqrt{a_1 - (a_0^2 + a_{-1}^2)}} & 0 & \cdots & 0 \end{pmatrix}^T$$

$$v_2 = \begin{pmatrix} 0 & \sqrt{a_2 - \left[ \dfrac{(b_1 - b_0 a_0)^2}{a_1 - (a_0^2 + a_{-1}^2)} + b_0^2 \right]} & \dfrac{b_2 - \dfrac{c_1 (b_1 - b_0 a_0)}{a_1 - (a_0^2 + a_{-1}^2)}}{\sqrt{a_2 - \left[ \dfrac{(b_1 - b_0 a_0)^2}{a_1 - (a_0^2 + a_{-1}^2)} + b_0^2 \right]}} & \dfrac{c_2}{\sqrt{a_2 - \left[ \dfrac{(b_1 - b_0 a_0)^2}{a_1 - (a_0^2 + a_{-1}^2)} + b_0^2 \right]}} & 0 & \cdots & 0 \end{pmatrix}^T, \quad \text{etc.}$$

Similar to the tridiagonal case, the constants $a_{-1}$, $a_0$, and $b_0$ are taken to be non-negative numbers, with the caveat that certain collections of initial values lead to the decomposition being ill defined. In fact, if we let $a_{-1} = b_0 = 0$, so that $v_{-1}$ is the all-zeros vector and $v_0 = (a_0\ \ 0\ \ \cdots\ \ 0)^T$, then the aforementioned construction, applied to a tridiagonal matrix (all $c_i = 0$), reduces to the construction for tridiagonal matrices.
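The recursion (8) can be implemented in the same way as the tridiagonal construction; a sketch (ours, in NumPy), run here on the $5 \times 5$ matrix of Example 3 below with the non-zero initial condition $a_0 = 1/2$, $b_0 = 1/4$:

```python
import numpy as np

def pentadiagonal_cp_vectors(A, am1=0.0, a0=0.0, b0=0.0):
    """Vectors v_{-1}, ..., v_n of equation (8) for symmetric pentadiagonal A,
    with initial constants a_{-1}, a_0, b_0 (all defaulting to zero)."""
    n = A.shape[0]
    a, b, c = np.diag(A), np.diag(A, 1), np.diag(A, 2)
    vm1 = np.zeros(n); vm1[0] = am1           # v_{-1} = (a_{-1}, 0, ..., 0)^T
    v0 = np.zeros(n); v0[0], v0[1] = a0, b0   # v_0 = (a_0, b_0, 0, ..., 0)^T
    vs = [vm1, v0]
    for i in range(1, n + 1):
        v = np.zeros(n)
        rad = a[i-1] - vs[-1][i-1]**2 - vs[-2][i-1]**2
        if rad < 0:
            raise ValueError("construction ill defined")
        v[i-1] = np.sqrt(rad)
        if i < n:        # (v_i)_{i+1} = (b_i - (v_{i-1})_{i+1}(v_{i-1})_i)/(v_i)_i
            v[i] = (b[i-1] - vs[-1][i] * vs[-1][i-1]) / v[i-1]
        if i < n - 1:    # (v_i)_{i+2} = c_i / (v_i)_i
            v[i+1] = c[i-1] / v[i-1]
        vs.append(v)
    return vs

A = np.array([[3/4, 1/8, 1/8, 0, 0],
              [1/8, 3/4, 0, 1/8, 0],
              [1/8, 0, 1/2, 13/40, 1/20],
              [0, 1/8, 13/40, 1/2, 1/20],
              [0, 0, 1/20, 1/20, 9/10]])
vs = pentadiagonal_cp_vectors(A, a0=1/2, b0=1/4)
print(np.allclose(sum(np.outer(v, v) for v in vs), A))  # True
```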

### Proposition 5

Let $A$ be a symmetric pentadiagonal matrix. If the $v_i$ as defined in equation (8) are well defined, then $A = \sum_{i=-1}^{n} v_i v_i^T$. If the entries of each $v_i$ are non-negative numbers, then $A$ is completely positive.

### Proof

The proof is similar to the tridiagonal case. Consider a symmetric pentadiagonal $n \times n$ matrix $A$ such that the vectors in equation (8) are well defined. Let $V_i = v_i v_i^T$ for $i = -1, 0, \ldots, n$ and $\tilde{A} = \sum_{i=-1}^{n} V_i$. We wish to show that $\tilde{A} = A$. From the definition of the $v_i$ given in equation (8), each $V_i$ is symmetric and pentadiagonal with only up to nine non-zero entries, and so $\tilde{A}$ itself is symmetric and pentadiagonal.

Now, consider a component $\tilde{a}_{j,j+1}$ of $\tilde{A}$, where $j = 1, 2, \ldots, n-1$. The only $V_i$ that have a non-zero entry in the $(j, j+1)$th component are $V_{j-1}$ and $V_j$, as $v_{j-1}$ and $v_j$ are the only vectors with both the $j$th and $(j+1)$th components being non-zero. The $(j, j+1)$th component of $V_{j-1} + V_j$ is $(v_{j-1})_j (v_{j-1})_{j+1} + (v_j)_j (v_j)_{j+1}$, which, after simplifying, is in fact $b_j$, and so $\tilde{a}_{j,j+1} = b_j$. By symmetry, we also have $\tilde{a}_{j+1,j} = b_j$.

Now, consider a component $\tilde{a}_{j,j+2}$ of $\tilde{A}$, where $j = 1, 2, \ldots, n-2$. The only $V_i$ that has a non-zero entry in the $(j, j+2)$th component is $V_j$, as $v_j$ is the only vector with both the $j$th and $(j+2)$th components being non-zero. The value of $(v_j)_j$ is the same as the denominator of $(v_j)_{j+2}$, and so we simply obtain $\tilde{a}_{j,j+2} = c_j$. By symmetry, we also have $\tilde{a}_{j+2,j} = c_j$.

Now consider a component on the diagonal of $\tilde{A}$: $\tilde{a}_{jj}$, where $j = 1, 2, \ldots, n$. The only $V_i$ that have non-zero entries in the $(j, j)$th component are $V_{j-2}$, $V_{j-1}$, and $V_j$, and the sum of the respective values is precisely $a_j$, so $\tilde{a}_{jj} = a_j$ for $j = 1, 2, \ldots, n$. Therefore, $A = \tilde{A} = \sum_{i=-1}^{n} v_i v_i^T$. If the entries of each $v_i$ are non-negative numbers, then $A$ is completely positive.□

When using equation (8) to find a decomposition of a pentadiagonal matrix, it is simplest to choose the initial vectors $v_{-1}$ and $v_0$ to both be the zero vector. However, Example 3 shows that it is sometimes necessary to choose non-zero initial conditions to prove that a given matrix is completely positive.
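The construction can be carried out numerically. The sketch below (in Python with NumPy) implements the recurrence as reconstructed from the proof of Proposition 5; the function name and interface are our own, not from the paper, and the formulas used are the entrywise relations $(v_i)_i = \sqrt{a_i - ((v_{i-1})_i)^2 - ((v_{i-2})_i)^2}$, $(v_i)_{i+1} = (b_i - (v_{i-1})_{i+1}(v_{i-1})_i)/(v_i)_i$, and $(v_i)_{i+2} = c_i/(v_i)_i$.

```python
import numpy as np

def pentadiagonal_decomposition(A, v_minus1=None, v_0=None, tol=1e-12):
    """Attempt the decomposition A = sum_{i=-1}^{n} v_i v_i^T for a
    symmetric pentadiagonal matrix A via the recurrence (cf. eq. (8)):
        (v_i)_i     = sqrt(a_i - ((v_{i-1})_i)^2 - ((v_{i-2})_i)^2),
        (v_i)_{i+1} = (b_i - (v_{i-1})_{i+1} (v_{i-1})_i) / (v_i)_i,
        (v_i)_{i+2} = c_i / (v_i)_i.
    The initial vectors v_{-1} and v_0 (supported on the first one and two
    coordinates, respectively) default to zero vectors.  Returns the
    n x (n+2) matrix V = [v_{-1}, v_0, v_1, ..., v_n] with A = V V^T,
    or raises ValueError if the construction is ill defined."""
    n = A.shape[0]
    V = np.zeros((n, n + 2))            # column i + 1 holds v_i
    if v_minus1 is not None:
        V[:, 0] = v_minus1
    if v_0 is not None:
        V[:, 1] = v_0
    for i in range(1, n + 1):
        j = i - 1                        # 0-based row index of a_i
        d = A[j, j] - V[j, i] ** 2 - V[j, i - 1] ** 2
        if d < -tol:
            raise ValueError(f"ill defined: negative radicand at i = {i}")
        V[j, i + 1] = np.sqrt(max(d, 0.0))
        if j + 1 < n:                    # off-diagonal entry b_i = A[j, j+1]
            if V[j, i + 1] == 0.0:
                raise ValueError(f"ill defined: division by zero at i = {i}")
            V[j + 1, i + 1] = (A[j, j + 1] - V[j + 1, i] * V[j, i]) / V[j, i + 1]
        if j + 2 < n:                    # second off-diagonal entry c_i = A[j, j+2]
            V[j + 2, i + 1] = A[j, j + 2] / V[j, i + 1]
    return V
```

Running this on the matrix of Example 3 with zero initial vectors reproduces the factor $V$ below, including the negative entry $(v_2)_3$, while passing $v_0 = (1/2, 1/4, 0, 0, 0)^T$ reproduces the entrywise non-negative factor $W$.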

### Example 3

Consider the matrix

$$A = \begin{pmatrix}
3/4 & 1/8 & 1/8 & 0 & 0 \\
1/8 & 3/4 & 0 & 1/8 & 0 \\
1/8 & 0 & 1/2 & 13/40 & 1/20 \\
0 & 1/8 & 13/40 & 1/2 & 1/20 \\
0 & 0 & 1/20 & 1/20 & 9/10
\end{pmatrix}.$$

By using equation (8) with initial vectors $v_{-1}$ and $v_0$, both taken to be the zero vector, we compute the matrix $V$ such that $A = V V^T$ to be

$$V = \begin{pmatrix}
0 & 0 & \frac{\sqrt{3}}{2} & 0 & 0 & 0 & 0 \\
0 & 0 & \frac{1}{4\sqrt{3}} & \frac{1}{4}\sqrt{\frac{35}{3}} & 0 & 0 & 0 \\
0 & 0 & \frac{1}{4\sqrt{3}} & -\frac{1}{4\sqrt{105}} & \frac{1}{2}\sqrt{\frac{67}{35}} & 0 & 0 \\
0 & 0 & 0 & \frac{1}{2}\sqrt{\frac{3}{35}} & \frac{23}{\sqrt{2345}} & \frac{1}{2}\sqrt{\frac{339}{335}} & 0 \\
0 & 0 & 0 & 0 & \frac{1}{2}\sqrt{\frac{7}{335}} & \frac{7}{2}\sqrt{\frac{3}{37855}} & \sqrt{\frac{101}{113}}
\end{pmatrix}.$$

We note that the component $(v_2)_3$ is negative, and hence this decomposition cannot be used to prove that $A$ is completely positive. It is not surprising that taking the initial conditions to be all zero does not work: if both $v_{-1}$ and $v_0$ are zero vectors, i.e., $a_{-1} = a_0 = b_0 = 0$, then $(v_2)_3 \geq 0$ is equivalent to $a_1 b_2 \geq b_1 c_1$. But in $A$, $b_1 = c_1 = 1/8$ while $b_2 = 0$, so $a_1 b_2 < b_1 c_1$.

If we instead use the initial conditions $v_{-1} = (0, 0, 0, 0, 0)^T$ and $v_0 = (1/2, 1/4, 0, 0, 0)^T$, we obtain the decomposition $A = W W^T$, where

$$W = \begin{pmatrix}
0 & \frac{1}{2} & \frac{1}{\sqrt{2}} & 0 & 0 & 0 & 0 \\
0 & \frac{1}{4} & 0 & \frac{\sqrt{11}}{4} & 0 & 0 & 0 \\
0 & 0 & \frac{1}{4\sqrt{2}} & 0 & \frac{1}{4}\sqrt{\frac{15}{2}} & 0 & 0 \\
0 & 0 & 0 & \frac{1}{2\sqrt{11}} & \frac{13}{5\sqrt{30}} & \frac{1}{10}\sqrt{\frac{4157}{165}} & 0 \\
0 & 0 & 0 & 0 & \frac{1}{5}\sqrt{\frac{2}{15}} & \frac{23}{10}\sqrt{\frac{11}{62355}} & \frac{1}{2}\sqrt{\frac{14861}{4157}}
\end{pmatrix}.$$

This decomposition shows that A is in fact completely positive.
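As a sanity check (ours, not in the paper; NumPy assumed), one can verify numerically that the non-zero columns of $W$ reproduce $A$ and are entrywise non-negative, exhibiting $A$ as a sum of six non-negative rank-one matrices:

```python
import numpy as np

s = np.sqrt
# Columns are v_0, v_1, ..., v_5 of Example 3; the zero column v_{-1}
# is omitted, since a zero column does not affect W @ W.T.
W = np.array([
    [1/2, 1/s(2),     0,            0,               0,                       0],
    [1/4, 0,          s(11)/4,      0,               0,                       0],
    [0,   1/(4*s(2)), 0,            s(15)/(4*s(2)),  0,                       0],
    [0,   0,          1/(2*s(11)),  13/(5*s(30)),    s(4157)/(10*s(165)),     0],
    [0,   0,          0,            s(2)/(5*s(15)),  23*s(11)/(10*s(62355)),  s(14861)/(2*s(4157))],
])
A = W @ W.T   # recovers the pentadiagonal matrix of Example 3
```

Since every entry of $W$ is non-negative and $W W^T = A$, this certifies complete positivity of $A$.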

### Example 4

As an analogue to Example 2, consider the matrix

$$A = \begin{pmatrix}
1/2 & 1/4 & 1/4 & 0 & 0 & 0 \\
1/4 & 1/2 & 1/4 & 0 & 0 & 0 \\
1/4 & 1/4 & 1/2 & 0 & 0 & 0 \\
0 & 0 & 0 & 1/2 & 1/2 & 0 \\
0 & 0 & 0 & 1/2 & 1/2 & 0 \\
0 & 0 & 0 & 0 & 0 & 1
\end{pmatrix}.$$

For this matrix, the construction we outline through equation (8) never gives a well defined decomposition, regardless of the choice of initial conditions. To see why this is the case, note that $c_2 = b_3 = c_3 = 0$. We may assume that the initial conditions have been chosen such that $(v_1)_1$, $(v_2)_2$, and $(v_3)_3$ are non-zero (otherwise the decomposition would already be ill defined). From this, we immediately obtain

$$(v_2)_4 = \frac{c_2}{(v_2)_2} = 0, \qquad (v_3)_4 = \frac{b_3 - (v_2)_4 (v_2)_3}{(v_3)_3} = 0, \qquad (v_3)_5 = \frac{c_3}{(v_3)_3} = 0.$$

Therefore, we can compute the following components of v 4 to be:

$$(v_4)_4 = \sqrt{a_4 - \left( ((v_3)_4)^2 + ((v_2)_4)^2 \right)} = \sqrt{a_4} = \frac{1}{\sqrt{2}}, \qquad (v_4)_5 = \frac{b_4 - (v_3)_5 (v_3)_4}{(v_4)_4} = \frac{b_4}{\sqrt{a_4}} = \frac{1}{\sqrt{2}}, \qquad (v_4)_6 = \frac{c_4}{(v_4)_4} = 0.$$

Now, all six of the components that have been calculated so far are completely independent of the initial conditions (except for the requirement that all previous components were well defined). Therefore, the vector v 5 is independent of the initial conditions. We then find that

$$(v_5)_5 = \sqrt{a_5 - \left( ((v_4)_5)^2 + ((v_3)_5)^2 \right)} = \sqrt{\frac{1}{2} - \left( \left(\frac{1}{\sqrt{2}}\right)^2 + 0 \right)} = 0.$$

Hence, $(v_5)_6$ is not well defined, as its computation requires division by $(v_5)_5 = 0$.

Similar to Example 2, we can still make use of our construction to prove that A is completely positive by considering A as the block diagonal matrix

$$A = \begin{pmatrix}
A_1 & 0_{3,2} & 0_{3,1} \\
0_{2,3} & A_2 & 0_{2,1} \\
0_{1,3} & 0_{1,2} & A_3
\end{pmatrix},$$

where

$$A_1 = \begin{pmatrix}
1/2 & 1/4 & 1/4 \\
1/4 & 1/2 & 1/4 \\
1/4 & 1/4 & 1/2
\end{pmatrix}, \qquad
A_2 = \begin{pmatrix}
1/2 & 1/2 \\
1/2 & 1/2
\end{pmatrix}, \qquad
A_3 = \begin{pmatrix} 1 \end{pmatrix}.$$

From here we can find a decomposition for the three matrices A 1 , A 2 , and A 3 separately. We find that

$$V_1 = \begin{pmatrix}
0 & 0 & \frac{1}{\sqrt{2}} & 0 & 0 \\
0 & 0 & \frac{1}{2\sqrt{2}} & \frac{\sqrt{3}}{2\sqrt{2}} & 0 \\
0 & 0 & \frac{1}{2\sqrt{2}} & \frac{1}{2\sqrt{6}} & \frac{1}{\sqrt{3}}
\end{pmatrix}, \qquad
V_2 = \begin{pmatrix}
0 & \frac{1}{\sqrt{2}} & 0 \\
0 & \frac{1}{\sqrt{2}} & 0
\end{pmatrix}, \qquad
V_3 = \begin{pmatrix} 1 \end{pmatrix},$$

where A 1 = V 1 V 1 T , A 2 = V 2 V 2 T , and A 3 = V 3 V 3 T . A decomposition for A can then be formed by creating the block diagonal matrix

$$V = \begin{pmatrix}
V_1 & 0_{3,3} & 0_{3,1} \\
0_{2,5} & V_2 & 0_{2,1} \\
0_{1,5} & 0_{1,3} & V_3
\end{pmatrix} = \begin{pmatrix}
0 & 0 & \frac{1}{\sqrt{2}} & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & \frac{1}{2\sqrt{2}} & \frac{\sqrt{3}}{2\sqrt{2}} & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & \frac{1}{2\sqrt{2}} & \frac{1}{2\sqrt{6}} & \frac{1}{\sqrt{3}} & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & \frac{1}{\sqrt{2}} & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & \frac{1}{\sqrt{2}} & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1
\end{pmatrix}$$

and noting that A = V V T . This proves that A is completely positive.

Similar to Example 1, we note that for any matrix $M$, if $M$ has columns consisting entirely of zeros, these columns can be removed from $M$ without changing the value of $M M^T$. Therefore, we can simplify $V$ to the $6 \times 5$ matrix below, rather than the $6 \times 9$ matrix above.

$$V = \begin{pmatrix}
\frac{1}{\sqrt{2}} & 0 & 0 & 0 & 0 \\
\frac{1}{2\sqrt{2}} & \frac{\sqrt{3}}{2\sqrt{2}} & 0 & 0 & 0 \\
\frac{1}{2\sqrt{2}} & \frac{1}{2\sqrt{6}} & \frac{1}{\sqrt{3}} & 0 & 0 \\
0 & 0 & 0 & \frac{1}{\sqrt{2}} & 0 \\
0 & 0 & 0 & \frac{1}{\sqrt{2}} & 0 \\
0 & 0 & 0 & 0 & 1
\end{pmatrix}.$$
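A quick numerical check (ours, using NumPy) confirms that the simplified $6 \times 5$ factor still reproduces the block diagonal matrix $A$ of this example:

```python
import numpy as np

s2, s3, s6 = np.sqrt(2.0), np.sqrt(3.0), np.sqrt(6.0)
# Simplified 6 x 5 factor from Example 4, with all zero columns removed.
V = np.array([
    [1/s2,      0,          0,    0,    0],
    [1/(2*s2),  s3/(2*s2),  0,    0,    0],
    [1/(2*s2),  1/(2*s6),   1/s3, 0,    0],
    [0,         0,          0,    1/s2, 0],
    [0,         0,          0,    1/s2, 0],
    [0,         0,          0,    0,    1],
])
A = V @ V.T   # block diagonal with blocks A_1, A_2, A_3
```

Since $V$ is entrywise non-negative, $A = V V^T$ again certifies complete positivity.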

We leave a result analogous to Proposition 3 in the setting of symmetric pentadiagonal doubly stochastic matrices as an open problem. Example 3 shows that there is a connection between the entries of $A$ and how one should choose $v_{-1}$ and $v_0$; however, the nature of this connection is not immediately clear in general. Consider the matrix $A'$, which is equal to $A$ except for the following entries:

a 11