Omar Alomari, Mohammad Abudayah and Torsten Sander

The non-negative spectrum of a digraph

Open Access
De Gruyter | Published online: February 19, 2020

Abstract

Given the adjacency matrix A of a digraph, the eigenvalues of the matrix AA^T constitute the so-called non-negative spectrum of this digraph. We investigate the relation between the structure of digraphs and their non-negative spectra and associated eigenvectors. In particular, it turns out that the non-negative spectrum of a digraph can be derived from the traditional (adjacency) spectrum of certain undirected bipartite graphs.

MSC 2010: 05C50; 15A18

1 Introduction

Spectral graph theory is a wide field of research where we study the spectral properties of matrices associated with graphs and in particular try to link them to the structural properties of those graphs. The most classic example is the adjacency matrix A(G) of a graph G, which is the 0-1-matrix reflecting the vertex adjacency relation of the graph. Other frequently studied matrices are the Laplacian matrix L(G) := D(G) – A(G) and the signless Laplacian matrix Q(G) := D(G) + A(G), where D(G) is the diagonal matrix of the respective vertex degrees. Changing the order in which the vertices of G are indexed will change each of these matrices, but only by a permutation similarity transformation. Hence, both eigenvalues and eigenvectors (when viewed as real functions on the vertex set) will remain unchanged and can be attributed to the given graph G.

Perhaps the most important advantage of the mentioned matrices is their symmetry (because the adjacency relation of an undirected graph is obviously symmetric), so the matrices are diagonalizable. Thus, the eigenspace dimensions match the multiplicities of the respective roots of the characteristic polynomial of the matrix. Moreover, the spectrum is real. Turning to digraphs, we find that the adjacency relation of a digraph is usually not symmetric. So, studying the spectral properties of its adjacency matrix may reveal interesting insights [1], but we may not rely on the mentioned advantages. As a consequence, some researchers have considered alternative matrix choices, such as the (complex valued) Hermitian adjacency matrix (see [2, 3]). In contrast, in [4] the author considers the (real) matrices Nout(D) = A(D)A(D)^T and Nin(D) = A(D)^TA(D), where A(D) is the adjacency matrix of the given digraph D. By construction, these matrices are symmetric and their spectra do not depend on the chosen vertex order.

Note that Nin and Nout have the same spectrum (for the sake of brevity, we will omit the reference to the given graph if there is no danger of confusion). Moreover, inverting the orientations of all edges in a given digraph transforms Nin into Nout, and vice versa. Hence we may restrict our attention to only one of these matrices. Following [4], let us only consider Nout. Given some digraph D, the spectrum of Nout is called the non-negative spectrum or, shortly, the N-spectrum of D. It is the multi-set of N-eigenvalues of D, which in turn are the roots of the N-characteristic polynomial ND(x) = χ(Nout(D), x) = det(Nout(D) – xI), appearing in the spectrum according to their multiplicity. In the same manner, we will speak of N-eigenvectors, N-nullity (the multiplicity of the N-eigenvalue 0) and N-integrality (meaning that the N-spectrum consists only of integers). By construction, Nout is positive semi-definite, so N-eigenvalues are always non-negative.

It seems that [4] is the first source that studies the N-spectra of digraphs in some detail. Besides deriving some general basic facts to start with, the author of [4] focuses mainly on regular digraphs. Our goal is to generally explore how N-eigenvalues and N-eigenvectors are linked to the structure of digraphs, in particular under certain transformations. In [4] a first step in this direction is made, by studying the change of the characteristic polynomial under two simple operations, namely attaching a pendant source to a source (the latter will no longer be a source after this) and attaching a pendant sink to some arbitrary vertex. Here, a source is a vertex with no incoming edges (but at least one outgoing edge) and a sink is a vertex without outgoing edges (but at least one incoming edge). Our research will start where the author left off in [4].

2 Common out-neighbor partition

Unless stated otherwise, the digraphs we study hereafter are tacitly assumed to be simple, loopless and weakly connected.

In order to deal with the Nout matrix we need to understand the meaning of its entries. Let v1, …, vn be the vertices of a given digraph D. Then it is easily seen that, for all i, j ∈ {1, …, n}, the entry found at position (i, j) of Nout is equal to the number of common out-neighbors of the vertices vi and vj, i.e. the number of vertices to which both vi and vj have an outgoing edge (cf. Proposition 2.1 in [4]). In particular, entry (i, i) counts the number of out-neighbors of vertex vi. Hence the trace of Nout is exactly the number of edges of D, which in turn is equal to the sum of all eigenvalues of Nout (counting each eigenvalue according to its multiplicity). So the N-spectrum consists only of zeroes if and only if D contains only isolated vertices.
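To make this entry-wise description concrete, the following minimal NumPy sketch (the four-vertex digraph and its edge list are invented for illustration) builds Nout = AA^T from an adjacency matrix and checks that the diagonal carries the out-degrees and that the trace equals the number of edges.

```python
import numpy as np

# Invented 4-vertex digraph with edges 0->2, 1->2, 1->3, 3->2.
edges = [(0, 2), (1, 2), (1, 3), (3, 2)]
n = 4
A = np.zeros((n, n), dtype=int)
for u, v in edges:
    A[u, v] = 1

N_out = A @ A.T   # entry (i, j) counts the common out-neighbors of i and j

# Diagonal entries are the out-degrees; the trace equals the number of edges.
assert np.array_equal(np.diag(N_out), A.sum(axis=1))
assert N_out.trace() == len(edges)
print(N_out)
```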

Let us consider the simple case of attaching a pendant source vn+1 to a source vi of D (as a result, vi is not a source any longer). Clearly, this operation changes neither any common out-neighbors nor the out-degrees of the vertices of D. So the matrix Nout(D′) of the resulting digraph D′ can be obtained as the block diagonal matrix diag(Nout(D), 11×1), where rn×m denotes the (n × m)-matrix with all entries equal to r. Consequently, ND′(x) = (x – 1)ND(x) (cf. Prop. 4.5 in [4]). Given a basis of N-eigenvectors of D (spanning ℝn), we can construct a basis of N-eigenvectors of D′ (spanning ℝn+1) by trivially embedding the given basis for D, together with the unit vector en+1.

Next, consider attaching a pendant sink vn+1 to a sink vi of D. As before, vi does not gain any common out-neighbors with other vertices. But the operation changes the number of out-neighbors of vi to one, so the i-th column of Nout changes from a zero column to the unit vector ei. Moreover, for the sink vn+1 itself we add a zero row/column to Nout. All in all, determinant expansion along the i-th column of Nout(D′) readily yields ND′(x) = (x – 1)ND(x).
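Both pendant operations are easy to verify numerically. The sketch below uses an invented three-vertex digraph and np.poly, which returns the coefficients of the monic characteristic polynomial det(xI – M); up to that sign convention it confirms that attaching a pendant sink contributes exactly the factor (x – 1).

```python
import numpy as np

# Invented digraph D with a sink (vertex 2): edges 0->1, 0->2, 1->2.
A = np.array([[0, 1, 1],
              [0, 0, 1],
              [0, 0, 0]])

# D': attach a pendant sink (new vertex 3) to the sink 2.
A2 = np.zeros((4, 4), dtype=int)
A2[:3, :3] = A
A2[2, 3] = 1

p_D  = np.poly(A @ A.T)      # N-characteristic polynomial of D
p_D2 = np.poly(A2 @ A2.T)    # N-characteristic polynomial of D'
# N_{D'}(x) = (x - 1) * N_D(x)
assert np.allclose(p_D2, np.convolve(p_D, [1.0, -1.0]))
```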

These two example operations were fairly simple. But what about even the slightest generalization, say, attaching a sink to multiple sinks? This operation indeed does change common out-neighbor relations in the digraph. Now we can ask ourselves: What is the effect on the N-spectrum and N-eigenvectors? In particular, is it possible to preserve some of the original N-eigenvectors by means of trivial embedding? In a sense, we want to be able to judge whether the effects of a somewhat “local” modification of the given digraph result in predictably “local” changes of the spectral properties. To this end, we will introduce a partition of the vertices of D.

In view of the entries of Nout we need to analyze which vertices have common out-neighbors. Two vertices v1 and v3 have a common out-neighbor v2 if and only if there exists a trail between them that consists of a forward edge followed by a reverse edge, i.e. a trail v1 → v2 ← v3. If the vertices v3 and v5 have a common out-neighbor v4, then this trail can be extended to v1 → v2 ← v3 → v4 ← v5. Given two vertices x and y of D, a trail between x and y is a zig-zag trail if it has even length and (from either end) it starts with a forward edge, then a reverse edge, then again a forward edge and so on (with strictly alternating directions). Note that, trivially, a path of length zero is also considered a zig-zag trail.

To study the extents of zig-zag trails, let us establish a relation on the vertex set V of D. Let any two vertices be related whenever there exists a zig-zag trail between them. Clearly, this relation is reflexive, symmetric and transitive. So we have an equivalence relation that partitions the vertices of D into equivalence classes 𝓑1, …, 𝓑k. The associated partition {𝓑1, …, 𝓑k} shall be called the common out-neighbor partition of D. Given some vertex v, let 𝓑(v) denote the class that contains v. Note that sinks always form singleton classes, but the reverse is not necessarily true. Moreover, if D contains no mutually adjacent vertices (i.e. it is an orientation of some undirected graph), then the common out-neighbor partition contains at least two classes.
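The partition is easy to compute from the adjacency matrix: two vertices end up in the same class precisely when they are linked by a chain of "share an out-neighbor" relations, so a union-find over the in-neighborhoods suffices. The following sketch (function name and data layout are ours, not from the paper) returns the classes as lists of vertex indices.

```python
import numpy as np

def common_out_neighbor_partition(A):
    """Classes of the equivalence relation generated by 'share an out-neighbor'."""
    n = A.shape[0]
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path compression
            x = parent[x]
        return x

    def union(x, y):
        parent[find(x)] = find(y)

    # For every vertex z, all in-neighbors of z share z as a common out-neighbor.
    for z in range(n):
        in_nbrs = np.flatnonzero(A[:, z])
        for u in in_nbrs[1:]:
            union(in_nbrs[0], u)

    classes = {}
    for v in range(n):
        classes.setdefault(find(v), []).append(v)
    return list(classes.values())
```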

Now consider any class 𝓑i of the common out-neighbor partition. By construction, none of its vertices have common out-neighbors with vertices external to 𝓑i. Hence we conclude:

Proposition 1

Let D be a digraph with common out-neighbor partition {𝓑1, …, 𝓑k}. If we renumber the vertices of D such that we enumerate the vertices of 𝓑1 first, those of 𝓑2 next, and so on, then Nout assumes block diagonal form Nout(D) = diag(B1, …, Bk), where Bi denotes the block associated with the class 𝓑i.

Example 1

Figure 1 shows an example digraph. The gray vertices form the class 𝓑(0) = {0, 1, 2, 3, 4}, by virtue of the zig-zag trails

  1. 4 → 7 ← 0 → 1 ← 2 → 0 ← 1,

  2. 0 → 2 ← 3.

Note that every second vertex along these trails acts merely as a "helper" (a common out-neighbor) that establishes the zig-zag relation. The black vertices form a class 𝓑(5) = {5, 6}. The white vertices form a singleton class each.

Figure 1: Common out-neighbor partition of a digraph.

The vertex numbers have already been chosen such that they match the common out-neighbor partition. Hence, with respect to this numbering, the matrix Nout assumes block diagonal form with blocks of sizes 5, 2, 1, 1, 1, 1. This is shown in Figure 2.

Figure 2: Block diagonal form according to common out-neighbor partition.

It is important to realize that the block Bi associated with a class 𝓑i does not directly correspond to any subdigraph of D. The reason is that the vertices of a class may have external out-neighbors. On the other hand, if we construct a subdigraph D′ of D by keeping only the edges (including their endpoints) emanating from the vertices of 𝓑i, then 𝓑i is also a class of the common out-neighbor partition of D′, with exactly the same block Bi in Nout(D′); the rest of Nout(D′) is zero, by construction. Thus D′ is the minimal subdigraph of D containing the class 𝓑i that still exhibits the block Bi in its Nout matrix (cf. Figure 3).

Figure 3: Minimal subdigraph with same block for given class.

In what follows, we will make use of the Kronecker product ⊗ of real matrices. Given two matrices A = (aij) ∈ ℝp×q and B = (bij) ∈ ℝr×s, we obtain A ⊗ B ∈ ℝpr×qs by replacing each entry aij of A by the block aijB. This definition naturally generalizes to vectors.

An immediate benefit of the block diagonal form Nout = diag(B1, …, Bk) generated by the common out-neighbor partition is that we may directly construct the N-eigenvectors of D on a per-block basis:

Theorem 1

Let Nout(D) = diag(B1, …, Br), according to the common out-neighbor partition {𝓑1, …, 𝓑r} of a given digraph D. For any eigenvector x of Bi (for eigenvalue λ), x ⊗ ei is an N-eigenvector of D (for N-eigenvalue λ).

Proof

N_{out}(D)\,(x \otimes e_i) \;=\; \sum_{j=1}^{r} \bigl(B_j \otimes e_j e_j^{T}\bigr)(x \otimes e_i) \;=\; \sum_{j=1}^{r} (B_j x) \otimes \bigl((e_j e_j^{T})\, e_i\bigr) \;=\; (B_i x) \otimes e_i \;=\; (\lambda x) \otimes e_i \;=\; \lambda\,(x \otimes e_i). \qquad □
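As a quick numerical sanity check of this embedding (the two symmetric blocks below are invented stand-ins for the Bi): an eigenvector of one diagonal block, padded with zeros on the positions of all other blocks (this zero-padding is the coordinate version of x ⊗ ei), is an eigenvector of the whole block diagonal matrix for the same eigenvalue.

```python
import numpy as np

# Invented block diagonal matrix with symmetric blocks of sizes 3 and 2.
B1 = np.array([[2., 1., 0.],
               [1., 1., 1.],
               [0., 1., 2.]])
B2 = np.array([[1., 1.],
               [1., 1.]])
N_out = np.block([[B1, np.zeros((3, 2))],
                  [np.zeros((2, 3)), B2]])

lam, vecs = np.linalg.eigh(B1)                   # eigenpairs of the first block
x = vecs[:, 0]                                   # eigenvector of B1 for eigenvalue lam[0]
x_embedded = np.concatenate([x, np.zeros(2)])    # zero on the second block

assert np.allclose(N_out @ x_embedded, lam[0] * x_embedded)
```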

Theorem 2

Let Nout(D) = diag(B1, …, Br), according to the common out-neighbor partition {𝓑1, …, 𝓑r} of a given digraph D. If x is an N-eigenvector of D (for N-eigenvalue λ) that is non-zero on the vertex v ∈ 𝓑i, then x|𝓑i is an eigenvector of Bi (for eigenvalue λ). Here, x|𝓑i ∈ ℝ|𝓑i| denotes the restriction of x to the vertices of class 𝓑i.

Proof

We have

N_{out}(D)\,x \;=\; \sum_{i=1}^{r} \bigl(B_i \otimes e_i e_i^{T}\bigr) \sum_{j=1}^{r} \bigl(x|_{\mathcal{B}_j} \otimes e_j\bigr) \;=\; \sum_{i=1}^{r} \bigl(B_i \otimes e_i e_i^{T}\bigr)\bigl(x|_{\mathcal{B}_i} \otimes e_i\bigr) \;=\; \sum_{i=1}^{r} \bigl(B_i\, x|_{\mathcal{B}_i}\bigr) \otimes \bigl((e_i e_i^{T})\, e_i\bigr) \;=\; \sum_{i=1}^{r} \bigl(B_i\, x|_{\mathcal{B}_i}\bigr) \otimes e_i

and

\lambda x \;=\; \lambda \sum_{i=1}^{r} \bigl(x|_{\mathcal{B}_i} \otimes e_i\bigr) \;=\; \sum_{i=1}^{r} \bigl(\lambda\, x|_{\mathcal{B}_i}\bigr) \otimes e_i.

Since Nout(D)x = λx by assumption, it follows that Bix|𝓑i = λx|𝓑i for all i = 1, …, r. If x is non-zero on the vertex v ∈ 𝓑i, then x|𝓑i ≠ 0, so it is an eigenvector of Bi for eigenvalue λ.□

Corollary 1

If x is a nowhere-zero N-eigenvector of D for N-eigenvalue λ, then λ is a common eigenvalue of all blocks Bi, i = 1, …, r, of Nout.

Corollary 2

Given a digraph D with n vertices, a unit vector ei ∈ ℝn is an N-eigenvector of D if and only if the unique vertex v on which ei is non-zero does not have any common out-neighbors with other vertices. The corresponding N-eigenvalue equals the out-degree of v.

Returning to the questions posed at the beginning of this section, let us now consider the case of connecting two digraphs by a new sink:

Theorem 3

Let D1, D2 be two disjoint digraphs and S1, S2 two sets of sinks of D1 and D2, respectively. Further, let D′ be the digraph obtained by connecting all the vertices of S1 ∪ S2 to a new sink η. Then,

N_{D'}(x) \;=\; N_{D_1}(x)\, N_{D_2}(x)\,(x - s),

where s = | S1S2|.

Proof

First of all, observe that the vertices of S1 and S2 each form singleton classes in D1 and D2, respectively. By connecting these vertices to the new sink η they are united into a single class S1 ∪ S2 of D′ (with η being the unique common out-neighbor of any two vertices in this class). Apart from that, all other classes of D1 and D2 are also classes of the common out-neighbor partition of D′ (with exactly the same blocks as before). For j ∈ {1, 2}, let Nout(Dj) = diag(B_1^{(j)}, …, B_{r_j}^{(j)}), according to the common out-neighbor partition {𝓑_1^{(j)}, …, 𝓑_{r_j}^{(j)}} of Dj, and let sj = |Sj|. Numbering the classes such that the sj singleton classes formed by the sinks of Sj come last, we may write Nout(Dj) = diag(B_1^{(j)}, …, B_{r_j−s_j}^{(j)}, 0_{s_j×s_j}). Hence we may number the vertices of D′ such that

N_{out}(D') \;=\; \mathrm{diag}\bigl(B_1^{(1)}, \ldots, B_{r_1-s_1}^{(1)},\; B_1^{(2)}, \ldots, B_{r_2-s_2}^{(2)},\; 1_{s\times s},\; 0_{1\times 1}\bigr).

Observing χ(1_{s×s}, x) = x^{s–1}(x – s), that the new sink η contributes a factor x, and that the s former sinks previously contributed a factor x^s to the product N_{D_1}(x)N_{D_2}(x), a comparison of the three block diagonal forms yields N_{D′}(x) = [N_{D_1}(x)N_{D_2}(x)/x^{s}] · x^{s–1}(x – s) · x = N_{D_1}(x)N_{D_2}(x)(x – s).□
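Theorem 3 can be verified numerically on a small instance. The two one-edge digraphs below are invented; np.poly works with the monic convention det(xI – M), which matches the loose sign convention used above.

```python
import numpy as np

def n_out(A):
    return A @ A.T

# D1: edge a -> b, D2: edge c -> d; S1 = {b}, S2 = {d}, so s = 2.
A1 = np.array([[0, 1], [0, 0]])
A2 = np.array([[0, 1], [0, 0]])
s = 2

# D': disjoint union of D1 and D2 plus a new sink (vertex 4) that b and d point to.
A = np.zeros((5, 5), dtype=int)
A[0, 1] = 1    # a -> b
A[2, 3] = 1    # c -> d
A[1, 4] = 1    # b -> new sink
A[3, 4] = 1    # d -> new sink

lhs = np.poly(n_out(A))                                   # N-char. polynomial of D'
rhs = np.convolve(np.convolve(np.poly(n_out(A1)),         # N_{D1}(x) * N_{D2}(x)
                              np.poly(n_out(A2))),
                  [1.0, -s])                              # * (x - s)
assert np.allclose(lhs, rhs)
```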

Theorem 4

Let D1 and D2 be two disjoint digraphs. Choose two arbitrary vertices u of D1 and v of D2. Join D1 and D2 by connecting u, v to a new sink η and let D′ be the resulting digraph. Then,

N_{D'}(x) \;=\; x\,\det\!\begin{pmatrix} B(u) + e_{i_u} e_{i_u}^{T} - xI & e_{i_u} e_{i_v}^{T} \\ e_{i_v} e_{i_u}^{T} & B(v) + e_{i_v} e_{i_v}^{T} - xI \end{pmatrix} \cdot \frac{N_{D_1}(x)\, N_{D_2}(x)}{\chi(B(u), x)\,\chi(B(v), x)},

where B(u) and B(v) are the blocks associated with the classes of u, v in D1, D2 (respectively) and where iu, iv are the respective row/column indices of u and v within these blocks.

Proof

The key observation is that connecting u and v to η will unite the classes of u and v. Further, η as a new sink will form a singleton cell (with an associated zero block). But apart from these two effects the common out-neighbor partition of D′ will be exactly the union of the partitions of D1 and D2, with the same associated blocks for the cells. What remains is to determine the block B of u, v in Nout(D′). Suppose that the vertices of D1 and D2 are ordered such that their Nout matrices assume block diagonal form according to the respective common out-neighbor partition. Without loss of generality, we may assume that B(u) is the lower-right block in Nout(D1) and that B(v) is the top-left block in Nout(D2). We index the vertices of D′ such that first we enumerate the vertices of D1, then those of D2 (both in the same order as before). Then B is essentially diag(B(u), B(v)), but we have to increment the main diagonal entries for u and v to reflect that each of them now has an additional out-neighbor, and we have to place two symmetric off-diagonal ones to reflect that u and v now have a common out-neighbor, namely η.□

Corollary 3

Let D1 and D2 be two disjoint digraphs. Choose an arbitrary vertex v of D1 and a sink u of D2. Join D1 and D2 by connecting u, v to a new sink η and let D′ be the resulting digraph. Then,

N_{D'}(x) \;=\; \det\!\begin{pmatrix} B + e_i e_i^{T} - xI & e_i \\ e_i^{T} & 1 - x \end{pmatrix} \cdot \frac{N_{D_1}(x)\, N_{D_2}(x)}{\chi(B, x)},

where B is the block associated with the class of v in D1 and i is the row (resp. column) index of v within this block.

We see that the common out-neighbor partition provides a valuable tool for understanding the spectral effects of changes to a digraph, in particular with respect to locality. Whenever changes affect some classes or their associated blocks we have to recompute their eigenvectors and eigenvalues, but the information previously gained for the unaffected blocks can be retained.

3 The Square Theorem

Next, we relate the N-eigenvalues of certain directed bipartite graphs to the eigenvalues of their undirected counterparts. For the following theorem we introduce two new terms. Given a digraph D of order n such that each vertex is either a source or a sink, for any vector x ∈ ℝn we may construct its source part by setting all those entries of x to zero which correspond to the non-sources (i.e. sinks) of D. Likewise, we construct the sink part of x.

Theorem 5

(Square Theorem). Let D be a bipartite digraph such that each vertex is either a source or a sink. Let k be the number of sources and l be the number of sinks in D. Further, let G be the underlying undirected graph of D.

  1. Given an eigenspace basis for eigenvalue λ ≠ 0 of G, the source parts of these vectors form an N-eigenspace basis for N-eigenvalue λ² of D and their sink parts are all N-eigenvectors for N-eigenvalue 0 of D.

  2. Every eigenvector for eigenvalue 0 of G is also an N-eigenvector for N-eigenvalue 0 of D.

  3. If the source part of any N-eigenvector for N-eigenvalue 0 of D is non-zero, then this source part is an eigenvector for eigenvalue 0 of G.

  4. Given a basis of ℝk+l consisting of eigenvectors of G, an N-eigenspace basis for N-eigenvalue 0 of D can be constructed as follows. Collect the sink parts of all the basis vectors associated with positive eigenvalues of G, together with the vectors associated with eigenvalue 0. Alternatively, collect the source parts of all basis vectors associated with eigenvalue 0, determine a maximal linearly independent subset of the resulting set, and combine it with l unit vectors, one for each sink (such that each is non-zero exactly on the considered sink).

  5. If η is the nullity of G and ν the N-nullity of D, then 1 ≤ ν – η ≤ min(k, l) and ν ≥ max(k, l).

Proof

We assume that the vertices of D are ordered such that the sources are numbered before the sinks (G shall inherit this vertex order). Since G is bipartite we have

A(G) = \begin{pmatrix} 0_{k\times k} & B \\ B^{T} & 0_{l\times l} \end{pmatrix}, \qquad A(D) = \begin{pmatrix} 0_{k\times k} & B \\ 0_{l\times k} & 0_{l\times l} \end{pmatrix},

for some matrix B ∈ ℝk×l. Hence

N_{out}(G) = \begin{pmatrix} BB^{T} & 0_{k\times l} \\ 0_{l\times k} & B^{T}B \end{pmatrix}, \qquad N_{out}(D) = \begin{pmatrix} BB^{T} & 0_{k\times l} \\ 0_{l\times k} & 0_{l\times l} \end{pmatrix},

where we regard G as a (fully bidirected) digraph.

In order to prove (i), suppose that A(G)(x, y)T = λ(x, y)T with x ∈ ℝk, y ∈ ℝl. Since

A(G)\begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} 0 & B \\ B^{T} & 0 \end{pmatrix}\begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} By \\ B^{T}x \end{pmatrix} \qquad (1)

we get

N_{out}(D)\begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} BB^{T} & 0 \\ 0 & 0 \end{pmatrix}\begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} BB^{T}x \\ 0 \end{pmatrix} = \begin{pmatrix} \lambda^{2}x \\ 0 \end{pmatrix},

so an eigenvector (x, y)T of G for eigenvalue λ is an N-eigenvector of D for N-eigenvalue λ² ≠ 0 if and only if x ≠ 0 and y = 0, and an N-eigenvector of D for N-eigenvalue 0 if and only if either λ = 0 or both x = 0 and y ≠ 0. In particular, the source part and the sink part satisfy:

N_{out}(D)\begin{pmatrix} x \\ 0 \end{pmatrix} = \lambda^{2}\begin{pmatrix} x \\ 0 \end{pmatrix}, \qquad N_{out}(D)\begin{pmatrix} 0 \\ y \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix}.

Next, suppose that A(G)(x, y)T = (0, 0)T. Then BTx = 0 in (1), so that

N_{out}(D)\begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} BB^{T} & 0 \\ 0 & 0 \end{pmatrix}\begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} BB^{T}x \\ 0 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix},

which shows (ii).

For proving (iii) suppose that Nout(D)(x, y)T = (0, 0)T. With respect to the block diagonal form of Nout(D) we immediately deduce BB^Tx = 0. Using Theorem 3.9-4 (f) from [5] it follows that B^Tx = 0. Therefore,

A(G)\begin{pmatrix} x \\ 0 \end{pmatrix} = \begin{pmatrix} 0 & B \\ B^{T} & 0 \end{pmatrix}\begin{pmatrix} x \\ 0 \end{pmatrix} = \begin{pmatrix} 0 \\ B^{T}x \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix}.

Now we turn to claims (iv) and (v). Assume that we have determined a basis of ℝk+l consisting of eigenvectors of G. With respect to linear independence, note that the spectrum of a bipartite graph is symmetric around zero and that for each eigenvector (x, y)T for eigenvalue λ of G we have a twin eigenvector (x, –y)T for –λ (cf. [6]). We modify the given basis as follows. For λ > 0 let Eλ and E–λ be the two eigenspaces for eigenvalues λ and –λ of G, respectively. Select those vectors (x(1), y(1))T, …, (x(r), y(r))T from the overall basis that form a basis of Eλ. By suitable linear combination we find that their source and sink parts

\begin{pmatrix} x^{(1)} \\ 0 \end{pmatrix}, \ldots, \begin{pmatrix} x^{(r)} \\ 0 \end{pmatrix}, \qquad \begin{pmatrix} 0 \\ y^{(1)} \end{pmatrix}, \ldots, \begin{pmatrix} 0 \\ y^{(r)} \end{pmatrix}

form a basis of the space Eλ + E–λ. In the overall basis we replace the eigenvectors for eigenvalues λ and –λ with these vectors. If we do this for all positive eigenvalues of G, then we still have a basis of ℝk+l. Consequently, the source parts inserted for any positive eigenvalue λ of G constitute an N-eigenspace basis for N-eigenvalue λ² of D.

Note that x(i) ∈ ℝk, so the final basis may contain at most k source parts. Likewise, it may contain at most l sink parts. Since the number of introduced source and sink parts is the same, we deduce that the number of positive eigenvalues of G is at most min(k, l). Further, the N-nullity of D exceeds the nullity of G by exactly the number of positive eigenvalues of G. Moreover, G does not consist of isolated vertices only (in fact it has none at all, because every vertex of D is a source or a sink and hence incident with at least one edge), so there exists at least one positive eigenvalue of G (cf. Corollary 2.7 in [6]). This proves the first part of claim (v).

Next, observe that we may construct a linearly independent set of l sink parts that are N-eigenvectors for N-eigenvalue 0 of D, by simply taking l sink unit vectors (i.e. for each sink choose the unit vector that is non-zero on exactly that sink). Hence the N-nullity of D is at least l. Moreover, we may reverse the orientation of D and apply the same argument again, with the sinks turned into sources and vice versa. Equivalently, we may consider Nin instead of Nout. Since these matrices have the same spectra it follows that the N-nullity of D is at least max(k, l). Now the proof of claim (v) is complete.

Using suitable linear combinations of the sink unit vectors on the vectors of the original eigenspace basis for eigenvalue 0 of G, we may convert them into source parts. This may cause linear dependence among the newly created source parts, so we reduce them to a maximal linearly independent subset. This achieves the basis proposed in the second part of claim (iv).□
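Claim (i) is easy to check numerically. The sketch below generates a bipartite digraph in zig-zag orientation from an invented random biadjacency matrix and confirms that the non-zero N-eigenvalues of D are exactly the squares of the positive eigenvalues of the underlying graph G.

```python
import numpy as np

rng = np.random.default_rng(0)
k, l = 4, 6                                   # invented numbers of sources and sinks
B = rng.integers(0, 2, size=(k, l))           # invented biadjacency matrix

# Underlying undirected graph G and zig-zag oriented digraph D (sources first).
A_G = np.block([[np.zeros((k, k)), B], [B.T, np.zeros((l, l))]])
A_D = np.block([[np.zeros((k, k)), B], [np.zeros((l, k)), np.zeros((l, l))]])

g_eigs = np.linalg.eigvalsh(A_G)
n_eigs = np.linalg.eigvalsh(A_D @ A_D.T)

# Non-zero N-eigenvalues of D = squares of the positive eigenvalues of G.
assert np.allclose(np.sort(g_eigs[g_eigs > 1e-9]) ** 2,
                   np.sort(n_eigs[n_eigs > 1e-9]))
```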

Example 2

In order to demonstrate some aspects of Theorem 5 we consider the bipartite digraph depicted in Figure 4. This digraph D has N-spectrum

0^{(7)},\; 0.18,\; 1.21,\; 1.61,\; 2.86,\; 6.14.

Figure 4: Example bipartite digraph having only sources and sinks.

Its undirected counterpart G has the traditional spectrum

0^{(2)},\; ±0.42,\; ±1.10,\; ±1.27,\; ±1.69,\; ±2.48.

To illustrate part (i) of the theorem we determine an eigenvector for the simple eigenvalue 2.48 of G, see Figure 5. Now we form the source and sink parts as shown in Figure 6 and readily verify that the source part is an N-eigenvector of D for N-eigenvalue 6.14 = (2.48)², whereas the sink part is an N-eigenvector for N-eigenvalue 0. Note that for the simple eigenvalue –2.48 of G we can get an eigenvector by taking the vector from Figure 5 and simply inverting the signs on all the sink vertices. Naturally, the source part remains the same, so we see that the N-eigenvalue 6.14 of D must be simple.

Figure 5: Eigenvector for simple eigenvalue 2.48.

Figure 6: Source and sink parts forming eigenvectors for N-eigenvalues 6.14 resp. 0.00.

With respect to part (iv) of the theorem observe that, since the eigenvectors of G for eigenvalue 0 vanish on all sources, the easiest way of finding an N-eigenspace basis for N-eigenvalue 0 of D is to form a unit vector basis with respect to the seven sinks of D.

Remark 1

From the proof of part (i) of the Square Theorem we also conclude that the nullity of any bipartite graph G with bipartition set sizes k, l is at least k + l – 2 min(k, l) = |k – l|. This is the "Corollary" to Theorem 3 in [7].

A graph is bipartite if and only if it contains no odd cycles. So, given an undirected connected bipartite graph with at least one edge, we can choose exactly two orientations such that in the resulting digraph every vertex is either a source or a sink. With respect to the two sets of the vertex bipartition, the vertices of one set will become the sources while the other vertices become the sinks. We call such an orientation a zig-zag orientation. Clearly, only bipartite graphs have zig-zag orientations, since an odd cycle would prevent this.

Corollary 4

Let Pn be a directed path with n vertices that has zig-zag orientation. Then

N(P_n, x) \;=\; x^{\lceil n/2 \rceil} \prod_{j=1}^{\lfloor n/2 \rfloor} \left( x - 4\cos^{2}\frac{\pi j}{n+1} \right).

Proof

According to [8], the eigenvalues of an undirected path with n vertices are the numbers

2\cos\frac{\pi j}{n+1}, \qquad \text{for } j = 1, \ldots, n.

Clearly, these numbers are all distinct. For j = 1, …, ⌊n/2⌋ we get the positive eigenvalues, so their squares occur in the N-spectrum of Pn. Moreover, the N-spectrum contains ⌊n/2⌋ additional zero N-eigenvalues. For odd n the underlying undirected path already has a (single) eigenvalue zero, so altogether we have ⌈n/2⌉ zero N-eigenvalues.□
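A short numerical check of Corollary 4 (the path length below is arbitrary): the N-spectrum of a zig-zag oriented path matches the stated closed form.

```python
import numpy as np

n = 7                                    # invented path length
A = np.zeros((n, n))
for i in range(n - 1):                   # zig-zag orientation of P_n
    if i % 2 == 0:
        A[i, i + 1] = 1                  # edge oriented from i to i+1
    else:
        A[i + 1, i] = 1                  # edge oriented from i+1 back to i

n_eigs = np.sort(np.linalg.eigvalsh(A @ A.T))
predicted = np.sort(np.concatenate([
    np.zeros((n + 1) // 2),                                       # ceil(n/2) zeros
    [4 * np.cos(np.pi * j / (n + 1)) ** 2 for j in range(1, n // 2 + 1)],
]))
assert np.allclose(n_eigs, predicted)
```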

Corollary 5

Let C2n be a cycle with 2n vertices that has zig-zag orientation. Then

N(C_{2n}, x) \;=\; x^{n} \prod_{j=1}^{n} \left( x - 4\cos^{2}\frac{\pi j}{n} \right).

Proof

According to [8], the eigenvalues of an undirected cycle with 2n vertices are the numbers

2\cos\frac{\pi j}{n}, \qquad \text{for } j = 1, \ldots, 2n.

For j = 1, …, n we get one member of each pair of eigenvalues λ, –λ of C2n; for even n the index j = n/2 yields the eigenvalue 0, whose square is again 0 and matches the corresponding factor x of the product. Hence the result follows.□

A special topic in spectral graph theory is integrality, in particular giving sufficient or necessary conditions such that a graph from a certain class is integral. Even for trees, integrality is a challenging task but, nonetheless, various interesting results have been obtained, including the identification of many families of integral trees, cf. [9, 10, 11, 12]. Let us therefore consider N-integrality of directed trees. It follows from Example 3.7 in [4] that rooted trees are N-integral. The next corollary shows how to construct arbitrarily many N-integral non-rooted trees:

Corollary 6

Let T be an integral tree. Obtain T′ by zig-zag orienting T. Then T′ is N-integral.

Many researchers have studied eigenspaces of graphs in detail and tried to characterize when graphs afford eigenspace bases with certain properties. One particular goal is to choose a basis such that its vectors only contain entries from a certain (small) prescribed set (cf. [13, 14, 15, 16, 17]). A particularly small such set would be {0, 1, –1}. We call a basis simply structured if its vectors have entries only from this set. With the help of the Square Theorem 5 we may transfer knowledge about the structure of eigenspace bases of a bipartite graph to knowledge about N-eigenspace bases of the zig-zag oriented digraphs that can be derived from it. We will now investigate simply structured N-eigenspace bases.

Corollary 7

Given a zig-zag oriented bipartite digraph D, if the underlying undirected graph G has an eigenspace basis for eigenvalue 0 whose vectors assume only values from {0, 1, –1} on the sources, then D has a simply structured N-eigenspace basis for N-eigenvalue 0.

Proof

This follows directly from the second part of claim (iv).□

A particularly obvious case when the previous corollary can be applied is when the underlying undirected graph G has a simply structured eigenspace basis for eigenvalue 0. One tool that may help with the identification of bipartite graphs with suitable bases is total unimodularity. Recall that a matrix is totally unimodular if every square submatrix has determinant 0, 1 or –1. For such a matrix it then follows easily from Cramer’s rule that its null space has a simply structured basis.

Corollary 8

Let G be a forest or a unicyclic graph whose cycle length is divisible by 4. Obtain D by zig-zag orienting G. Then D has a simply structured N-eigenspace basis for N-eigenvalue 0.

Proof

Proposition 1 of [18] states that all forests (or, rather, their adjacency matrices) are totally unimodular. Moreover unicyclic graphs are totally unimodular if and only if their cycle length is divisible by 4.□

Actually, the previous corollary can be refined because we know a little more about the eigenspace bases of forests:

Corollary 9

Let T be a tree. Obtain D by zig-zag orienting T. Depending on which of the two possible zig-zag orientations was chosen, either every simply structured eigenspace basis for eigenvalue 0 of T is also a simply structured N-eigenspace basis for N-eigenvalue 0 of D or we can take a sink unit vector basis instead.

Proof

It is a consequence of Lemma 19 in [19] that every null space basis of a tree completely vanishes on the same one of the two sets of the vertex bipartition of the tree. Depending on the chosen zig-zag orientation, we see that either any simply structured eigenspace basis for eigenvalue 0 of T will also be a simply structured N-eigenspace basis for N-eigenvalue 0 of D or that a sink unit vector basis will serve the purpose.□

Moreover, Corollary 8 may be extended to even more unicyclic graphs:

Corollary 10

Let G be a unicyclic graph with even cycle length. Obtain D by zig-zag orienting G. If the cycle length of G is divisible by 4, or if it is not the case that there is exactly one vertex v on the cycle of G such that v is not covered by all maximum matchings of the unique tree emanating from v, then D has a simply structured N-eigenspace basis for N-eigenvalue 0.

Proof

Theorem 4.51 in [16] states that the above condition on G exactly characterizes those unicyclic graphs which have a simply structured null space basis.□

Actually, one can even conclude from the results presented in [15] or [16] that, in the case excluded in the condition of the previous corollary, the unicyclic graph still has a null space basis with entries from the set {0, 1, –1, 2, –2} and that its non-zero entries only occur on exactly one set of the vertex bipartition. Orienting the graph such that these vertices become the sinks (so that the basis vectors vanish on all sources), we may trivially choose a sink unit vector basis for the N-eigenspace for N-eigenvalue 0, cf. Corollary 7. Hence:

Corollary 11

Let G be a unicyclic graph with even cycle length. Then at least one of the zig-zag orientations of G affords a simply structured N-eigenspace basis for N-eigenvalue 0.

4 Block separation

We have seen in Section 2 and its Example 1 that zig-zag trails are the key to forming the blocks of the common out-neighbor partition of a digraph. Every second vertex of a zig-zag trail is a “helper” vertex that certifies a common out-neighbor relationship of two cell vertices.

Moreover, we have discussed the formation of minimal subdigraphs containing a certain class of interest, with the same associated block in the Nout matrix, cf. Figure 3. What is unfortunate about these minimal subdigraphs is that we do not immediately see the "helper" vertices, as they may act in a double role, being both helper and original member of the cell. We will now present an intuitive construction that separates the given digraph into constituents such that each contains exactly one of the original cells, with the same block as before, and some artificially added singleton cells (with zero blocks). Moreover, no vertex will act in a double role.

We introduce the block separation of a given digraph D: For every vertex v of D, create a new vertex v′ that takes over the incoming connections of v, i.e. for each edge w → v we add an edge w → v′ and delete the edge w → v. The resulting digraph has the following properties:

Theorem 6

Let D′ be the digraph obtained by performing block separation on a given digraph D of order n. Then

  1. The common out-neighbor partition of D′ is obtained by extending the partition of D with singleton classes, one for each newly introduced vertex.

  2. Number the vertices of D according to its common out-neighbor partition such that Nout(D) = diag(B1, …, Bk). Keeping the original vertex order of D for D′ and numbering the newly introduced vertices after the original vertices, we have

    N_{out}(D') = \mathrm{diag}\bigl(B_1, \ldots, B_k, B_{k+1}, \ldots, B_{k+n}\bigr)

    with B_{k+1} = … = B_{k+n} = 0_{1×1}.

  3. N(D′, x) = x^n · N(D, x).

Proof

With respect to the matrix Nout(D′) of the resulting graph D′ we find that each of the original vertices v of D has the same number of out-neighbors as before. The newly introduced vertices are all sinks, by construction. Moreover, the number of common out-neighbors of v and some other vertex w is the same as in D if w is one of the original vertices of D and zero otherwise. Hence, using the proposed vertex numbering, the matrix Nout(D) is a principal submatrix of Nout(D′). Clearly, the rest of Nout(D′) contains only zero entries.□
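Block separation is easy to carry out on the adjacency matrix. The following sketch (the helper name and the three-vertex digraph are ours) performs the construction and checks the block structure and the characteristic polynomial relation stated in Theorem 6.

```python
import numpy as np

def block_separation(A):
    """Adjacency matrix of D': every vertex keeps its outgoing edges,
    while a new copy v' receives the incoming edges of v."""
    n = A.shape[0]
    A_sep = np.zeros((2 * n, 2 * n), dtype=A.dtype)
    A_sep[:n, n:] = A            # an edge v -> w of D becomes v -> w' in D'
    return A_sep

A = np.array([[0, 1, 1],         # invented example digraph on 3 vertices
              [0, 0, 1],
              [0, 0, 0]])
A_sep = block_separation(A)
N, N_sep = A @ A.T, A_sep @ A_sep.T

assert np.array_equal(N_sep[:3, :3], N)                       # original blocks survive
assert not N_sep[3:, :].any() and not N_sep[:, 3:].any()      # new vertices: zero blocks
# Part 3: N(D', x) = x^n * N(D, x), i.e. the coefficient vector gains n trailing zeros.
assert np.allclose(np.poly(N_sep), np.concatenate([np.poly(N), np.zeros(3)]))
```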

Example 3

Let us revisit Example 1. The result of block separation performed on the digraph presented there is shown in Figure 7. The vertices are labeled so that it is easy to see the pairs of original and new vertices. Note that the results of Theorem 6 remain valid (just changing the counts related to the newly introduced singleton cells) if we do not duplicate vertices of D that do not have any incoming neighbors. This helps us prevent unnecessary bloat. Even more, we may refrain from duplicating any vertices belonging to singleton cells since this will only lead to zero blocks in Nout.

Figure 7: Result of block separation for the digraph of Figure 1.

Remark 2

By construction, every helper vertex in a component of a block separated digraph is a sink and every vertex of some original cell is a source. Hence the overall block separated digraph is a zig-zag oriented bipartite graph with exactly the same blocks as before, plus some zero blocks. So we may apply the Square Theorem 5 on each component separately to determine its N-eigenvectors and N-eigenspaces. The results can be trivially projected to the original digraph. Hence, the conjunction of block separation and the Square Theorem permits us to fully predict the N-spectral properties of a digraph from the spectral properties of certain associated bipartite graphs.
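This prediction can likewise be checked numerically: the sketch below builds an invented loopless digraph, forms its block separation and confirms that the non-zero N-eigenvalues of D are the squares of the positive eigenvalues of the underlying undirected graph of the block separation.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 6
A = rng.integers(0, 2, size=(n, n))          # invented random digraph
np.fill_diagonal(A, 0)                       # no loops

A_sep = np.zeros((2 * n, 2 * n), dtype=int)  # block separation D'
A_sep[:n, n:] = A
G = A_sep + A_sep.T                          # underlying undirected bipartite graph

n_eigs = np.linalg.eigvalsh(A @ A.T)
g_eigs = np.linalg.eigvalsh(G)
assert np.allclose(np.sort(g_eigs[g_eigs > 1e-9]) ** 2,
                   np.sort(n_eigs[n_eigs > 1e-9]))
```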

Example 4

It is easily checked that the example digraph depicted in Figure 4 is isomorphic to the largest component of the block separated digraph shown in Figure 7. With respect to Remark 2 we see that Example 2 also demonstrates the combination of block separation and the Square Theorem.

In the following, we apply block separation to analyze the N-spectral radius of directed paths and cycles. Here, the N-spectral radius σN(D) of a digraph D means the largest modulus among all its N-eigenvalues. Likewise, the spectral radius σ(G) of a graph G denotes the largest modulus among all its eigenvalues. Given a connected graph lacking a zig-zag orientation, we define a nearly zig-zag orientation as an orientation such that exactly one vertex is neither a source nor a sink.

Corollary 12

Among all orientations of a given path (or cycle), the maximum N-spectral radius is achieved by exactly the zig-zag orientations (or the nearly zig-zag orientation if the given graph lacks a zig-zag orientation).

Proof

With respect to block separation, note that the N-spectral radius of a given digraph is determined by the maximal spectral radius among the underlying bipartite graphs of the components of the block separation digraph. Orienting a graph does not introduce mutual adjacency in the resulting digraph. Therefore, block separation essentially decomposes an oriented path or cycle into zig-zag oriented paths or cycles (ignoring any isolated vertices). Next we consider the maximum block separation component size for (nearly) zig-zag orientations of paths and cycles. A zig-zag orientation of Pn introduces a zig-zag oriented path of the same order in the block separation digraph, and a nearly zig-zag orientation of C2n+1 introduces a zig-zag oriented path of order 2n + 2; in both cases the remaining vertices are isolated. A zig-zag orientation of C2n introduces a zig-zag oriented cycle of the same order, plus some isolated vertices. Obviously, any other orientation of a given path or cycle will result in a further decomposition of the maximal components of the block separation digraph into shorter paths. But careful analysis of the eigenvalue formula given in the proof of Corollary 4 (resp. Corollary 5) reveals the well-known facts that σ(Pn–1) < σ(Pn) < 2 = σ(Cm) for all n ≥ 1 and m ≥ 3 (setting σ(P0) := 0). Hence the proof is complete.□

In the introduction we mentioned the signless Laplacian matrix Q(G) of a graph G. Its definition naturally generalizes to multi-graphs. If we construct the matrix Q(M) of some multi-graph M and if this matrix coincides with Nout(D) for some digraph D, then we have an interesting link between the N-spectrum of D and the signless Laplacian spectrum of M. It is therefore not surprising that [4] investigates pairs (D, M) such that Nout(D) = Q(M). Let us clarify how to construct such pairs.

Theorem 7

Let D be a digraph and let M be a loopless multi-graph, both of order n. Define R(D) as the multi-relation of distinct vertices of D having common out-neighbors, i.e. the multiplicity of each pair (v, w) ∈ R(D) equals the number of common out-neighbors of the vertices v ≠ w in D. Then Nout(D) = Q(M) if and only if the following conditions are satisfied:

  1. M represents the multi-relation R(D).

  2. For every vertex v of D, the out-degree of v equals the number of instances in which there exist vertices w, z ∈ V(D) such that w ≠ v and z is a common out-neighbor of v and w.

Proof

By the definition of the matrix Nout(D), each of its off-diagonal entries specifies the number of common out-neighbors of the vertices associated with the row/column indices of the respective considered entry. So the off-diagonal part of Nout(D) exactly represents R(D). Note here that, by construction, R(D) contains no pairs (v, v).

It follows that the off-diagonal entries of Nout(D) and Q(M) = A(M) + D(M) coincide if and only if A(M) is the adjacency matrix of (the multi-graph associated with) the relation R(D) – which is equivalent to condition (i).

The diagonal entries of Nout(D) are the out-degrees of the respective vertices of D, whereas the diagonal entries of Q(M) are the degrees of the vertices of M. But with respect to R(D) the degree of a vertex v of M in turn equals the number of instances in which there exist vertices w, z ∈ V(D) such that w ≠ v and z is a common out-neighbor of v and w. It follows, under condition (i), that the diagonal entries of Nout(D) and Q(M) coincide if and only if condition (ii) holds.□

Remark 3

Finding pairs (D, M) of digraphs D and loopless multi-graphs M such that Nout(D) = Q(M) is actually very easy. Start with an arbitrary digraph. In view of condition (ii) of Theorem 7, identify all vertices for which the associated diagonal entry of Nout(D) is "too large", i.e. strictly greater (by a difference of, say, d) than the sum of the other entries in the same row/column. For each such vertex v execute the following step exactly d times: pick one of the out-neighbors of v and attach a new pendant in-neighbor w to it. This step creates a new instance of common out-neighborship for v, hence extending the aforementioned row/column of Nout by a new entry 1. Moreover, by construction, in the resulting digraph the diagonal entry of Nout associated with w is less than or equal to the sum of the other entries in the same row/column. After carrying out the mentioned process for all identified vertices none of the associated diagonal entries of Nout is too large any more. Further, for every vertex whose associated diagonal entry is too small we may simply add a suitable number of pendant sinks. All in all, we achieve that condition (ii) of Theorem 7 is satisfied. Hence M is now easily derived by means of condition (i).

The multi-graph representing the multi-relation R(D) mentioned in Theorem 7 can also be constructed from the block separation of D:

Proposition 2

Given a digraph D and its block separation digraph D′, create a new undirected multi-graph M as follows:

  1. Let M initially have the vertices of D but no edges.

  2. Let every duplicated vertex in D′ represent a clique formed by the original neighbors of that vertex. For each duplicated vertex from D′ augment M by introducing new edges for the associated clique, skipping any 1-cliques.

Then M represents R(D).

Note that step (ii) adds multiple edges between vertices according to the number of cliques they are involved in.

Proof

The construction yields the desired result since the duplicated vertices in the block separation of D are exactly the sinks of the block separation, which in turn are exactly those vertices of D which act as common out-neighbors. Hence the sinks of the block separation introduce cliques in the relation R(D).□
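In code, the edge multiset of M can be read off directly from D: every vertex z turns its set of in-neighbors into a clique, and multiplicities accumulate. The sketch below (names are ours) also checks, on an invented digraph, that the resulting multiplicities agree with the off-diagonal entries of Nout.

```python
import numpy as np
from itertools import combinations
from collections import Counter

def common_out_neighbor_multigraph(A):
    """Edge multiset of the multi-graph M representing R(D)."""
    n = A.shape[0]
    edges = Counter()
    for z in range(n):
        in_nbrs = np.flatnonzero(A[:, z])       # these vertices all share z
        for u, v in combinations(in_nbrs, 2):
            edges[(int(u), int(v))] += 1        # one parallel edge per common out-neighbor
    return edges

A = np.array([[0, 1, 1, 0],                     # invented digraph
              [0, 0, 1, 1],
              [0, 0, 0, 1],
              [0, 0, 0, 0]])
N = A @ A.T
for (u, v), m in common_out_neighbor_multigraph(A).items():
    assert N[u, v] == m                         # multiplicities = off-diagonal of N_out
```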

Example 5

Using the block separation digraph in Figure 7, the graph M constructed in Proposition 2 has vertices 0, …, 10. Vertex 0′ introduces a 2-clique among the vertices {1, 2}. Likewise, vertices 1′, 7′ and 9′ introduce 2-cliques among {0, 2}, {0, 4} and {5, 6}, respectively. The vertex 2′ gives rise to a 3-clique among {0, 3, 4}. Note that, altogether, we get a double edge between vertices 0 and 4.

References

[1] Richard A. Brualdi, Spectra of digraphs, Linear Algebra Appl. 432 (2010), no. 9, 2181–2213, 10.1016/j.laa.2009.02.033.

[2] Krystal Guo and Bojan Mohar, Hermitian adjacency matrix of digraphs and mixed graphs, J. Graph Theory 85 (2017), no. 1, 217–248, 10.1002/jgt.22057.

[3] Jianxi Liu and Xueliang Li, Hermitian-adjacency matrices and Hermitian energies of mixed graphs, Linear Algebra Appl. 466 (2015), 182–207, 10.1016/j.laa.2014.10.028.

[4] Irena M. Jovanović, Non-negative spectrum of a digraph, Ars Math. Contemp. 12 (2017), no. 1, 167–182, 10.26493/1855-3974.682.065.

[5] Erwin Kreyszig, Introductory functional analysis with applications, John Wiley & Sons, New York, 1978.

[6] Norman Biggs, Algebraic graph theory, 2nd ed., Cambridge University Press, Cambridge, 1994.

[7] Dragoš M. Cvetković and Ivan M. Gutman, The algebraic multiplicity of the number zero in the spectrum of a bipartite graph, Mat. Vesn., N. Ser. 9 (1972), 141–150.

[8] Andries E. Brouwer and Willem H. Haemers, Spectra of graphs, Springer, Berlin, 2012.

[9] Krystyna T. Balińska, Dragoš M. Cvetković, Zoran S. Radosavljević, Slobodan K. Simić, and Dragan Stevanović, A survey on integral graphs, Publ. Elektroteh. Fak., Univ. Beogr., Ser. Mat. 13 (2002), 42–65, 10.2298/PETF0213042B.

[10] Andries E. Brouwer, Small integral trees, Electron. J. Comb. 15 (2008), research paper N1, 8 pp.

[11] Pavel Hic and Milan Pokorny, There are integral trees of diameter 7, Publ. Elektroteh. Fak., Univ. Beogr., Ser. Mat. 18 (2007), 59–63, 10.2298/PETF0718059H.

[12] Ligong Wang, Xueliang Li, and Shenggui Zhang, Families of integral trees with diameters 4, 6, and 8, Discrete Appl. Math. 136 (2004), no. 2–3, 349–362, 10.1016/S0166-218X(03)00450-5.

[13] Ljiljana Branković and Dragoš Cvetković, The eigenspace of the eigenvalue –2 in generalized line graphs and a problem in security of statistical databases, Publ. Elektroteh. Fak., Univ. Beogr., Ser. Mat. 14 (2003), 37–48, 10.2298/PETF0314037B.

[14] Daniel A. Jaume, Gonzalo Molina, Adrián Pastine, and Martín D. Safe, A {–1, 0, 1}- and sparsest basis for the null space of a forest in optimal time, Linear Algebra Appl. 549 (2018), 53–66, 10.1016/j.laa.2018.03.019.

[15] Milan Nath and Bhaba K. Sarma, On the null-spaces of acyclic and unicyclic singular graphs, Linear Algebra Appl. 427 (2007), no. 1, 42–54, 10.1016/j.laa.2007.06.017.

[16] Torsten Sander and Jürgen W. Sander, On simply structured kernel bases of unicyclic graphs, AKCE J. Graphs Combin. 4 (2007), 61–82.

[17] Dragan Stevanović, On ±1 eigenvectors of graphs, Ars Math. Contemp. 11 (2016), no. 2, 415–423, 10.26493/1855-3974.1021.c0a.

[18] Saieed Akbari and Stephen J. Kirkland, On unimodular graphs, Linear Algebra Appl. 421 (2007), no. 1, 3–15, 10.1016/j.laa.2006.10.017.

[19] Torsten Sander and Jürgen W. Sander, Tree decomposition by eigenvectors, Linear Algebra Appl. 430 (2009), 133–144, 10.1016/j.laa.2008.07.015.

Received: 2019-07-08
Accepted: 2020-01-03
Published Online: 2020-02-19

© 2019 Omar Alomari et al., published by De Gruyter

This work is licensed under the Creative Commons Attribution 4.0 International License.