## 1 Introduction

Spectral graph theory is a wide field of research where we study the spectral properties of matrices associated with graphs and in particular try to link them to the structural properties of those graphs. The most classic example is the adjacency matrix *A*(*G*) of a graph *G*, which is the 0-1-matrix reflecting the vertex adjacency relation of the graph. Other frequently studied matrices are the Laplacian matrix *L*(*G*) := *D*(*G*) – *A*(*G*) and the signless Laplacian matrix *Q*(*G*) := *D*(*G*) + *A*(*G*), where *D*(*G*) is the diagonal matrix of the respective vertex degrees. Changing the order in which the vertices of *G* are indexed will change each of these matrices, but only by a permutation similarity transformation. Hence, both eigenvalues and eigenvectors (when viewed as real functions on the vertex set) will remain unchanged and can be attributed to the given graph *G*.

Perhaps the most important advantage of the mentioned matrices is their symmetry (because the adjacency relation of an undirected graph is obviously symmetric), so the matrices are diagonalizable. Thus, the eigenspace dimensions match the multiplicities of the respective roots of the characteristic polynomial of the matrix. Moreover, the spectrum is real. Turning to digraphs, we find that the adjacency relation of a digraph is usually not symmetric. So, studying the spectral properties of its adjacency matrix may reveal interesting insights [1], but we may not rely on the mentioned advantages. As a consequence, some researchers have considered alternative matrix choices, such as the (complex valued) Hermitian adjacency matrix (see [2, 3]). In contrast, in [4] the author considers the (real) matrices *N*_{out}(*D*) = *A*(*D*)*A*(*D*)^{T} and *N*_{in}(*D*) = *A*(*D*)^{T}*A*(*D*), where *A*(*D*) is the adjacency matrix of the given digraph *D*. By construction, these matrices are symmetric and their spectra do not depend on the chosen vertex order.

Note that the spectra of *N*_{in} and *N*_{out} are the same (for the sake of brevity, we will omit the reference to the given digraph if there is no danger of confusion). Moreover, inverting the orientations of all edges in a given digraph transforms *N*_{in} into *N*_{out}, and vice versa. Hence we may restrict our attention to only one of these matrices. Following [4], let us only consider *N*_{out}. Given some digraph *D*, the spectrum of *N*_{out} is called the *non*-*negative spectrum* or, shortly, the *N*-*spectrum* of *D*. It is the multi-set of *N*-*eigenvalues* of *D*, which in turn are the roots of the *N*-*characteristic polynomial* N_{D}(*x*) = *χ*(*N*_{out}(*D*), *x*) = det(*xI* – *N*_{out}(*D*)), appearing in the spectrum according to their multiplicity. In the same manner, we will speak of N-eigenvectors, N-nullity (the multiplicity of N-eigenvalue 0) and N-integrality (meaning the N-spectrum consists only of integers). By construction, *N*_{out} is positive semi-definite, so N-eigenvalues are always non-negative.

It seems that [4] is the first source that studies the N-spectra of digraphs in some detail. Besides deriving some general basic facts to start with, the author of [4] focuses mainly on regular digraphs. Our goal is to generally explore how N-eigenvalues and N-eigenvectors are linked to the structure of digraphs, in particular under certain transformations. In [4] a first step in this direction is made, by studying the change of the characteristic polynomial under two simple operations, namely attaching a pendant source to a source (the latter will no longer be a source after this) and attaching a pendant sink to some arbitrary vertex. Here, a source is a vertex with no incoming edges (but at least one outgoing edge) and a sink is a vertex without outgoing edges (but at least one incoming edge). Our research will start where the author left off in [4].

## 2 Common out-neighbor partition

Unless stated otherwise, the digraphs we study hereafter are tacitly assumed to be simple, loopless and weakly connected.

In order to deal with the *N*_{out} matrix we need to understand the meaning of its entries. Let *v*_{1}, …, *v*_{n} be the vertices of a given digraph *D*. Then it is easily seen that, for all *i*, *j* ∈ {1, …, *n*}, the entry found at position (*i*, *j*) of *N*_{out} is equal to the number of common out-neighbors^{1} of the vertices *v*_{i} and *v*_{j} (cf. Proposition 2.1 in [4]). In particular, entry (*i*, *i*) counts the number of out-neighbors of vertex *v*_{i}. Hence the trace of *N*_{out} is exactly the number of edges of *D*, which in turn is equal to the sum of all eigenvalues of *N*_{out} (counting each eigenvalue according to its multiplicity). So the N-spectrum consists only of zeroes if and only if *D* contains only isolated vertices.
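These facts are easy to verify numerically. The following sketch (using numpy, on an arbitrarily chosen small digraph) builds *N*_{out} = *AA*^{T} and checks both the entry interpretation and the trace identity:

```python
import numpy as np

# Adjacency matrix of a small illustrative digraph on 4 vertices
# (edges: 0->2, 1->2, 1->3, 2->3); chosen arbitrarily for demonstration.
A = np.array([
    [0, 0, 1, 0],
    [0, 0, 1, 1],
    [0, 0, 0, 1],
    [0, 0, 0, 0],
])

N_out = A @ A.T  # entry (i, j) counts common out-neighbors of v_i and v_j

# Vertices 0 and 1 share the single out-neighbor 2.
assert N_out[0, 1] == 1
# Diagonal entry (i, i) is the out-degree of v_i.
assert all(N_out[i, i] == A[i].sum() for i in range(4))
# The trace equals the number of edges, i.e. the sum of all N-eigenvalues.
assert np.trace(N_out) == A.sum()
assert np.isclose(np.linalg.eigvalsh(N_out).sum(), A.sum())
```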

Let us consider the simple case of attaching a pendant source *v*_{n+1} to a source *v*_{i} of *D* (as a result, *v*_{i} is not a source any longer). Clearly, this operation does not change any common out-neighbors nor the out-degrees of the vertices of *D*. So the matrix *N*_{out}(*D*′) of the resulting digraph *D*′ can be obtained as the block diagonal matrix diag(*N*_{out}(*D*), 1_{1×1}), where *r*_{n×m} denotes the (*n* × *m*)-matrix with all entries equal to *r*. Consequently, N_{D′}(*x*) = (*x* – 1)N_{D}(*x*) (cf. Prop. 4.5 in [4]). Given a basis of N-eigenvectors of *D* (spanning ℝ^{n}), we can construct a basis of N-eigenvectors of *D*′ (spanning ℝ^{n+1}) by trivially embedding the given basis for *D*, alongside the unit vector *e*_{n+1}.

Next, consider attaching a pendant sink *v*_{n+1} to a sink *v*_{i} of *D*. As before, *v*_{i} does not have any common out-neighbors with other vertices. But the operation changes the number of out-neighbors of *v*_{i} to one: the *i*-th column of *N*_{out} changes from a zero column to the unit vector *e*_{i}. Moreover, for the sink *v*_{n+1} itself we add a zero row/column to *N*_{out}. All in all, determinant expansion along the *i*-th column of *N*_{out}(*D*′) readily yields N_{D′}(*x*) = (*x* – 1)N_{D}(*x*).
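The effect on the characteristic polynomial can be checked numerically; the sketch below (numpy, with a hypothetical four-vertex digraph) attaches a pendant sink to a sink and confirms the factor (*x* – 1):

```python
import numpy as np

# D: edges 0->2, 1->2, 1->3, 2->3 (vertex 3 is a sink).
A = np.zeros((4, 4))
for u, v in [(0, 2), (1, 2), (1, 3), (2, 3)]:
    A[u, v] = 1

# D': attach a pendant sink 4 to the sink 3.
A2 = np.zeros((5, 5))
A2[:4, :4] = A
A2[3, 4] = 1

p = np.poly(A @ A.T)     # coefficients of det(xI - N_out(D))
p2 = np.poly(A2 @ A2.T)  # coefficients of det(xI - N_out(D'))

# N_{D'}(x) should equal (x - 1) * N_D(x).
assert np.allclose(p2, np.polymul([1, -1], p))
```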

These two example operations were fairly simple. But what about even the slightest generalization, say, attaching a sink to multiple sinks? This operation indeed does change common out-neighbor relations in the digraph. Now we can ask ourselves: What is the effect on the N-spectrum and N-eigenvectors? In particular, is it possible to preserve some of the original N-eigenvectors by means of trivial embedding? In a sense, we want to be able to judge whether the effects of a somewhat “local” modification of the given digraph result in predictably “local” changes of the spectral properties. To this end, we will introduce a partition of the vertices of *D*.

In view of the entries of *N*_{out} we need to analyze which vertices have common out-neighbors. Two vertices *v*_{1} and *v*_{3} have a common out-neighbor *v*_{2} if and only if there exists a trail between them that consists of a forward edge followed by a reverse edge, i.e. a trail *v*_{1} → *v*_{2} ← *v*_{3}. If vertices *v*_{3} and *v*_{5} have a common out-neighbor *v*_{4}, then the trail can be extended to *v*_{1} → *v*_{2} ← *v*_{3} → *v*_{4} ← *v*_{5}. Given two vertices *x* and *y* of *D*, a trail between *x* and *y* is a *zig*-*zag trail* if it has even length and (from either end) it starts with a forward edge, then a reverse edge, then again a forward edge and so on (with strictly alternating directions). Note that, trivially, a path of length zero is also considered a zig-zag trail.

To study the extents of zig-zag trails, let us establish a relation on the vertex set *V* of *D*. Let any two vertices be related whenever there exists a zig-zag trail between them. Clearly, this relation is reflexive, symmetric and transitive. So we have an equivalence relation that partitions the vertices of *D* into equivalence classes 𝓑_{1}, …, 𝓑_{k}. The associated partition is called the *common out*-*neighbor partition* of *D*. Given some vertex *v*, let 𝓑(*v*) denote the class that contains *v*. Note that sinks always form singleton classes, but the reverse is not necessarily true. Moreover, if *D* contains no mutually adjacent vertices (i.e. it is an orientation of some undirected graph), then the common out-neighbor partition contains at least two classes.

Now consider any class 𝓑_{i} of the common out-neighbor partition. By construction, none of its vertices have common out-neighbors with vertices external to 𝓑_{i}. Hence we conclude:

*Let D be a digraph with common out-neighbor partition* 𝓑_{1}, …, 𝓑_{k}. *If we renumber the vertices of D such that we enumerate the vertices of* 𝓑_{1} *first, those of* 𝓑_{2} *next, and so on, then N*_{out} *assumes block diagonal form N*_{out}(*D*) = diag(*B*_{1}, …, *B*_{k}), *where B*_{i} *denotes the block associated with the class* 𝓑_{i}.

*Figure 1 shows an example digraph. The gray vertices form the class* 𝓑(0) = {0, 1, 2, 3, 4}, *by virtue of the zig*-*zag trails*

- **4** → 7 ← **0** → 1 ← **2** → 0 ← **1**,
- **0** → 2 ← **3**.

*Note that non*-*bold numbers are merely “helpers” (common out*-*neighbors) that establish the zig*-*zag relation. The black vertices form a class* 𝓑(5) = {5, 6}. *The white vertices form a singleton class each*.

*The vertex numbers have already been chosen such that they match the common out*-*neighbor partition. Hence*, *with respect to this numbering*, *the matrix**N*_{out}*assumes block diagonal form with blocks of sizes* 5, 2, 1, 1, 1, 1. *This is shown in Figure 2*.

It is important to realize that the block *B*_{i} associated with a class 𝓑_{i} does not directly correspond to any subdigraph of *D*. The reason is that the vertices of a class may have external out-neighbors. On the other hand, if we construct a subdigraph *D*′ of *D* by keeping only the edges (including their endpoints) emanating from the vertices of 𝓑_{i}, then 𝓑_{i} is also a class of the common out-neighbor partition of the resulting digraph *D*′ – with exactly the same block *B*_{i} in *N*_{out}(*D*′). The rest of *N*_{out}(*D*′) is zero, by construction. This is the minimal subdigraph of *D* containing the class 𝓑_{i} with exactly the same block *B*_{i} in its *N*_{out} matrix (cf. Figure 3).

In what follows, we will make use of the Kronecker product ⊗ of real matrices. Given two matrices *A* = (*a*_{ij}) ∈ ℝ^{p×q} and *B* ∈ ℝ^{r×s}, we obtain *A* ⊗ *B* ∈ ℝ^{pr×qs} by replacing each entry *a*_{ij} of *A* by the block *a*_{ij}*B*. This definition naturally generalizes to vectors.

An immediate benefit of the block diagonal form *N*_{out} = diag(*B*_{1}, …, *B*_{k}) generated by the common out-neighbor partition is that we may directly construct the N-eigenvectors of *D* on a per-block basis:

*Let N*_{out}(*D*) = diag(*B*_{1}, …, *B*_{r}) *according to the common out-neighbor partition of a given digraph D. For any eigenvector x of B*_{i} *(for eigenvalue λ), x* ⊗ *e*_{i} *is an N-eigenvector of D (for N-eigenvalue λ)*.

□
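A quick numerical sanity check of this proposition (numpy sketch; the digraph is chosen arbitrarily): an eigenvector of one block, trivially embedded by zero-padding on all other classes (the coordinate form of the Kronecker-product embedding), is an N-eigenvector of the whole digraph.

```python
import numpy as np

# Digraph whose common out-neighbor partition has classes {0, 1}, {2}, {3}:
# edges 0->2 and 1->2 (0 and 1 share out-neighbor 2), plus 2->3.
A = np.zeros((4, 4))
for u, v in [(0, 2), (1, 2), (2, 3)]:
    A[u, v] = 1
N = A @ A.T  # block diagonal w.r.t. the partition: diag([[1,1],[1,1]], [1], [0])

B1 = N[:2, :2]                 # block of the class {0, 1}
lam, vecs = np.linalg.eigh(B1)
x = vecs[:, -1]                # eigenvector of B1 for its largest eigenvalue

x_full = np.zeros(4)
x_full[:2] = x                 # trivial embedding on the class {0, 1}
assert np.allclose(N @ x_full, lam[-1] * x_full)
```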

*Let N*_{out}(*D*) = diag(*B*_{1}, …, *B*_{r}) *according to the common out-neighbor partition of a given digraph D. If x is an N-eigenvector of D (for N-eigenvalue λ) that is non-zero on the vertex v* ∈ 𝓑_{i}, *then x*|_{𝓑i} *is an eigenvector of B*_{i} *(for eigenvalue λ). Here, x*|_{𝓑i} ∈ ℝ^{|𝓑i|} *denotes the restriction of x to the vertices of class* 𝓑_{i}.

We have *x* = (*x*|_{𝓑1}, …, *x*|_{𝓑r})^{T} and *N*_{out}(*D*)*x* = (*B*_{1}*x*|_{𝓑1}, …, *B*_{r}*x*|_{𝓑r})^{T}. Since *N*_{out}(*D*)*x* = *λx* by assumption, it follows that *B*_{i}*x*|_{𝓑i} = *λx*|_{𝓑i} for all *i* = 1, …, *r*. If *x* is non-zero on the vertex *v* ∈ 𝓑_{i}, then *x*|_{𝓑i} ≠ 0, so it is an eigenvector of *B*_{i} for eigenvalue *λ*.□

*If x is a nowhere-zero N-eigenvector of D for N-eigenvalue λ, then λ is a common eigenvalue of all blocks B*_{i}, *i* = 1, …, *r*, *of N*_{out}.

*Given a digraph D with n vertices, a unit vector e*_{i} ∈ ℝ^{n} *is an N-eigenvector of D if and only if the unique vertex v on which e*_{i} *is non-zero does not have any common out-neighbors with other vertices. The corresponding N-eigenvalue equals the out-degree of v*.
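The unit vector criterion can likewise be checked directly (numpy sketch, with an illustrative digraph):

```python
import numpy as np

# Illustrative digraph: edges 0->2, 1->2, 2->3.
A = np.zeros((4, 4))
for u, v in [(0, 2), (1, 2), (2, 3)]:
    A[u, v] = 1
N = A @ A.T

# Vertex 2 shares no out-neighbor with any other vertex, so e_2 is an
# N-eigenvector; its N-eigenvalue is the out-degree of vertex 2 (= 1).
e2 = np.eye(4)[2]
assert np.allclose(N @ e2, A[2].sum() * e2)

# Vertex 0 shares out-neighbor 2 with vertex 1, so e_0 is not an N-eigenvector.
e0 = np.eye(4)[0]
assert not np.allclose(N @ e0, (N @ e0)[0] * e0)
```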

Returning to the questions posed at the beginning of this section, let us now consider the case of connecting two digraphs by a new sink:

*Let D*_{1}, *D*_{2} *be two disjoint digraphs and S*_{1}, *S*_{2} *two sets of sinks of D*_{1} *and D*_{2}, *respectively. Further, let D′ be the digraph obtained by connecting all the vertices of S*_{1} ∪ *S*_{2} *to a new sink η. Then,*

N_{D′}(*x*) = (*x* – *s*) N_{D1}(*x*) N_{D2}(*x*),

*where s* = |*S*_{1} ∪ *S*_{2}|.

First of all, observe that the vertices of *S*_{1} and *S*_{2} each form singleton classes in *D*_{1} and *D*_{2}, respectively. By connecting these vertices to the new sink *η* they will be united to a class *S*_{1} ∪ *S*_{2} of *D*′ (with *η* being the unique common out-neighbor of any two vertices in this class). Apart from that, all other classes of *D*_{1} and *D*_{2} are also classes of the common out-neighbor partition of *D*′ (with exactly the same blocks as before). For *j* ∈ {1, 2}, let *s*_{j} = |*S*_{j}| and order the vertices of *D*_{j} according to its common out-neighbor partition, placing the sinks of *S*_{j} last, so that *N*_{out}(*D*_{j}) ends with *s*_{j} zero blocks. Now order the vertices of *D*′ such that the vertices of *D*_{1} come first, then those of *D*_{2} and finally *η*; with respect to this order, *N*_{out}(*D*′) is block diagonal with the blocks of the unaffected classes, the block 1_{s×s} for the class *S*_{1} ∪ *S*_{2} and a 0_{1×1} block for *η*.

Observing *χ*(1_{s×s}, *x*) = *x*^{s–1}(*x* – *s*) and keeping in mind that the loss of *s* sinks effectively contributes a corrective factor *x*^{–s} to the N-characteristic polynomial, the result now follows easily by comparing the three block diagonal forms.□
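The bookkeeping in this proof can be verified numerically. The sketch below (numpy; two hypothetical single-edge digraphs) joins the sinks of *D*_{1} and *D*_{2} to a new sink and confirms that the N-characteristic polynomial gains exactly the factor (*x* – *s*):

```python
import numpy as np

def N(edges, n):
    """N_out matrix of the digraph on n vertices with the given edges."""
    A = np.zeros((n, n))
    for u, v in edges:
        A[u, v] = 1
    return A @ A.T

# D1: edge 0->1 (vertex 1 is a sink); D2 likewise.
p1 = np.poly(N([(0, 1)], 2))
p2 = np.poly(N([(0, 1)], 2))

# D': D1 on {0, 1}, D2 on {2, 3}, both sinks joined to the new sink 4.
s = 2
pD = np.poly(N([(0, 1), (2, 3), (1, 4), (3, 4)], 5))

# N_{D'}(x) = (x - s) * N_{D1}(x) * N_{D2}(x)
expected = np.polymul([1, -s], np.polymul(p1, p2))
assert np.allclose(pD, expected)
```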

*Let D*_{1} *and D*_{2} *be two disjoint digraphs. Choose two arbitrary vertices u of D*_{1} *and v of D*_{2}. *Join D*_{1} *and D*_{2} *by connecting u, v to a new sink η and let D′ be the resulting digraph. Then,*

N_{D′}(*x*) = *x* · N_{D1}(*x*) N_{D2}(*x*) · *χ*(*B*, *x*) / (*χ*(*B*^{(u)}, *x*) *χ*(*B*^{(v)}, *x*)) *with* *B* = diag(*B*^{(u)}, *B*^{(v)}) + (*e*_{iu} + *e*_{k+iv})(*e*_{iu} + *e*_{k+iv})^{T},

*where B*^{(u)} *and B*^{(v)} *are the blocks associated with the classes of u, v in D*_{1}, *D*_{2} *(respectively), k is the order of B*^{(u)} *and i*_{u}, *i*_{v} *are the respective row/column indices of u and v within these blocks*.

The key observation is that connecting *u* and *v* to *η* will unite the classes of *u* and *v*. Further, *η* as a new sink will form a singleton cell (with an associated zero block). But apart from these two effects the common out-neighbor partition of *D*′ will be exactly the union of the partitions of *D*_{1} and *D*_{2}, with the same associated blocks for the cells. What remains is to determine the block *B* of *u*, *v* in *N*_{out}(*D*′). Suppose that the vertices of *D*_{1} and *D*_{2} are ordered such that their *N*_{out} matrices assume block diagonal form according to the respective common out-neighbor partition. Without loss, we may assume that *B*^{(u)} is the lower-right block in *N*_{out}(*D*_{1}) and that *B*^{(v)} is the top-left block in *N*_{out}(*D*_{2}). We index the vertices of *D*′ such that first we enumerate the vertices of *D*_{1}, then those of *D*_{2} (both in the same order as before). Then *B* is essentially diag(*B*^{(u)}, *B*^{(v)}), but we have to increment the main diagonal for *u* and *v* to reflect that both of them now have an additional out-neighbor, further we have to symmetrically place two off-diagonal ones to reflect that both *u*, *v* now have a common out-neighbor.□

*Let D*_{1} *and D*_{2} *be two disjoint digraphs. Choose an arbitrary vertex v of D*_{1} *and a sink u of D*_{2}. *Join D*_{1} *and D*_{2} *by connecting u, v to a new sink η and let D′ be the resulting digraph. Then,*

N_{D′}(*x*) = N_{D1}(*x*) N_{D2}(*x*) · *χ*(*B̃*, *x*) / *χ*(*B*, *x*) *with* *B̃* = diag(*B*, 0_{1×1}) + (*e*_{i} + *e*_{m+1})(*e*_{i} + *e*_{m+1})^{T},

*where B is the block associated with the class of v in D*_{1}, *m is the order of B and i is the row (resp. column) index of v within this block*.

We see that the common out-neighbor partition provides a valuable tool for understanding the spectral effects of changes to a digraph, in particular with respect to locality. Whenever changes affect some classes or their associated blocks we have to recompute their eigenvectors and eigenvalues, but the information previously gained for the unaffected blocks can be retained.

## 3 The Square Theorem

Next, we relate the N-eigenvalues of certain directed bipartite graphs to the eigenvalues of their undirected counterparts. For the following theorem we introduce two new terms. Given a digraph *D* of order *n* such that each vertex is either a source or a sink, for any vector *x* ∈ ℝ^{n} we may construct its *source part* by setting all those entries of *x* to zero which correspond to the non-sources (i.e. sinks) of *D*. Likewise, we construct the *sink part* of *x*.

(Square Theorem). *Let**D**be a bipartite digraph such that each vertex is either a source or a sink. Let**k**be the number of sources and**l**be the number of sinks in**D*. *Further*, *let**G**be the underlying undirected graph of**D*.

- (i) *Given an eigenspace basis for eigenvalue λ* ≠ 0 *of G, the source parts of these vectors form an N-eigenspace basis for N-eigenvalue λ*^{2} *of D and their sink parts are all N-eigenvectors for N-eigenvalue* 0 *of D*.
- (ii) *Every eigenvector for eigenvalue* 0 *of G is also an N-eigenvector for N-eigenvalue* 0 *of D*.
- (iii) *If the source part of any N-eigenvector for N-eigenvalue* 0 *of D is not null, then this source part is an eigenvector for eigenvalue* 0 *of G*.
- (iv) *Given a basis of* ℝ^{k+l} *of eigenvectors of G, an N-eigenspace basis for N-eigenvalue* 0 *of D can be constructed as follows. Collect the sink parts of all the vectors associated with positive eigenvalues of G, together with the vectors associated with eigenvalue* 0. *Alternatively, collect the source parts of all basis vectors associated with eigenvalue* 0 *and determine a maximal linearly independent subset of the resulting set, together with l unit vectors, one for each sink (such that it is non-zero exactly on the considered sink)*.
- (v) *If η is the nullity of G and ν the N-nullity of D, then* 1 ≤ *ν* – *η* ≤ min(*k*, *l*) *and ν* ≥ max(*k*, *l*).

We assume that the vertices of *D* are ordered such that the sources are numbered before the sinks (*G* shall inherit this vertex order). Since *G* is bipartite we have

$$A(G)=\begin{pmatrix}0 & B\\ B^{T} & 0\end{pmatrix}$$

for some matrix *B* ∈ ℝ^{k×l}. Hence

$$N_{\mathrm{out}}(D)=\begin{pmatrix}BB^{T} & 0\\ 0 & 0\end{pmatrix},\qquad N_{\mathrm{out}}(G)=\begin{pmatrix}BB^{T} & 0\\ 0 & B^{T}B\end{pmatrix},$$

where we regard *G* as a (fully bidirected) digraph.

In order to prove (i), suppose that *A*(*G*)(*x*, *y*)^{T} = *λ*(*x*, *y*)^{T} with *x* ∈ ℝ^{k}, *y* ∈ ℝ^{l}. Since

$$By=\lambda x\quad\text{and}\quad B^{T}x=\lambda y,\tag{1}$$

we get

$$N_{\mathrm{out}}(D)\begin{pmatrix}x\\ y\end{pmatrix}=\begin{pmatrix}BB^{T}x\\ 0\end{pmatrix}=\begin{pmatrix}\lambda By\\ 0\end{pmatrix}=\lambda^{2}\begin{pmatrix}x\\ 0\end{pmatrix},$$

so an eigenvector (*x*, *y*)^{T} of *G* for eigenvalue *λ* is an N-eigenvector of *D* for N-eigenvalue *λ*^{2} ≠ 0 if and only if *x* ≠ 0 and *y* = 0, and an N-eigenvector of *D* for N-eigenvalue 0 if and only if either *λ* = 0 or both *x* = 0 and *y* ≠ 0.

Next, suppose that *A*(*G*)(*x*, *y*)^{T} = (0, 0)^{T}. Then *B*^{T}*x* = 0 in (1), so that

$$N_{\mathrm{out}}(D)\begin{pmatrix}x\\ y\end{pmatrix}=\begin{pmatrix}BB^{T}x\\ 0\end{pmatrix}=\begin{pmatrix}0\\ 0\end{pmatrix},$$

which shows (ii).

For proving (iii) suppose that *N*_{out}(*D*)(*x*, *y*)^{T} = (0, 0)^{T}. With respect to the block diagonal form of *N*_{out}(*D*) we immediately deduce *BB*^{T}*x* = 0. Using Theorem 3.9-4 (f) from [5] it follows that *B*^{T}*x* = 0. Therefore,

$$A(G)\begin{pmatrix}x\\ 0\end{pmatrix}=\begin{pmatrix}0\\ B^{T}x\end{pmatrix}=\begin{pmatrix}0\\ 0\end{pmatrix},$$

so the source part (*x*, 0)^{T}, if it is not null, is an eigenvector for eigenvalue 0 of *G*.
Now we turn to claims (iv) and (v). Assume that we have determined a basis of ℝ^{k+l} of eigenvectors of *G*. With respect to linear independence, note that the spectrum of a bipartite graph is symmetric around zero and that for each eigenvector (*x*, *y*)^{T} for eigenvalue *λ* of *G* we have a twin eigenvector (*x*, –*y*)^{T} for –*λ* (cf. [6]). We modify the given basis as follows. For *λ* > 0 let *E*_{λ} and *E*_{–λ} be the two eigenspaces for eigenvalues *λ* and –*λ* of *G*, respectively. Select those vectors (*x*^{(1)}, *y*^{(1)})^{T}, …, (*x*^{(r)}, *y*^{(r)})^{T} from the overall basis that form a basis of *E*_{λ}. By suitable linear combination we find that their source and sink parts

(*x*^{(1)}, 0)^{T}, …, (*x*^{(r)}, 0)^{T}, (0, *y*^{(1)})^{T}, …, (0, *y*^{(r)})^{T}

form a basis of the space *E*_{λ} + *E*_{–λ}. In the overall basis we replace the eigenvectors for eigenvalues *λ* and –*λ* with these vectors. If we do this for all positive eigenvalues of *G*, then we still have a basis of ℝ^{k+l}. Consequently, the source parts inserted for any eigenvalue *λ* of *G* constitute an N-eigenspace basis for N-eigenvalue *λ*^{2} of *D*.

Note that *x*^{(i)} ∈ ℝ^{k}, so the final basis may contain at most *k* source parts. Likewise, it may contain at most *l* sink parts. Since the number of introduced source and sink parts is the same, we deduce that the number of positive eigenvalues of *G* is at most min(*k*, *l*). Further, the N-nullity of *D* exceeds the nullity of *G* by exactly the number of positive eigenvalues of *G*. Moreover, *G* contains no isolated vertices at all (because every vertex of *D* is either a source or a sink and hence incident with at least one edge), so there exists at least one positive eigenvalue of *G* (cf. Corollary 2.7 in [6]). This proves the first part of claim (v).

Next, observe that we may construct a linearly independent set of *l* sink parts that are N-eigenvectors for N-eigenvalue 0 of *D*, by simply taking *l* sink unit vectors (i.e. for each sink choose the unit vector that is non-zero on exactly that sink). Hence the N-nullity of *D* is at least *l*. Moreover, we may reverse the orientation of *D* and apply the same argument again, with the sinks turned into sources and vice versa. Equivalently, we may consider *N*_{out} instead of *N*_{in}. Since these matrices have the same spectra it follows that the N-nullity of *D* is at least max(*k*, *l*). Now the proof of claim (v) is complete.

Using suitable linear combinations of the sink unit vectors on the vectors of the original eigenspace basis for eigenvalue 0 of *G*, we may convert them into source parts. This may cause linear dependence among the newly created source parts, so we reduce them to a maximal linearly independent subset. This achieves the basis proposed in the second part of claim (iv).□
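Part (i) is easy to demonstrate numerically. The sketch below (numpy; a zig-zag oriented star chosen as an arbitrary example) forms the source and sink parts of an eigenvector of *G* and checks their N-eigenvector properties:

```python
import numpy as np

# Zig-zag oriented star: sources {0, 1, 2} all pointing at the sink 3
# (k = 3 sources, l = 1 sink; an arbitrary small example).
k, l = 3, 1
B = np.ones((k, l))  # biadjacency block of G
A_G = np.block([[np.zeros((k, k)), B], [B.T, np.zeros((l, l))]])
N_D = np.block([[B @ B.T, np.zeros((k, l))],
                [np.zeros((l, k)), np.zeros((l, l))]])

lam, vecs = np.linalg.eigh(A_G)
i = np.argmax(lam)       # largest (positive) eigenvalue of G
v = vecs[:, i]

source_part = v.copy(); source_part[k:] = 0
sink_part = v.copy();   sink_part[:k] = 0

# Part (i): the source part is an N-eigenvector for lambda^2,
# the sink part is an N-eigenvector for 0.
assert np.allclose(N_D @ source_part, lam[i] ** 2 * source_part)
assert np.allclose(N_D @ sink_part, np.zeros(k + l))
```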

*In order to demonstrate some aspects of Theorem 5 we consider the bipartite digraph depicted in Figure 4. This digraph**D**has N*-*spectrum*

*Its undirected counterpart**G**has the traditional spectrum*

*To illustrate part (i) of the theorem we determine an eigenvector for simple eigenvalue* 2.48 *of**G*, *see Figure 5. Now we form the source and sinks parts* – *as shown in Figure 6* – *and readily verify that the source part is an N*-*eigenvector of**D**for N*-*eigenvalue* 6.14 = (2.48)^{2}, *whereas the sink part is an N*-*eigenvector for N*-*eigenvalue* 0. *Note that for the simple eigenvalue* –2.48 *of**G**we can get an eigenvector by taking the vector from Figure 5 and simply inverting the signs on all the sink vertices. Naturally*, *the source part remains the same*, *so we see that the N*-*eigenvalue* 6.14 *of**D**must be simple*.

*With respect to part (iv) of the theorem observe that*, *since**G**is missing eigenvalue* 0, *the easiest way of finding an N*-*eigenspace basis for N*-*eigenvalue* 0 *is given by forming a unit vector basis with respect to the seven sinks of**D*.

*From the proof of part (i) of the Square Theorem we also conclude that the nullity of any bipartite graph**G**with bipartition set sizes**k*, *l**is at least**k* + *l* – 2 min(*k*, *l*) = |*k* – *l*|. *This is the “Corollary” to Theorem 3 in [7]*.

A graph is bipartite if and only if it contains no odd cycles. So, given an undirected connected bipartite graph with at least one edge, we can choose exactly two orientations such that every vertex of the resulting digraph is either a source or a sink. With respect to the two sets of the vertex bipartition, the vertices of one set become the sources while the other vertices become the sinks. We call such an orientation a *zig*-*zag orientation*. Clearly, only bipartite graphs have zig-zag orientations, since an odd cycle would prevent this.
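Constructing a zig-zag orientation is a straightforward 2-coloring. A minimal sketch (plain Python; the function name is illustrative):

```python
from collections import deque

def zig_zag_orient(n, undirected_edges):
    """Orient a connected bipartite graph so that every vertex is a source
    or a sink: 2-color the vertices by BFS, then direct all edges from
    color 0 to color 1. (The second zig-zag orientation is obtained by
    swapping the two colors.)"""
    adj = [[] for _ in range(n)]
    for u, v in undirected_edges:
        adj[u].append(v)
        adj[v].append(u)

    color = [None] * n
    color[0] = 0
    queue = deque([0])
    while queue:
        u = queue.popleft()
        for w in adj[u]:
            if color[w] is None:
                color[w] = 1 - color[u]
                queue.append(w)
            elif color[w] == color[u]:
                raise ValueError("odd cycle: no zig-zag orientation exists")
    return [(u, v) if color[u] == 0 else (v, u) for u, v in undirected_edges]

# Path on 4 vertices: sources and sinks alternate along the path.
print(zig_zag_orient(4, [(0, 1), (1, 2), (2, 3)]))  # → [(0, 1), (2, 1), (2, 3)]
```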

*Let P*_{n} *be a directed path with n vertices that has zig-zag orientation. Then the N-spectrum of P*_{n} *consists of the numbers* 4cos^{2}(*πj*/(*n* + 1)), *j* = 1, …, ⌊*n*/2⌋, *together with the N-eigenvalue* 0 *of multiplicity* ⌈*n*/2⌉.

According to [8], the eigenvalues of an undirected path with *n* vertices are the numbers 2cos(*πj*/(*n* + 1)), *j* = 1, …, *n*. Clearly, these numbers are all distinct. For *j* = 1, …, ⌊*n*/2⌋ we get one item of each pair of eigenvalues *λ*, –*λ*, contributing the non-zero N-eigenvalue *λ*^{2}. Moreover, if *n* is odd the underlying undirected path already has a (single) eigenvalue zero, so altogether we obtain the N-eigenvalue 0 with multiplicity ⌈*n*/2⌉.□
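A numerical check of this spectrum (numpy sketch; *n* = 6 chosen arbitrarily): the non-zero N-eigenvalues of a zig-zag oriented path are exactly the squares of the positive eigenvalues of the undirected path, padded with ⌈*n*/2⌉ zeros.

```python
import numpy as np

n = 6
# Zig-zag oriented path on n vertices: even vertices are the sources.
edges = [(i, i + 1) if i % 2 == 0 else (i + 1, i) for i in range(n - 1)]
A = np.zeros((n, n))
for u, v in edges:
    A[u, v] = 1

N_spec = np.sort(np.linalg.eigvalsh(A @ A.T))

# Undirected path eigenvalues 2*cos(pi*j/(n+1)); the non-zero N-eigenvalues
# are the squares of the positive ones (Square Theorem), padded with zeros.
path_ev = 2 * np.cos(np.pi * np.arange(1, n + 1) / (n + 1))
expected = np.sort(np.concatenate([
    np.zeros(int(np.ceil(n / 2))),
    path_ev[path_ev > 1e-12] ** 2,
]))
assert np.allclose(N_spec, expected)
```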

*Let C*_{2n} *be a cycle with* 2*n vertices that has zig-zag orientation. Then the N-spectrum of C*_{2n} *consists of the numbers* 4cos^{2}(*πj*/*n*), *j* = 1, …, *n*, *together with the N-eigenvalue* 0 *of multiplicity n*.

According to [8], the eigenvalues of an undirected cycle with 2*n* vertices are the numbers 2cos(*πj*/*n*), *j* = 0, …, 2*n* – 1. None of these numbers equals zero. For *j* = 1, …, *n* we get one item of each pair of eigenvalues *λ*, –*λ*. Hence the result follows.□

A special topic in spectral graph theory is integrality, in particular giving sufficient or necessary conditions such that a graph from a certain class is integral. Even for trees, integrality is a challenging task but, nonetheless, various interesting results have been obtained, including the identification of many families of integral trees, cf. [9, 10, 11, 12]. Let us therefore consider N-integrality of directed trees. It follows from Example 3.7 in [4] that rooted trees are N-integral. The next corollary shows how to construct arbitrarily many N-integral non-rooted trees:

*Let**T**be an integral tree. Obtain T′ by zig*-*zag orienting**T*. *Then T′ is N*-*integral*.

Many researchers have studied eigenspaces of graphs in detail and tried to characterize when graphs afford eigenspace bases with certain properties. One particular goal is to choose a basis such that its vectors only contain entries from a certain (small) prescribed set (cf. [13, 14, 15, 16, 17]). A particularly small such set would be {0, 1, –1}. We call a basis *simply structured* if its vectors have entries only from this set. With the help of the Square Theorem 5 we may transfer knowledge about the structure of eigenspace bases of a bipartite graph to knowledge about N-eigenspace bases of the zig-zag oriented digraphs that can be derived from it. We will now investigate simply structured N-eigenspace bases.

*Given a zig*-*zag oriented bipartite digraph**D*, *if the underlying undirected graph**G**has an eigenspace basis for eigenvalue* 0 *whose vectors assume only values from* {0, 1, –1} *on the sources*, *then**D**has a simply structured N*-*eigenspace basis for N*-*eigenvalue* 0.

This follows directly from the second part of claim (iv).□

A particularly obvious case when the previous corollary can be applied is when the underlying undirected graph *G* has a *simply structured* eigenspace basis for eigenvalue 0. One tool that may help with the identification of bipartite graphs with suitable bases is total unimodularity. Recall that a matrix is totally unimodular if every square submatrix has determinant 0, 1 or –1. For such a matrix it then follows easily from Cramer’s rule that its null space has a simply structured basis.

*Let G be a forest or a unicyclic graph whose cycle length is divisible by* 4. *Obtain D by zig-zag orienting G. Then D has a simply structured N-eigenspace basis for N-eigenvalue* 0.

Proposition 1 of [18] states that all forests (or, rather, their adjacency matrices) are totally unimodular. Moreover, unicyclic graphs are totally unimodular if and only if their cycle length is divisible by 4.□

Actually, the previous corollary can be refined because we know a little more about the eigenspace bases of forests:

*Let**T**be a tree. Obtain**D**by zig*-*zag orienting**T*. *Depending on which of the two possible zig*-*zag orientations was chosen*, *either every simply structured eigenspace basis for eigenvalue* 0 *of**T**is also a simply structured N*-*eigenspace basis for N*-*eigenvalue* 0 *of**D**or we can take a sink unit vector basis instead*.

It is a consequence of Lemma 19 in [19] that every null space basis of a tree completely vanishes on exactly the same one of the two sets of the vertex bipartition of the tree. Depending on the chosen zig-zag orientation, we see that either any simply structured eigenspace basis for eigenvalue 0 of *T* will also be a simply structured N-eigenspace basis for N-eigenvalue 0 of *D* or that a sink unit vector basis will serve the purpose.□

Moreover, Corollary 8 may be extended to even more unicyclic graphs:

*Let G be a unicyclic graph with even cycle length. Obtain D by zig-zag orienting G. If the cycle length of G is divisible by* 4 *or if the number of vertices v on the cycle of G such that v is not covered by all maximum matchings of the unique tree emanating from v differs from one, then D has a simply structured N-eigenspace basis for N-eigenvalue* 0.

Theorem 4.51 in [16] states that the above condition on *G* exactly characterizes those unicyclic graphs which have a simply structured null space basis.□

Actually, one can even conclude from the results presented in [15] or [16] that, in the case excluded in the condition of the previous corollary, the unicyclic graph has at least a null space basis with entries from the set {0, 1, –1, 2, –2} and that its non-zero entries only occur on exactly one set of the vertex bipartition. Orienting the graph such that these vertices become sources, we may trivially choose a sink unit vector basis for the N-eigenspace for N-eigenvalue 0, cf. Corollary 7. Hence:

*Let**G**be a unicyclic graph with even cycle length. Then at least one of the zig*-*zag orientations of**G**affords a simply structured N*-*eigenspace basis for N*-*eigenvalue* 0.

## 4 Block separation

We have seen in Section 2 and its Example 1 that zig-zag trails are the key to forming the blocks of the common out-neighbor partition of a digraph. Every second vertex of a zig-zag trail is a “helper” vertex that certifies a common out-neighbor relationship of two cell vertices.

Moreover, we have discussed the formation of minimal subdigraphs containing a certain class of interest, with the same associated block in the *N*_{out} matrix, cf. Figure 3. What is unfortunate about these minimal subdigraphs is that we do not immediately see the "helper" vertices, as they may act in a double role, being both helper and original member of the class. We will now present an intuitive construction that separates the given digraph into constituents such that each contains exactly one of the original classes, with the same block as before, and some artificially added singleton classes (with zero blocks). Moreover, no vertex will act in a double role.

We introduce the *block separation* of a given digraph *D*: For every vertex *v* of *D*, create a new vertex *v*′ that will take over the incoming neighbor connections of *v*, i.e. for each edge *wv* we add an edge *wv*′ and delete the edge *wv*. The resulting digraph has the following properties:

*Let**D*′ *be a digraph obtained by performing block separation on a given digraph**D**of order**n*. *Then*

*The common out*-*neighbor partition is obtained by extending the partition of**D**with singleton blocks*,*one for each newly introduced vertex*.*Number the vertices of**D**according to its common out*-*neighbor partition such that**N*_{out}(*D*) = diag(*B*_{1}, …,*B*)._{k}*Keeping the original vertex order of**D**for**D*′*and numbering the newly introduced vertices after the original vertices*,*we have*$\begin{array}{c}{N}_{\mathrm{out}}\left(D\right)=\mathrm{diag}({B}_{1},\dots ,{B}_{k},{B}_{k+1},\dots ,{B}_{2n})\end{array}$ *with**B*_{k+1}= … =*B*_{2n}= 0_{1×1}.*N*(*D*′,*x*) =*x*N(^{n}*D*,*x*).

With respect to the matrix *N*_{out}(*D*′) of the resulting graph *D*′ we find that each of the original vertices *v* of *D* has the same number of out-neighbors as before. The newly introduced vertices are all sinks, by construction. Moreover, the number of common out-neighbors of *v* and some other vertex *w* is the same as in *D* if *w* is one of the original vertices of *D* and zero otherwise. Hence, using the proposed vertex numbering, the matrix *N*_{out}(*D*) is a principal submatrix of *N*_{out}(*D*′). Clearly, the rest of *N*_{out}(*D*′) contains only zero entries.□
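Block separation is simple to implement, and the relation N(*D*′, *x*) = *x*^{n} N(*D*, *x*) can be confirmed numerically (numpy sketch; the edge list is chosen arbitrarily):

```python
import numpy as np

def block_separation(n, edges):
    """Block separation: each vertex v gets a copy v' = v + n that takes
    over v's incoming edges (every edge (w, v) becomes (w, v + n))."""
    return 2 * n, [(u, v + n) for u, v in edges]

def N(n, edges):
    """N_out matrix of the digraph on n vertices with the given edges."""
    A = np.zeros((n, n))
    for u, v in edges:
        A[u, v] = 1
    return A @ A.T

n, edges = 4, [(0, 2), (1, 2), (1, 3), (2, 3)]  # arbitrary example digraph
n2, edges2 = block_separation(n, edges)

p = np.poly(N(n, edges))     # N(D, x)
p2 = np.poly(N(n2, edges2))  # N(D', x)

# N(D', x) = x^n * N(D, x): same coefficients, shifted by n zero roots.
assert np.allclose(p2, np.polymul(p, [1] + [0] * n))
```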

*Let us revisit Example 1. The result of block separation performed on the digraph presented there is shown in Figure 7. The vertices are labeled so that it is easy to see the pairs of original and new vertices. Note that the results of Theorem 6 remain valid (after adjusting the counts related to the newly introduced singleton classes) if we do not duplicate vertices of D that do not have any incoming neighbors. This helps us prevent unnecessary bloat. Even more, we may refrain from duplicating any vertices belonging to singleton classes, since this will only lead to additional zero blocks in N*_{out}.

*By construction*, *every helper vertex in a component of a block separated digraph is a sink and every vertex of some original cell is a source. Hence the overall block separated digraph is a zig*-*zag oriented bipartite graph with exactly the same blocks as before*, *plus some zero blocks. So we may apply the Square Theorem 5 on each component separately to determine its N*-*eigenvectors and N*-*eigenspaces. The results can be trivially projected to the original digraph. Hence, the conjunction of block separation and the Square Theorem permits us to fully predict the N*-*spectral properties of a digraph from the spectral properties of certain associated bipartite graphs*.

*It is easily checked that the example digraph depicted in Figure 4 is isomorphic to the largest component of the block separated digraph shown in Figure 7. With respect to Remark 2, we see that Example 2 also demonstrates the combination of block separation and the Square Theorem*.
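The characteristic polynomial relation N(*D*′, *x*) = *x*^{n} N(*D*, *x*) stated in Theorem 6 can also be verified computationally. The following sketch (exact rational arithmetic; helper names are our own, and we use the monic convention det(*xI* – *M*), under which the factor is exactly *x*^{n}) computes characteristic polynomials via the Faddeev–LeVerrier recursion:

```python
from fractions import Fraction

def n_out(n, edges):
    # N_out(D) = A(D) A(D)^T
    A = [[0] * n for _ in range(n)]
    for u, v in edges:
        A[u][v] = 1
    return [[sum(A[i][t] * A[j][t] for t in range(n)) for j in range(n)]
            for i in range(n)]

def charpoly(M):
    # coefficients [1, c_1, ..., c_n] of det(xI - M), Faddeev-LeVerrier:
    # M_k = M (M_{k-1} + c_{k-1} I),  c_k = -tr(M_k) / k
    n = len(M)
    M = [[Fraction(e) for e in row] for row in M]
    coeffs = [Fraction(1)]
    Mk = [[Fraction(0)] * n for _ in range(n)]
    for k in range(1, n + 1):
        B = [[Mk[i][j] + (coeffs[-1] if i == j else 0) for j in range(n)]
             for i in range(n)]
        Mk = [[sum(M[i][t] * B[t][j] for t in range(n)) for j in range(n)]
              for i in range(n)]
        coeffs.append(-sum(Mk[i][i] for i in range(n)) / k)
    return coeffs

n, edges = 3, [(0, 1), (1, 2), (2, 0)]                         # directed triangle
p = charpoly(n_out(n, edges))
pp = charpoly(n_out(2 * n, [(w, v + n) for (w, v) in edges]))  # block separation
assert pp == p + [Fraction(0)] * n                             # extra factor x^n
```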

In the following, we apply block separation to analyze the N-spectral radius of directed paths and cycles. Here, the *N-spectral radius* *σ*_{N}(*D*) of a digraph *D* means the largest modulus among all its N-eigenvalues. Likewise, the *spectral radius* *σ*(*G*) of a graph *G* denotes the largest modulus among all its eigenvalues. Given a connected graph lacking a zig-zag orientation, we define a *nearly zig-zag orientation* as an orientation such that exactly one vertex is neither a source nor a sink.

*Among all orientations of a given path (or cycle)*, *the maximum N*-*spectral radius is achieved by exactly the zig*-*zag orientations (or the nearly zig*-*zag orientation if the given graph lacks a zig*-*zag orientation)*.

With respect to block separation, note that the N-spectral radius of a given digraph is determined by the maximal spectral radius among the underlying bipartite graphs of the components of the separation digraph. Orienting a graph does not introduce mutual adjacency in the resulting digraph. Therefore, block separation essentially decomposes the given digraph into directed paths (ignoring any isolated vertices). Next we consider the maximum block separation component size for (nearly) zig-zag orientations of paths and cycles. A zig-zag orientation of *P*_{n} or a nearly zig-zag orientation of *C*_{2n+1} will introduce a directed path of the same order in the block separation digraph, plus some isolated vertices. A zig-zag orientation of *C*_{2n} will introduce a directed cycle of the same order, plus some isolated vertices. Obviously, any other orientation of a given path or cycle will result in further decomposition of the maximal components of the block separation digraph. But careful analysis of the eigenvalue formula given in the proof of Corollary 4 (resp. Corollary 5) reveals the well-known fact that *σ*(*P*_{n–1}) < *σ*(*P*_{n}) < *σ*(*C*_{n}) < *σ*(*C*_{n+1}) (for *n* ≥ 1, setting *σ*(*P*_{0}) := 0). Hence the proof is complete.□
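The classical closed forms behind this argument are *σ*(*P*_{n}) = 2 cos(π/(*n* + 1)) and *σ*(*C*_{n}) = 2. The following sketch (pure Python; helper names are our own) cross-checks the path formula against a power iteration and verifies the strict growth along paths, each value staying below the cycle value 2:

```python
import math

def sigma_path(n):
    # classical closed form: sigma(P_n) = 2 cos(pi / (n + 1)), sigma(P_0) := 0
    return 2.0 * math.cos(math.pi / (n + 1)) if n >= 1 else 0.0

def spectral_radius_path(n, steps=500):
    # adjacency spectral radius of the path P_n via power iteration;
    # x is renormalized each step, so lam -> ||A x|| converges to sigma(P_n)
    x = [1.0] * n
    lam = 0.0
    for _ in range(steps):
        y = [(x[i - 1] if i > 0 else 0.0) + (x[i + 1] if i < n - 1 else 0.0)
             for i in range(n)]
        lam = math.sqrt(sum(v * v for v in y))
        x = [v / lam for v in y]
    return lam

for n in range(2, 12):
    # sigma(P_{n-1}) < sigma(P_n) < 2 = sigma(C_m) for every cycle C_m
    assert sigma_path(n - 1) < sigma_path(n) < 2.0
    assert abs(spectral_radius_path(n) - sigma_path(n)) < 1e-9
```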

In the introduction we mentioned the signless Laplacian matrix *Q*(*G*) of a graph *G*. Its definition naturally generalizes to multi-graphs. If we construct the matrix *Q*(*M*) of some multi-graph *M* and if this matrix coincides with *N*_{out}(*D*) for some digraph *D*, then we have an interesting link between the N-spectrum of *D* and the signless Laplacian spectrum of *M*. It is therefore not surprising that [4] investigates pairs (*D*, *M*) such that *N*_{out}(*D*) = *Q*(*M*). Let us clarify how to construct such pairs.

*Let* *D* *be a digraph and let* *M* *be a loopless multi-graph, both of order* *n*. *Define* *R*(*D*) *as the multi-relation of distinct vertices of* *D* *having common out-neighbors, i.e. the multiplicity of each pair* (*v*, *w*) ∈ *R*(*D*) *equals the number of common out-neighbors of the vertices* *v* ≠ *w* *in* *D*. *Then* *N*_{out}(*D*) = *Q*(*M*) *if and only if the following conditions are satisfied*:

- (i) *M* *represents the multi-relation* *R*(*D*).
- (ii) *For every vertex* *v* *of* *D*, *the out-degree of* *v* *equals the number of instances in which there exist vertices* *w*, *z* ∈ *V*(*D*) *such that* *w* ≠ *v* *and* *z* *is a common out-neighbor of* *v* *and* *w*.

By the definition of the matrix *N*_{out}(*D*), each of its off-diagonal entries specifies the number of common out-neighbors of the vertices associated with the row/column indices of the respective considered entry. So the off-diagonal part of *N*_{out}(*D*) exactly represents *R*(*D*). Note here that, by construction, *R*(*D*) contains no pairs (*v*, *v*).

It follows that the off-diagonal entries of *N*_{out}(*D*) and *Q*(*M*) = *A*(*M*) + *D*(*M*) coincide if and only if *A*(*M*) is the adjacency matrix of (the multi-graph associated with) the relation *R*(*D*) – which is equivalent to condition (i).

The diagonal entries of *N*_{out}(*D*) are the out-degrees of the respective vertices of *D*, whereas the diagonal entries of *Q*(*M*) are the degrees of the vertices of *M*. But with respect to *R*(*D*) the degree of a vertex *v* of *M* in turn equals the number of instances in which there exist vertices *w*, *z* ∈ *V*(*D*) such that *w* ≠ *v* and *z* is a common out-neighbor of *v* and *w*. It follows, under condition (i), that the diagonal entries of *N*_{out}(*D*) and *Q*(*M*) coincide if and only if condition (ii) holds.□
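Given this proof, the two conditions can be tested together directly on *N*_{out}(*D*): since the degree of *v* in *M* equals the off-diagonal row sum, *N*_{out}(*D*) = *Q*(*M*) holds for some loopless multi-graph *M* exactly when every diagonal entry matches the sum of the off-diagonal entries in its row. A minimal sketch (helper names are our own):

```python
def n_out(n, edges):
    # N_out(D) = A(D) A(D)^T
    A = [[0] * n for _ in range(n)]
    for u, v in edges:
        A[u][v] = 1
    return [[sum(A[i][t] * A[j][t] for t in range(n)) for j in range(n)]
            for i in range(n)]

def is_signless_laplacian(N):
    # N = Q(M) for a loopless multi-graph M iff every diagonal entry equals
    # the sum of the off-diagonal entries in its row (conditions (i) and (ii))
    n = len(N)
    return all(N[i][i] == sum(N[i][j] for j in range(n) if j != i)
               for i in range(n))

# two sources sharing one out-neighbor: N_out = Q(single edge + isolated vertex)
assert is_signless_laplacian(n_out(3, [(0, 2), (1, 2)]))
# a single arc fails: out-degree 1 but no common out-neighborship at all
assert not is_signless_laplacian(n_out(2, [(0, 1)]))
```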

*Finding pairs* (*D*, *M*) *of digraphs* *D* *and loopless multi-graphs* *M* *such that* *N*_{out}(*D*) = *Q*(*M*) *is actually very easy. Start with an arbitrary digraph. In view of condition (ii) of Theorem 7, identify all vertices for which the associated diagonal entry of* *N*_{out}(*D*) *is "too large", i.e. strictly greater (by a difference of, say,* *d*) *than the sum of the other entries in the same row/column. For each such vertex* *v*, *execute the following step exactly* *d* *times: add a new vertex* *w* *as an additional in-neighbor of one of the out-neighbors of* *v*. *This step creates a new instance of common out-neighborship for* *v*, *hence extending the aforementioned row/column of* *N*_{out} *by a new entry* 1. *Moreover, by construction, in the resulting digraph the diagonal entry of* *N*_{out} *associated with* *w* *is less than or equal to the sum of the other entries in the same row/column. After carrying out this process for all identified vertices, none of the associated diagonal entries of* *N*_{out} *is too large any more. Further, for every vertex whose associated diagonal entry is too small we may simply add a suitable number of pendant sinks. All in all, we achieve that condition (ii) of Theorem 7 is satisfied. Hence* *M* *is now easily derived by means of condition (i)*.

The multi-graph representing the multi-relation *R*(*D*) mentioned in Theorem 7 can also be constructed from the block separation of *D*:

*Given a digraph* *D* *and its block separation digraph* *D*′, *create a new undirected multi-graph* *M* *as follows*:

- (i) *Let* *M* *initially have the vertices of* *D* *but no edges.*
- (ii) *Let every duplicated vertex in* *D*′ *represent a clique formed by its original in-neighbors. For each duplicated vertex of* *D*′, *augment* *M* *by introducing new edges for the associated clique, skipping any* 1-*cliques.*

*Then* *M* *represents* *R*(*D*).

Note that step (ii) adds multiple edges between vertices according to the number of cliques they are involved in.

The construction yields the desired result since the duplicated vertices in the block separation of *D* are exactly the sinks of the block separation, which in turn are exactly those vertices of *D* which act as common out-neighbors. Hence the sinks of the block separation introduce cliques in the relation *R*(*D*).□
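Following this proof, the multi-graph *M* can be assembled directly from the in-neighborhoods of *D*, since the sinks of the block separation correspond exactly to the vertices of *D* acting as common out-neighbors. A sketch (helper names are our own) that also cross-checks the edge multiplicities of *M* against the off-diagonal entries of *N*_{out}(*D*):

```python
from itertools import combinations
from collections import Counter

def relation_multigraph(n, edges):
    # each vertex z of D acts as a common out-neighbor of its in-neighbors
    # and thus contributes a clique on them; 1-cliques contribute nothing
    in_nbrs = [[] for _ in range(n)]
    for u, v in edges:
        in_nbrs[v].append(u)
    M = Counter()
    for z in range(n):
        for a, b in combinations(sorted(in_nbrs[z]), 2):
            M[(a, b)] += 1          # multi-edge: one copy per shared target z
    return M

def n_out(n, edges):
    A = [[0] * n for _ in range(n)]
    for u, v in edges:
        A[u][v] = 1
    return [[sum(A[i][t] * A[j][t] for t in range(n)) for j in range(n)]
            for i in range(n)]

n, edges = 5, [(0, 3), (1, 3), (2, 3), (0, 4), (2, 4)]
M = relation_multigraph(n, edges)
N = n_out(n, edges)
# edge multiplicity in M = number of common out-neighbors = off-diagonal entry
assert all(M[(a, b)] == N[a][b] for a in range(n) for b in range(a + 1, n))
```

Here vertex 3 induces a 3-clique on {0, 1, 2} and vertex 4 a 2-clique on {0, 2}, so the pair (0, 2) receives a double edge, mirroring the double edge in the example below.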

*Using the block separation digraph in Figure 7*, *the graph**M**constructed in Proposition 2 has vertices* 0, …, 10. *Vertex* 0′ *introduces a* 2-*clique among the vertices* {1, 2}. *Likewise, vertices* 1′, 7′ *and* 9′ *introduce* 2-*cliques among* {0, 2}, {0, 4} *and* {5, 6}, *respectively. The vertex* 2′ *gives rise to a* 3-*clique among* {0, 3, 4}. *Note that*, *altogether*, *we get a double edge between vertices* 0 *and* 4.

## References

- [2] Krystal Guo and Bojan Mohar, Hermitian adjacency matrix of digraphs and mixed graphs, J. Graph Theory 85 (2017), no. 1, 217–248.
- [3] Jianxi Liu and Xueliang Li, Hermitian-adjacency matrices and Hermitian energies of mixed graphs, Linear Algebra Appl. 466 (2015), 182–207.
- [4] Irena M. Jovanović, Non-negative spectrum of a digraph, Ars Math. Contemp. 12 (2017), no. 1, 167–182.
- [5] Erwin Kreyszig, Introductory functional analysis with applications, John Wiley & Sons, New York, 1978.
- [7] Dragoš M. Cvetković and Ivan M. Gutman, The algebraic multiplicity of the number zero in the spectrum of a bipartite graph, Mat. Vesn., N. Ser. 9 (1972), 141–150.
- [9] Krystyna T. Balińska, Dragoš M. Cvetković, Zoran S. Radosavljević, Slobodan K. Simić, and Dragan Stevanović, A survey on integral graphs, Publ. Elektroteh. Fak., Univ. Beogr., Ser. Mat. 13 (2002), 42–65.
- [11] Pavel Hic and Milan Pokorny, There are integral trees of diameter 7, Publ. Elektroteh. Fak., Univ. Beogr., Ser. Mat. 18 (2007), 59–63.
- [12] Ligong Wang, Xueliang Li, and Shenggui Zhang, Families of integral trees with diameters 4, 6, and 8, Discrete Appl. Math. 136 (2004), no. 2-3, 349–362.
- [13] Ljiljana Branković and Dragoš Cvetković, The eigenspace of the eigenvalue –2 in generalized line graphs and a problem in security of statistical databases, Publ. Elektroteh. Fak., Univ. Beogr., Ser. Mat. 14 (2003), 37–48.
- [14] Daniel A. Jaume, Gonzalo Molina, Adrián Pastine, and Martín D. Safe, A {–1, 0, 1}- and sparsest basis for the null space of a forest in optimal time, Linear Algebra Appl. 549 (2018), 53–66.
- [15] Milan Nath and Bhaba K. Sarma, On the null-spaces of acyclic and unicyclic singular graphs, Linear Algebra Appl. 427 (2007), no. 1, 42–54.
- [16] Torsten Sander and Jürgen W. Sander, On simply structured kernel bases of unicyclic graphs, AKCE J. Graphs. Combin. 4 (2007), 61–82.
- [17] Dragan Stevanović, On ±1 eigenvectors of graphs, Ars Math. Contemp. 11 (2016), no. 2, 415–423.
- [18] Saieed Akbari and Stephen J. Kirkland, On unimodular graphs, Linear Algebra Appl. 421 (2007), no. 1, 3–15.
- [19] Torsten Sander and Jürgen W. Sander, Tree decomposition by eigenvectors, Linear Algebra Appl. 430 (2009), 133–144.

## Footnotes

^{1} Instead of the terms incoming and outgoing neighbor we use the short forms in-neighbor and out-neighbor, respectively.