CC BY 4.0 license. Open Access. Published by De Gruyter, October 21, 2022

Eigenvalues of transition weight matrix for a family of weighted networks

  • Jing Su, Xiaomin Wang, Mingjun Zhang, and Bing Yao
From the journal Open Mathematics

Abstract

In this article, we design a family of scale-free networks and study their random target access time and weighted spanning trees through the eigenvalues of the transition weight matrix. First, we build a type of fractal network with a weight factor $r$ and a parameter $m$. Then, we obtain all the eigenvalues of its transition weight matrix by revealing the recursive relationship between the eigenvalues of every two consecutive time steps, and we obtain the multiplicities corresponding to these eigenvalues. Furthermore, we provide a closed-form expression for the random target access time of the studied network. The obtained results show that the random target access time is not affected by the weight; it is affected only by the parameters $m$ and $t$. Finally, we also enumerate the weighted spanning trees of the studied networks through the obtained eigenvalues.

MSC 2010: 05C81

1 Introduction

In research related to complex networks, in addition to the topological properties, it is also necessary to describe their dynamic processes. Random walks are one of the powerful research tools for describing various dynamical behaviors in complex systems, such as remote sensing image segmentation [1], epidemic spreading [2], community detection [3], and cell sampling [4], to name but a few. Random walks have been studied for many decades on both regular lattices and networks with a variety of structures [5]. For undirected weighted networks, the weight matrices of several classes of local random walk dynamics have been extensively studied, including normal random walks, biased random walks and preferential navigation, random walks in the context of digital image processing, maximum entropy random walks, and non-local random walks, including applications in the context of human mobility [6].

Among the research methods for random walks, the random target access time [7], mean first passage time [8], mixing time, and other indicators [9] can be used to measure the efficiency of propagation [10,11]. The random target access time is a manifestation of the global characteristics of a network: it is the average of the mean first passage time from one node to another, taken over all target nodes according to the stationary distribution, and it is formulated as the sum of the reciprocals of every nonzero eigenvalue of the normalized Laplacian matrix of the network under study. Related examples include studies on biased random walks on non-fractal and fractal structures [12].

The eigenvalues and eigenvectors of the transition matrix play a vital role [13], as they are closely related to determining the aforementioned measures. There has been some research on the eigenvalues of transition matrices of common networks, such as Sierpinski gaskets, extended T-fractals, and polymer networks; it is worth emphasizing that these studies are based on networks without weights [14,15,16]. However, the weights of a network are of important research significance in biological neural networks and in railway and air transportation [17,18]; therefore, finding the relationship between random walks and the weight factor on different weighted networks is meaningful. Zhang et al. [19] studied the spectra of a weighted scale-free network and proved that the random target access time $H_t$ is closely related to a weight factor $r$. Dai et al. [20] obtained $H_t \sim N_t^{\ln(4r+4)/\ln 4}$ for a class of weighted scale-free triangulation networks. Zou et al. [21] proved that $H_t \sim N_t^{\log_5(1+4r)}$ for $r > 1$ on a weighted network with two hub nodes. These studies show that weight has a great influence on the random target access time of networks.

The main content of this article is arranged as follows. In the next section, we propose a graphic operation called the weighted edge multi-division operation and give the generation algorithm for a family of weighted networks. In Section 3, we find all eigenvalues of the transition weight matrix and determine their multiplicities. We then deduce all the eigenvalues of the normalized Laplacian matrix of our networks, derive the closed-form expression for the random target access time, and enumerate the weighted spanning trees in Section 4. In Section 5, we summarize the work of this article.

2 Design of weighted network models

Many phenomena can be reduced to graphs for research [22,23,24], so we now present the corresponding model of a family of weighted scale-free networks. Before that, we define a graphic operation called the weighted edge multi-division operation and obtain our weighted networks through recursive application of this operation.

Weighted edge multi-division operation: For a given edge $ii'$ with two end-nodes $i$ and $i'$ and weight $w$, replace the original edge $ii'$ with $m$ paths $i\,j_q\,i'$ ($q = 1, 2, \ldots, m$); the end-nodes of these paths coincide with $i$ and $i'$, respectively. The weight of each new edge $ij_q$ and $j_q i'$ is $r$ times the weight of the original edge $ii'$. The diagram of a weighted edge multi-division operation is shown in Figure 1.

Figure 1: A weighted edge multi-division operation.

With the preparation of the weighted edge multi-division operation, the weighted scale-free networks are constructed in an iterative way as depicted. Let $G(m,r,t)$ ($m \ge 2$, $r > 0$, $t \ge 0$) denote the weighted scale-free network after $t$ time steps. Initially, at $t = 0$, $G(m,r,0)$ is an edge with unit weight connecting two nodes. For $t \ge 1$, $G(m,r,t)$ is obtained from $G(m,r,t-1)$ by performing the weighted edge multi-division operation on each existing edge in $G(m,r,t-1)$. If the weight of an edge in $G(m,r,t-1)$ is $w(t-1)$, then the weight of each derived edge in $G(m,r,t)$ becomes $rw(t-1)$ according to the above graphic operation. Figure 2 illustrates the two particular examples $G(3,1,3)$ and $G(2,1,4)$; the weight of each edge in these two network examples is 1, which is omitted in Figure 2.
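The iterative construction is easy to implement directly. The following sketch (an illustrative addition, not part of the original derivation) builds the weighted edge list of $G(m,r,t)$ by repeatedly applying the weighted edge multi-division operation and checks the resulting node count, edge count, and node degrees against the formulas given in this section:

```python
from fractions import Fraction
from collections import Counter

def multi_divide(edges, m, r, next_id):
    """Apply the weighted edge multi-division operation to every edge:
    edge (i, i2) of weight w is replaced by m paths i - j_q - i2,
    each new edge carrying weight r*w."""
    new_edges = []
    for (i, i2, w) in edges:
        for _ in range(m):
            j, next_id = next_id, next_id + 1
            new_edges.append((i, j, r * w))
            new_edges.append((j, i2, r * w))
    return new_edges, next_id

def build_network(m, r, t):
    """Iterate t times, starting from a single edge of unit weight."""
    edges, next_id = [(0, 1, Fraction(1))], 2
    for _ in range(t):
        edges, next_id = multi_divide(edges, m, Fraction(r), next_id)
    return edges, next_id          # next_id equals the number of nodes

m, t = 3, 2
edges, n_nodes = build_network(m, 1, t)
assert n_nodes == (m*(2*m)**t + 3*m - 2)//(2*m - 1)   # N_t, equation (1)
assert len(edges) == (2*m)**t                          # E_t, equation (2)

deg = Counter()
for i, j, _ in edges:
    deg[i] += 1
    deg[j] += 1
assert deg[0] == m**t      # an initial node: degree grows as m^t
assert deg[2] == 2*m       # a node born at step 1: degree 2*m^(t-1)
```

Here node identifiers are assigned in order of creation, so node 0 is an initial node and node 2 is one of the first nodes added at step 1.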

Figure 2: Illustration for two networks $G(3,1,3)$ and $G(2,1,4)$.

The iterative growth of the network allows us to precisely analyze its topological characteristics, such as the order, the size, the sum of the weights of all edges, and so on. At each time step $t_i$ ($t_i \ge 1$), the number of newly added nodes is $V_{t_i} = m(2m)^{t_i - 1}$. Letting the network order $N_t$ be the total number of nodes in $G(m,r,t)$, we have

(1) $N_t = V_0 + \sum_{t_i=1}^{t} V_{t_i} = \dfrac{m(2m)^t + 3m - 2}{2m - 1}$.

The network size $E_t$ is the total number of edges in $G(m,r,t)$; then

(2) $E_t = 2m\,E_{t-1} = (2m)^t$.

Let $d_i(t)$ be the degree at time $t$ of a node $i$ in $G(m,r,t)$ that was generated at time step $t_i$; then we obtain the recursive formula $d_i(t+1) = m\,d_i(t)$, which gives $d_i(t) = 2m^{t-t_i}$.

The networks studied in this article display some representative topological characteristics observed in different real-life systems. They obey a significant power-law degree distribution with exponent $\gamma = 2 + \ln 2/\ln m$ and have a fractal dimension $f_B = \ln(2m)/\ln 2$, both of which show that the networks are fractal and scale-free [25,26].

Let $Q_t$ represent the total weight of all edges in $G(m,r,t)$; by construction, we have

(3) $Q_t = 2mr\,Q_{t-1}$;

then, considering the initial condition $Q_0 = 1$, we can solve

(4) $Q_t = (2mr)^t$.

For an edge $e = ij$ connecting two nodes $i$ and $j$ in $G(m,r,t)$, let $w_{ij}(t)$ denote the weight of edge $ij$, and let $s_i(t)$ be the strength of node $i$ at time step $t$, i.e., the sum of the weights of all edges adjacent to node $i$. It can be formulated as

(5) $s_i(t) = \sum_{j \in N(i)} w_{ij}(t) = mr\,s_i(t-1) = (mr)^{t-t_i}\,s_i(t_i)$,

where $N(i)$ is the set of all neighbors of node $i$ in the network $G(m,r,t)$.
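As a quick consistency check (our addition, not in the original text), the closed forms (1), (2), and (4) can be verified against their one-step recursions using exact integer arithmetic:

```python
def N(m, t):  # network order, equation (1)
    return (m*(2*m)**t + 3*m - 2)//(2*m - 1)

def E(m, t):  # network size, equation (2)
    return (2*m)**t

def Q(m, r, t):  # total edge weight, equation (4), here with integer r
    return (2*m*r)**t

for m in (2, 3, 4):
    for t in range(1, 6):
        assert N(m, t) == N(m, t - 1) + m*(2*m)**(t - 1)  # V_t new nodes
        assert E(m, t) == 2*m*E(m, t - 1)                 # equation (2)
        assert Q(m, 5, t) == 2*m*5*Q(m, 5, t - 1)         # equation (3)
```

All divisions are exact because $2m \equiv 1 \pmod{2m-1}$, so the numerator of (1) is always divisible by $2m-1$.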

3 Eigenvalues and their multiplicities of transition weight matrix

In this section, we determine all eigenvalues of the transition weight matrix of our networks and their multiplicities, using the relationship between the eigenvalues of the transition weight matrices of two consecutive generations.

3.1 Relationship between eigenvalues

Let $W_t$ be the weighted adjacency matrix of network $G(m,r,t)$, with entries $W_t(i,j) = w_{ij}(t)$ if nodes $i$ and $j$ are connected by an edge with weight $w_{ij}(t)$ in $G(m,r,t)$, and $W_t(i,j) = 0$ if there is no edge between node $i$ and node $j$. Another important matrix $T_t$ involved in describing weighted random walks is the transition weight matrix, defined as $T_t = S_t^{-1} W_t$, where $S_t$ is the diagonal strength matrix of network $G(m,r,t)$ whose $i$th diagonal entry is the strength $s_i(t)$ of node $i$. Thus, the $(i,j)$th element of $T_t$ is $T_t(i,j) = w_{ij}(t)/s_i(t)$, which represents the transition probability for a particle going from starting node $i$ to ending node $j$. Since $T_t$ is asymmetric, we introduce a real and symmetric matrix $P_t$ to assist our research,

(6) $P_t = S_t^{-\frac{1}{2}} W_t S_t^{-\frac{1}{2}} = S_t^{\frac{1}{2}} T_t S_t^{-\frac{1}{2}}$,

which shows that matrix $P_t$ is similar to matrix $T_t$, so they have the same eigenvalue set. In addition, the entry in the $i$th row and $j$th column of $P_t$ is $P_t(i,j) = w_{ij}(t)/\sqrt{s_i(t)\,s_j(t)}$ according to equation (6). Furthermore, if $\phi$ is an eigenvector of matrix $P_t$ associated with eigenvalue $\lambda$, then the eigenvector of $T_t$ corresponding to eigenvalue $\lambda$ can be written as $S_t^{-\frac{1}{2}}\phi$. Therefore, we only need to find all the eigenvalues of matrix $P_t$; the eigenvalues of the matrix $T_t$ are then obtained through this transformation. Furthermore, we introduce the normalized Laplacian matrix $L_t = I_t - P_t$, where $I_t$ is the $N_t \times N_t$ identity matrix. In the following, we use the decimation approach [27] to enumerate all eigenvalues of the normalized Laplacian matrix of our network.
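The similarity between $T_t$ and $P_t$ can be illustrated numerically. The sketch below (our own illustrative example on a 3-node weighted path, not one of the $G(m,r,t)$ networks) builds $W$, $S$, $T = S^{-1}W$, and $P$, checks that $P$ is symmetric while $T$ is not, and verifies the eigenvector correspondence $\phi = S^{1/2}x$ for the eigenvalue $1$:

```python
import math

# Toy weighted path 1-2-3 with edge weights w12 = 2 and w23 = 3
W = [[0, 2, 0], [2, 0, 3], [0, 3, 0]]
s = [sum(row) for row in W]                                   # node strengths
T = [[W[i][j] / s[i] for j in range(3)] for i in range(3)]    # T = S^-1 W
P = [[W[i][j] / math.sqrt(s[i] * s[j]) for j in range(3)]     # P = S^-1/2 W S^-1/2
     for i in range(3)]

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(3)) for i in range(3)]

# P is symmetric while T is not
assert all(abs(P[i][j] - P[j][i]) < 1e-12 for i in range(3) for j in range(3))
assert T[0][1] != T[1][0]

# T has eigenvalue 1 with right eigenvector (1,1,1); the similar matrix
# P = S^(1/2) T S^(-1/2) then has eigenvector phi = S^(1/2)(1,1,1)
phi = [math.sqrt(si) for si in s]
Pphi = matvec(P, phi)
assert all(abs(Pphi[i] - phi[i]) < 1e-12 for i in range(3))
```

The same check works for any eigenpair: if $Tx = \lambda x$, then $P(S^{1/2}x) = \lambda(S^{1/2}x)$.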

For $G(m,r,t)$, suppose that $\lambda_i(t)$ is an eigenvalue of $P_t$ and that $\phi = (\phi_1, \phi_2, \ldots, \phi_{N_t})^{\top}$ denotes the eigenvector corresponding to the eigenvalue $\lambda_i(t)$, where $\phi_i$ is the component corresponding to node $i$. In order to distinguish between the nodes newly generated at time step $t$ and the existing old nodes (generated before time step $t$), we let $\bar\phi$ represent the $N_{t-1}$-dimensional vector obtained from $\phi$ by restricting its components to the old nodes at time step $t$, the set of all old nodes in $G(m,r,t)$ being denoted by $V(t-1)$; then

(7) $\lambda_i(t)\,\phi = P_t\,\phi$.

Viewing $\bar\phi$ as an eigenvector of matrix $P_{t-1}$ associated with the eigenvalue $\lambda_i(t-1)$, similarly to equation (7), we can obtain

(8) $\lambda_i(t-1)\,\bar\phi = P_{t-1}\,\bar\phi$.

Let $i \in V(t-1)$ be an old node in $G(m,r,t)$; we also have the following equation according to equation (7):

(9) $\lambda_i(t)\,\phi_i = \sum_{j \in N(i)} P_t(i,j)\,\phi_j$.

Consider two old nodes $i, i' \in V(t-1)$ in network $G(m,r,t)$ such that the weight of the edge connecting these two nodes in $G(m,r,t-1)$ is $w_{ii'}(t-1)$. According to the way the network grows in two consecutive time steps, let $j_1, j_2, \ldots, j_m$ be the $m$ new common neighbors of nodes $i$ and $i'$, and let $N_i(t)$ denote the set of neighbors of node $i$ at time $t$. Expanding equation (9) in detail then gives

(10) $\lambda_i(t)\,\phi_i = \sum_{j_1,\ldots,j_m \in N_i(t)} \left[ \dfrac{w_{ij_1}(t)}{\sqrt{s_i(t)s_{j_1}(t)}}\phi_{j_1} + \dfrac{w_{ij_2}(t)}{\sqrt{s_i(t)s_{j_2}(t)}}\phi_{j_2} + \cdots + \dfrac{w_{ij_m}(t)}{\sqrt{s_i(t)s_{j_m}(t)}}\phi_{j_m} \right]$.

For each component $\phi_{j_q}$ ($q = 1, 2, \ldots, m$) corresponding to a new node $j_q$, whose only neighbors are $i$ and $i'$, we can obtain

(11) $\lambda_i(t)\,\phi_{j_q} = \dfrac{w_{ij_q}(t)}{\sqrt{s_i(t)s_{j_q}(t)}}\phi_i + \dfrac{w_{i'j_q}(t)}{\sqrt{s_{i'}(t)s_{j_q}(t)}}\phi_{i'}, \qquad q = 1, 2, \ldots, m$.

We know that $w_{ij_q}(t) = w_{i'j_q}(t) = r\,w_{ii'}(t-1)$ for $q = 1, 2, \ldots, m$, $s_i(t) = mr\,s_i(t-1)$, $s_{i'}(t) = mr\,s_{i'}(t-1)$, and $s_{j_q}(t) = 2r\,w_{ii'}(t-1)$. Substituting equation (11) into equation (10), equation (10) becomes

(12) $\lambda_i(t)\,\phi_i = \sum_{i' \in N_i(t-1)} \left[ \dfrac{w_{ii'}(t-1)}{2\lambda_i(t)\,s_i(t-1)}\phi_i + \dfrac{w_{ii'}(t-1)}{2\lambda_i(t)\sqrt{s_i(t-1)\,s_{i'}(t-1)}}\phi_{i'} \right] = \dfrac{1}{2\lambda_i(t)}\phi_i + \dfrac{1}{2\lambda_i(t)} \sum_{i' \in N_i(t-1)} P_{t-1}(i,i')\,\phi_{i'}$.

For an old node $i \in V(t-1)$, we can further simplify equation (12) to obtain

(13) $\left(2\lambda_i^2(t) - 1\right)\phi_i = \sum_{i' \in N_i(t-1)} P_{t-1}(i,i')\,\phi_{i'}$.

Comparing equations (13) and (8) for any old node $i$, we obtain the following equation:

(14) $2\lambda_i^2(t) - 1 = \lambda_i(t-1)$;

furthermore, the solution $\lambda_i(t)$ is expressed in terms of $\lambda_i(t-1)$ as

(15) $\lambda_i(t) = \pm\sqrt{\dfrac{\lambda_i(t-1) + 1}{2}}$.

The above result shows that for each eigenvalue $\lambda_i(t-1)$ of matrix $P_{t-1}$, we obtain two eigenvalues $\lambda_{i,1}(t)$ and $\lambda_{i,2}(t)$ of the matrix $P_t$ through the recursive relationship in equation (15). Therefore, if all the eigenvalues of $P_{t-1}$ are known, then we can calculate all the eigenvalues of $P_t$. Any remaining eigenvalues of $P_t$ that cannot be derived from equation (15) are zero eigenvalues.
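The recursion (15) can be checked against the smallest networks. For $m = 2$, $G(2,r,1)$ is a 4-cycle for any weight $r$, and the spectrum of its transition matrix is $\{1, 0, 0, -1\}$. The sketch below (our addition) confirms that this is exactly the multiset produced from the spectrum $\{1, -1\}$ of $P_0$ by equation (15), and verifies the 4-cycle eigenpairs directly:

```python
import math

# Children of each eigenvalue of P_0 under lambda(t) = +-sqrt((lambda(t-1)+1)/2)
spec0 = [1.0, -1.0]
spec1 = sorted(x for lam in spec0
               for x in (math.sqrt((lam + 1)/2), -math.sqrt((lam + 1)/2)))
assert spec1 == [-1.0, 0.0, 0.0, 1.0]

# Direct check on P_1 for m = 2: a 4-cycle, transition probability 1/2 to
# each neighbor (with nodes labelled so that i, j are adjacent iff i+j is odd)
P1 = [[0.5 if (i + j) % 2 else 0.0 for j in range(4)] for i in range(4)]
pairs = [(1.0, [1, 1, 1, 1]), (-1.0, [1, -1, 1, -1]),
         (0.0, [1, 0, -1, 0]), (0.0, [0, 1, 0, -1])]
for lam, v in pairs:
    Pv = [sum(P1[i][j]*v[j] for j in range(4)) for i in range(4)]
    assert all(abs(Pv[i] - lam*v[i]) < 1e-12 for i in range(4))
```

Note that the parent $-1$ produces the single value $0$ twice, consistent with the doubled multiplicity.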

3.2 Multiplicities of eigenvalues

On the basis of the eigenvalues obtained above, we can determine their corresponding multiplicities for matrix $P_t$. For convenience, let $D_t^{\mathrm{mul}}(\lambda_i(t))$ be the degeneracy of eigenvalue $\lambda_i(t)$ for matrix $P_t$. Because $P_{t-1}$ is a real and symmetric matrix, every eigenvalue $\lambda_i(t-1)$ of $P_{t-1}$ has $D_{t-1}^{\mathrm{mul}}(\lambda_i(t-1))$ linearly independent eigenvectors.

For small networks, we can calculate the eigenvalues and their corresponding multiplicities directly; for instance, the set of eigenvalues of $P_0$ is $\{1, -1\}$. For $P_1$, the eigenvalues are $-1$, $0$, and $1$, where the two eigenvalues $-1$ and $1$ are generated by eigenvalue $1$ of $P_0$, and eigenvalue $0$ is generated by eigenvalue $-1$ of $P_0$. Similarly, we can obtain the five different eigenvalues $-1$, $-\frac{\sqrt{2}}{2}$, $0$, $\frac{\sqrt{2}}{2}$, $1$ of $P_2$ from the eigenvalues of $P_1$.

We can observe that, apart from the eigenvalues $\lambda_i(t) = 0$, all eigenvalues of a given time step $t_i$ exist in the subsequent time step $t_i + 1$, and each eigenvalue keeps the degeneracy it had at the previous time step. So, for $t \ge 2$, each eigenvalue of $G(m,r,t-1)$ brings about two eigenvalues of $G(m,r,t)$ in light of equation (15), except for the eigenvalues $-1$ and $1$: $-1$ can only produce eigenvalue $0$, while $1$ of $P_{t-1}$ produces $-1$ and $1$ in the next generation. Therefore, we need to consider individually the multiplicity of eigenvalue $0$, as well as the multiplicities of its descendants.

In general, we use $r(M)$ to represent the rank of a matrix $M$, and the multiplicity of the zero eigenvalue of matrix $P_t$ is expressed as

(16) $D_t^{\mathrm{mul}}(0) = N_t - r(P_t)$.

Among the set of all nodes in $G(m,r,t)$, let $\alpha$ be the set of all nodes of $G(m,r,t-1)$, and let $\beta$ denote the set of nodes newly added at time step $t$. We now determine the rank of $P_t$. Due to the topology of the network, $P_t$ can be written in block form,

(17) $P_t = \begin{pmatrix} P_{\alpha,\alpha} & P_{\alpha,\beta} \\ P_{\beta,\alpha} & P_{\beta,\beta} \end{pmatrix} = \begin{pmatrix} 0 & P_{\alpha,\beta} \\ P_{\beta,\alpha} & 0 \end{pmatrix}$,

where the facts that $P_{\alpha,\alpha}$ is the $N_{t-1} \times N_{t-1}$ zero matrix and $P_{\beta,\beta}$ is the $(N_t - N_{t-1}) \times (N_t - N_{t-1})$ zero matrix have been substituted into equation (17).

Because the matrix $P_{\beta,\alpha}$ is the transpose of the matrix $P_{\alpha,\beta}$, we only need to analyze the rank of the matrix $P_{\alpha,\beta}$, an $N_{t-1} \times (N_t - N_{t-1})$ matrix taking the form

(18) $P_{\alpha,\beta} = \begin{pmatrix} p_1 & \cdots & p_1 & 0 & \cdots & 0 \\ 0 & \cdots & 0 & p_1 & \cdots & p_1 \\ & p_2 \cdots p_2 & & p_2 \cdots p_2 & & \\ & & \vdots & & & \end{pmatrix}$,

where the first $(N_t - N_{t-1})/2$ entries in the first row of the matrix $P_{\alpha,\beta}$ are $p_1$; the last $(N_t - N_{t-1})/2$ entries of the second row are $p_1$; there are $2m$ entries $p_2$ in each row from the third row to the $N_{t-1}$th row, where each "$p_2 \cdots p_2$" in equation (18) denotes a run of $m$ entries $p_2$; and the unmarked entries of the matrix are zeros. We perform elementary row operations on the matrix $P_{\alpha,\beta}$: add the entries of the second row to the first row, then multiply the entries of the third through $N_{t-1}$th rows by $-1$ and add them to the first row. As a result of these elementary row operations, the entries in the first row are all zeros, and the matrix $P_{\alpha,\beta}$ takes the following form:

(19) $P_{\alpha,\beta} = \begin{pmatrix} 0 & \cdots & 0 & 0 & \cdots & 0 \\ 0 & \cdots & 0 & p_1 & \cdots & p_1 \\ & p_2 \cdots p_2 & & p_2 \cdots p_2 & & \\ & & \vdots & & & \end{pmatrix}$,

from which we obtain $r(P_{\alpha,\beta}) = N_{t-1} - 1$; considering $P_{\beta,\alpha} = P_{\alpha,\beta}^{\top}$, then

(20) $r(P_{\alpha,\beta}) = r(P_{\beta,\alpha}) = N_{t-1} - 1 = \dfrac{m(2m)^{t-1} + 3m - 2}{2m - 1} - 1$.

In addition, we have

(21) $r(P_t) = r\begin{pmatrix} 0 & P_{\alpha,\beta} \\ P_{\beta,\alpha} & 0 \end{pmatrix} = 2N_{t-1} - 2 = \dfrac{2m(2m)^{t-1} + 6m - 4}{2m - 1} - 2$.

Therefore, we obtain $D_t^{\mathrm{mul}}(0) = N_t - 2N_{t-1} + 2$, which indicates that the multiplicity of the eigenvalue zero of $P_t$ is

(22) $D_t^{\mathrm{mul}}(0) = \begin{cases} 0, & t = 0, \\[4pt] \dfrac{(2m)^t(m-1) + m}{2m - 1}, & t \ge 1. \end{cases}$

The total number of the eigenvalue $0$ and its descendants in $P_t$ ($t \ge 1$) is denoted by $N_t^{\mathrm{seed}}(0)$; then

(23) $N_t^{\mathrm{seed}}(0) = \sum_{i=1}^{t} D_i^{\mathrm{mul}}(0)\, 2^{t-i} = \dfrac{m-1}{2m-1} \sum_{i=1}^{t} (2m)^i\, 2^{t-i} + \dfrac{m}{2m-1} \sum_{i=1}^{t} 2^{t-i} = \dfrac{m(2m)^t - m}{2m - 1}$.

For the eigenvalues $-1$ and $1$, we note that the eigenvalue $-1$ in $P_{t-1}$ produces eigenvalue $0$ according to equation (15); hence, for eigenvalue $-1$, the number of its descendants is already included in the count for eigenvalue $0$ calculated in equation (23). Next, considering the multiplicities of eigenvalues $-1$ and $1$ in $P_t$ ($t \ge 0$), we have

(24) $N_t^{\mathrm{mul}}(-1) = N_t^{\mathrm{mul}}(1) = 1$.

Apparently, the eigenvalues $-1$ and $1$ in $P_t$ are generated by the eigenvalue $1$ in $P_{t-1}$. Adding up the numbers of all the eigenvalues obtained above, we obtain the following result:

(25) $N_t^{\mathrm{seed}}(0) + N_t^{\mathrm{mul}}(-1) + N_t^{\mathrm{mul}}(1) = \dfrac{m(2m)^t + 3m - 2}{2m - 1} = N_t$,

which proves that we have found all the eigenvalues and determined the corresponding multiplicities of the matrix $P_t$. We then also obtain all the eigenvalues and their multiplicities of the transition weight matrix $T_t$ by a simple transformation.
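The multiplicity bookkeeping of equations (22), (23), and (25) can be cross-checked arithmetically; the following sketch (our addition) uses exact integer arithmetic, so every division below must be exact:

```python
def N(m, t):    # network order, equation (1)
    return (m*(2*m)**t + 3*m - 2)//(2*m - 1)

def D0(m, t):   # multiplicity of eigenvalue 0 of P_t, equation (22)
    return ((2*m)**t*(m - 1) + m)//(2*m - 1)

def seed(m, t): # eigenvalue 0 together with all its descendants, equation (23)
    return (m*(2*m)**t - m)//(2*m - 1)

for m in (2, 3, 5):
    for t in range(1, 7):
        # each zero born at step i has 2^(t-i) descendants at step t
        assert seed(m, t) == sum(D0(m, i)*2**(t - i) for i in range(1, t + 1))
        # adding the simple eigenvalues -1 and 1 exhausts the spectrum, eq. (25)
        assert seed(m, t) + 2 == N(m, t)
```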

4 Applications of eigenvalues

The eigenvalues of the normalized Laplacian matrix of $G(m,r,t)$ can be determined from the eigenvalues obtained in Section 3.2 and can be used to calculate some quantities for the weighted scale-free networks, such as the random target access time for random walks, and also to enumerate weighted spanning trees [27].

4.1 Random target access time

We set $H_{ij}(t)$ as the expected time for a particle starting from node $i$ to visit node $j$ for the first time in $G(m,r,t)$. Let $\pi = (\pi_1, \pi_2, \ldots, \pi_{N_t})^{\top}$ be the steady-state distribution for random walks on $G(m,r,t)$, where $\pi_i = s_i(t)/(2Q_t)$ satisfies $\sum_{i=1}^{N_t} \pi_i = 1$ and $\pi^{\top} T_t = \pi^{\top}$. The random target access time $H_t$ for random walks on $G(m,r,t)$ is defined as the expected time needed by a particle to travel from a node $i$ to a target node $j$ chosen randomly from all nodes according to the steady-state distribution [9]; it can be formulated as

(26) $H_t = \sum_{j=1}^{N_t} \pi_j H_{ij}(t)$.

The quantity $H_t$ does not depend on the starting node, so it can be re-expressed as

(27) $H_t = \sum_{i=1}^{N_t} \pi_i \sum_{j=1}^{N_t} \pi_j H_{ij}(t) = \sum_{j=1}^{N_t} \pi_j \sum_{i=1}^{N_t} \pi_i H_{ij}(t)$.

Equation (27) implies that the random target access time $H_t$ can be regarded as the average trapping time of a particular trapping problem; it contains much valuable information about trapping in network $G(m,r,t)$. The random target access time in network $G(m,r,t)$ can be obtained as the sum of the reciprocals of $1$ minus each eigenvalue of $T_t$, the eigenvalue $1$ not being included [7].

The normalized Laplacian matrix of $G(m,r,t)$ is defined as $L_t = I_t - P_t$, where $I_t$ denotes the identity matrix of order $N_t \times N_t$. Let $\{\lambda_i(t) : 1 \le i \le N_t\}$ be the $N_t$ eigenvalues of matrix $P_t$. By definition, let $\sigma_i(t) = 1 - \lambda_i(t)$ be the eigenvalues of $L_t$ for $1 \le i \le N_t$; then there is a one-to-one correspondence between $\sigma_i(t)$ and $\lambda_i(t)$. It can be proved that $H_t$ can be represented in terms of the nonzero eigenvalues of $L_t$ [28] as follows:

(28) $H_t = \sum_{i=2}^{N_t} \dfrac{1}{\sigma_i(t)} = \sum_{i=2}^{N_t} \dfrac{1}{1 - \lambda_i(t)}$.
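For the smallest nontrivial case this sum is easy to evaluate by hand. $G(2,r,1)$ is a 4-cycle, every transition probability is $w/(2w) = 1/2$ regardless of $r$, and the spectrum of $P_1$ is $\{1, 0, 0, -1\}$; the check below (our addition) evaluates equation (28):

```python
# Spectrum of P_1 for G(2, r, 1): independent of the weight factor r,
# since every transition probability is w/(2w) = 1/2.
eigs = [1.0, 0.0, 0.0, -1.0]

# Equation (28): sum 1/(1 - lambda) over all eigenvalues except lambda = 1
H1 = sum(1/(1 - lam) for lam in eigs if lam != 1.0)
assert abs(H1 - 2.5) < 1e-12    # = 1 + 1 + 1/2
```

The weight factor $r$ cancels out of every transition probability, which is the mechanism behind the weight-independence of $H_t$ discussed below.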

Theorem 1

For $t > 0$, the random target access time of the weighted network $G(m,r,t)$ is

$H_t = \dfrac{2(2m)^t(m-1) + 16m^2 - 4m - 1}{2(2m-1)}$;

when $t \to \infty$, the relationship between $H_t$ and the network order $N_t$ is

$H_t \sim \dfrac{m-1}{m}\, N_t$.

Proof

According to equation (15) and $\sigma_i(t) = 1 - \lambda_i(t)$, we can easily obtain the following recursive relation governing the eigenvalues of two consecutive normalized Laplacian matrices $L_{t-1}$ and $L_t$:

(29) $\sigma_i(t) = 1 \pm \sqrt{\dfrac{2 - \sigma_i(t-1)}{2}}$.

We suppose that $\Omega_t$ contains all eigenvalues of matrix $L_t$ at time step $t$; then equation (29) means that each eigenvalue $\sigma_i(t-1) \in \Omega_{t-1}$ gives rise to two eigenvalues $\sigma_{i,1}(t), \sigma_{i,2}(t) \in \Omega_t$.

It is easy to obtain the following result due to equation (29) (taking $m = 2$ as an example): $\Omega_0 = \{0, 2\}$; $0 \in \Omega_0$ generates the eigenvalues $0, 2 \in \Omega_1$, and $2 \in \Omega_0$ generates two eigenvalues $1 \in \Omega_1$, so $\Omega_1 = \{0, 1, 1, 2\}$. Then $0 \in \Omega_1$ generates the eigenvalues $0, 2 \in \Omega_2$; $2 \in \Omega_1$ generates eigenvalues $1 \in \Omega_2$; and each $1 \in \Omega_1$ generates the two eigenvalues $1 + \frac{\sqrt{2}}{2}$ and $1 - \frac{\sqrt{2}}{2}$; thus, we can obtain $\Omega_2 = \{0, 1 - \frac{\sqrt{2}}{2}, 1 - \frac{\sqrt{2}}{2}, 1, 1, 1, 1, 1, 1, 1 + \frac{\sqrt{2}}{2}, 1 + \frac{\sqrt{2}}{2}, 2\}$. Similarly, $\Omega_3 = \{0, 2, \Omega_3^{(1)}, \Omega_3^{(2)}\}$, where $\Omega_3^{(1)} = \{1, 1, \ldots, 1\}$ contains all eigenvalues $1$ and $\Omega_3^{(2)}$ is generated via equation (29) from $\Omega_2 \setminus \{0, 2\}$.

In order to facilitate calculations, we divide the set $\Omega_t$ into three disjoint subsets, $\Omega_t = \Omega_t^{(1)} \cup \Omega_t^{(2)} \cup \Omega_t^{(3)}$ with $\Omega_t^{(3)} = \{0, 2\}$, where $\Omega_t^{(1)}$ consists of the eigenvalue $1$ with multiplicity $\frac{(2m)^t(m-1) + m}{2m-1}$, and the eigenvalues generated via equation (29) from $\sigma_i(t-1) \in \Omega_{t-1} \setminus \{0, 2\}$ make up $\Omega_t^{(2)}$. First, we consider the subset $\Omega_t^{(1)}$; then

(30) $\sum_{\sigma_i(t) \in \Omega_t^{(1)}} \dfrac{1}{\sigma_i(t)} = \dfrac{(2m)^t(m-1) + m}{2m - 1}$.

Hence, we still need to evaluate $\sum_{\sigma_i(t) \in \Omega_t^{(2)}} \frac{1}{\sigma_i(t)}$ in order to determine $H_t$. According to Vieta's formulas, we have $\sigma_{i,1}(t) + \sigma_{i,2}(t) = 2$ and $\sigma_{i,1}(t)\,\sigma_{i,2}(t) = \sigma_i(t-1)/2$. In addition, we have

(31) $\dfrac{1}{\sigma_{i,1}(t)} + \dfrac{1}{\sigma_{i,2}(t)} = \dfrac{4}{\sigma_i(t-1)}$,

which indicates that

(32) $\sum_{\sigma_i(t) \in \Omega_t^{(2)}} \dfrac{1}{\sigma_i(t)} = 4 \sum_{\sigma_i(t-1) \in \Omega_{t-1} \setminus \{0,2\}} \dfrac{1}{\sigma_i(t-1)}$.

Combining the above results in equations (30) and (32), and noting that $\sum_{\sigma_i(t-1) \in \Omega_{t-1} \setminus \{0,2\}} \frac{1}{\sigma_i(t-1)} = H_{t-1} - \frac{1}{2}$, the relationship between $H_t$ and $H_{t-1}$ can be obtained as follows:

(33) $H_t = \sum_{\sigma_i \in \Omega_t^{(1)}} \dfrac{1}{\sigma_i(t)} + \sum_{\sigma_i \in \Omega_t^{(2)}} \dfrac{1}{\sigma_i(t)} + \dfrac{1}{2} = 4\left(H_{t-1} - \dfrac{1}{2}\right) + \dfrac{(2m)^t(m-1) + m}{2m - 1} + \dfrac{1}{2}$.

Furthermore, using the initial condition $H_0 = \frac{1}{2}$, we can solve this recursion to obtain

(34) $H_t = \dfrac{2(2m)^t(m-1) + 16m^2 - 4m - 1}{2(2m-1)}$.

Finally, for large weighted fractal networks, that is, for $t \to \infty$, we explore how the random target access time of $G(m,r,t)$ changes as the network order increases. Since $N_t = \frac{m(2m)^t + 3m - 2}{2m-1}$, we can obtain

(35) $H_t \sim \dfrac{m-1}{m} N_t + \dfrac{16m^3 + 2m^2 - 11m + 4}{2m(2m-1)}$.

Equation (35) implies that, in the large-$t$ limit, the random target access time increases as a linear function of the network order; $H_t$ is affected only by the topological parameter $m$ and has no relationship with the weight factor $r$. In Figure 3, we give a schematic diagram revealing the relationships between $H_t$ and the two parameters $t$ and $m$. In more detail, when the value of $m$ is fixed, $H_t$ increases monotonically with $t$ for $t \ge 0$, as shown in Figure 4. Similarly, for a given $t$, $H_t$ increases monotonically with $m$ for $m \ge 2$, as shown in Figure 5.□
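Reading the recursion of the proof as $H_t = 4(H_{t-1} - \frac{1}{2}) + D_t^{\mathrm{mul}}(0) + \frac{1}{2}$ with $H_0 = \frac{1}{2}$ (our interpretation of equation (33)), it can be iterated numerically; the sketch below (our addition) reproduces the hand-computed value $H_1 = 5/2$ for $m = 2$ and exhibits the monotonic behavior in $t$ and $m$ reported in Figures 4 and 5:

```python
def D0(m, t):  # multiplicity of eigenvalue 0 of P_t, equation (22)
    return ((2*m)**t*(m - 1) + m)//(2*m - 1)

def H(m, t):
    # Iterate the recursion of equation (33) starting from H_0 = 1/2
    h = 0.5
    for k in range(1, t + 1):
        h = 4*(h - 0.5) + D0(m, k) + 0.5
    return h

assert H(2, 1) == 2.5   # matches the direct 4-cycle value of H_1
assert all(H(3, t + 1) > H(3, t) for t in range(6))     # monotone in t
assert all(H(m + 1, 3) > H(m, 3) for m in range(2, 6))  # monotone in m
```

None of these values depend on the weight factor $r$, consistent with the claim of Theorem 1.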

Figure 3: Random target access time $H_t$ of $G(m,r,t)$.

Figure 4: When $m$ is fixed, $H_t$ increases monotonically with $t$ for $t \ge 0$.

Figure 5: When $t$ is fixed, $H_t$ increases monotonically with $m$ for $m \ge 2$.

4.2 Weighted counting of spanning trees

A spanning tree of a connected network $G(m,r,t)$ is a subgraph that is a tree $T$ and includes all the nodes of $G(m,r,t)$ [29]. For a weighted network $G(m,r,t)$, let $\Lambda(G(m,r,t))$ be its set of spanning trees, and let $w(T) = \prod_{e \in T} w_e$ be the product of the weights of all edges in $T$, where $w_e$ is the weight of edge $e$. Let $N_{st}^{w}(G(m,r,t))$ denote the weighted count of spanning trees of the weighted fractal network $G(m,r,t)$, that is, $N_{st}^{w}(G(m,r,t)) = \sum_{T \in \Lambda(G(m,r,t))} w(T)$.

Unlike approaches that use electrical network theory to obtain a closed-form formula for spanning tree enumeration [30,31] or that calculate the weighted spanning trees through the special structure of the networks [32], in this subsection we count $N_{st}^{w}(G(m,r,t))$ by using the eigenvalues of the normalized Laplacian matrix $L_t$ of $G(m,r,t)$, that is,

(36) $N_{st}^{w}(G(m,r,t)) = \dfrac{\prod_{i=1}^{N_t} s_i(t) \prod_{i=2}^{N_t} \sigma_i(t)}{\sum_{i=1}^{N_t} s_i(t)}$.
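For $G(2,r,1)$, a 4-cycle with all edge weights $r$, equation (36) can be evaluated by hand and compared with direct enumeration: each spanning tree drops one of the four edges and keeps three edges of weight $r$. A small exact-arithmetic sketch of this check (our addition):

```python
from fractions import Fraction

r = Fraction(3, 2)              # an arbitrary weight factor
strengths = [2*r]*4             # every node of the 4-cycle has strength 2r
sigma = [0, 1, 1, 2]            # eigenvalues of the normalized Laplacian L_1

phi = Fraction(1)
for s in strengths:
    phi *= s                    # product of all strengths
theta = Fraction(1)
for sg in sigma[1:]:
    theta *= sg                 # product of the nonzero sigma_i

Nst = phi*theta/sum(strengths)  # equation (36)
assert Nst == 4*r**3            # direct enumeration: four trees of weight r^3
```

Unlike $H_t$, the weighted spanning tree count does depend on $r$, as the factor $r^3$ shows.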

Theorem 2

For $t > 0$, the weighted spanning tree count of the weighted network $G(m,r,t)$ is

$N_{st}^{w}(G(m,r,t)) = \left(\dfrac{2}{mr}\right)^{t} (mr)^{\sum_{i=1}^{t} N_i} \left(\dfrac{1}{2}\right)^{\sum_{i=0}^{t-1} N_i}$.

Proof

We first need to determine the three terms in equation (36). For the sum in the denominator, we have

(37) $\sum_{i=1}^{N_t} s_i(t) = 2Q_t = 2(2mr)^t$.

Considering the two product terms in the numerator of equation (36), let $\Phi_t$ represent the product $\prod_{i=1}^{N_t} s_i(t)$ and $\Theta_t$ represent the product $\prod_{i=2}^{N_t} \sigma_i(t)$, respectively. According to the calculation formula for the strength of a node, the quantity $\Phi_t$ obeys the following recursive relation:

(38) $\Phi_t = (mr)^{N_t}\, \Phi_{t-1}$.

Based on the results for the eigenvalues obtained above, the following equation holds:

(39) $\Theta_t = 2 \times \left(\dfrac{1}{2}\right)^{N_{t-1} - 1} \times \Theta_{t-1}$.

Then, multiplying equation (38) by equation (39) results in

(40) $\Phi_t \Theta_t = 4 (mr)^{N_t} \left(\dfrac{1}{2}\right)^{N_{t-1}} \Phi_{t-1} \Theta_{t-1}$.

For the simple case of $t = 0$, we can easily obtain $\Phi_0 = 1$ and $\Theta_0 = 2$, that is, $\Phi_0 \Theta_0 = 2$; then equation (40) is solved to give

(41) $\Phi_t \Theta_t = 2^{2t+1} (mr)^{\sum_{i=1}^{t} N_i} \left(\dfrac{1}{2}\right)^{\sum_{i=0}^{t-1} N_i}$.

Inserting the results of equations (37) and (41) into equation (36), we obtain the following expression of $N_{st}^{w}(G(m,r,t))$ for the studied weighted network $G(m,r,t)$:

(42) $N_{st}^{w}(G(m,r,t)) = \left(\dfrac{2}{mr}\right)^{t} (mr)^{\sum_{i=1}^{t} N_i} \left(\dfrac{1}{2}\right)^{\sum_{i=0}^{t-1} N_i}$.

The result of equation (42) is consistent with the result of direct enumeration, which verifies that our calculation of the eigenvalues of the transition weight matrix of the weighted network $G(m,r,t)$ is correct.□

5 Conclusion

Many studies have verified that the weights of some networks strongly affect the random target access time. Unlike these existing weighted networks, we have found a family of weighted networks whose random target access time is not controlled by the weight factor, and these networks have been shown to exhibit the scale-free property observed in many real-life complex systems. We have listed all eigenvalues of the transition weight matrix of $G(m,r,t)$ by giving the explicit recursive expression that governs the eigenvalues of networks of two consecutive generations; that is, two eigenvalues of $P_t$ can be derived from each eigenvalue $\lambda_i(t-1)$ of $P_{t-1}$. On this basis, we have obtained all the eigenvalues and their corresponding multiplicities for the transition weight matrix $T_t$ of the network $G(m,r,t)$ and proved that $H_t$ is controlled only by the parameter $m$. Finally, we have used the obtained eigenvalues of the normalized Laplacian matrix to enumerate the weighted spanning trees of the network. In the future, we will explore the influence of weights on the efficiency of random walks in networks with properties other than scale-free.

  1. Funding information: This research was supported by the National Key Research and Development Plan under Grant No. 2019YFA0706401 and the National Natural Science Foundation of China under Grant Nos. 61872166 and 61662066, the Technological Innovation Guidance Program of Gansu Province: Soft Science Special Project (21CX1ZA285), and the Northwest China Financial Research Center Project of Lanzhou University of Finance and Economics (JYYZ201905).

  2. Author contributions: B.Y. and J.S. created and conceptualized the idea. J.S. wrote the original draft. X.W. and M.Z. reviewed and edited the draft. All authors have accepted responsibility for the entire content of this manuscript and approved its submission. The authors applied the SDC approach for the sequence of authors.

  3. Conflict of interest: The authors state no conflict of interest.

  4. Data availability statement: Data sharing is not applicable to this article as no datasets were generated or analyzed during this study.

References

[1] X. D. Zhao, R. Tao, X. J. Kang, and W. Li, Hierarchical-biased random walk for urban remote sensing image segmentation, IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 12 (2019), no. 5, 1–13, https://doi.org/10.1109/JSTARS.2019.2905352.

[2] M. Bestehorn, A. P. Riascos, T. M. Michelitsch, and B. A. Collet, A Markovian random walk model of epidemic spreading, Continuum Mech. Thermodyn. 33 (2021), no. 10, 1207–1221, https://doi.org/10.1007/s00161-021-00970-z.

[3] B. F. Hu, H. Wang, and Y. J. Zheng, Sign prediction and community detection in directed signed networks based on random walk theory, Int. J. Emb. Syst. 11 (2019), no. 2, 200, https://doi.org/10.1504/IJES.2019.10019713.

[4] H. Zhang, H. Q. Zhu, and X. F. Ling, Polar coordinate sampling-based segmentation of overlapping cervical cells using attention U-Net and random walk, Neurocomputing 383 (2020), no. 3, 212–223, https://doi.org/10.1016/j.neucom.2019.12.036.

[5] N. Masuda, M. A. Porter, and R. Lambiotte, Random walks and diffusion on networks, Phys. Rep. 716–717 (2017), 1–58, https://doi.org/10.1016/j.physrep.2017.07.007.

[6] A. P. Riascos and J. L. Mateos, Random walks on weighted networks: a survey of local and non-local dynamics, J. Complex Netw. 9 (2021), 1–39, https://doi.org/10.1093/comnet/cnab032.

[7] M. Levene and G. Loizou, Kemeny's constant and the random surfer, Amer. Math. Monthly 109 (2002), no. 8, 741–745, https://doi.org/10.1080/00029890.2002.11919905.

[8] Z. Z. Zhang, A. Julaiti, B. Y. Hou, H. J. Zhang, and G. R. Chen, Mean first-passage time for random walks on undirected network, Eur. Phys. J. B 84 (2011), no. 4, 691–697, https://doi.org/10.1140/epjb/e2011-20834-1.

[9] E. I. Milovanović, M. M. Matejić, and I. Ž. Milovanović, On the normalized Laplacian spectral radius, Laplacian incidence energy and Kemeny's constant, Linear Algebra Appl. 582 (2019), no. 1, 181–196, https://doi.org/10.1016/j.laa.2019.08.004.

[10] P. C. Xie, Z. Z. Zhang, and F. Comellas, On the spectrum of the normalized Laplacian of iterated triangulations of graphs, Appl. Math. Comput. 273 (2016), no. 15, 1123–1129, https://doi.org/10.1016/j.amc.2015.09.057.

[11] Y. F. Chen, M. F. Dai, X. Q. Wang, Y. Sun, and W. Y. Su, Spectral analysis for weighted iterated triangulations of graphs, Fractals 26 (2018), no. 1, 1850017, https://doi.org/10.1142/S0218348X18500172.

[12] L. Gao, J. Peng, C. Tang, and A. P. Riascos, Trapping efficiency of random walks in weighted scale-free trees, J. Stat. Mech. Theory Exp. 2021 (2021), 063405, https://doi.org/10.1088/1742-5468/ac02cb.

[13] M. Liu, Z. Xiong, Y. Ma, P. Zhang, J. Wu, and X. Qi, DPRank centrality: Finding important vertices based on random walks with a new defined transition matrix, Future Gener. Comput. Syst. 83 (2017), 376–389, https://doi.org/10.1016/j.future.2017.10.036.

[14] N. Bajorin, T. Chen, A. Dagan, C. Emmons, M. Hussein, M. Khalil, et al., Vibration modes of 3n-gaskets and other fractals, J. Phys. A Math. Theor. 41 (2008), no. 1, 015101, https://doi.org/10.1088/1751-8113/41/1/015101.

[15] A. Julaiti, B. Wu, and Z. Z. Zhang, Eigenvalues of normalized Laplacian matrices of fractal trees and dendrimers: Analytical results and applications, J. Chem. Phys. 138 (2013), no. 20, 204116, https://doi.org/10.1063/1.4807589.

[16] M. F. Dai, X. Q. Wang, Y. Q. Sun, Y. Sun, and W. Su, Eigentime identities for random walks on a family of treelike networks and polymer networks, Phys. A 484 (2017), no. 15, 132–140, https://doi.org/10.1016/j.physa.2017.04.172.

[17] A. Irmanova, I. Dolzhikova, and A. P. James, Self tuning stochastic weighted neural networks, IEEE Int. Symp. Circuits Syst. 2020 (2020), 1–5, https://doi.org/10.1109/ISCAS45731.2020.9180809.

[18] A. D. Bona, D. Marcelo, K. O. Fonseca, and R. Luders, A reduced model for complex network analysis of public transportation systems, Phys. A 567 (2021), 125715, https://doi.org/10.1016/j.physa.2020.125715.

[19] Z. Z. Zhang, X. Y. Guo, and Y. H. Yi, Spectra of weighted scale-free networks, Sci. Rep. 5 (2015), no. 1, 17469, https://doi.org/10.1038/srep17469.

[20] M. F. Dai, J. Y. Liu, J. W. Chang, D. L. Tang, T. T. Ju, Y. Sun, et al., Eigentime identity of the weighted scale-free triangulation networks for weight-dependent walk, Phys. A 513 (2019), 202–209, https://doi.org/10.1016/j.physa.2018.08.172.

[21] J. H. Zou, M. F. Dai, X. Q. Wang, H. L. Tang, D. He, Y. Sun, et al., Eigenvalues of transition weight matrix and eigentime identity of weighted network with two hub nodes, Canad. J. Phys. 96 (2017), no. 3, 255–261, https://doi.org/10.1139/cjp-2017-0274.

[22] E. Q. Zhu, F. Jiang, C. J. Liu, and J. Xu, Partition independent set and reduction-based approach for partition coloring problem, IEEE Trans. Cybern. 52 (2022), no. 6, 1–10, https://doi.org/10.1109/TCYB.2020.3025819.

[23] C. J. Liu, A note on domination number in maximal outerplanar graphs, Discrete Appl. Math. 293 (2021), no. 1, 90–94, https://doi.org/10.1016/j.dam.2021.01.021.

[24] C. J. Liu, E. Q. Zhu, Y. K. Zhang, Q. Zhang, and X. P. Wei, Characterization, verification and generation of strategies in games with resource constraints, Automatica 140 (2022), 110254, https://doi.org/10.1016/j.automatica.2022.110254.

[25] Z. Z. Zhang, S. G. Zhou, and T. Zou, Self-similarity, small-world, scale-free scaling, disassortativity, and robustness in hierarchical lattices, Eur. Phys. J. B 56 (2007), no. 3, 259–271, https://doi.org/10.1140/epjb/e2007-00107-6.

[26] H. D. Rozenfeld, S. Havlin, and D. Ben-Avraham, Fractal and transfractal recursive scale-free nets, New J. Phys. 9 (2007), 175, https://doi.org/10.1088/1367-2630/9/6/175.

[27] A. Julaiti, B. Wu, and Z. Z. Zhang, Eigenvalues of normalized Laplacian matrices of fractal trees and dendrimers: Analytical results and applications, J. Chem. Phys. 138 (2013), no. 20, 204116, https://doi.org/10.1063/1.4807589.

[28] J. G. Kemeny and J. L. Snell, Finite Markov Chains, Amer. Math. Monthly 31 (1961), no. 67, https://doi.org/10.2307/2309264.

[29] Z. Z. Zhang, S. Q. Wu, M. Y. Li, and F. Comellas, The number and degree distribution of spanning trees in the Tower of Hanoi graph, Theoret. Comput. Sci. 609 (2016), no. 2, 443–455, https://doi.org/10.1016/j.tcs.2015.10.032.

[30] W. G. Sun, S. Wang, and J. Y. Zhang, Counting spanning trees in prism and anti-prism graphs, Appl. Anal. Comput. 6 (2016), no. 1, 65–75, https://doi.org/10.11948/2016006.

[31] Y. L. Shang, On the number of spanning trees, the Laplacian eigenvalues, and the Laplacian Estrada index of subdivided-line graphs, Open Math. 14 (2016), no. 1, 641–648, https://doi.org/10.1515/math-2016-0055.

[32] F. Ma and B. Yao, The number of spanning trees of a class of self-similar fractal models, Inform. Process. Lett. 136 (2018), 64–69, https://doi.org/10.1016/j.ipl.2018.04.004.

Received: 2021-11-12
Revised: 2022-04-20
Accepted: 2022-05-16
Published Online: 2022-10-21

© 2022 Jing Su et al., published by De Gruyter

This work is licensed under the Creative Commons Attribution 4.0 International License.
