CC BY 4.0 license. Open Access. Published by De Gruyter, November 6, 2021.

Critical Galton–Watson Processes with Overlapping Generations

  • Serik Sagitov

Abstract

A properly scaled critical Galton–Watson process converges to a continuous state critical branching process $\xi(\cdot)$ as the number of initial individuals tends to infinity. We extend this classical result by allowing for overlapping generations and considering a wide class of population counts. The main result of the paper establishes the convergence of the finite-dimensional distributions for a scaled vector of multiple population counts. The set of the limiting distributions is conveniently represented in terms of the integrals $\bigl(\int_0^y \xi(y-u)\,du^\gamma,\ y\ge0\bigr)$ with a pertinent $\gamma\ge0$.

MSC 2010: 60J80

1 Introduction

One of the basic stochastic population models of a self-reproducing system is built upon the following two assumptions:

  (A) different individuals live independently of each other, according to the same individual life law described in (B);

  (B) an individual dies at age one and at the moment of death gives birth to a random number $N$ of offspring.

Within this model, the numbers of individuals $Z_0, Z_1, \ldots$ born at the times $t = 0, 1, \ldots$ form a Markov chain whose transition probabilities are fully described by the distribution of the offspring number $N$. The Markov chain $\{Z_t,\ t\ge0\}$ is usually called a Galton–Watson process, or GW-process for short. A GW-process is classified as subcritical, critical, or supercritical depending on whether the mean offspring number $E(N)$ is less than, equal to, or larger than the critical value 1.

It is known that, in the critical case, with

(1.1) $E(N) = 1,\qquad \mathrm{Var}(N) = 2b,\qquad b < \infty,$

the finite-dimensional distributions (fdds) of a properly scaled GW-process converge,

(1.2) $\{n^{-1}Z_{nu},\ u\ge0 \mid Z_0=n\} \stackrel{\mathrm{fdd}}{\longrightarrow} \{\xi(u),\ u\ge0 \mid \xi(0)=1\},\quad n\to\infty,$

and the limiting fdds are represented by a continuous state branching process $\xi(\cdot)$, a continuous time Markov process whose transition law is determined by

(1.3) $E\bigl(e^{-\lambda\xi(v+u)} \mid \xi(v)=x\bigr) = \exp\Bigl\{-\dfrac{\lambda x}{1+\lambda b u}\Bigr\},\quad v,u,x,\lambda\ge0.$

Note how the parameter $b$ acts as a time scale: the larger the variance of $N$, the faster the population size changes.
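The transition law (1.3) can be illustrated numerically: the Markov property forces the per-unit-mass log-Laplace exponent $\lambda \mapsto \lambda/(1+\lambda bu)$ to form a semigroup under composition in $u$. A minimal sketch in exact rational arithmetic (the particular values of $b$, $\lambda$, $u_1$, $u_2$ are arbitrary choices for illustration):

```python
from fractions import Fraction

def exponent(lam, u, b):
    # log-Laplace exponent per unit of initial mass, read off from (1.3):
    # E(exp(-lam * xi(v+u)) | xi(v) = x) = exp(-x * lam / (1 + lam*b*u))
    return lam / (1 + lam * b * u)

b = Fraction(3, 2)
lam = Fraction(7, 5)
u1, u2 = Fraction(1, 3), Fraction(4, 3)

# Markov (semigroup) property: evolving u2 time units and then u1 more
# must agree with evolving u1 + u2 units in a single step.
composed = exponent(exponent(lam, u2, b), u1, b)
assert composed == exponent(lam, u1 + u2, b)
```

The composition collapses because $g(g(\lambda,u_2),u_1) = \lambda/(1+\lambda b(u_1+u_2))$ identically, which is exactly the one-parameter semigroup structure behind (1.3).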

In this paper, we study $\{Z(t),\ t\ge0\}$, a Galton–Watson process with overlapping generations, or GWO-process for short, where $Z(t)$ is the number of individuals alive at time $t$ in a reproduction system satisfying the following two assumptions:

  (A*) different individuals live independently of each other, according to the same individual life law described in (B*);

  (B*) an individual lives $L$ units of time and gives $N$ births at the random ages $\tau_1,\ldots,\tau_N$ satisfying

    (1.4) $1\le\tau_1\le\cdots\le\tau_N\le L.$

Assumption (B*) allows for overlapping generations, when mothers may coexist with their daughters. We focus on the critical case (1.1) and aim at an extension of (1.2) to the GWO-processes.

The process $\{Z(t),\ t\ge0\}$, being non-Markov in general, is studied with the help of an associated renewal process, introduced in Section 2. The mean inter-arrival time

(1.5) $a := E(\tau_1+\cdots+\tau_N)$

of this renewal process gives the average generation length. It is important to distinguish between the average generation length $a$, which in this paper is assumed finite, and the average life length $\mu := E(L)$, which is allowed to be infinite.

With the more sophisticated reproduction mechanism (1.4), there are many interesting population counts to study alongside the number of newborns $Z_t$ and the number of individuals $Z(t)$ alive at time $t$. Observe that $Z_t$ is the total number of daughters produced at time $t$ by the $Z(t-1)$ individuals alive at time $t-1$. In particular, in the GW setting, $a=1$ and $Z(t)\equiv Z_t$, since all individuals alive are newborn.

An interesting class of population counts is treated by Theorem 4, dealing with decomposable multitype GW-processes. Theorem 4 is obtained as an application of the main results of the paper, Theorems 1, 2, and 3, stated and proven in Section 5. The following three statements are straightforward corollaries of Theorems 1, 2, and 3, respectively. In these corollaries, it is always assumed that the GWO-process stems from a large number $Z_0=n$ of progenitors born at time zero.

Corollary 1

Consider a GWO-process satisfying (1.1) and $a<\infty$. If $\mu<\infty$, then

$\{n^{-1}Z(nu),\ u>0 \mid Z_0=n\} \stackrel{\mathrm{fdd}}{\longrightarrow} \{\mu a^{-1}\xi(ua^{-1}),\ u>0 \mid \xi(0)=1\},\quad n\to\infty.$

Corollary 2

Consider a GWO-process satisfying (1.1) and $a<\infty$. If $\mu=\infty$ and, for some function $\ell(\cdot)$ slowly varying at infinity,

(1.6) $\sum_{j=0}^{t} P(L>j) = t^{\gamma}\ell(t),\quad 0\le\gamma\le1,\ t\to\infty,$

then, as $n\to\infty$,

$\{n^{-1-\gamma}\ell^{-1}(n)Z(nu),\ u>0 \mid Z_0=n\} \stackrel{\mathrm{fdd}}{\longrightarrow} \{a^{\gamma-1}\xi_\gamma(ua^{-1}),\ u>0 \mid \xi(0)=1\}.$

Corollary 3

Consider a GWO-process satisfying (1.1), $a<\infty$, and (1.6). Then, as $n\to\infty$,

$\bigl\{\bigl(n^{-1-\gamma}\ell^{-1}(n)Z(nu),\ n^{-1}Z_{nu}\bigr),\ u>0 \mid Z_0=n\bigr\} \stackrel{\mathrm{fdd}}{\longrightarrow} \bigl\{\bigl(a^{\gamma-1}\xi_\gamma(ua^{-1}),\ a^{-1}\xi(ua^{-1})\bigr),\ u>0 \mid \xi(0)=1\bigr\}.$

Notice that condition (1.6) holds even in the case $\mu<\infty$, with $\gamma=0$ and $\ell(t)\to\mu$ as $t\to\infty$. The family of processes $\{\xi_\gamma(\cdot)\}_{\gamma\ge0}$ emerging in our limit theorems can be expressed in the integral form

(1.7) $\xi_0(u) := \xi(u)$ for $\gamma=0$, and $\xi_\gamma(u) := \int_0^u \xi(u-v)\,dv^\gamma$ for $\gamma>0$, $u\ge0$,

which is treated as a convenient representation of the limiting fdds; see Section 4.
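For $\gamma>0$, the integral in (1.7) is a Stieltjes integral with respect to $v^\gamma$. Writing it out and taking expectations (the critical process satisfies $E_1\xi(y)=1$ for all $y$, as follows from (1.3)) gives a quick consistency check:

```latex
\xi_\gamma(u) = \int_0^u \xi(u-v)\,dv^\gamma
             = \gamma\int_0^u \xi(u-v)\,v^{\gamma-1}\,dv,
\qquad
E_1\bigl(\xi_\gamma(u)\bigr) = \gamma\int_0^u v^{\gamma-1}\,dv = u^\gamma .
```

This matches the scaling in Corollary 2 on the level of means: $n^{-1-\gamma}\ell^{-1}(n)\,E_n Z(nu) \to a^{-1}u^\gamma = a^{\gamma-1}\,E_1\xi_\gamma(ua^{-1})$, in agreement with Proposition 2.3 below.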

The following remarks comment on relevant literature and mention an interesting open problem.

  1. The GW-process is a basic model of the biologically motivated theory of branching processes; see [1, 7, 17]. The critical GW-process can be viewed as a stochastic model of sustainable reproduction, in which a mother produces on average one daughter; see [12]. On the convergence result (1.2) for critical GW-processes, see [1, 2, 10, 13].

  2. The GWO-process is a discrete time version of the so-called general branching process, often called the Crump–Mode–Jagers process; see [7, 10, 11, 18]. An early mention of a discrete time branching process with overlapping generations can be found in [21].

  3. The fruitful concept of population counts, allowing for a variety of individual scores (see Section 2), was first introduced in [9]. The interested reader may find several demographic examples of population counts in [9, 10].

  4. The above-mentioned Theorem 4 deals with the decomposable critical multitype GW-processes. In a more general setting, such processes were studied in [5], addressing related issues by applying a different approach.

  5. Compared to earlier attempts (see [19, 20] and especially [15]), the current treatment of critical age-dependent branching processes is made more accessible by restricting the analysis to the case of finite $\mathrm{Var}(N)$ and finite $a$, as well as by focusing on the discrete time setting.

  6. Our proofs do not use (1.2) as a known fact (unlike, for example, [8], which addresses a related problem). As a consequence, the convergence (1.2) can be derived from the above-mentioned Corollary 1.

  7. The branching renewal approach, introduced in Section 3, takes its origin in [6].

  8. The idea of studying branching processes starting from a large number of individuals is quite old; see [16] and especially [13]. For a most recent paper in the continuous time setting, see [14].

  9. The definitions and basic properties of slowly and regularly varying functions used in this paper can be found in [4]. We apply some basic facts of the renewal theory from [3].

  10. Our limit theorems are stated in terms of the fdd-convergence. Finding simple conditions on the individual scores, ensuring weak convergence in the Skorokhod sense, is an open problem.

Notational Agreements

  1. To avoid confusion, we set apart discrete and continuous variables:

    $i,j,k,l,n,p,q,s,t \in \mathbb{Z} = \{0,\pm1,\pm2,\ldots\},\qquad u,v,x,y,z,\lambda \in [0,\infty).$

    Mixed products are treated as integer numbers, so that $nu$ stands for $\lfloor nu\rfloor$. As a result, $nu\cdot n^{-1}$ is not always equal to $u$.

  2. We distinguish between a stronger and a weaker form of uniform convergence,

    $f^{(n)}(y) \Rightarrow_y f(y),\qquad f^{(n)}(y) \to_y f(y),\quad n\to\infty,$

    which respectively require the relations

    $\sup_{0\le y\le y_1} |f^{(n)}(y)-f(y)| \to 0,\qquad \sup_{y_0\le y\le y_1} |f^{(n)}(y)-f(y)| \to 0,\quad n\to\infty,$

    to hold for any $0<y_0<y_1<\infty$.

  3. We will write

    $E_n(\cdot) := E(\cdot \mid Z_0=n)$

    to say that the expected value is computed under the assumption that the GWO-process starts from $n$ individuals born at time 0. With a little risk of confusion, we will also write

    $E_x(\cdot) := E(\cdot \mid \xi(0)=x)$

    when the expectation deals with the finite-dimensional distributions of the continuous state branching process $\xi(\cdot)$.

  4. We will often use the following two shorthands:

    $e_1\{x\} := 1-e^{-x},\qquad e_2\{x\} := x - e_1\{x\} = e^{-x}-1+x.$

    Note that both these functions are increasing and that, for $0\le x\le y$,

    (1.8) $0\le e_1\{y\}-e_1\{x\}\le y-x,\qquad 0\le e_2\{x\}\le \min(x,\tfrac{1}{2}x^2),$
    (1.9) $e_1\{x+y\} = e_1\{x\}+e_1\{y\}-e_1\{x\}e_1\{y\},\qquad e_2\{x+y\} = e_2\{x\}+e_2\{y\}+e_1\{x\}e_1\{y\}.$

  5. In different formulas, the symbols $C, C_1, C_2, c, c_1, c_2$ represent different positive constants.
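The shorthands $e_1\{\cdot\}$, $e_2\{\cdot\}$ from item 4 and the identities (1.8)–(1.9) are elementary; a minimal numerical sanity check:

```python
import math

def e1(x):
    return 1.0 - math.exp(-x)

def e2(x):
    return math.exp(-x) - 1.0 + x      # equals x - e1(x)

for x, y in [(0.3, 0.7), (1.5, 2.0), (0.0, 4.2)]:
    # (1.9): addition rules for e1 and e2
    assert abs(e1(x + y) - (e1(x) + e1(y) - e1(x) * e1(y))) < 1e-12
    assert abs(e2(x + y) - (e2(x) + e2(y) + e1(x) * e1(y))) < 1e-12
    # (1.8): increments of e1 are dominated by increments of the identity,
    # and e2 is quadratically small near zero
    lo, hi = min(x, y), max(x, y)
    assert 0 <= e1(hi) - e1(lo) <= hi - lo
    assert 0 <= e2(lo) <= min(lo, 0.5 * lo * lo)
```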

2 Population Counts

The number of individuals alive at time $t$ can be counted as a sum of individual scores,

$Z(t) = \sum_{j=0}^{t}\sum_{k=1}^{Z_j} 1_{\{j\le t<j+L_{jk}\}} = \sum_{j=0}^{t}\sum_{k=1}^{Z_j} \zeta_{jk}(t-j),$

where $L_{jk}$ is the life length of the $k$-th individual born at time $j$ (according to an arbitrary labelling of the $Z_j$ individuals born at time $j$) and $\zeta_{jk}(t) = 1_{\{0\le t<L_{jk}\}}$ is its individual score. Here the individual score is 1 if the individual is alive at time $t$, and 0 otherwise. This representation leads to the following definition of a population count.

Definition 2.1

For a progenitor of the GWO-process, define its individual score as a vector $(\chi(t))_{t\in\mathbb{Z}}$ with non-negative, possibly dependent components such that $\chi(t)=0$ for all $t<0$. This random vector is allowed to depend on the individual characteristics (1.4), but it is assumed to be independent of such characteristics of other individuals.

Define a population count $X(t) = X[\chi](t)$ as the sum of time-shifted individual scores,

(2.1) $X(t) := \sum_{j=0}^{t}\sum_{k=1}^{Z_j} \chi_{jk}(t-j),\quad t\in\mathbb{Z},$

assuming that the individual scores $(\chi_{jk}(t))_{t\in\mathbb{Z}}$ are independent copies of $(\chi(t))_{t\in\mathbb{Z}}$.

2.1 The Litter Sizes

In terms of (1.4), the litter sizes of a generic individual are defined by $\nu(t) := \sum_{j=1}^{N} 1_{\{\tau_j=t\}}$, $t\ge1$, so that $\nu(1)+\cdots+\nu(L)=N$. On the other hand, given the random infinite-dimensional vector

$(L,\nu(1),\nu(2),\ldots),\qquad L\ge1,\ \nu(t)\ge0,\ t\ge1,$

where $\nu(t)$ is treated as the litter size at age $t$ for an individual with the life length $L$, the consecutive ages at childbearing can be recovered as

$\tau_j = \sum_{t=1}^{L} t\,1_{\{N(t-1)<j\le N(t)\}},\qquad N(t) := \nu(1)+\cdots+\nu(t\wedge L),$

where $N(t)$ is the number of daughters produced by a mother up to and including age $t$.

In the critical case, the probabilities

$A(t) := E(\nu(t)1_{\{L\ge t\}}),\quad t\ge1,$

sum to one, since $\sum_{t\ge1} A(t) = E(\nu(1)+\cdots+\nu(L)) = E(N) = 1$. A renewal process whose inter-arrival times have the distribution $A(1), A(2), \ldots$ plays a crucial role in the analysis of critical GWO-processes. Observe that the corresponding mean inter-arrival time is indeed given by (1.5):

$\sum_{t=1}^{\infty} tA(t) = E\Bigl(\sum_{t=1}^{\infty} t\nu(t)1_{\{L\ge t\}}\Bigr) = E\Bigl(\sum_{t=1}^{\infty} t\sum_{j=1}^{N} 1_{\{\tau_j=t\}}\Bigr) = E\Bigl(\sum_{j=1}^{N}\sum_{t=1}^{\infty} t\,1_{\{\tau_j=t\}}\Bigr) = E(\tau_1+\cdots+\tau_N) = a.$

To avoid possible confusion, we emphasise at this point that $\nu(t) = 0$ and $A(t) = 0$ for $t\le0$.
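The definitions above can be made concrete with a toy critical life law (a hypothetical example, not taken from the paper): with probability 1/2 an individual lives one unit of time and bears a single daughter at age 1, and with probability 1/2 it lives two units and bears a single daughter at age 2. A sketch computing $A(t)$ and checking $\sum_t A(t) = E(N) = 1$ and $\sum_t tA(t) = a$:

```python
from fractions import Fraction

half = Fraction(1, 2)
# (probability, life length L, litter sizes {age t: nu(t)})
scenarios = [
    (half, 1, {1: 1}),   # one daughter at age 1
    (half, 2, {2: 1}),   # one daughter at age 2
]

def A(t):
    # A(t) = E( nu(t) * 1{L >= t} )
    return sum(p * nu.get(t, 0) for p, L, nu in scenarios if L >= t)

assert sum(A(t) for t in (1, 2)) == 1          # criticality: E(N) = 1
a = sum(t * A(t) for t in (1, 2))
# a agrees with E(tau_1 + ... + tau_N), computed scenario by scenario
assert a == sum(p * sum(t * k for t, k in nu.items()) for p, L, nu in scenarios)
assert a == Fraction(3, 2)
```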

2.2 Associated Renewal Process

In the GWO setting with $Z_0=1$, the process $Z_t$ conditioned on $\{N(t)=k\}$, where $N(t)$ is the birth count of the founder, can be viewed as the sum of $k$ independent daughter copies, $Z_t = Z^{(1)}_{t-\tau_1}+\cdots+Z^{(k)}_{t-\tau_k}$. This branching property implies that the expected number of newborns $U(t) := E_1(Z_t)$ satisfies the recursive relation

$U(t) = E\Bigl(\sum_{j=1}^{N(t)} U(t-\tau_j)\Bigr) = E\Bigl(\sum_{k=1}^{t} U(t-k)\nu(k)1_{\{L\ge k\}}\Bigr) = U*A(t),\quad t\ge1,$

where the symbol $*$ stands for the discrete convolution

$A_1*A_2(t) := \sum_{j=-\infty}^{\infty} A_1(t-j)A_2(j),\quad t\in\mathbb{Z}.$

Resolving the obtained recursion $U(t) = 1_{\{t=0\}} + U*A(t)$, we find a familiar expression for the renewal function,

$U(t) = 1_{\{t=0\}} + \sum_{k=1}^{t} A^{*k}(t),\qquad A^{*1}(t) := A(t),\quad A^{*(k+1)}(t) := A^{*k}*A(t),$

so that, by the elementary renewal theorem,

(2.2) $U(t) \to a^{-1},\quad t\to\infty.$

This says that, in the long run, the underlying reproduction process produces one birth per 𝑎 units of time. In this sense, 𝑎 can be treated as the average generation length.

Later on, we will need the following facts concerning the distribution of $W_t$, the waiting time at time $t$ until the next renewal event:

$R_t(j) := P(W_t=j),\quad j\ge1,\ t\ge0.$

These probabilities satisfy the renewal equation $R_t(j) = A(t+j) + \sum_{k=1}^{t} R_{t-k}(j)A(k)$, which yields

(2.3) $R_t(j) = \sum_{k=0}^{t} A(t+j-k)U(k),\quad j\ge1,\ t\ge0.$

By the key renewal theorem, there exists a stable distribution of the residual waiting time $W_t$, in that

$R_t(j) \to R(j),\quad t\to\infty,\qquad R(j) := a^{-1}\sum_{k=j}^{\infty} A(k),\quad j\ge1.$
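Both (2.2) and the convergence $R_t(j)\to R(j)$ are easy to observe numerically. A minimal sketch for the hypothetical inter-arrival law $A(1)=A(2)=1/2$ (so $a=3/2$):

```python
A = {1: 0.5, 2: 0.5}                      # toy inter-arrival law, a = 1.5
a = sum(t * p for t, p in A.items())

T = 400
U = [0.0] * (T + 1)
U[0] = 1.0                                 # U(t) = 1{t=0} + sum_k A(k) U(t-k)
for t in range(1, T + 1):
    U[t] = sum(A[k] * U[t - k] for k in A if k <= t)

# elementary renewal theorem (2.2): U(t) -> 1/a
assert abs(U[T] - 1 / a) < 1e-9

def R(t, j):
    # (2.3): R_t(j) = sum_{k=0}^{t} A(t+j-k) U(k)
    return sum(A.get(t + j - k, 0.0) * U[k] for k in range(t + 1))

# key renewal theorem: R_t(j) -> a^{-1} * sum_{k >= j} A(k)
assert abs(R(200, 1) - (A[1] + A[2]) / a) < 1e-9
assert abs(R(200, 2) - A[2] / a) < 1e-9
```

For this law the error $U(t)-a^{-1}$ decays geometrically, so the limits are reached to machine precision well before $t=200$.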

Lemma 2.2

Assume (1.1) and $a<\infty$, and suppose that a family of non-negative functions $r^{(n)}(t)$ is such that

$\sup_{n\ge1,\,t\ge1} r^{(n)}(t) < \infty,\qquad r^{(n)}(ny) \to_y r(y),\quad n\to\infty.$

If $r(y)\to r(0)$ as $y\to0$, then

$\sum_{t=1}^{\infty} r^{(n)}(t)R_{ny}(t) \to_y r(0),\quad n\to\infty.$

Proof

Observe that

$\sum_{t=1}^{\infty} r^{(n)}(t)R_{ny}(t) - r(0) = \sum_{t=1}^{t_0}\bigl(r^{(n)}(t)-r(0)\bigr)R_{ny}(t) + \sum_{t=t_0+1}^{\infty}\bigl(r^{(n)}(t)-r(0)\bigr)R_{ny}(t)$

for any $t_0>0$. From

$\sum_{t=1}^{t_0}\bigl(r^{(n)}(t)-r(0)\bigr)R_{ny}(t) = \sum_{t=1}^{t_0}\bigl(r^{(n)}(t)-r(tn^{-1})\bigr)R_{ny}(t) + \sum_{t=1}^{t_0}\bigl(r(tn^{-1})-r(0)\bigr)R_{ny}(t),$

we deduce

$\sum_{t=1}^{t_0}\bigl(r^{(n)}(t)-r(0)\bigr)R_{ny}(t) \to_y 0,\quad n\to\infty,$

using the assumptions on $r^{(n)}(\cdot)$ and $r(\cdot)$. It remains to notice that

$\sum_{t=t_0+1}^{\infty} \bigl|r^{(n)}(t)-r(0)\bigr|R_{ny}(t) \le C\sum_{t=t_0+1}^{\infty} R_{ny}(t)$

and

$\sum_{t=t_0+1}^{\infty} R_{ny}(t) \to_y \sum_{t=t_0+1}^{\infty} R(t) \to 0$

as first $n\to\infty$ and then $t_0\to\infty$. ∎

2.3 Expected Population Counts

If $Z_0=1$, then $X(t)$, defined by (2.1), can be represented as

(2.4) $X(t) = \chi(t) + \sum_{j=1}^{N(t)} X^{(j)}(t-\tau_j)$

in terms of the independent daughter processes $X^{(j)}(\cdot)$, where $N(t)$ is the birth count of the founder. Taking expectations, we arrive at the recursion

$M(t) = m(t) + E\Bigl(\sum_{j=1}^{N(t)} M(t-\tau_j)\Bigr) = m(t) + \sum_{j=1}^{t} M(t-j)A(j),$

where $M(t) := E_1(X(t))$ and $m(t) := E(\chi(t))$. This renewal equation, $M(t) = m(t) + M*A(t)$, yields

$M(t) = m*U(t) = \sum_{j=0}^{t} m(t-j)U(j),$

and applying the key renewal theorem, we conclude

(2.5) $E_1(X(t)) \to m_\chi,\quad t\to\infty,\qquad m_\chi := a^{-1}\sum_{t=0}^{\infty} E(\chi(t)).$

The parameter $m_\chi$ can be viewed as the average $\chi$-score for the population with overlapping generations. The next result goes beyond (2.5) by giving a useful asymptotic relation in the case $m_\chi=\infty$.

Proposition 2.3

Consider a critical GWO-process with $a<\infty$. If, for some function $\ell(\cdot)$ slowly varying at infinity,

(2.6) $\sum_{j=0}^{t} E(\chi(j)) = t^{\gamma}\ell(t),\quad t\to\infty,\ 0\le\gamma<\infty,$

then $E_1(X(t)) \sim a^{-1}t^{\gamma}\ell(t)$ as $t\to\infty$.

Proof

We have to show that (2.6) implies $M(t) - a^{-1}M_t = o(M_t)$ as $t\to\infty$, where $M_t := \sum_{j=0}^{t} m(j)$. To this end, observe that the difference

$M(t) - a^{-1}\sum_{j=0}^{t} m(t-j) = \sum_{j=0}^{t} m(t-j)\bigl(U(j)-a^{-1}\bigr)$

is estimated from above by

$\sum_{j=0}^{t} m(t-j)\bigl|U(j)-a^{-1}\bigr| \le C\sum_{j=0}^{t_\epsilon-1} m(t-j) + \epsilon\sum_{j=t_\epsilon}^{t} m(t-j) \le C\bigl(M_t - M_{t-t_\epsilon}\bigr) + \epsilon M_t,\quad t\ge t_\epsilon,$

for an arbitrarily small $\epsilon>0$ and some finite constants $C$, $t_\epsilon$. It remains to apply a property of the regularly varying function $M_t$, namely that $M_t - M_{t-c} = o(M_t)$ as $t\to\infty$ for any fixed $c\ge0$. ∎

Turning to $X(t) = Z(t)$, the number of individuals alive at time $t$, observe that, with $\chi(t) = 1_{\{0\le t<L\}}$,

$\sum_{t\ge0} E(\chi(t)) = \sum_{t\ge0} P(L>t) = \mu.$

Therefore, given $E(N)=1$,

$E_1(Z(t)) \to \mu a^{-1},\quad t\to\infty.$

In this case, the parameter $m_\chi = \mu a^{-1}$ can be treated as the degree of generation overlap. For example, $m_\chi=2$ means that, on average, the life length $L$ covers two generation lengths.
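Continuing the toy example with a single birth at age 1 or 2 (probability 1/2 each, so $a=3/2$) and $L$ equal to the age at that birth, we have $\mu = 3/2$ and degree of generation overlap $m_\chi = \mu a^{-1} = 1$. A sketch checking $E_1(Z(t)) = m*U(t) \to \mu a^{-1}$ numerically:

```python
A = {1: 0.5, 2: 0.5}                     # toy inter-arrival law, a = 1.5
m = {0: 1.0, 1: 0.5}                     # m(t) = P(L > t) for L uniform on {1, 2}
mu, a = 1.5, 1.5                         # E(L) and the mean generation length

T = 200
U = [0.0] * (T + 1)
U[0] = 1.0                               # renewal recursion U = 1{t=0} + U * A
for t in range(1, T + 1):
    U[t] = sum(A[k] * U[t - k] for k in A if k <= t)

def M(t):
    # M(t) = m * U(t) = sum_j m(t - j) U(j), the expected alive count E_1(Z(t))
    return sum(m.get(t - j, 0.0) * U[j] for j in range(t + 1))

assert abs(M(T) - mu / a) < 1e-9         # degree of generation overlap m_chi = 1
```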

3 Branching Renewal Equations

A useful extension of Definition 2.1 broadens the range of individual scores by replacing (2.1) with

(3.1) $X(t) := \sum_{j=0}^{\infty}\sum_{k=1}^{Z_j} \chi_{jk}(t-j),\quad t\in\mathbb{Z}.$

Relation (3.1) takes into account even those individuals who are born after time $t$, by allowing $\chi(t)>0$ for $t<0$. In this paper, we use this extension only to deal with the finite-dimensional distributions of the population counts defined by (2.1); see Lemma 3.2 below.

Definition 3.1

For the population count $X(t) = X[\chi](t)$ given by (3.1), the log-Laplace transform $\Lambda(t) = \Lambda[\chi](t)$ is defined by

$e^{-\Lambda(t)} := E_1(e^{-X(t)}),\quad t\in\mathbb{Z}.$

The purpose of this section is to introduce a branching renewal equation for Λ ( ) and establish Proposition 3.5, which will play a key role in the proofs of the main results of this paper.

Lemma 3.2

For a given vector $(t_1,\ldots,t_p)$ with non-negative integer components, consider the log-Laplace transform

$\Lambda(t) = -\ln E_1\Bigl(\exp\Bigl\{-\sum_{i=1}^{p} \lambda_iX(t_i+t)\Bigr\}\Bigr)$

of the $p$-dimensional distributions of the population count $X(\cdot)$ defined by (2.1). Then, in accordance with Definition 3.1,

$\Lambda(t) = \Lambda[\psi](t),\qquad \psi(t) := \sum_{i=1}^{p} \lambda_i\chi(t_i+t),\quad t\in\mathbb{Z}.$

Proof

It suffices to observe that

$\sum_{i=1}^{p} \lambda_iX(t_i+t) \stackrel{(2.1)}{=} \sum_{i=1}^{p}\sum_{j=0}^{t_i+t}\sum_{k=1}^{Z_j} \lambda_i\chi_{jk}(t_i+t-j) = \sum_{j=0}^{\infty}\sum_{k=1}^{Z_j} \psi_{jk}(t-j) \stackrel{(3.1)}{=} X[\psi](t).$ ∎

3.1 Derivation of the Branching Renewal Equation

Here we show that Definition 3.1 leads to what we call a branching renewal equation,

(3.2) $\Lambda(t) = B(t) - \Psi[\Lambda]*U(t),\quad t\ge0,$

where the operator

(3.3) $\Psi[f](t) := E\Bigl(\prod_{j=1}^{L} e^{-\nu(j)f(t-j)}\Bigr) - \sum_{j=1}^{\infty} e^{-f(t-j)}A(j),\quad t\ge0,$

is defined on the set of non-negative sequences $(f(t))_{t\in\mathbb{Z}}$; see more on it in Section 3.2. The convolution term $\Psi[\Lambda]*U(t)$ represents the non-linear part of the branching renewal equation. The seemingly free term $B(\cdot)$ of equation (3.2) is a non-negative function specified below by (3.4) and (3.5). It also depends on the function $\Lambda(\cdot)$ in a non-linear way; however, asymptotically it acts as a truly free term.

The derivation of (3.2) is based on the following extended version of decomposition (2.4):

$X(t) = \chi(t) + \sum_{j=1}^{N} X^{(j)}(t-\tau_j),\quad t\in\mathbb{Z},$

where the $X^{(j)}(\cdot)$ are independent daughter copies of $(X(\cdot) \mid Z_0=1)$. It entails $e^{\chi(t)-X(t)} = \prod_{j=1}^{N} e^{-X^{(j)}(t-\tau_j)}$, and taking expectations, we obtain

$E_1\bigl(e^{\chi(t)-X(t)}\bigr) = E\bigl(e^{-\sum_{j=1}^{N}\Lambda(t-\tau_j)}\bigr) = E\bigl(e^{-\sum_{j=1}^{L}\nu(j)\Lambda(t-j)}\bigr).$

On the other hand (recall $e_1\{x\} := 1-e^{-x}$),

$E_1\bigl(e^{\chi(t)-X(t)}\bigr) - e^{-\Lambda(t)} = E_1\bigl(e^{\chi(t)-X(t)} - e^{-X(t)}\bigr) = E_1\bigl(e_1\{\chi(t)\}e^{\chi(t)-X(t)}\bigr).$

Denoting the last expectation by $D(t)$, we can write

(3.4) $D(t) = E\bigl(e_1\{\chi(t)\}e^{-\sum_{j=1}^{L}\nu(j)\Lambda(t-j)}\bigr),$

due to the independence between the progenitor's score $\chi(t)$ and the GWO-processes stemming from the progenitor's daughters. Combining the previous relations, we find

$e^{-\Lambda(t)} = E\bigl(e^{-\sum_{j=1}^{L}\nu(j)\Lambda(t-j)}\bigr) - D(t),$

which, after introducing a term involving operator (3.3), gives

$e^{-\Lambda(t)} = \sum_{j=1}^{\infty} e^{-\Lambda(t-j)}A(j) + \Psi[\Lambda](t) - D(t).$

Subtracting both sides from 1 yields

$e_1\{\Lambda(t)\} = \sum_{j=1}^{\infty} e_1\{\Lambda(t-j)\}A(j) - \Psi[\Lambda](t) + D(t),$

which can be rewritten in the form of a renewal equation,

$e_1\{\Lambda(t)\} = \sum_{j=1}^{t} e_1\{\Lambda(t-j)\}A(j) + \sum_{j=t+1}^{\infty} e_1\{\Lambda(t-j)\}A(j) - \Psi[\Lambda](t) + D(t),$

in which the second sum involves only the boundary values $\Lambda(-1),\Lambda(-2),\ldots$

Formally solving this renewal equation, we get

$e_1\{\Lambda(t)\} = \sum_{j=1}^{\infty} e_1\{\Lambda(-j)\}R_t(j) - \Psi[\Lambda]*U(t) + D*U(t),$

where $R_t(j)$ is given by (2.3). Here we used

$\sum_{k=0}^{t}\sum_{j=t-k+1}^{\infty} e_1\{\Lambda(t-k-j)\}A(j)U(k) = \sum_{k=0}^{t} U(k)\sum_{j=1}^{\infty} e_1\{\Lambda(-j)\}A(j+t-k) = \sum_{j=1}^{\infty} e_1\{\Lambda(-j)\}R_t(j).$

Since $e_1\{\Lambda(t)\} = \Lambda(t) - e_2\{\Lambda(t)\}$, we conclude that relation (3.2) holds with

(3.5) $B(t) = e_2\{\Lambda(t)\} + \sum_{j=1}^{\infty} e_1\{\Lambda(-j)\}R_t(j) + D*U(t).$

3.2 Laplace Transform of the Reproduction Law

The Laplace transform of the reproduction law, $E\bigl(e^{-f(\tau_1)-\cdots-f(\tau_N)}\bigr)$, is a positive functional defined on the set of non-negative sequences $(f(t))_{t\ge1}$. The higher-than-first moments of the joint distribution of $(\tau_1,\ldots,\tau_N)$ are characterised by the non-linear functional

(3.6) $\Psi(f) := E\Bigl(\prod_{j=1}^{N} e^{-f(\tau_j)} - \sum_{j=1}^{N} e^{-f(\tau_j)}\Bigr).$

This functional is monotone in view of the elementary equality

(3.7) $\sum_{j=1}^{k}(a_j-b_j) - \prod_{j=1}^{k} a_j + \prod_{j=1}^{k} b_j = \sum_{j=1}^{k}(a_j-b_j)\bigl(1 - a_1\cdots a_{j-1}b_{j+1}\cdots b_k\bigr),$

in that if $f(t)\ge g(t)$ for all $t\ge1$, then $\Psi(f)\ge\Psi(g)$. In particular, with $g(t)\equiv0$, we get $\Psi(g) = E(1-N) = 0$ due to our standing assumption $E(N)=1$, which implies that $\Psi(f)\ge0$ for all eligible $f(\cdot)$.

The operator (3.3) introduced earlier is obtained from the functional (3.6) through the connection

$\Psi[f](t) = \Psi(f_t),\qquad f_t(j) := f(t-j)1_{\{1\le j\le t\}},$

which is verified by

$\Psi(f_t) \stackrel{(3.6)}{=} E\Bigl(\prod_{j=1}^{N} e^{-f_t(\tau_j)} - \sum_{j=1}^{N} e^{-f_t(\tau_j)}\Bigr) = E\Bigl(\prod_{k=1}^{L} e^{-f_t(k)\nu(k)} - \sum_{k=1}^{L} e^{-f_t(k)}\nu(k)\Bigr) = E\Bigl(\prod_{k=1}^{L} e^{-f(t-k)\nu(k)}\Bigr) - \sum_{k=1}^{\infty} e^{-f(t-k)}A(k) \stackrel{(3.3)}{=} \Psi[f](t).$

Lemma 3.3

Consider the constant function $f(t)\equiv z$, $t\in\mathbb{Z}$. If (1.1) holds, then

$\Psi[f](t) = \Psi(z) = E(e^{-zN}) - e^{-z},\quad t\ge0,$

and $z^{-2}\Psi(z)\to b$ as $z\to0$.

Proof

The first assertion follows from the relation connecting $\Psi[f](t)$ and $\Psi(f)$. The second assertion follows from L'Hôpital's rule, or from the expansion $E(e^{-zN}) - e^{-z} = \tfrac{1}{2}\bigl(E(N^2)-1\bigr)z^2 + o(z^2) = bz^2 + o(z^2)$. ∎
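Lemma 3.3 is easy to check numerically for a concrete critical law; take, say, the geometric offspring distribution $P(N=k) = 2^{-(k+1)}$, $k\ge0$, with $E(N)=1$, $\mathrm{Var}(N)=2$, so $b=1$ (a hypothetical example chosen only for this sketch):

```python
import math

# critical geometric offspring law P(N = k) = 2^{-(k+1)}:
# pgf E(s^N) = 1 / (2 - s), so E(N) = 1, Var(N) = 2, hence b = 1
def pgf(s):
    return 1.0 / (2.0 - s)

def Psi(z):
    # Psi(z) = E(e^{-zN}) - e^{-z}
    return pgf(math.exp(-z)) - math.exp(-z)

# z^{-2} Psi(z) -> b = 1 as z -> 0; the error here is of order z
for z in (1e-2, 1e-3, 1e-4):
    assert abs(Psi(z) / z**2 - 1.0) < 3 * z
```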

Lemma 3.4

If (1.1) holds and

$nr_n(ny) \Rightarrow_y r(y),\quad n\to\infty,$

where $r: [0,\infty)\to[0,\infty)$ is a continuous function, then

$n^2\Psi[r_n](ny) \Rightarrow_y br^2(y),\quad n\to\infty.$

Proof

Observe that (3.7) implies

$\Psi[f](t) - \Psi[g](t) = E\Bigl(\sum_{j=1}^{N}\bigl(e^{-g(t-\tau_j)} - e^{-f(t-\tau_j)}\bigr)\Bigl(1 - \prod_{i=1}^{j-1} e^{-f(t-\tau_i)}\prod_{i=j+1}^{N} e^{-g(t-\tau_i)}\Bigr)\Bigr),$

which in turn gives, for arbitrary $1\le t_1\le t$,

$\bigl|\Psi[f](t) - \Psi[g](t)\bigr| \le E\Bigl(\sum_{j=1}^{N(t_1)} \bigl|f(t-\tau_j)-g(t-\tau_j)\bigr|I_j + \|f\vee g\|\sum_{j=N(t_1)+1}^{N} I_j\Bigr),$

where $\|f\| := \sup_{t\ge1}|f(t)|$ and

$I_j := \Bigl(1 - \prod_{i=1}^{j-1} e^{-f(t-\tau_i)}\prod_{i=j+1}^{N} e^{-g(t-\tau_i)}\Bigr) \le \sum_{i=1}^{j-1} f(t-\tau_i) + \sum_{i=j+1}^{N} g(t-\tau_i) \le \|f\vee g\|(N-1).$

Using $E(N(N-1)) = 2b$, we therefore obtain

$\bigl|\Psi[f](t) - \Psi[g](t)\bigr| \le 2b\|f\vee g\|\max_{1\le j\le t_1}\bigl|f(t-j)-g(t-j)\bigr| + \|f\vee g\|^2 E\bigl((N(t)-N(t_1))N\bigr).$

This implies that

(3.8) $\bigl|\Psi[f](t) - \Psi[g](t)\bigr| \le 2b\|f\vee g\|\max_{1\le j\le t_1}\bigl|f(t-j)-g(t-j)\bigr| + \|f\vee g\|^2\delta(t_1),$

where $\delta(t) := E\bigl((N-N(t))N\bigr) \to 0$ as $t\to\infty$.

Applying (3.8) with $t_1 = n\epsilon$, $t = ny$, and

$f(j) := r_n(j),\qquad g(j) := z_n,\quad j\ge1,\qquad z_n := n^{-1}r(y),$

we get

$\bigl|n^2\Psi[r_n](ny) - n^2\Psi[z_n](ny)\bigr| \le C\sup_{0\le x\le\epsilon}\bigl|nr_n(n(y-x)) - r(y)\bigr| + C_1\delta(n\epsilon).$

Thus, under the imposed conditions,

$\lim_{\epsilon\to0}\limsup_{n\to\infty}\sup_{0\le y\le y_0}\bigl|n^2\Psi[r_n](ny) - n^2\Psi[z_n](ny)\bigr| = 0$

for any $y_0>0$. It remains to observe that $n^2\Psi[z_n](ny) \Rightarrow_y br^2(y)$ as $n\to\infty$, according to Lemma 3.3. ∎

3.3 Basic Convergence Result

If $\Lambda(t)$ is given by Definition 3.1, then

(3.9) $E_n(e^{-X(t)}) = e^{-n\Lambda(t)}.$

This observation explains the importance of the next result.

Proposition 3.5

Assume (1.1) and $a<\infty$, and consider a sequence of positive functions $\Lambda_n(\cdot)$ satisfying

(3.10) $\Lambda_n(t) = B_n(t) - \Psi[\Lambda_n]*U(t),\quad t\ge0,\ n\ge1.$

If the non-negative functions $B_n(t)$ are such that

(3.11) $nB_n(ny) \Rightarrow_y B(y),\quad n\to\infty,$

where $B(y)$ is a continuous function, then

$n\Lambda_n(ny) \to_y r(y),\quad n\to\infty,$

where $r(y)$ is the continuous function uniquely determined by

(3.12) $r(y) = B(y) - ba^{-1}\int_0^y r^2(u)\,du.$

Proof

We will prove this statement in three steps. Firstly, we will show that

(3.13) $r(y) = nB_n(ny) - n\sum_{t=0}^{ny} \Psi[n^{-1}r_n](ny-t)U(t) + \delta_n(y),\qquad r_n(t) := r(tn^{-1}),$

where $\delta_n(y)$ stands for a function (different in different formulas) such that $\delta_n(y) \to_y 0$ as $n\to\infty$. Secondly, putting $\Delta_n(y) := n\Lambda_n(ny) - r(y)$, we will find a $y^*>0$ such that

(3.14) $\sup_{y_0\le u\le y_1} |\Delta_n(u)| \to 0,\quad n\to\infty,\quad 0<y_0\le y_1\le y^*.$

Thirdly, we will demonstrate that

(3.15) $\Delta_n(y) \to_y 0,\quad n\to\infty.$

Proof of (3.13). Rewriting (3.12) as

$r(y) = B(y) - b\int_0^y r^2(y-u)a^{-1}\,du$

and using (2.2) and (3.11), we obtain

$r(y) = nB_n(ny) - bn^{-1}\sum_{t=0}^{ny} r^2(y-tn^{-1})U(t) + \delta_n(y).$

This and Lemma 3.4 imply (3.13).

Proof of (3.14). Relations (3.10) and (3.13) yield

(3.16) $\Delta_n(y) = n\sum_{t=0}^{ny}\bigl(\Psi[\Lambda_n](t) - \Psi[n^{-1}r_n](t)\bigr)U(ny-t) + \delta_n(y).$

Under the current assumptions, the inequality $n\Lambda_n(ny) \le nB_n(ny)$ implies that the sequence of functions $n\Lambda_n(ny)$ is uniformly bounded over any finite interval $0\le y\le y_1$. Therefore, putting $t_1 := t\epsilon$ into (3.8) gives

$n^2\bigl|\Psi[\Lambda_n](t) - \Psi[n^{-1}r_n](t)\bigr| \le C_1\sup_{(1-\epsilon)t\le j\le t}\bigl|\Delta_n(jn^{-1})\bigr| + C_2\delta(t\epsilon)$

for any fixed $0<\epsilon<1$. Combining this with (3.16) entails

(3.17) $|\Delta_n(y)| \le Cn^{-1}\sum_{t=n\epsilon}^{ny} U(ny-t)\sup_{(1-\epsilon)t\le j\le t}\bigl|\Delta_n(jn^{-1})\bigr| + C_1n^{-1}\sum_{t=0}^{n\epsilon} U(ny-t) + \delta_n(y)$

so that, for some positive constant $c^*$ independent of $(n,\epsilon,y)$,

$|\Delta_n(y)| \le c^*y\sup_{\epsilon(1-\epsilon)\le u\le y}|\Delta_n(u)| + C\epsilon + \delta_n(y).$

It follows that

$\sup_{\epsilon(1-\epsilon)\le y\le v}|\Delta_n(y)| \le c^*v\sup_{\epsilon(1-\epsilon)\le u\le v}|\Delta_n(u)| + C\epsilon + \sup_{\epsilon(1-\epsilon)\le y\le v}\delta_n(y).$

Replacing $v$ here by $y^* := (2c^*)^{-1}$, we derive

$\limsup_{n\to\infty}\sup_{\epsilon(1-\epsilon)\le u\le y^*}|\Delta_n(u)| \le C\epsilon,$

which, after letting $\epsilon\to0$, results in (3.14).

Proof of (3.15). It suffices to demonstrate that the convergence interval in (3.14) can be consecutively expanded from $(0,y^*]$ to $(0,2y^*]$, from $(0,2y^*]$ to $(0,3y^*]$, and so forth. Suppose we have established that, for some $k\ge1$,

$\sup_{y_0\le u\le y_1}|\Delta_n(u)| \to 0,\quad n\to\infty,\quad 0<y_0\le y_1\le ky^*.$

Then, for $ky^* < y \le (k+1)y^*$, by (3.17),

$|\Delta_n(y)| \le Cn^{-1}\sum_{t=nky^*}^{ny} U(ny-t)\sup_{(1-\epsilon)t\le j\le t}\bigl|\Delta_n(jn^{-1})\bigr| + C\epsilon + \delta_n(y),$

yielding

$\sup_{ky^*\le y\le(k+1)y^*}|\Delta_n(y)| \le c^*y^*\sup_{ky^*\le u\le(k+1)y^*}|\Delta_n(u)| + C\epsilon + \sup_{ky^*\le y\le(k+1)y^*}\delta_n(y).$

Since $c^*y^* < 1$, we may conclude that

$\sup_{ky^*\le u\le(k+1)y^*}|\Delta_n(u)| \to 0,\quad n\to\infty,$

thereby completing the proof of (3.15). ∎
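For constant $B(y)\equiv\lambda$, equation (3.12) can be solved in closed form: differentiating gives $r' = -ba^{-1}r^2$ with $r(0)=\lambda$, hence $r(y) = \lambda/(1+\lambda ba^{-1}y)$, in agreement with the exponent in (1.3) up to the time change by $a$. A crude numerical sketch confirming that this closed form satisfies (3.12) (the particular parameter values are arbitrary):

```python
# Solve (3.12) with constant B(y) = lam by simple fixed-step time-stepping
# of the equivalent ODE r' = -(b/a) r^2, and compare with the closed form.
lam, b, a = 2.0, 1.0, 1.5
h, y_max = 1e-4, 2.0
n_steps = int(y_max / h)

r = lam                                   # r(0) = B(0) = lam
integral = 0.0
for _ in range(n_steps):
    integral += r * r * h                 # left-point rule for int_0^y r^2(u) du
    r = lam - (b / a) * integral          # (3.12) with B(y) = lam

closed_form = lam / (1 + lam * (b / a) * y_max)
assert abs(r - closed_form) < 1e-3
```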

4 Continuous State Critical Branching Process

In this section, among other things, we clarify the meaning of $\xi_\gamma(\cdot)$ given by (1.7) in terms of the log-Laplace transforms of the fdds of the process $\xi(\cdot)$. From now on, we consistently use the following shorthand notation:

$G_p(\bar u,\bar\lambda) := G_p(u_1,\ldots,u_p;\lambda_1,\ldots,\lambda_p),$
$G_p(c_1\bar u+y,\, c_2\bar\lambda) := G_p(c_1u_1+y,\ldots,c_1u_p+y;\,c_2\lambda_1,\ldots,c_2\lambda_p),$
$H_{p,q}(\bar u,\bar\lambda) := H_{p,q}(u_1,\ldots,u_p;\,\lambda_{11},\ldots,\lambda_{p1};\ldots;\lambda_{1q},\ldots,\lambda_{pq}).$

4.1 Laplace Transforms for ξ ( )

The set of functions

(4.1) $G_p(\bar u,\bar\lambda) := -\ln E_1\bigl(e^{-\lambda_1\xi(u_1)-\cdots-\lambda_p\xi(u_p)}\bigr),\quad p\ge1,$

with $u_i,\lambda_i\ge0$, determines the fdds of the process $\xi(\cdot)$.

Lemma 4.1

For non-negative $x, y, u_1,\ldots,u_p$ and $\lambda_1,\ldots,\lambda_p$,

$E\bigl(e^{-\lambda_1\xi(u_1+y)-\cdots-\lambda_p\xi(u_p+y)} \mid \xi(y)=x\bigr) = e^{-xG_p(\bar u,\bar\lambda)}.$

Proof

This result is obtained by induction, using (1.3) and the Markov property of $\xi(\cdot)$. To illustrate the argument, take $p=2$ and non-negative $y, y_1, y_2$. We have

$E\bigl(e^{-\lambda_1\xi(y+y_1+y_2)-\lambda_2\xi(y+y_2)} \mid \xi(y)=x\bigr) = E\bigl(e^{-\lambda_2\xi(y+y_2)}\,E\bigl(e^{-\lambda_1\xi(y+y_1+y_2)} \mid \xi(y+y_2)\bigr) \mid \xi(y)=x\bigr)$
$\stackrel{(1.3)}{=} E\Bigl(\exp\Bigl\{-\Bigl(\lambda_2 + \dfrac{\lambda_1}{1+b\lambda_1y_1}\Bigr)\xi(y+y_2)\Bigr\} \Bigm| \xi(y)=x\Bigr)$
$\stackrel{(1.3)}{=} \exp\Bigl\{-\dfrac{(\lambda_1+\lambda_2+b\lambda_1\lambda_2y_1)\,x}{1+b\lambda_1(y_1+y_2)+b\lambda_2y_2+b^2\lambda_1\lambda_2y_1y_2}\Bigr\}.$

With $u_2=y_2$ and $u_1=y_1+y_2$, this gives the explicit expression

$G_2(\bar u,\bar\lambda) = \dfrac{\lambda_1+\lambda_2+b\lambda_1\lambda_2(u_1-u_2)}{1+b\lambda_1u_1+b\lambda_2u_2+b^2\lambda_1\lambda_2(u_1-u_2)u_2}.$