A new extreme value copula and new families of univariate distributions based on Freund’s exponential model

Abstract: The use of the exponential distribution and its multivariate generalizations is extremely popular in lifetime modeling. Freund's bivariate exponential model (1961) is based on the idea that the remaining lifetime of any entity in a bivariate system is shortened when the other entity defaults. Such a model can be quite useful for studying systemic risk, for instance in financial systems. Guzmics and Pflug (2019) revisited Freund's model, derived the corresponding bivariate copula and examined some of its characteristics; furthermore, that paper opened the door to a multivariate setting. Now we present further investigations in the bivariate model: we compute the tail dependence coefficients, and we examine the marginal and joint distributions of the componentwise maxima, which leads to an extreme value copula that – to the best of our knowledge – has not been investigated in the literature yet. The original bivariate model of Freund has been extended to more variables by several authors. We also turn to the multivariate setting; our focus is different from that of the previous generalizations, and therefore it is novel: examining the distribution of the sum and of the average of the lifetime variables (provided that the shock parameters are all the same) leads to new families of univariate distributions, which we call Exponential Gamma Mixture Type I and Type II (EGM) distributions. We present their basic properties, we provide asymptotics for them, and finally we also provide the limiting distribution for the EGM Type II distribution.


Introduction
We consider the bivariate lifetime model introduced by Freund [4]. The idea is that the lifetimes of two entities (we also refer to them as "institutions") are originally assumed to be Exp(λ_i) distributed (i = 1, 2), and when one of the entities defaults, it modifies the remaining lifetime of the other entity by increasing the intensity of its original exponential lifetime. This assumption is a possible way of modelling cascading effects when we examine systemic risk in finance. The construction in detail looks as follows. (We recall Section 2.1 from Guzmics and Pflug [6].) Let Y_i ∼ Exp(λ_i) (i = 1, 2) be independent random variables. They serve as auxiliary lifetime variables (if one wishes, as pre-lifetime variables) of the two entities of the system. When in a certain realization the first entity defaults earlier, i.e., Y_1 < Y_2, then the second entity will continue its operation according to another exponentially distributed random variable Z_2 ∼ Exp(λ_2 + a_1), which is independent of Y_1 and Y_2. The parameter a_1 ≥ 0 is called the shock parameter, and it expresses the effect of the default of the first institution on the second institution. Z_1 is defined analogously: when Y_2 < Y_1, then Z_1 ∼ Exp(λ_1 + a_2), where a_2 ≥ 0 is a shock parameter.
The actual lifetime variables of the two entities are denoted by X_1, X_2, and – in the light of the above construction – can be written as follows.
If Y_1 < Y_2, then X_1 := Y_1 and X_2 := Y_1 + Z_2; if Y_2 < Y_1, then X_2 := Y_2 and X_1 := Y_2 + Z_1. Thus the new lifetime variables X_1, X_2 can be expressed explicitly in terms of Y_1, Y_2, Z_1, Z_2. The case Y_1 = Y_2 does not need to be taken into account, since it has probability zero. The resulting bivariate distribution was first presented in [4], and was investigated further in [6]. The joint cumulative distribution function (cdf) of the resulting lifetime variables (X_1, X_2) (if λ_1 ≠ a_1 and λ_2 ≠ a_2) is given in (3). The marginal cdfs of X_1 (if λ_2 ≠ a_2) and of X_2 (if λ_1 ≠ a_1) are given by

F(x) = 1 − λ_2/(λ_2 − a_2) · e^{−(λ_1+a_2)x} + a_2/(λ_2 − a_2) · e^{−(λ_1+λ_2)x}, x ≥ 0, (4)

G(y) = 1 − λ_1/(λ_1 − a_1) · e^{−(λ_2+a_1)y} + a_1/(λ_1 − a_1) · e^{−(λ_1+λ_2)y}, y ≥ 0. (5)
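The construction above is easy to simulate directly. The following sketch (function and parameter names are ours, for illustration only) draws from the bivariate model and compares the sample mean of X_1 with the closed-form expectation E(X_1) = (1/2)·(a+2)/(a+1), valid for λ_1 = λ_2 = 1 and a_1 = a_2 = a (cf. [6], Section 3.1):

```python
import random

def sample_freund(lam1=1.0, lam2=1.0, a1=1.0, a2=1.0, rng=random):
    """One draw of (X1, X2) from Freund's bivariate model.

    Y1, Y2 are independent pre-lifetimes; after the first default,
    the survivor's intensity is increased by the shock parameter.
    """
    y1 = rng.expovariate(lam1)
    y2 = rng.expovariate(lam2)
    if y1 < y2:
        # entity 1 defaults first; entity 2 restarts with rate lam2 + a1
        z2 = rng.expovariate(lam2 + a1)
        return y1, y1 + z2
    else:
        z1 = rng.expovariate(lam1 + a2)
        return y2 + z1, y2

random.seed(0)
a = 1.0
n = 200_000
xs = [sample_freund(a1=a, a2=a) for _ in range(n)]
mean_x1 = sum(x for x, _ in xs) / n
# For lam1 = lam2 = 1, a1 = a2 = a: E(X1) = (a + 2) / (2 (a + 1))
theory = (a + 2) / (2 * (a + 1))
print(abs(mean_x1 - theory) < 0.01)
```

With 200 000 draws the sample mean agrees with the closed form to well within Monte Carlo error.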
For the remaining parameter constellations and for the joint and marginal densities, we refer again to [4] and [6]. Here we only highlight the special case λ_1 = λ_2 = a_1 = a_2 = 1, which will become important in some of the upcoming computations; in this case the marginal cdfs are

F(x) = G(x) = 1 − (1 + x) · e^{−2x}, x ≥ 0. (6)

We will also need the copula C(u, v) of the lifetime variables X_1 and X_2. It was explained in [6] that the copula function cannot be expressed explicitly in terms of u and v, because the inverses of the strictly increasing marginal cdfs F and G given in (4) and (5) cannot be expressed explicitly. So we provide a semi-explicit form (7) for the parameter setting λ_1 = λ_2 = 1, a_1 = a_2 = a < ∞, which will be used in the subsequent computations.
Notice that the four parameters can be arranged in a 2×2 matrix, so we will use the notation Freund(A) when we refer to the distribution given in (3). The same notation Freund(A) and the analogous notation Freund(a, n) will be used in Section 3 for the analogous situations in the multivariate case (n ≥ 2). The model also allows setting a_1 = ∞ or a_2 = ∞, which means that the default of one institution causes the immediate default of the other one. For instance, if a_1 = a_2 = ∞, then the underlying lifetime variables X_1, X_2 are completely dependent, and the copula function in (7) reduces to C^{(∞)}(u, v) = min{u, v}, i.e., the Fréchet upper bound. In some situations we will focus on the special parameter setting λ_1 = λ_2 = 1, a_1 = a_2 = a ≥ 0, which will be denoted by Freund(a, 2). In accordance with this, we can speak about the Freund(A) and Freund(a, 2) copulas; the latter is given in (7). The paper is organized as follows. In Section 2, we deal with the bivariate model, examine the componentwise maxima and provide a new extreme value copula. In Section 3, we present the multivariate setting and investigate the sum and the average of the lifetime variables. This setting leads to two new, closely related families of univariate distributions, which we call Exponential Gamma Mixture Type I and Type II distributions. Furthermore, we provide the limiting distribution for the average lifetime of the entities in the system. In Section 4, we give an outlook on future research.

2 Computations in the bivariate Freund(a, 2) model

2.1 Tail dependence
Definition 1. The lower and upper tail dependence coefficients for a bivariate copula C are defined by

λ_L = lim_{u→0+} C(u, u)/u and λ_U = lim_{u→1−} (1 − 2u + C(u, u))/(1 − u).

Proposition 1. The lower tail dependence coefficient for the bivariate Freund(A) copula is λ_L = 0 if min{a_1, a_2} < ∞, and λ_L = 1 for all λ_1 > 0, λ_2 > 0, a_1 = a_2 = ∞.
Proof. If min{a_1, a_2} < ∞, then using the first order Taylor expansion of F(x) and G(x) (given in (4) and (5)) around the basis point x = 0 and formula (3), we can write C(u, u)/u as an expression whose limit as u → 0+ equals 0; the indeterminate form appearing in the limit can be computed either by the first order Taylor expansion of the exponential function or by L'Hospital's rule. If a_1 = a_2 = ∞, the copula function is C(u, v) = min{u, v}, so λ_L = lim_{u→0+} min{u, u}/u = 1. Remark 1. The first case of the statement (λ_L = 0) is a direct consequence of a much more general but simple fact. For the details see Appendix D.

Proposition 2. The upper tail dependence coefficient for the bivariate Freund(A) copula is given by a case distinction according to the relations between a_1, λ_1, a_2, λ_2; the first two cases are (i) a_1 > λ_1, a_2 > λ_2, a_1·λ_2 ≥ λ_1·a_2, and (ii) a_1 > λ_1, a_2 > λ_2, a_1·λ_2 ≤ λ_1·a_2.

Proof. In addition to the indicated cases (i) and (ii), we distinguish the following cases as well.
One of them is the boundary case a_1 = λ_1 and a_2 = λ_2. The basic idea in each case is that one of the two exponential terms in formulas (4) and (5) of the marginal cdfs F(x) and G(x) is negligible if x is large enough. This simplification enables us to give an approximation for F^{-1}(u) and G^{-1}(u) when u is near to 1. We present the proofs for (i) and (vi); the remaining cases are similar to these. To see (i): under the above assumptions on the parameters λ_1, λ_2, a_1, a_2, the dominant-term approximation of the marginal cdfs applies; using formulas (3), (4), and (5), we can write λ_U as a limit whose value involves λ_1 + λ_2, as it was claimed.
To see (vi): We can assume without loss of generality that a_1 > λ_1. It is easy to verify that in this case F^{-1}(u) ≤ G^{-1}(u) if u is close to 1. We will compute λ_U in a form based on this inequality.
Using the negligibility principle explained at the beginning of the proof, together with formulas (8) and (9) in [6], substituting into (8) and performing elementary limit computations, λ_U = 0 follows.
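The tail dependence coefficients of Definition 1 can also be estimated empirically, which gives a quick sanity check of the two regimes above. The estimator below is our own illustrative sketch (not taken from the paper); it uses pseudo-observations and is evaluated on two synthetic benchmarks, the comonotone copula min{u, v} (the a_1 = a_2 = ∞ case) and the independence copula:

```python
import random

def empirical_upper_tail_dep(pairs, t=0.95):
    """Crude estimate of lambda_U = lim_{t->1} P(V > t | U > t)
    from pseudo-observations (ranks scaled to (0, 1))."""
    n = len(pairs)
    xs = sorted(range(n), key=lambda i: pairs[i][0])
    ys = sorted(range(n), key=lambda i: pairs[i][1])
    rx, ry = [0.0] * n, [0.0] * n
    for r, i in enumerate(xs):
        rx[i] = (r + 1) / (n + 1)
    for r, i in enumerate(ys):
        ry[i] = (r + 1) / (n + 1)
    joint = sum(1 for i in range(n) if rx[i] > t and ry[i] > t)
    marg = sum(1 for i in range(n) if rx[i] > t)
    return joint / marg if marg else 0.0

random.seed(1)
n = 50_000
# comonotone pairs (the a1 = a2 = infinity case: C(u, v) = min(u, v))
como = [(u, u) for u in (random.random() for _ in range(n))]
# independent pairs (the copula Pi)
indep = [(random.random(), random.random()) for _ in range(n)]
print(empirical_upper_tail_dep(como))   # close to 1
print(empirical_upper_tail_dep(indep))  # close to 1 - t = 0.05
```

The estimator at a fixed threshold t < 1 is biased for the limit, but it separates the two regimes clearly.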

2.2 The componentwise maxima of the lifetime variables X ∼ Freund(a, 2)
When one comes across a bivariate distribution and the corresponding bivariate copula, it is a natural question whether the distribution or the copula possesses some remarkable properties, for instance, whether they are max-stable or not. In order to investigate this, we introduce the componentwise maxima across N independent copies (X_1^{(i)}, X_2^{(i)}) ∼ Freund(a, 2), (i = 1, . . . , N), which will be denoted by M_N^{(1)} := max_{1≤i≤N} X_1^{(i)} and M_N^{(2)} := max_{1≤i≤N} X_2^{(i)}. Since the law of M_N^{(1)} coincides with that of M_N^{(2)}, we will omit the upper index from the notation, unless both quantities appear in the same context (e.g., in Subsections 2.3 and 2.4). In accordance with this, we will use the notation X^{(i)} as well. In this subsection, we investigate the fundamental characteristics (the expectation E and the variance V) of M_N, and we provide a non-trivial limiting distribution for it after suitable normalization. In Subsection 2.3, we examine the joint behavior of M_N^{(1)} and M_N^{(2)}, including their copula C_N^{(a)}. In Subsection 2.4, we derive the limiting case of C_N^{(a)} as N → ∞, which, to the best of our knowledge, is a new extreme value copula. As a consequence, we will see that the Freund copula itself is not max-stable, except when a = ∞.

2.2.1 Expectation and variance for N = 2 and for large N
The cdf of M_N is given by P(M_N ≤ x) = F(x)^N, where F is given by setting λ_1 = λ_2 = 1 and a_2 = a in (4). In principle, it is easy to compute E(M_N), E(M_N^2) and D(M_N), i.e., the mean, the second moment and the standard deviation, because all appearing integrals only consist of products of polynomials and exponential functions, where the degree of the polynomials is zero, one, or two. However, in practice this task is much harder. We performed these computations for N = 2, and for comparison, we recall also the case N = 1. For fixed N > 2, we provide E(M_N) only in the extreme cases a = 0 and a = ∞. For large N we refer to the asymptotics that will be presented in Corollary 1 after Proposition 3 (see (18) and also Table 1). For N = 1, we have E(M_1) = E(X_1) = (1/2) · (a + 2)/(a + 1) trivially, as it was shown in [6], Section 3.1, p. 34. Notice that E(M_1) = 1 for a = 0 and E(M_1) = 1/2 for a = ∞. For N = 2, one gets by a cumbersome computation the expectation (11) and the variance V(M_2) given in (12), both explicit rational expressions in a. Notice that for a = 0, these formulas reduce to the corresponding expressions for the maximum of two i.i.d. Exp(1) variables. If a = ∞, then the X^{(i)}'s are i.i.d. Exp(2) distributed, so, taking also into account the extreme case a = 0 and its result in (13), one gets that the expectation of the variable M_N = max(X^{(1)}, . . . , X^{(N)}) is E(M_N) = Σ_{i=1}^N 1/i for a = 0 (13), and E(M_N) = (1/2) Σ_{i=1}^N 1/i for a = ∞ (14). An alternative way of deriving (13) and (14) can be found in Spivey [18].
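The two extreme cases (13) and (14) involve harmonic numbers, since the expectation of the maximum of N i.i.d. Exp(λ) variables is H_N/λ with H_N = Σ_{i=1}^N 1/i. A minimal sketch in exact rational arithmetic (names ours):

```python
from fractions import Fraction

def harmonic(n):
    """H_n = 1 + 1/2 + ... + 1/n, computed exactly."""
    return sum(Fraction(1, i) for i in range(1, n + 1))

# a = 0: the X^(i) are i.i.d. Exp(1), so E(M_N) = H_N;
# a = infinity: i.i.d. Exp(2), so E(M_N) = H_N / 2.
for N in (1, 2, 10):
    h = harmonic(N)
    print(N, float(h), float(h / 2))
```

For N = 1 this reproduces the two boundary values E(M_1) = 1 and E(M_1) = 1/2 noted above.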

2.2.2 Limiting distribution
Proposition 3. The marginal cdfs F and G of the Freund(a, 2) distribution belong to the domain of attraction of the standard Gumbel distribution; the normalizing constants a_N and b_N are given in (15)–(17). Proof. We will use the sufficient condition on a cdf F for belonging to the domain of attraction of the Gumbel distribution stated for instance in Exercise 3.2 in R. Sun [19], and based on the classical works Leadbetter et al. [9] and Resnick [16].
For a ≠ 1, we introduce the function h(x) = − log( 1/(1−a) · e^{−(1+a)·x} − a/(1−a) · e^{−2x} ). The marginal distribution function F(x) can be written as F(x) = 1 − e^{−h(x)}. It is easy to see that h′(x) is a slowly varying function. This means that condition (i) in Exercise 3.2 in [19] holds, therefore F(x) belongs to the domain of attraction of the Gumbel distribution.
For a = 1, we introduce the function h(x) = − log(1 + x) + 2x. The marginal distribution function F(x) can be written as F(x) = 1 − e^{−h(x)} (see (6)). It is easy to see that h(x) is of the form h(x) = x · L(x), where L(x) is a slowly varying function; therefore, condition (ii) in Exercise 3.2 in [19] holds, and we obtain that F(x) belongs to the domain of attraction of the Gumbel distribution in this case as well.
The normalizing constants a_N, b_N for a ≠ 1 can be computed analogously to Example 3.3 in [19], and by elementary considerations for a = 1.
Remark 2. The general theoretical background of the normalizing constants can be found for instance in Resnick [16].

Definition 2. The normalized componentwise maximum is defined as (M_N − b_N)/a_N, where a_N and b_N are given in (15)–(17). (See also Figure 1.) Notice in Figure 1 that the convergence of the normalized M_N to its limiting distribution is very fast: the histograms for the larger values of N are visually hardly distinguishable.
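For the a = 0 case, the convergence in Proposition 3 can be checked without simulation: there the X^{(i)} are i.i.d. Exp(1), and with the standard normalizing constants a_N = 1, b_N = log N (an assumption matching the classical exponential example, cf. Example 3.3 in [19]) the cdf of the normalized maximum is available in closed form. The sketch below (names ours) compares it with the Gumbel cdf on a grid:

```python
import math

def cdf_normalized_max(x, N):
    """P(M_N - log N <= x) when M_N is the max of N i.i.d. Exp(1)
    variables (the a = 0 case, with a_N = 1, b_N = log N)."""
    t = x + math.log(N)
    return (1.0 - math.exp(-t)) ** N if t > 0 else 0.0

gumbel = lambda x: math.exp(-math.exp(-x))

errs = {}
for N in (10, 100, 10_000):
    errs[N] = max(abs(cdf_normalized_max(x, N) - gumbel(x))
                  for x in [i / 10 for i in range(-20, 51)])
    print(N, errs[N])
```

The sup-distance on the grid shrinks roughly like 1/N, which is consistent with the fast visual convergence noted above.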

2.3 Joint distribution and correlation between the componentwise maxima
In the following, we will examine the relation between the componentwise maxima M_N^{(1)} and M_N^{(2)}. Their joint cdf can be easily determined: P(M_N^{(1)} ≤ x, M_N^{(2)} ≤ y) = H(x, y)^N, where H(x, y) is the cumulative distribution function of the Freund(a, 2) distribution. Therefore corr(M_N^{(1)}, M_N^{(2)}) can also be determined, since all the computations consist of integrals of functions comprising a polynomial times an exponential, where the polynomial has degree zero, one, or two. We performed the computation for N = 2.
For the cross-product E(M_2^{(1)} · M_2^{(2)}) we have found an explicit expression, and using formulas (10)–(12), one gets the correlation formula (19). From (19), we immediately see that corr(M_2^{(1)}, M_2^{(2)}) is increasing in a. The two extreme cases are corr(M_2^{(1)}, M_2^{(2)}) = 0 for a = 0 and corr(M_2^{(1)}, M_2^{(2)}) = 1 for a = ∞. It also means that M_2^{(1)} and M_2^{(2)} are not independent if a > 0. Figure 2b shows such a sample; the corresponding theoretical correlation can be computed from (19).

2.4 The limiting case of the copula of the componentwise maxima: a new extreme-value copula
It is worth examining how the dependence between M_N^{(1)} and M_N^{(2)} changes as N varies. (See also Figure 2 and Figure 3.) Although the scatterplots of the copula of M_N^{(1)} and M_N^{(2)} do not seem to differ significantly from the copula of (X_1, X_2), we will see that the distribution of (X_1, X_2) is not max-stable provided that 0 < a < ∞. In Proposition 4, we will derive an analytic formula for the extreme value copula which stems from (X_1, X_2). Two cases need to be distinguished: 0 ≤ a ≤ 1 and a > 1. In the latter case, we get an extreme value copula which, to the best of our knowledge, has not been discussed in the literature yet, so in fact we have found a new copula (what is more, an extreme value copula) that can be given explicitly.
Before presenting this computation, we numerically quantify the phenomenon pictured in Figure 2, which visually suggests that the copulas of (M_N^{(1)}, M_N^{(2)}) are "similar" to each other practically for all values of N. We want to express more exactly whether these copulas are indeed close to each other. The second and fourth columns of Table 2 show the sup-distance and the mean-absolute-distance between the copula of (X_1, X_2) (denoted by C in Table 2) and the copula of (M_N^{(1)}, M_N^{(2)}) (denoted by C_N in Table 2). The third and fifth columns of the table use the notation C_{N_{i−1}} and C_{N_i}, referring to the N values which appear in the (i−1)-th and i-th rows of the table, i.e., always in the previous and in the current row.

Proposition 4. Let C̃^{(a)} denote the limit of the copulas C_N^{(a)} as N → ∞. (i) If 0 ≤ a ≤ 1, then C̃^{(a)} is the independence copula Π(u, v) = u·v, as given in (20). (ii) If a > 1, then C̃^{(a)} is the extreme value copula given in (21).

Proof. It is well-known, and one can also verify by trivial considerations, that between a bivariate copula C(u, v) and the corresponding copula C_N(u, v) of the componentwise maxima, the following relation holds:

C_N(u, v) = C(u^{1/N}, v^{1/N})^N. (22)

Our aim is to compute the limit of the above defined C_N^{(a)}. We will use a characterization which can be found (among others) in Gudendorf and Segers [5] and in Drees and Huang [3], by which the relation between C and C̃ can be written in terms of the tail dependence function. Applying this to the Freund(a, 2) model and also using the semi-explicit expression (7), one can write the limit in the form (23). In the remaining part of the proof, we resort to approximating the marginal cdfs and the inverse marginal cdfs in order to compute the limit in (23). The validity of these approximations can be verified in an elementary way.
To show (i) for 0 ≤ a < 1, we need the following considerations. When x → ∞, then according to (4), the marginal cdf F can be approximated as F(x) ≈ 1 − 1/(1−a) · e^{−(1+a)x}, since for a < 1 the term e^{−(1+a)x} dominates e^{−2x}. Therefore, F^{-1}(u) can be approximated accordingly, and as a consequence, this approximation can be used when we compute the limit in (23). Analogously, an approximation of the same type holds for the marginal cdf G.
Substituting (26) and (27) into (23) and also using (3), one gets after taking the limit C̃^{(a)}(u, v) = u·v, as it was claimed. The proof of (i) for a = 1 is similar to the case 0 ≤ a < 1. The only difference is that one has to use the approximation F(x) ≈ 1 − (1 + x) · e^{−2x} based on (6), and the corresponding approximation of F^{-1}. To show (ii), first we assume that 0 < u ≤ v. When x → ∞, y → ∞, then (according to (4) and (5)) the marginal cdfs admit the approximations (28) and (29). Substituting (28) and (29) into (23) and also using (3), one gets formula (21) after taking the limit. Showing (ii) for the case 0 < v ≤ u is similar, and for the case u = 0 or v = 0 it is trivial. Figure 3 illustrates Proposition 4 (i). One can immediately see that the convergence to the independence copula Π is relatively slow; however, this effect is hard to observe visually based on the scatterplots. (It is also obvious that smaller values of a would lead to faster convergence, since the original variables (X_1, X_2) are then closer to being independent, and in case a = 0 actually independent.) Remark 3. The copula C̃^{(a)}(u, v) is continuous in the parameter a. This is trivial for all a ≠ 1 and can be verified for a = 1 by evaluating (21) at a = 1.
Corollary 2. The result clearly shows that the Freund copula C^{(a)}(u, v) for 0 < a < ∞ is not max-stable, since max-stability would require C_N^{(a)}(u, v) = C^{(a)}(u, v) for every N, which is not the case.
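The relation C_N(u, v) = C(u^{1/N}, v^{1/N})^N behind Corollary 2 is easy to probe numerically. The sketch below (names ours) verifies that the two max-stable boundary cases of the model, Π (the a = 0 case) and min{u, v} (a = ∞), are fixed points of this transformation; it is exactly this fixed-point property that fails for the Freund copula with 0 < a < ∞:

```python
def copula_of_maxima(C, u, v, N):
    """C_N(u, v) = C(u^(1/N), v^(1/N))^N: the copula of componentwise
    maxima of N independent pairs with copula C (relation (22))."""
    return C(u ** (1.0 / N), v ** (1.0 / N)) ** N

indep = lambda u, v: u * v          # Pi, the a = 0 Freund copula
comono = lambda u, v: min(u, v)     # the a = infinity Freund copula

# Both Pi and min{u, v} are max-stable: C_N = C for every N.
for N in (2, 7, 50):
    print(abs(copula_of_maxima(indep, 0.3, 0.8, N) - 0.3 * 0.8) < 1e-12,
          abs(copula_of_maxima(comono, 0.3, 0.8, N) - 0.3) < 1e-12)
```

Replacing `indep` by any Freund(a, 2) copula with 0 < a < ∞ would make the fixed-point check fail, in line with the corollary.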

Remark 4. According to Mathieu and Mohammed [13] and de Haan and Resnick [1], (bivariate) extreme value copulas can also be characterized by an exponent function V such that C̃(u, v) = exp( −V(−1/log u, −1/log v) ), where V is a homogeneous function of order −1. Looking at (21), it is easy to read off the exponent function V corresponding to the Freund(a, 2) model (for x ≤ y).

Proposition 5. Let c^{(a)} denote the copula density belonging to the extreme value copula C̃^{(a)} given in (20) and (21), i.e., c^{(a)}(u, v) = ∂²C̃^{(a)}(u, v)/(∂u ∂v); its explicit form is given in (31). Proof. By simple differentiation.

Remark 5. For a > 1, the copula density c^{(a)} is unbounded at (0, 0) and at (1, 1). One can see this by computing the suitable limits. The copula density is pictured in Figure 4.

2.5 Sampling from the new extreme value copula

We present two methods for sampling from the new extreme value copula (21). According to our experience, numerically neither of them significantly outperforms the other. Figure 5 illustrates two samples obtained by these methods.

2.5.1 Sampling by acceptance-rejection
The idea of the acceptance-rejection method(s) appeared (to the best of our knowledge) first in von Neumann [21], and it can also be found in Chapter II.3 in Devroye [2], as a classical reference.
In order to find a suitable dominating density, one has to ensure that both peaks of c^{(a)}(u, v) (i.e., around (0, 0) and (1, 1)) are dominated. To this end, we will use the bivariate Clayton copula, a well-known copula family in the class of Archimedean copulas: C_ϑ(u, v) = (u^{−ϑ} + v^{−ϑ} − 1)^{−1/ϑ}, ϑ > 0. The flipped Clayton copula can be found in many sources as well. As Hochrainer-Stigler et al. [7] expressively mention: a (bivariate) flipped copula means that the copula is rotated by 180 degrees, and they also refer to Jongman et al. [8], where the formula of the flipped Clayton copula is given by Ĉ_ϑ(u, v) = u + v − 1 + C_ϑ(1 − u, 1 − v). Let us define the bivariate random vector B = (B_1, B_2) as a mixture (with weight w) of the Clayton and the flipped Clayton copulas, where c^{(a)}(u, v) is given in (31). Then one can draw a sample from the distribution of the random vector B, for instance by the well-known algorithm of Marshall and Olkin [12]. Finally, according to the classical acceptance-rejection method mentioned above, we gain a sample from the copula density c^{(a)}(u, v). Figure 5a shows such a sample from a Freund(a, 2) model, with suitably chosen parameters ϑ and w. We solved the maximization problem on the right-hand side of (32) numerically and chose the dominating constant K accordingly for the actual sampling.
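The generic acceptance-rejection mechanism can be sketched compactly. For brevity we illustrate it on a one-dimensional toy target (a Beta(2, 2) density under a uniform proposal) rather than on the paper's Clayton-based dominating density, so all names and parameter values below are ours:

```python
import random

def accept_reject(target_pdf, proposal_sampler, proposal_pdf, K, rng=random):
    """Classical von Neumann acceptance-rejection:
    requires target_pdf(x) <= K * proposal_pdf(x) everywhere."""
    while True:
        x = proposal_sampler()
        if rng.random() * K * proposal_pdf(x) <= target_pdf(x):
            return x

# Toy illustration: sample Beta(2, 2), density 6x(1-x), from a uniform
# proposal; the density maximum is 1.5, so K = 1.5 dominates everywhere.
random.seed(2)
target = lambda x: 6.0 * x * (1.0 - x)
sample = [accept_reject(target, random.random, lambda x: 1.0, K=1.5)
          for _ in range(20_000)]
mean = sum(sample) / len(sample)
print(abs(mean - 0.5) < 0.01)   # E(Beta(2, 2)) = 1/2
```

For the copula density c^{(a)}, the uniform proposal is replaced by the Clayton/flipped-Clayton mixture and K by the solution of the maximization problem (32); the acceptance test itself is unchanged.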
2.5.2 Sampling by numerically computing the quantiles of the conditional copula function

The inverse conditional distribution method is well known (see for instance Nelsen [14], Section 2.9, or Mai and Scherer [10], Section 1.1.3). Suppose that the copula density is c(u, v), which does not vanish on [0, 1]², and consider the conditional copula function, i.e., the partial derivative of the copula with respect to its first argument. Then the algorithm using Newton's method is as follows. Sample V ∼ Uni[0, 1] and independently Z ∼ Uni[0, 1], set u_0 = 0.5, and iterate Newton's method on the corresponding quantile equation.
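Here is a self-contained sketch of the conditional-quantile Newton iteration. As a testbed we use the Clayton copula, whose conditional quantile is available in closed form for checking; applying the same scheme to c^{(a)} only requires replacing the two density functions (all names here are ours, chosen for illustration):

```python
import math

# Clayton copula as a closed-form testbed (an assumption for
# illustration; the paper applies the same scheme to its new copula).
def clayton_cond(u, v, theta):
    """C_u(v) = dC/du for the Clayton copula."""
    return u ** (-theta - 1) * (u ** -theta + v ** -theta - 1) ** (-1 / theta - 1)

def clayton_density(u, v, theta):
    s = u ** -theta + v ** -theta - 1
    return (1 + theta) * (u * v) ** (-theta - 1) * s ** (-1 / theta - 2)

def cond_quantile_newton(u, z, theta, v0=0.5, tol=1e-10):
    """Solve C_u(v) = z for v by Newton's method; the derivative of
    C_u(v) with respect to v is the copula density c(u, v)."""
    v = v0
    for _ in range(100):
        g = clayton_cond(u, v, theta) - z
        v_new = v - g / clayton_density(u, v, theta)
        v_new = min(max(v_new, 1e-9), 1 - 1e-9)  # keep iterate inside (0, 1)
        if abs(v_new - v) < tol:
            return v_new
        v = v_new
    return v

theta = 2.0
u, z = 0.3, 0.7
v_newton = cond_quantile_newton(u, z, theta)
# closed-form conditional quantile of the Clayton copula, for checking
v_exact = ((z ** (-theta / (1 + theta)) - 1) * u ** -theta + 1) ** (-1 / theta)
print(abs(v_newton - v_exact) < 1e-8)
```

The clipping step is a cheap safeguard against Newton steps leaving the unit interval; a bisection fallback could be added for densities with very steep corners.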

2.6 Correlation coefficients (measures of concordance), Pickands dependence function and tail dependence coefficients of the new extreme value copula
In this subsection, we investigate some characteristics of the newly explored extreme value copula. First we recall the definition of three usual correlation coefficients (measures of concordance), then we picture their values for C̃^{(a)}(u, v) as functions of a. Second, we compute and picture Pickands dependence function, which is another usual way to characterize bivariate extreme value copulas. Third, we provide the tail dependence coefficients in Proposition 8.

Definition 3. For a bivariate copula C(u, v),

(i) Spearman's ρ is given by ρ = 12 ∫_0^1 ∫_0^1 C(u, v) du dv − 3;

(ii) Kendall's τ is given by τ = 4 ∫_0^1 ∫_0^1 C(u, v) dC(u, v) − 1;

(iii) Blomqvist's β is given by β = 4 · C(1/2, 1/2) − 1.

Notice that in the trivial case 0 ≤ a ≤ 1, all three correlation coefficients are zero for C̃^{(a)}(u, v).
When we aim to determine Spearman's ρ and Kendall's τ for the extreme value copula C̃^{(a)}(u, v), we cannot hope for analytic expressions. We performed these computations by numerical integration; the results, along with Blomqvist's β, are shown in Figure 6.

Definition 4. For a bivariate extreme value copula C̃(u, v), Pickands dependence function is given by

A(t) = log C̃(y^{1−t}, y^t) / log y

for all t ∈ [0, 1] and for any y ∈ (0, 1).
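Definition 4 can be evaluated numerically for any copula. The sketch below (names ours) checks the two textbook benchmarks: A ≡ 1 for the independence copula Π, and A(t) = max{t, 1 − t} for the comonotone copula min{u, v}; for an extreme value copula the value must not depend on the auxiliary y:

```python
import math

def pickands(C, t, y=0.5):
    """A(t) = log C(y^(1-t), y^t) / log y for an extreme value copula C;
    for such copulas the value does not depend on y in (0, 1)."""
    return math.log(C(y ** (1.0 - t), y ** t)) / math.log(y)

indep = lambda u, v: u * v
comono = lambda u, v: min(u, v)

for t in (0.0, 0.25, 0.5, 1.0):
    a_pi = pickands(indep, t)        # A(t) = 1 for Pi
    a_m = pickands(comono, t)        # A(t) = max(t, 1 - t) for min{u, v}
    print(t, round(a_pi, 10), round(a_m, 10))
```

The same helper applied to C̃^{(a)} reproduces the curves of Figure 7.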
Proposition 7. Pickands dependence function for the extreme value copula C̃^{(a)} from (21) is given by an explicit piecewise formula, whereas for the extreme value copula C̃^{(a)} from (20), it is simply A(t) = 1 for t ∈ [0, 1].
Proof. One has to apply formula (34) to the extreme value copulas (20) and (21). Figure 7 depicts Pickands dependence function for some values of a for the extreme value copulas C̃^{(a)} from Proposition 4. Pickands dependence function also allows an alternative way to quantify the distance between Π and C̃^{(a)} (see also Trutschnig et al. [20]), namely

dist(C̃^{(a)}, Π) := 2 − 2 · ∫_0^1 A(t) dt.

Note that, since A(t) ≥ max{t, 1 − t}, this definition yields 0.5 as the largest possible distance between an extreme value copula and the independence copula. The distance between the corresponding C̃^{(a)} and Π is also shown in Figure 7.

Proposition 8 provides the lower and upper tail dependence coefficients of C̃^{(a)}. Proof. Elementary computation, using Definition 1 and formula (21).

3 Computations in the multivariate Freund(a, n) model
We draw the reader's attention to the fact that several approaches and models can be found in the literature which generalize Freund's bivariate model, or in which Freund's bivariate model appears as a special case. We mention Shaked et al. [17] and Norros [15] as two examples. On the other hand, our focus is different from that of the authors who have already contributed to this issue, so in this sense we believe that the multivariate setting that we propose is novel.

3.1 The n-variate one-step model with uniform cascading parameter
We consider our model with n institutions facing a one-step cascade, where the lifetime variables are governed by a matrix A according to the following. The diagonal elements λ_1, . . . , λ_n ∈ R_+ are the initial lifetime intensities of the institutions, i.e., Y_j ∼ Exp(λ_j), (j = 1, . . . , n) are independent of each other. Each nondiagonal element a_{i,j} (i ≠ j, i, j = 1, . . . , n) expresses the effect of the default of institution i on institution j, by adding the value a_{i,j} ≥ 0 to the original lifetime parameter λ_j. This is formulated in (35) for j = 1, . . . , n, where the Z_j's are independent of each other and of all Y_i (i = 1, . . . , n). The notation emphasizes that the distribution of the variables depends on n. The above mechanism defines an exchangeable multivariate distribution, in notation X_n := (X_{1,n}, . . . , X_{i,n}, . . . , X_{n,n}) ∼ Freund(A). We set λ_1 = . . . = λ_n = 1 and a_{i,j} = a ≥ 0 in the entire Section 3, and we will use the notation X_n ∼ Freund(a, n). (For the generalization of Proposition 9 see Appendix A.) The model also allows for the case a = ∞; in this case, X_{1,n} = . . . = X_{n,n} with probability one. In the following, we present the joint density function, the marginal densities, and the marginal cdfs of the Freund(a, n) distribution.
Proposition 9. (i) The joint pdf of X_n = (X_{1,n}, . . . , X_{i,n}, . . . , X_{n,n}) is given in (36). (ii) If n ≠ a + 1, then any one-dimensional marginal density, i.e., the pdf of each X_{i,n}, is given by

f_{X_{i,n}}(x) = [ e^{−nx} + ((n−1)(1+a))/(n − 1 − a) · ( e^{−(1+a)x} − e^{−nx} ) ] · 1_{x≥0}. (37)

(iii) If n ≠ a + 1, then any one-dimensional marginal cdf, i.e., the cdf of each X_{i,n}, is given in (38). Proof. Assertion (i) can be verified in an elementary way and results in (36). Assertion (ii) can be verified similarly; (iii) follows from (ii) by integration.

Remark 6. (i) For a = 0, the joint pdf reduces to f(x_1, . . . , x_n) = e^{−(x_1 + . . . + x_n)} on the positive orthant, which is obvious, since in this case the marginals are independent and Exp(1) distributed. (ii) For a = ∞, the marginal density is given by f_{X_{i,n}}(x) = n·e^{−nx}. This is obvious, since in this case X_{1,n} = . . . = X_{n,n} ∼ Exp(n). This formula can also be attained by taking the limit a → ∞ in (37). The joint distribution of (X_{1,n}, . . . , X_{n,n}) is degenerate and concentrated on a one-dimensional subset of R^n, namely on x_1 = . . . = x_n. (iii) For n → ∞, the marginal density reduces to f_{X_{i,∞}}(x) = (1 + a)·e^{−(1+a)x}, which can be understood as follows. For large n, the lifetime variables are nearly independent of each other: one institution receives the shock, and the others (i.e., all except the one who received the shock) essentially do not depend on each other any more. (iv) Based on the marginal density (37), for a > 0, the distribution of X_{i,n} can be seen as a generalized mixture (see Definition 5) of the Exp(n) and Exp(1 + a) distributions with weights w_1 and w_2, so w_1 + w_2 = 1, and either w_1 or w_2 is non-positive.
(v) If n = a + 1, then the one-dimensional marginal densities and cdfs can be computed similarly as in the proof of Proposition 9, and they are given by f_{X_{i,n}}(x) = (1 + n(n − 1)x) · e^{−nx} · 1_{x≥0}, and F_{X_{i,n}}(x) = (1 − (1 + (n − 1)x) · e^{−nx}) · 1_{x≥0}.

Definition 5. We say that a random variable Z is a generalized mixture of the random variables Z_1, . . . , Z_n, if for their pdfs f_Z, f_{Z_1}, . . . , f_{Z_n} the relation f_Z = Σ_{i=1}^n w_i · f_{Z_i} holds (39), where Σ_{i=1}^n w_i = 1. Notice that this definition allows that some of the weights are negative. We use the notation (40) for such a relation; however, given the densities f_{Z_1}, . . . , f_{Z_n}, we are not aware of any straightforward manner in general for constructing a random variable Z whose pdf is f_Z. Hence (40) is only a reformulation of the relation (39) among the density functions.
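A concrete instance of Definition 5 with a negative weight (our own toy example, not from the paper): taking w = (2, −1) on the Exp(1) and Exp(2) densities gives f(x) = 2e^{−x} − 2e^{−2x}, which happens to be the pdf of the sum of independent Exp(1) and Exp(2) variables, hence a genuine density. The numerical check below confirms nonnegativity and total mass 1:

```python
import math

def gen_mixture_pdf(x, weights, rates):
    """Generalized exponential mixture: sum_i w_i * rate_i * exp(-rate_i x),
    where the weights sum to 1 but may be negative."""
    return sum(w * r * math.exp(-r * x) for w, r in zip(weights, rates))

# w = (2, -1) on Exp(1) and Exp(2): the weights sum to 1, one is
# negative, yet f(x) = 2 e^{-x} - 2 e^{-2x} is a genuine density
# (the pdf of the sum of independent Exp(1) and Exp(2) variables).
w, r = (2.0, -1.0), (1.0, 2.0)
xs = [i * 0.001 for i in range(40_000)]            # grid on [0, 40)
vals = [gen_mixture_pdf(x, w, r) for x in xs]
print(min(vals) >= 0.0)                            # nonnegative
print(abs(0.001 * sum(vals) - 1.0) < 2e-3)         # integrates to ~1
```

This illustrates the point of the definition: negative weights are admissible as long as the resulting function stays a valid density.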

3.1.1 Expectation and variance of the marginal distributions
Some standard characteristics of the one-dimensional marginal distributions, for instance the expectation, the second moment, and the variance, are easy to determine by using (37) and computing the suitable integrals. They will be derived again in Subsubsection 3.2.2 more elegantly and more generally. For all 1 ≤ i ≤ n, we have

E(X_{i,n}) = (n + a)/(n(1 + a)),

V(X_{i,n}) = E(X_{i,n}^2) − ( (n + a)/(n(1 + a)) )^2 = 1/(a + 1)^2 + a(a + 2)/((a + 1)^2 · n^2).
One immediately sees that for any fixed 0 ≤ a < ∞, E(X_{i,n}) → 1/(1 + a) and V(X_{i,n}) → 1/(1 + a)^2 as n → ∞. Notice that for a = 0, the above characteristics do not depend on n: in particular, E(X_{i,n}) = 1, E(X_{i,n}^2) = 2, and V(X_{i,n}) = 1 for all n (cf. Remark 6 (i)).
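The one-step mechanism itself is easy to simulate, which gives an independent check of the expectation formula above. In the sketch below (names ours) all entities start with intensity 1, and after the first default the survivors continue independently with intensity 1 + a:

```python
import random

def sample_freund_one_step(n, a, rng=random):
    """One draw of (X_{1,n}, ..., X_{n,n}) in the one-step model:
    all entities start with rate 1; after the first default the
    survivors switch (independently) to rate 1 + a."""
    y = [rng.expovariate(1.0) for _ in range(n)]
    t1 = min(y)                    # time of the first default, Exp(n)
    first = y.index(t1)
    x = []
    for i in range(n):
        if i == first:
            x.append(t1)
        else:
            x.append(t1 + rng.expovariate(1.0 + a))
    return x

random.seed(3)
n, a, reps = 5, 2.0, 100_000
mean = sum(sum(sample_freund_one_step(n, a)) for _ in range(reps)) / (n * reps)
# E(X_{i,n}) = 1/n + (n-1)/(n(1+a)) = (n+a)/(n(1+a)),
# since X = T1 + (with prob (n-1)/n) an Exp(1+a) remainder
theory = (n + a) / (n * (1.0 + a))
print(abs(mean - theory) < 0.005)
```

The decomposition in the comment (minimum plus an optional Exp(1 + a) remainder) is also the route to the moments of the sums in Subsection 3.2.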

3.1.2 The k-dimensional marginal distributions
It is trivial due to exchangeability that the law of (X_{i_1,n}, . . . , X_{i_k,n}) does not depend on the choice of the indices, so when we aim to determine its distribution, we can assume without loss of generality that i_j = j for j = 1, . . . , k (1 ≤ k ≤ n).

Proposition 10. For 0 ≤ a < ∞, if n ≠ k(1 + a), then the k-dimensional marginal density, i.e., the pdf of (X_{1,n}, . . . , X_{k,n}), is given in (44). Proof. In order to derive (44), we did not integrate the joint density (36) n − k times, which would also be possible. We applied model-based considerations, i.e., we elaborated the probability P(X_{1,n} ∈ [x_1, x_1 + ∆x_1], . . . , X_{k,n} ∈ [x_k, x_k + ∆x_k]) in an elementary way, similar to the proof of Proposition 9.

3.1.3 Correlation between the components
We provide the special case of formula (44) for k = 2 in (45). Then we will use it to compute the covariance and correlation of X_{i,n} and X_{j,n}. From (45), it follows by integration that the cross-moment E(X_{i,n} · X_{j,n}) is given in (46). From (41), (43), and (46), the correlation can be computed: corr(X_{i,n}, X_{j,n}) = P(a, n)/Q(a, n), where P(a, n) and Q(a, n) are explicitly computable polynomials of a and n of equal degree.
It is easy to see that for 0 ≤ a < ∞, corr(X_{i,n}, X_{j,n}) → 0 as n → ∞, and the rate of convergence is 1/n, i.e., lim_{n→∞} n · corr(X_{i,n}, X_{j,n}) = l with 0 < l < ∞.
Observe that for a = 0, the correlation is zero for all n (cf. Remark 6 (i)).

3.2 Distributions of sums of components in the n-variate one-step model – Exponential Gamma Mixture Type I distribution
Our aim in the following is to investigate the average lifetime of the entities in the system for fixed n and for n → ∞. Let us introduce the following notation: S_n := X_{1,n} + . . . + X_{n,n} and S_{k,n} := X_{1,n} + . . . + X_{k,n} for k = 1, . . . , n; i.e., S_n = S_{n,n}. We provide an explicit form and an integral form for the pdf of S_{k,n}. (Due to exchangeability, S_{k,n} could also be defined by X_{i_1,n} + . . . + X_{i_k,n} with any k-element subset of {1, . . . , n}.)

3.2.1 Straightforward derivation and the explicit form of the pdf of S_{k,n}
Proposition 11. For any fixed n and for all k = 1, . . . , n, if n ≠ k·(1 + a), then the probability density function of S_{k,n} is given by

g_{S_{k,n}}(s) = c(a, n, k) · e^{−ns/k} + e^{−(1+a)s} · pol_{k−1}(s), for s ≥ 0, (48)

where the constant c(a, n, k) and the coefficients of the degree-(k−1) polynomial pol_{k−1} are given in (49). In particular, when k = n, if a ≠ 0, then the formula specializes to (50). Proof. The proof is a straightforward, but sometimes extremely cumbersome computation. It is clear that g_{S_{k,n}} is obtained by integrating the k-dimensional marginal density f_{k,n} over the simplex H_k = {(s_1, . . . , s_k) : s_1 + . . . + s_k = s} (51). From (44), it is clear that f_{k,n} is a linear combination of exponential functions, where the exponents are linear combinations of the (integration) variables s_1, . . . , s_k. Furthermore, the integration region H_k translates to linear expressions in the integration bounds if the (k − 1)-dimensional integral in (51) is written out as k − 1 (successive) one-dimensional integrals. It means that in the j-th integration step (j = 1, . . . , k − 1), the current integrand consists of a pure exponential term and an exponential multiplied by a polynomial of degree j − 1. Due to the above-mentioned facts (regarding f_{k,n} and the linear expressions in the integration bounds), the result of the j-th integration step (j = 1, . . . , k − 1) is a function consisting of a pure exponential term and an exponential multiplied by a polynomial of degree j. This argumentation already explains the structure given in (48). We show the computation in reasonable detail for k = 2 in Appendix C, and we will also indicate how the proof can be carried out for k > 2.

Remark 8. According to (49), the values of c(0, n, n), c_0(0, n, n) and c_{n−1}(0, n, n) might be ambiguous. By evaluating in the correct order, namely starting with a = 0 followed by k = n, one gets c(0, n, n) = c_0(0, n, n) = 0 and c_{n−1}(0, n, n) = 1/(n−1)!. This little technicality will become important in Subsection 3.3.

Remark 9. Observe the following.

(i) If a = ∞, for any fixed k and n, then g_{S_{k,n}}(s) = (n/k) · e^{−ns/k}, i.e., S_{k,n} follows the Exp(n/k) distribution.

(ii) If n → ∞, for any fixed a and k, then lim_{n→∞} g_{S_{k,n}}(s) = (1/(k−1)!) · e^{−(1+a)s} · (1 + a)^k · s^{k−1}, i.e., the Gamma(k, a + 1) distribution, where we use the following parametrization of the gamma distribution: gamma_pdf_{α,β}(s) = (β^α/Γ(α)) · s^{α−1} e^{−βs} for s ≥ 0. So S_{k,n} is, roughly speaking, the sum of k almost independent Exp(1 + a) variables for large n, i.e., when n ≫ k; and it can be seen as the sum of k indeed independent Exp(1 + a) variables in the limiting case n → ∞ (cf. Remark 6 (iii)).

(iii) If a = 0, for any fixed k and n, then S_{k,n} is the sum of k i.i.d. Exp(1) variables, i.e., Gamma(k, 1) distributed.

Remark 10. Based on Proposition 11, for 1 ≤ k < n, the variable S_{k,n} can be seen as a generalized mixture of an exponential distribution and k gamma distributions, one of which is in fact an exponential distribution. The case k = n is slightly different: S_{n,n} is a generalized mixture of an exponential and n − 1 gamma distributions, one of which is in fact an exponential distribution. We formulate this assertion in the next statement.

Proposition 12.
Let the random variable G be Exp(n/k) distributed and the random variables $G_j$ be Gamma(j, 1 + a) distributed (j = 1, . . . , k). Assume that n ≠ k(1 + a). Then $S_{k,n}$ can be written as the generalized mixture

$g_{S_{k,n}}(s) = w_0 \, g_{G}(s) + \sum_{j=1}^{k} w_j \, g_{G_j}(s),$  (52)

where the mixing weights $w_j$ are given in (53).

Proof. The assertion follows from Proposition 11 by identifying the pdfs of the Exp(n/k) and Gamma(j, 1 + a) distributions.
That $\sum_{j=0}^{k} w_j = 1$ can be seen by a cumbersome, but elementary computation. Notice that the mixture is indeed a generalized mixture: $w_j < 0$ if k − j is odd and n − k(1 + a) > 0; and $w_j < 0$ for all j = 1, . . . , k if n − k(1 + a) < 0.
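For intuition about generalized mixtures (weights that sum to one but may be negative), here is a classical self-contained toy example, unrelated to the specific weights in (53): the sum of independent Exp(1) and Exp(2) variables has pdf $2e^{-s} - 2e^{-2s}$, i.e., it is a generalized mixture of the two exponentials with weights 2 and −1, and is nevertheless a valid density:

```python
import numpy as np

def trapz(y, x):
    """Trapezoidal rule (avoids relying on np.trapz/np.trapezoid naming)."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

# Toy generalized mixture: sum of independent Exp(1) and Exp(2) variables.
# Its pdf is 2*pdf_Exp(1) + (-1)*pdf_Exp(2): the weights sum to one, one
# of them is negative, yet the result is a valid density.
w = np.array([2.0, -1.0])
rates = np.array([1.0, 2.0])

def gen_mix_pdf(s):
    comps = rates[:, None] * np.exp(-rates[:, None] * s[None, :])
    return (w[:, None] * comps).sum(axis=0)

grid = np.linspace(0.0, 40.0, 400_001)
vals = gen_mix_pdf(grid)
total = trapz(vals, grid)
print(w.sum())                   # 1.0  (weights sum to one)
print(total)                     # ~1.0 (integrates to one)
print(bool(vals.min() >= 0.0))   # True (nonnegative despite the negative weight)
```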

3.2. Model-based derivation and the integral form of the pdf of S_{k,n}
Due to the definition of the multivariate model given in (35), $S_{k,n}$ can be rewritten in the form (54), where $E_{n,n} \sim \mathrm{Exp}(1)$ and $Z^{(n-1)} \sim \mathrm{Gamma}(n-1, a+1)$ are independent of each other.

Proposition 13.
The integral form of the pdf of $S_{k,n}$ for k = 1, . . . , n is given by (56). In particular, for k = n it takes the form (57).

Remark 11.
Notice that (54) can be rewritten equivalently as

$S_{k,n} = E_{k,n} + G_{k-1} + Z_k,$  (58)

where the summands are independent, $E_{k,n} \sim \mathrm{Exp}(n/k)$, $G_{k-1} \sim \mathrm{Gamma}(k-1, a+1)$, and $Z_k = 0$ with probability $\frac{k}{n}$, while $Z_k \sim \mathrm{Exp}(a+1)$ with probability $\frac{n-k}{n}$.
Using (58), we can elegantly derive the expectation and the variance of $S_{k,n}$. We have

$E(S_{k,n}) = \frac{k}{n} + \frac{k-1}{a+1} + \frac{n-k}{n(a+1)},$  (59)

$V(S_{k,n}) = \frac{k^2}{n^2} + \frac{k-1}{(a+1)^2} + \frac{n^2-k^2}{n^2(a+1)^2}.$  (60)

Definition 6.
The Exponential Gamma Mixture Type I distribution (in notation EGM(n, k, a), or EGM Type I) is defined by its probability density function via the formulas (48)–(49), or alternatively by formula (56).
The first approach (formulas (48)–(49)) gives an algebraic way of defining the EGM(n, k, a) distribution, as was pointed out in Proposition 12, while the second approach (decomposition (58)) gives a probabilistic way. The pdf of the EGM(n, k, a) distribution is plotted in Figure 8 for some values of n, k, a.

Remark 12.
For fixed k and n → ∞, the weights $w_j$ (j = 0, . . . , k − 1) in (53) vanish, while $w_k \to 1$, so the generalized mixture in (52) reduces to a Gamma(k, a + 1) distribution, as was already stated in Remark 9 (ii); i.e., the EGM(∞, k, a) distribution is identical to the Gamma(k, a + 1) distribution.
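The probabilistic route of Definition 6 also yields a direct sampler. The sketch below (the function name `sample_egm1` and the parameter values are illustrative, not from the paper) draws from EGM(n, k, a) via the decomposition (58) and checks the sample mean against the expectation implied by (58):

```python
import numpy as np

rng = np.random.default_rng(42)

def sample_egm1(n, k, a, size):
    """Illustrative sampler for EGM(n, k, a) via the decomposition (58):
    Exp(n/k) + Gamma(k-1, a+1) + (0 with prob. k/n, else Exp(a+1))."""
    e = rng.exponential(scale=k / n, size=size)                     # Exp(n/k)
    g = rng.gamma(k - 1, 1.0 / (a + 1), size) if k > 1 else np.zeros(size)
    z = np.where(rng.random(size) < k / n,
                 0.0,
                 rng.exponential(scale=1.0 / (a + 1), size=size))
    return e + g + z

n, k, a = 10, 4, 0.5
x = sample_egm1(n, k, a, 500_000)
# Expectation implied by (58): k/n + (k-1)/(a+1) + (n-k)/(n*(a+1))
mean = k / n + (k - 1) / (a + 1) + (n - k) / (n * (a + 1))
print(x.mean(), mean)   # should agree up to Monte Carlo error
```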

3.3. Asymptotics for S_{n,n}: gamma approximation
Since no simple, well-tractable formula for the pdf of $S_{n,n}$ is available (see (50) and (57)), we aim to understand the asymptotic behaviour of $S_{n,n}$ by suitably approximating its pdf. We provide an asymptotic result which arises from modifying a lower bound for the density $g_{S_{n,n}}$. Setting k = n in (59) and (60) yields

$E(S_{n,n}) = 1 + \frac{n-1}{a+1},$  (61)

$V(S_{n,n}) = 1 + \frac{n-1}{(a+1)^2}.$  (62)

Inserting the corresponding bounds into (50), the statement follows. Formulas (61) and (62) show that the distribution of $S_{n,n}$ is degenerate in the sense that $E(S_{n,n}) \to \infty$ and $V(S_{n,n}) \to \infty$ as n → ∞, but it is still worth examining its distribution, as we will see in the following. The upper bound in (63) (or alternatively the integral form (57)) shows that $\lim_{n\to\infty} g_{S_{n,n}}(s) = 0$ for all s ≥ 0. When the lower bound is multiplied by (a + 1), it turns into the pdf of a Gamma(n, a + 1) distribution, which can be used to approximate the density $g_{S_{n,n}}$ after appropriate scaling and shifting. This assertion is formulated precisely in the next statement and illustrated in Figure 9. (For larger n the difference between the curves is hardly visible.) We call $\tilde{Y}_n$ the gamma approximation of $S_{n,n}$, although $\tilde{Y}_n$ (due to the $\alpha_n$ term) is not gamma-distributed but follows a shifted gamma distribution, which justifies the terminology. The proof can be found in Appendix B. Here T is an arbitrary monotone transformation, and $g_{T(S_{n,n})}$ and $g_{T(\tilde{Y}_n)}$ are the corresponding density functions.
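A small Monte Carlo check of (61)–(62) can be done by assuming the representation $S_{n,n} = E_{n,n} + Z^{(n-1)}$ with independent $E_{n,n} \sim \mathrm{Exp}(1)$ and $Z^{(n-1)} \sim \mathrm{Gamma}(n-1, a+1)$ from the model-based derivation (parameter values below are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)

# Monte Carlo check of (61)-(62), assuming the representation
# S_{n,n} = E_{n,n} + Z^{(n-1)} with independent E_{n,n} ~ Exp(1)
# and Z^{(n-1)} ~ Gamma(n-1, a+1).
def sample_Snn(n, a, size):
    return rng.exponential(1.0, size) + rng.gamma(n - 1, 1.0 / (a + 1), size)

n, a = 50, 1.0
s = sample_Snn(n, a, 400_000)
print(s.mean(), 1 + (n - 1) / (a + 1))        # empirical vs. (61)
print(s.var(), 1 + (n - 1) / (a + 1) ** 2)    # empirical vs. (62)
```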

4. Distribution of X̄_n, asymptotics and limiting distribution - Exponential Gamma Mixture Type II distribution
Let us introduce the notation $\bar{X}_n := \frac{X_{1,n} + \ldots + X_{n,n}}{n}$.
In order to obtain a non-trivial limiting distribution for $\bar{X}_n$, we need to compute $E(\bar{X}_n)$ and $D(\bar{X}_n)$. Using (41) (or equivalently (61)) and (62), we get trivially that

$E(\bar{X}_n) = \frac{1}{n}\left(1 + \frac{n-1}{a+1}\right),$  (65)

$D(\bar{X}_n) = \frac{1}{n}\sqrt{1 + \frac{n-1}{(a+1)^2}}.$  (67)

Formulas (65) and (67) provide the suitable normalizing terms for deriving a limiting distribution for $\bar{X}_n$. The rescaled variable $\bar{X}^*_n := \frac{\bar{X}_n - E(\bar{X}_n)}{D(\bar{X}_n)}$ has mean zero and variance one.

4.1. The distribution of X̄_n and its limiting distribution
The probability density function of the rescaled variable $\bar{X}^*_n$ can be derived from the probability density function $g_{S_{n,n}}(s)$ given in (50):

$g_{\bar{X}^*_n}(s) = n \cdot D(\bar{X}_n) \cdot g_{S_{n,n}}\big(n \cdot D(\bar{X}_n) \cdot s + n \cdot E(\bar{X}_n)\big)$ for all $s \ge -\frac{E(\bar{X}_n)}{D(\bar{X}_n)}$.
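The linear change of variables behind this formula can be checked numerically. Since $g_{S_{n,n}}$ has no simple closed form, the sketch below (all names and parameter values are illustrative) uses a tractable Gamma(n, a + 1) density as a stand-in for g: the transformed density $n D \, g(nDs + nE)$ then integrates to one and has mean zero and variance one, as it should:

```python
import math
import numpy as np

def trapz(y, x):
    """Trapezoidal rule (avoids relying on np.trapz/np.trapezoid naming)."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

# Change-of-variables check: if W has pdf g and W* = (W/n - E)/D with
# E = E(W)/n and D = sd(W)/n, then W* has pdf n*D*g(n*D*s + n*E) on
# s >= -E/D.  A Gamma(n, a+1) pdf stands in for g_{S_{n,n}}.
n, a = 20, 0.5
beta = a + 1.0

def g(x):
    x = np.maximum(x, 0.0)
    return beta**n / math.factorial(n - 1) * x ** (n - 1) * np.exp(-beta * x)

E = (n / beta) / n                 # E(W)/n
D = (math.sqrt(n) / beta) / n      # sd(W)/n

s = np.linspace(-E / D, 15.0, 400_001)
dens = n * D * g(n * D * s + n * E)
mass = trapz(dens, s)
mean = trapz(s * dens, s)
var = trapz(s**2 * dens, s) - mean**2
print(mass, mean, var)   # ~1, ~0, ~1
```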

Proposition 16.
For all s ∈ R, we have the approximation stated in (68). It is remarkable that the approximating expression in (68) defines a two-parametric family of probability density functions with asymptotically the same support as that of $g_{\bar{X}^*_n}$. We call this family the Exponential Gamma Mixture Type II distribution; to the best of our knowledge, it has not been investigated in the literature yet.

Definition 7.
Let a > 0. The Exponential Gamma Mixture (EGM) Type II distribution (in notation EGM(n, a)) is defined, for s above a lower bound depending on n and a, by its probability density function, namely by the approximating expression of Proposition 16. The pdf of the EGM(n, a) distribution is pictured in Figure 10 for some values of a and n.
Proof. One has to identify the suitable pdfs in (68) and to verify that the weights indeed sum up to one.

Remark 16.
The mixture in (70) is indeed a generalized mixture: the weights satisfy $w_j < 0$ for j = 1, . . . , n − 1, and they are unbounded as n → ∞.
Finally, we state the limiting distribution of the EGM(n, a) distribution. The core idea is the following: although the variables $X_{1,n}, \ldots, X_{n,n}$ are not independent, the dependence among them vanishes as n → ∞ (cf. Remark 6). The statement is illustrated in Figure 10.
$\bar{X}^*_n = \frac{a+1}{\sqrt{n+a(a+2)}} \cdot \bar{E}_{n,n} + \sqrt{\frac{n-1}{n+a(a+2)}} \cdot \bar{Z}_{n-1},$  (71)

where $\bar{E}_{n,n} = \frac{E_{n,n} - E(E_{n,n})}{D(E_{n,n})}$ and $\bar{Z}_{n-1}$ is the analogously standardized version of $Z^{(n-1)}$. The variable $\frac{a+1}{\sqrt{n+a(a+2)}} \cdot \bar{E}_{n,n}$ has zero mean and its variance converges to zero as n → ∞. The coefficient of $\bar{Z}_{n-1}$ in (71) converges to one, and the limiting distribution of $\bar{Z}_{n-1}$ is N(0, 1) by the central limit theorem; hence the same holds true for $\bar{X}^*_n$. Inserting this into the right-hand side of (72), the statement follows.
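The central limit statement can be illustrated by simulation, again assuming the representation $S_{n,n} = E_{n,n} + Z^{(n-1)}$ from the model-based derivation: standardizing with (61)–(62) gives exactly $\bar{X}^*_n$, and for large n about 68.3% of the mass should lie within one standard deviation, as for N(0, 1). Parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(7)

# Simulation of the CLT statement, assuming S_{n,n} = E_{n,n} + Z^{(n-1)}
# with E_{n,n} ~ Exp(1) and Z^{(n-1)} ~ Gamma(n-1, a+1).  Standardizing
# with (61)-(62) gives the rescaled mean Xbar*_n.
n, a, N = 400, 1.0, 200_000
s = rng.exponential(1.0, N) + rng.gamma(n - 1, 1.0 / (a + 1), N)
z = (s - (1 + (n - 1) / (a + 1))) / np.sqrt(1 + (n - 1) / (a + 1) ** 2)
frac = np.mean(np.abs(z) < 1.0)
print(z.mean(), z.var(), frac)   # ~0, ~1, ~0.683
```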

Outlook
We have seen that the model introduced by Freund [4], and elaborated further by Guzmics and Pflug [6], still contains many interesting and as yet unexplored details concerning extreme value questions and multivariate generalizations. In future work, we plan to obtain results for the componentwise minimum analogous to those presented for the maximum in the current work. We also expect that the analysis presented in Section 2 and Section 3 can be carried out in the general Freund(A) model, though it would be even more technical; see Appendix A. Nevertheless, this would greatly increase the applicability of the model to real-life situations, as would a detailed elaboration of a multi-step cascade. Furthermore, we would like to extend the examinations performed so far to stochastic order relations as well.