# The stopped clock model

Helena Ferreira and Marta Ferreira
From the journal Dependence Modeling

## Abstract

Extreme value theory provides specific tools for modeling and predicting extreme phenomena. In particular, risk assessment is often analyzed through measures of tail dependence and clustering of high values. Despite technological advances that allow increasingly large and efficient data collection, there are sometimes failures in the records, which cause difficulties in statistical inference, especially in the tail, where data are scarcer. In this article, we present a model with a simple and intuitive failure scheme, where each failed record is replaced by the last available record. We study its extremal behavior with regard to local dependence and clustering of high values, as well as the temporal dependence in the tail.

MSC 2010: 60G70

## 1 Introduction

Let $\{X_n\}_{n\in\mathbb{Z}}$ and $\{U_n\}_{n\in\mathbb{Z}}$ be stationary sequences of real random variables on the probability space $(\Omega,\mathcal{A},P)$, with $P(U_n\in\{0,1\})=1$. We define, for $n\geq 1$,

(1)
$$Y_n=\begin{cases}X_n, & U_n=1,\\ Y_{n-1}, & U_n=0.\end{cases}$$

The sequence $\{Y_n\}_{n\geq 1}$ corresponds to a model of failures on the records of $\{X_n\}_{n\in\mathbb{Z}}$, where each failed record is replaced by the last available record, which occurs at some random past instant if we interpret $n$ as time. Thus, if, for example, $\{U_1=1, U_2=0, U_3=1, U_4=0, U_5=0, U_6=0, U_7=1\}$ occurs, we will have $\{Y_1=X_1, Y_2=X_1, Y_3=X_3, Y_4=X_3, Y_5=X_3, Y_6=X_3, Y_7=X_7\}$. This constancy of some variables of $\{X_n\}_{n\in\mathbb{Z}}$ over random periods of time motivates the designation "stopped clock model" for the sequence $\{Y_n\}_{n\geq 1}$.
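The recursion in (1) is straightforward to state in code. The following minimal sketch (the function name `stopped_clock` is ours) reproduces the worked example above, carrying the last available record forward whenever $U_n=0$:

```python
def stopped_clock(x, u):
    """Stopped clock model (1): Y_n = X_n if U_n = 1, else Y_n = Y_{n-1}.

    `x` and `u` are equal-length sequences; we assume u[0] = 1, so that
    the first record is available and Y_1 = X_1.
    """
    y = []
    for x_n, u_n in zip(x, u):
        y.append(x_n if u_n == 1 else y[-1])
    return y

# Worked example from the text: U = (1, 0, 1, 0, 0, 0, 1)
x = ["X1", "X2", "X3", "X4", "X5", "X6", "X7"]
u = [1, 0, 1, 0, 0, 0, 1]
print(stopped_clock(x, u))  # ['X1', 'X1', 'X3', 'X3', 'X3', 'X3', 'X7']
```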

Failure models studied in the literature from the point of view of extremal behavior do not consider the stopped clock model (Hall and Hüsler [4]; Ferreira et al. [3] and references therein).

The model we will study can also be represented by $\{X_{N_n}\}_{n\geq 1}$, where $\{N_n\}_{n\geq 1}$ is a sequence of positive integer random variables representable by

$$N_n=nU_n+\sum_{n-1\geq i\geq 1}\prod_{j=0}^{i-1}(1-U_{n-j})\,U_{n-i}\,(n-i),\quad n\geq 1.$$

We can also state a recursive formulation for $\{Y_n\}_{n\geq 1}$ through

$$Y_n=X_nU_n+\sum_{n-1\geq i\geq 1}\prod_{j=0}^{i-1}(1-U_{n-j})\,U_{n-i}\,X_{n-i}+\prod_{i\geq 0}(1-U_{n-i})\,Y_{n-\kappa},\quad n\geq 1,\ \kappa\geq 1.$$

Under any of the three possible representations (failures model, random index sequence or recursive sequence), we are not aware of an extremal behavior study of $\{Y_n\}_{n\geq 1}$ in the literature.

Our departure hypotheses about the base sequence $\{X_n\}_{n\in\mathbb{Z}}$ and about the sequence $\{U_n\}_{n\in\mathbb{Z}}$ are as follows:

1. $\{X_n\}_{n\in\mathbb{Z}}$ is a stationary sequence of almost surely distinct random variables and, without loss of generality, such that $F_{X_n}(x)\equiv F(x)=\exp(-1/x)$, $x>0$, i.e., standard Fréchet distributed.

2. $\{X_n\}_{n\in\mathbb{Z}}$ and $\{U_n\}_{n\in\mathbb{Z}}$ are independent.

3. $\{U_n\}_{n\in\mathbb{Z}}$ is stationary and $p_{n_1,\ldots,n_s}(i_1,\ldots,i_s)\equiv P(U_{n_1}=i_1,\ldots,U_{n_s}=i_s)$, $i_j\in\{0,1\}$, $j=1,\ldots,s$, is such that $p_{n,n+1,\ldots,n+\kappa-1}(0,\ldots,0)=0$, for some $\kappa\geq 1$.

The trivial case $\kappa=1$ corresponds to $Y_n=X_n$, $n\geq 1$. Hypothesis (3) means that we are assuming that it is almost impossible to lose $\kappa$ or more consecutive values of $\{X_n\}_{n\in\mathbb{Z}}$. We remark that, throughout the paper, summations, products, and intersections are considered nonexistent whenever the end of the counter is less than the beginning. We will also use the notation $a\vee b=\max(a,b)$.

## Example 1.1

Consider an independent and identically distributed sequence $\{W_n\}_{n\in\mathbb{Z}}$ of real random variables on $(\Omega,\mathcal{A},P)$ and a Borel set $A$. Let $p=P(A_n)$, where $A_n=\{W_n\in A\}$, $n\in\mathbb{Z}$. The sequence of Bernoulli random variables

(2)
$$U_n=1_{\left\{\bigcap_{i=1}^{\kappa-1}\bar{A}_{n-i}\right\}}+\left(1-1_{\left\{\bigcap_{i=1}^{\kappa-1}\bar{A}_{n-i}\right\}}\right)1_{\{A_n\}},\quad n\in\mathbb{Z},$$

where $1_{\{\cdot\}}$ denotes the indicator function, defined for some fixed $\kappa\geq 2$, is such that $p_{n,n+1,\ldots,n+\kappa-1}(0,\ldots,0)=0$, i.e., it is almost sure that, after $\kappa-1$ consecutive variables equal to zero, the next variable takes the value one. We also have

$$p_n(0)=P\left(1_{\left\{\bigcap_{i=1}^{\kappa-1}\bar{A}_{n-i}\right\}}=0,\,1_{\{A_n\}}=0\right)=P\left(\bigcup_{i=1}^{\kappa-1}A_{n-i}\cap\bar{A}_n\right)=P(\bar{A}_n)-P\left(\bigcap_{i=0}^{\kappa-1}\bar{A}_{n-i}\right)=(1-p)-(1-p)^{\kappa},$$

since the independence of the random variables $W_n$ implies the independence of the events $A_n$ and, for $\kappa>2$,

$$p_{n-1,n}(0,0)=P\left(\bigcup_{i=1}^{\kappa-1}A_{n-i}\cap\bar{A}_n\cap\bigcup_{i=1}^{\kappa-1}A_{n-1-i}\cap\bar{A}_{n-1}\right)=P(\bar{A}_n\cap\bar{A}_{n-1})-P\left(\bar{A}_n\cap\bar{A}_{n-1}\cap\left(\bigcap_{i=1}^{\kappa-1}\bar{A}_{n-i}\cup\bigcap_{i=1}^{\kappa-1}\bar{A}_{n-1-i}\right)\right)$$
$$=P(\bar{A}_n)P(\bar{A}_{n-1})-P\left(\bigcap_{i=0}^{\kappa-1}\bar{A}_{n-i}\right)=(1-p)^2-(1-p)^{\kappa},$$

$$p_{n-1,n}(1,0)=p_n(0)-p_{n-1,n}(0,0)=p(1-p).$$

In Figure 1, we illustrate with a particular example based on independent standard Fréchet $\{X_n\}_{n\in\mathbb{Z}}$ and $\{W_n\}_{n\in\mathbb{Z}}$ with standard exponential marginals, $A=\left]0,1/2\right]$, and thus $p=0.3935$, considering $\kappa=3$. Therefore, $p_{n,n+1,n+2}(0,0,0)=0$ and $p_{n,n+1,n+2}(1,0,0)=p_{n,n+1}(0,0)=p(1-p)^2$.

Figure 1

Sample path of 100 observations simulated from $\{Y_n\}$ defined in (1), based on independent standard Fréchet $\{X_n\}$ and on $\{U_n\}$ given in (2), where the random variables $\{W_n\}$ are standard exponential distributed, $A=\left]0,1/2\right]$, and thus $p=0.3935$, considering $\kappa=3$.
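A longer path of the same process can be simulated as sketched below (variable names are ours; as an assumption, the boundary instants before index $\kappa-1$ use the plain indicator $1_{\{A_n\}}$):

```python
import math
import random

random.seed(1)
kappa, n = 3, 10_000
# A = ]0, 1/2] for standard exponential W_n, hence p = 1 - e^{-1/2} = 0.3935
p = 1 - math.exp(-0.5)
in_A = [random.expovariate(1.0) <= 0.5 for _ in range(n)]  # A_n = {W_n in A}

# U_n from (2): forced to 1 after kappa-1 consecutive misses of A, else 1_{A_n}
u = [1 if (i >= kappa - 1 and not any(in_A[i - kappa + 1:i])) else int(in_A[i])
     for i in range(n)]

# Stopped clock path based on iid standard Fréchet X_n = -1/log(V), V ~ U(0,1)
x = [-1.0 / math.log(random.random()) for _ in range(n)]
y = [x[0]]
for i in range(1, n):
    y.append(x[i] if u[i] == 1 else y[-1])

# By construction there is never a run of kappa = 3 consecutive zeros in U
longest_zero_run = max(len(r) for r in "".join(map(str, u)).split("1"))
print(longest_zero_run)  # at most kappa - 1 = 2
```

The structural check at the end mirrors hypothesis (3): runs of failures are almost surely shorter than $\kappa$.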

In the next section, we propose an estimator for the probabilities $p_{n,\ldots,n+s}(1,0,\ldots,0)$, $0\leq s\leq\kappa-1$. In Section 3, we analyze the existence of the extremal index of $\{Y_n\}_{n\geq 1}$, an important measure to evaluate the tendency for clusters of its high values to occur (see, e.g., Kulik and Soulier [6], and references therein). A characterization of the tail dependence is presented in Section 4. The results are illustrated with an ARMAX sequence.

For the sake of simplicity, we will omit the variation of $n$ in sequence notation whenever there is no doubt, taking into account that we will keep the designation $\{Y_n\}$ for the stopped clock model and $\{X_n\}$ and $\{U_n\}$ for the sequences that generate it.

## 2 Inference on $\{U_n\}$

Assuming that $\{U_n\}$ is not observable, as well as the values of $\{X_n\}$ that are lost, it is of interest to retrieve information about these sequences from the available sequence $\{Y_n\}$.

Since, for $n\geq 1$ and $s\geq 1$, we have

$$p_n(1)=E(1_{\{Y_n\neq Y_{n-1}\}}),\qquad p_n(0)=E(1_{\{Y_n=Y_{n-1}\}})\qquad\text{and}\qquad p_{n-s,n-s+1,\ldots,n}(1,0,\ldots,0)=E(1_{\{Y_{n-s-1}\neq Y_{n-s}=Y_{n-s+1}=\cdots=Y_n\}}),$$

we propose to estimate these probabilities by the respective empirical counterparts of a random sample $(\hat{Y}_1,\hat{Y}_2,\ldots,\hat{Y}_m)$ from $\{Y_n\}$, i.e.,

$$\hat{p}_n(1)=\frac{1}{m}\sum_{i=2}^{m}1_{\{\hat{Y}_i\neq\hat{Y}_{i-1}\}},\qquad \hat{p}_n(0)=\frac{1}{m}\sum_{i=2}^{m}1_{\{\hat{Y}_i=\hat{Y}_{i-1}\}}\qquad\text{and}\qquad \hat{p}_{n-s,n-s+1,\ldots,n}(1,0,\ldots,0)=\frac{1}{m}\sum_{i=s+2}^{m}1_{\{\hat{Y}_{i-s-1}\neq\hat{Y}_{i-s}=\hat{Y}_{i-s+1}=\cdots=\hat{Y}_i\}},$$

which are consistent by the weak law of large numbers. The value of $\kappa$ can be inferred from

$$\hat{\kappa}=\bigvee_{s\geq 1}\bigvee_{i=s+2}^{m}s\,1_{\{\hat{Y}_{i-s-1}\neq\hat{Y}_{i-s}=\hat{Y}_{i-s+1}=\cdots=\hat{Y}_{i}\}}.$$
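The empirical counterparts above can be sketched as follows (helper names are ours, and reading off $\kappa$ from the longest observed run of repeated values is our simple interpretation; under hypothesis (3) such runs have length at most $\kappa$):

```python
def p_hat_0(y):
    """Empirical p_n(0): relative frequency of repeated consecutive values."""
    m = len(y)
    return sum(y[i] == y[i - 1] for i in range(1, m)) / m

def p_hat_run(y, s):
    """Empirical p_{n-s,...,n}(1,0,...,0): a new record followed by s repeats."""
    m = len(y)
    count = 0
    for i in range(s + 1, m):
        block = y[i - s:i + 1]  # Y_{i-s}, ..., Y_i
        if y[i - s - 1] != block[0] and all(v == block[0] for v in block):
            count += 1
    return count / m

def kappa_hat(y):
    """Longest run of equal consecutive values in the observed path."""
    best = cur = 1
    for a, b in zip(y, y[1:]):
        cur = cur + 1 if a == b else 1
        best = max(best, cur)
    return best

y = [1.2, 1.2, 0.7, 0.7, 0.7, 2.5, 0.4]
print(p_hat_0(y), p_hat_run(y, 1), kappa_hat(y))  # -> 3/7, 1/7, 3
```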

In order to evaluate the finite-sample behavior of the aforementioned estimators, we simulated 1,000 independent replicates of sizes $m=100$, 1,000, and 5,000 from the model in Example 1.1. The absolute bias (abias) and root mean squared error (rmse) are presented in Table 1. The results reveal a good performance of the estimators, even for the smaller sample sizes. Parameter $\kappa$ was always estimated with no error.

Table 1

The absolute bias (abias) and rmse obtained from 1,000 simulated samples of sizes $m=100$, 1,000, and 5,000 from the model in Example 1.1

| | | abias | rmse |
|---|---|---|---|
| $\hat{p}_n(0)$ | $m=100$ | 0.0272 | 0.0335 |
| | $m=1{,}000$ | 0.0087 | 0.0108 |
| | $m=5{,}000$ | 0.0039 | 0.0048 |
| $\hat{p}_{n-1,n}(1,0)$ | $m=100$ | 0.0199 | 0.0253 |
| | $m=1{,}000$ | 0.0065 | 0.0080 |
| | $m=5{,}000$ | 0.0030 | 0.0037 |
| $\hat{p}_{n-2,n-1,n}(1,0,0)$ | $m=100$ | 0.0160 | 0.0200 |
| | $m=1{,}000$ | 0.0051 | 0.0064 |
| | $m=5{,}000$ | 0.0022 | 0.0028 |

## 3 The extremal index of $\{Y_n\}$

The sequence $\{Y_n\}$ is stationary because the sequences $\{X_n\}$ and $\{U_n\}$ are stationary and independent of each other. In addition, the common distribution of $Y_n$, $n\geq 1$, is also standard Fréchet, as is the common distribution of $X_n$, since

$$F_{Y_n}(x)=\sum_{i=1}^{\kappa-1}P(X_{n-i}\leq x,\,U_{n-i}=1,\,U_{n-i+1}=0=\cdots=U_n)+P(X_n\leq x)P(U_n=1)=F(x)\left(p_n(1)+\sum_{i=1}^{\kappa-1}p_{n-i,\ldots,n}(1,0,\ldots,0)\right)=F(x).$$

For any $\tau>0$, if we define $u_n\equiv u_n(\tau)=n/\tau$, $n\geq 1$, it turns out that $E\left(\sum_{i=1}^{n}1_{\{Y_i>u_n\}}\right)=nP(Y_1>u_n)\to_{n\to\infty}\tau$ and $nP(X_1>u_n)\to_{n\to\infty}\tau$, so we refer to these levels $u_n$ as normalized levels for $\{Y_n\}$ and $\{X_n\}$.
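As a quick numerical check of the normalization: with the standard Fréchet $F(x)=\exp(-1/x)$, the quantity $nP(X_1>u_n)=n(1-\exp(-\tau/n))$ indeed approaches $\tau$:

```python
import math

tau = 2.0
for n in (10, 100, 10_000, 1_000_000):
    u_n = n / tau                            # normalized level u_n(tau)
    print(n, n * (1 - math.exp(-1 / u_n)))   # n P(X_1 > u_n) -> tau
```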

In this section, in addition to the general assumptions about the model presented in Section 1, we start by assuming that $\{X_n\}$ and $\{U_n\}$ present dependence structures such that variables sufficiently far apart can be considered approximately independent. Concretely, we assume that $\{U_n\}$ satisfies the strong-mixing condition (Rosenblatt [9]) and $\{X_n\}$ satisfies condition $D(u_n)$ (Leadbetter [7]) for normalized levels $u_n$.

## Proposition 3.1

If $\{U_n\}$ is strong-mixing and $\{X_n\}$ satisfies condition $D(u_n)$, then $\{Y_n\}$ also satisfies condition $D(u_n)$.

## Proof

For any choice of $p+q$ integers, $1\leq i_1<\cdots<i_p<j_1<\cdots<j_q\leq n$, such that $j_1\geq i_p+l$, we have that

$$\left|P\left(\bigcap_{s=1}^{p}\{X_{i_s}\leq u_n\}\cap\bigcap_{s=1}^{q}\{X_{j_s}\leq u_n\}\right)-P\left(\bigcap_{s=1}^{p}\{X_{i_s}\leq u_n\}\right)P\left(\bigcap_{s=1}^{q}\{X_{j_s}\leq u_n\}\right)\right|\leq\alpha_{n,l},$$

with $\alpha_{n,l_n}\to 0$, as $n\to\infty$, for some sequence $l_n=o(n)$, and

$$|P(A\cap B)-P(A)P(B)|\leq g(l),$$

with $g(l)\to 0$, as $l\to\infty$, where $A$ belongs to the $\sigma$-algebra generated by $\{U_i,\ i=1,\ldots,i_p\}$ and $B$ belongs to the $\sigma$-algebra generated by $\{U_i,\ i=j_1,j_1+1,\ldots\}$. Thus, for any choice of $p+q$ integers, $1\leq i_1<\cdots<i_p<j_1<\cdots<j_q\leq n$, such that $j_1\geq i_p+l+\kappa$, we will have

$$\left|P\left(\bigcap_{s=1}^{p}\{Y_{i_s}\leq u_n\}\cap\bigcap_{s=1}^{q}\{Y_{j_s}\leq u_n\}\right)-P\left(\bigcap_{s=1}^{p}\{Y_{i_s}\leq u_n\}\right)P\left(\bigcap_{s=1}^{q}\{Y_{j_s}\leq u_n\}\right)\right|$$
$$\leq\sum_{i_s-\kappa<i'_s\leq i_s}\sum_{j_s-\kappa<j'_s\leq j_s}\left|P\left(\bigcap_{s=1}^{p}\{X_{i'_s}\leq u_n\}\cap\bigcap_{s=1}^{q}\{X_{j'_s}\leq u_n\}\right)P(A\cap B)-P\left(\bigcap_{s=1}^{p}\{X_{i'_s}\leq u_n\}\right)P\left(\bigcap_{s=1}^{q}\{X_{j'_s}\leq u_n\}\right)P(A)P(B)\right|,$$

where $A=\bigcap_{s=1}^{p}\{U_{i'_s}=1,\,U_{i'_s+1}=0=\cdots=U_{i_s}\}$, $B=\bigcap_{s=1}^{q}\{U_{j'_s}=1,\,U_{j'_s+1}=0=\cdots=U_{j_s}\}$ and $j'_1>j_1-\kappa\geq i_p+l$.

Therefore, the aforementioned summation is bounded above by

$$\sum_{i_s-\kappa<i'_s\leq i_s}\sum_{j_s-\kappa<j'_s\leq j_s}\left(\left|P\left(\bigcap_{s=1}^{p}\{X_{i'_s}\leq u_n\}\cap\bigcap_{s=1}^{q}\{X_{j'_s}\leq u_n\}\right)-P\left(\bigcap_{s=1}^{p}\{X_{i'_s}\leq u_n\}\right)P\left(\bigcap_{s=1}^{q}\{X_{j'_s}\leq u_n\}\right)\right|+|P(A\cap B)-P(A)P(B)|\right)$$
$$\leq\sum_{i_s-\kappa<i'_s\leq i_s}\sum_{j_s-\kappa<j'_s\leq j_s}(\alpha_{n,l}+g(l)),$$

which allows us to conclude that $D(u_n)$ holds for $\{Y_n\}$ with $l_n^{(Y)}=l_n+\kappa$.□

The tendency for clustering of values of $\{Y_n\}$ above $u_n$ depends on the same tendency within $\{X_n\}$ and on the propensity of $\{U_n\}$ for consecutive null values. The clustering tendency can be assessed through the extremal index (Leadbetter [7]). More precisely, $\{X_n\}$ is said to have extremal index $\theta_X\in(0,1]$ if

(3)
$$\lim_{n\to\infty}P\left(\bigvee_{i=1}^{n}X_i\leq n/\tau\right)=e^{-\theta_X\tau}.$$

If $D(u_n)$ holds for $\{X_n\}$, we have

$$\lim_{n\to\infty}P\left(\bigvee_{i=1}^{n}X_i\leq u_n\right)=\lim_{n\to\infty}P^{k_n}\left(\bigvee_{i=1}^{[n/k_n]}X_i\leq u_n\right)$$

for any sequence of integers $\{k_n\}$ such that

(4)
$$k_n\to\infty,\qquad k_nl_n/n\to 0\qquad\text{and}\qquad k_n\alpha_{n,l_n}\to 0,\quad\text{as }n\to\infty.$$

We can therefore say that

$$\theta_X\tau=\lim_{n\to\infty}k_nP\left(\bigvee_{i=1}^{[n/k_n]}X_i>u_n\right).$$

Now we compare the local behavior of the sequences $\{X_n\}$ and $\{Y_n\}$, i.e., of $X_i$ and $Y_i$ for $i\in\{(j-1)[n/k_n]+1,\ldots,j[n/k_n]\}$, $j=1,\ldots,k_n$, with regard to the oscillations of their values in relation to $u_n$. To this end, we will use the local dependence conditions $D^{(s)}(u_n)$. We say that $\{X_n\}$ satisfies $D^{(s)}(u_n)$, $s\geq 2$, whenever

$$\lim_{n\to\infty}n\sum_{j=s}^{[n/k_n]}P(X_1>u_n,\,X_j\leq u_n<X_{j+1})=0,$$

for some sequence of integers $\{k_n\}$ satisfying (4). Condition $D^{(1)}(u_n)$ translates into

$$\lim_{n\to\infty}n\sum_{j=2}^{[n/k_n]}P(X_1>u_n,\,X_j>u_n)=0.$$

Observe that if $D^{(s)}(u_n)$ holds for some $s\geq 1$, then $D^{(m)}(u_n)$ also holds for $m>s$. Condition $D^{(1)}(u_n)$ is known as $D'(u_n)$ (Leadbetter et al. [8]) and relates to a unit extremal index, i.e., absence of clustering of extreme values. In particular, this is the case of independent variables. Although $\{X_n\}$ may satisfy $D'(u_n)$, this condition is not generally valid for $\{Y_n\}$. Observe that

$$n\sum_{j=2}^{[n/k_n]}P(Y_1>u_n,\,Y_j>u_n)=\sum_{i=2-\kappa}^{1}n\sum_{j=2}^{[n/k_n]}\sum_{j'=i\vee(j-\kappa+1)}^{j}P(X_i>u_n,\,X_{j'}>u_n)\,p_{i,\ldots,1,j',j'+1,\ldots,j}(1,0,\ldots,0,1,0,\ldots,0).$$

For $i=1$ and $j=\kappa$, we have $j'=1$ and the corresponding term becomes $nP(X_1>u_n)\to\tau>0$, as $n\to\infty$, and this is the reason why, in general, $\{Y_n\}$ does not satisfy $D'(u_n)$ even if $\{X_n\}$ satisfies it.

## Proposition 3.2

The following statements hold:

1. If $\{Y_n\}$ satisfies $D^{(s)}(u_n)$, $s\geq 2$, then $\{X_n\}$ satisfies $D^{(s)}(u_n)$.

2. If $\{X_n\}$ satisfies $D^{(s)}(u_n)$, $s\geq 2$, then $\{Y_n\}$ satisfies $D^{(s+\kappa-1)}(u_n)$.

3. If $\{X_n\}$ satisfies $D'(u_n)$, then $\{Y_n\}$ satisfies $D^{(2)}(u_n)$.

## Proof

Consider $r_n=[n/k_n]$. We have that

(5)
$$n\sum_{j=s}^{r_n}P(Y_1>u_n,\,Y_j\leq u_n<Y_{j+1})=\sum_{i=2-\kappa}^{1}n\sum_{j=s}^{r_n}P(X_i>u_n,\,Y_j\leq u_n<X_{j+1},\,U_i=1,\,U_{i+1}=0=\cdots=U_1,\,U_{j+1}=1)$$
$$=\sum_{i=2-\kappa}^{1}n\sum_{j=s}^{r_n}\sum_{j'=(i+1)\vee(j-\kappa+1)}^{j}P(X_i>u_n,\,X_{j'}\leq u_n<X_{j+1})\,p_{i,i+1,\ldots,1,j',j'+1,\ldots,j,j+1}(1,0,\ldots,0,1,0,\ldots,0,1).$$

Since $\{Y_n\}$ satisfies $D^{(s)}(u_n)$, with $s\geq 2$, and thus the first summation in (5) converges to zero, as $n\to\infty$, then all the terms in the last summations also converge to zero. In particular, when $i=1$ and $j'=j$, we have $n\sum_{j=s}^{r_n}P(X_1>u_n,\,X_j\leq u_n<X_{j+1})\to 0$, as $n\to\infty$, which proves (i).

Conversely, writing the first summation in (5) with $j$ starting at $s+\kappa-1$, we have

(6)
$$n\sum_{j=s+\kappa-1}^{r_n}P(Y_1>u_n,\,Y_j\leq u_n<Y_{j+1})=\sum_{i=2-\kappa}^{1}n\sum_{j=s+\kappa-1}^{r_n}\sum_{j'=j-\kappa+1}^{j}P(X_i>u_n,\,X_{j'}\leq u_n<X_{j+1})\,p_{i,i+1,\ldots,1,j',j'+1,\ldots,j,j+1}(1,0,\ldots,0,1,0,\ldots,0,1)$$
$$=\sum_{i=2-\kappa}^{1}n\sum_{j=s+\kappa-1}^{r_n}\sum_{j'=j-\kappa+1}^{j}\sum_{i'=j'}^{j}P(X_i>u_n,\,X_{j'}\leq u_n,\ldots,X_{i'}\leq u_n,\,X_{j+1}>u_n)\,p_{i,i+1,\ldots,1,j',j'+1,\ldots,j,j+1}(1,0,\ldots,0,1,0,\ldots,0,1),$$

where the least of the distances between $i$ and $i'$ corresponds to the case $i=1$ and $i'=j'=s$. Therefore, if $\{X_n\}$ satisfies $D^{(s)}(u_n)$ for some $s\geq 2$, then each term of (6) converges to zero, as $n\to\infty$, and thus $\{Y_n\}$ satisfies $D^{(s+\kappa-1)}(u_n)$, proving (ii).

As for (iii), observe that

(7)
$$n\sum_{j=2}^{r_n}P(Y_1>u_n,\,Y_j\leq u_n<Y_{j+1})=\sum_{i=2-\kappa}^{1}n\sum_{j=2}^{r_n}P(X_i>u_n,\,Y_j\leq u_n<X_{j+1},\,U_i=1,\,U_{i+1}=0=\cdots=U_1,\,U_{j+1}=1)$$
$$\leq\sum_{i=2-\kappa}^{1}n\sum_{j=2}^{r_n}P(X_i>u_n,\,X_{j+1}>u_n)=\sum_{i=2-\kappa}^{1}n\sum_{j=2}^{r_n}P(X_1>u_n,\,X_{j-i+2}>u_n)\leq\kappa\,n\sum_{j=2}^{r_n}P(X_1>u_n,\,X_j>u_n).$$

If $\{X_n\}$ satisfies $D'(u_n)$, then (7) converges to zero, as $n\to\infty$, and $D^{(2)}(u_n)$ holds for $\{Y_n\}$.□

Under conditions $D(u_n)$ and $D^{(s)}(u_n)$, with $s\geq 2$, we can also compute the extremal index $\theta_X$ defined in (3) by (Chernick et al. [1], Corollary 1.3)

(8)
$$\theta_X=\lim_{n\to\infty}P(X_2\leq u_n,\ldots,X_s\leq u_n\mid X_1>u_n).$$

If $\{X_n\}$ and $\{Y_n\}$ have extremal indices $\theta_X$ and $\theta_Y$, respectively, then $\theta_Y\leq\theta_X$, since $P\left(\bigvee_{i=1}^{n}X_i\leq n/\tau\right)\leq P\left(\bigvee_{i=1}^{n}Y_i\leq n/\tau\right)$. This corresponds to what is intuitively expected, if we remember that the possible repetition of variables $X_n$ leads to larger clusters of values above $u_n$. In the following result, we establish a relationship between $\theta_X$ and $\theta_Y$.

## Proposition 3.3

Suppose that $\{U_n\}$ is strong-mixing and $\{X_n\}$ satisfies conditions $D(u_n)$ and $D^{(s)}(u_n)$, $s\geq 2$, for normalized levels $u_n\equiv u_n(\tau)$. If $\{X_n\}$ has extremal index $\theta_X$, then $\{Y_n\}$ has extremal index $\theta_Y$ given by

$$\theta_Y=\theta_X\sum_{j=0}^{\kappa-1}p_{1,2,\ldots,j+1,j+2}(1,0,\ldots,0,1)\,\beta_j,$$

where

$$\beta_j=\lim_{n\to\infty}P(X_{s+j}>u_n\mid X_1\leq u_n,\ldots,X_{s-1}\leq u_n<X_s).$$

## Proof

By Proposition 3.1, $\{Y_n\}$ also satisfies condition $D(u_n)$. Thus, we have

$$\lim_{n\to\infty}P\left(\bigvee_{i=1}^{n}Y_i\leq u_n\right)=\exp\left(-\lim_{n\to\infty}k_nP\left(\bigvee_{i=1}^{[n/k_n]}Y_i>u_n\right)\right)$$

and

(9)
$$\lim_{n\to\infty}k_nP\left(\bigvee_{i=1}^{[n/k_n]}Y_i>u_n\right)=\lim_{n\to\infty}k_nP\left(Y_1\leq u_n,\,\bigcup_{i=1}^{[n/k_n]}\{Y_i>u_n\}\right)=\lim_{n\to\infty}k_nP\left(\bigcup_{i=1}^{[n/k_n]}\{Y_i\leq u_n<Y_{i+1}\}\right)$$
$$=\lim_{n\to\infty}k_nP\left(\bigcup_{i=1}^{[n/k_n]}\{Y_i\leq u_n<X_{i+1},\,U_{i+1}=1\}\right)=\lim_{n\to\infty}k_nP\left(\bigcup_{i=1}^{[n/k_n]}\bigcup_{j=0}^{\kappa-1}\{X_{i-j}\leq u_n<X_{i+1},\,U_{i-j}=1,\,U_{i-j+1}=0=\cdots=U_i,\,U_{i+1}=1\}\right)$$
$$=\lim_{n\to\infty}k_nP\left(\bigcup_{i=1}^{[n/k_n]}\bigcup_{j=0}^{\kappa-1}\{X_i\leq u_n<X_{i+j+1},\,U_i=1,\,U_{i+1}=0=\cdots=U_{i+j},\,U_{i+j+1}=1\}\right)$$
$$=\lim_{n\to\infty}k_n\sum_{i=1}^{[n/k_n]}\sum_{j=0}^{\kappa-1}P(X_1\leq u_n,\ldots,X_i\leq u_n<X_{i+1},\,X_{i+j+1}>u_n)\,p_{i,i+1,\ldots,i+j,i+j+1}(1,0,\ldots,0,1)$$
$$=\lim_{n\to\infty}k_n\sum_{i=1}^{[n/k_n]}\sum_{j=0}^{\kappa-1}P(X_{i-s+2}\leq u_n,\ldots,X_i\leq u_n<X_{i+1},\,X_{i+j+1}>u_n)\,p_{1,2,\ldots,j+1,j+2}(1,0,\ldots,0,1),$$

since $\{X_n\}$ satisfies condition $D^{(s)}(u_n)$ for some $s\geq 2$. The stationarity of $\{X_n\}$ leads to

$$\lim_{n\to\infty}k_n\sum_{i=1}^{[n/k_n]}\sum_{j=0}^{\kappa-1}P(X_{i-s+2}\leq u_n,\ldots,X_i\leq u_n<X_{i+1},\,X_{i+j+1}>u_n)\,p_{1,2,\ldots,j+1,j+2}(1,0,\ldots,0,1)$$
$$=\lim_{n\to\infty}k_n\sum_{i=1}^{[n/k_n]}\sum_{j=0}^{\kappa-1}P(X_1\leq u_n,\ldots,X_{s-1}\leq u_n<X_s,\,X_{s+j}>u_n)\,p_{1,2,\ldots,j+1,j+2}(1,0,\ldots,0,1)$$
$$=\lim_{n\to\infty}\sum_{j=0}^{\kappa-1}nP(X_1\leq u_n,\ldots,X_{s-1}\leq u_n<X_s,\,X_{s+j}>u_n)\,p_{1,2,\ldots,j+1,j+2}(1,0,\ldots,0,1)$$
$$=\lim_{n\to\infty}\sum_{j=0}^{\kappa-1}nP(X_1\leq u_n,\ldots,X_{s-1}\leq u_n<X_s)\,P(X_{s+j}>u_n\mid X_1\leq u_n,\ldots,X_{s-1}\leq u_n<X_s)\,p_{1,2,\ldots,j+1,j+2}(1,0,\ldots,0,1)=\tau\theta_X\sum_{j=0}^{\kappa-1}p_{1,2,\ldots,j+1,j+2}(1,0,\ldots,0,1)\,\beta_j,$$

where the last step follows from (8).□

Observe that $\sum_{j=0}^{\kappa-1}p_{1,2,\ldots,j+1,j+2}(1,0,\ldots,0,1)=p_n(1)=P(U_n=1)$ and thus $\theta_Y\leq\theta_Xp_n(1)\leq\theta_X$, as expected.

## Proposition 3.4

Suppose that $\{U_n\}$ is strong-mixing and $\{X_n\}$ satisfies conditions $D(u_n)$ and $D'(u_n)$, for normalized levels $u_n\equiv u_n(\tau)$. Then, $\{Y_n\}$ has extremal index $\theta_Y$ given by $\theta_Y=p_{1,2}(1,1)$.

## Proof

By condition $D'(u_n)$, the only term to consider in (9) corresponds to $j=0$, and we obtain

$$\lim_{n\to\infty}k_nP\left(\bigvee_{i=1}^{[n/k_n]}Y_i>u_n\right)=\lim_{n\to\infty}k_n\sum_{i=1}^{[n/k_n]}P(X_1\leq u_n,\ldots,X_{s-1}\leq u_n<X_s)\,p_{1,2}(1,1)=\lim_{n\to\infty}nP(X_s>u_n)\,p_{1,2}(1,1)=\tau\,p_{1,2}(1,1).$$

Observe that we can obtain the aforementioned result by applying Proposition 3.2 (iii) and calculating directly $\tau\theta_Y=\lim_{n\to\infty}nP(Y_1\leq u_n<Y_2)$. More precisely, we have that $\{Y_n\}$ satisfies $D^{(2)}(u_n)$, and by applying (8), we obtain

$$\tau\theta_Y=\lim_{n\to\infty}nP(Y_1\leq u_n<Y_2)=\lim_{n\to\infty}nP(Y_1\leq u_n<X_2,\,U_2=1)=\lim_{n\to\infty}nP\left(\bigcup_{j=0}^{\kappa-1}\{X_{1-j}\leq u_n<X_2,\,U_{1-j}=1,\,U_{1-j+1}=0=\cdots=U_1,\,U_2=1\}\right)$$
$$=\lim_{n\to\infty}n\sum_{j=0}^{\kappa-1}P(X_{2-\kappa}\leq u_n,\ldots,X_{1-j}\leq u_n<X_{2-j},\,X_2>u_n)\,p_{1-j,1-j+1,\ldots,1,2}(1,0,\ldots,0,1)=\lim_{n\to\infty}nP(X_1\leq u_n<X_2)\,p_{1,2}(1,1)=\lim_{n\to\infty}nP(X_2>u_n)\,p_{1,2}(1,1)=\tau\,p_{1,2}(1,1).$$

The same result can also be seen as a particular case of Proposition 3.3, where, if we take $s=1$, we have $\beta_j=0$, for $j\neq 0$, and we obtain $\theta_Y=\theta_X\beta_0\,p_{1,2}(1,1)=p_{1,2}(1,1)$, since $\beta_0=1$ and, under $D'(u_n)$, it comes $\theta_X=1$.

## Example 3.1

Consider $\{Y_n\}$ such that $\{X_n\}$ is an ARMAX sequence, i.e., $X_n=\phi X_{n-1}\vee(1-\phi)Z_n$, $n\geq 1$, where $\{Z_n\}$ is an independent sequence of random variables with standard Fréchet marginal distribution, and $\{X_n\}$ and $\{Z_n\}$ are independent. It can be shown that $\{X_n\}$ also has a standard Fréchet marginal distribution, satisfies condition $D^{(2)}(u_n)$, and has extremal index $\theta_X=1-\phi$ (see, e.g., Ferreira and Ferreira [2] and references therein).

Observe that, for normalized levels $u_n\equiv n/\tau$, $\tau>0$, we have

$$\beta_1=\lim_{n\to\infty}P(X_3>u_n\mid X_1\leq u_n<X_2)=\lim_{n\to\infty}\frac{P(X_1\leq u_n)-P(X_1\leq u_n,X_2\leq u_n)-P(X_1\leq u_n,X_3\leq u_n)+P(X_1\leq u_n,X_2\leq u_n,X_3\leq u_n)}{P(X_1\leq u_n)-P(X_1\leq u_n,X_2\leq u_n)}$$
$$=\lim_{n\to\infty}\frac{\left(1-\frac{\tau}{n}\right)-\left(1-\frac{\tau}{n}(2-\phi)\right)-\left(1-\frac{\tau}{n}(2-\phi^2)\right)+\left(1-\frac{\tau}{n}(3-2\phi)\right)}{\left(1-\frac{\tau}{n}\right)-\left(1-\frac{\tau}{n}(2-\phi)\right)}=\phi.$$

Analogous calculations lead to $\beta_2=\phi^2$. Considering $\kappa=3$, we have $\theta_Y=(1-\phi)\left(p_{1,2}(1,1)+\phi\,p_{1,2,3}(1,0,1)+\phi^2\,p_{1,2,3,4}(1,0,0,1)\right)$.
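Under the additional assumption that $\{U_n\}$ is the sequence of Example 1.1 (so that $\kappa=3$), the pattern probabilities in this expression can be approximated by Monte Carlo and plugged into the formula; a sketch (names are ours):

```python
import math
import random

random.seed(7)
phi, kappa, n = 0.5, 3, 200_000
# simulate U_n as in Example 1.1: W_n ~ Exp(1), A = ]0, 1/2]
in_A = [random.expovariate(1.0) <= 0.5 for _ in range(n)]
u = [1 if (i >= kappa - 1 and not any(in_A[i - kappa + 1:i])) else int(in_A[i])
     for i in range(n)]

def pattern_prob(pattern):
    """Monte Carlo estimate of p_{1,...,k}(pattern) for the simulated U."""
    k = len(pattern)
    hits = sum(all(u[i + j] == pattern[j] for j in range(k))
               for i in range(n - k))
    return hits / (n - k)

p11 = pattern_prob((1, 1))
p101 = pattern_prob((1, 0, 1))
p1001 = pattern_prob((1, 0, 0, 1))
theta_Y = (1 - phi) * (p11 + phi * p101 + phi ** 2 * p1001)
print(theta_Y)  # strictly between 0 and theta_X = 1 - phi
```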

The observed sequence is $\{Y_n\}$, and therefore, results that allow retrieving information about the extremal behavior of the initial sequence $\{X_n\}$, subject to the failures determined by $\{U_n\}$, may be of interest.

If we assume that $\{Y_n\}$ satisfies $D^{(s)}(u_n)$, then $\{X_n\}$ also satisfies $D^{(s)}(u_n)$ by Proposition 3.2 (i), thus coming

$$\tau\theta_X=\lim_{n\to\infty}nP(X_1\leq u_n,\ldots,X_{s-1}\leq u_n<X_s)=\lim_{n\to\infty}nP(Y_1\leq u_n,\ldots,Y_{s-1}\leq u_n<Y_s\mid U_1=\cdots=U_s=1)$$
$$=\lim_{n\to\infty}nP(Y_1\leq u_n,\ldots,Y_{s-1}\leq u_n<Y_s\mid Y_0\neq Y_1\neq\cdots\neq Y_s).$$

Thereby, we can write

$$\theta_X=\lim_{n\to\infty}\frac{P(Y_1\leq u_n,\ldots,Y_{s-1}\leq u_n<Y_s\mid Y_0\neq Y_1\neq\cdots\neq Y_s)}{P(Y_1>u_n)}.$$

## 4 Tail dependence

Now we will analyze the effect of this failure mechanism on the dependence between two variables, $Y_n$ and $Y_{n+m}$, $m\geq 1$. More precisely, we are going to evaluate the lag-$m$ tail dependence coefficient

$$\lambda(Y_{n+m}\mid Y_n)=\lim_{x\to\infty}P(Y_{n+m}>x\mid Y_n>x),$$

which incorporates the tail dependence between $X_n$ and $X_{n+j}$, with $j$ regulated by the maximum number of failures $\kappa-1$ and by the relation between $m$ and $\kappa$. In particular, independent variables present null tail dependence coefficients. If $m=1$, we obtain the tail dependence coefficient in Joe [5]. For results related to lag-$m$ tail dependence in the literature, see, e.g., Zhang [10,11].

## Proposition 4.1

The sequence $\{Y_n\}$ has lag-$m$ tail dependence coefficient, with $m\geq 1$,

(10)
$$\lambda(Y_{n+m}\mid Y_n)=p_{1,\ldots,m}(0,\ldots,0)\,1_{\{m\leq\kappa-1\}}+\sum_{i=1\vee(m-\kappa+1)}^{m}\sum_{i'=0}^{\kappa-1}\lambda(X_{n+i+i'}\mid X_n)\,p_{1,2,\ldots,i'+1,i'+1+i,i'+2+i,\ldots,i'+1+m}(1,0,\ldots,0,1,0,\ldots,0),$$

provided all the coefficients $\lambda(X_{n+i+i'}\mid X_n)$ exist.

## Proof

Observe that

$$P(Y_n>x,\,Y_{n+m}>x)=P(Y_n>x,\,U_{n+1}=0=\cdots=U_{n+m})\,1_{\{m\leq\kappa-1\}}+\sum_{i=1\vee(m-\kappa+1)}^{m}P(Y_n>x,\,X_{n+i}>x,\,U_{n+i}=1,\,U_{n+i+1}=0=\cdots=U_{n+m})$$
$$=\sum_{i'=0}^{\kappa-1-m}P(X_{n-i'}>x)\,p_{n-i',n-i'+1,\ldots,n+m}(1,0,\ldots,0)\,1_{\{m\leq\kappa-1\}}+\sum_{i=1\vee(m-\kappa+1)}^{m}\sum_{i'=0}^{\kappa-1}P(X_{n-i'}>x,\,X_{n+i}>x)\,p_{n-i',n-i'+1,\ldots,n,n+i,n+i+1,\ldots,n+m}(1,0,\ldots,0,1,0,\ldots,0)$$

and $\sum_{i'=0}^{\kappa-1-m}p_{1,2,\ldots,m+i'+1}(1,0,\ldots,0)=p_{1,\ldots,m}(0,\ldots,0)$.□
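Proposition 4.1 can be probed by simulation. For i.i.d. $\{X_n\}$ all coefficients $\lambda(X_{n+i+i'}\mid X_n)$ vanish, so for $m\leq\kappa-1$ the formula reduces to $\lambda(Y_{n+m}\mid Y_n)=p_{1,\ldots,m}(0,\ldots,0)$; with the $\{U_n\}$ of Example 1.1 and $m=1$ this is $p_1(0)=(1-p)-(1-p)^{\kappa}$. A sketch of the empirical check at a finite high threshold (names are ours; the finite-threshold estimate carries a small positive bias relative to the limit):

```python
import math
import random

random.seed(3)
kappa, m, n = 3, 1, 200_000
p = 1 - math.exp(-0.5)                     # Example 1.1 with A = ]0, 1/2]
in_A = [random.expovariate(1.0) <= 0.5 for _ in range(n)]
u = [1 if (i >= kappa - 1 and not any(in_A[i - kappa + 1:i])) else int(in_A[i])
     for i in range(n)]
x = [-1.0 / math.log(random.random()) for _ in range(n)]  # iid standard Fréchet
y = [x[0]]
for i in range(1, n):
    y.append(x[i] if u[i] == 1 else y[-1])

thr = sorted(y)[int(0.99 * n)]             # high (99%) empirical threshold
joint = sum(y[i] > thr and y[i + m] > thr for i in range(n - m))
marg = sum(v > thr for v in y[:n - m])
lam_hat = joint / marg                     # empirical P(Y_{n+m} > x | Y_n > x)
lam_formula = (1 - p) - (1 - p) ** kappa   # p_1(0) for m = 1
print(lam_hat, lam_formula)
```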

Taking