CC BY 4.0 license. Open Access. Published by De Gruyter, September 9, 2020.

Asymptotic normality and mean consistency of LS estimators in the errors-in-variables model with dependent errors

  • Yu Zhang, Xinsheng Liu, Yuncai Yu and Hongchang Hu
From the journal Open Mathematics

Abstract

In this article, an errors-in-variables regression model in which the errors are negatively superadditive dependent (NSD) random variables is studied. First, the Marcinkiewicz-type strong law of large numbers for NSD random variables is established. Then, we use this strong law of large numbers to investigate the asymptotic normality of least square (LS) estimators for the unknown parameters. In addition, the mean consistency of the LS estimators is also obtained. Some results for independent random variables and negatively associated random variables are extended and improved to the NSD setting. Finally, two simulations are presented to verify the asymptotic normality and mean consistency of the LS estimators in the model.

MSC 2010: 62F12; 60F05; 62E20; 62J05

1 Introduction

To correct for the effects of sampling errors, Deaton [1] proposed the errors-in-variables (EV) regression model, which is somewhat more practical than the ordinary regression model and has therefore attracted much attention.

In this article, we consider the following linear regression model:

(1.1) $y_i=\beta_0+\beta_1x_i+e_i$, $\quad U_i=x_i+\omega_i$, $\quad 1\le i\le n$,

where $\beta_0,\beta_1,x_1,x_2,\dots$ are unknown parameters or constants, $(e_1,\omega_1),(e_2,\omega_2),\dots$ are random vectors, and $U_i,y_i$, $1\le i\le n$, are observable variables. From (1.1), we have

(1.2) $y_i=\beta_0+\beta_1U_i+e_i-\beta_1\omega_i$, $\quad 1\le i\le n$.

Regarding (1.2) formally as an ordinary regression model of $y_i$ on $U_i$ with errors $e_i-\beta_1\omega_i$, we obtain the least square (LS) estimators of $\beta_1$ and $\beta_0$ as

(1.3) $\tilde{\beta}_{1n}=\dfrac{\sum_{i=1}^{n}(U_i-\bar{U}_n)(y_i-\bar{y}_n)}{\sum_{i=1}^{n}(U_i-\bar{U}_n)^2}$, $\qquad \tilde{\beta}_{0n}=\bar{y}_n-\tilde{\beta}_{1n}\bar{U}_n$,

where $\bar{U}_n=\frac1n\sum_{i=1}^nU_i$, $\bar{y}_n=\frac1n\sum_{i=1}^ny_i$, $\bar{\omega}_n=\frac1n\sum_{i=1}^n\omega_i$, and $\bar{x}_n=\frac1n\sum_{i=1}^nx_i$.
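As a quick illustration of (1.3) (our own sketch, not part of the original article; the values $\beta_0=5$, $\beta_1=3$, the design $x_i=i$, and the Gaussian errors are arbitrary choices), the estimators can be computed directly from the observed pairs $(U_i,y_i)$:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
beta0, beta1 = 5.0, 3.0          # illustrative true parameters
x = np.arange(1, n + 1, dtype=float)
e = rng.normal(0, 1, n)          # equation errors e_i
w = rng.normal(0, 1, n)          # measurement errors omega_i
y = beta0 + beta1 * x + e
U = x + w                        # we observe U_i, not x_i

# LS estimators from (1.3): regress y on the observed U
Ubar, ybar = U.mean(), y.mean()
beta1_hat = np.sum((U - Ubar) * (y - ybar)) / np.sum((U - Ubar) ** 2)
beta0_hat = ybar - beta1_hat * Ubar

print(beta1_hat, beta0_hat)
```

Note that the naive regression of $y$ on $U$ is attenuated by the factor $\sum(x_i-\bar{x}_n)^2/\sum(U_i-\bar{U}_n)^2$; with a spreading design such as $x_i=i$ this attenuation vanishes as $n$ grows, which is exactly what condition ($C_2$) below encodes.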

In the last few decades, many authors have focused on EV models. In the case of independent random errors, the consistency of LS estimators in the linear EV model was established by Liu and Chen [2] and Chen et al. [3]; Miao et al. [4] obtained the central limit theorem for the LS estimators in the simple linear EV regression model; Xu and Li [5] studied the consistency of LS estimators in the linear EV regression model with replicate observations; Miao et al. [6] established some limit behaviors of estimators in the simple linear EV regression model; Miao and Yang [7] obtained the loglog law for the LS estimators in the EV regression model; Miao et al. [8] investigated the consistency and asymptotic normality of LS estimators in the simple linear EV regression model, and so on. In the case of dependent random errors, Fazekas and Kukush [9] obtained the consistency of the regression parameter of the nonlinear functional EV models under mixing conditions; Fan et al. [10] established the asymptotic properties of the LS estimators of the unknown parameters in the simple linear EV regression model with stationary mixing errors; Miao et al. [11] derived the asymptotic normality and strong consistency of the estimators in the simple linear EV model with negatively associated (NA) errors; Miao et al. [12] obtained the weak consistency and strong consistency of LS estimators for the unknown parameters with martingale difference errors; Shen [13] studied some asymptotic properties of estimators in the EV model with martingale difference errors, and so forth. In this article, we consider model (1.1) under the assumption that the random errors are negatively superadditive dependent (NSD) random variables, a concept proposed by Hu [14] as follows.

Definition 1.1

[15] A function $\phi:\mathbb{R}^n\to\mathbb{R}$ is called superadditive if

$\phi(x\vee y)+\phi(x\wedge y)\ge\phi(x)+\phi(y)$

for all $x,y\in\mathbb{R}^n$, where $\vee$ denotes the component-wise maximum and $\wedge$ the component-wise minimum.

Definition 1.2

[14] A random vector $X=(X_1,X_2,\dots,X_n)$ is said to be NSD if

(1.4) $E\phi(X_1,X_2,\dots,X_n)\le E\phi(X_1^{*},X_2^{*},\dots,X_n^{*})$,

where $X_1^{*},X_2^{*},\dots,X_n^{*}$ are independent such that $X_i^{*}$ and $X_i$ have the same distribution for each $i$, and $\phi$ is a superadditive function such that the expectations in (1.4) exist.

A sequence $\{X_n,n\ge1\}$ of random variables is said to be NSD if, for all $n\ge1$, $(X_1,X_2,\dots,X_n)$ is NSD.
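To make Definition 1.1 concrete, here is a small numerical check (our illustration, not from the article) that $\phi(x_1,x_2)=x_1x_2$ is superadditive on $\mathbb{R}^2$; for this $\phi$, (1.4) reduces to $EX_1X_2\le EX_1^{*}X_2^{*}=EX_1\,EX_2$, so NSD pairs are negatively correlated.

```python
import random

random.seed(1)

def phi(v):
    # Candidate superadditive function on R^2: the coordinate product
    return v[0] * v[1]

def superadditive_at(x, y):
    hi = tuple(max(a, b) for a, b in zip(x, y))   # component-wise maximum
    lo = tuple(min(a, b) for a, b in zip(x, y))   # component-wise minimum
    return phi(hi) + phi(lo) >= phi(x) + phi(y) - 1e-12

ok = all(
    superadditive_at(
        (random.uniform(-5, 5), random.uniform(-5, 5)),
        (random.uniform(-5, 5), random.uniform(-5, 5)),
    )
    for _ in range(100_000)
)
print(ok)  # True on every sampled pair
```

The inequality holds exactly here: for $x_1\ge y_1$ the difference equals $(x_1-y_1)(y_2-x_2)^{+}\ge0$, so the random search cannot find a counterexample.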

Since then, a series of useful results on NSD sequences of random variables has been established. Hu [14] and Christofides and Vaggelatou [16] showed that the family of NSD sequences contains NA (in particular, independent) sequences and further sequences of random variables that do not deviate much from being NA. Hu [14] gave an example illustrating that NSD does not imply NA (see ref. [17]). Moreover, Hu [14] derived some basic properties and three structural theorems for NSD. Eghbal et al. [18] provided two maximal inequalities and a strong law of large numbers for quadratic forms of NSD random variables. Some Rosenthal-type inequalities for maximum partial sums of NSD sequences were established by Wang et al. [19]. The complete convergence and complete moment convergence for arrays of rowwise NSD random variables were obtained by Meng et al. [20]. Amini et al. [21] investigated the complete convergence of moving averages of NSD random variables. Yu et al. [22] studied the M-test in linear models with NSD errors. Zeng and Liu [23] established the asymptotic normality of a difference-based estimator in a partially linear model with NSD errors. For more details about NSD random variables, one can refer to [24,25,26,27,28,29], and so on.

However, we have not found any study of the asymptotic normality and mean consistency of LS estimators in the EV regression model with NSD random errors in the literature. In this article, we mainly investigate the asymptotic properties of the estimators of the unknown parameters in the simple linear EV regression model, which was proposed by Deaton [1] to correct for the effects of sampling errors and is somewhat more practical than the ordinary regression model. Many authors have obtained asymptotic properties of the estimators of the unknown parameters in the EV regression model with independent random errors, an assumption that is often unrealistic in practice. In this article, we assume that the random errors are NSD, a class which includes independent and NA random variables as special cases. Studying model (1.1) with NSD random errors is therefore of considerable significance. The main novelties of this article can be outlined as follows. First, the Marcinkiewicz-type strong law of large numbers for NSD random variables is established, which extends and improves the classical Marcinkiewicz-type strong law of large numbers for independent and identically distributed random variables to NSD random variables with non-identical distributions. Second, we use this strong law of large numbers to investigate the asymptotic normality of LS estimators for the unknown parameters. In addition, the mean consistency of the LS estimators is also obtained. These results include the corresponding ones for independent random errors and some dependent random errors as special cases. Finally, two simulation studies are carried out to verify the validity of the results that we have established.

Definition 1.3

[29] A sequence $\{X_n,n\ge1\}$ of random variables is said to be stochastically dominated by a random variable $X$ if there exists a positive constant $C$ such that

$P(|X_n|>x)\le CP(|X|>x)$

for all $x\ge0$ and $n\ge1$.

The remainder of this article is organized as follows. Under some mild conditions, the asymptotic normality and quadratic-mean consistency of LS estimators for the unknown parameters in model (1.1) with NSD random errors are established in Section 2. We give some preliminary lemmas in Section 3. We provide the proofs of the main results in Section 4. In Section 5, two simulations are carried out to study the numerical performance of the results that we have established.

Throughout this article, let $C$ be a positive constant whose value may vary at different places. $\stackrel{P}{\longrightarrow}$ stands for convergence in probability, $\stackrel{d}{\longrightarrow}$ stands for convergence in distribution, and $\stackrel{a.s.}{\longrightarrow}$ represents almost sure convergence. $\triangleq$ means “defined as.”

2 Main results

Model (1.1) to be studied can be exactly described as follows:

(2.1) $y_i=\beta_0+\beta_1x_i+e_i$, $\quad U_i=x_i+\omega_i$, $\quad 1\le i\le n$; $\quad Ee_i=E\omega_i=0$, $\quad 1\le i\le n$,

where $(U_i,y_i)$, $1\le i\le n$, are observable vectors, while $x_i$, $1\le i\le n$, are constants, and $\beta_0$ and $\beta_1$ are unknown parameters. Denote $T_n=\sum_{i=1}^n(x_i-\bar{x}_n)^2$ for all $n\ge1$.

Based on the notations above and by simple calculation, we have

(2.2) $\tilde{\beta}_{1n}-\beta_1=\dfrac{\sum_{i=1}^n(\omega_i-\bar{\omega}_n)e_i+\sum_{i=1}^n(x_i-\bar{x}_n)(e_i-\beta_1\omega_i)-\beta_1\sum_{i=1}^n(\omega_i-\bar{\omega}_n)^2}{\sum_{i=1}^n(U_i-\bar{U}_n)^2}$

and

(2.3) $\tilde{\beta}_{0n}-\beta_0=(\beta_1-\tilde{\beta}_{1n})\bar{x}_n+(\beta_1-\tilde{\beta}_{1n})\bar{\omega}_n+\bar{e}_n-\beta_1\bar{\omega}_n$.
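The decompositions (2.2) and (2.3) are purely algebraic, so they can be confirmed numerically; the following sketch (ours, with arbitrary illustrative values) checks both identities against the definitions in (1.3) up to floating-point error:

```python
import numpy as np

rng = np.random.default_rng(2)
n, beta0, beta1 = 50, 5.0, 3.0   # illustrative values
x = rng.uniform(0, 10, n)
e, w = rng.normal(0, 1, n), rng.normal(0, 1, n)
y, U = beta0 + beta1 * x + e, x + w

xb, eb, wb, Ub, yb = x.mean(), e.mean(), w.mean(), U.mean(), y.mean()
b1 = np.sum((U - Ub) * (y - yb)) / np.sum((U - Ub) ** 2)  # (1.3)
b0 = yb - b1 * Ub

# right-hand side of (2.2)
rhs1 = (np.sum((w - wb) * e) + np.sum((x - xb) * (e - beta1 * w))
        - beta1 * np.sum((w - wb) ** 2)) / np.sum((U - Ub) ** 2)
# right-hand side of (2.3)
rhs0 = (beta1 - b1) * xb + (beta1 - b1) * wb + eb - beta1 * wb

print(np.isclose(b1 - beta1, rhs1), np.isclose(b0 - beta0, rhs0))
```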

To obtain our results, the following conditions are sufficient.

($C_1$) $0<Ee_i^2=h_1<\infty$ and $0<E\omega_i^2=h_2<\infty$;

($C_2$) $\lim_{n\to\infty}n/T_n=0$;

($C_3$) $r_n=O(n^{-p})$ for some $p>1/2$, where $r_n=\max_{1\le i\le n}|x_i-\bar{x}_n|/\sqrt{T_n}$;

($C_4$) $\liminf_{n\to\infty}\sigma_{1n}\ge c_1>0$, where $\sigma_{1n}^2=\operatorname{Var}\left(\frac{1}{\sqrt{T_n}}\sum_{i=1}^n(x_i-\bar{x}_n)(e_i-\beta_1\omega_i)\right)$;

($C_5$) $\liminf_{n\to\infty}\sigma_{0n}\ge c_2>0$, where $\sigma_{0n}^2=\operatorname{Var}\left(\frac{1}{\sqrt{n}}\sum_{i=1}^n(e_i-\beta_1\omega_i)\right)$;

($C_6$) $n\bar{x}_n^2/T_n\to0$ as $n\to\infty$;

($C_7$) $\lim_{n\to\infty}n/T_n=0$;

($C_8$) $|\omega_i|\le M$ for some constant $M>0$ and all $1\le i\le n$.

2.1 Asymptotic normality

In this subsection, we state the asymptotic normality of LS estimators β ˜ 1 n and β ˜ 0 n for the unknown parameters β 1 and β 0 .

Theorem 2.1

In model (2.1), let $\{e_i,i\ge1\}$ and $\{\omega_i,i\ge1\}$ both be stationary NSD sequences of random variables that are independent of each other. Suppose that conditions ($C_1$)–($C_4$) hold. Then,

(2.4) $\dfrac{\sqrt{T_n}}{\sigma_{1n}}(\tilde{\beta}_{1n}-\beta_1)\stackrel{d}{\longrightarrow}N(0,1)$,

where $N(0,1)$ represents the standard normal distribution.

Theorem 2.2

In model (2.1), let $\{e_i,i\ge1\}$ and $\{\omega_i,i\ge1\}$ both be strictly stationary NSD sequences of random variables that are independent of each other. Suppose that conditions ($C_1$)–($C_3$) and ($C_5$)–($C_6$) hold. Then,

(2.5) $\dfrac{\sqrt{n}}{\sigma_{0n}}(\tilde{\beta}_{0n}-\beta_0)\stackrel{d}{\longrightarrow}N(0,1)$.

Remark 2.1

Since the family of NSD sequences of random variables includes independent and NA sequences, the results of Theorems 2.1 and 2.2 also hold for independent random errors and NA random errors.

2.2 Mean consistency

In this subsection, we state the quadratic-mean consistency of LS estimators β ˜ 1 n and β ˜ 0 n for the unknown parameters β 1 and β 0 .

Theorem 2.3

In model (2.1), let $\{e_i,i\ge1\}$ and $\{\omega_i,i\ge1\}$ both be stationary NSD sequences of random variables that are independent of each other. Suppose that conditions ($C_1$), ($C_7$), and ($C_8$) hold. Then,

(2.6) $\lim_{n\to\infty}E|\tilde{\beta}_{1n}-\beta_1|^2=0$.

Remark 2.2

As independent random variables are special NSD random variables, Theorem 2.3 generalizes and improves the corresponding result of Liu and Chen [2] for independent and identically distributed random errors to the NSD setting.

Theorem 2.4

Suppose that the conditions of Theorem 2.3 are satisfied. Then,

(2.7) $\lim_{n\to\infty}E|\tilde{\beta}_{0n}-\beta_0|^2=0$.

Remark 2.3

As independent and NA random variables are special NSD random variables, the result of Theorem 2.4 also holds for NA random errors.

3 Preliminary lemmas

In this section, we present some important lemmas which will be used to prove the main results of the article.

Lemma 3.1

[14] Suppose that $(X_1,X_2,\dots,X_n)$ is NSD.

  1. $(-X_1,-X_2,\dots,-X_n)$ is also NSD.

  2. If $g_1,g_2,\dots,g_n$ are all non-decreasing (or all non-increasing) functions, then $(g_1(X_1),g_2(X_2),\dots,g_n(X_n))$ is NSD.

Lemma 3.2

[19, Rosenthal-type inequality] Let $p>1$ and $\{X_n,n\ge1\}$ be a sequence of NSD random variables with $EX_n=0$ and $E|X_n|^p<\infty$. Then, there exists a positive constant $D_p$ depending only on $p$ such that for all $n\ge1$,

$E\left(\max_{1\le k\le n}\left|\sum_{i=1}^kX_i\right|^p\right)\le D_p\sum_{i=1}^nE|X_i|^p$

for $1<p\le2$ and

$E\left(\max_{1\le k\le n}\left|\sum_{i=1}^kX_i\right|^p\right)\le D_p\left[\sum_{i=1}^nE|X_i|^p+\left(\sum_{i=1}^nEX_i^2\right)^{p/2}\right]$

for $p>2$.

From Lemmas 3.1 and 3.2, we can easily derive the following corollary.

Corollary 3.1

(Khintchine-Kolmogorov-type convergence theorem) Let $\{X_n,n\ge1\}$ be an NSD sequence of random variables with $\sum_{i=1}^\infty\operatorname{Var}(X_i)<\infty$; then

$\sum_{i=1}^\infty(X_i-EX_i)$

converges a.s.

Lemma 3.3

[14] Suppose that $X=(X_1,X_2,\dots,X_n)$ and $Z=(Z_1,Z_2,\dots,Z_n)$ are independent random vectors. If $X$ and $Z$ are both NSD, then $(X_1+Z_1,X_2+Z_2,\dots,X_n+Z_n)$ is NSD.

Lemma 3.4

[22,23] Let $\{X_n,n\ge1\}$ be an NSD sequence of random variables with $EX_n=0$ and $\sup_{j\ge1}\sum_{i:|i-j|\ge n}|\operatorname{Cov}(X_i,X_j)|\to0$ as $n\to\infty$. Assume that $\{a_{ni},1\le i\le n\}$ is an array of real numbers with $\sum_{i=1}^na_{ni}^2=O(1)$ and $\max_{1\le i\le n}|a_{ni}|\to0$ as $n\to\infty$. If $\{X_n,n\ge1\}$ is uniformly integrable in $L^2$, then

$\sigma_n^{-1}\sum_{i=1}^na_{ni}X_i\stackrel{d}{\longrightarrow}N(0,1)$,

where $\sigma_n^2=\operatorname{Var}\left(\sum_{i=1}^na_{ni}X_i\right)$.

Remark 3.1

[11] If $\{X_n,n\ge1\}$ is a stationary sequence with $EX_n^2<\infty$, then $\{X_n,n\ge1\}$ is uniformly integrable in $L^2$. In addition, the assumption $EX_1^2+2\sum_{j=2}^\infty EX_1X_j>0$ implies $\sup_{j\ge1}\sum_{i:|i-j|\ge n}|\operatorname{Cov}(X_i,X_j)|\to0$ as $n\to\infty$.

Lemma 3.5

[29] Let $\{X_n,n\ge1\}$ be a sequence of random variables which is stochastically dominated by a random variable $X$. For any $a>0$ and $\beta>0$, the following two statements hold:

$E|X_n|^\beta I(|X_n|\le a)\le C_1[E|X|^\beta I(|X|\le a)+a^\beta P(|X|>a)]$, $\qquad E|X_n|^\beta I(|X_n|>a)\le C_2E|X|^\beta I(|X|>a)$,

where $C_1$ and $C_2$ are positive constants. Thus,

$E|X_n|^\beta\le CE|X|^\beta$,

where $C$ is a positive constant.

Lemma 3.6

(Marcinkiewicz-type strong law of large numbers) Let $\{X_n,n\ge1\}$ be an NSD sequence of random variables which is stochastically dominated by a random variable $X$ with $E|X|^p<\infty$ for some $0<p<2$. Assume further that $EX_n=0$ if $1\le p<2$. Then,

(3.1) $n^{-1/p}\sum_{i=1}^nX_i\stackrel{a.s.}{\longrightarrow}0$.

Proof

Denote $X_n'=-n^{1/p}I(X_n<-n^{1/p})+X_nI(|X_n|\le n^{1/p})+n^{1/p}I(X_n>n^{1/p})$; then we obtain by $E|X|^p<\infty$ that

$\sum_{n=1}^\infty P(X_n'\ne X_n)=\sum_{n=1}^\infty P(|X_n|>n^{1/p})\le C\sum_{n=1}^\infty P(|X|>n^{1/p})\le CE|X|^p<\infty.$

Hence, it follows by the Borel-Cantelli lemma that

(3.2) $X_n'=X_n$ a.s. for all sufficiently large $n$.

Thus, to prove (3.1), it suffices to show that

(3.3) $n^{-1/p}\sum_{i=1}^nX_i'\stackrel{a.s.}{\longrightarrow}0.$

To prove (3.3), it suffices to show that

(3.4) $n^{-1/p}\sum_{i=1}^nEX_i'\to0$

and

(3.5) $n^{-1/p}\sum_{i=1}^n(X_i'-EX_i')\stackrel{a.s.}{\longrightarrow}0.$

First, we will prove (3.4). It will be divided into the following two cases:

  1. If $p=1$, it follows from $EX_n=0$ and $E|X|<\infty$ that

    $|EX_nI(|X_n|\le n)|=|EX_nI(|X_n|>n)|\le CE|X|I(|X|>n)\to0$

    as $n\to\infty$ and

    $\lim_{n\to\infty}nP(|X_n|>n)\le\lim_{n\to\infty}E|X_n|I(|X_n|>n)\le C\lim_{n\to\infty}E|X|I(|X|>n)=0.$

    Hence,

    $|EX_n'|\le nP(|X_n|>n)+|EX_nI(|X_n|\le n)|\to0$

    as $n\to\infty$. By the Toeplitz lemma, (3.4) holds.

  2. If $p\ne1$, by the Kronecker lemma, to prove (3.4), it suffices to show that

(3.6) $\sum_{n=1}^\infty\dfrac{|EX_n'|}{n^{1/p}}<\infty.$

For $0<p<1$, it follows from $E|X|^p<\infty$ and Lemma 3.5 that

$\sum_{n=1}^\infty\dfrac{|EX_n'|}{n^{1/p}}\le C\sum_{n=1}^\infty\left[P(|X|>n^{1/p})+\dfrac{E|X|I(|X|\le n^{1/p})}{n^{1/p}}\right]\le C+C\sum_{n=1}^\infty n^{-1/p}\sum_{j=1}^nE|X|I(j-1\le|X|^p<j)$
$=C+C\sum_{j=1}^\infty\sum_{n=j}^\infty n^{-1/p}E|X|I(j-1\le|X|^p<j)\le C+C\sum_{j=1}^\infty j^{-1/p+1}E|X|I(j-1\le|X|^p<j)$
$\le C+C\sum_{j=1}^\infty j^{-1/p+1}j^{(1-p)/p}E|X|^pI(j-1\le|X|^p<j)=C+C\sum_{j=1}^\infty E|X|^pI(j-1\le|X|^p<j)<\infty.$

For $1<p<2$, it follows from $EX_n=0$, $E|X|^p<\infty$ and Lemma 3.5 that

$\sum_{n=1}^\infty\dfrac{|EX_n'|}{n^{1/p}}\le C\sum_{n=1}^\infty\left[P(|X|>n^{1/p})+\dfrac{E|X|I(|X|\ge n^{1/p})}{n^{1/p}}\right]\le C+C\sum_{n=1}^\infty n^{-1/p}\sum_{j=n}^\infty E|X|I(j\le|X|^p<j+1)$
$=C+C\sum_{j=1}^\infty\sum_{n=1}^jn^{-1/p}E|X|I(j\le|X|^p<j+1)\le C+C\sum_{j=1}^\infty j^{-1/p+1}j^{(1-p)/p}E|X|^pI(j\le|X|^p<j+1)$
$=C+C\sum_{j=1}^\infty E|X|^pI(j\le|X|^p<j+1)<\infty.$

Hence, (3.6) holds.

Next, we will prove (3.5).

By Lemma 3.1, $\{X_n'-EX_n',n\ge1\}$ is still an NSD sequence of random variables. Hence, by Corollary 3.1 and the Kronecker lemma, to prove (3.5), we only need to show that

$\sum_{n=1}^\infty\operatorname{Var}(X_n'/n^{1/p})<\infty.$

By the Markov inequality, Lemma 3.5 and $E|X|^p<\infty$, we have

$\sum_{n=1}^\infty\operatorname{Var}\left(\dfrac{X_n'}{n^{1/p}}\right)\le\sum_{n=1}^\infty\dfrac{E(X_n')^2}{n^{2/p}}\le C\sum_{n=1}^\infty\left[P(|X|>n^{1/p})+\dfrac{EX^2I(|X|\le n^{1/p})}{n^{2/p}}\right]$
$\le C+C\sum_{n=1}^\infty n^{-2/p}\sum_{k=1}^nEX^2I(k-1\le|X|^p<k)=C+C\sum_{k=1}^\infty\sum_{n=k}^\infty n^{-2/p}EX^2I(k-1\le|X|^p<k)$
$\le C+C\sum_{k=1}^\infty k^{-2/p+1}k^{(2-p)/p}E|X|^pI(k-1\le|X|^p<k)=C+C\sum_{k=1}^\infty E|X|^pI(k-1\le|X|^p<k)<\infty.$

This completes the proof of Lemma 3.6.□

Remark 3.2

Lemma 3.6 is the Marcinkiewicz-type strong law of large numbers for NSD sequences of random variables; it extends and improves the classical Marcinkiewicz-type strong law of large numbers for independent and identically distributed random variables to NSD random variables with non-identical distributions. It also holds for NA random variables with non-identical distributions.
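As an informal numerical companion to Lemma 3.6 (our sketch, not part of the article), one can generate an NA (hence NSD) sequence from independent negatively correlated Gaussian pairs and watch $n^{-1/p}\left|\sum_{i=1}^nX_i\right|$ shrink; the order $p=1.5$ and the correlation $-0.5$ are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(3)
p = 1.5                           # moment order in (0, 2); E|X|^p is finite here

def nsd_sample(n):
    # NA (hence NSD) errors: independent 2-blocks of negatively correlated normals
    L = np.linalg.cholesky(np.array([[1.0, -0.5], [-0.5, 1.0]]))
    z = rng.normal(size=(n // 2, 2)) @ L.T
    return z.ravel()

for n in (10**3, 10**4, 10**5, 10**6):
    s = np.abs(nsd_sample(n).sum()) / n ** (1 / p)
    print(n, s)                   # n^{-1/p} |sum| shrinks as n grows
```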

4 Proofs of the main results

Proof of Theorem 2.1

In view of (2.2), we have

$\dfrac{\sqrt{T_n}}{\sigma_{1n}}(\tilde{\beta}_{1n}-\beta_1)=\dfrac{\dfrac{1}{\sigma_{1n}\sqrt{T_n}}\left[\sum_{i=1}^n(\omega_i-\bar{\omega}_n)e_i+\sum_{i=1}^n(x_i-\bar{x}_n)(e_i-\beta_1\omega_i)-\beta_1\sum_{i=1}^n(\omega_i-\bar{\omega}_n)^2\right]}{\dfrac{1}{T_n}\sum_{i=1}^n(U_i-\bar{U}_n)^2}.$

Hence, by Slutsky’s theorem, to prove (2.4), it suffices to show that

(4.1) $\dfrac{1}{T_n}\sum_{i=1}^n(\omega_i-\bar{\omega}_n)^2\stackrel{P}{\longrightarrow}0,$

(4.2) $\dfrac{1}{T_n}\sum_{i=1}^n(\omega_i-\bar{\omega}_n)e_i\stackrel{P}{\longrightarrow}0,$

(4.3) $\dfrac{1}{\sigma_{1n}\sqrt{T_n}}\sum_{i=1}^n(x_i-\bar{x}_n)(e_i-\beta_1\omega_i)\stackrel{d}{\longrightarrow}N(0,1),$

and

(4.4) $\dfrac{1}{T_n}\sum_{i=1}^n(U_i-\bar{U}_n)^2\stackrel{P}{\longrightarrow}1.$

First, we prove (4.1). Note that

$\dfrac{1}{T_n}\sum_{i=1}^n(\omega_i-\bar{\omega}_n)^2\le\dfrac{1}{T_n}\sum_{i=1}^n\omega_i^2;$

hence, (4.1) follows from ($C_2$).

Second, we prove (4.2). Similar to the proof of (4.1), we can get that

(4.5) $\dfrac{1}{T_n}\sum_{i=1}^n(e_i-\bar{e}_n)^2\stackrel{P}{\longrightarrow}0.$

Since

$\left|\sum_{i=1}^n(\omega_i-\bar{\omega}_n)e_i\right|\le\dfrac12\left[\sum_{i=1}^n(\omega_i-\bar{\omega}_n)^2+\sum_{i=1}^n(e_i-\bar{e}_n)^2\right],$

(4.2) follows from (4.1) and (4.5).

Third, we prove (4.4). By the Cauchy-Schwarz inequality and the elementary bound $2ab\le ma^2+b^2/m$, we have, for any $m>0$,

$2\left|\sum_{i=1}^n(x_i-\bar{x}_n)(\omega_i-\bar{\omega}_n)\right|\le2\sqrt{T_n\sum_{i=1}^n(\omega_i-\bar{\omega}_n)^2}\le mT_n+\dfrac1m\sum_{i=1}^n(\omega_i-\bar{\omega}_n)^2.$

Note that

$\sum_{i=1}^n(U_i-\bar{U}_n)^2=\sum_{i=1}^n(x_i-\bar{x}_n)^2+2\sum_{i=1}^n(x_i-\bar{x}_n)(\omega_i-\bar{\omega}_n)+\sum_{i=1}^n(\omega_i-\bar{\omega}_n)^2.$

Thus,

(4.6) $\left|\sum_{i=1}^n(U_i-\bar{U}_n)^2-T_n\right|\le2\left|\sum_{i=1}^n(x_i-\bar{x}_n)(\omega_i-\bar{\omega}_n)\right|+\sum_{i=1}^n(\omega_i-\bar{\omega}_n)^2\le mT_n+\dfrac{m+1}{m}\sum_{i=1}^n(\omega_i-\bar{\omega}_n)^2.$

By (4.1) and (4.6), we have

(4.7) $\left|\dfrac{1}{T_n}\sum_{i=1}^n(U_i-\bar{U}_n)^2-1\right|\le m+\dfrac{m+1}{m}\cdot\dfrac{1}{T_n}\sum_{i=1}^n(\omega_i-\bar{\omega}_n)^2\stackrel{P}{\longrightarrow}m.$

Since $m>0$ is arbitrary, it follows from (4.7) that

$\dfrac{1}{T_n}\sum_{i=1}^n(U_i-\bar{U}_n)^2-1\stackrel{P}{\longrightarrow}0,$

which implies (4.4).

Finally, we will prove (4.3). Denote $X_i=e_i-\beta_1\omega_i$ and $a_{ni}=\dfrac{x_i-\bar{x}_n}{\sigma_{1n}\sqrt{T_n}}$; then

$\dfrac{1}{\sigma_{1n}\sqrt{T_n}}\sum_{i=1}^n(x_i-\bar{x}_n)(e_i-\beta_1\omega_i)=\sum_{i=1}^na_{ni}X_i.$

By Lemmas 3.1 and 3.3, $\{X_i,i\ge1\}$ is still an NSD sequence of random variables. It is easy to check that $\sum_{i=1}^na_{ni}^2=O(1)$ and $\max_{1\le i\le n}|a_{ni}|\to0$ by ($C_3$) and ($C_4$). By the stationarity of $\{X_i,i\ge1\}$ and $EX_i^2=E(e_i-\beta_1\omega_i)^2\le CEe_i^2+CE\omega_i^2<\infty$, we know that $\{X_i,i\ge1\}$ is uniformly integrable in $L^2$ (see Remark 3.1). From ($C_4$), it follows that

$\operatorname{Var}\left(\sum_{i=1}^na_{ni}X_i\right)=\operatorname{Var}\left(\sum_{i=1}^n\dfrac{x_i-\bar{x}_n}{\sigma_{1n}\sqrt{T_n}}(e_i-\beta_1\omega_i)\right)=\dfrac{1}{\sigma_{1n}^2T_n}\operatorname{Var}\left(\sum_{i=1}^n(x_i-\bar{x}_n)(e_i-\beta_1\omega_i)\right)=1.$

By ($C_3$), $r_n^2n\to0$, so for all sufficiently large $n$,

$\sigma_{1n}^2=\operatorname{Var}\left(\sum_{i=1}^n\dfrac{x_i-\bar{x}_n}{\sqrt{T_n}}(e_i-\beta_1\omega_i)\right)\le r_n^2\operatorname{Var}\left(\sum_{i=1}^nX_i\right)<\dfrac1nE\left(\sum_{i=1}^nX_i\right)^2$
$=EX_1^2+2\left[\dfrac{n-1}{n}EX_1X_2+\dfrac{n-2}{n}EX_1X_3+\cdots+\dfrac1nEX_1X_n\right]\le EX_1^2+2\left[\dfrac{n-1}{n}EX_1X_2+\cdots+\dfrac{n-m+1}{n}EX_1X_m\right].$

Hence, for any fixed $m$,

$\sigma_{1n}^2<EX_1^2+2(EX_1X_2+\cdots+EX_1X_m)$

for all sufficiently large $n$. Letting $m\to\infty$ and using ($C_4$), we obtain

$0<c_1^2\le\liminf_{n\to\infty}\sigma_{1n}^2\le EX_1^2+2\sum_{j=2}^\infty EX_1X_j.$

Thus, by Remark 3.1, we have $\sup_{j\ge1}\sum_{i:|i-j|\ge n}|\operatorname{Cov}(X_i,X_j)|\to0$ as $n\to\infty$. Therefore, (4.3) follows from Lemma 3.4.

This completes the proof of Theorem 2.1.□

Proof of Theorem 2.2

In view of (2.3), we have

$\dfrac{\sqrt{n}}{\sigma_{0n}}(\tilde{\beta}_{0n}-\beta_0)=\dfrac{\sqrt{n}}{\sigma_{0n}}[(\beta_1-\tilde{\beta}_{1n})\bar{x}_n+(\beta_1-\tilde{\beta}_{1n})\bar{\omega}_n+\bar{e}_n-\beta_1\bar{\omega}_n]=\dfrac{\sqrt{n}}{\sigma_{0n}}(\bar{e}_n-\beta_1\bar{\omega}_n)+\dfrac{\sqrt{n}}{\sigma_{0n}}(\bar{x}_n+\bar{\omega}_n)(\beta_1-\tilde{\beta}_{1n}).$

Note that

$\dfrac{\sqrt{n}}{\sigma_{0n}}(\bar{e}_n-\beta_1\bar{\omega}_n)=\dfrac{\sqrt{n}}{\sigma_{0n}}\cdot\dfrac1n\sum_{i=1}^n(e_i-\beta_1\omega_i)=\dfrac{1}{\sigma_{0n}\sqrt{n}}\sum_{i=1}^n(e_i-\beta_1\omega_i).$

Denote $Y_i=e_i-\beta_1\omega_i$ and $b_{ni}=\dfrac{1}{\sigma_{0n}\sqrt{n}}$; then $\dfrac{1}{\sigma_{0n}\sqrt{n}}\sum_{i=1}^n(e_i-\beta_1\omega_i)=\sum_{i=1}^nb_{ni}Y_i$. By the stationarity of $\{Y_i,i\ge1\}$ and $EY_i^2=E(e_i-\beta_1\omega_i)^2\le CEe_i^2+CE\omega_i^2<\infty$, we know that $\{Y_i,i\ge1\}$ is uniformly integrable in $L^2$ (see Remark 3.1). From ($C_5$), it follows that

$\operatorname{Var}\left(\sum_{i=1}^nb_{ni}Y_i\right)=\operatorname{Var}\left(\sum_{i=1}^n\dfrac{1}{\sigma_{0n}\sqrt{n}}(e_i-\beta_1\omega_i)\right)=\dfrac{1}{n\sigma_{0n}^2}\operatorname{Var}\left(\sum_{i=1}^n(e_i-\beta_1\omega_i)\right)=1.$

By ($C_5$), we have

$\sigma_{0n}^2=\operatorname{Var}\left(\dfrac{1}{\sqrt{n}}\sum_{i=1}^n(e_i-\beta_1\omega_i)\right)=\dfrac1n\operatorname{Var}\left(\sum_{i=1}^nY_i\right)=\dfrac1nE\left(\sum_{i=1}^nY_i\right)^2$
$=EY_1^2+2\left[\dfrac{n-1}{n}EY_1Y_2+\dfrac{n-2}{n}EY_1Y_3+\cdots+\dfrac1nEY_1Y_n\right]\le EY_1^2+2\left[\dfrac{n-1}{n}EY_1Y_2+\cdots+\dfrac{n-m+1}{n}EY_1Y_m\right].$

Hence, for any fixed $m$,

$\sigma_{0n}^2\le EY_1^2+2(EY_1Y_2+\cdots+EY_1Y_m)$

for all sufficiently large $n$. Letting $m\to\infty$ and using ($C_5$), we obtain

$0<c_2^2\le\liminf_{n\to\infty}\sigma_{0n}^2\le EY_1^2+2\sum_{j=2}^\infty EY_1Y_j.$

Thus, by Remark 3.1, we have

$\sup_{j\ge1}\sum_{i:|i-j|\ge n}|\operatorname{Cov}(Y_i,Y_j)|\to0$

as $n\to\infty$. Hence, by Lemma 3.4, one can get that

$\dfrac{1}{\sigma_{0n}\sqrt{n}}\sum_{i=1}^n(e_i-\beta_1\omega_i)\stackrel{d}{\longrightarrow}N(0,1).$

Thus, by Theorem 2.1, to prove (2.5), it suffices to show that

(4.8) $\sqrt{\dfrac{n}{T_n}}(\bar{x}_n+\bar{\omega}_n)=\bar{x}_n\sqrt{\dfrac{n}{T_n}}+\sqrt{\dfrac{n}{T_n}}\cdot\dfrac1n\sum_{i=1}^n\omega_i\stackrel{P}{\longrightarrow}0.$

By ($C_6$) and Lemma 3.6, we derive that

(4.9) $\bar{x}_n\sqrt{\dfrac{n}{T_n}}\to0$

as $n\to\infty$ and

(4.10) $\dfrac1n\sum_{i=1}^n\omega_i\stackrel{a.s.}{\longrightarrow}0.$

Therefore, (4.8) follows from (4.9) and (4.10).

This completes the proof of Theorem 2.2.□

Proof of Theorem 2.3

By simple calculation, we have

$\tilde{\beta}_{1n}-\beta_1=\dfrac{\sum_{i=1}^n(U_i-\bar{U}_n)e_i-\beta_1\sum_{i=1}^n(x_i-\bar{x}_n)\omega_i-\beta_1\sum_{i=1}^n(\omega_i-\bar{\omega}_n)^2}{\sum_{i=1}^n(U_i-\bar{U}_n)^2}\triangleq\left(\sum_{i=1}^n(U_i-\bar{U}_n)^2\right)^{-1}(\Delta_1+\Delta_2+\Delta_3),$

where $\Delta_1=\sum_{i=1}^n(U_i-\bar{U}_n)e_i$, $\Delta_2=-\beta_1\sum_{i=1}^n(x_i-\bar{x}_n)\omega_i$, and $\Delta_3=-\beta_1\sum_{i=1}^n(\omega_i-\bar{\omega}_n)^2$. Since

$|\tilde{\beta}_{1n}-\beta_1|^2\le C\left(\sum_{i=1}^n(U_i-\bar{U}_n)^2\right)^{-2}(\Delta_1^2+\Delta_2^2+\Delta_3^2),$

we have

$E|\tilde{\beta}_{1n}-\beta_1|^2\le CE\left[\left(\sum_{i=1}^n(U_i-\bar{U}_n)^2\right)^{-2}(\Delta_1^2+\Delta_2^2+\Delta_3^2)\right].$

Hence, to prove (2.6), it suffices to show that

(4.11) $\lim_{n\to\infty}E\left[\left(\sum_{i=1}^n(U_i-\bar{U}_n)^2\right)^{-2}\Delta_1^2\right]=0,$

(4.12) $\lim_{n\to\infty}E\left[\left(\sum_{i=1}^n(U_i-\bar{U}_n)^2\right)^{-2}\Delta_2^2\right]=0,$

and

(4.13) $\lim_{n\to\infty}E\left[\left(\sum_{i=1}^n(U_i-\bar{U}_n)^2\right)^{-2}\Delta_3^2\right]=0.$

By ($C_8$), we derive that $\sum_{i=1}^n(\omega_i-\bar{\omega}_n)^2\le\sum_{i=1}^n\omega_i^2\le nM^2$. Hence, by the Cauchy-Schwarz inequality, we have

$\left(\sum_{i=1}^n(x_i-\bar{x}_n)\omega_i\right)^2\le\sum_{i=1}^n(x_i-\bar{x}_n)^2\sum_{i=1}^n\omega_i^2\le nM^2T_n.$

Thus,

$\sum_{i=1}^n(U_i-\bar{U}_n)^2=T_n+2\sum_{i=1}^n(x_i-\bar{x}_n)\omega_i+\sum_{i=1}^n(\omega_i-\bar{\omega}_n)^2\ge T_n-C\sqrt{nT_n}.$

From ($C_7$), it follows that

$\dfrac{C\sqrt{nT_n}}{T_n}=C\sqrt{\dfrac{n}{T_n}}\to0$

as $n\to\infty$. Thus,

$\sum_{i=1}^n(U_i-\bar{U}_n)^2\ge T_n-C\sqrt{nT_n}\ge CT_n$

for all sufficiently large $n$. Hence, by the Cauchy-Schwarz inequality and ($C_1$), we obtain that

$E\left[\left(\sum_{i=1}^n(U_i-\bar{U}_n)^2\right)^{-2}\Delta_1^2\right]\le E\left[\left(\sum_{i=1}^n(U_i-\bar{U}_n)^2\right)^{-2}\sum_{j=1}^n(U_j-\bar{U}_n)^2\sum_{k=1}^ne_k^2\right]=E\left[\left(\sum_{i=1}^n(U_i-\bar{U}_n)^2\right)^{-1}\sum_{k=1}^ne_k^2\right]\le C\dfrac{n}{T_n}.$

Therefore, (4.11) follows from ($C_7$). Similar to the proof of (4.11), one can get (4.12) and (4.13).

This completes the proof of Theorem 2.3.□

Proof of Theorem 2.4

In view of (2.3), we have

$\tilde{\beta}_{0n}-\beta_0=(\beta_1-\tilde{\beta}_{1n})\bar{x}_n+(\beta_1-\tilde{\beta}_{1n})\bar{\omega}_n+\dfrac1n\sum_{i=1}^n(e_i-\beta_1\omega_i).$

Hence,

$|\tilde{\beta}_{0n}-\beta_0|^2\le C\left[(\beta_1-\tilde{\beta}_{1n})^2\bar{x}_n^2+(\beta_1-\tilde{\beta}_{1n})^2\bar{\omega}_n^2+\dfrac{1}{n^2}\left(\sum_{i=1}^n(e_i-\beta_1\omega_i)\right)^2\right].$

Then,

$E|\tilde{\beta}_{0n}-\beta_0|^2\le C\left[E[(\beta_1-\tilde{\beta}_{1n})^2\bar{x}_n^2]+E[(\beta_1-\tilde{\beta}_{1n})^2\bar{\omega}_n^2]+\dfrac{1}{n^2}E\left(\sum_{i=1}^n(e_i-\beta_1\omega_i)\right)^2\right].$

By Theorem 2.3 and ($C_8$), we can derive that

$E[(\beta_1-\tilde{\beta}_{1n})^2\bar{x}_n^2]=\bar{x}_n^2E(\beta_1-\tilde{\beta}_{1n})^2\to0$

as $n\to\infty$ and

$E[(\beta_1-\tilde{\beta}_{1n})^2\bar{\omega}_n^2]\le CE(\beta_1-\tilde{\beta}_{1n})^2\to0$

as $n\to\infty$. By Lemmas 3.1 and 3.3, $\{e_i-\beta_1\omega_i,i\ge1\}$ is still an NSD sequence of random variables. Thus, from ($C_1$) and Lemma 3.2, we can get that

$E\left[\dfrac{1}{n^2}\left(\sum_{i=1}^n(e_i-\beta_1\omega_i)\right)^2\right]=\dfrac{1}{n^2}E\left(\sum_{i=1}^n(e_i-\beta_1\omega_i)\right)^2\le\dfrac{1}{n^2}E\left(\max_{1\le k\le n}\left|\sum_{i=1}^k(e_i-\beta_1\omega_i)\right|^2\right)\le\dfrac{C}{n^2}\sum_{i=1}^nE(e_i-\beta_1\omega_i)^2\le\dfrac{C}{n^2}\sum_{i=1}^n(Ee_i^2+\beta_1^2E\omega_i^2)\le\dfrac{C}{n}\to0$

as $n\to\infty$.□

5 Numerical simulations

In this section, we verify the asymptotic normality and mean consistency of LS estimators in the EV regression model with NSD errors.

The data are generated from model (1.2). For $0<u<0.8$ and $n\ge4$, let the normal random vectors $(e_1,e_2,\dots,e_n)\sim N(\mathbf{0},\Sigma)$ and $(\omega_1,\omega_2,\dots,\omega_n)\sim N(\mathbf{0},\Sigma)$, where $\mathbf{0}$ represents the zero vector and $\Sigma$ is the $n\times n$ block-diagonal matrix whose $4\times4$ diagonal blocks are

$B=\begin{pmatrix}\frac45+u&-u&-u^2&0\\-u&\frac45+u&-u&-u^2\\-u^2&-u&\frac45+u&-u\\0&-u^2&-u&\frac45+u\end{pmatrix}.$

By the definition of NA random variables (see [17]), we know that ( e 1 , e 2 , , e n ) and ( ω 1 , ω 2 , , ω n ) are NA vectors, and thus NSD vectors.
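A small sketch of this construction (our code; the $4\times4$ block size and the negative signs of the off-diagonal entries are our reading of the displayed matrix) builds $\Sigma$ for $u=0.6$ and checks that it is a valid covariance matrix whose off-diagonal entries are all non-positive, which is the property that makes a Gaussian vector NA:

```python
import numpy as np

def make_sigma(n, u):
    """Block-diagonal Sigma: independent 4-blocks with band entries -u, -u^2."""
    assert n % 4 == 0
    B = np.diag([4 / 5 + u] * 4)
    for i in range(3):
        B[i, i + 1] = B[i + 1, i] = -u        # first off-diagonal
    for i in range(2):
        B[i, i + 2] = B[i + 2, i] = -u ** 2   # second off-diagonal
    S = np.zeros((n, n))
    for k in range(0, n, 4):
        S[k:k + 4, k:k + 4] = B
    return S

S = make_sigma(100, 0.6)
eigmin = np.linalg.eigvalsh(S).min()
off_nonpos = bool((S - np.diag(np.diag(S)) <= 0).all())
print(eigmin > 0, off_nonpos)   # positive definite, non-positive covariances
```

For $u=0.6$ the smallest eigenvalue of each block is about $0.09>0$, so $\Sigma$ is positive definite and sampling via a Cholesky factor is possible.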

5.1 Simulation example 1

In order to show the asymptotic normality of the LS estimators in the model, we choose $u=0.6$ and $x_i=i$ for $1\le i\le n$. For the fixed $\beta_1=3$ and $\beta_0=5$, we take the sample size $n$ to be 100, 500, and 1,000, respectively. We use R software to compute $\sigma_{1n}^{-1}\sqrt{T_n}(\tilde{\beta}_{1n}-\beta_1)$ and $\sigma_{0n}^{-1}\sqrt{n}(\tilde{\beta}_{0n}-\beta_0)$ 1,000 times each, and present the histograms and Quantile–Quantile (Q–Q) plots of them in Figures 1–6.
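The experiment can be sketched in Python as follows (our re-implementation of the described R computation; the covariance $\Sigma$ is our reading of the displayed matrix, and $\sigma_{1n}$ is computed exactly from ($C_4$), using the fact that the vector of $e_i-\beta_1\omega_i$ has covariance $(1+\beta_1^2)\Sigma$ since the two error sequences are independent with common covariance $\Sigma$). The sample standard deviation of $\mu$ should be close to 1; with this $\Sigma$, a finite-sample location shift of order $n^{-1/2}$, coming from the $-\beta_1\sum(\omega_i-\bar{\omega}_n)^2$ term in (2.2), is still visible at moderate $n$.

```python
import numpy as np

rng = np.random.default_rng(5)
n, u, beta0, beta1, reps = 500, 0.6, 5.0, 3.0, 500

# Build the block-diagonal covariance (our reading of the displayed matrix)
B = np.diag([4 / 5 + u] * 4)
for i in range(3):
    B[i, i + 1] = B[i + 1, i] = -u
for i in range(2):
    B[i, i + 2] = B[i + 2, i] = -u ** 2
S = np.zeros((n, n))
for k in range(0, n, 4):
    S[k:k + 4, k:k + 4] = B
L = np.linalg.cholesky(S)

x = np.arange(1.0, n + 1)
xc = x - x.mean()
Tn = np.sum(xc ** 2)
c = xc / np.sqrt(Tn)
sigma1n = np.sqrt((1 + beta1 ** 2) * (c @ S @ c))   # sigma_{1n} from (C4)

mus = np.empty(reps)
for r in range(reps):
    e = L @ rng.normal(size=n)        # NSD equation errors
    w = L @ rng.normal(size=n)        # NSD measurement errors
    U, y = x + w, beta0 + beta1 * x + e
    b1 = np.sum((U - U.mean()) * (y - y.mean())) / np.sum((U - U.mean()) ** 2)
    mus[r] = np.sqrt(Tn) * (b1 - beta1) / sigma1n

print(round(mus.mean(), 2), round(mus.std(), 2))   # spread near 1
```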

Figure 1

Histogram and normal Q–Q plot of $\sigma_{1n}^{-1}\sqrt{T_n}(\tilde{\beta}_{1n}-\beta_1)\triangleq\mu$ with $n=100$.

Figure 2

Histogram and normal Q–Q plot of $\sigma_{0n}^{-1}\sqrt{n}(\tilde{\beta}_{0n}-\beta_0)\triangleq\eta$ with $n=100$.

Figure 3

Histogram and normal Q–Q plot of σ 1 n 1 T n ( β ˜ 1 n β 1 ) μ with n = 500 .

Figure 4

Histogram and normal Q–Q plot of σ_{0n}^{−1}√n(β̃_{0n} − β_0) ≜ η with n = 500.

Figure 5

Histogram and normal Q–Q plot of σ_{1n}^{−1}√T_n(β̃_{1n} − β_1) ≜ μ with n = 1,000.

Figure 6

Histogram and normal Q–Q plot of σ_{0n}^{−1}√n(β̃_{0n} − β_0) ≜ η with n = 1,000.

It can be seen from Figures 1–6 that the histograms and Q–Q plots show the distributions of σ_{1n}^{−1}√T_n(β̃_{1n} − β_1) and σ_{0n}^{−1}√n(β̃_{0n} − β_0) fitting the standard normal distribution increasingly well as the sample size n increases. The simulation results verify the validity of our theoretical conclusions in Theorems 2.1 and 2.2.
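The normality check above can be reproduced in a few lines. The sketch below is illustrative only, not the authors' code: it draws the errors i.i.d. N(0, 1) rather than NSD, and standardizes β̃_{1n} − β_1 by its empirical standard deviation instead of σ_{1n}^{−1}√T_n; the function names are our own.

```python
import numpy as np

def ls_ev_estimators(U, y):
    """LS estimators (1.3) computed from the observable pairs (U_i, y_i)."""
    Ub, yb = U.mean(), y.mean()
    b1 = np.sum((U - Ub) * (y - yb)) / np.sum((U - Ub) ** 2)
    b0 = yb - b1 * Ub
    return b1, b0

def standardized_beta1(beta0=4.0, beta1=2.0, n=1000, reps=600, seed=0):
    """Simulate model (1.1) and return empirically standardized errors of
    the beta1 estimator.  Errors are i.i.d. N(0, 1) here -- a simplification
    of the NSD errors used in the paper."""
    rng = np.random.default_rng(seed)
    x = np.arange(1, n + 1, dtype=float)   # x_i = i, as in the simulations
    b1_hats = np.empty(reps)
    for r in range(reps):
        e = rng.standard_normal(n)         # regression errors e_i
        w = rng.standard_normal(n)         # measurement errors omega_i
        U = x + w                          # U_i = x_i + omega_i
        y = beta0 + beta1 * x + e
        b1_hats[r], _ = ls_ev_estimators(U, y)
    # standardize by the sample std; the paper instead uses sigma_{1n}, T_n
    return (b1_hats - beta1) / b1_hats.std(ddof=1)

z = standardized_beta1()
print(round(z.mean(), 2), round(z.std(ddof=1), 2))
```

Plotting a histogram or Q–Q plot of `z` (e.g. with `scipy.stats.probplot`) then shows the same approximate standard normal shape as in Figures 1–6.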

5.2 Simulation example 2

In order to verify the mean consistency of the LS estimators in the model, we choose u = 0.4 and x_i = i for all 1 ≤ i ≤ n. For the fixed parameter pairs (β_1, β_0) = (2, 4) and (β_1, β_0) = (5, 3), we take the sample sizes n = 100, 200, 300, 500, 800 and 1,000, respectively. We use the R software to compute β̃_{1n} − β_1 and β̃_{0n} − β_0 over 600 replications each, and provide their boxplots in Figures 7–10.

Figure 7

Boxplots of β̃_{1n} − β_1 ≜ λ with β_1 = 2 and β_0 = 4.

Figure 8

Boxplots of β̃_{0n} − β_0 ≜ φ with β_1 = 2 and β_0 = 4.

Figure 9

Boxplots of β̃_{1n} − β_1 ≜ λ with β_1 = 5 and β_0 = 3.

Figure 10

Boxplots of β̃_{0n} − β_0 ≜ φ with β_1 = 5 and β_0 = 3.

It can be seen from Figures 7–10 that β̃_{1n} − β_1 and β̃_{0n} − β_0 get closer and closer to zero, and that their ranges decrease, as the sample size n increases. The simulation results directly reflect the conclusions in Theorems 2.3 and 2.4.
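The shrinkage seen in the boxplots can also be checked numerically. As before, this is only a sketch under simplifying assumptions: i.i.d. N(0, 1) errors stand in for the NSD errors, and the function name is our own. It tracks the average absolute error of β̃_{1n} as n grows.

```python
import numpy as np

def mean_abs_error(n, beta0=4.0, beta1=2.0, reps=200, seed=1):
    """Average |beta1_hat - beta1| over `reps` data sets of size n, with
    x_i = i and i.i.d. N(0, 1) errors standing in for the NSD errors."""
    rng = np.random.default_rng(seed)
    x = np.arange(1, n + 1, dtype=float)
    errs = np.empty(reps)
    for r in range(reps):
        U = x + rng.standard_normal(n)                 # U_i = x_i + omega_i
        y = beta0 + beta1 * x + rng.standard_normal(n)
        # LS estimator of beta1 from (1.3)
        b1 = np.sum((U - U.mean()) * (y - y.mean())) / np.sum((U - U.mean()) ** 2)
        errs[r] = abs(b1 - beta1)
    return float(errs.mean())

errors = {n: mean_abs_error(n) for n in (100, 300, 1000)}
print(errors)  # the average error shrinks as n grows
```

The decreasing values mirror the narrowing boxplots of Figures 7–10 and the mean-consistency statements of Theorems 2.3 and 2.4.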

Acknowledgements

This work was supported by the National Natural Science Foundation of China (No. 61374183) and the Project of Guangxi Education Department (No. 2017KY0720).

References

[1] A. Deaton, Panel data from a time series of cross-sections, J. Econ. 30 (1985), 109–126, doi:10.1016/0304-4076(85)90134-4.

[2] J. X. Liu and X. R. Chen, Consistency of LS estimator in simple linear EV regression models, Acta Math. Sci. B 25 (2005), 50–58, doi:10.1016/S0252-9602(17)30260-6.

[3] P. Y. Chen, L. L. Wen, and S. H. Sung, Strong and weak consistency of least squares estimators in simple linear EV regression models, J. Statist. Plann. Inference 205 (2020), 64–73, doi:10.1016/j.jspi.2019.06.004.

[4] Y. Miao, G. Y. Yang, and L. M. Shen, The central limit theorem for LS estimator in simple linear EV regression models, Commun. Stat. Theory Methods 36 (2007), 2263–2272, doi:10.1080/03610920701215266.

[5] S. F. Xu and N. Li, Consistency for the LS estimator in the linear EV regression model with replicate observations, J. Korean Statist. Soc. 42 (2013), no. 4, 451–458, doi:10.1016/j.jkss.2013.01.006.

[6] Y. Miao, K. Wang, and F. F. Zhao, Some limit behaviors for the LS estimator in simple linear EV regression models, Stat. Probab. Lett. 81 (2011), 92–102, doi:10.1016/j.spl.2010.09.023.

[7] Y. Miao and G. Y. Yang, The loglog law for LS estimator in simple linear EV regression models, Statistics 45 (2011), 155–162, doi:10.1080/02331880903450576.

[8] Y. Miao, K. Wang, and F. Zhao, Some limit behaviors for the LS estimator in simple linear EV regression models, Statist. Probab. Lett. 81 (2011), 92–102, doi:10.1016/j.spl.2010.09.023.

[9] I. Fazekas and A. G. Kukush, Asymptotic properties of an estimator in nonlinear functional errors-in-variables models with dependent error terms, Comput. Math. Appl. 34 (1997), no. 10, 23–39, doi:10.1016/S0898-1221(97)00204-6.

[10] G. L. Fan, H. Y. Liang, J. F. Wang, and H. X. Xu, Asymptotic properties for LS estimators in EV regression model with dependent errors, AStA Adv. Stat. Anal. 94 (2010), 89–103, doi:10.1007/s10182-010-0124-3.

[11] Y. Miao, F. F. Zhao, K. Wang, and Y. P. Chen, Asymptotic normality and strong consistency of LS estimators in the EV regression model with NA errors, Stat. Papers 54 (2013), no. 1, 193–206, doi:10.1007/s00362-011-0418-x.

[12] Y. Miao, Y. L. Wang, and H. J. Zheng, Consistency of LS estimators in the EV regression model with martingale difference errors, Statistics 49 (2015), no. 1, 104–118, doi:10.1080/02331888.2014.903950.

[13] A. T. Shen, Asymptotic properties of LS estimators in the errors-in-variables model with MD errors, Statist. Papers 60 (2019), no. 4, 1193–1206, doi:10.1007/s00362-016-0869-1.

[14] T. Z. Hu, Negatively superadditive dependence of random variables with applications, Chin. J. Appl. Probab. Stat. 16 (2000), 133–144.

[15] J. H. B. Kemperman, On the FKG-inequality for measures on a partially ordered space, Nederl. Akad. Wetensch. Proc. Ser. A 80 (1977), 313–331, doi:10.1016/1385-7258(77)90027-0.

[16] T. C. Christofides and E. A. Vaggelatou, Connection between supermodular ordering and positive/negative association, J. Multivariate Anal. 88 (2004), 138–151, doi:10.1016/S0047-259X(03)00064-2.

[17] K. Joag-Dev and F. Proschan, Negative association of random variables with applications, Ann. Stat. 11 (1983), 286–295, doi:10.1214/aos/1176346079.

[18] N. Eghbal, M. Amini, and A. Bozorgnia, Some maximal inequalities for quadratic forms of negative superadditive dependence random variables, Statist. Probab. Lett. 80 (2010), no. 7–8, 587–591, doi:10.1016/j.spl.2009.12.014.

[19] X. J. Wang, X. Deng, L. L. Zheng, and S. H. Hu, Complete convergence for arrays of rowwise negatively superadditive-dependent random variables and its applications, Statistics 48 (2014), no. 4, 834–850, doi:10.1080/02331888.2013.800066.

[20] B. Meng, D. C. Wang, and Q. Y. Wu, Complete convergence and complete moment convergence for arrays of rowwise negatively superadditive dependent random variables, Comm. Statist. Theory Methods 47 (2018), no. 16, 3910–3922, doi:10.1080/03610926.2017.1364391.

[21] M. Amini, A. Bozorgnia, H. Naderi, and A. Volodin, On complete convergence of moving average processes for NSD sequences, Sib. Adv. Math. 25 (2015), no. 1, 11–20, doi:10.3103/S1055134415010022.

[22] Y. C. Yu, H. C. Hu, L. Liu, and S. Y. Huang, M-test in linear models with negatively super-additive dependent errors, J. Inequal. Appl. 2017 (2017), 235, doi:10.1186/s13660-017-1509-6.

[23] Z. Zeng and X. D. Liu, A difference-based approach in the partially linear model with dependent errors, J. Inequal. Appl. 2018 (2018), 267, doi:10.1186/s13660-018-1857-x.

[24] Y. C. Yu, X. S. Liu, L. Liu, and P. Zhao, Detection of multiple change points for linear processes under negatively super-additive dependence, J. Inequal. Appl. 2019 (2019), 216, doi:10.1186/s13660-019-2169-5.

[25] X. J. Wang, Y. Wu, and S. H. Hu, Strong and weak consistency of LS estimators in the EV regression model with negatively superadditive-dependent errors, AStA Adv. Stat. Anal. 102 (2018), 41–65, doi:10.1007/s10182-016-0286-8.

[26] A. Kheyri, M. Amini, H. Jabbari, and A. Bozorgnia, Kernel density estimation under negative superadditive dependence and its application for real data, J. Stat. Comput. Simul. 89 (2019), no. 12, 2373–2392, doi:10.1080/00949655.2019.1619738.

[27] H. C. Hu, Y. Zhang, and X. Pan, Asymptotic normality of DHD estimators in a partially linear model, Stat. Papers 57 (2016), 567–587, doi:10.1007/s00362-015-0666-2.

[28] Y. Zhang, X. S. Liu, and H. C. Hu, Weak consistency of M-estimator in linear regression model with asymptotically almost negatively associated errors, Comm. Statist. Theory Methods 49 (2020), no. 11, 2800–2816, doi:10.1080/03610926.2019.1584307.

[29] A. T. Shen, Y. Zhang, and A. Volodin, Applications of the Rosenthal-type inequality for negatively superadditive dependent random variables, Metrika 78 (2015), 295–311, doi:10.1007/s00184-014-0503-y.

Received: 2019-11-23
Revised: 2020-06-09
Accepted: 2020-06-22
Published Online: 2020-09-09

© 2020 Yu Zhang et al., published by De Gruyter

This work is licensed under the Creative Commons Attribution 4.0 International License.
