Asymptotic normality and mean consistency of LS estimators in the errors-in-variables model with dependent errors

Yu Zhang 1 , Xinsheng Liu 1 , Yuncai Yu 1 ,  and Hongchang Hu 2
  • 1 State Key Laboratory of Mechanics and Control of Mechanical Structures, Department of Mathematics, Nanjing University of Aeronautics and Astronautics, Nanjing, China
  • 2 College of Mathematics and Statistics, Hubei Normal University, Huangshi, China

Abstract

In this article, an errors-in-variables regression model in which the errors are negatively superadditive dependent (NSD) random variables is studied. First, the Marcinkiewicz-type strong law of large numbers for NSD random variables is established. This strong law of large numbers is then used to investigate the asymptotic normality of the least squares (LS) estimators of the unknown parameters. In addition, the mean consistency of the LS estimators is obtained. Some results for independent random variables and negatively associated random variables are extended and improved to the NSD setting. Finally, two simulations are presented to verify the asymptotic normality and mean consistency of the LS estimators in the model.

1 Introduction

To correct the effects of sampling errors, Deaton [1] proposed the errors-in-variables (EV) regression model, which is somewhat more practical than the ordinary regression model and has therefore attracted considerable attention.

In this article, we consider the following linear regression model:
$$y_i=\beta_0+\beta_1x_i+e_i,\quad U_i=x_i+\omega_i,\quad 1\le i\le n,\tag{1.1}$$
where $\beta_0,\beta_1,x_1,x_2,\ldots$ are unknown parameters or constants, $(e_1,\omega_1),(e_2,\omega_2),\ldots$ are random vectors, and $U_i,y_i$, $1\le i\le n$, are observable variables. From (1.1), we have
$$y_i=\beta_0+\beta_1U_i+e_i-\beta_1\omega_i,\quad 1\le i\le n.\tag{1.2}$$

Formally treating (1.2) as an ordinary regression model of $y_i$ on $U_i$ with errors $e_i-\beta_1\omega_i$, we obtain the least squares (LS) estimators of $\beta_1$ and $\beta_0$ as
$$\tilde\beta_{1n}=\frac{\sum_{i=1}^n(U_i-\bar U_n)(y_i-\bar y_n)}{\sum_{i=1}^n(U_i-\bar U_n)^2},\qquad\tilde\beta_{0n}=\bar y_n-\tilde\beta_{1n}\bar U_n,\tag{1.3}$$
where $\bar U_n=\frac1n\sum_{i=1}^nU_i$, $\bar y_n=\frac1n\sum_{i=1}^ny_i$, $\bar\omega_n=\frac1n\sum_{i=1}^n\omega_i$, and $\bar x_n=\frac1n\sum_{i=1}^nx_i$.
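As a concrete illustration of (1.3), the following minimal Python sketch (the function name `ls_estimators` and all numerical values are ours, not from the paper) computes $\tilde\beta_{1n}$ and $\tilde\beta_{0n}$ on data simulated with independent normal errors, a special case of NSD errors:

```python
import numpy as np

def ls_estimators(U, y):
    """LS estimators (1.3): regress y on the observed surrogate U."""
    U_bar, y_bar = U.mean(), y.mean()
    beta1 = np.sum((U - U_bar) * (y - y_bar)) / np.sum((U - U_bar) ** 2)
    beta0 = y_bar - beta1 * U_bar
    return beta0, beta1

# Illustrative data: independent normal errors (a special case of NSD).
rng = np.random.default_rng(0)
n = 10_000
x = np.arange(1, n + 1, dtype=float)   # unobserved regressors
e = rng.normal(0.0, 1.0, n)            # e_i
w = rng.normal(0.0, 1.0, n)            # omega_i (measurement error)
y = 5.0 + 3.0 * x + e                  # beta0 = 5, beta1 = 3
U = x + w                              # observed surrogate of x
b0, b1 = ls_estimators(U, y)           # b0, b1 approximate 5 and 3
```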

In the last few decades, many authors have investigated EV models. In the case of independent random errors, the consistency of LS estimators in the linear EV model was established by Liu and Chen [2] and Chen et al. [3]; Miao et al. [4] obtained the central limit theorem for the LS estimators in the simple linear EV regression model; Xu and Li [5] studied the consistency of LS estimators in the linear EV regression model with replicate observations; Miao et al. [6] established some limit behaviors of estimators in the simple linear EV regression model; Miao and Yang [7] obtained the loglog law for the LS estimators in the EV regression model; Miao et al. [8] investigated the consistency and asymptotic normality of LS estimators in the simple linear EV regression model; and so on. In the case of dependent random errors, Fazekas and Kukush [9] obtained the consistency of the regression parameter of the nonlinear functional EV models under mixing conditions; Fan et al. [10] established the asymptotic properties of the LS estimators of the unknown parameters in the simple linear EV regression model with stationary mixing errors; Miao et al. [11] derived the asymptotic normality and strong consistency of the estimators in the simple linear EV model with negatively associated (NA) errors; Miao et al. [12] obtained the weak consistency and strong consistency of LS estimators for the unknown parameters with martingale difference errors; Shen [13] studied some asymptotic properties of estimators in the EV model with martingale difference errors; and so forth. In this article, we consider model (1.1) under the assumption that the random errors are negatively superadditive dependent (NSD) random variables, a concept proposed by Hu [14] as follows.

Definition 1.1

[15] A function $\phi:\mathbb R^n\to\mathbb R$ is called superadditive if
$$\phi(x\vee y)+\phi(x\wedge y)\ge\phi(x)+\phi(y)$$
for all $x,y\in\mathbb R^n$, where $\vee$ denotes the component-wise maximum and $\wedge$ denotes the component-wise minimum.

Definition 1.2

[14] A random vector $X=(X_1,X_2,\ldots,X_n)$ is said to be NSD if
$$E\phi(X_1,X_2,\ldots,X_n)\le E\phi(X_1^*,X_2^*,\ldots,X_n^*),\tag{1.4}$$
where $X_1^*,X_2^*,\ldots,X_n^*$ are independent random variables such that $X_i^*$ and $X_i$ have the same distribution for each $i$, and $\phi$ is a superadditive function such that the expectations in (1.4) exist.

A sequence $\{X_n,n\ge1\}$ of random variables is said to be NSD if, for all $n\ge1$, $(X_1,X_2,\ldots,X_n)$ is NSD.

Since then, a series of useful results on NSD sequences of random variables has been established. Hu [14] and Christofides and Vaggelatou [16] reported that the family of NSD sequences contains NA (in particular, independent) sequences and some further sequences of random variables that do not deviate much from being NA. Hu [14] gave an example illustrating that NSD does not imply NA (see ref. [17]). Moreover, Hu [14] derived some basic properties and three structural theorems of NSD. Eghbal et al. [18] provided two maximal inequalities and a strong law of large numbers for quadratic forms of NSD random variables. Some Rosenthal-type inequalities for maximum partial sums of NSD sequences were established by Wang et al. [19]. The complete convergence and complete moment convergence for arrays of rowwise NSD random variables were obtained by Meng et al. [20]. Amini et al. [21] investigated the complete convergence of moving average processes for NSD random variables. Yu et al. [22] studied the M-test in linear models with NSD errors. Zeng and Liu [23] established the asymptotic normality of a difference-based estimator in a partially linear model with NSD errors. For more details about NSD random variables, one can refer to [24,25,26,27,28,29], among others.

However, studies on the asymptotic normality and mean consistency of LS estimators in the EV regression model with NSD random errors are not available in the literature. In this article, we mainly investigate the asymptotic properties of the estimators of the unknown parameters in the simple linear EV regression model (1.1). Many authors have obtained asymptotic properties of these estimators under independent random errors, an assumption that is often unrealistic in practice. Here we assume that the random errors are NSD, a class that includes independent and NA random variables as special cases, so studying model (1.1) with NSD random errors is of considerable significance. The main novelties of this article can be outlined as follows. First, the Marcinkiewicz-type strong law of large numbers for NSD random variables is established, which extends and improves the classical Marcinkiewicz-type strong law of large numbers for independent and identically distributed random variables to NSD random variables with non-identical distributions. Second, we use this strong law of large numbers to investigate the asymptotic normality of the LS estimators of the unknown parameters. In addition, the mean consistency of the LS estimators is obtained. These results include the corresponding ones for independent random errors and some dependent random errors as special cases. Finally, two simulation studies are carried out to verify the validity of the results that we have established.

Definition 1.3

[29] A sequence $\{X_n,n\ge1\}$ of random variables is said to be stochastically dominated by a random variable $X$ if there exists a positive constant $C$ such that
$$P(|X_n|>x)\le CP(|X|>x)$$
for all $x\ge0$ and $n\ge1$.

The remainder of this article is organized as follows. Under some mild conditions, the asymptotic normality and quadratic-mean consistency of LS estimators for the unknown parameters in model (1.1) with NSD random errors are established in Section 2. We give some preliminary lemmas in Section 3. We provide the proofs of the main results in Section 4. In Section 5, two simulations are carried out to study the numerical performance of the results that we have established.

Throughout this article, $C$ denotes a positive constant whose value may vary from place to place, $\xrightarrow{P}$ stands for convergence in probability, $\xrightarrow{d}$ stands for convergence in distribution, a.s. represents almost sure convergence, and $\triangleq$ means "defined as."

2 Main results

Model (1.1) to be studied can be described exactly as follows:
$$y_i=\beta_0+\beta_1x_i+e_i,\quad U_i=x_i+\omega_i,\quad 1\le i\le n;\qquad Ee_i=E\omega_i=0,\quad 1\le i\le n,\tag{2.1}$$
where $(U_i,y_i)$, $1\le i\le n$, are observable vectors, $x_i$, $1\le i\le n$, are constants, and $\beta_0$ and $\beta_1$ are unknown parameters. Denote $T_n=\sum_{i=1}^n(x_i-\bar x_n)^2$ for all $n\ge1$.

Based on the notation above and by simple calculation, we have
$$\tilde\beta_{1n}-\beta_1=\frac{\sum_{i=1}^n(\omega_i-\bar\omega_n)e_i+\sum_{i=1}^n(x_i-\bar x_n)(e_i-\beta_1\omega_i)-\beta_1\sum_{i=1}^n(\omega_i-\bar\omega_n)^2}{\sum_{i=1}^n(U_i-\bar U_n)^2}\tag{2.2}$$
and
$$\tilde\beta_{0n}-\beta_0=(\beta_1-\tilde\beta_{1n})\bar x_n+(\beta_1-\tilde\beta_{1n})\bar\omega_n+\bar e_n-\beta_1\bar\omega_n.\tag{2.3}$$
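The decomposition (2.2) is an algebraic identity, so it can be checked numerically on arbitrary data. The short sketch below (all inputs are arbitrary test values of ours, not from the paper) compares $\tilde\beta_{1n}-\beta_1$ with the right-hand side of (2.2):

```python
import numpy as np

# Numerical check of identity (2.2) on arbitrary data.
rng = np.random.default_rng(0)
n, beta0, beta1 = 50, 5.0, 3.0
x = rng.normal(0.0, 2.0, n)
e = rng.normal(0.0, 1.0, n)
w = rng.normal(0.0, 1.0, n)
y = beta0 + beta1 * x + e
U = x + w

Ub, yb, xb, wb = U.mean(), y.mean(), x.mean(), w.mean()
b1 = np.sum((U - Ub) * (y - yb)) / np.sum((U - Ub) ** 2)  # estimator (1.3)
rhs = (np.sum((w - wb) * e)
       + np.sum((x - xb) * (e - beta1 * w))
       - beta1 * np.sum((w - wb) ** 2)) / np.sum((U - Ub) ** 2)
assert abs((b1 - beta1) - rhs) < 1e-10  # identity (2.2) holds exactly
```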
To obtain our results, the following conditions are sufficient.

(C1) $0<Ee_i^2=h_1<\infty$ and $0<E\omega_i^2=h_2<\infty$;

(C2) $\lim_{n\to\infty}\dfrac n{T_n}=0$;

(C3) $r_n=O(n^{-p})$ for some $p>1/2$, where $r_n=\max_{1\le i\le n}\dfrac{|x_i-\bar x_n|}{\sqrt{T_n}}$;

(C4) $\liminf_{n\to\infty}\sigma_{1n}\ge c_1>0$, where $\sigma_{1n}^2=\operatorname{Var}\Big(\dfrac1{\sqrt{T_n}}\sum_{i=1}^n(x_i-\bar x_n)(e_i-\beta_1\omega_i)\Big)$;

(C5) $\liminf_{n\to\infty}\sigma_{0n}\ge c_2>0$, where $\sigma_{0n}^2=\operatorname{Var}\Big(\dfrac1{\sqrt n}\sum_{i=1}^n(e_i-\beta_1\omega_i)\Big)$;

(C6) $\dfrac{T_n}{n\bar x_n^2}\to\infty$ as $n\to\infty$;

(C7) $\lim_{n\to\infty}\dfrac n{T_n}=0$;

(C8) $|\omega_i|\le M$ for some $M>0$ and all $1\le i\le n$.

2.1 Asymptotic normality

In this subsection, we state the asymptotic normality of the LS estimators $\tilde\beta_{1n}$ and $\tilde\beta_{0n}$ of the unknown parameters $\beta_1$ and $\beta_0$.

Theorem 2.1

In model (2.1), let $\{e_i,i\ge1\}$ and $\{\omega_i,i\ge1\}$ be stationary NSD sequences of random variables that are independent of each other. Suppose that conditions (C1)–(C4) hold. Then,
$$\frac{\sqrt{T_n}}{\sigma_{1n}}(\tilde\beta_{1n}-\beta_1)\xrightarrow{d}N(0,1),\tag{2.4}$$
where $N(0,1)$ denotes the standard normal distribution.

Theorem 2.2

In model (2.1), let $\{e_i,i\ge1\}$ and $\{\omega_i,i\ge1\}$ be strictly stationary NSD sequences of random variables that are independent of each other. Suppose that conditions (C1)–(C3) and (C5)–(C6) hold. Then,
$$\frac{\sqrt n}{\sigma_{0n}}(\tilde\beta_{0n}-\beta_0)\xrightarrow{d}N(0,1).\tag{2.5}$$

Remark 2.1

Since the family of NSD sequences of random variables includes independent and NA sequences, the results of Theorems 2.1 and 2.2 also hold for independent random errors and NA random errors.

2.2 Mean consistency

In this subsection, we state the quadratic-mean consistency of the LS estimators $\tilde\beta_{1n}$ and $\tilde\beta_{0n}$ of the unknown parameters $\beta_1$ and $\beta_0$.

Theorem 2.3

In model (2.1), let $\{e_i,i\ge1\}$ and $\{\omega_i,i\ge1\}$ be stationary NSD sequences of random variables that are independent of each other. Suppose that conditions (C1), (C7), and (C8) hold. Then,
$$\lim_{n\to\infty}E|\tilde\beta_{1n}-\beta_1|^2=0.\tag{2.6}$$

Remark 2.2

As independent random variables are special NSD random variables, Theorem 2.3 generalizes and improves the corresponding result of Liu and Chen [2] for independent and identically distributed random errors to the NSD setting.

Theorem 2.4

Suppose that the conditions of Theorem 2.3 are satisfied. Then,

$$\lim_{n\to\infty}E|\tilde\beta_{0n}-\beta_0|^2=0.\tag{2.7}$$

Remark 2.3

As independent and NA random variables are special NSD random variables, the result of Theorem 2.4 also holds for NA random errors.

3 Preliminary lemmas

In this section, we present some important lemmas which will be used to prove the main results of the article.

Lemma 3.1

[14] Suppose that $(X_1,X_2,\ldots,X_n)$ is NSD.

  1. (i) $(-X_1,-X_2,\ldots,-X_n)$ is also NSD.
  2. (ii) If $g_1,g_2,\ldots,g_n$ are all non-decreasing (or all non-increasing) functions, then $(g_1(X_1),g_2(X_2),\ldots,g_n(X_n))$ is NSD.

Lemma 3.2

[19, Rosenthal-type inequality] Let $p>1$ and $\{X_n,n\ge1\}$ be a sequence of NSD random variables with $EX_n=0$ and $E|X_n|^p<\infty$. Then, there exists a positive constant $D_p$ depending only on $p$ such that for all $n\ge1$,
$$E\Big(\max_{1\le k\le n}\Big|\sum_{i=1}^kX_i\Big|^p\Big)\le D_p\sum_{i=1}^nE|X_i|^p$$
for $1<p\le2$, and
$$E\Big(\max_{1\le k\le n}\Big|\sum_{i=1}^kX_i\Big|^p\Big)\le D_p\Big[\sum_{i=1}^nE|X_i|^p+\Big(\sum_{i=1}^nEX_i^2\Big)^{p/2}\Big]$$
for $p>2$.

From Lemmas 3.1 and 3.2, we can easily derive the following corollary.

Corollary 3.1

(Khintchine–Kolmogorov-type convergence theorem) Let $\{X_n,n\ge1\}$ be an NSD sequence of random variables with $\sum_{i=1}^\infty\operatorname{Var}(X_i)<\infty$. Then
$$\sum_{i=1}^\infty(X_i-EX_i)$$
converges a.s.

Lemma 3.3

[14] Suppose that $X=(X_1,X_2,\ldots,X_n)$ and $Z=(Z_1,Z_2,\ldots,Z_n)$ are independent random vectors. If $X$ and $Z$ are both NSD, then $(X_1+Z_1,X_2+Z_2,\ldots,X_n+Z_n)$ is NSD.

Lemma 3.4

[22,23] Let $\{X_n,n\ge1\}$ be an NSD sequence of random variables with $EX_n=0$ and $\sup_{j\ge1}\sum_{i:|i-j|\ge n}|\operatorname{Cov}(X_i,X_j)|\to0$ as $n\to\infty$. Assume that $\{a_{ni},1\le i\le n\}$ is an array of real numbers with $\sum_{i=1}^na_{ni}^2=O(1)$ and $\max_{1\le i\le n}|a_{ni}|\to0$ as $n\to\infty$. If $\{X_n,n\ge1\}$ is uniformly integrable in $L^2$, then
$$\sigma_n^{-1}\sum_{i=1}^na_{ni}X_i\xrightarrow{d}N(0,1),$$
where $\sigma_n^2=\operatorname{Var}(\sum_{i=1}^na_{ni}X_i)$.

Remark 3.1

[11] If $\{X_n,n\ge1\}$ is a stationary sequence with $EX_n^2<\infty$, then $\{X_n,n\ge1\}$ is uniformly integrable in $L^2$. In addition, the assumption $EX_1^2+2\sum_{j=2}^\infty EX_1X_j>0$ implies $\sup_{j\ge1}\sum_{i:|i-j|\ge n}|\operatorname{Cov}(X_i,X_j)|\to0$ as $n\to\infty$.

Lemma 3.5

[29] Let $\{X_n,n\ge1\}$ be a sequence of random variables which is stochastically dominated by a random variable $X$. For any $a>0$ and $\beta>0$, the following two statements hold:
$$E|X_n|^\beta I(|X_n|\le a)\le C_1[E|X|^\beta I(|X|\le a)+a^\beta P(|X|>a)],$$
$$E|X_n|^\beta I(|X_n|>a)\le C_2E|X|^\beta I(|X|>a),$$
where $C_1$ and $C_2$ are positive constants. Consequently,
$$E|X_n|^\beta\le CE|X|^\beta,$$
where $C$ is a positive constant.

Lemma 3.6

(Marcinkiewicz-type strong law of large numbers) Let $\{X_n,n\ge1\}$ be an NSD sequence of random variables which is stochastically dominated by a random variable $X$ with $E|X|^p<\infty$ for some $0<p<2$. Assume that $EX_n=0$ if $1\le p<2$. Then,
$$n^{-1/p}\sum_{i=1}^nX_i\xrightarrow{a.s.}0.\tag{3.1}$$

Proof

Denote
$$X_n'=-n^{1/p}I(X_n<-n^{1/p})+X_nI(|X_n|\le n^{1/p})+n^{1/p}I(X_n>n^{1/p}),\quad n\ge1.$$
Then we obtain by $E|X|^p<\infty$ that
$$\sum_{n=1}^\infty P(X_n'\ne X_n)=\sum_{n=1}^\infty P(|X_n|>n^{1/p})\le C\sum_{n=1}^\infty P(|X|>n^{1/p})\le CE|X|^p<\infty.\tag{3.2}$$
Hence, it follows by the Borel–Cantelli lemma that, almost surely, $X_n=X_n'$ for all sufficiently large $n$. Thus, to prove (3.1), it suffices to show that
$$n^{-1/p}\sum_{i=1}^nX_i'\xrightarrow{a.s.}0.\tag{3.3}$$
To prove (3.3), it suffices to show that
$$n^{-1/p}\sum_{i=1}^nEX_i'\to0\tag{3.4}$$
and
$$n^{-1/p}\sum_{i=1}^n(X_i'-EX_i')\xrightarrow{a.s.}0.\tag{3.5}$$
First, we will prove (3.4). The argument is divided into the following two cases.
  1. (i) If $p=1$, it follows from $EX_n=0$ and $E|X|<\infty$ that
    $$|EX_nI(|X_n|\le n)|=|EX_nI(|X_n|>n)|\le CE|X|I(|X|>n)\to0$$
    as $n\to\infty$, and
    $$\lim_{n\to\infty}nP(|X_n|>n)\le\lim_{n\to\infty}E|X_n|I(|X_n|>n)\le C\lim_{n\to\infty}E|X|I(|X|>n)=0.$$
    Hence,
    $$|EX_n'|\le nP(|X_n|>n)+|EX_nI(|X_n|\le n)|\to0$$
    as $n\to\infty$. By the Toeplitz lemma, (3.4) holds.
  2. (ii) If $p\ne1$, by the Kronecker lemma, to prove (3.4), it suffices to show that
$$\sum_{n=1}^\infty\frac{|EX_n'|}{n^{1/p}}<\infty.\tag{3.6}$$
For $0<p<1$, it follows from $E|X|^p<\infty$ and Lemma 3.5 that
$$\begin{aligned}\sum_{n=1}^\infty\frac{|EX_n'|}{n^{1/p}}&\le C\sum_{n=1}^\infty\Big[P(|X|>n^{1/p})+\frac{E|X|I(|X|\le n^{1/p})}{n^{1/p}}\Big]\\&\le C+C\sum_{n=1}^\infty n^{-1/p}\sum_{j=1}^nE|X|I(j-1\le|X|^p<j)\\&=C+C\sum_{j=1}^\infty\sum_{n=j}^\infty n^{-1/p}E|X|I(j-1\le|X|^p<j)\\&\le C+C\sum_{j=1}^\infty j^{1-1/p}E|X|I(j-1\le|X|^p<j)\\&\le C+C\sum_{j=1}^\infty j^{1-1/p}\cdot j^{(1-p)/p}E|X|^pI(j-1\le|X|^p<j)\\&=C+CE|X|^p<\infty.\end{aligned}$$
For $1<p<2$, it follows from $EX_n=0$, $E|X|^p<\infty$ and Lemma 3.5 that
$$\begin{aligned}\sum_{n=1}^\infty\frac{|EX_n'|}{n^{1/p}}&\le C\sum_{n=1}^\infty\Big[P(|X|>n^{1/p})+\frac{|EX_nI(|X_n|\le n^{1/p})|}{n^{1/p}}\Big]\\&\le C+C\sum_{n=1}^\infty n^{-1/p}E|X|I(|X|>n^{1/p})\\&=C+C\sum_{n=1}^\infty n^{-1/p}\sum_{j=n}^\infty E|X|I(j\le|X|^p<j+1)\\&=C+C\sum_{j=1}^\infty\sum_{n=1}^jn^{-1/p}E|X|I(j\le|X|^p<j+1)\\&\le C+C\sum_{j=1}^\infty j^{1-1/p}E|X|I(j\le|X|^p<j+1)\\&\le C+C\sum_{j=1}^\infty j^{(p-1)/p}\cdot j^{(1-p)/p}E|X|^pI(j\le|X|^p<j+1)\\&=C+CE|X|^p<\infty.\end{aligned}$$
Hence, (3.6) holds.

Next, we will prove (3.5).

By Lemma 3.1, we know that $\{X_n'-EX_n',n\ge1\}$ is still an NSD sequence of random variables. Hence, by Corollary 3.1 and the Kronecker lemma, to prove (3.5), we only need to show that
$$\sum_{n=1}^\infty\operatorname{Var}(X_n'/n^{1/p})<\infty.\tag{3.7}$$
By the Markov inequality, Lemma 3.5 and $E|X|^p<\infty$, we have
$$\begin{aligned}\sum_{n=1}^\infty\operatorname{Var}\Big(\frac{X_n'}{n^{1/p}}\Big)&\le\sum_{n=1}^\infty\frac{E(X_n')^2}{n^{2/p}}\le C\sum_{n=1}^\infty\Big[P(|X|>n^{1/p})+\frac{EX^2I(|X|\le n^{1/p})}{n^{2/p}}\Big]\\&\le C+C\sum_{n=1}^\infty n^{-2/p}\sum_{k=1}^nEX^2I(k-1\le|X|^p<k)\\&=C+C\sum_{k=1}^\infty\sum_{n=k}^\infty n^{-2/p}EX^2I(k-1\le|X|^p<k)\\&\le C+C\sum_{k=1}^\infty k^{1-2/p}\cdot k^{(2-p)/p}E|X|^pI(k-1\le|X|^p<k)\\&=C+CE|X|^p<\infty.\end{aligned}$$
This completes the proof of Lemma 3.6.□

Remark 3.2

Lemma 3.6 is the Marcinkiewicz-type strong law of large numbers for NSD sequences of random variables, which extends and improves the classical Marcinkiewicz-type strong law of large numbers for independent and identically distributed random variables to NSD random variables with non-identical distributions. It also holds for NA random variables with non-identical distributions.
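Lemma 3.6 can be illustrated numerically in its simplest special case, an i.i.d. sequence (i.i.d. sequences are NSD). The sketch below (the distribution and the checkpoints are our choices, not from the paper) uses $p=3/2$ and a $t_3$-distributed $X$, for which $E|X|^{3/2}<\infty$, and tracks $n^{-1/p}|S_n|$:

```python
import numpy as np

rng = np.random.default_rng(1)
p = 1.5
# t distribution with 3 degrees of freedom: centered, with E|X|^{3/2} finite.
X = rng.standard_t(df=3, size=1_000_000)
S = np.cumsum(X)  # partial sums S_n
checkpoints = [10_000, 100_000, 1_000_000]
ratios = [abs(S[n - 1]) / n ** (1 / p) for n in checkpoints]
# The ratios |S_n| / n^{1/p} are typically small and drift toward 0 as n grows.
```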

4 Proofs of the main results

Proof of Theorem 2.1

In view of (2.2), we have
$$\frac{\sqrt{T_n}}{\sigma_{1n}}(\tilde\beta_{1n}-\beta_1)=\frac1{\sigma_{1n}\sqrt{T_n}}\cdot\frac{\sum_{i=1}^n(\omega_i-\bar\omega_n)e_i+\sum_{i=1}^n(x_i-\bar x_n)(e_i-\beta_1\omega_i)-\beta_1\sum_{i=1}^n(\omega_i-\bar\omega_n)^2}{\frac1{T_n}\sum_{i=1}^n(U_i-\bar U_n)^2}.$$
Hence, by Slutsky's theorem, to prove (2.4), it suffices to show that
$$\frac{\sum_{i=1}^n(\omega_i-\bar\omega_n)^2}{T_n}\xrightarrow{P}0,\tag{4.1}$$
$$\frac{\sum_{i=1}^n(\omega_i-\bar\omega_n)e_i}{T_n}\xrightarrow{P}0,\tag{4.2}$$
$$\frac{\sqrt{T_n}}{\sigma_{1n}}\cdot\frac{\sum_{i=1}^n(x_i-\bar x_n)(e_i-\beta_1\omega_i)}{T_n}\xrightarrow{d}N(0,1),\tag{4.3}$$
and
$$\frac{\sum_{i=1}^n(U_i-\bar U_n)^2}{T_n}\xrightarrow{P}1.\tag{4.4}$$

First, we prove (4.1). Note that
$$\frac{\sum_{i=1}^n(\omega_i-\bar\omega_n)^2}{T_n}\le\frac{\sum_{i=1}^n\omega_i^2}{T_n};$$
hence, (4.1) follows from (C1) and (C2).

Second, we prove (4.2). Similar to the proof of (4.1), we can get that
$$\frac{\sum_{i=1}^n(e_i-\bar e_n)^2}{T_n}\xrightarrow{P}0.\tag{4.5}$$
Since
$$\Big|\sum_{i=1}^n(\omega_i-\bar\omega_n)e_i\Big|\le\frac12\Big[\sum_{i=1}^n(\omega_i-\bar\omega_n)^2+\sum_{i=1}^n(e_i-\bar e_n)^2\Big],$$
(4.2) follows from (4.1) and (4.5).

Third, we prove (4.4). By the Cauchy–Schwarz inequality and the elementary inequality $\sqrt{ab}\le(a+b)/2$, we have, for any $m>0$,
$$\sum_{i=1}^n|x_i-\bar x_n||\omega_i-\bar\omega_n|\le\sqrt{m\sum_{i=1}^n(x_i-\bar x_n)^2\cdot\frac1m\sum_{i=1}^n(\omega_i-\bar\omega_n)^2}\le\frac12\Big[m\sum_{i=1}^n(x_i-\bar x_n)^2+\frac1m\sum_{i=1}^n(\omega_i-\bar\omega_n)^2\Big]=\frac m2T_n+\frac1{2m}\sum_{i=1}^n(\omega_i-\bar\omega_n)^2.\tag{4.6}$$
Note that
$$\sum_{i=1}^n(U_i-\bar U_n)^2=\sum_{i=1}^n(x_i-\bar x_n)^2+2\sum_{i=1}^n(x_i-\bar x_n)(\omega_i-\bar\omega_n)+\sum_{i=1}^n(\omega_i-\bar\omega_n)^2.$$
Thus, by (4.6),
$$\Big|\sum_{i=1}^n(U_i-\bar U_n)^2-T_n\Big|\le2\sum_{i=1}^n|x_i-\bar x_n||\omega_i-\bar\omega_n|+\sum_{i=1}^n(\omega_i-\bar\omega_n)^2\le mT_n+\frac{1+m}m\sum_{i=1}^n(\omega_i-\bar\omega_n)^2.$$
By (4.1), it follows that
$$\Big|\frac1{T_n}\sum_{i=1}^n(U_i-\bar U_n)^2-1\Big|\le m+\frac{1+m}m\cdot\frac1{T_n}\sum_{i=1}^n(\omega_i-\bar\omega_n)^2\xrightarrow{P}m.\tag{4.7}$$
Since $m>0$ is arbitrary, it follows from (4.7) that
$$\frac1{T_n}\sum_{i=1}^n(U_i-\bar U_n)^2-1\xrightarrow{P}0,$$
which implies (4.4).

Finally, we will prove (4.3). Denote $X_i=e_i-\beta_1\omega_i$ and $a_{ni}=\frac{x_i-\bar x_n}{\sigma_{1n}\sqrt{T_n}}$; then
$$\frac{\sqrt{T_n}}{\sigma_{1n}}\cdot\frac{\sum_{i=1}^n(x_i-\bar x_n)(e_i-\beta_1\omega_i)}{T_n}=\sum_{i=1}^na_{ni}X_i.$$
By Lemmas 3.1 and 3.3, we know that $\{X_i,i\ge1\}$ is still an NSD sequence of random variables. It is easy to check that $\sum_{i=1}^na_{ni}^2=O(1)$ and $\max_{1\le i\le n}|a_{ni}|\to0$ by (C3) and (C4). By the stationarity of $\{X_i,i\ge1\}$ and $EX_i^2=E(e_i-\beta_1\omega_i)^2\le CEe_i^2+CE\omega_i^2<\infty$, we know that $\{X_i,i\ge1\}$ is uniformly integrable in $L^2$ (see Remark 3.1). From (C4), it follows that
$$\operatorname{Var}\Big(\sum_{i=1}^na_{ni}X_i\Big)=\operatorname{Var}\Big(\sum_{i=1}^n\frac{x_i-\bar x_n}{\sigma_{1n}\sqrt{T_n}}(e_i-\beta_1\omega_i)\Big)=\frac{\operatorname{Var}(\sum_{i=1}^n(x_i-\bar x_n)(e_i-\beta_1\omega_i))}{\sigma_{1n}^2T_n}=1.$$
By (C3) and (C4), and since $EX_1X_j\le0$ for $j\ge2$, we have
$$\sigma_{1n}^2=\operatorname{Var}\Big(\frac1{\sqrt{T_n}}\sum_{i=1}^n(x_i-\bar x_n)(e_i-\beta_1\omega_i)\Big)\le\frac1nE\Big(\sum_{i=1}^nX_i\Big)^2=EX_1^2+2\Big[\frac{n-1}nEX_1X_2+\frac{n-2}nEX_1X_3+\cdots+\frac1nEX_1X_n\Big]\le EX_1^2+2\Big[\frac{n-1}nEX_1X_2+\cdots+\frac{n-m+1}nEX_1X_m\Big].$$
Hence, for any fixed $m$,
$$\limsup_{n\to\infty}\sigma_{1n}^2\le EX_1^2+2(EX_1X_2+\cdots+EX_1X_m),$$
and letting $m\to\infty$, by (C4) we obtain
$$0<EX_1^2+2\sum_{j=2}^\infty EX_1X_j.$$
Thus, by Remark 3.1, we have $\sup_{j\ge1}\sum_{i:|i-j|\ge n}|\operatorname{Cov}(X_i,X_j)|\to0$ as $n\to\infty$. Therefore, (4.3) follows from Lemma 3.4.

This completes the proof of Theorem 2.1.□

Proof of Theorem 2.2

In view of (2.3), we have
$$\frac{\sqrt n}{\sigma_{0n}}(\tilde\beta_{0n}-\beta_0)=\frac{\sqrt n}{\sigma_{0n}}[(\beta_1-\tilde\beta_{1n})\bar x_n+(\beta_1-\tilde\beta_{1n})\bar\omega_n+\bar e_n-\beta_1\bar\omega_n]=\frac{\sqrt n}{\sigma_{0n}}(\bar e_n-\beta_1\bar\omega_n)+\frac{\sqrt n}{\sigma_{0n}}(\bar x_n+\bar\omega_n)(\beta_1-\tilde\beta_{1n}).$$
Note that
$$\frac{\sqrt n}{\sigma_{0n}}(\bar e_n-\beta_1\bar\omega_n)=\frac{\sqrt n}{\sigma_{0n}}\cdot\frac1n\sum_{i=1}^n(e_i-\beta_1\omega_i)=\frac1{\sigma_{0n}\sqrt n}\sum_{i=1}^n(e_i-\beta_1\omega_i).$$
Denote $Y_i=e_i-\beta_1\omega_i$ and $b_{ni}=\frac1{\sigma_{0n}\sqrt n}$; then $\frac1{\sigma_{0n}\sqrt n}\sum_{i=1}^n(e_i-\beta_1\omega_i)=\sum_{i=1}^nb_{ni}Y_i$. By the stationarity of $\{Y_i,i\ge1\}$ and $EY_i^2=E(e_i-\beta_1\omega_i)^2\le CEe_i^2+CE\omega_i^2<\infty$, we know that $\{Y_i,i\ge1\}$ is uniformly integrable in $L^2$ (see Remark 3.1). From (C5), it follows that
$$\operatorname{Var}\Big(\sum_{i=1}^nb_{ni}Y_i\Big)=\operatorname{Var}\Big(\sum_{i=1}^n\frac1{\sigma_{0n}\sqrt n}(e_i-\beta_1\omega_i)\Big)=\frac{\operatorname{Var}(\sum_{i=1}^n(e_i-\beta_1\omega_i))}{n\sigma_{0n}^2}=1.$$
By (C5), and since $EY_1Y_j\le0$ for $j\ge2$, we have
$$\sigma_{0n}^2=\operatorname{Var}\Big(\sum_{i=1}^n\frac{e_i-\beta_1\omega_i}{\sqrt n}\Big)=\frac1n\operatorname{Var}\Big(\sum_{i=1}^nY_i\Big)=\frac1nE\Big(\sum_{i=1}^nY_i\Big)^2=EY_1^2+2\Big[\frac{n-1}nEY_1Y_2+\frac{n-2}nEY_1Y_3+\cdots+\frac1nEY_1Y_n\Big]\le EY_1^2+2\Big[\frac{n-1}nEY_1Y_2+\cdots+\frac{n-m+1}nEY_1Y_m\Big].$$
Hence, for any fixed $m$,
$$\limsup_{n\to\infty}\sigma_{0n}^2\le EY_1^2+2(EY_1Y_2+\cdots+EY_1Y_m),$$
and letting $m\to\infty$, by (C5) we obtain
$$0<EY_1^2+2\sum_{j=2}^\infty EY_1Y_j.$$
Thus, by Remark 3.1, we have
$$\sup_{j\ge1}\sum_{i:|i-j|\ge n}|\operatorname{Cov}(Y_i,Y_j)|\to0$$
as $n\to\infty$. Hence, by Lemma 3.4, one can get that
$$\frac1{\sigma_{0n}\sqrt n}\sum_{i=1}^n(e_i-\beta_1\omega_i)\xrightarrow{d}N(0,1).$$
Thus, by Theorem 2.1 and Slutsky's theorem, to prove (2.5), it suffices to show that
$$\sqrt{\frac n{T_n}}(\bar x_n+\bar\omega_n)=\bar x_n\sqrt{\frac n{T_n}}+\sqrt{\frac n{T_n}}\cdot\frac1n\sum_{i=1}^n\omega_i\xrightarrow{P}0.\tag{4.8}$$
By (C6) and Lemma 3.6, we derive that
$$\bar x_n\sqrt{\frac n{T_n}}\to0\tag{4.9}$$
as $n\to\infty$ and
$$\frac1n\sum_{i=1}^n\omega_i\xrightarrow{a.s.}0.\tag{4.10}$$
Therefore, (4.8) follows from (4.9), (4.10), and (C2).

This completes the proof of Theorem 2.2.□

Proof of Theorem 2.3

By simple calculation, we have
$$\tilde\beta_{1n}-\beta_1=\frac{\sum_{i=1}^n(U_i-\bar U_n)e_i-\beta_1\sum_{i=1}^n(x_i-\bar x_n)\omega_i-\beta_1\sum_{i=1}^n(\omega_i-\bar\omega_n)^2}{\sum_{i=1}^n(U_i-\bar U_n)^2}\triangleq\Big[\sum_{i=1}^n(U_i-\bar U_n)^2\Big]^{-1}(\Delta_1+\Delta_2+\Delta_3).$$
Since
$$|\tilde\beta_{1n}-\beta_1|^2\le C\Big[\sum_{i=1}^n(U_i-\bar U_n)^2\Big]^{-2}(\Delta_1^2+\Delta_2^2+\Delta_3^2),$$
we have
$$E|\tilde\beta_{1n}-\beta_1|^2\le CE\Big\{\Big[\sum_{i=1}^n(U_i-\bar U_n)^2\Big]^{-2}(\Delta_1^2+\Delta_2^2+\Delta_3^2)\Big\}.$$
Hence, to prove (2.6), it suffices to show that
$$\lim_{n\to\infty}E\Big\{\Big[\sum_{i=1}^n(U_i-\bar U_n)^2\Big]^{-2}\Delta_1^2\Big\}=0,\tag{4.11}$$
$$\lim_{n\to\infty}E\Big\{\Big[\sum_{i=1}^n(U_i-\bar U_n)^2\Big]^{-2}\Delta_2^2\Big\}=0,\tag{4.12}$$
and
$$\lim_{n\to\infty}E\Big\{\Big[\sum_{i=1}^n(U_i-\bar U_n)^2\Big]^{-2}\Delta_3^2\Big\}=0.\tag{4.13}$$
By (C8), we derive that $\sum_{i=1}^n(\omega_i-\bar\omega_n)^2\le\sum_{i=1}^n\omega_i^2\le nM^2\triangleq nC$. Hence, by the Cauchy–Schwarz inequality, we have
$$\Big|\sum_{i=1}^n(x_i-\bar x_n)\omega_i\Big|^2\le\sum_{i=1}^n(x_i-\bar x_n)^2\sum_{i=1}^n\omega_i^2\le CnT_n.$$
Thus,
$$\sum_{i=1}^n(U_i-\bar U_n)^2=T_n+2\sum_{i=1}^n(x_i-\bar x_n)\omega_i+\sum_{i=1}^n(\omega_i-\bar\omega_n)^2\ge T_n-C\sqrt{nT_n}.$$
From (C7), it follows that
$$\frac{C\sqrt{nT_n}}{T_n}=C\sqrt{\frac n{T_n}}\to0$$
as $n\to\infty$. Thus,
$$\sum_{i=1}^n(U_i-\bar U_n)^2\ge T_n-C\sqrt{nT_n}\ge CT_n$$
for all $n$ large enough. Hence, by the Cauchy–Schwarz inequality and (C1), we obtain that
$$E\Big\{\Big[\sum_{i=1}^n(U_i-\bar U_n)^2\Big]^{-2}\Delta_1^2\Big\}\le E\Big\{\Big[\sum_{i=1}^n(U_i-\bar U_n)^2\Big]^{-2}\sum_{j=1}^n(U_j-\bar U_n)^2\sum_{k=1}^ne_k^2\Big\}=E\Big\{\Big[\sum_{i=1}^n(U_i-\bar U_n)^2\Big]^{-1}\sum_{k=1}^ne_k^2\Big\}\le CnT_n^{-1}.$$
Therefore, (4.11) follows from (C7). Similar to the proof of (4.11), one can get (4.12) and (4.13).

This completes the proof of Theorem 2.3.□

Proof of Theorem 2.4

In view of (2.3), we have
$$\tilde\beta_{0n}-\beta_0=(\beta_1-\tilde\beta_{1n})\bar x_n+(\beta_1-\tilde\beta_{1n})\bar\omega_n+\frac1n\sum_{i=1}^n(e_i-\beta_1\omega_i).$$
Hence,
$$|\tilde\beta_{0n}-\beta_0|^2\le C\Big[(\beta_1-\tilde\beta_{1n})^2\bar x_n^2+(\beta_1-\tilde\beta_{1n})^2\bar\omega_n^2+\frac1{n^2}\Big(\sum_{i=1}^n(e_i-\beta_1\omega_i)\Big)^2\Big].$$
Then,
$$E|\tilde\beta_{0n}-\beta_0|^2\le CE\Big[(\beta_1-\tilde\beta_{1n})^2\bar x_n^2+(\beta_1-\tilde\beta_{1n})^2\bar\omega_n^2+\frac1{n^2}\Big(\sum_{i=1}^n(e_i-\beta_1\omega_i)\Big)^2\Big].$$
By Theorem 2.3 and (C8), we can derive that
$$E[(\beta_1-\tilde\beta_{1n})^2\bar x_n^2]=\bar x_n^2E(\beta_1-\tilde\beta_{1n})^2\to0$$
as $n\to\infty$ and
$$E[(\beta_1-\tilde\beta_{1n})^2\bar\omega_n^2]\le CE(\beta_1-\tilde\beta_{1n})^2\to0$$
as $n\to\infty$. By Lemmas 3.1 and 3.3, we know that $\{e_i-\beta_1\omega_i,i\ge1\}$ is still an NSD sequence of random variables. Thus, from (C1) and Lemma 3.2, we can get that
$$E\Big[\frac1{n^2}\Big(\sum_{i=1}^n(e_i-\beta_1\omega_i)\Big)^2\Big]\le\frac1{n^2}E\Big(\max_{1\le k\le n}\Big|\sum_{i=1}^k(e_i-\beta_1\omega_i)\Big|^2\Big)\le\frac C{n^2}\sum_{i=1}^nE(e_i-\beta_1\omega_i)^2\le\frac C{n^2}\sum_{i=1}^n(Ee_i^2+\beta_1^2E\omega_i^2)\le\frac Cn\to0$$
as $n\to\infty$.□

5 Numerical simulations

In this section, we verify the asymptotic normality and mean consistency of LS estimators in the EV regression model with NSD errors.

The data are generated from model (2.1). For $0<u<0.8$ and $n\ge4$, let the normal random vectors satisfy $(e_1,e_2,\ldots,e_n)\sim N(\mathbf0,\Sigma)$ and $(\omega_1,\omega_2,\ldots,\omega_n)\sim N(\mathbf0,\Sigma)$, where $\mathbf0$ represents the zero vector and $\Sigma$ is the $n\times n$ block-diagonal matrix
$$\Sigma=\operatorname{diag}(B,B,\ldots,B),\qquad B=\begin{pmatrix}\frac45+u&-u&-\frac u2&0\\-u&\frac45+u&-u&-\frac u2\\-\frac u2&-u&\frac45+u&-u\\0&-\frac u2&-u&\frac45+u\end{pmatrix}.$$
Since all off-diagonal entries of $\Sigma$ are non-positive, by the definition of NA random variables (see [17]), we know that $(e_1,e_2,\ldots,e_n)$ and $(\omega_1,\omega_2,\ldots,\omega_n)$ are NA vectors, and thus NSD vectors.
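Under this reading, $\Sigma$ is block diagonal with $4\times4$ blocks whose off-diagonal entries are non-positive, so a Gaussian vector with covariance $\Sigma$ is NA and hence NSD. A Python sketch of this construction (the function name and the choice $u=0.3$ are ours; $u=0.3$ keeps each block strictly diagonally dominant and hence positive definite):

```python
import numpy as np

def sigma_matrix(n, u):
    """Block-diagonal covariance with 4x4 blocks, non-positive off-diagonals.
    n is taken to be a multiple of 4 in this sketch."""
    B = np.array([
        [4/5 + u, -u,      -u/2,    0.0    ],
        [-u,      4/5 + u, -u,      -u/2   ],
        [-u/2,    -u,      4/5 + u, -u     ],
        [0.0,     -u/2,    -u,      4/5 + u],
    ])
    S = np.zeros((n, n))
    for j in range(n // 4):
        S[4*j:4*(j+1), 4*j:4*(j+1)] = B
    return S

rng = np.random.default_rng(2)
n, u = 100, 0.3
S = sigma_matrix(n, u)
L = np.linalg.cholesky(S)         # would fail if Sigma were not positive definite
e = L @ rng.standard_normal(n)    # one Gaussian NA (hence NSD) error vector
```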

5.1 Simulation example 1

In order to show the asymptotic normality of the LS estimators in the model, we choose $u=0.6$ and $x_i=i$ for $1\le i\le n$. For fixed $\beta_1=3$ and $\beta_0=5$, we take the sample sizes $n=100$, $500$, and $1{,}000$. We use the R software to compute $\sigma_{1n}^{-1}\sqrt{T_n}(\tilde\beta_{1n}-\beta_1)$ and $\sigma_{0n}^{-1}\sqrt n(\tilde\beta_{0n}-\beta_0)$ $1{,}000$ times each, and present their histograms and Quantile–Quantile (Q–Q) plots in Figures 1–6.

Figure 1: Histogram and normal Q–Q plot of $\sigma_{1n}^{-1}\sqrt{T_n}(\tilde\beta_{1n}-\beta_1)$ with $n=100$.

Citation: Open Mathematics 18, 1; 10.1515/math-2020-0052

Figure 2: Histogram and normal Q–Q plot of $\sigma_{0n}^{-1}\sqrt n(\tilde\beta_{0n}-\beta_0)$ with $n=100$.

Figure 3: Histogram and normal Q–Q plot of $\sigma_{1n}^{-1}\sqrt{T_n}(\tilde\beta_{1n}-\beta_1)$ with $n=500$.

Figure 4: Histogram and normal Q–Q plot of $\sigma_{0n}^{-1}\sqrt n(\tilde\beta_{0n}-\beta_0)$ with $n=500$.

Figure 5: Histogram and normal Q–Q plot of $\sigma_{1n}^{-1}\sqrt{T_n}(\tilde\beta_{1n}-\beta_1)$ with $n=1{,}000$.

Figure 6: Histogram and normal Q–Q plot of $\sigma_{0n}^{-1}\sqrt n(\tilde\beta_{0n}-\beta_0)$ with $n=1{,}000$.

It can be seen from Figures 1–6 that the histograms and Q–Q plots of $\sigma_{1n}^{-1}\sqrt{T_n}(\tilde\beta_{1n}-\beta_1)$ and $\sigma_{0n}^{-1}\sqrt n(\tilde\beta_{0n}-\beta_0)$ fit the standard normal distribution increasingly well as the sample size $n$ increases. The simulation results verify the validity of our theoretical conclusions in Theorems 2.1 and 2.2.

5.2 Simulation example 2

In order to verify the mean consistency of the LS estimators in the model, we choose $u=0.4$ and $x_i=i$ for all $1\le i\le n$. For the fixed pairs $\beta_1=2$, $\beta_0=4$ and $\beta_1=5$, $\beta_0=3$, we take the sample sizes $n=100$, $200$, $300$, $500$, $800$, and $1{,}000$. We use the R software to compute $\tilde\beta_{1n}-\beta_1$ and $\tilde\beta_{0n}-\beta_0$ $600$ times each, and provide their boxplots in Figures 7–10.

Figure 7: Boxplots of $\tilde\beta_{1n}-\beta_1$ with $\beta_1=2$ and $\beta_0=4$.

Figure 8: Boxplots of $\tilde\beta_{0n}-\beta_0$ with $\beta_1=2$ and $\beta_0=4$.

Figure 9: Boxplots of $\tilde\beta_{1n}-\beta_1$ with $\beta_1=5$ and $\beta_0=3$.

Figure 10: Boxplots of $\tilde\beta_{0n}-\beta_0$ with $\beta_1=5$ and $\beta_0=3$.

It can be seen from Figures 7–10 that $\tilde\beta_{1n}-\beta_1$ and $\tilde\beta_{0n}-\beta_0$ concentrate ever more tightly around zero, and that their ranges decrease, as the sample size $n$ increases. The simulation results directly reflect the conclusions in Theorems 2.3 and 2.4.
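The shrinking spread seen in Figures 7–10 can be reproduced with a small Monte Carlo experiment. The sketch below (the function name and replication count are ours, and it uses independent normal errors, a special case of NSD, rather than the NA construction above) estimates $E|\tilde\beta_{1n}-\beta_1|^2$ for growing $n$:

```python
import numpy as np

rng = np.random.default_rng(3)
beta0, beta1 = 4.0, 2.0

def mse_beta1(n, reps=200):
    """Monte Carlo estimate of E|beta1_hat - beta1|^2 with x_i = i."""
    x = np.arange(1, n + 1, dtype=float)
    errs = []
    for _ in range(reps):
        e = rng.normal(0.0, 1.0, n)
        w = rng.normal(0.0, 1.0, n)
        y = beta0 + beta1 * x + e
        U = x + w
        Ub = U.mean()
        b1 = np.sum((U - Ub) * (y - y.mean())) / np.sum((U - Ub) ** 2)
        errs.append((b1 - beta1) ** 2)
    return float(np.mean(errs))

mses = [mse_beta1(n) for n in (50, 200, 800)]  # decreases toward 0 as n grows
```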

Acknowledgements

This work was supported by the National Natural Science Foundation of China (No. 61374183) and the Project of Guangxi Education Department (No. 2017KY0720).

References

[1] A. Deaton, Panel data from a time series of cross-sections, J. Econ. 30 (1985), 109–126.

[2] J. X. Liu and X. R. Chen, Consistency of LS estimator in simple linear EV regression models, Acta Math. Sci. B 25 (2005), 50–58.

[3] P. Y. Chen, L. L. Wen, and S. H. Sung, Strong and weak consistency of least squares estimators in simple linear EV regression models, J. Statist. Plann. Inference 205 (2020), 64–73.

[4] Y. Miao, G. Y. Yang, and L. M. Shen, The central limit theorem for LS estimator in simple linear EV regression models, Comm. Statist. Theory Methods 36 (2007), 2263–2272.

[5] S. F. Xu and N. Li, Consistency for the LS estimator in the linear EV regression model with replicate observations, J. Korean Statist. Soc. 42 (2013), no. 4, 451–458.

[6] Y. Miao, K. Wang, and F. F. Zhao, Some limit behaviors for the LS estimator in simple linear EV regression models, Statist. Probab. Lett. 81 (2011), 92–102.

[7] Y. Miao and G. Y. Yang, The loglog law for LS estimator in simple linear EV regression models, Statistics 45 (2011), 155–162.

[8] Y. Miao, K. Wang, and F. Zhao, Some limit behaviors for the LS estimator in simple linear EV regression models, Statist. Probab. Lett. 81 (2011), 92–102.

[9] I. Fazekas and A. G. Kukush, Asymptotic properties of an estimator in nonlinear functional errors-in-variables models with dependent error terms, Comput. Math. Appl. 34 (1997), no. 10, 23–39.

[10] G. L. Fan, H. Y. Liang, J. F. Wang, and H. X. Xu, Asymptotic properties for LS estimators in EV regression model with dependent errors, AStA Adv. Stat. Anal. 94 (2010), 89–103.

[11] Y. Miao, F. F. Zhao, K. Wang, and Y. P. Chen, Asymptotic normality and strong consistency of LS estimators in the EV regression model with NA errors, Statist. Papers 54 (2013), no. 1, 193–206.

[12] Y. Miao, Y. L. Wang, and H. J. Zheng, Consistency of LS estimators in the EV regression model with martingale difference errors, Statistics 49 (2015), no. 1, 104–118.

[13] A. T. Shen, Asymptotic properties of LS estimators in the errors-in-variables model with MD errors, Statist. Papers 60 (2019), no. 4, 1193–1206.

[14] T. Z. Hu, Negatively superadditive dependence of random variables with applications, Chin. J. Appl. Probab. Stat. 16 (2000), 133–144.

[15] J. H. B. Kemperman, On the FKG-inequality for measures on a partially ordered space, Nederl. Akad. Wetensch. Proc. Ser. A 80 (1977), 313–331.

[16] T. C. Christofides and E. A. Vaggelatou, Connection between supermodular ordering and positive/negative association, J. Multivariate Anal. 88 (2004), 138–151.

[17] K. Joag-Dev and F. Proschan, Negative association of random variables with applications, Ann. Stat. 11 (1983), 286–295.

[18] N. Eghbal, M. Amini, and A. Bozorgnia, Some maximal inequalities for quadratic forms of negative superadditive dependence random variables, Statist. Probab. Lett. 80 (2010), no. 7–8, 587–591.

[19] X. J. Wang, X. Deng, L. L. Zheng, and S. H. Hu, Complete convergence for arrays of rowwise negatively superadditive-dependent random variables and its applications, Statistics 48 (2014), no. 4, 834–850.

[20] B. Meng, D. C. Wang, and Q. Y. Wu, Complete convergence and complete moment convergence for arrays of rowwise negatively superadditive dependent random variables, Comm. Statist. Theory Methods 47 (2018), no. 16, 3910–3922.

[21] M. Amini, A. Bozorgnia, H. Naderi, and A. Volodin, On complete convergence of moving average processes for NSD sequences, Sib. Adv. Math. 25 (2015), no. 1, 11–20.

[22] Y. C. Yu, H. C. Hu, L. Liu, and S. Y. Huang, M-test in linear models with negatively super-additive dependent errors, J. Inequal. Appl. 2017 (2017), 235.

[23] Z. Zeng and X. D. Liu, A difference-based approach in the partially linear model with dependent errors, J. Inequal. Appl. 2018 (2018), 267.

[24] Y. C. Yu, X. S. Liu, L. Liu, and P. Zhao, Detection of multiple change points for linear processes under negatively super-additive dependence, J. Inequal. Appl. 2019 (2019), 216.

[25] X. J. Wang, Y. Wu, and S. H. Hu, Strong and weak consistency of LS estimators in the EV regression model with negatively superadditive-dependent errors, AStA Adv. Stat. Anal. 102 (2018), 41–65.

[26] A. Kheyri, M. Amini, H. Jabbari, and A. Bozorgnia, Kernel density estimation under negative superadditive dependence and its application for real data, J. Stat. Comput. Simul. 89 (2019), no. 12, 2373–2392.

[27] H. C. Hu, Y. Zhang, and X. Pan, Asymptotic normality of DHD estimators in a partially linear model, Statist. Papers 57 (2016), 567–587.

[28] Y. Zhang, X. S. Liu, and H. C. Hu, Weak consistency of M-estimator in linear regression model with asymptotically almost negatively associated errors, Comm. Statist. Theory Methods 49 (2020), no. 11, 2800–2816.

[29] A. T. Shen, Y. Zhang, and A. Volodin, Applications of the Rosenthal-type inequality for negatively superadditive dependent random variables, Metrika 78 (2015), 295–311.
If the inline PDF is not rendering correctly, you can download the PDF file here.

  • [1]

    A. Deaton, Panel data from a time series of cross-sections, J. Econ. 30 (1985), 109–126.

    • Crossref
    • Export Citation
  • [2]

    J. X. Liu and X. R. Chen, Consistency of LS estimator in simple linear EV regression models, Acta Math. Sci. B 25 (2005), 50–58.

    • Crossref
    • Export Citation
  • [3]

    P. Y. Chen, L. L. Wen, and S. H. Sung, Strong and weak consistency of least squares estimators in simple linear EV regression models, J. Statist. Plann. Inference 205 (2020), 64–73, .

    • Crossref
    • Export Citation
  • [4]

    Y. Miao, G. Y. Yang, and L. M. Shen, The central limit theorem for LS estimator in simple linear EV regression models, Commun. Stat. Theory Methods 36 (2007), 2263–2272.

    • Crossref
    • Export Citation
  • [5]

    S. F. Xu and N. Li, Consistency for the LS estimator in the linear EV regression model with replicate observations, J. Korean Statist. Soc. 42 (2013), no. 4, 451–458.

    • Crossref
    • Export Citation
  • [6]

    Y. Miao, K. Wang, and F. F. Zhao, Some limit behaviors for the LS estimator in simple linear EV regression models, Stat. Probab. Lett. 81 (2007), 92–102.

  • [7]

    Y. Miao and G. Y. Yang, The loglog law for LS estimator in simple linear EV regression models, Statistics 45 (2011), 155–162.

    • Crossref
    • Export Citation
  • [8]

    Y. Miao, K. Wang, and F. Zhao, Some limit behaviors for the LS estimator in simple linear EV regression models, Statist. Probab. Lett. 81 (2011), 92–102.

    • Crossref
    • Export Citation
  • [9]

    I. Fazekas and A. G. Kukush, Asymptotic properties of an estimator in nonlinear functional errors-invariables models with dependent error terms, Comput. Math. Appl. 34 (1997), no. 10, 23–39.

    • Crossref
    • Export Citation
  • [10]

    G. L. Fan, H. Y. Liang, J. F. Wang, and H. X. Xu, Asymptotic properties for LS estimators in EV regression model with dependent errors, AStA Adv. Stat. Anal. 94 (2010), 89–103.

  • [11]

    Y. Miao, F. F. Zhao, K. Wang, and Y. P. Chen, Asymptotic normality and strong consistency of LS estimators in the EV regression model with NA errors, Stat. Papers 54 (2013), no. 1, 193–206.

  • [12]

    Y. Miao, Y. L. Wang, and H. J. Zheng, Consistency of LS estimators in the EV regression model with martingale difference errors, Statistics 49 (2015), no. 1, 104–118.

  • [13]

A. T. Shen, Asymptotic properties of LS estimators in the errors-in-variables model with MD errors, Stat. Papers 60 (2019), no. 4, 1193–1206.

  • [14]

    T. Z. Hu, Negatively superadditive dependence of random variables with applications, Chin. J. Appl. Probab. Stat. 16 (2000), 133–144.

  • [15]

    J. H. B. Kemperman, On the FKG-inequality for measures on a partially ordered space, Nederl. Akad. Wetensch. Proc. Ser. A 80 (1977), 313–331.

  • [16]

    T. C. Christofides and E. A. Vaggelatou, Connection between supermodular ordering and positive/negative association, J. Multivariate Anal. 88 (2004), 138–151.

  • [17]

K. Joag-Dev and F. Proschan, Negative association of random variables with applications, Ann. Statist. 11 (1983), 286–295.

  • [18]

    N. Eghbal, M. Amini, and A. Bozorgnia, Some maximal inequalities for quadratic forms of negative superadditive dependence random variables, Statist. Probab. Lett. 80 (2010), no. 7–8, 587–591.

  • [19]

    X. J. Wang, X. Deng, L. L. Zheng, and S. H. Hu, Complete convergence for arrays of rowwise negatively superadditive-dependent random variables and its applications, Statistics 48 (2014), no. 4, 834–850.

  • [20]

    B. Meng, D. C. Wang, and Q. Y. Wu, Complete convergence and complete moment convergence for arrays of rowwise negatively superadditive dependent random variables, Comm. Statist. Theory Methods 47 (2018), no. 16, 3910–3922.

  • [21]

    M. Amini, A. Bozorgnia, H. Naderi, and A. Volodin, On complete convergence of moving average processes for NSD sequences, Sib. Adv. Math. 25 (2015), no. 1, 11–20.

  • [22]

Y. C. Yu, H. C. Hu, L. Liu, and S. Y. Huang, M-test in linear models with negatively super-additive dependent errors, J. Inequal. Appl. 2017 (2017), 235.

  • [23]

Z. Zeng and X. D. Liu, A difference-based approach in the partially linear model with dependent errors, J. Inequal. Appl. 2018 (2018), 267.

  • [24]

Y. C. Yu, X. S. Liu, L. Liu, and P. Zhao, Detection of multiple change points for linear processes under negatively super-additive dependence, J. Inequal. Appl. 2019 (2019), 216.

  • [25]

    X. J. Wang, Y. Wu, and S. H. Hu, Strong and weak consistency of LS estimators in the EV regression model with negatively superadditive-dependent errors, AStA Adv. Stat. Anal. 102 (2018), 41–65.

  • [26]

    A. Kheyri, M. Amini, H. Jabbari, and A. Bozorgnia, Kernel density estimation under negative superadditive dependence and its application for real data, J. Stat. Comput. Simul. 89 (2019), no. 12, 2373–2392.

  • [27]

    H. C. Hu, Y. Zhang, and X. Pan, Asymptotic normality of DHD estimators in a partially linear model, Stat. Papers 57 (2016), 567–587.

  • [28]

    Y. Zhang, X. S. Liu, and H. C. Hu, Weak consistency of M-estimator in linear regression model with asymptotically almost negatively associated errors, Comm. Statist. Theory Methods 49 (2020), no. 11, 2800–2816.

  • [29]

    A. T. Shen, Y. Zhang, and A. Volodin, Applications of the Rosenthal-type inequality for negatively superadditive dependent random variables, Metrika 78 (2015), 295–311.

OPEN ACCESS

Open Mathematics is a fully peer-reviewed, open access, electronic journal that publishes significant and original works in all areas of mathematics. The journal publishes both research papers and comprehensive and timely survey articles. Open Math aims at presenting high-impact and relevant research on topics across the full span of mathematics.

List of figures:

  • Histogram and normal Q–Q plot of $\sigma_{1n}^{-1}T_n(\tilde{\beta}_{1n}-\beta_1)$ with $n=100$.

  • Histogram and normal Q–Q plot of $\sigma_{0n}^{-1}\sqrt{n}(\tilde{\beta}_{0n}-\beta_0)$ with $n=100$.

  • Histogram and normal Q–Q plot of $\sigma_{1n}^{-1}T_n(\tilde{\beta}_{1n}-\beta_1)$ with $n=500$.

  • Histogram and normal Q–Q plot of $\sigma_{0n}^{-1}\sqrt{n}(\tilde{\beta}_{0n}-\beta_0)$ with $n=500$.

  • Histogram and normal Q–Q plot of $\sigma_{1n}^{-1}T_n(\tilde{\beta}_{1n}-\beta_1)$ with $n=1{,}000$.

  • Histogram and normal Q–Q plot of $\sigma_{0n}^{-1}\sqrt{n}(\tilde{\beta}_{0n}-\beta_0)$ with $n=1{,}000$.

  • Boxplots of $\tilde{\beta}_{1n}-\beta_1$ with $\beta_1=2$ and $\beta_0=4$.

  • Boxplots of $\tilde{\beta}_{0n}-\beta_0$ with $\beta_1=2$ and $\beta_0=4$.

  • Boxplots of $\tilde{\beta}_{1n}-\beta_1$ with $\beta_1=5$ and $\beta_0=3$.

  • Boxplots of $\tilde{\beta}_{0n}-\beta_0$ with $\beta_1=5$ and $\beta_0=3$.
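Simulations of this kind can be reproduced in outline. The sketch below assumes the standard simple linear EV model $\eta_i=\beta_0+\beta_1 x_i+\varepsilon_i$, $\xi_i=x_i+\delta_i$, where only $(\xi_i,\eta_i)$ are observed, and computes the naive LS estimators $\tilde{\beta}_{1n}=\sum(\xi_i-\bar{\xi})(\eta_i-\bar{\eta})/\sum(\xi_i-\bar{\xi})^2$ and $\tilde{\beta}_{0n}=\bar{\eta}-\tilde{\beta}_{1n}\bar{\xi}$. It uses independent Gaussian errors as a simple stand-in for the paper's NSD errors, and the function name and noise scales are illustrative, not taken from the paper:

```python
import random
import statistics

def simulate_ev_ls(n, beta0, beta1, seed=0):
    """Simulate one sample from a simple linear EV model and return the
    naive LS estimators (b0, b1) computed from the observed pairs."""
    rng = random.Random(seed)
    # Fixed design with growing spread: sum (x_i - xbar)^2 -> infinity,
    # the kind of design condition under which LS consistency holds.
    x = list(range(1, n + 1))
    # Observed regressor xi_i = x_i + delta_i and response
    # eta_i = beta0 + beta1 * x_i + eps_i (independent Gaussian stand-ins
    # for the NSD errors of the paper).
    xi = [xv + rng.gauss(0, 0.1) for xv in x]
    eta = [beta0 + beta1 * xv + rng.gauss(0, 0.1) for xv in x]
    xbar = statistics.fmean(xi)
    ybar = statistics.fmean(eta)
    sxx = sum((u - xbar) ** 2 for u in xi)
    sxy = sum((u - xbar) * (v - ybar) for u, v in zip(xi, eta))
    b1 = sxy / sxx          # LS slope estimator (beta_1 tilde)
    b0 = ybar - b1 * xbar   # LS intercept estimator (beta_0 tilde)
    return b0, b1

b0, b1 = simulate_ev_ls(2000, 4.0, 2.0)
```

Repeating the simulation over many replications and plotting histograms, Q–Q plots of the normalized estimators, or boxplots of $\tilde{\beta}_{1n}-\beta_1$ and $\tilde{\beta}_{0n}-\beta_0$ reproduces the kind of figures listed above. Note that the naive LS slope is attenuated when the measurement-error variance is comparable to the design spread; here the spread of $x_i=i$ grows, so the attenuation is negligible.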