Published by De Gruyter, October 22, 2013

The effect of round-off error on long memory processes

  • Gabriele La Spada and Fabrizio Lillo

Abstract

We study how the round-off (or discretization) error changes the statistical properties of a Gaussian long memory process. We show that the autocovariance and the spectral density of the discretized process are asymptotically rescaled by a factor smaller than one, and we compute exactly this scaling factor. Consequently, we find that the discretized process is also long memory with the same Hurst exponent as the original process. We consider the properties of two estimators of the Hurst exponent, namely the local Whittle (LW) estimator and the detrended fluctuation analysis (DFA). By using analytical considerations and numerical simulations we show that, in presence of round-off error, both estimators are severely negatively biased in finite samples. Under regularity conditions we prove that the LW estimator applied to discretized processes is consistent and asymptotically normal. Moreover, we compute the asymptotic properties of the DFA for a generic (i.e., non-Gaussian) long memory process and we apply the result to discretized processes.


Corresponding author: Gabriele La Spada, Princeton University – Economics, Fisher Hall, Princeton, NJ 08540, USA, Phone: +1 505 204 6885, e-mail:

Acknowledgements

FL acknowledges partial support by the grant SNS11LILLB “Price formation, agents heterogeneity, and market efficiency”. We are grateful to Yacine Aït-Sahalia, Fulvio Corsi, Gabriele Di Cerbo, Ulrich K. Müller, Christopher Sims, and two anonymous referees for their helpful comments and suggestions. We wish to thank J. D. Farmer for useful discussions inspiring the beginning of this work. Any remaining errors are our responsibility.

Appendix A: Distributional properties

In this appendix we consider the distributional properties of the discretization of a generic stationary Gaussian process.

From (5) the m-th moment of the discretized process can be written as $E[X_d^m]=\sum_{n=-\infty}^{\infty}q_n(n\delta)^m$. Since X is Gaussian distributed, the variance $D_d$ of the discretized process can be calculated explicitly. From the above expression with m=2 we obtain

$$D_d=D\sum_{n=-\infty}^{\infty}\frac{n^2}{2\chi}\left[\operatorname{erf}\!\left(\frac{2n+1}{2\sqrt{2\chi}}\right)-\operatorname{erf}\!\left(\frac{2n-1}{2\sqrt{2\chi}}\right)\right]\tag{34}$$

The left panel of Figure 1 shows the ratio $D_d/D$ as a function of the scaling parameter χ. It is worth noting that this ratio is not monotonic. For small χ the ratio goes to zero, because δ is very large relative to the standard deviation $\sqrt{D}$ and essentially all the probability mass falls in the bin centered at zero. In this regime the variance ratio goes to zero as

$$\frac{D_d}{D}\simeq 2\sqrt{\frac{2}{\pi}}\,\frac{e^{-1/(8\chi)}}{\sqrt{\chi}}.$$

When χ≫1 the ratio tends to one, because the effect of discretization becomes irrelevant. In this regime $D_d/D\simeq 1+1/(12\chi)$.

Analogously it is possible to calculate the kurtosis $\kappa_d=E[X_d^4]/(E[X_d^2])^2$ of the discretized process. It is straightforward to show that the kurtosis is

$$\kappa_d=\frac{\displaystyle\sum_{n=1}^{\infty}n^4\left[\operatorname{erf}\!\left(\frac{2n+1}{2\sqrt{2\chi}}\right)-\operatorname{erf}\!\left(\frac{2n-1}{2\sqrt{2\chi}}\right)\right]}{\displaystyle\left(\sum_{n=1}^{\infty}n^2\left[\operatorname{erf}\!\left(\frac{2n+1}{2\sqrt{2\chi}}\right)-\operatorname{erf}\!\left(\frac{2n-1}{2\sqrt{2\chi}}\right)\right]\right)^2}\tag{35}$$

For small χ the kurtosis diverges as

$$\kappa_d\simeq\sqrt{\frac{\pi}{8\chi}}\,e^{1/(8\chi)}$$

because the fourth moment goes to zero more slowly than the squared second moment. For large χ the kurtosis converges, as expected, to the Gaussian value 3, with $\kappa_d\simeq 3-1/(120\chi^2)$. Note that the kurtosis approaches its asymptotic value from below: it attains a minimum of roughly 2.982 at χ≃0.53 and then increases monotonically toward 3.
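The closed forms (34) and (35) are easy to evaluate numerically. The sketch below (a minimal illustration with D=1; the helper names are ours) reproduces the Sheppard-type correction $D_d/D\simeq1+1/(12\chi)$ for large χ and the kurtosis minimum near χ≃0.53.

```python
from math import erf, sqrt

def bin_mass(n, chi):
    # Bracketed erf difference appearing in (34)-(35).
    a = 2 * sqrt(2 * chi)
    return erf((2 * n + 1) / a) - erf((2 * n - 1) / a)

def variance_ratio(chi, nmax=400):
    # D_d / D from eq. (34), with the n^2/(2*chi) weights; the n=0 term vanishes.
    return sum(n * n / (2 * chi) * bin_mass(n, chi) for n in range(-nmax, nmax + 1))

def kurtosis(chi, nmax=400):
    # kappa_d from eq. (35).
    m4 = sum(n ** 4 * bin_mass(n, chi) for n in range(1, nmax))
    m2 = sum(n ** 2 * bin_mass(n, chi) for n in range(1, nmax))
    return m4 / m2 ** 2
```

For instance, `variance_ratio(10.0)` is within 10⁻⁴ of 1+1/120, while `kurtosis(0.53)` falls just below 3, consistent with the minimum quoted above.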

Appendix B: Proofs for Section 3

Proof of Proposition 1. The discretized process, $X_d(t)$, is a non-linear transformation of the underlying real-valued process, X(t). Let us denote the discretization transformation by g(·), so that $X_d(t)=g(X(t))$. From (10) we know that

$$\gamma_d(k)=\sum_{j=1}^{\infty}g_j^2\,\rho(k)^j,\qquad k=0,1,\dots$$

where ρ is the autocorrelation function of the underlying continuous process, and gj are defined as in (11).

Since the function g(x) is an odd function, gj=0 for j even, while all the odd coefficients are non-vanishing. Therefore, the discretization function has Hermite rank 1 and can be written as an infinite sum of Hermite (odd) polynomials. The generic gj coefficient is

$$g_j=\frac{1}{\sqrt{j!}}\frac{1}{\sqrt{2\pi D}}\sum_{n=-\infty}^{\infty}n\delta\int_{(n-1/2)\delta}^{(n+1/2)\delta}H_j\!\left(\frac{x}{\sqrt{D}}\right)e^{-x^2/(2D)}\,dx=\frac{1}{\sqrt{j!}}\sqrt{\frac{D}{2\pi}}\sum_{n=-\infty}^{\infty}\frac{n}{\sqrt{\chi}}\int_{(n-1/2)/\sqrt{\chi}}^{(n+1/2)/\sqrt{\chi}}H_j(x)\,e^{-x^2/2}\,dx$$

The first Hermite polynomial is H1(x)=x and the coefficient g1 is

$$\frac{g_1}{\sqrt{D}}=\sqrt{\frac{2}{\pi\chi}}\sum_{k=0}^{\infty}\exp\left[-\frac{(2k+1)^2}{8\chi}\right]=\frac{1}{\sqrt{2\pi\chi}}\,\vartheta_2\!\left(0,e^{-1/(2\chi)}\right)\tag{36}$$

where $\vartheta_a(u,q)$ is the elliptic theta function. For large χ, $g_1\to\sqrt{D}$, while for small χ the coefficient $g_1$ goes to zero as

$$\frac{g_1}{\sqrt{D}}\simeq\sqrt{\frac{2}{\pi}}\,\frac{e^{-1/(8\chi)}}{\sqrt{\chi}}$$

The second non-vanishing coefficient g3 is given by

$$\frac{g_3}{\sqrt{D}}=-\frac{1}{\sqrt{3\pi\chi}}\sum_{k=0}^{\infty}\exp\left[-\frac{(2k+1)^2}{8\chi}\right]+\frac{1}{\sqrt{48\pi\chi^3}}\sum_{k=0}^{\infty}(2k+1)^2\exp\left[-\frac{(2k+1)^2}{8\chi}\right]\tag{37}$$
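As a consistency check on the reconstructed expressions (36) and (37), one can compare them with a direct bin-by-bin evaluation of the Hermite coefficients $g_j=E[g(X)H_j(X)]/\sqrt{j!}$ (a sketch with D=1; helper names are ours, and the primitive $\int H_j\phi\,dx=-H_{j-1}\phi$ is used for the bin integrals):

```python
from math import exp, sqrt, pi

def phi(x):
    # Standard Gaussian density.
    return exp(-x * x / 2) / sqrt(2 * pi)

def g1_series(chi, kmax=60):
    # eq. (36), D = 1.
    return sqrt(2 / (pi * chi)) * sum(exp(-(2 * k + 1) ** 2 / (8 * chi)) for k in range(kmax))

def g3_series(chi, kmax=60):
    # eq. (37), D = 1.
    s0 = sum(exp(-(2 * k + 1) ** 2 / (8 * chi)) for k in range(kmax))
    s2 = sum((2 * k + 1) ** 2 * exp(-(2 * k + 1) ** 2 / (8 * chi)) for k in range(kmax))
    return -s0 / sqrt(3 * pi * chi) + s2 / sqrt(48 * pi * chi ** 3)

def g_direct(j, chi, nmax=120):
    # g_j = E[g(X) H_j(X)] / sqrt(j!), summing n*delta * int_bin H_j(x) phi(x) dx,
    # where int_a^b H_j phi dx = H_{j-1}(a) phi(a) - H_{j-1}(b) phi(b).
    delta = 1 / sqrt(chi)
    h_prev = {1: (lambda x: 1.0), 3: (lambda x: x * x - 1.0)}[j]   # H_{j-1}
    fact = {1: 1.0, 3: 6.0}[j]                                     # j!
    total = 0.0
    for n in range(-nmax, nmax + 1):
        a, b = (n - 0.5) * delta, (n + 0.5) * delta
        total += n * delta * (h_prev(a) * phi(a) - h_prev(b) * phi(b))
    return total / sqrt(fact)
```

Both routes agree to machine precision, and `g1_series` tends to 1 (i.e., $\sqrt D$) for large χ.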

In principle one could calculate all the coefficients gj. Here we want to focus on the case when the correlation coefficient ρ is small (i.e., k is large), so that it suffices to consider the first two coefficients g1 and g3. Therefore, from (10) we have

$$\gamma_d(k)=\frac{g_1^2}{D}\,\gamma(k)\left(1+O\!\left(\gamma(k)^2\right)\right)\quad\text{as }k\to\infty.\tag{38}$$

If we plug (1) and (36) into (38), we get the result.   □

Proof of Proposition 2. Under the assumption that L admits the expansion $L(k)=\sum_{i=0}^{\infty}b_ik^{-\beta_i}$ for large k, we can write $\sum_{i=0}^{\infty}b_ik^{-\beta_i}=b_0\left(1+O(k^{-\beta_1})\right)$; by substituting this expression for L(k) into (12) we get the result.   □

Proof of Corollary 1. It follows from the definition of autocorrelation function and Proposition 1.   □

Proof of Proposition 3. From Proposition 1 we can write $\gamma_d(k)\sim\left(\frac{\vartheta_2(0,e^{-1/(2\chi)})}{\sqrt{2\pi\chi}}\right)^2k^{2H-2}L(k)$, as k→∞. Then, the proposition follows from Theorem 3.3 (a) in Palma (2007).   □

Proof of Proposition 4. For the sake of simplicity, we consider the case of an underlying process with unit variance, namely D=1. In order to extend the proof to non-unit variance we need simply to do the transformation L(k)→L(k)/D. Obviously, L(k)/D is still slowly varying at infinity. Moreover, we consider only the case I=∞, the case I<∞ being trivial.

This proof is divided in two parts: in the first part we prove (17) and (18) under the assumption that L admits the expansion $L(k)=\sum_{i=0}^{\infty}b_ik^{-\beta_i}$ for large k; in the second part we prove (19) under the additional Assumption 1.

First part. From Proposition 2 and Theorem 2.1 in Beran (1994) we know that the spectral density $\phi_d(\omega)$ exists. Moreover, under the above assumption on L, there exists K>0 such that, for all k≥K, $L(k)=\sum_{i=0}^{\infty}b_ik^{-\beta_i}$ and the series converges absolutely. Then, from Herglotz's theorem and (10) we can write

$$\phi_d(\omega)=\frac{D_d}{2\pi}+\frac{1}{\pi}\sum_{k=1}^{K-1}\gamma_d(k)\cos(k\omega)+\frac{1}{\pi}\sum_{k=K}^{\infty}\sum_{j=0}^{\infty}g_{2j+1}^2\left(k^{2H-2}\sum_{i=0}^{\infty}b_ik^{-\beta_i}\right)^{2j+1}\cos(k\omega)$$

As ω→0+, the first two terms on the RHS converge to a constant, while the third term diverges.

Since $\sum_i b_ik^{-\beta_i}$ converges absolutely for all k≥K, we can write

$$\sum_{i=0}^{\infty}b_ik^{-\beta_i}=b_0+b_1k^{-\beta_1}+b_2k^{-\beta_2}(1+o(1))$$

Therefore,

$$\sum_{k=K}^{\infty}\sum_{j=0}^{\infty}g_{2j+1}^2\left(k^{2H-2}\sum_{i=0}^{\infty}b_ik^{-\beta_i}\right)^{2j+1}\cos(k\omega)=\sum_{k=K}^{\infty}\Big\{g_1^2k^{2H-2}\left[b_0+b_1k^{-\beta_1}+b_2k^{-\beta_2}(1+o(1))\right]+g_3^2k^{6H-6}\left[b_0^3+3b_0^2b_1k^{-\beta_1}(1+o(1))\right]+g_5^2k^{10H-10}b_0^5(1+o(1))\Big\}\cos(k\omega)$$
$$=g_1^2b_0\sum_{k=1}^{\infty}k^{2H-2}\cos(k\omega)+g_1^2b_1\sum_{k=1}^{\infty}k^{2H-2-\beta_1}\cos(k\omega)+O\!\left(\sum_{k=1}^{\infty}k^{2H-2-\beta_2}\cos(k\omega)\right)+g_3^2b_0^3\sum_{k=1}^{\infty}k^{6H-6}\cos(k\omega)+O\!\left(\sum_{k=1}^{\infty}k^{6H-6-\beta_1}\cos(k\omega)\right)+O\!\left(\sum_{k=1}^{\infty}k^{10H-10}\cos(k\omega)\right)+O(1),\tag{39}$$

where the term O(1) comes from replacing the summation over k≥K with the summation over k≥1.

Then, we introduce the following representation of a trigonometric series

$$\sum_{k=1}^{\infty}k^{-s}\cos(k\omega)=\frac12\left(\mathrm{Li}_s(e^{i\omega})+\mathrm{Li}_s(e^{-i\omega})\right)\tag{40}$$

where $\mathrm{Li}_s(z)=\sum_{k=1}^{\infty}z^k/k^s$ is the polylogarithm. The series expansion of $\mathrm{Li}_s(e^z)$ for small z is (see Gradshteyn and Ryzhik 1980)

$$\mathrm{Li}_s(e^z)=\Gamma(1-s)(-z)^{s-1}+\sum_{k=0}^{\infty}\frac{\zeta(s-k)}{k!}\,z^k,\qquad\text{if }s\text{ is not a positive integer},\tag{41}$$
$$\mathrm{Li}_s(e^z)=\frac{z^{s-1}}{(s-1)!}\left[H_{s-1}-\ln(-z)\right]+\sum_{k=0,\,k\ne s-1}^{\infty}\frac{\zeta(s-k)}{k!}\,z^k,\qquad\text{if }s\text{ is a positive integer},\tag{42}$$

where ζ(‧) is the analytic continuation of the Riemann zeta function over the complex plane and Hs is the sth harmonic number.

Finally, we plug (41) or (42) into (40), and then we apply (40) into (39). By substituting (36) for g12 and after some algebraic manipulation we get the result.
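The representation (40)-(41) can be sanity-checked numerically for a non-integer s. The sketch below (helper names are ours; ζ is continued with a short Euler-Maclaurin tail) compares a brute-force evaluation of the cosine series with the first terms of the expansion, averaged over $z=\pm i\omega$ so that only even powers survive:

```python
from math import cos, pi, gamma

def zeta(s, N=50):
    # Euler-Maclaurin continuation of the Riemann zeta function (s > -1, s != 1).
    head = sum(k ** -s for k in range(1, N + 1))
    return head + N ** (1.0 - s) / (s - 1.0) - 0.5 * N ** -s + s * N ** (-s - 1.0) / 12.0

def cos_series(s, w, K=200000):
    # Left-hand side of (40): sum_k cos(k w) / k^s, truncated.
    return sum(cos(k * w) / k ** s for k in range(1, K + 1))

def expansion(s, w):
    # Averaging (41) over z = +iw and z = -iw kills the odd-k terms:
    # Gamma(1-s) w^{s-1} cos(pi (s-1)/2) + zeta(s) - zeta(s-2) w^2 / 2 + O(w^4).
    return gamma(1 - s) * cos(pi * (s - 1) / 2) * w ** (s - 1) + zeta(s) - zeta(s - 2) * w ** 2 / 2
```

For s=1.5 and ω=0.05 the two agree to better than 10⁻³, which is the accuracy of the truncations used here.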

Second part. Under Assumption 1 (i) the series $L(k)=\sum_{i=0}^{\infty}b_ik^{-\beta_i}$ converges absolutely for all k≥1. Therefore, by Mertens' theorem,[9] for any j≥0 we can write

$$\left(\sum_{i=0}^{\infty}b_ik^{-\beta_i}\right)^{2j+1}=\sum_{i=0}^{\infty}\tilde b_{j,i}\,k^{-\tilde\beta_{j,i}},\qquad k\ge1,$$

where the series on the RHS is the Cauchy product. Note that $\tilde b_{0,i}=b_i$ and $\tilde\beta_{0,i}=\beta_i$ for all i, while $\tilde b_{j,0}=b_0^{2j+1}$ and $\tilde\beta_{j,0}=0$ for all j. By absolute convergence of the original series, the Cauchy product also converges absolutely for all k≥1.

From Herglotz’s theorem and Proposition 2 we can write

$$\phi_d(\omega)=\frac{D_d}{2\pi}+\frac{1}{\pi}\sum_{k=1}^{\infty}\sum_{j=0}^{\infty}\sum_{i=0}^{\infty}g_{2j+1}^2\,\tilde b_{j,i}\,k^{-\alpha_{j,i}}\cos(k\omega),$$

where $\alpha_{j,i}\equiv(2j+1)(2-2H)+\tilde\beta_{j,i}$.

Since the $\{g_{2j+1}^2\}$ are bounded above (because $\sum_j g_{2j+1}^2=D_d<\infty$), Assumption 1 (i) implies that the double series $\sum_j\sum_i g_{2j+1}^2\tilde b_{j,i}k^{-\alpha_{j,i}}$ converges absolutely for all k≥1. Moreover, under Assumption 1 (ii) there is only a finite number of terms in L(k) (and therefore in $\sum_j\sum_i g_{2j+1}^2\tilde b_{j,i}k^{-\alpha_{j,i}}$) that are not summable over k. Thus, from Rudin (1976) (Chapter 8, Theorem 8.3) we can invert the order of summation and write

$$\phi_d(\omega)=\frac{D_d}{2\pi}+\frac{1}{\pi}\sum_{j=0}^{\infty}g_{2j+1}^2\sum_{i=0}^{\infty}\tilde b_{j,i}\sum_{k=1}^{\infty}k^{-\alpha_{j,i}}\cos(k\omega)$$

In other words, the Fourier transform of the series becomes the series of the Fourier transforms. From this point on the proof is very similar to that of Lemma 1 and therefore omitted for brevity.   □

Proof of Corollary 2. If L is analytic at infinity, then $L(k)=\sum_{n=0}^{\infty}b_nk^{-n}$ for large k for some $\{b_n\}$, and the series converges absolutely within its radius of convergence. Then L admits the expansion required by Proposition 4, with $\beta_i=i$, and by applying Proposition 4 and (36) we get (20). It is straightforward to see that $c_2>0$ and $c_1>0$.   □

Proof of Corollary 3. For a fGn $L(k)=\frac{D}{2}\left[\left(1+\frac1k\right)^{2H}+\left(1-\frac1k\right)^{2H}-2\right]k^2$, which is an even analytic function at infinity and whose power series converges absolutely for all k≥1. Because L(k) is analytic and even, $\beta_i=2i$ for all i≥0 and we can write

$$\Gamma(2H-1-\beta_i)=(-1)^{2i-1}\,\frac{\Gamma(1-2H)\Gamma(2H)}{\Gamma(2(i+1-H))}\to0\quad\text{as }i\to\infty.$$

Thus, the autocovariance of a fGn satisfies Assumption 1, and therefore from Corollary 2 it follows that the spectral density of a discretized fGn satisfies (20) with c0 given by (19).
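The limit above is a direct consequence of the reflection formula $\Gamma(z)\Gamma(1-z)=\pi/\sin(\pi z)$ applied at $z=2H-1-2i$; a quick numerical check (helper names are ours):

```python
from math import gamma

def lhs(H, i):
    # Gamma(2H - 1 - beta_i), with beta_i = 2 i.
    return gamma(2 * H - 1 - 2 * i)

def rhs(H, i):
    # (-1)^{2i-1} Gamma(1-2H) Gamma(2H) / Gamma(2(i+1-H)).
    return -gamma(1 - 2 * H) * gamma(2 * H) / gamma(2 * (i + 1 - H))
```

The two sides coincide for any H∈(0.5, 1) and i≥0, and the right-hand side makes the decay to zero explicit through the growing $\Gamma(2(i+1-H))$ in the denominator.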

From Corollary 2 we already know that the second-order term is strictly positive if H≥5/6. Hence, we just need to prove that c0>0 if H<5/6. Since c0 is given by (19) we can write

$$c_0=\frac{D_d}{2\pi}+\frac{g_1^2}{\pi}\sum_{i=0}^{\infty}\binom{2H}{2(i+1)}\zeta(2(i+1-H))+\sum_{j=1}^{\infty}\sum_{i=0}^{\infty}\frac{g_{2j+1}^2}{\pi D^{2j+1}}\,\tilde b_{j,i}\,\zeta\!\left((2j+1)(2-2H)+\tilde\beta_{j,i}\right)\tag{43}$$

where we used the fact that for a fGn $b_i=\binom{2H}{2(i+1)}D$ and $\beta_i=2i$ for all i≥0. The symbol $\binom{\cdot}{\cdot}$ denotes the generalized binomial coefficient.

First, since {bi} are strictly positive and (2j+1)(2–2H)>1 ∀j≥1 if H<5/6, the third term on the RHS of (43) is strictly positive for H<5/6.

Second, from Sinai (1976) and Beran (1994) we know that the spectral density of a fGn satisfies

$$\phi(\omega)=2c_\phi^*\,(1-\cos\omega)\sum_{j=-\infty}^{\infty}|2\pi j+\omega|^{-2H-1},\qquad\omega\in[-\pi,\pi],$$

where $c_\phi^*=\frac{D}{2\pi}\sin(\pi H)\Gamma(2H+1)$. For small ω and H∈(0.5, 1), the behavior of the above spectral density follows by Taylor expansion at zero:

$$\phi(\omega)=c_\phi^*|\omega|^{1-2H}-\frac{1}{12}c_\phi^*|\omega|^{3-2H}+o(|\omega|^{3-2H}).\tag{44}$$

On the other hand, because L(k) is analytic with β1=2, it satisfies Assumption 2; therefore, the fGn satisfies the conditions of Lemma 1. Following the proof of that lemma, after some algebraic manipulations, we get

$$\phi(\omega)=c_\phi^*|\omega|^{1-2H}+c_0^*-\frac{1}{12}c_\phi^*|\omega|^{3-2H}+o(|\omega|^{3-2H}),\tag{45}$$

where $c_0^*=\frac{D}{\pi}\left(\frac12+\sum_{i=0}^{\infty}\binom{2H}{2(i+1)}\zeta(2(i+1-H))\right)$. By comparing (44) and (45) it follows that $c_0^*$ must be equal to zero, and therefore

$$\sum_{i=0}^{\infty}\binom{2H}{2(i+1)}\zeta(2(i+1-H))=-\frac12.\tag{46}$$

Finally, note that from (10) we know that $D_d=\sum_{j=0}^{\infty}g_{2j+1}^2$. Hence, by plugging (46) into (43) we obtain

$$c_0=\sum_{j=0}^{\infty}\frac{g_{2j+1}^2}{2\pi}-\frac{g_1^2}{2\pi}+\sum_{j=1}^{\infty}\sum_{i=0}^{\infty}\frac{g_{2j+1}^2}{\pi D^{2j+1}}\,\tilde b_{j,i}\,\zeta\!\left((2j+1)(2-2H)+\tilde\beta_{j,i}\right)=\sum_{j=1}^{\infty}\frac{g_{2j+1}^2}{2\pi}+\sum_{j=1}^{\infty}\sum_{i=0}^{\infty}\frac{g_{2j+1}^2}{\pi D^{2j+1}}\,\tilde b_{j,i}\,\zeta\!\left((2j+1)(2-2H)+\tilde\beta_{j,i}\right)>0.\qquad\square$$
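The identity (46) can be checked numerically. The sketch below (helper names are ours; ζ is continued with a short Euler-Maclaurin tail and the generalized binomial coefficients are built recursively) confirms that the sum equals −1/2 for representative values of H:

```python
def zeta(s, N=40):
    # Euler-Maclaurin continuation of zeta, adequate for s > -1, s != 1.
    head = sum(k ** -s for k in range(1, N + 1))
    return head + N ** (1.0 - s) / (s - 1.0) - 0.5 * N ** -s + s * N ** (-s - 1.0) / 12.0

def binom_zeta_sum(H, imax=4000):
    # sum_{i>=0} C(2H, 2(i+1)) * zeta(2(i+1-H)), cf. eq. (46).
    a = 2.0 * H
    total, b, m = 0.0, 1.0, 0
    for i in range(imax):
        b *= (a - m) / (m + 1); m += 1   # advance C(a, m) -> C(a, m+1)
        b *= (a - m) / (m + 1); m += 1
        total += b * zeta(2 * (i + 1 - H))
    return total
```

The i=0 term alone (roughly $\binom{2H}{2}\zeta(2-2H)$, which is negative for H∈(0.5, 1)) overshoots −1/2 slightly, and the remaining positive terms bring the sum back to −1/2.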

Appendix C: Proofs for Section 4

Proof of Lemma 1. From Theorem 2.1 in Beran (1994) we know that the spectral density ϕ(ω) exists, and from Herglotz's theorem it is the discrete Fourier transform of the autocovariance $\gamma(k)=k^{2H-2}\sum_i b_ik^{-\beta_i}$.

Let $\alpha_i=2-2H+\beta_i$ for all i≥0. Under Assumption 1 (ii) there is only a finite number of terms in the autocovariance of X that are not summable, and therefore there will be only a finite number of divergent terms in the spectral density. Moreover, under Assumption 1 (i) the series $\sum_{i=0}^{I}b_ik^{-\beta_i}$ converges absolutely for all k≥1. Therefore, by Rudin (1976) (Theorem 8.3) we can write

$$\phi(\omega)=\frac{D}{2\pi}+\frac{1}{\pi}\sum_{i=0}^{I}b_i\sum_{k=1}^{\infty}k^{-\alpha_i}\cos(k\omega)\tag{47}$$

By using the polylogarithm representation (40) introduced above, for small ω we can plug (41) and (42) into (47). Let $I_1=\{i:\alpha_i\notin\mathbb{N}\}$ and $I_2=\{i:\alpha_i\in\mathbb{N}\}$. Then, under Assumption 1 (iii) we can rearrange the double series in (47) in the following way

$$\phi(\omega)=\frac{D}{2\pi}+\frac{1}{\pi}b_0\left(\Gamma(2H-1)\sin((1-H)\pi)\,|\omega|^{1-2H}+\sum_{j=0}^{\infty}\frac{(-1)^j\zeta(2-2H-2j)}{(2j)!}\,\omega^{2j}\right)+\frac{1}{\pi}\sum_{i\in I_1}b_i\left(\Gamma(1-\alpha_i)\sin\!\left(\frac{\pi\alpha_i}{2}\right)|\omega|^{\alpha_i-1}+\sum_{j=0}^{\infty}\frac{(-1)^j\zeta(\alpha_i-2j)}{(2j)!}\,\omega^{2j}\right)+\frac{1}{\pi}\sum_{i\in I_2}b_i\left(\frac{|\omega|^{\alpha_i-1}}{(\alpha_i-1)!}\left[\sin\!\left(\frac{\pi\alpha_i}{2}\right)H_{\alpha_i-1}+\frac{\pi}{2}\cos\!\left(\frac{\pi\alpha_i}{2}\right)-\sin\!\left(\frac{\pi\alpha_i}{2}\right)\ln|\omega|\right]+\sum_{j=0,\;j\ne(\alpha_i-1)/2}^{\infty}\frac{(-1)^j\zeta(\alpha_i-2j)}{(2j)!}\,\omega^{2j}\right),\quad\text{as }\omega\to0^+,\tag{48}$$

where ζ(‧) is the analytic continuation of the Riemann zeta function over the complex plane and Hs is the sth harmonic number.

Under Assumption 1 we can collect all the terms of the same order and rearrange (48) in powers of ω. Let $c_0^*$ be the term of order O(1) in (48); then,

$$c_0^*=\frac{D}{2\pi}+\frac{1}{\pi}\sum_{i\le I:\;\alpha_i\ne1}b_i\,\zeta(\alpha_i).$$

Under Assumption 2 α1≠1, and therefore, if also α1≠2, we can write

$$\phi(\omega)=c_\phi b_0|\omega|^{1-2H}+c_0^*+c_1^*|\omega|^{1-2H+\beta_1}+o\!\left(|\omega|^{\min(0,\,1-2H+\beta_1)}\right)\quad\text{as }\omega\to0^+,\tag{49}$$

where $c_\phi$ is defined as in Proposition 3 and $c_1^*=\pi^{-1}b_1\Gamma(2H-1-\beta_1)\sin\!\left(\frac{(2H-\beta_1)\pi}{2}\right)$.

If $\alpha_1=2$, by Assumption 2 $c_0^*\ne0$ and we can write

$$\phi(\omega)=c_\phi b_0|\omega|^{1-2H}+c_0^*+o(1),\quad\text{as }\omega\to0^+.\tag{50}$$

Putting together (49) and (50), and noting that if $c_0^*=0$ in (49), then by Assumption 2 $\beta_1\le2$, we get the result

$$\phi(\omega)=c_\phi|\omega|^{1-2H}\left(b_0+c_\beta|\omega|^{\beta}+o(|\omega|^{\beta})\right),\quad\text{as }\omega\to0^+,$$

where $c_\beta\ne0$ and β∈(0, 2].   □

Proof of Theorem 1. Following the proof of Theorem 4 in DGH, because j0=1 we can write Yt as a signal-plus-noise process Yt=Wt+Zt, where

$$W_t=g_1H_1(X_t)=g_1X_t,\qquad Z_t=\sum_{j=j_1}^{\infty}g_jH_j(X_t),$$

where $j_1$ is the index of the second non-vanishing term in the Hermite expansion and $H_j(\cdot)$ is the j-th Hermite polynomial.

Part (i) Under the assumption on L, the spectral density of $W_t$ satisfies Assumption A in DGH, i.e.,

$$\phi_w(\omega)=c_w|\omega|^{1-2H}(1+o(1)),\quad\text{as }\omega\to0^+,$$

where $c_w=(g_1^2/D)\,c_\phi\,b_0$. Moreover, since $X_t$ is a stationary purely non-deterministic Gaussian process, it is also linear with finite fourth moments. Consequently, $W_t=g_1X_t$ is also linear with finite fourth moments. Then, we can write

$$W_t=\sum_{j=0}^{\infty}a_j\varepsilon_{t-j},$$

where $\sum_{j=0}^{\infty}a_j^2<\infty$ and the $\varepsilon_t$ are i.i.d. Gaussian variables with zero mean and unit variance. Let $\alpha(\omega)=\sum_{j=0}^{\infty}a_je^{ij\omega}$. From Proposition 4 in DGH it follows that $W_t$ also satisfies Assumption B therein.

We show below that the spectral density of $Z_t$ satisfies $\phi_z(\omega)\le C|\omega|^{1-2H_z}$, as ω→0+, for some C>0 and $H_z\ge0.5$ such that

$$H>H_z=\begin{cases}0.5 & \text{if }\;j_1(2-2H)>1\\[2pt] 1-j_1(1-H) & \text{if }\;j_1(2-2H)<1\\[2pt] \varepsilon & \text{if }\;j_1(2-2H)=1\end{cases}\tag{51}$$

for any ε∈(0.5, H).

Indeed, if j1(2–2H)>1 from (10)

$$\sum_{k=1}^{\infty}\gamma_z(k)=\sum_{k=1}^{\infty}\sum_{j=j_1}^{\infty}g_j^2\,(\rho(k))^j\le C\sum_{k=1}^{\infty}k^{-j_1(2-2H)}<\infty$$

for some C>0. Therefore, Hz=0.5<H.

If j1(2–2H)<1, we can prove that

$$\phi_z(\omega)=\frac{g_{j_1}^2}{D^{j_1}}\,b_0\,c_\phi\,|\omega|^{1-2H_z}(1+o(1))\le C|\omega|^{1-2H_z}\quad\text{as }\omega\to0^+,$$

for some C>0, where $H_z=H-(j_1-1)(1-H)<H$ and $H_z\in(0.5,1)$. Similarly, if $j_1(2-2H)=1$, we can prove that

$$\phi_z(\omega)=C\ln|\omega|^{-1}(1+o(1))\le C|\omega|^{-\varepsilon},\quad\text{as }\omega\to0^+,$$

for some C>0 and for any ε>0. The proof of the above results is a special case of the proof of Proposition 4, and thus omitted. The results above prove (51).

Since $W_t$ satisfies Assumptions A and B in DGH and the spectral density $\phi_z$ satisfies the asymptotic conditions above, consistency of $\hat H_{LW}^{Y}$ follows from Theorem 3 (i) in DGH.

Moreover, if we write the periodogram of Yt as IY(ωj)=IW(ωj)+vj, where IW is the periodogram of the “signal” Wt and vj is the contribution of the “noise” Zt at the jth Fourier frequency, it is straightforward to show (see DGH pp. 225–226) that

$$\hat H_{LW}^{Y}-H=(\hat H_{LW}^{X}-H)(1+o_P(1))-\left(m^{-1}\sum_{j=1}^{m}\left(\log\frac{j}{m}+1\right)\frac{v_j}{c_w\,\omega_j^{1-2H}}\right)(1+o_P(1))+O_P\!\left(\frac{\log m}{m}\right)=(\hat H_{LW}^{X}-H)(1+o_P(1))+O_P\!\left(\left(\frac{m}{n}\right)^{H-H_z}+\frac{\log m}{m}\right),\tag{52}$$

where $\hat H_{LW}^{X}$ denotes the LW estimator that would obtain if the sequence $\{X_t\}$ were observed.

Note that, roughly speaking, $v_j$ represents the sample estimate of the higher-order terms of the spectral density of $Y_t$ at the j-th Fourier frequency. For the discretization of a fGn we know from Corollary 3 that the second-order term of the spectral density is strictly positive for all H; therefore, in that case, we expect that the second term on the RHS of the first line of (52) will induce a negative finite-sample bias on $\hat H_{LW}^{Y}$. The order of magnitude of this finite-sample bias is $O_P((m/n)^{H-H_z})$.

Part (ii) Under Assumptions 1 and 2, from Lemma 1 it follows that $X_t$ and therefore $W_t$ satisfy Assumption T(α0, β) in DGH, with $\alpha_0=2H-1$ and β defined as in Lemma 1. Moreover, under Assumption 3 we can combine the second part of Proposition 5 in DGH with Proposition 3 and Theorem 2 therein, and under the assumption that $m=o(n^{2\beta/(2\beta+1)})$ we can write

$$\hat H_{LW}^{X}-H=-\left(\frac{m}{n}\right)^{\beta}\frac{c_\beta}{b_0}\frac{B_\beta}{2}-\frac{V_m}{2}(1+o_P(1))+o_P\!\left(m^{-1/2}+\left(\frac{m}{n}\right)^{\beta}\right)\tag{53}$$

where $c_\beta$ is defined as in Lemma 1, $B_\beta=\frac{(2\pi)^{\beta}\beta}{(\beta+1)^2}$, and

$$V_m=m^{-1}\sum_{j=1}^{m}\left(\log\frac{j}{m}+1\right)(\eta_j-E\eta_j)$$

with $\eta_j=I_X(\omega_j)/\phi_x(\omega_j)$.

Let $r=H-H_z$. By plugging (53) into (52) we obtain

$$\hat H_{LW}^{Y}-H=-\frac{V_m}{2}-\left(\frac{m}{n}\right)^{\beta}\frac{c_\beta}{b_0}\frac{B_\beta}{2}+O_P\!\left(\left(\frac{m}{n}\right)^{r}+\frac{\log m}{m}\right)+o_P\!\left(m^{-1/2}+\left(\frac{m}{n}\right)^{\beta}+V_m\right).\tag{54}$$

Moreover, under Assumption 3 and $m=o(n^{2\beta/(2\beta+1)})$, by Robinson's (1995) Theorem 2

$$\sqrt{m}\,V_m\xrightarrow{\;d\;}N(0,1),\quad\text{as }n\to\infty.\tag{55}$$

Therefore, $V_m=O_P(m^{-1/2})$ and (26) follows from (54).

Part (iii) If m=o(n2r/(2r+1)), equation (27) follows from applying (55) in (54).   □

Proof of Corollary 4. The result of the corollary follows directly from Theorem 1, and from noticing that the second non-vanishing Hermite coefficient for the discretized process is $g_3\ne0$, so that $j_1=3$.   □

For the proof of Theorem 2 we need the following lemmas. Note that the proofs of the lemmas are at the end of this Appendix.

Lemma 2. Let $m\in\mathbb{N}$ and α<1, and

$$R(\alpha)=\alpha(\alpha-1)\int_1^m\frac{B_2(\{1-t\})}{2!}\,t^{\alpha-2}\,dt\tag{56}$$
$$\tilde R(\alpha)=\int_1^m\frac{B_2(\{1-t\})}{2!}\,t^{\alpha-2}\left(\alpha(\alpha-1)\log t-1+2\alpha\right)dt\tag{57}$$

where $B_2(x)=x^2-x+1/6$ is the second Bernoulli polynomial and {x} represents the fractional part of the real number x. Then, both R(α) and $\tilde R(\alpha)$ converge to a finite number as m→∞.

Note that R(α) and $\tilde R(\alpha)$ are the remainders of a first-order Euler–Maclaurin expansion of the sums $\sum_{k=1}^{m}k^{\alpha}$ and $\sum_{k=1}^{m}k^{\alpha}\log k$, respectively.

Lemma 3. Let $i\in\mathbb{N}$, α<1, and α≠−1, −2. Then, as i→∞,

$$\sum_{k=1}^{i}(i-k)\,k^{\alpha}=A_0(\alpha)\,i^{2+\alpha}+A_1(\alpha)\,i+O(1)$$

where $A_0(\alpha)=\frac{1}{(1+\alpha)(2+\alpha)}$, $A_1(\alpha)=-\frac{(\alpha-2)(\alpha-3)}{12(1+\alpha)}+R(\alpha)$, and R(·) is defined as in (56).

Lemma 4. Let $i\in\mathbb{N}$. Then, as i→∞,

$$\sum_{k=1}^{i}(i-k)\,k^{-1}=i\ln i+\left(-\frac{5}{12}+R_1\right)i+O(1),\qquad\sum_{k=1}^{i}(i-k)\,k^{-2}=\left(\frac{5}{3}+R_2\right)i-\ln i+O(1)$$

where $R_1\equiv R(\alpha=-1)$ and $R_2\equiv R(\alpha=-2)$.
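Lemma 3 can be checked by brute force: for α inside (−1, 1) the leading term of the weighted sum is $A_0(\alpha)\,i^{2+\alpha}$, with a relative error that decays like $i^{-(1+\alpha)}$ because of the $A_1(\alpha)\,i$ correction. A small sketch (helper names are ours):

```python
def weighted_sum(i, alpha):
    # Left-hand side of Lemma 3.
    return sum((i - k) * k ** alpha for k in range(1, i + 1))

def A0(alpha):
    # Leading Euler-Maclaurin coefficient, 1 / ((1+alpha)(2+alpha)).
    return 1.0 / ((1 + alpha) * (2 + alpha))
```

With α=2H−2 for H∈(0.5, 1), this is the source of the $i^{2H}/(2H(2H-1))$ term used below in the proof of Proposition 5.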

Before proving Theorem 2 we prove the following proposition.

Proposition 5. Under the assumptions of Theorem 2, let $\Sigma_m$ denote the covariance matrix of the integrated process (Y(1), …, Y(m)), with entries $\Sigma_m(i,j)=\mathrm{Cov}(Y(i),Y(j))$. Then,

(i) if β≠2H−1, then

$$\Sigma_m(i,j)=\frac{A}{2H(2H-1)}\left[i^{2H}\left(1+O\!\left(i^{-\min(2H-1,\beta)}\right)\right)+j^{2H}\left(1+O\!\left(j^{-\min(2H-1,\beta)}\right)\right)-|i-j|^{2H}\left(1+O\!\left(|i-j|^{-\min(2H-1,\beta)}\right)\right)\right]$$

(ii) if β=2H−1, then

$$\Sigma_m(i,j)=\frac{A}{2H(2H-1)}\left[i^{2H}\left(1+O(i^{1-2H}\ln i)\right)+j^{2H}\left(1+O(j^{1-2H}\ln j)\right)-|i-j|^{2H}\left(1+O(|i-j|^{1-2H}\ln|i-j|)\right)\right]$$

Proof. Under the assumptions on X(t), for 1≤i, j≤m we can write

$$\Sigma_m(i,j)=\mathrm{Cov}(Y(i),Y(j))=\sum_{k=1}^{i}\sum_{l=1}^{j}\mathrm{Cov}(X(k),X(l))=\sum_{k=1}^{i}(i-k)\gamma(k)+\sum_{k=1}^{j}(j-k)\gamma(k)-\sum_{k=1}^{|i-j|}(|i-j|-k)\gamma(k)+\min(i,j)\,D\tag{58}$$

where D is the variance of the process X(t). By substituting the explicit functional form for γ(k) we get
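The counting identity used in (58) holds for any even autocovariance function; a quick brute-force check (helper names are ours):

```python
def double_sum(i, j, gam):
    # Direct evaluation of sum_{k<=i} sum_{l<=j} gamma(|k-l|).
    return sum(gam(abs(k - l)) for k in range(1, i + 1) for l in range(1, j + 1))

def decomposition(i, j, gam):
    # Right-hand side of (58), with D = gamma(0).
    s = lambda n: sum((n - k) * gam(k) for k in range(1, n + 1))
    return s(i) + s(j) - s(abs(i - j)) + min(i, j) * gam(0)
```

Both evaluations agree exactly, for any choice of the function `gam` and any i, j ≥ 1.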

$$\left|\sum_{k=1}^{i}(i-k)\gamma(k)-A\sum_{k=1}^{i}(i-k)k^{2H-2}\right|\le M\sum_{k=1}^{i}(i-k)k^{2H-2-\beta}$$

for some M>0 sufficiently large. By Lemma 3 we have $A\sum_{k=1}^{i}(i-k)k^{2H-2}=A\left(\frac{i^{2H}}{2H(2H-1)}+O(i)\right)$.

Now, we consider the following cases:

(i) β≠2H–1. In this case we have to distinguish two cases.

If β≠2H, we can use Lemma 3 (with α=2H−2−β) and obtain

$$\sum_{k=1}^{i}(i-k)k^{2H-2-\beta}=A_0(2H-2-\beta)\,i^{2H-\beta}+A_1(2H-2-\beta)\,i+O(1)=i^{2H}\,O\!\left(i^{-\min(2H-1,\beta)}\right)$$

If β=2H, then we can use Lemma 4 and obtain

$$\sum_{k=1}^{i}(i-k)k^{2H-2-\beta}=\left(\frac{5}{3}+R_2\right)i-\ln i+O(1)=O(i)$$

Then, we repeat the same calculation for the second and third term in (58). By noting that min(i, j) is either of order O(i) or O(j) and putting together all the terms, we obtain the result.

(ii) β=2H−1. In this case we can use Lemma 4 and obtain

$$\sum_{k=1}^{i}(i-k)k^{2H-2-\beta}=i\ln i+\left(-\frac{5}{12}+R_1\right)i+O(1)=i^{2H}\,O\!\left(i^{1-2H}\ln i\right)$$

Then, we repeat the same calculation for the second and third term in (58). By noting that min(i, j) is either of order O(i) or O(j) and putting together all the terms, we obtain the result.   □

Proof of Theorem 2. First, for j∈{1, …, [n/m]} let us define the vector

$$\mathbf{Y}(j)=\big(Y(1+m(j-1)),\dots,Y(mj)\big)',$$

where $x'$ denotes the transpose of x.

Then, following Bardet and Kammoun (2008),

$$F_1^2(m)=\frac{1}{m}\big(\mathbf{Y}(1)-P_{E_1}\mathbf{Y}(1)\big)'\big(\mathbf{Y}(1)-P_{E_1}\mathbf{Y}(1)\big)=\frac{1}{m}\big(\mathbf{Y}(1)'\mathbf{Y}(1)-\mathbf{Y}(1)'P_{E_1}\mathbf{Y}(1)\big),$$

where $E_1$ is the subspace of $\mathbb{R}^m$ spanned by the two vectors $e_1=(1,\dots,1)'$ and $e_2=(1,2,\dots,m)'$, $P_{E_1}$ is the matrix of the orthogonal projection onto $E_1$, and the second equality holds because the projection operator is idempotent. As a consequence,

$$E[F_1^2(m)]=\frac{1}{m}\big(\mathrm{Tr}(\Sigma_m)-\mathrm{Tr}(P_{E_1}\Sigma_m)\big)$$

where Tr(·) is the trace of a square matrix.

Case (i) If β≠2H–1, from Proposition 5 we get

$$\mathrm{Tr}(\Sigma_m)=\frac{A}{H(2H-1)}\,m^{2H+1}\left(\int_0^1x^{2H}\,dx+O\!\left(m^{-\min(2H-1,\beta)}\right)+O(m^{-1})\right),$$

where the error O(m–1) comes from approximating the sum with the integral; therefore,

$$\mathrm{Tr}(\Sigma_m)=\frac{A\,m^{2H+1}}{H(2H-1)}\,\frac{1}{2H+1}\left(1+O\!\left(m^{-\min(2H-1,\beta)}\right)\right)\tag{59}$$

For the term Tr(PE1Σm), we use the following representation of the projection operator

$$(P_{E_1})_{ij}=\frac{2}{m(m-1)}\left((2m+1)-3(i+j)+\frac{6ij}{1+m}\right).\tag{60}$$
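Representation (60) can be verified directly against the defining properties of an orthogonal projection onto span{e1, e2} (a small sketch in pure Python; helper names are ours):

```python
def projection_matrix(m):
    # Entries (P_{E1})_{ij} from (60), with 1-based indices i, j.
    c = 2.0 / (m * (m - 1))
    return [[c * ((2 * m + 1) - 3 * (i + j) + 6.0 * i * j / (1 + m))
             for j in range(1, m + 1)] for i in range(1, m + 1)]
```

The matrix is idempotent, has trace 2 (the dimension of $E_1$), and leaves any linear trend $a+bt$ invariant, which is exactly what the DFA detrending step requires.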

Then, using (60) and Proposition 5 we can write

$$\mathrm{Tr}(P_{E_1}\Sigma_m)=\frac{2A\,m^{2H+1}}{2H(2H-1)}\,\frac{m^2}{m(m-1)}\left[\sum_{i=1}^{m}\sum_{j=1}^{m}\frac{1}{m^2}\left(\Big(2+\frac1m\Big)-3\,\frac{i+j}{m}+\frac{6ij}{m(m+1)}\right)\left(\Big(\frac{i}{m}\Big)^{2H}\Big(1+O\big(i^{-\min(2H-1,\beta)}\big)\Big)+\Big(\frac{j}{m}\Big)^{2H}\Big(1+O\big(j^{-\min(2H-1,\beta)}\big)\Big)-\Big(\frac{|i-j|}{m}\Big)^{2H}\Big(1+O\big(|i-j|^{-\min(2H-1,\beta)}\big)\Big)\right)\right].$$

Approximating sums with integrals we get

$$\mathrm{Tr}(P_{E_1}\Sigma_m)=\frac{A\,m^{2H+1}}{H(2H-1)}\left(1+O\!\left(m^{-\min(2H-1,\beta)}\right)+O(m^{-1})\right)\int_0^1\!\!\int_0^1\big(2-3(x+y)+6xy\big)\big(x^{2H}+y^{2H}-|x-y|^{2H}\big)\,dx\,dy=\frac{A\,m^{2H+1}}{H(2H-1)}\,\frac{1+H(4+H)}{(1+H)(2+H)(1+2H)}\left(1+O\!\left(m^{-\min(2H-1,\beta)}\right)\right)\tag{61}$$
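The closed form of the double integral in (61) can be confirmed by simple midpoint quadrature (helper names are ours):

```python
def integrand(x, y, H):
    # Kernel of the double integral in (61).
    return (2 - 3 * (x + y) + 6 * x * y) * (x ** (2 * H) + y ** (2 * H) - abs(x - y) ** (2 * H))

def midpoint_integral(H, n=400):
    # Midpoint rule on an n x n grid over the unit square.
    h = 1.0 / n
    return h * h * sum(integrand((a + 0.5) * h, (b + 0.5) * h, H)
                       for a in range(n) for b in range(n))

def closed_form(H):
    return (1 + H * (4 + H)) / ((1 + H) * (2 + H) * (1 + 2 * H))
```

The kink of $|x-y|^{2H}$ along the diagonal is harmless for the midpoint rule at this accuracy.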

Putting together (59) and (61) we obtain

$$\frac{1}{m}\big(\mathrm{Tr}(\Sigma_m)-\mathrm{Tr}(P_{E_1}\Sigma_m)\big)=\frac{A}{H(2H-1)}\,f(H)\,m^{2H}\left(1+O\!\left(m^{-\min(2H-1,\beta)}\right)\right)$$

which is the stated formula for $E[F_1^2(m)]$.

Case (ii) If β=2H−1, the proof is exactly the same, except that all the terms $O(i^{-\min(2H-1,\beta)})$ are replaced by the terms $O(i^{1-2H}\ln i)$.   □

Proof of Corollary 5. It follows from the autocovariance of a fractional Gaussian noise (see formula (3)). The proof is very similar to the proof of Theorem 2, and thus omitted. However, a complete proof can be found in Bardet and Kammoun (2008) (see Proof of Property 3.1 therein).   □

Proof of Corollary 6. It follows from Proposition 2 and Theorem 2.   □

Proofs of lemmas for Section 4

Proof of Lemma 2. First we prove that R(α) converges. Because $|B_2(\{1-t\})|\le B_2(0)=1/6$ for all t, we have $|R(\alpha)|\le\frac{|\alpha(\alpha-1)|}{12}\int_1^{m}t^{\alpha-2}\,dt$, where the integral on the right-hand side converges as m→∞ because 2−α>1.

Now we prove that $\tilde R(\alpha)$ converges. Pick ε>0 such that 2−α−ε>1; one can always find such an ε because α<1 by assumption. Because log t is slowly varying, there exists T>1 such that $t^{\alpha-2}\log t<t^{\alpha-2+\varepsilon}$ for all t>T. Because $|B_2(\{1-t\})|\le1/6$ for all t≥1, we can write

$$|\tilde R(\alpha)|\le\frac{1}{12}\int_1^{T}t^{\alpha-2}\left|\alpha(\alpha-1)\log t-1+2\alpha\right|dt+\frac{|\alpha(\alpha-1)|}{12}\int_T^{m}t^{\alpha-2+\varepsilon}\,dt,$$

where the second integral converges as m→∞ because 2−α−ε>1, and the first is finite for fixed T.   □

Proof of Lemma 3. By using the Euler–Maclaurin formula up to first order we obtain

$$i\sum_{k=1}^{i}k^{\alpha}=i\left(\frac{i^{1+\alpha}-1}{1+\alpha}+\frac{i^{\alpha}+1}{2}+\frac{B_2}{2!}\,\alpha\,(i^{\alpha-1}-1)+R(\alpha)\right)$$

where $B_2=1/6$ is the second Bernoulli number, and R(α) is the remainder of the Euler–Maclaurin expansion given by (56). From Lemma 2 we know that R(α) converges as i→∞. So, we can write

$$i\sum_{k=1}^{i}k^{\alpha}=\frac{i^{2+\alpha}}{1+\alpha}+A_1(\alpha)\,i+\frac{i^{1+\alpha}}{2}+\frac{\alpha\,i^{\alpha}}{12}$$

where A1(‧) is defined as in Lemma 3. Note that the second-order term is O(i).

Similarly, we can write

$$\sum_{k=1}^{i}k^{1+\alpha}=\frac{i^{2+\alpha}}{2+\alpha}+\frac{i^{1+\alpha}}{2}+A_1(\alpha+1)+\frac{(1+\alpha)\,i^{\alpha}}{12}$$

Note that in this case the second-order term is $O(i^{1+\alpha})$.

Putting together these two terms we have $\sum_{k=1}^{i}(i-k)k^{\alpha}=A_0(\alpha)\,i^{2+\alpha}+A_1(\alpha)\,i+O(1)$, where $A_0(\cdot)$ is defined as in Lemma 3. Note that the terms of order $i^{1+\alpha}$ cancel out exactly.   □

Proof of Lemma 4. The proof is similar to the proof of Lemma 3, and thus omitted.   □

Appendix D: Sign process

Taking the sign of a stochastic process can be thought of as an extreme form of discretization. Hence, to study the asymptotic properties of the sign process we can use the same technique outlined in Section 3.1 for general nonlinear transformations of Gaussian processes. By decomposing the sign transformation on the basis of Hermite polynomials we get the following

Proposition 6. Let {X(t)} be a stationary Gaussian process with autocovariance function given by Definition 1. Then, the autocovariance $\gamma_s(k)$ and the autocorrelation $\rho_s(k)$ of the sign process satisfy

$$\gamma_s(k)=\rho_s(k)=\frac{2}{\pi}\arcsin\rho(k)\tag{62}$$

Therefore, also the sign transformation preserves the long memory property and the Hurst exponent. Moreover, if the autocorrelation ρ is small (e.g., if the lag k is large) we have

$$\gamma_s(k)=\rho_s(k)=\frac{2}{\pi}\left(\rho(k)+\frac{\rho^3(k)}{6}\right)+O\!\left(\rho^5(k)\right).\tag{63}$$

This expression has been obtained several times, as, for example, in the context of binary time series [see Keenan (1982)]. Note that, trivially, when the discretization is obtained by taking the sign function the variance of the discretized process is Ds=1.

All the results on the discretized process presented above hold true for the sign process as well, with $\left(\frac{\vartheta_2(0,e^{-1/(2\chi)})}{\sqrt{2\pi\chi}}\right)^2$ replaced by $\frac{2}{\pi D}$ and with $g_3=-\frac{1}{\sqrt{3\pi}}$. Numerical results are available from the authors upon request.

Proof of Proposition 6. Like the discretization, the sign transformation is an odd function, and therefore $g_j=0$ when j is even. When j is odd the coefficients of the sign function on the Hermite polynomial basis are

$$g_{2j+1}=\frac{2}{\sqrt{(2j+1)!}}\int_0^{\infty}H_{2j+1}(x)\,\frac{e^{-x^2/2}}{\sqrt{2\pi}}\,dx=(-1)^j\,\frac{2^{j+1/2}\,\Gamma\!\left(j+\frac12\right)}{\pi\sqrt{(2j+1)!}}.$$

By inserting these values into (10) we obtain the autocorrelation (and autocovariance) function of the sign of a Gaussian process:

$$\gamma_s(k)=\rho_s(k)=\sum_{j\text{ odd}}g_j^2\,\rho(k)^j=\frac{2}{\pi}\arcsin\rho(k).\qquad\square$$
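The resummation to the arcsine law (62) can be verified numerically from the closed-form coefficients above (a sketch; helper names are ours):

```python
from math import gamma, factorial, pi, sqrt, asin

def g_sign(j):
    # g_{2j+1} for the sign function, as derived above.
    return (-1) ** j * 2 ** (j + 0.5) * gamma(j + 0.5) / (pi * sqrt(factorial(2 * j + 1)))

def acf_series(rho, jmax=60):
    # sum over odd Hermite orders of g^2 * rho^(2j+1), cf. (10) and (62).
    return sum(g_sign(j) ** 2 * rho ** (2 * j + 1) for j in range(jmax))
```

For instance, at ρ=1/2 the series sums to $(2/\pi)\arcsin(1/2)=1/3$, and $g_1^2=2/\pi$, $g_3=-1/\sqrt{3\pi}$ as stated.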

References

Abadir, K. A., W. Distaso, and L. Giraitis. 2007. "Nonstationarity-Extended Local Whittle Estimation." Journal of Econometrics 141: 1353–1384. doi:10.1016/j.jeconom.2007.01.020.

Alessio, E., A. Carbone, G. Castelli, and V. Frappietro. 2002. "Second-Order Moving Average and Scaling of Stochastic Time Series." European Physical Journal B 27 (2): 197–200. doi:10.1140/epjb/e20020150.

Alfarano, S., and T. Lux. 2007. "A Noise Trader Model as a Generator of Apparent Financial Power Laws and Long Memory." Macroeconomic Dynamics 11: 80–101. doi:10.1017/S1365100506060299.

Andrews, D. W. K., and Y. Sun. 2004. "Adaptive Polynomial Whittle Estimation of Long-Range Dependence." Econometrica 72: 569–614. doi:10.1111/j.1468-0262.2004.00501.x.

Arianos, S., and A. Carbone. 2007. "Detrending Moving Average Algorithm: A Closed-Form Approximation of the Scaling Law." Physica A 382 (1): 9–15. doi:10.1016/j.physa.2007.02.074.

Arteche, J. 2004. "Gaussian Semiparametric Estimation in Long Memory in Stochastic Volatility and Signal-plus-noise Models." Journal of Econometrics 119: 131–154. doi:10.1016/S0304-4076(03)00158-1.

Bali, R., and G. L. Hite. 1998. "Ex Dividend Day Stock Price Behavior: Discreteness or Tax-Induced Clienteles?" Journal of Financial Economics 47: 127–159. doi:10.1016/S0304-405X(97)00041-X.

Bardet, J.-M., and I. Kammoun. 2008. "Asymptotic Properties of the Detrended Fluctuation Analysis of Long Range Dependent Processes." IEEE Transactions on Information Theory 54: 2041–2052. doi:10.1109/TIT.2008.920328.

Barrett, J. F., and D. G. Lampard. 1955. "An Expression for Some Second-Order Probability Distributions and its Application to Noise Problems." IRE Transactions on Information Theory 1: 10–15. doi:10.1109/TIT.1955.1055122.

Beran, J. 1994. Statistics for Long-Memory Processes. Boca Raton, FL: Chapman & Hall/CRC.

Corsi, F., and R. Renò. 2011. "Is Volatility Really Long Memory?" Working Paper.

Dahlhaus, R. 1989. "Efficient Parameter Estimation for Self-Similar Processes." Annals of Statistics 17: 1749–1766. doi:10.1214/aos/1176347393.

Dalla, V., L. Giraitis, and J. Hidalgo. 2006. "Consistent Estimation of the Memory Parameter for Nonlinear Time Series." Journal of Time Series Analysis 27: 211–251. doi:10.1111/j.1467-9892.2005.00464.x.

Delattre, S., and J. Jacod. 1997. "A Central Limit Theorem for Normalized Functions of the Increments of a Diffusion Process, in the Presence of Round-Off Errors." Bernoulli 3: 1–28. doi:10.2307/3318650.

Di Matteo, T., T. Aste, and M. Dacorogna. 2005. "Long-Term Memories of Developed and Emerging Markets: Using the Scaling Analysis to Characterize their Stage of Development." Journal of Banking & Finance 29: 827–851. doi:10.1016/j.jbankfin.2004.08.004.

Dittmann, I., and C. W. J. Granger. 2002. "Properties of Nonlinear Transformations of Fractionally Integrated Processes." Journal of Econometrics 110: 113–133. doi:10.1016/S0304-4076(02)00089-1.

Embrechts, P., C. Klüppelberg, and T. Mikosch. 1997. Modelling Extremal Events: For Insurance and Finance. Berlin, Heidelberg: Springer-Verlag. doi:10.1007/978-3-642-33483-2.

Engle, R. F., and J. R. Russell. 2010. "Analysis of High-Frequency Data." In Handbook of Financial Econometrics, edited by Y. Aït-Sahalia and L. P. Hansen, Vol. 1, 383–426. Amsterdam: North-Holland. doi:10.1016/B978-0-444-50897-3.50010-9.

Fox, R., and M. S. Taqqu. 1986. "Large-Sample Properties of Parameter Estimates for Strongly Dependent Stationary Gaussian Time Series." Annals of Statistics 14: 517–532. doi:10.1214/aos/1176349936.

Giraitis, L., and D. Surgailis. 1990. "A Central Limit Theorem for Quadratic Forms in Strongly Dependent Linear Variables and Application to Asymptotical Normality of Whittle's Estimate." Probability Theory and Related Fields 86: 87–104. doi:10.1007/BF01207515.

Gottlieb, G., and A. Kalay. 1985. "Implication of the Discreteness of Observed Stock Prices." Journal of Finance 40: 135–153. doi:10.1111/j.1540-6261.1985.tb04941.x.

Gradshteyn, I. S., and I. M. Ryzhik. 1980. Tables of Integrals, Series, and Products. 4th ed. New York: Academic Press.

Grech, D., and Z. Mazur. 2013. "On the Scaling Ranges of Detrended Fluctuation Analysis for Long-Term Memory Correlated Short Series of Data." Physica A 392 (10): 2384–2397. doi:10.1016/j.physa.2013.01.049.

Gu, G.-F., and W.-X. Zhou. 2010. "Detrending Moving Average Algorithm for Multifractals." Physical Review E 82 (1): 011136. doi:10.1103/PhysRevE.82.011136.

Hansen, P. R., and A. Lunde. 2010. Estimating the Persistence and the Autocorrelation Function of a Time Series that is Measured with Error. CREATES Research Papers 2010-08, School of Economics and Management, University of Aarhus. doi:10.2139/ssrn.1550269.

Hardy, G. H. 1991. Divergent Series. Providence, RI: AMS Chelsea.

Harris, L. 1990. "Estimation of Stock Price Variances and Serial Covariances from Discrete Observations." The Journal of Financial and Quantitative Analysis 25: 291–306. doi:10.2307/2330697.

Hosking, J. R. M. 1996. "Asymptotic Distributions of the Sample Mean, Autocovariances, and Autocorrelations of Long-Memory Time Series." Journal of Econometrics 73: 261–284. doi:10.1016/0304-4076(95)01740-2.

Hurvich, C. M., and B. K. Ray. 2003. "The Local Whittle Estimator of Long-Memory Stochastic Volatility." Journal of Financial Econometrics 1: 445–470. doi:10.1093/jjfinec/nbg018.

Hurvich, C. M., E. Moulines, and P. Soulier. 2005. "Estimating Long Memory in Volatility." Econometrica 73: 1283–1328. doi:10.1111/j.1468-0262.2005.00616.x.

Jiang, Z.-Q., and W.-X. Zhou. 2011. "Multifractal Detrending Moving Average Cross-Correlation Analysis." Physical Review E 84 (1): 016106. doi:10.1103/PhysRevE.84.016106.

Keenan, D. M. 1982. "A Time Series Analysis of Binary Data." Journal of the American Statistical Association 77: 816–821. doi:10.1080/01621459.1982.10477892.

Künsch, H. R. 1987. "Statistical Aspects of Self-Similar Processes." In Proceedings of the 1st World Congress of the Bernoulli Society, edited by Yu. A. Prohorov and V. V. Sazonov, Vol. 1, 67–74. Utrecht: Science Press. doi:10.1515/9783112314227-005.

La Spada, G., J. D. Farmer, and F. Lillo. 2011. "Tick Size and Price Diffusion." In Econophysics of Order-driven Markets, edited by F. Abergel, B. K. Chakrabarti, A. Chakraborti, and M. Mitra, 173–188. Milan: Springer. doi:10.1007/978-88-470-1766-5_12.

Lillo, F., and J. D. Farmer. 2004. "The Long Memory of the Efficient Market." Studies in Nonlinear Dynamics & Econometrics 8: 1. doi:10.2202/1558-3708.1226.

Mandelbrot, B. B., and J. W. van Ness. 1968. "Fractional Brownian Motion, Fractional Noises and Applications." SIAM Review 10: 422–437. doi:10.1137/1010093.

Palma, W. 2007. Long-Memory Time Series: Theory and Methods. Hoboken, NJ: John Wiley & Sons.

Peng, C.-K., S. V. Buldyrev, S. Havlin, M. Simons, H. E. Stanley, and A. L. Goldberger. 1994. "Mosaic Organization of DNA Nucleotides." Physical Review E 49: 1685–1689. doi:10.1103/PhysRevE.49.1685.

Phillips, P. C. B., and K. Shimotsu. 2004. "Local Whittle Estimation in Nonstationary and Unit Root Cases." Annals of Statistics 32: 656–692. doi:10.1214/009053604000000139.

Robinson, P. M. 1995. "Gaussian Semiparametric Estimation of Long Range Dependence." Annals of Statistics 23: 1630–1661. doi:10.1214/aos/1176324317.

Rosenbaum, M. 2009. "Integrated Volatility and Round-Off Error." Bernoulli 15: 687–720. doi:10.3150/08-BEJ170.

Rossi, E., and P. Santucci de Magistris. 2011. "Estimation of Long Memory in Integrated Variance." CREATES Research Papers 2011, School of Economics and Management, University of Aarhus. doi:10.2139/ssrn.1808619.

Rudin, W. 1976. Principles of Mathematical Analysis. 3rd ed. New York: McGraw-Hill.

Saint-Paul, G. 2011. "A 'Discretized' Approach to Rational Inattention." IDEI Working Paper, n. 597.

Schmitt, F., D. Schertzer, and S. Lovejoy. 2000. "Multifractal Fluctuations in Finance." International Journal of Theoretical and Applied Finance 3: 361–364. doi:10.1142/S0219024900000206.

Shao, X., and W. B. Wu. 2007. "Local Whittle Estimation of Fractional Integration for Nonlinear Processes." Econometric Theory 23: 899–929. doi:10.1017/S0266466607070387.

Shao, Y.-H., G.-F. Gu, Z.-Q. Jiang, W.-X. Zhou, and D. Sornette. 2012. "Comparing the Performance of FA, DFA and DMA Using Different Synthetic Long-Range Correlated Time Series." Scientific Reports 2: 835.

Shimotsu, K., and P. C. B. Phillips. 2005. "Exact Local Whittle Estimation of Fractional Integration." Annals of Statistics 33: 1890–1933. doi:10.1214/009053605000000309.

Shimotsu, K., and P. C. B. Phillips. 2006. "Local Whittle Estimation of Fractional Integration and Some of its Variants." Journal of Econometrics 130: 209–233. doi:10.1016/j.jeconom.2004.09.014.

Sinai, Y. G. 1976. "Self Similar Probability Distributions." Theory of Probability and Its Applications 21: 64–80. doi:10.1137/1121005.

Szpiro, G. G. 1998. "Tick Size, the Compass Rose and Market Nanostructure." Journal of Banking & Finance 22: 1559–1569. doi:10.1016/S0378-4266(98)00073-9.

Velasco, C. 1999. "Gaussian Semiparametric Estimation of Non-Stationary Time Series." Journal of Time Series Analysis 20: 87–127. doi:10.1111/1467-9892.00127.

Yamasaki, K., L. Muchnik, S. Havlin, A. Bunde, and H. E. Stanley. 2005. "Scaling and Memory in Volatility Return Intervals in Financial Markets." Proceedings of the National Academy of Sciences of the United States of America 102: 9424–9428. doi:10.1073/pnas.0502613102.

Zygmund, A. 1959. Trigonometric Series. Cambridge: Cambridge University Press.


Supplemental Material

The online version of this article (DOI:10.1515/snde-2013-0011) offers supplementary material, available to authorized users.


Published Online: 2013-10-22
Published in Print: 2014-9-1

©2014 by De Gruyter
