## Abstract

We study how the round-off (or discretization) error changes the statistical properties of a Gaussian long memory process. We show that the autocovariance and the spectral density of the discretized process are asymptotically rescaled by a factor smaller than one, and we compute this scaling factor exactly. Consequently, we find that the discretized process is also long memory with the same Hurst exponent as the original process. We consider the properties of two estimators of the Hurst exponent, namely the local Whittle (LW) estimator and the detrended fluctuation analysis (DFA). By using analytical considerations and numerical simulations we show that, in the presence of round-off error, both estimators are severely negatively biased in finite samples. Under regularity conditions we prove that the LW estimator applied to discretized processes is consistent and asymptotically normal. Moreover, we compute the asymptotic properties of the DFA for a generic (i.e., non-Gaussian) long memory process and we apply the result to discretized processes.

## Acknowledgements

FL acknowledges partial support by the grant SNS11LILLB “Price formation, agents heterogeneity, and market efficiency”. We are grateful to Yacine Aït-Sahalia, Fulvio Corsi, Gabriele Di Cerbo, Ulrich K. Müller, Christopher Sims, and two anonymous referees for their helpful comments and suggestions. We wish to thank J. D. Farmer for useful discussions inspiring the beginning of this work. Any remaining errors are our responsibility.

## Appendix A: Distributional properties

In this appendix we consider the distributional properties of the discretization of a generic stationary Gaussian process.

From (5) the *m*-th moment of the discretized process can be written in closed form. Since *X* is Gaussian distributed, the variance *D*_{d} of the discretized process can be calculated explicitly: from the expression above with *m*=2 we obtain

The left panel of Figure 1 shows the ratio *D*_{d}/*D* as a function of the scaling parameter *χ*. It is worth noting that this ratio is not monotonic. For small *χ* the ratio goes to zero because *δ* is very large relative to *D* and essentially all the probability mass falls in the bin centered at zero. In this regime the variance ratio goes to zero as

When *χ*≫1 the ratio tends to one because the effect of discretization becomes irrelevant. In this regime *D*_{d}/*D*≃1+1/(12*χ*).

Analogously it is possible to calculate the kurtosis

For small *χ* the kurtosis diverges as

because the fourth moment goes to zero more slowly than the squared second moment. For large *χ* the kurtosis converges, as expected, to the Gaussian value 3, with *κ*_{d}≃3–1/(120*χ*^{2}). Note that the kurtosis approaches its asymptotic value 3 from below: it reaches a minimum of roughly 2.982 at *χ*≃0.53 and then converges to 3.
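The two asymptotic regimes can be checked by direct simulation. The sketch below is illustrative only; it assumes the discretization rule *X*_{d}=*δ*·round(*X*/*δ*) and the parametrization *χ*=*D*/*δ*^{2}, which is consistent with the large-*χ* expansions above.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 1.0                    # variance of the underlying Gaussian variable
chi = 4.0                  # scaling parameter, taken here as chi = D / delta**2
delta = np.sqrt(D / chi)   # implied bin width

x = rng.normal(0.0, np.sqrt(D), size=1_000_000)
x_d = delta * np.round(x / delta)        # round-off to the grid {k * delta}

ratio = x_d.var() / D                    # Monte Carlo estimate of D_d / D
kurt = np.mean((x_d - x_d.mean())**4) / x_d.var()**2

print(ratio)   # close to 1 + 1/(12*chi) for large chi
print(kurt)    # close to 3 - 1/(120*chi**2)
```

At *χ*=4 the Monte Carlo variance ratio lands near the expansion value 1+1/48, and the kurtosis sits just below 3, as the text describes.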

## Appendix B: Proofs for Section 3

**Proof of Proposition 1.** The discretized process, *X*_{d}(*t*), is a non-linear transformation of the underlying real-valued process, *X*(*t*). Let us denote the discretization transformation by *g*(‧), so that *X*_{d}(*t*)=*g*(*X*(*t*)). From (10) we know that

where *ρ* is the autocorrelation function of the underlying continuous process, and the *g*_{j} are defined as in (11).

Since the function *g*(*x*) is an odd function, *g*_{j}=0 for *j* even, while all the odd coefficients are non-vanishing. Therefore, the discretization function has Hermite rank 1 and can be written as an infinite sum of odd Hermite polynomials. The generic coefficient *g*_{j} is

The first Hermite polynomial is *H*_{1}(*x*)=*x* and the coefficient *g*_{1} is

where *ϑ*_{a}(*u, q*) is the elliptic theta function. For large *χ* the coefficient *g*_{1} goes to zero as

The second non-vanishing coefficient *g*_{3} is given by

In principle one could calculate all the coefficients *g*_{j}. Here we want to focus on the case in which the correlation coefficient *ρ* is small (i.e., *k* is large), so that it suffices to consider the first two coefficients *g*_{1} and *g*_{3}. Therefore, from (10) we have

If we plug (1) and (36) into (38), we get the result. □
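The leading-order approximation used in this proof can be checked numerically. The sketch below is an illustration under the assumption that the discretization is *g*(*x*)=*δ*·round(*x*/*δ*) with a unit-variance Gaussian input; it estimates *g*_{1}=E[*g*(*X*)*X*] by Monte Carlo and compares the sample autocorrelation of a discretized Gaussian pair with the Hermite-rank-1 term *g*_{1}^{2}*ρ*/*D*_{d}, which dominates when *ρ* is small.

```python
import numpy as np

rng = np.random.default_rng(1)
delta = 1.0                              # bin width (illustrative choice)
g = lambda x: delta * np.round(x / delta)

n = 1_000_000
rho = 0.2                                # small correlation: the g_1 term dominates
z1 = rng.normal(size=n)
z2 = rho * z1 + np.sqrt(1 - rho**2) * rng.normal(size=n)   # Corr(z1, z2) = rho

g1 = np.mean(g(z1) * z1)                 # Hermite coefficient g_1 = E[g(X) X]
D_d = g(z1).var()                        # variance of the discretized process
rho_d = np.corrcoef(g(z1), g(z2))[0, 1]  # sample correlation after round-off

rank1 = g1**2 * rho / D_d                # Hermite-rank-1 approximation
```

The neglected *g*_{3} term is of order *ρ*^{3}, so for *ρ*=0.2 the rank-1 approximation is accurate to a few parts per thousand.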

**Proof of Proposition 2.** Under the assumption that

**Proof of Corollary 1.** It follows from the definition of autocorrelation function and Proposition 1. □

**Proof of Proposition 3.** From Proposition 1 we can write, as *k*→∞,

Then, the proposition follows from Theorem 3.3 (a) in Palma (2007). □

**Proof of Proposition 4.** For the sake of simplicity, we consider the case of an underlying process with unit variance, namely *D*=1. To extend the proof to non-unit variance we simply need to apply the transformation *L*(*k*)→*L*(*k*)/*D*. Obviously, *L*(*k*)/*D* is still slowly varying at infinity. Moreover, we consider only the case *I*=∞, the case *I*<∞ being trivial.

This proof is divided into two parts: in the first part we prove (17) and (18) under the assumption that

**First part** From Proposition 2 and Theorem 2.1 in Beran (1994) we know that the spectral density *ϕ*_{d}(*ω*) exists. Moreover, there exists *K*>0 s.t.

As *ω*→0^{+}, the first two terms on the RHS converge to a constant, while the third term diverges.

Since *k*≥*K*, we can write

Therefore,

where the term *O*(1) comes from substituting

Then, we introduce the following representation of a trigonometric series

where the expansion of the polylogarithm *Li*_{s}(*e*^{z}) for small *z* is (see Gradshteyn and Ryzhik 1980)

where *ζ*(‧) is the analytic continuation of the Riemann zeta function over the complex plane and *H*_{s} is the *s*th harmonic number.
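The small-*ω* behavior implied by this polylogarithm expansion can be checked numerically. The sketch below uses illustrative values *s*=0.6 and *ω*=0.01 (not taken from the paper), computes *ζ*(*s*) via the alternating Dirichlet eta series, and compares a truncated series Σ_{k} *k*^{–s} cos(*kω*) with the leading terms of Re *Li*_{s}(*e*^{iω}), namely Γ(1–*s*) sin(π*s*/2) *ω*^{s–1}+*ζ*(*s*).

```python
import numpy as np
from math import gamma, sin, pi

s = 0.6        # exponent of the hyperbolically decaying coefficients
omega = 0.01   # small frequency

k = np.arange(1, 2_000_001, dtype=float)
series = np.sum(k**-s * np.cos(k * omega))    # truncated trigonometric series

# zeta(s) on (0, 1) via the alternating (Dirichlet eta) series,
# with a half-term correction for the truncation
terms = k**-s
eta = terms[::2].sum() - terms[1::2].sum() + 0.5 * (len(k) + 1.0)**-s
zeta_s = eta / (1 - 2**(1 - s))

# leading terms of Re Li_s(e^{i*omega}) for small omega
approx = gamma(1 - s) * sin(pi * s / 2) * omega**(s - 1) + zeta_s
```

The divergent *ω*^{s–1} term dominates as *ω*→0^{+}, exactly the singular behavior the proof isolates in the spectral density.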

Finally, we plug (41) or (42) into (40), and then we apply (40) into (39). By substituting (36) for

**Second part** Under Assumption 1 (i) the series *k*≥1. Therefore, by Mertens’ theorem,^{[9]}
for any *j*≥0 we can write

where the series on the RHS is the Cauchy product. Note that *i*, while *j*. By absolute convergence of the original series the Cauchy product also converges absolutely ∀*k*≥1.
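The Mertens step can be illustrated with two absolutely convergent geometric series (a stand-in example, not the series of the proof): the sum of their Cauchy product equals the product of their sums.

```python
import numpy as np

# two absolutely convergent geometric series: sum 2^-i = 2, sum 3^-j = 3/2
N = 60
a = 0.5 ** np.arange(N)
b = (1.0 / 3.0) ** np.arange(N)

# Cauchy product c_k = sum_{i=0}^{k} a_i b_{k-i}, i.e. a discrete convolution
c = np.convolve(a, b)

lhs = c.sum()               # sum of the Cauchy product
rhs = a.sum() * b.sum()     # product of the two sums (Mertens' theorem)
```

Both sides equal 3 up to floating-point error, as Mertens' theorem guarantees for absolutely convergent series.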

From Herglotz’s theorem and Proposition 2 we can write

where

Since the series converges ∀*k*≥1 and, under Assumption 1 (ii), there is only a finite number of terms in *L*(*k*) (and therefore in the autocovariance) that are not summable in *k*, from Rudin (1976) (Chapter 8, Theorem 8.3) we can invert the order of summation and write

In other words, the Fourier transform of the series becomes the series of the Fourier transforms. From this point on the proof is very similar to that of Lemma 1 and therefore omitted for brevity. □

**Proof of Corollary 2.** If *L* is analytic at infinity, then *L*(*k*) can be written as a power series in 1/*k* with coefficients {*b*_{n}}, and the series converges absolutely within its radius of convergence. Then,

with *c*_{2}>0 and *c*_{1}>0. □

**Proof of Corollary 3.** For a fGn the autocovariance is given by (3) ∀*k*≥1. Because *L*(*k*) is analytic and even, *β*_{i}=2*i* ∀*i*≥0 and we can write

Thus, the autocovariance of a fGn satisfies Assumption 1, and therefore from Corollary 2 it follows that the spectral density of a discretized fGn satisfies (20) with *c*_{0} given by (19).

From Corollary 2 we already know that the second-order term is strictly positive if *H*≥5/6. Hence, we just need to prove that *c*_{0}>0 if *H*<5/6. Since *c*_{0} is given by (19) we can write

where we used the fact that for a fGn *β*_{i}=2*i* ∀*i*≥0. The symbol

First, since the {*b*_{i}} are strictly positive and (2*j*+1)(2–2*H*)>1 ∀*j*≥1 if *H*<5/6, the third term on the RHS of (43) is strictly positive for *H*<5/6.

Second, from Sinai (1976) and Beran (1994) we know that the spectral density of a fGn satisfies

where *ω*∈[–π, π]. For *H*∈[0.5, 1] the behavior of the above spectral density near zero follows by Taylor expansion:

On the other hand, because *L*(*k*) is analytic with *β*_{1}=2, it satisfies Assumption 2; therefore, the fGn satisfies the conditions of Lemma 1. Following the proof of that lemma, after some algebraic manipulations, we get

where

Finally, note that from (10) we know that

□

## Appendix C: Proofs for Section 4

**Proof of Lemma 1.** From Theorem 2.1 in Beran (1994) we know that the spectral density *ϕ*(*ω*) exists, and from Herglotz’s theorem it is the discrete Fourier transform of the autocovariance

Let *α*_{i}=2–2*H*+*β*_{i} ∀*i*≥0. Under Assumption 1 (ii) there is only a finite number of terms in the autocovariance of *X* that are not summable, and therefore there will be only a finite number of divergent terms in the spectral density. Moreover, under Assumption 1 (i) the series converges absolutely ∀*k*≥1. Therefore, by Rudin (1976) (Theorem 8.3) we can write

By using the polylogarithm representation (40) introduced above, for small *ω* we can plug (41) and (42) into (47). Let

where *ζ*(‧) is the analytic continuation of the Riemann zeta function over the complex plane and *H _{s}* is the

*s*th harmonic number.

Under Assumption 1 we can gather all the terms of the same order and rearrange (48) in powers of *ω*. Denoting by a single constant the terms that are *O*(1) in (48), we can write

Under Assumption 2 *α*_{1}≠1, and therefore, if also *α*_{1}≠2, we can write

where *c _{ϕ}* is defined as in Proposition 3 and

If *α*_{1}=2, by Assumption 2

Putting together (49) and (50), and noting that if *β*_{1}≤2, we get the result

where *c*_{β}≠0 and *β*∈(0, 2]. □

**Proof of Theorem 1.** Following the proof of Theorem 4 in DGH, because *j*_{0}=1 we can write *Y*_{t} as a signal-plus-noise process *Y*_{t}=*W*_{t}+*Z*_{t}, where

and *j*_{1} is the second non-vanishing term in the Hermite expansion and *H*_{j}(‧) is the *j*th Hermite polynomial.

**Part (i)** *W*_{t} satisfies Assumption A in DGH, i.e.,

where *X*_{t} is a stationary purely non-deterministic Gaussian process; in particular, *X*_{t} is linear with finite fourth moments. Consequently, *W*_{t}=*g*_{1}*X*_{t} is also linear with finite fourth moments. Then, we can write

where *ε*_{t} are i.i.d. Gaussian variables with zero mean and unit variance, so that *W*_{t} satisfies also Assumption B therein.
_{t}We show below that the spectral density of *Z _{t}* satisfies

*ω*→0

^{+}, for some

*C*>0 and

*H*≥0.5 such that

_{z}for any *ε*∈(0.5, *H*).

Indeed, if *j*_{1}(2–2*H*)>1, from (10)

for some *C*>0. Therefore, *H*_{z}=0.5<*H*.

If *j*_{1}(2–2*H*)<1, we can prove that

for some *C*>0 and *H*_{z}=*H*–(*j*_{1}–1)(1–*H*)<*H* for *H*∈(0.5, 1). Similarly, if *j*_{1}(2–2*H*)=1, we can prove that

for some *C*>0 and for any *ε*>0. The proof of the above results is a special case of the proof of Proposition 4, and thus omitted. The results above prove (51).

Since *W*_{t} satisfies Assumptions A and B in DGH and the spectral density *ϕ*_{z} satisfies the asymptotic conditions above, the consistency of the LW estimator follows.
_{z}Moreover, if we write the periodogram of *Y _{t}* as

*I*(

_{Y}*ω*)=

_{j}*I*(

_{W}*ω*)+

_{j}*v*, where

_{j}*I*is the periodogram of the “signal”

_{W}*W*and

_{t}*v*is the contribution of the “noise”

_{j}*Z*at the

_{t}*j*th Fourier frequency, it is straightforward to show (see DGH pp. 225–226) that

where *X _{t}*} if the sequence {

*X*} were observed.

_{t}Note that, roughly speaking, *v _{j}* represents the sample estimate of the higher-order terms of the spectral density of

*Y*at the

_{t}*j*th Fourier frequency. For the discretization of a fGn we know from Corollary 3 that the second-order term of the spectral density is strictly positive for all

*H*; therefore, in that case, we expect that the second term on the RHS of the first line of (52) will induce a negative finite sample bias on

**Part (ii)** Under Assumptions 1 and 2, from Lemma 1 it follows that *X*_{t}, and therefore *W*_{t}, satisfies Assumption *T*(*α*_{0}, *β*) in DGH, with *α*_{0}=2*H*–1 and *β* defined as in Lemma 1. Moreover, under Assumption 3 we can combine the second part of Proposition 5 in DGH with Proposition 3 and Theorem 2 therein, and under the assumption that *m*=*o*(*n*^{2β/(2β+1)}) we can write

where *c*_{β} is defined as in Lemma 1, *B*_{β}=(2π)^{β}*β*/(*β*+1)^{2}, and

with *η*_{j}=*I*_{X}(*ω*_{j})/*ϕ*_{x}(*ω*_{j}).

_{j}Let *r*=*H*–*H _{z}*. By plugging (53) into (52) we obtain

Moreover, under Assumption 3 and *m*=*o*(*n*^{2β/(2β+1)}), by Robinson’s (1995) Theorem 2

Therefore, *V*_{m}=*O*_{P}(*m*^{–1/2}), and (26) follows from (54).

**Part (iii)** If *m*=*o*(*n*^{2}^{r}^{/(2}^{r}^{+1)}), equation (27) follows from applying (55) in (54). □

**Proof of Corollary 4.** The result of the corollary follows directly from Theorem 1, and from noticing that the second non-vanishing Hermite coefficient for the discretized process is *g*_{3}≠0, so that *j*_{1}=3. □
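For concreteness, here is a minimal sketch of the LW estimator analyzed in Theorem 1, implemented as a grid minimization of Robinson's (1995) concentrated objective; this is an illustration, not the implementation used in the paper. On white noise, for which the memory parameter is *d*=0 (*H*=0.5), the estimate should be close to zero.

```python
import numpy as np

def local_whittle_d(x, m):
    """Local Whittle estimate of the memory parameter d (H = d + 1/2):
    grid minimization of Robinson's (1995) concentrated objective."""
    n = len(x)
    w = 2 * np.pi * np.arange(1, m + 1) / n              # first m Fourier frequencies
    I = np.abs(np.fft.fft(x - x.mean())[1 : m + 1]) ** 2 / (2 * np.pi * n)
    grid = np.linspace(-0.45, 0.95, 561)
    obj = [np.log(np.mean(w ** (2 * d) * I)) - 2 * d * np.mean(np.log(w))
           for d in grid]
    return grid[int(np.argmin(obj))]

rng = np.random.default_rng(2)
x = rng.normal(size=4096)       # white noise: true d = 0 (H = 0.5)
d_hat = local_whittle_d(x, m=256)
```

The asymptotic standard deviation is 1/(2√*m*)≈0.03 here, so the estimate on white noise is well within a tenth of zero in typical runs.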

For the proof of Theorem 2 we need the following lemmas. Note that the proofs of the lemmas are at the end of this Appendix.

**Lemma 2** *Let*

*where B*_{2}(*x*)=1/6–*x*+*x*^{2} *is the third Bernoulli polynomial and* {*x*} *represents the fractional part of the real number x. Then, both quantities converge to a finite number as m*→∞.

Note that *R*(*α*) and

**Lemma 3** *Let α*>0 *and α*≠1, 2. *Then, as i*→∞,

*where A*_{0}(‧) *and A*_{1}(‧) *are the coefficients appearing in the expansion above, and R*(‧) *is defined as in (56).*

**Lemma 4** *Then, as i*→∞,

*where R*_{1}≡*R*(*α*=–1) *and R*_{2}≡*R*(*α*=–2).

Before proving Theorem 2 we prove the following proposition.

**Proposition 5** *Under the assumptions of Theorem 2, let* Σ_{m} *denote the covariance matrix of the integrated process* (*Y*(1), …, *Y*(*m*))*, with entries* Cov(*Y*(*i*), *Y*(*j*)). *Then,*

(*i*) *if β*≠2*H*–1, *then*

(*ii*) *if β*=2*H*–1, *then*

**Proof.** Under the assumptions on *X*(*t*), for 1≤*i, j*≤*m* we can write

where *D* is the variance of the process *X*(*t*). By substituting the explicit functional form for *γ*(*k*) we get

for some *M*>0 sufficiently large. By Lemma 3 we have

Now, we consider the following cases:

(*i*) *β*≠2*H*–1. In this case we have to distinguish two sub-cases.

If *β*≠2*H*, we can use Lemma 3 and obtain

If *β*=2*H*, then we can use Lemma 4

Then, we repeat the same calculation for the second and third term in (58). By noting that min(*i, j*) is either of order *O*(*i*) or *O*(*j*) and putting together all the terms, we obtain the result.

(*ii*) *β*=2*H*–1. In this case we can use Lemma 4

Then, we repeat the same calculation for the second and third term in (58). By noting that min(*i, j*) is either of order *O*(*i*) or *O*(*j*) and putting together all the terms, we obtain the result. □

**Proof of Theorem 2.** First, for *j*∈{1, …, [*n*/*m*]} let us define the vector:

where *x*^{⊤} means the transpose of *x*.

Then, following Bardet and Kammoun (2008),

where *E*_{1} is the vector subspace spanned by *e*_{1}=(1, …, 1)^{⊤} and *e*_{2}=(1, 2, …, *m*)^{⊤}, the projection is onto *E*_{1}, and the second equality holds because the projection operator is idempotent. As a consequence,

where Tr(·) is the trace of a square matrix.
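The projection onto the orthogonal complement of span{*e*_{1}, *e*_{2}} amounts to removing a fitted linear trend from the integrated series in each window, which is exactly the DFA recipe. A minimal DFA sketch (illustrative, not the paper's implementation) applied to i.i.d. noise recovers a log-log slope close to *H*=0.5.

```python
import numpy as np

def dfa(x, window_sizes):
    """Minimal DFA: integrate, split into windows, remove the projection onto
    span{e1, e2} (a fitted linear trend), return RMS fluctuations F(m)."""
    y = np.cumsum(x - x.mean())                  # integrated process Y
    F = []
    for m in window_sizes:
        t = np.arange(m)
        res = []
        for start in range(0, len(y) - m + 1, m):
            seg = y[start : start + m]
            trend = np.polyval(np.polyfit(t, seg, 1), t)   # projection on E_1
            res.append(np.mean((seg - trend) ** 2))
        F.append(np.sqrt(np.mean(res)))
    return np.array(F)

rng = np.random.default_rng(3)
x = rng.normal(size=2**14)                       # i.i.d. noise: H = 0.5
sizes = np.array([32, 64, 128, 256, 512])
H_est, _ = np.polyfit(np.log(sizes), np.log(dfa(x, sizes)), 1)  # slope ~ H
```

The window sizes are arbitrary illustrative choices; as Grech and Mazur (2013) discuss, the scaling range matters for short series.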

**Case (i)** If *β*≠2*H*–1, from Proposition 5 we get

where the error *O*(*m*^{–1}) comes from approximating the sum with the integral; therefore,

For the term

Then, using (60) and Proposition 5 we can write

Approximating sums with integrals we get

Putting together (59) and (61) we obtain

which is the formula in the statement of the theorem.

**Case (ii)** If *β*=2*H*–1, the proof is exactly the same, except for replacing all the terms *O*(*i*^{–min(2H–1, β)}) with the terms *O*(*i*^{1–2H} ln *i*). □

**Proof of Corollary 5.** It follows from the autocovariance of a fractional Gaussian noise (see formula (3)). The proof is very similar to the proof of Theorem 2, and thus omitted. However, a complete proof can be found in Bardet and Kammoun (2008) (see Proof of Property 3.1 therein). □

**Proof of Corollary 6.** It follows from Proposition 2 and Theorem 2. □

### Proofs of lemmas for Section 4

**Proof of Lemma 2.** First we prove that *R*(*α*) converges. Because |*B*_{2}({1–*t*})|≤*B*_{2}(0)=1/6 for all *t*, the integrand in (56) is bounded in absolute value by *t*^{–2+α}/6, and the integral converges as *m*→∞ because 2–*α*>1.

Now we prove the convergence of the second quantity. Choose *ε*>0 s.t. 2–*α*–*ε*>1. One can always find such an *ε* because *α*<1 by assumption. Because log *t* is slowly varying, there exists *T*>1 s.t. *t*^{–2+α} log *t*<*t*^{–2+α+ε} for all *t*>*T*. Because |*B*_{2}({1–*t*})|≤1/6 for all *t*≥1, we can write

where the second integral converges as *m*→∞ because 2–*α*–*ε*>1. □

**Proof of Lemma 3.** By using the Euler–Maclaurin formula up to the first order we obtain

where *B*_{2}=1/6 is the third Bernoulli number and *R*(*α*) is the remainder of the Euler–Maclaurin expansion given by (56). From Lemma 2 we know that *R*(*α*) converges as *i*→∞, so we can write

where *A*_{1}(‧) is defined as in Lemma 3. Note that the second-order term is *O*(*i*).

Similarly, we can write

Note that in this case the second-order term is *O*(*i*^{1–}* ^{α}*).

Putting together these two terms we obtain the result, where *A*_{0}(‧) is defined as in Lemma 3. Note that the terms of order *i*^{1–α} cancel out exactly. □
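The Euler–Maclaurin expansion underlying Lemma 3 can be checked numerically. The sketch below uses the illustrative value *α*=1/2 (not taken from the paper) and computes *ζ*(*α*), the constant term of the expansion, via the alternating Dirichlet eta series; it then compares the partial sum Σ_{k≤n} *k*^{–α} with *n*^{1–α}/(1–*α*)+*ζ*(*α*)+*n*^{–α}/2.

```python
import numpy as np

alpha = 0.5
n = 1_000_000
k = np.arange(1, n + 1, dtype=float)
partial_sum = np.sum(k**-alpha)

# zeta(alpha) on (0, 1) via the alternating (Dirichlet eta) series,
# with a half-term correction for the truncation
j = np.arange(1, 2_000_001, dtype=float)
t = j**-alpha
eta = t[::2].sum() - t[1::2].sum() + 0.5 * (len(j) + 1.0)**-alpha
zeta_alpha = eta / (1 - 2**(1 - alpha))

# first-order Euler-Maclaurin expansion of the partial sum:
#   sum_{k<=n} k^-a = n^{1-a}/(1-a) + zeta(a) + n^-a/2 + O(n^{-a-1})
approx = n**(1 - alpha) / (1 - alpha) + zeta_alpha + 0.5 * n**-alpha
```

The remainder after the *n*^{–α}/2 term is *O*(*n*^{–α–1}), so at *n*=10^{6} the two sides agree to many digits.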

**Proof of Lemma 4.** The proof is similar to the proof of Lemma 3, and thus omitted. □

## Appendix D: Sign process

Taking the sign of a stochastic process can be thought of as an extreme form of discretization. Hence, to study the asymptotic properties of the sign process we can use the same technique outlined in Section 3.1 for general nonlinear transformations of Gaussian processes. By decomposing the sign transformation on the basis of Hermite polynomials we get the following

**Proposition 6** *Let X*(*t*) *be a stationary Gaussian process with autocovariance function given by Definition 1. Then, the autocovariance γ*_{s}(*k*) *and the autocorrelation ρ*_{s}(*k*) *of the sign process satisfy*

Therefore, also the sign transformation preserves the long memory property and the Hurst exponent. Moreover, if the autocorrelation *ρ* is small (e.g., if the lag *k* is large) we have

This expression has been obtained several times, as, for example, in the context of binary time series [see Keenan (1982)]. Note that, trivially, when the discretization is obtained by taking the sign function the variance of the discretized process is *D _{s}*=1.
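The small-*ρ* formula is the leading term of the classical arcsine law for the correlation of signs of jointly Gaussian variables, *ρ*_{s}=(2/π) arcsin *ρ*, which a short Monte Carlo confirms (illustrative sketch; *ρ*=0.5 is an arbitrary choice).

```python
import numpy as np

rng = np.random.default_rng(4)
n = 1_000_000
rho = 0.5                                    # correlation of the Gaussian pair

z1 = rng.normal(size=n)
z2 = rho * z1 + np.sqrt(1 - rho**2) * rng.normal(size=n)

rho_s = np.corrcoef(np.sign(z1), np.sign(z2))[0, 1]  # sign correlation
arcsine = (2 / np.pi) * np.arcsin(rho)               # = 1/3 for rho = 1/2
leading = (2 / np.pi) * rho                          # small-rho approximation
```

For *ρ*=0.5 the arcsine value is exactly 1/3, while the leading-order term (2/π)*ρ*≈0.318 is already a good approximation.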

All the results on the discretized process presented above hold true for the sign process as well, with

**Proof of Proposition 6.** Like the discretization, the sign transformation is an odd function, and therefore *g*_{j}=0 when *j* is even. When *j* is odd, the coefficients of the sign function on the Hermite polynomial basis are

By inserting these values in (10) we obtain the autocorrelation (and autocovariance) function of the sign of a Gaussian process

□

## References

Abadir, K. A., W. Distaso, and L. Giraitis. 2007. “Nonstationarity-Extended Local Whittle Estimation.” *Journal of Econometrics* 141: 1353–1384. doi:10.1016/j.jeconom.2007.01.020.

Alessio, E., A. Carbone, G. Castelli, and V. Frappietro. 2002. “Second-Order Moving Average and Scaling of Stochastic Time Series.” *European Physical Journal B* 27 (2): 197–200. doi:10.1140/epjb/e20020150.

Alfarano, S., and T. Lux. 2007. “A Noise Trader Model as a Generator of Apparent Financial Power Laws and Long Memory.” *Macroeconomic Dynamics* 11: 80–101. doi:10.1017/S1365100506060299.

Andrews, D. W. K., and Y. Sun. 2004. “Adaptive Polynomial Whittle Estimation of Long-Range Dependence.” *Econometrica* 72: 569–614. doi:10.1111/j.1468-0262.2004.00501.x.

Arianos, S., and A. Carbone. 2007. “Detrending Moving Average Algorithm: A Closed-Form Approximation of the Scaling Law.” *Physica A* 382 (1): 9–15. doi:10.1016/j.physa.2007.02.074.

Arteche, J. 2004. “Gaussian Semiparametric Estimation in Long Memory in Stochastic Volatility and Signal-plus-noise Models.” *Journal of Econometrics* 119: 131–154. doi:10.1016/S0304-4076(03)00158-1.

Bali, R., and G. L. Hite. 1998. “Ex Dividend Day Stock Price Behavior: Discreteness or Tax-Induced Clienteles?” *Journal of Financial Economics* 47: 127–159. doi:10.1016/S0304-405X(97)00041-X.

Bardet, J.-M., and I. Kammoun. 2008. “Asymptotic Properties of the Detrended Fluctuation Analysis of Long Range Dependent Processes.” *IEEE Transactions on Information Theory* 54: 2041–2052. doi:10.1109/TIT.2008.920328.

Barrett, J. F., and D. G. Lampard. 1955. “An Expression for Some Second-Order Probability Distributions and its Application to Noise Problems.” *IRE Transactions on Information Theory* 1: 10–15. doi:10.1109/TIT.1955.1055122.

Beran, J. 1994. *Statistics for Long-Memory Processes.* Boca Raton, Florida, USA: Chapman & Hall/CRC.

Corsi, F., and R. Renò. 2011. “Is Volatility Really Long Memory?” Working Paper.

Dahlhaus, R. 1989. “Efficient Parameter Estimation for Self-Similar Processes.” *Annals of Statistics* 17: 1749–1766. doi:10.1214/aos/1176347393.

Dalla, V., L. Giraitis, and J. Hidalgo. 2006. “Consistent Estimation of the Memory Parameter for Nonlinear Time Series.” *Journal of Time Series Analysis* 27: 211–251. doi:10.1111/j.1467-9892.2005.00464.x.

Delattre, S., and J. Jacod. 1997. “A Central Limit Theorem for Normalized Functions of the Increments of a Diffusion Process, in the Presence of Round-Off Errors.” *Bernoulli* 3: 1–28. doi:10.2307/3318650.

Di Matteo, T., T. Aste, and M. Dacorogna. 2005. “Long-Term Memories of Developed and Emerging Markets: Using the Scaling Analysis to Characterize their Stage of Development.” *Journal of Banking & Finance* 29: 827–851. doi:10.1016/j.jbankfin.2004.08.004.

Dittmann, I., and C. W. J. Granger. 2002. “Properties of Nonlinear Transformations of Fractionally Integrated Processes.” *Journal of Econometrics* 110: 113–133. doi:10.1016/S0304-4076(02)00089-1.

Embrechts, P., C. Klüppelberg, and T. Mikosch. 1997. *Modelling Extremal Events: For Insurance and Finance*. Berlin, Heidelberg: Springer-Verlag. doi:10.1007/978-3-642-33483-2.

Engle, R. F., and J. R. Russell. 2010. “Analysis of High-Frequency Data.” In *Handbook of Financial Econometrics*, edited by Y. Aït-Sahalia and L. P. Hansen, Vol. 1, 383–426. Amsterdam: North-Holland. doi:10.1016/B978-0-444-50897-3.50010-9.

Fox, R., and M. S. Taqqu. 1986. “Large-Sample Properties of Parameter Estimates for Strongly Dependent Stationary Gaussian Time Series.” *Annals of Statistics* 14: 517–532. doi:10.1214/aos/1176349936.

Giraitis, L., and D. Surgailis. 1990. “A Central Limit Theorem for Quadratic Forms in Strongly Dependent Linear Variables and Application to Asymptotical Normality of Whittle’s Estimate.” *Probability Theory and Related Fields* 86: 87–104. doi:10.1007/BF01207515.

Gottlieb, G., and A. Kalay. 1985. “Implication of the Discreteness of Observed Stock Prices.” *Journal of Finance* 40: 135–153. doi:10.1111/j.1540-6261.1985.tb04941.x.

Gradshteyn, I. S., and I. M. Ryzhik. 1980. *Tables of Integrals, Series, and Products*. 4th ed. New York: Academic Press.

Grech, D., and Z. Mazur. 2013. “On the Scaling Ranges of Detrended Fluctuation Analysis for Long-Term Memory Correlated Short Series of Data.” *Physica A* 392 (10): 2384–2397. doi:10.1016/j.physa.2013.01.049.

Gu, G.-F., and W.-X. Zhou. 2010. “Detrending Moving Average Algorithm for Multifractals.” *Physical Review E* 82 (1): 011136. doi:10.1103/PhysRevE.82.011136.

Hansen, P. R., and A. Lunde. 2010. “Estimating the Persistence and the Autocorrelation Function of a Time Series that is Measured with Error.” CREATES Research Papers 2010-08, School of Economics and Management, University of Aarhus. doi:10.2139/ssrn.1550269.

Hardy, G. H. 1991. *Divergent Series*. Providence, RI: AMS Chelsea.

Harris, L. 1990. “Estimation of Stock Price Variances and Serial Covariances from Discrete Observations.” *The Journal of Financial and Quantitative Analysis* 25: 291–306. doi:10.2307/2330697.

Hosking, J. R. M. 1996. “Asymptotic Distributions of the Sample Mean, Autocovariances, and Autocorrelations of Long-Memory Time Series.” *Journal of Econometrics* 73: 261–284. doi:10.1016/0304-4076(95)01740-2.

Hurvich, C. M., and B. K. Ray. 2003. “The Local Whittle Estimator of Long-Memory Stochastic Volatility.” *Journal of Financial Econometrics* 1: 445–470. doi:10.1093/jjfinec/nbg018.

Hurvich, C. M., E. Moulines, and P. Soulier. 2005. “Estimating Long Memory in Volatility.” *Econometrica* 73: 1283–1328. doi:10.1111/j.1468-0262.2005.00616.x.

Jiang, Z.-Q., and W.-X. Zhou. 2011. “Multifractal Detrending Moving Average Cross-Correlation Analysis.” *Physical Review E* 84 (1): 016106. doi:10.1103/PhysRevE.84.016106.

Keenan, D. M. 1982. “A Time Series Analysis of Binary Data.” *Journal of the American Statistical Association* 77: 816–821. doi:10.1080/01621459.1982.10477892.

Künsch, H. R. 1987. “Statistical Aspects of Self-Similar Processes.” In *Proceedings of the 1st World Congress of the Bernoulli Society*, edited by Yu. A. Prohorov and V. V. Sazonov, Vol. 1, 67–74. Utrecht: Science Press. doi:10.1515/9783112314227-005.

La Spada, G., J. D. Farmer, and F. Lillo. 2011. “Tick Size and Price Diffusion.” In *Econophysics of Order-driven Markets*, edited by F. Abergel, B. K. Chakrabarti, A. Chakraborti, and M. Mitra, 173–188. Milano, Italy: Springer. doi:10.1007/978-88-470-1766-5_12.

Lillo, F., and J. D. Farmer. 2004. “The Long Memory of the Efficient Market.” *Studies in Nonlinear Dynamics & Econometrics* 8: 1. doi:10.2202/1558-3708.1226.

Mandelbrot, B. B., and J. W. van Ness. 1968. “Fractional Brownian Motion, Fractional Noises and Applications.” *SIAM Review* 10: 422–437. doi:10.1137/1010093.

Peng, C.-K., S. V. Buldyrev, S. Havlin, M. Simons, H. E. Stanley, and A. L. Goldberger. 1994. “Mosaic Organization of DNA Nucleotides.” *Physical Review E* 49: 1685–1689. doi:10.1103/PhysRevE.49.1685.

Phillips, P. C. B., and K. Shimotsu. 2004. “Local Whittle Estimation in Nonstationary and Unit Root Cases.” *Annals of Statistics* 32: 656–692. doi:10.1214/009053604000000139.

Robinson, P. M. 1995. “Gaussian Semiparametric Estimation of Long Range Dependence.” *Annals of Statistics* 23: 1630–1661. doi:10.1214/aos/1176324317.

Rosenbaum, M. 2009. “Integrated Volatility and Round-Off Error.” *Bernoulli* 15: 687–720. doi:10.3150/08-BEJ170.

Rossi, E., and P. Santucci de Magistris. 2011. “Estimation of Long Memory in Integrated Variance.” CREATES Research Papers 2011–2011, School of Economics and Management, University of Aarhus. doi:10.2139/ssrn.1808619.

Rudin, W. 1976. *Principles of Mathematical Analysis*. 3rd ed. New York, USA: McGraw-Hill.

Saint-Paul, G. 2011. “A ‘Discretized’ Approach to Rational Inattention.” IDEI Working Paper, n. 597.

Schmitt, F., D. Schertzer, and S. Lovejoy. 2000. “Multifractal Fluctuations in Finance.” *International Journal of Theoretical and Applied Finance* 3: 361–364. doi:10.1142/S0219024900000206.

Shao, X., and W. B. Wu. 2007. “Local Whittle Estimation of Fractional Integration for Nonlinear Processes.” *Econometric Theory* 23: 899–929. doi:10.1017/S0266466607070387.

Shao, Y.-H., G.-F. Gu, Z.-Q. Jiang, W.-X. Zhou, and D. Sornette. 2012. “Comparing the Performance of FA, DFA and DMA using Different Synthetic Long-Range Correlated Time Series.” *Scientific Reports* 2: 835.

Shimotsu, K., and P. C. B. Phillips. 2005. “Exact Local Whittle Estimation of Fractional Integration.” *Annals of Statistics* 33: 1890–1933. doi:10.1214/009053605000000309.

Shimotsu, K., and P. C. B. Phillips. 2006. “Local Whittle Estimation of Fractional Integration and Some of its Variants.” *Journal of Econometrics* 130: 209–233. doi:10.1016/j.jeconom.2004.09.014.

Sinai, Y. G. 1976. “Self Similar Probability Distributions.” *Theory of Probability and Its Applications* 21: 64–80. doi:10.1137/1121005.

Szpiro, G. G. 1998. “Tick Size, the Compass Rose and Market Nanostructure.” *Journal of Banking & Finance* 22: 1559–1569. doi:10.1016/S0378-4266(98)00073-9.

Velasco, C. 1999. “Gaussian Semiparametric Estimation of Non-Stationary Time Series.” *Journal of Time Series Analysis* 20: 87–127. doi:10.1111/1467-9892.00127.

Yamasaki, K., L. Muchnik, S. Havlin, A. Bunde, and H. E. Stanley. 2005. “Scaling and Memory in Volatility Return Intervals in Financial Markets.” *Proceedings of the National Academy of Sciences of the United States of America* 102: 9424–9428. doi:10.1073/pnas.0502613102.

Zygmund, A. 1959. *Trigonometric Series*. Cambridge, UK: Cambridge University Press.

## Supplemental Material

The online version of this article (DOI:10.1515/snde-2013-0011) offers supplementary material, available to authorized users.

**Published Online:** 2013-10-22

**Published in Print:** 2014-9-1

©2014 by De Gruyter