Open Physics, Volume 15, Issue 1 (Jun 2017)

After notes on self-similarity exponent for fractal structures

Manuel Fernández-Martínez (corresponding author) and Manuel Caravaca Garratón

Department of Sciences, University Centre of Defence at the Spanish Air Force Academy, MDE-UPCT, 30720 Santiago de la Ribera, Murcia, Spain
Published Online: 2017-06-16 | DOI: https://doi.org/10.1515/phys-2017-0049

Abstract

Previous works have highlighted the suitability of the concept of fractal structure, which derives from asymmetric topology, to propound generalized definitions of fractal dimension. The aim of the present article is to collect some results and approaches that connect the self-similarity index and the fractal dimension of a broad spectrum of random processes. To this end, we shall use the concept of the fractal structure induced on the image set of a sample curve. The main result in this paper states that, given a sample function of a random process endowed with the induced fractal structure on its image, the self-similarity index of that function equals the inverse of its fractal dimension.

Keywords: Fractal; fractal structure; fractal dimension; Hausdorff dimension; self-similarity index

PACS: 02.50.Ng; 02.50.Ey; 02.70.Uu; 05.45.Df; 05.40.Jc

1 Introduction

The main idea in this paper lies in the calculation of the fractal dimension of a curve with respect to a certain fractal structure induced on its image. It is worth pointing out that such a dimension leads to fruitful applications and results for those interested in the calculation of the self-similarity exponent. The main references this work is based on are [1-5]. Next, we briefly describe how the present article is organized. The basics on fractal structures appear in Section 2. In Section 3, we analytically deal with the construction of fractal dimension III. In Section 4, we define a fractal dimension for a curve. This will be carried out in terms of a fractal structure induced on the image of that curve. In Section 5, we recall the basics on random functions and their increments, (fractional) Brownian motions, and stable processes. Finally, in Section 6, we provide some results linking the fractal dimension and the Hurst exponent of sample functions. In addition, we mathematically construct some accurate approaches to efficiently deal with the calculation of both quantities for processes with self-affine and stationary increments (c.f. Subsection 5.1). Observe that these include (fractional) Brownian motions and (fractional) Lévy stable processes (c.f. Corollary 8).

2 Fractal structures

Fractal structures were first sketched by Bandt and Retta in [6], and applied by Arenas and Sánchez-Granero in [7] to characterize non-Archimedeanly quasi-metrizable spaces.

We say that a family Γ of subsets of X is a covering of X if X = ∪{A : A ∈ Γ}. We shall write Γ1 ≺ Γ2 to denote that the cover Γ1 is a refinement of Γ2. Further, Γ1 ≺≺ Γ2 indicates both Γ1 ≺ Γ2 and that, for all B ∈ Γ2, B = ∪{A ∈ Γ1 : A ⊆ B}. By a fractal structure on X, we understand a countable family of covers, Γ = {Γn}n∈ℕ, with Γn+1 ≺≺ Γn. Thus, Γn is called level n of Γ. It is worth mentioning that a set may appear more than once in a level of a fractal structure Γ.

A fractal structure is said to be starbase if St(x, Γ) = {St(x, Γn)}n∈ℕ is a neighborhood base for each x ∈ X, where St(x, Γn) = ∪{A ∈ Γn : x ∈ A}. Further, Γ is locally finite if each point x ∈ X belongs to a finite number of elements in each level of Γ.

3 Fractal dimension III

In the sequel, we shall refer to both box and Hausdorff dimensions as classical fractal dimensions.

Next, taking the Hausdorff dimension as a model, we recall how to analytically construct a new fractal dimension from the viewpoint of a fractal structure.

Let s ≥ 0, F ⊆ X, and let the metric space (X, d) be endowed with a fractal structure Γ.

In addition, define
ℋ_n^s(F) = Σ{diam(A)^s : A ∈ 𝒜n(F)},   (1)
where 𝒜n(F) = {A ∈ Γn : A ∩ F ≠ ∅}, as well as its asymptotic behavior by letting n → ∞,
ℋ^s(F) = lim_{n→∞} ℋ_n^s(F).   (2)

Let w ≥ 0. Then
diam(A)^w ≤ diam(F, Γn)^(w−s) · diam(A)^s,   (3)
with A ∈ 𝒜n(F). It is worth noting that Eq. (3) is equivalent to ℋ_n^w(F) ≤ ℋ_n^s(F) · diam(F, Γn)^(w−s).

Hence, ℋ^w(F) ≤ ℋ^s(F) · lim_{n→∞} diam(F, Γn)^(w−s).

Assume that diam(F, Γn) → 0. If ℋ^s(F) < ∞ and s < w, then ℋ^w(F) = 0. Hence, a fractal dimension can be described as sup{s ≥ 0 : ℋ^s(F) = ∞}, or equivalently, as inf{s ≥ 0 : ℋ^s(F) = 0}, similarly to the Hausdorff dimension. Moreover, that hypothesis is necessary, as the following counterexample highlights.

Definition 1

(c.f. [8, Definition 3.1]) The levels of the natural fractal structure on ℝ^d are given by
Γn = {[k1/2^n, (k1 + 1)/2^n] × ... × [kd/2^n, (kd + 1)/2^n] : k1, ..., kd ∈ ℤ}.

Counterexample 1

(c.f. [9, Counterexample 5.2] or [1, Remark 4.1]) There exist both F ⊆ X and a fractal structure Γ on (X, d) such that sup{s ≥ 0 : ℋ^s(F) = ∞} ≠ inf{s ≥ 0 : ℋ^s(F) = 0}, with diam(F, Γn) ↛ 0.

Proof

Let F = [0, 1]^2 and let Γ be the natural fractal structure on F ⊂ ℝ^2 with F itself added to each level of Γ. Thus, diam(F, Γn) = √2 for each n ∈ ℕ, which leads to diam(F, Γn) ↛ 0. In addition, the following expression holds:
ℋ_n^s(F) = 2^(s/2) · (1 + 1/2^(n(s−2))).

Hence, ℋ^s(F) = ∞ if s < 2, and ℋ^s(F) = 2^(s/2) if s > 2.
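For illustration purposes, the expression above can be checked numerically. The following Python sketch (hypothetical code, not part of the references) enumerates the 4^n dyadic squares that tile F at level n, together with the extra element F itself, and compares the resulting value of ℋ_n^s(F) with the closed form derived in the proof.

import math

def H_n_s(n, s):
    """Compute H_n^s(F) for F = [0, 1]^2 endowed with the natural fractal
    structure on F plus F itself added to each level."""
    side = 2.0 ** (-n)                    # side of each dyadic square at level n
    diam_square = math.sqrt(2) * side     # diameter of a level-n square
    diam_F = math.sqrt(2)                 # diameter of the extra element F
    # 4^n dyadic squares tile F, plus the additional element F itself
    return (4 ** n) * diam_square ** s + diam_F ** s

# Closed form derived in the proof: 2^(s/2) * (1 + 2^(-n(s-2)))
closed = lambda n, s: 2 ** (s / 2) * (1 + 2.0 ** (-n * (s - 2)))

for s in (1.5, 2.5):
    for n in (1, 5, 10):
        print(s, n, H_n_s(n, s), closed(n, s))
# For s < 2 the quantity blows up as n grows, whereas for s > 2 it tends to 2^(s/2).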

The problem of the existence of the limit in Eq. (2) could be avoided if the families 𝒜n(F) are properly replaced by the following coverings of F by elements of levels (≥ n) of Γ:
𝒜_{n,3}(F) = {𝒜m(F) : m ≥ n}.   (4)
It is worth pointing out that if the families 𝒜_{n,3}(F) are considered instead of 𝒜n(F), then the arguments above still remain valid.

Definition 2

(c.f. [1, Definition 4.2]) Suppose that diam(F, Γn) → 0. Let Γ be a fractal structure on (X, d) and F ⊆ X. Define
ℋ_{n,3}^s(F) = inf{ℋ_m^s(F) : m ≥ n},   (5)
where ℋ_n^s(F) was defined in Eq. (1), and ℋ_3^s(F) = lim_{n→∞} ℋ_{n,3}^s(F).

Then inf{s ≥ 0 : ℋ_3^s(F) = 0} = sup{s ≥ 0 : ℋ_3^s(F) = ∞} leads to the so-called fractal dimension III of F, dim_Γ^3(F).

From Definition 2, it holds that the quantity ℋ_3^s(F) can be described in terms of dim_Γ^3(F): ℋ_3^s(F) = ∞ if s < dim_Γ^3(F), and ℋ_3^s(F) = 0 if s > dim_Γ^3(F), provided that diam(F, Γn) → 0. Accordingly, in the sequel, we shall work under the hypothesis diam(F, Γn) → 0.

The following remark points out that it is not necessary to consider lower/upper limits to define ℋ_3^s(F).

Interestingly, dim_Γ^3(F) always exists for all F ⊆ X, since {ℋ_{n,3}^s(F)}n∈ℕ is monotonic in n (c.f. [1, Remark 4.4]).

Next, we theoretically connect fractal dimension III with classical definitions of fractal dimension and fractal dimension II (c.f. [8, Section 4]), as well.

Theorem 2

(c.f. [1, Theorem 4.5]) The following two statements hold:

  1. dim_H(F) ≤ dim_Γ^3(F).

  2. If diam(A) = diam(F, Γn) for all A ∈ 𝒜n(F), then the lower box dimension satisfies dim_B(F) ≤ dim_Γ^3(F).

The following result allows the calculation of fractal dimension III from the handy Eqs. (1) and (2), under the assumption that ℋ^s(F) exists.

Theorem 3

(c.f. [1, Theorem 4.7]) dim_Γ^3(F) = sup{s ≥ 0 : ℋ^s(F) = ∞}.

Let us collect some theoretical properties for fractal dimension III as a dimension function.

Proposition 4

(c.f. [1, Proposition 4.16]) The following statements hold.

  1. Fractal dimension III is monotonic.

  2. Fractal dimension III is (finitely) stable.

  3. There exists a fractal structure Γ on a countable set F such that dim_Γ^3(F) ≠ 0.

  4. Fractal dimension III is not countably stable.

  5. There exists Γ on a given F ⊆ X such that dim_Γ^3(F) ≠ dim_Γ^3(F̄), where F̄ denotes the closure of F.

4 A fractal dimension for curves

Next, we shall explore the fractal dimension of a curve from the viewpoint of fractal structures. It is worth pointing out that the only option regarding the classical fractal dimensions consists of calculating the dimension of the graph of the curve. Interestingly, the fractal dimension introduced in Definition 2 (and calculated through the handier Theorem 3) can be applied for that purpose.

First, we define a fractal structure induced on a curve from another fractal structure on [0, 1].

Definition 3

(c.f. [2, Definition 3.1]) Let α : [0, 1] → X be a parameterization of a curve, where Γ is a fractal structure on [0, 1] and X is a metric space. Then the levels of the fractal structure induced by Γ on α([0, 1]), denoted by Δ, are Δn = {α(A) : A ∈ Γn}.

Figure 1 illustrates how Definition 3 works. For illustration purposes, the first two levels of Δ on the image of a curve have been depicted.

Figure 1: First two levels of Δ on the image of a Brownian motion.

The fractal dimension of a curve in terms of an induced fractal structure via fractal dimension III (c.f. Definition 2 or Theorem 3) is defined in the following terms.

Definition 4

(c.f. [2, Definition 3.3]) The fractal dimension of a curve α is given by dim(α) = dim_Δ^3(α([0, 1])).

If there is no further information regarding Γ, then we shall suppose that Γ is the natural fractal structure on [0, 1] (c.f. Definition 1).

Notice that to explore fractal patterns on curves through the classical fractal dimensions, we have to consider the graph of the curve instead of its image. However, it is worth pointing out that Definition 4 allows us to study the complexity underlying the construction of curves, since such a dimension can be calculated with respect to different parameterizations of the same curve. Further, it detects the overlaps of the elements in Γn.

Interestingly, Definition 4 can be applied even if α is not continuous. This allows α to be a time series.

The next result contains several properties regarding the dimension introduced in Definition 4.

Theorem 5

(c.f. [2, Theorem 3.5]) The following four statements hold.

  1. If α is constant, then its dimension as a curve is equal to zero.

  2. If α is non-constant, then dim(α) ≥ 1.

  3. If α is Lipschitz, then dim(α) ≤ 1.

  4. If α is a non-constant Lipschitz function, then dim(α) = 1.

Next, we summarize how the values provided by such a fractal dimension can be interpreted.

Remark 1

(c.f. [2, Section 3])

  • The fractal dimension of every non-constant continuous curve α satisfies that 1 ≤ dim(α) < ∞.

  • A greater value of dim(α) means that the curve oscillates more at every scale.

  • Curves with smaller fractal dimensions are graphically depicted by means of smoother graphs.

  • If α is smooth, then its dimension is equal to 1.

  • The fractal dimension of a Brownian motion equals the value 2.

5 Self-similar processes

Throughout this section, we recall several concepts, results, and notations needed to properly deal with the upcoming Subsection 5.1. It is worth mentioning that the class of self-similar processes (first introduced in [10]) with stationary increments includes, in particular, Brownian motions and generalizations of them. Thus, the foundations of these kinds of random processes appear in Subsections 5.2 and 5.3. For additional details, see [11-13].

5.1 Random processes, increments and self-affinity features

Let t ∈ [0, ∞) denote time and let (X, 𝒜, P) be a probability space. By a random function (or a random process), we understand a collection X = {X(t, ω)}t∈[0,∞), where X(t, ω) is a random variable for all ω and all t. Further, we shall denote that two given random functions have the same (finite) joint distributions through the expression X(t, ω) ~ Y(t, ω). Let X be a random process. Then

  1. X is said to be self-similar with parameter H if there exists H > 0 such that X(αt, ω) ~ α^H · X(t, ω) for all t and all α > 0. In such a case, we say that H is the self-similarity index of X.

  2. The increments of X are called stationary provided that X(α + t, ω) − X(α, ω) ~ X(t, ω) − X(0, ω) for all t and α > 0. They are called self-affine with parameter H if for all h > 0 and s0 ≥ 0, we have X(s0 + hγ, ω) − X(s0, ω) ~ h^H · (X(s0 + γ, ω) − X(s0, ω)).

If the increments of X are self-affine, then it holds that M(𝒯, ω) ~ 𝒯^H · M(1, ω), where M(𝒯, ω) is short for M(0, 𝒯, ω), and M(t, 𝒯, ω) is the cumulative range of X, given by M(t, 𝒯, ω) = sup{X(s, ω) − X(t, ω) : t ≤ s ≤ t + 𝒯} − inf{X(s, ω) − X(t, ω) : t ≤ s ≤ t + 𝒯} (c.f. [14, Corollary 3.6]).

H-self-similar random processes with stationary increments are useful for modeling long-memory phenomena. Thus, if H ∈ (1/2, 1), then we say that X displays long memory (also long-range dependence). On the other hand, if H ∈ (0, 1/2), then we shall understand that X shows short memory. Figure 2 displays some examples of 1024 point FBMs with parameters ranging in {0.25, 0.5, 0.75}.

Figure 2: From top to bottom, three 1024 point FBMs with self-similarity exponents 0.25, 0.5 (a BM), and 0.75.

The following remark connects the cumulative range of a random function with fractal structures.

Remark 2

(c.f. [3, Remark 1] and [15, Remark 2.3]) Let X be a random process with stationary increments, α : [0, 1] → ℝ be a sample function of X, Γ be the natural fractal structure on [0, 1], and Δ be the fractal structure induced on α([0, 1]) by Γ (c.f. Definition 3). Then, for each n ∈ ℕ, {diam(A) : A ∈ Δn} constitutes a sample of M(𝒯n, ω) for 𝒯n = 2^-n.
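Computationally, Definition 3 and Remark 2 translate into a simple routine: for a curve sampled at the dyadic points of [0, 1], the element α(A) ∈ Δn associated with a dyadic subinterval A ∈ Γn is the set of sampled values on A, and its diameter is just the range of those values. The following Python sketch is only illustrative (the helper level_diameters is ours, not taken from the references).

import numpy as np

def level_diameters(x, n):
    """{diam(alpha(A)) : A in Gamma_n} for a curve alpha sampled at the
    2^k + 1 dyadic points j / 2^k of [0, 1] (so len(x) = 2^k + 1, n <= k).
    Each level-n dyadic subinterval contains 2^(k - n) + 1 samples."""
    step = (len(x) - 1) // 2 ** n
    return np.array([x[i * step:(i + 1) * step + 1].max()
                     - x[i * step:(i + 1) * step + 1].min()
                     for i in range(2 ** n)])

# By Remark 2, these diameters form a sample of M(2^-n, omega); here for a
# Brownian motion sampled at 2^10 + 1 dyadic points of [0, 1].
rng = np.random.default_rng(0)
k = 10
bm = np.concatenate(([0.0],
                     np.cumsum(rng.normal(scale=np.sqrt(2.0 ** -k), size=2 ** k))))
print(level_diameters(bm, 3))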

Theorem 6

(c.f. [15, Lemma 3.4]) If X is a self-similar random process with parameter H and possesses stationary increments, then it has self-affine increments of that parameter.

5.2 (Fractional) Brownian motions

We can define a Brownian motion (BM, in the sequel) as a random process BH = {BH(t, ω)}t≥0 under the following three conditions (c.f. [11, Section 16.1]):

  1. BH(t, ω) is continuous in t. Also, BH(0, ω) = 0 with probability 1.

  2. BH(t + β, ω) - BH(t, ω) ~ 𝒩(0, β).

  3. BH(t_{2m}, ω) − BH(t_{2m−1}, ω), ..., BH(t_2, ω) − BH(t_1, ω) are independent whenever t_1 ≤ t_2 ≤ ... ≤ t_{2m}.

It is worth pointing out that these conditions are not independent of one another. Moreover, BH(t, ω) ~ 𝒩(0, t) for each t ≥ 0. Observe that the distribution of the increments BH(t + h, ω) − BH(t, ω) does not depend on t; hence, the increments are stationary. A Brownian process can be characterized as the unique probability distribution of functions with independent and stationary increments of finite variance, and also as the unique self-similar Gaussian process with parameter H ∈ (0, 1) and stationary increments (c.f. [16]). Accordingly, to generate sample functions with different properties, some of the hypotheses above can be weakened. In this way, FBMs and stable processes constitute two usual variants of Brownian processes: FBMs have normally distributed increments that are not independent, whereas stable processes relax the finite variance condition, which leads to discontinuous functions.

Thus, fractional Brownian motions (FBMs) can be described as follows (c.f. [11, Section 16.2]). A FBM with parameter α ∈ (0, 1) is a random process BH = {BH(t, ω)}t≥0 such that

  1. BH(t, ω) is a continuous function of t, and BH(0, ω) = 0 with probability 1.

  2. BH(t + β, ω) − BH(t, ω) ~ 𝒩(0, β^(2α)).

It is worth mentioning that such a process exists for α ∈ (0, 1). In addition, the increments of a FBM are stationary by definition.
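One standard way to simulate an FBM sample path (not necessarily the scheme used to produce Figure 2) relies on the Cholesky factorization of its covariance matrix, Cov(BH(s), BH(t)) = (s^(2H) + t^(2H) − |t − s|^(2H))/2, where H plays the role of the exponent α above. The Python sketch below (function names are ours) is exact but costs O(N^3); spectral methods such as Davies-Harte are faster for long series.

import numpy as np

def fbm_cholesky(H, n_points=1024, T=1.0, seed=None):
    """Sample an FBM on [0, T] at n_points equally spaced times via the
    Cholesky factorization of the covariance matrix
    Cov(B_H(s), B_H(t)) = (s^(2H) + t^(2H) - |t - s|^(2H)) / 2."""
    t = np.linspace(T / n_points, T, n_points)      # strictly positive times
    s, u = np.meshgrid(t, t, indexing="ij")
    cov = 0.5 * (s ** (2 * H) + u ** (2 * H) - np.abs(s - u) ** (2 * H))
    L = np.linalg.cholesky(cov)                     # exact, O(n_points^3)
    rng = np.random.default_rng(seed)
    path = L @ rng.standard_normal(n_points)
    # prepend B_H(0) = 0
    return np.concatenate(([0.0], path)), np.concatenate(([0.0], t))

# Example: the three parameters of Figure 2
for H in (0.25, 0.5, 0.75):
    x, t = fbm_cholesky(H, n_points=1024, seed=1)
    print(H, x[:3])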

Remark 3

(c.f. [14, Theorem 3.3]) FBMs are random processes whose increments are self-affine and stationary.

5.3 Stable processes and (fractional) Lévy stable motions

Another generalization of BMs leads to stable processes, introduced by Lévy. A stable process is a random function whose increments are independent and stationary. Stable processes are discontinuous (with probability 1) and their variance is infinite, except in certain cases, such as BMs.

Fourier transforms are used to describe the probability distribution of this kind of process. In this way, given a random variable X, the expression E[e^(iuX)] (with u real) gives the characteristic function of X. Let us define a stable process. To this end, let ψ : ℝ → ℝ be a function, and assume that the increments of X satisfy the following condition: E[e^(iu(X(t+h,ω) − X(t,ω)))] = e^(−hψ(u)), where X(t_{2m}, ω) − X(t_{2m−1}, ω), ..., X(t_2, ω) − X(t_1, ω) are independent for t_1 ≤ t_2 ≤ ... ≤ t_{2m}. It is clear that these increments are stationary.

It is worth mentioning that stable processes exist for an appropriate choice of the function ψ. In fact, for functions of the form ψ(u) = λ|u|^α with α ∈ (0, 2], an α-stable symmetric process is obtained.

In terms of random variables, we say that X is stable if for each n ∈ ℕ, we can find constants γ_n > 0 and δ_n such that X_1^(n) + ... + X_n^(n) ~ γ_n X + δ_n, where the X_j^(n) are independent random variables with the same distribution as X (i.i.d. copies of X) for j = 1, ..., n. Equivalently, X is stable if for any m > 0 there exist n > 0 and c ∈ ℝ such that (μ(u))^m = μ(nu) · e^(icu), where μ is the characteristic function of X. A random variable is said to be strictly stable if for any m > 0 there exists n > 0 such that (μ(u))^m = μ(nu). For a strictly stable random variable, the constant n depends on m through n(m) = k · m^(1/α), where α ∈ (0, 2]. Such a random variable is called α-stable. It is worth noting that for α-stable random variables with α ∈ (0, 2), it holds that E|X|^γ < ∞ if and only if γ < α. Further, the second order moment of X exists if and only if α = 2; in that case, X is a Gaussian random variable, and hence all its moments exist.

The fractional Lévy stable motion (FLSM, in the sequel) is a widely used generalization of a FBM to the α-stable case (c.f. [17-19]). This process is denoted by Z_α^H = {Z_α^H(t, ω)}t∈ℝ, where Z_α^H(t, ω) is given by
∫_{−∞}^{0} (|t − u|^(H−1/α) − |u|^(H−1/α)) Z_α(du) + ∫_{0}^{t} |t − u|^(H−1/α) Z_α(du),
with Z_α(du) being a symmetric Lévy α-stable independently scattered random measure (c.f. [16, 20]). Note that such an integral expression is well-defined for H ∈ (0, 1) and α ∈ (0, 2].

That integral can be understood as the following limit in the L^p-norm with p < α:
∫_{−∞}^{t} K_{H,α}(t, u) Z_α(du) = lim_{m→∞} Σ_{j=1}^{m} c_j (Z_α(u_j) − Z_α(u_{j−1})).
This process is H-self-similar and has stationary increments (c.f. [19]). The H-self-similarity of Z_α^H is due to the integral representation provided above, together with the d-self-similarity of the kernel K_{H,α}(t, u) for d = H − 1/α and the 1/α-self-similarity of the integrator Z_α(du). Hence, H = d + 1/α. Figure 3 displays some examples of 1024 point LSMs.

Figure 3: Three 2^10 point LSMs whose self-similarity indexes equal 0.6, 0.75, and 0.8, respectively. Recall that α ∈ (0, 2) equals the value 1/H, with H ∈ (1/2, 1).

It is worth noting that any FBM is a FLSM for α = 2. Let H = 1/α. Then a Lévy stable motion of parameter α stands as a generalization of the BM to the α-stable case. Nevertheless, unlike the FBM, a Lévy α-stable motion is not the unique 1/α-self-similar Lévy stable process of parameter α with stationary increments (such a statement is only valid for α ∈ (0, 1)).
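As an illustration (not taken from the references), a Lévy α-stable motion can be simulated by cumulating i.i.d. symmetric α-stable increments, here generated by the Chambers-Mallows-Stuck method; the increment over a time step dt is scaled by dt^(1/α), in agreement with the 1/α-self-similarity recalled above. Function names are ours.

import numpy as np

def symmetric_stable(alpha, size, rng):
    """i.i.d. symmetric alpha-stable variates, alpha in (0, 2]
    (Chambers-Mallows-Stuck method for skewness beta = 0)."""
    V = rng.uniform(-np.pi / 2, np.pi / 2, size)    # uniform angle
    W = rng.exponential(1.0, size)                  # standard exponential
    return (np.sin(alpha * V) / np.cos(V) ** (1 / alpha)
            * (np.cos((1 - alpha) * V) / W) ** ((1 - alpha) / alpha))

def levy_stable_motion(alpha, n_points=1024, T=1.0, seed=None):
    """Sample path of a (non-fractional) Levy alpha-stable motion on [0, T]:
    stationary independent increments, 1/alpha-self-similar."""
    rng = np.random.default_rng(seed)
    dt = T / n_points
    increments = dt ** (1 / alpha) * symmetric_stable(alpha, n_points, rng)
    return np.concatenate(([0.0], np.cumsum(increments)))

# Example: LSMs with H = 1/alpha equal to 0.6, 0.75, and 0.8 (c.f. Figure 3)
for H in (0.6, 0.75, 0.8):
    path = levy_stable_motion(alpha=1 / H, n_points=1024, seed=2)
    print(H, path[-1])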

Theorem 6 justifies the following remark.

Remark 4

(c.f. [13, 19]). The increments of a FLSM are self-affine and stationary.

Table 1 summarizes the self-similar processes discussed above.

Table 1: The (fractional) Lévy stable motion and its particular cases depending on the parameter α and the self-similarity exponent H.

6 Some approaches to tackle the calculation of the Hurst exponent

The aim of this section is to show how the fractal dimension introduced in Definition 4 allows us to explore fractal patterns in random processes.

Let s > 0, α : [0, 1] → ℝ be a parameterization of a curve, Γ be the natural fractal structure on [0, 1], Δ be the fractal structure induced by Γ on α([0, 1]) ⊆ ℝ, and apply Definition 4 for fractal dimension calculation purposes.

Observe that each element A ∈ Δn can be written as A = A1 ∪ A2, where A1, A2 ∈ Δn+1. Under the assumption of a regular induced fractal structure Δ, we have diam(A1) ≃ diam(A2). Let us denote a = diam(A) and b = diam(A1) ≃ diam(A2). In addition, let r_n = b/a ∈ (0, 1) be the ratio between elements from consecutive levels of Δ, and define r as the mean of the list {r_n}n∈ℕ. If a^s ≃ 2b^s = 2r^s a^s, then ℋ_n^s(α([0, 1])) ≃ ℋ_{n+1}^s(α([0, 1])). Thus, ℋ^s(α([0, 1])) < ∞, and therefore s = dim(α).

Moreover, from a^s ≃ 2b^s = 2r^s a^s it follows that 2r^s ≃ 1, i.e., r ≃ 2^(−1/s). Hence, s ≃ −1/log2 r provides a suitable estimation of the fractal dimension of the corresponding random process. Thus, dim(α) ≃ −1/log2 r. These ideas, which will be formalized afterwards, lead to a strong connection between the Hurst exponent and the dimension of a broad range of processes.

Theorem 7

(c.f. [3, Theorem 1]) Assume that the increments of a random process 𝐗 are self-affine with parameter H and stationary. Further, let α : [0, 1] → ℝ be a sample function of 𝐗, Γ be the natural fractal structure on [0, 1], and Δ be the fractal structure induced by Γ on α([0, 1]). The following two statements hold.

  1. M(2^-n, ω) ~ 2^H · M(2^-(n+1), ω).

  2. If the (1/H)-moment of M(2^-n, ω) is finite, then H = 1/dim(α).

Theorem 7 is quite general. In fact, FBMs and LSMs satisfy all the hypotheses therein, as the following result highlights.

Corollary 8

(c.f. [3, Corollary 1] and [3, Corollary 2]) Suppose that 𝐗 is either a FBM or a FLSM with parameter H. Let α : [0, 1] → ℝ be a sample function of 𝐗, Γ be the natural fractal structure on [0, 1], and Δ be the fractal structure induced by Γ on α([0, 1]). Then H = 1/dim(α).

6.1 FD algorithms

Next, we recall how FD algorithms work for fractal dimension calculation purposes. The next remark becomes especially appropriate to tackle computational applications involving fractal structures.

Remark 5

[3, Remark 2] Each fractal structure consists of a countable number of levels. Nevertheless, we shall only consider a finite number of them in applications. That number depends on the data size available for each curve (actually, a time series). Thus, if l is the series length, then the deepest level to be reached is n ≃ log2 l.

Let dn denote the mean of {diam(A) : A ∈ Δn}, which is a sample of M(2^-n, ω) (c.f. Remark 2). Thus, dn approaches the mean of M(2^-n, ω). Then, by Theorem 7 (1), r_n = d_{n+1}/d_n equals r = 1/2^H, a constant. Thus, H = −log2 r, and hence Theorem 7 gives dim(α) = −1/log2 r. The previous arguments lead to a first approach, which we shall refer to as the FD1 algorithm.

Algorithm (FD1)

[3, Algorithm 1]

  1. Let dn be the mean of {diam(A) : A ∈ Δn} for 1 ≤ n ≤ log2 l.

  2. Define r_n = d_{n+1}/d_n for 1 ≤ n ≤ log2 l − 1.

  3. Calculate r as the mean of {r_n}, 1 ≤ n ≤ log2 l − 1.

  4. Return dim(α) = −1/log2 r and H = −log2 r.

It is worth noting that FD1 is valid to calculate the parameter of random processes satisfying E[Xn] = 2^H · E[X_{n+1}]. In particular, it works if Xn ~ 2^H · X_{n+1}, which is the case of the processes under the hypotheses of Theorem 7.
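The following Python sketch renders FD1 under the assumption that the series is sampled at the 2^k + 1 dyadic points of [0, 1], so that the deepest level is exactly log2 l (c.f. Remark 5); the helper level_diameters and the function name fd1 are ours, not the authors' code.

import numpy as np

def level_diameters(x, n):
    """Diameters of the level-n elements of the induced fractal structure
    (x sampled at the 2^k + 1 dyadic points of [0, 1], n <= k)."""
    step = (len(x) - 1) // 2 ** n
    return np.array([x[i * step:(i + 1) * step + 1].max()
                     - x[i * step:(i + 1) * step + 1].min()
                     for i in range(2 ** n)])

def fd1(x):
    """FD1 sketch: returns (dim(alpha), H) for a series of length 2^k + 1."""
    kmax = int(np.log2(len(x) - 1))                 # deepest level (Remark 5)
    d = np.array([level_diameters(x, n).mean() for n in range(1, kmax + 1)])
    r = np.mean(d[1:] / d[:-1])                     # mean of the ratios r_n
    return -1.0 / np.log2(r), -np.log2(r)

# Example: for a Brownian motion the estimate of H should be roughly 0.5
# (discrete sampling introduces some bias at the deepest levels).
rng = np.random.default_rng(3)
k = 11
bm = np.concatenate(([0.0],
                     np.cumsum(rng.normal(scale=np.sqrt(2.0 ** -k), size=2 ** k))))
print(fd1(bm))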

It is worth pointing out that the GM2 algorithm (first provided in [21] and revisited afterwards in [15] from the viewpoint of fractal structures) is also valid to calculate the parameter of random processes satisfying the condition E[X1] = 2^((n−1)H) · E[Xn]. Moreover, since that equality is equivalent to E[Xn] = 2^H · E[X_{n+1}], the validity of the GM2 approach is equivalent to the validity of FD1. For a description of the GM2 algorithm in terms of fractal structures, we refer the reader to [15, Algorithm 3.6].

In Figure 4, we illustrate how the parameter of a time series may be calculated via the FD1 algorithm. In that case, the values of the coefficients r_n and of r (their mean value) have been plotted for a 2^11 point time series from a BM. This makes a total amount of 11 levels in the induced fractal structure Δ.

Figure 4: The markers correspond to the coefficients r_n, and the straight line depicts their mean. Such a graphical representation has been carried out for a 2048 point BM.

On the other hand, recall that the (absolute) s-moment of a random variable X (under the assumption that such a value exists) is defined as m^s(X) = E[X^s]. In addition, let {x_k : k = 1, ..., l} be a sample of length l from a random variable X. Its sample s-moment will be calculated, as usual, by the following expression: m^s(X) = (1/l) Σ_{i=1}^{l} x_i^s.

Theorem 9

(c.f. [3, Theorem 3]) Let α : [0, 1] → ℝ be a sample function of 𝐗, Γ be the natural fractal structure on [0, 1], Δ be the fractal structure induced by Γ on α([0, 1]), and Xn = M(2^-n, ω). If there exists s ≥ 0 satisfying the two following properties:

  1. The s-moment of Xn, m^s(Xn), is finite, and

  2. m^s(Xn) = 2 · m^s(X_{n+1}),

then s = dim(α).

The following result contains sufficient conditions to verify hypothesis (2) in Theorem 9.

Theorem 10

Let 𝐗 be a random process with parameter H, α : [0, 1] → ℝ be a sample function of 𝐗, Xn = M(2^-n, ω), and assume that
Xn ~ 𝒯n^H · X0,   (6)
for 𝒯n = 2^-n. Then hypothesis (2) in Theorem 9 holds for H = 1/s, i.e., m^(1/H)(Xn) = 2 · m^(1/H)(X_{n+1}).

In particular, H = 1/dim(α).

Accordingly, the next result follows immediately from Theorem 10.

Corollary 11

Let 𝐗 be a random process whose increments are self-affine with parameter H and stationary. In addition, let α : [0, 1] → ℝ be a sample function of 𝐗. Then hypothesis (2) in Theorem 9 holds for H = 1/s. In particular, H = 1/dim(α).

Notice that FBMs and FLSMs satisfy hypothesis (2) of Theorem 9.

Corollary 12

(c.f. [3, Corollaries 1 and 2]) Let α : [0, 1] → ℝ be a sample function of either a FBM or a FLSM with parameter H. Then hypothesis (2) in Theorem 9 holds for H = 1/s. In particular, H = 1/dim(α).

Remark 6

Hypothesis (2) in Theorem 9 holds in many empirical applications (c.f. [3, Remark 3] and [3, Section 6]).

The key Theorem 9 gives rise to a pair of novel approaches, namely the FD2 and FD3 algorithms. The main property such algorithms are based on is hypothesis (2) of Theorem 9. Such a condition is equivalent to m^s(Xk)/m^s(X_{k+1}) = 2 for k = 1, ..., log2 l − 1.

Algorithm (FD2)

(c.f. [3, Algorithm 2]) For s > 0,

  1. Calculate y_s = {y_{k,s}}, k = 1, ..., log2 l − 1, where y_{k,s} = m^s(Xk)/m^s(X_{k+1}).

  2. Let ȳ_s be the mean of the list y_s.

  3. Calculate the point s0 such that ȳ_{s0} = 2. Observe that {(s, ȳ_s) : s > 0} is increasing in s.

  4. Thus, dim(α) = s0 (due to Theorem 9), and H = 1/s0 (by Theorem 7).

It turns out that the FD2 approach is valid to properly calculate the parameter of processes satisfying the condition E[Xn^(1/H)] = 2 · E[X_{n+1}^(1/H)], i.e., hypothesis (2) in Theorem 9. In particular, it holds for random functions such that Xn^(1/H) ~ 2 · X_{n+1}^(1/H). Going beyond, if Xn ~ 2^H · X_{n+1}, then Xn^(1/H) ~ 2 · X_{n+1}^(1/H), and hence both Theorems 7 and 9 guarantee that FD2 is valid to estimate the parameter H of processes with self-affine and stationary increments. Figure 5 graphically shows how to computationally deal with the calculations involved in the FD2 algorithm. It depicts the graph of ȳ_s in terms of s. The fact that ȳ_s increases with s makes it easy to find the value s0 such that ȳ_{s0} = 2, namely the fractal dimension of such a sample function. In this case, a 2048 point BM was considered for illustration purposes.

Figure 5: Example of a graphical representation of s vs. ȳ_s for a 2048 point BM (c.f. Algorithm FD2).
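A minimal Python sketch of FD2 may read as follows; it exploits the fact that ȳ_s increases with s to locate s0 by bisection. The helper level_diameters (as in the FD1 sketch) and the function names are ours, and the series is again assumed to be sampled at the 2^k + 1 dyadic points of [0, 1].

import numpy as np

def level_diameters(x, n):
    """Diameters of the level-n elements of the induced fractal structure."""
    step = (len(x) - 1) // 2 ** n
    return np.array([x[i * step:(i + 1) * step + 1].max()
                     - x[i * step:(i + 1) * step + 1].min()
                     for i in range(2 ** n)])

def y_bar(x, s, kmax):
    """Mean of y_{k,s} = m^s(X_k) / m^s(X_{k+1}), with the sample s-moments
    computed from the level diameters (c.f. Remark 2)."""
    m = [np.mean(level_diameters(x, k) ** s) for k in range(1, kmax + 1)]
    return np.mean([m[k] / m[k + 1] for k in range(kmax - 1)])

def fd2(x, s_lo=0.1, s_hi=10.0, tol=1e-6):
    """FD2 sketch: bisection on the increasing map s -> y_bar(x, s) to find
    s0 with y_bar = 2; then dim(alpha) = s0 and H = 1 / s0."""
    kmax = int(np.log2(len(x) - 1))
    while s_hi - s_lo > tol:
        s_mid = 0.5 * (s_lo + s_hi)
        if y_bar(x, s_mid, kmax) < 2.0:
            s_lo = s_mid
        else:
            s_hi = s_mid
    s0 = 0.5 * (s_lo + s_hi)
    return s0, 1.0 / s0

# Example: dim(alpha) close to 2 and H close to 0.5 for a Brownian motion
rng = np.random.default_rng(4)
k = 11
bm = np.concatenate(([0.0],
                     np.cumsum(rng.normal(scale=np.sqrt(2.0 ** -k), size=2 ** k))))
print(fd2(bm))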

The next step is to describe the so-called FD3 algorithm, an alternative to the FD2 approach, which is also based on Theorem 9. First, let us sketch some theoretical notes. In fact, it is clear that the condition m^s(Xn) = 2 · m^s(X_{n+1}) is equivalent to
m^s(Xn) = (1/2^(n−1)) · m^s(X1).   (7)
Taking base-2 logarithms in Eq. (7), we have that
log2 m^s(Xn) = −n + γ,   (8)
where γ = 1 + log2 m^s(X1) remains constant. Thus, Eq. (8) provides a linear relationship between n and log2 m^s(Xn). The algorithm based on the ideas described above is stated as follows.

Algorithm (FD3)

(c.f. [3, Algorithm 3]) For s > 0,

  1. Calculate {(k, β_{k,s})}, k = 1, ..., log2 l, where β_{k,s} = log2 m^s(Xk). Let β_s be the slope of the regression line of {(k, β_{k,s})}, k = 1, ..., log2 l.

  2. Calculate s1 such that β_{s1} = −1.

  3. Hence, dim(α) = s1 (due to both Theorem 9 and Eq. (8)), and H = 1/s1 (by Theorem 7).

It is worth noting that the FD3 approach is valid to calculate the parameter of random processes satisfying the equality E[Xn^(1/H)] = (1/2^(n−1)) · E[X1^(1/H)].

Since that expression is equivalent to E[Xn^(1/H)] = 2 · E[X_{n+1}^(1/H)], FD3 is valid to calculate the parameter of a process 𝐗 whose increments are self-affine and stationary if and only if FD2 is valid for that purpose.

Additionally, the FD3 procedure also allows us to verify the condition m^s(Xn) = 2 · m^s(X_{n+1}) (c.f. hypothesis (2) in Theorem 9). In fact, since Eq. (8) is equivalent to such a condition, we can check that some empirical data satisfy it if Eq. (8) stands, i.e., if the regression coefficient in Eq. (8) is close to 1.
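In the same spirit, a Python sketch of FD3 follows; it locates s1 by bisection, using the fact that for the processes of Theorem 7 the regression slope behaves roughly like −Hs and thus decreases as s grows. As before, the helper and the function name fd3 are ours.

import numpy as np

def level_diameters(x, n):
    """Diameters of the level-n elements of the induced fractal structure."""
    step = (len(x) - 1) // 2 ** n
    return np.array([x[i * step:(i + 1) * step + 1].max()
                     - x[i * step:(i + 1) * step + 1].min()
                     for i in range(2 ** n)])

def slope_beta(x, s, kmax):
    """Slope of the regression line of (k, log2 m^s(X_k)), k = 1, ..., kmax."""
    ks = np.arange(1, kmax + 1)
    beta = np.array([np.log2(np.mean(level_diameters(x, k) ** s)) for k in ks])
    return np.polyfit(ks, beta, 1)[0]

def fd3(x, s_lo=0.1, s_hi=10.0, tol=1e-6):
    """FD3 sketch: find s1 with regression slope -1 (Eq. (8)); then
    dim(alpha) = s1 and H = 1 / s1."""
    kmax = int(np.log2(len(x) - 1))
    while s_hi - s_lo > tol:
        s_mid = 0.5 * (s_lo + s_hi)
        if slope_beta(x, s_mid, kmax) > -1.0:   # slope decreases as s grows
            s_lo = s_mid
        else:
            s_hi = s_mid
    s1 = 0.5 * (s_lo + s_hi)
    return s1, 1.0 / s1

# Example: again dim(alpha) close to 2 and H close to 0.5 for a Brownian motion
rng = np.random.default_rng(5)
k = 11
bm = np.concatenate(([0.0],
                     np.cumsum(rng.normal(scale=np.sqrt(2.0 ** -k), size=2 ** k))))
print(fd3(bm))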

Theorem 13

(c.f. [4, Theorem 3.1]) Assume that the increments of 𝐗 are stationary. If the next relationship stands for some H > 0:
M(𝒯, ω) ~ 𝒯^H · M(1, ω),   (9)
then FD algorithms as well as the GM2 approach are valid to calculate H.

Theorem 13 leads to the following corollaries.

Corollary 14

FD algorithms and GM2 procedure are valid to calculate the parameter of random processes with self-affine and stationary increments.

Corollary 15

Let 𝐗 be a FBM or a FLSM with parameter H. Then FD algorithms and GM2 approach are valid to calculate H.

References

[1] Fernández-Martínez M., Sánchez-Granero M.A., Fractal dimension for fractal structures: A Hausdorff approach, Topology Appl., 2012, 159, 7, 1825-1837.

[2] Fernández-Martínez M., Sánchez-Granero M.A., A new fractal dimension for curves based on fractal structures, Topology Appl., 2016, 203, 108-124.

[3] Sánchez-Granero M.A., Fernández-Martínez M., Trinidad Segovia J.E., Introducing fractal dimension algorithms to calculate the Hurst exponent of financial time series, Eur. Phys. J. B, 2012, 85:86, 3, 1-13.

[4] Fernández-Martínez M., Sánchez-Granero M.A., Trinidad Segovia J.E., Román-Sánchez I.M., An accurate algorithm to calculate the Hurst exponent of self-similar processes, Phys. Lett. A, 2014, 378, 32-33, 2355-2362.

[5] Fernández-Martínez M., A survey on fractal dimension for fractal structures, Applied Mathematics and Nonlinear Sciences, 2016, 1, 2, 437-472.

[6] Bandt C., Graf S., Self-similar sets VII. A characterization of self-similar fractals with positive Hausdorff measure, Proc. Amer. Math. Soc., 1992, 114, 4, 995-1001.

[7] Arenas F.G., Sánchez-Granero M.A., A characterization of non-Archimedeanly quasimetrizable spaces, Rend. Istit. Mat. Univ. Trieste, 1999, 30, suppl., 21-30.

[8] Fernández-Martínez M., Sánchez-Granero M.A., Fractal dimension for fractal structures, Topology Appl., 2014, 163, 93-111.

[9] Fernández-Martínez M., Nowak M., Sánchez-Granero M.A., Counterexamples in theory of fractal dimension for fractal structures, Chaos Solitons Fractals, 2016, 89, 210-223.

[10] Lamperti J., Semi-stable stochastic processes, Trans. Amer. Math. Soc., 1962, 104, 1, 62-78.

[11] Falconer K., Fractal Geometry. Mathematical Foundations and Applications, 1st ed., John Wiley & Sons, Ltd., Chichester, 1990.

[12] Jeanblanc M., Yor M., Chesney M., Mathematical Methods for Financial Markets, 1st ed., Springer-Verlag London, London, 2009.

[13] Mercik S., Weron K., Burnecki K., Weron A., Enigma of self-similarity of fractional Lévy stable motions, Acta Phys. Polon. B, 2003, 34, 7, 3773-3791.

[14] Mandelbrot B.B., Gaussian Self-Affinity and Fractals, Selected Works of Benoit B. Mandelbrot, Springer-Verlag, New York, 2002.

[15] Trinidad Segovia J.E., Fernández-Martínez M., Sánchez-Granero M.A., A note on geometric method-based procedures to calculate the Hurst exponent, Phys. A, 2012, 391, 6, 2209-2214.

[16] Samoradnitsky G., Taqqu M.S., Stable Non-Gaussian Random Processes: Stochastic Models with Infinite Variance, Chapman & Hall/CRC, Boca Raton, FL, 1994.

[17] Burnecki K., Maejima M., Weron A., The Lamperti transformation for self-similar processes, Yokohama Math. J., 1997, 44, 1, 25-42.

[18] Burnecki K., Self-similar processes as weak limits of a risk reserve process, Probability and Mathematical Statistics, 2000, 20, 2, 261-272.

[19] Maejima M., Self-similar processes and limit theorems, Sugaku, 1988, 40, 1, 19-35.

[20] Janicki A., Weron A., Simulation and Chaotic Behavior of α-stable Stochastic Processes, Marcel Dekker, Inc., New York, NY, 1994.

[21] Sánchez-Granero M.A., Trinidad Segovia J.E., García Pérez J., Some comments on Hurst exponent and the long memory processes on capital markets, Phys. A, 2008, 387, 22, 5543-5551.

About the article

Received: 2017-03-13

Accepted: 2017-04-27

Published Online: 2017-06-16


Citation Information: Open Physics, ISSN (Online) 2391-5471, DOI: https://doi.org/10.1515/phys-2017-0049.

© 2017 M. Fernández-Martínez and M. Caravaca Garratón. This work is licensed under the Creative Commons Attribution-NonCommercial-NoDerivatives 3.0 License. BY-NC-ND 3.0
