The flaw in the conformable calculus: It is conformable because it is not fractional

Ahmed A. Abdelhakim

Abstract

We point out a major flaw in the so-called conformable calculus. We demonstrate why it fails at defining a fractional order derivative and where exactly these tempting conformability properties come from.

1 Introduction

Khalil et al. proposed a definition for the fractional derivative in [5] using what they called the “conformable derivative”. This concept was quickly adopted by T. Abdeljawad in [1] where he claims to have developed some tools of fractional calculus.

The conformable derivative is local by its very definition. Moreover, we proved rigorously in [2] that the conformable derivative of a function f does not exist at any point x > 0 unless f is differentiable at x. The term “conformable” supposedly refers to the properties this proposed definition enjoys.

We point out the flaw in Khalil et al.’s definition and uncover the real source of this conformability through reviewing the statements and proofs in [1, 5]. Analogous remarks apply to the statements and proofs in [3] and [4]. It turns out that the reason behind the conformability of this derivative is, ironically, the same reason it is not fractional.

We would like to emphasize here that we are not reviewing the aforementioned work to provide useful formulae to work with. On the contrary, our real purpose is to discourage researchers from using it, by making it clear from the mathematical point of view why the conformable derivative is not fractional.

We have shown in ([2], Section 5) the disadvantages of using the conformable definition in solving fractional differential equations. It breaks the fractional equation and replaces it with an ordinary equation that may no longer properly describe the underlying fractional phenomenon. This is probably the reason it produces a substantially larger error compared with the Caputo fractional derivative when used to solve fractional models (see [2], Section 6).

We discuss concrete examples that illustrate how the conformable derivative is incapable of giving the fractional derivative obtainable from the classical Riemann-Liouville or Caputo derivatives. More examples are provided to show how the conformable operator produces functions with a much different behaviour than the classical fractional derivatives. The latter are known to be successful at describing many fractional phenomena (see e.g. [6]).

2 The problems in the statements and proofs in [5]

The results in [5] are all based on the following definition:

Definition 2.1

([5], Definition 2.1) Given a function f : [0, ∞[ → ℝ, the “conformable fractional derivative” of f of order α is defined by

$$T_{\alpha}f(t) := \lim_{\epsilon \to 0} \frac{f\big(t + \epsilon\, t^{1-\alpha}\big) - f(t)}{\epsilon}, \qquad (2.1)$$

for all t > 0, α ∈ ]0, 1[.

Definition 2.1 is flawed. Once we establish this, it will be immediately seen that the proofs in [5] are unnecessarily involved. More importantly, the results therein will be found insignificant as they follow directly from the traditional integer-order calculus.

We proved the following theorem in [2]:

Theorem 2.1

([2], Theorem 1) Fix 0 < α < 1 and let t > 0. A function f : [0, ∞[ → ℝ has a conformable fractional derivative of order α at t if and only if it is differentiable at t, in which case we have the pointwise relation

$$T_{\alpha}f(t) = t^{1-\alpha} f'(t). \qquad (2.2)$$
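Relation (2.2) is easy to probe numerically. The sketch below (illustrative choices of f, t and α; the helper `conformable_quotient` is ours, not from [2] or [5]) evaluates the difference quotient of Definition 2.1 for shrinking ε and compares it with t^{1−α} f′(t):

```python
import math

def conformable_quotient(f, t, alpha, eps):
    # Difference quotient from Definition 2.1:
    # (f(t + eps * t**(1 - alpha)) - f(t)) / eps
    return (f(t + eps * t**(1 - alpha)) - f(t)) / eps

# Compare with the right-hand side of (2.2) for f = sin, so f' = cos.
t, alpha = 2.0, 0.3
predicted = t**(1 - alpha) * math.cos(t)  # t**(1-alpha) * f'(t)
for eps in (1e-3, 1e-5, 1e-7):
    q = conformable_quotient(math.sin, t, alpha, eps)
    print(eps, q, abs(q - predicted))  # error shrinks with eps
```

The quotient converges to t^{1−α} f′(t), exactly as Theorem 2.1 predicts: nothing fractional survives the limit.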

We note here the problems with Definition 2.1 in the light of Theorem 2.1:

Remark 2.1

The limit (2.1) does not exist unless lim_{ε→0} (f(t + ε) − f(t))/ε exists. In other words, there is no function that is differentiable in the sense of Definition 2.1 yet not differentiable in the classical sense. In fact, the false claim in [1, 3, 4, 5] that the “conformable” derivative may exist at a point where the function is not differentiable is the only justification for the results in these papers.

Remark 2.2

The identity (2.2) is the reason T_α f demonstrates “conformability”. The conformability comes precisely from the integer-order derivative, the factor f′(t), in (2.2).

Remark 2.3

The derivative T_α f is not a fractional (order) derivative. It is exactly the integer-order derivative multiplied by the power function t ↦ t^{1−α}.

Therefore, Definition 2.1 is to be understood as follows:

Definition 2.2

(What Definition 2.1 in [5] really suggests) Given a function f : [0, ∞[ → ℝ, f is α-differentiable at t > 0 if it is differentiable at t, and its α-derivative is T_α f(t) := t^{1−α} f′(t), t > 0.

Now, we show how this correct understanding of what Definition 2.1 proposes trivializes the results in [5].

Theorem 2.2

([5], Theorem 2.1) If a function f : [0, ∞[ → ℝ is α-differentiable at t₀ > 0, α ∈ ]0, 1], then f is continuous at t₀.

If f is α-differentiable at t0 > 0, then it is differentiable at t0. It is well-known that if a function is differentiable at some point, then it is continuous thereat.

The next theorem explains why Tαf is described as conformable. We show why the statements are trivial and how the conformability comes from (2.2).

Theorem 2.3

([5], Theorem 2.2) Let α ∈ ]0, 1] and f, g be α-differentiable. Then:

1. T_α(af + bg) = a T_α(f) + b T_α(g) for all a, b ∈ ℝ.

2. T_α(t^p) = p t^{p−α} for all p ∈ ℝ.

3. T_α(f) = 0 for all constant functions f.

4. T_α(fg) = f T_α(g) + g T_α(f).

5. T_α(f/g) = (g T_α(f) − f T_α(g)) / g².

6. If, in addition, f is differentiable, then T_α(f)(t) = t^{1−α} f′(t).

If f, g are α-differentiable, then they are in fact differentiable, and we have T_α(f)(t) = t^{1−α} f′(t) and T_α(g)(t) = t^{1−α} g′(t).

Let us start with (1). Since f, g are differentiable, then so is af + bg. By Theorem 2.1, af + bg is α-differentiable and we have

$$T_{\alpha}(af+bg) = t^{1-\alpha}(af+bg)' = a\,t^{1-\alpha}f' + b\,t^{1-\alpha}g' = a\,T_{\alpha}(f) + b\,T_{\alpha}(g).$$

The proofs of items (2) through (5) are as trivial as the proof of (1).

The statement (6) is inaccurate. The truth is that f is α-differentiable at t > 0 if and only if f is differentiable at t. Thus, if f is α-differentiable at t > 0, then T_α(f)(t) = t^{1−α} f′(t). There is no need to additionally require f to be differentiable: differentiability is already implied by the assumption that f is α-differentiable.

Theorem 2.4

([5], Theorem 2.3) Let a > 0 and f : [a, b] → ℝ be a given function such that:

1. f is continuous on [a, b],

2. f is α-differentiable for some α ∈ ]0, 1[,

3. f(a) = f(b).

Then, there exists c ∈ ]a, b[ such that f^{(α)}(c) = 0.

The condition (2) implies that f is differentiable on ]a, b[ and f^{(α)}(t) = t^{1−α} f′(t) for all t ∈ ]a, b[. The classical Rolle’s theorem then gives c ∈ ]a, b[ such that f^{(α)}(c) = c^{1−α} f′(c) = 0.

Theorem 2.5

([5], Theorem 2.4) Let a > 0 and f : [a, b] → ℝ satisfy

1. f is continuous on [a, b],

2. f is α-differentiable for some α ∈ ]0, 1[.

Then, there exists c ∈ ]a, b[ such that
$$f^{(\alpha)}(c) = \frac{f(b)-f(a)}{\tfrac{1}{\alpha}b^{\alpha}-\tfrac{1}{\alpha}a^{\alpha}}.$$

Once again, by Theorem 2.1, the condition (2) implies that f is differentiable on ]a, b[ and f^{(α)}(t) = t^{1−α} f′(t) on ]a, b[. Now apply the classical Cauchy mean value theorem to the functions f and t ↦ t^α/α on [a, b]: there is c ∈ ]a, b[ such that

$$f^{(\alpha)}(c) = \frac{f'(c)}{c^{\alpha-1}} = \frac{f(b)-f(a)}{\tfrac{1}{\alpha}b^{\alpha}-\tfrac{1}{\alpha}a^{\alpha}}.$$
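Since Theorem 2.5 is just the Cauchy mean value theorem in disguise, its conclusion can be checked directly. A minimal numerical sketch (our own illustrative choices f(t) = t², [a, b] = [1, 2], α = 1/2, none of which are from [5]):

```python
import math

# Sanity check of the conformable MVT (Theorem 2.5), which by (2.2) is the
# classical Cauchy MVT applied to f and t -> t**alpha / alpha.
alpha, a, b = 0.5, 1.0, 2.0
f = lambda t: t**2
df = lambda t: 2 * t

rhs = (f(b) - f(a)) / (b**alpha / alpha - a**alpha / alpha)

# f^(alpha)(c) = c**(1 - alpha) * f'(c); locate c by bisection on the residual.
g = lambda c: c**(1 - alpha) * df(c) - rhs
lo, hi = a, b
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if g(lo) * g(mid) <= 0:
        hi = mid
    else:
        lo = mid
c = 0.5 * (lo + hi)
print("c =", c, "residual =", g(c))  # c lies strictly inside ]a, b[
```

The bisection succeeds because the residual changes sign on [a, b], which is exactly what the classical mean value theorem guarantees.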

Let a > 0. Proposition 2.1 in [5] introduces absolutely no novelty because, by (2.2) and the fact that t ↦ t^{1−α} is bounded above and below by positive constants on [a, b], f^{(α)} is bounded on [a, b] if and only if f′ is bounded on [a, b]. And if f′ is bounded on [a, b], then f is Lipschitz on [a, b], not merely uniformly continuous.

The remark that follows Proposition 2.1 in [5] is false: if an α-differentiable function f on ]a, b[ is uniformly continuous on [a, b], its α-derivative need not be bounded there. A counterexample is f(t) = t^{1/4}, which is uniformly continuous on [0, 1], while f^{(1/2)}(t) = t^{−1/4}/4 is unbounded on ]0, 1[.
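The blow-up in this counterexample is plain to see numerically; the sketch below simply tabulates the closed-form α-derivative t^{−1/4}/4 near 0:

```python
# The counterexample in numbers: f(t) = t**0.25 is uniformly continuous on [0, 1],
# yet its conformable derivative of order 1/2,
# f^(1/2)(t) = t**0.5 * f'(t) = t**(-0.25) / 4, blows up as t -> 0+.
f_half = lambda t: t**(-0.25) / 4
vals = [f_half(t) for t in (1e-2, 1e-4, 1e-8, 1e-12)]
print(vals)  # increasing without bound
```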

Take a look at:

Definition 2.3

([5], Definition 3.1) Let a ≥ 0. The α-integral of a function f is
$$I_a^{\alpha}(f)(t) := \int_a^t \frac{f(x)}{x^{1-\alpha}}\,dx.$$

The example given in [5], right after Definition 3.1, seems to try to sell I_a^α as the antiderivative of T_α. Of course, T_{1/2} sin t = √t cos t, and I_0^{1/2}(√t cos t)(t) = ∫_0^t cos x dx = sin t. The truth is simply that
$$\int_a^t \frac{f^{(\alpha)}(x)}{x^{1-\alpha}}\,dx = \int_a^t f'(x)\,dx = f(t) - f(a).$$
For example, I_0^{1/2}(√t sin t)(t) = I_0^{1/2}\big(T_{1/2}(−cos t)\big)(t) = ∫_0^t sin x dx = 1 − cos t.
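The displayed identity is nothing but the fundamental theorem of calculus, which a midpoint-rule computation confirms (illustrative choices f = cos, α = 1/2, [a, t] = [0, 2]; ours, not from [5]):

```python
import math

# The alpha-integral inverts T_alpha only through the fundamental theorem of
# calculus: int_a^t f^(alpha)(x) / x**(1 - alpha) dx = int_a^t f'(x) dx = f(t) - f(a).
alpha, a, t = 0.5, 0.0, 2.0
f, df = math.cos, lambda x: -math.sin(x)
f_alpha = lambda x: x**(1 - alpha) * df(x)   # conformable derivative of f, by (2.2)

n = 100_000
h = (t - a) / n
integral = sum(f_alpha(a + (i + 0.5) * h) / (a + (i + 0.5) * h)**(1 - alpha)
               for i in range(n)) * h
print(integral, f(t) - f(a))  # the two values agree
```

The factor x^{1−α} inside f^{(α)} cancels the weight x^{α−1} of the α-integral exactly, which is the whole point: only the ordinary derivative is ever integrated.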

3 The problems in the statements and proofs in [1]

We proceed to demonstrate the flaws in the definitions suggested in [1]. We prove that the tools of calculus proposed there lack novelty, as they are trivial consequences of traditional calculus. The ideas in [1] are all based on the following definition:

Definition 3.1

([1], Definition 2.1) The (left) fractional derivative starting from a of a function f : [a, ∞[ → ℝ of order 0 < α ≤ 1 is defined by

$$T_a^{\alpha}f(t) := \lim_{\epsilon \to 0} \frac{f\big(t + \epsilon (t-a)^{1-\alpha}\big) - f(t)}{\epsilon}, \quad t > a. \qquad (3.3)$$

The (right) fractional derivative of order 0 < α ≤ 1 of a function f : ]−∞, b] → ℝ is defined by

$${}_{\alpha}^{b}T f(t) := -\lim_{\epsilon \to 0} \frac{f\big(t + \epsilon (b-t)^{1-\alpha}\big) - f(t)}{\epsilon}, \quad t < b. \qquad (3.4)$$

If T_a^α f(t) exists on ]a, b[, then T_a^α f(a) := lim_{t→a+} T_a^α f(t). If ^b_α T f(t) exists on ]a, b[, then ^b_α T f(b) := lim_{t→b−} ^b_α T f(t).

It is also noted in [1] that if f is differentiable, then

$$T_a^{\alpha}f(t) = (t-a)^{1-\alpha}f'(t) \quad\text{and}\quad {}_{\alpha}^{b}T f(t) = -(b-t)^{1-\alpha}f'(t).$$

The following theorem is given in [2]:

Theorem 3.1

([2], Theorem 3) Suppose h : ]−1, 1[ × ℝ → ℝ is such that lim_{ε→0} h(ε, t₀) ≠ 0 for some t₀ ∈ ℝ. Then a function ψ : ℝ → ℝ is differentiable at t₀ if and only if the limit

$$\tilde{\psi}(t_0) := \lim_{\epsilon \to 0} \frac{\psi\big(t_0 + \epsilon\, h(\epsilon, t_0)\big) - \psi(t_0)}{\epsilon}$$

exists, in which case ψ̃(t₀) = φ(t₀) ψ′(t₀), where φ(t) := lim_{ε→0} h(ε, t).

Let us see the problems in Definition 3.1:

Remark 3.1

According to Theorem 3.1, the limit in (3.3) exists at t > a if and only if lim_{ε→0} (f(t + ε) − f(t))/ε exists. Similarly, the limit in (3.4) exists at t < b if and only if f′(t) exists. This means that neither T_a^α f(t) nor ^b_α T f(t) exists unless f is differentiable at t. In fact, by Theorem 3.1, Definition 3.1 reads:

$$T_a^{\alpha}f(t) := (t-a)^{1-\alpha}f'(t),\ t > a; \qquad {}_{\alpha}^{b}T f(t) := -(b-t)^{1-\alpha}f'(t),\ t < b, \qquad (3.5)$$

provided f′(t) exists.

Remark 3.2

Unlike the classical fractional derivatives of Riemann-Liouville and Caputo, Definition 3.1 does not work for functions defined on all of ℝ. Indeed, by Remark 3.1, if f is defined on ℝ, there is no admissible choice of the endpoints, and both derivatives T_a^α f(t) and ^b_α T f(t) are ill-defined at every t ∈ ℝ, which is unacceptable.

Remark 3.3

There is no geometric or physical motivation that justifies the negative sign in the definition of the operator ^b_α T. Furthermore, the case α = 1 is supposed to give the first order integer derivative, but Remark 3.1 implies

$${}_{1}^{b}T f(t) = -f'(t), \quad t < b,$$

which is neither the left nor the right derivative of f at t.

Remark 3.4

There is an obvious inconsistency in Definition 3.1 when it comes to defining T_a^α f(a) and ^b_α T f(b). Let a < b. We have clarified in Remark 3.1 that if a function is not differentiable at some point t ∈ ]a, b[, then the pointwise criterion of Definition 3.1 does not allow it to be α-differentiable at t. This, however, excludes the endpoints a and b. We discuss T_a^α f(a); the analogue applies to ^b_α T f(b). Unjustifiably, Definition 3.1 allows the derivative T_a^α f(a) to exist regardless of the existence of the right derivative of f at a. Precisely, by Remark 3.1, if T_a^α f exists on ]a, a + δ[ for some δ > 0, then

$$T_a^{\alpha}f(a) = \lim_{t \to a+} T_a^{\alpha}f(t) = \lim_{t \to a+} (t-a)^{1-\alpha}f'(t).$$

Therefore, according to Definition 3.1, T_a^α f(a) exists if and only if f′ exists on ]a, a + δ[ and lim_{t→a+} (t − a)^{1−α} f′(t) exists. This is evidently a weaker condition than the existence of f′ on ]a, a + δ[ together with lim_{t→a+} f′(t). It is also independent of the existence of the right derivative f′₊(a) of f at a. Many examples are given in [1] of functions differentiable on ]a, b[ such that T_a^α f(a) exists but f′₊(a) does not. This may lead to the false intuition that the operator T_a^α is well-defined on a larger class of functions than the derivative. The reality is that there exist functions, infinitely differentiable on ]a, b[, such that f′₊(a) exists but T_a^α f(a) does not. Consider for instance
$$g(x) := \begin{cases} x^{2}\sin\dfrac{1}{x^{3}}, & x \neq 0;\\[4pt] 0, & x = 0.\end{cases}$$
We have g ∈ C^∞(ℝ ∖ {0}) and g′(0) = 0, yet lim_{x→0+} x^{1−α} g′(x) does not exist for any 0 ≤ α ≤ 1.
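The oscillation that kills this limit is easy to exhibit numerically: along the points x_n = (nπ)^{−1/3} we have sin(1/x³) = 0 and cos(1/x³) = (−1)^n, so x^{1−α} g′(x_n) alternates in sign while its magnitude grows. A short sketch (α = 1/2, an illustrative choice):

```python
import math

# g(x) = x**2 * sin(1/x**3) with g(0) = 0 has g'(0) = 0, yet
# x**(1 - alpha) * g'(x) has no limit as x -> 0+.
def dg(x):
    return 2 * x * math.sin(1 / x**3) - (3 / x**2) * math.cos(1 / x**3)

alpha = 0.5
xs = [(n * math.pi) ** (-1 / 3) for n in range(10, 16)]
samples = [x**(1 - alpha) * dg(x) for x in xs]
print(samples)  # alternating signs, growing magnitude
```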

Another issue with T_a^α f(a) is that its existence depends on the domain. For example, h(t) := √|sin(t − t₀)| is not differentiable at t = t₀, and consequently, by Remark 3.1, T_c^α h(t₀) does not exist. But this is true only if h is considered on a domain [c, ∞[ with c < t₀. If c = t₀, however, then T_c^α h(t₀) magically exists and equals 0 for every 0 < α < 1/2.

Remark 3.5

The identities (3.5) prove that the derivative in Definition 3.1 is not fractional and that the conformability comes from the integer-order derivative factor. What is worse is that the derivative in Definition 3.1 fails to give the fractional derivative of some functions whose fractional derivatives exist and can be easily calculated using the Riemann-Liouville or Caputo definition.

See the following examples:

Example 3.1

Consider the function
$$f_1(t) := \begin{cases} 1, & 0 \le t \le 1;\\ 0, & 1 < t \le 2.\end{cases}$$
It is easily verifiable that (T_0^α f₁)(1) does not exist. But the Riemann-Liouville fractional derivative D_{0+}^α f₁ exists at t = 1, and

$$D_{0+}^{\alpha}f_1(1) = \frac{1}{\Gamma(1-\alpha)}\int_0^1 \frac{d\xi}{(1-\xi)^{\alpha}} = \frac{1}{(1-\alpha)\Gamma(1-\alpha)}.$$

Example 3.2

Consider the function f₂(t) := |t − 1| on [0, 2]. Again, (T_0^α f₂)(1) does not exist. Nevertheless, the Caputo fractional derivative ^C D_{0+}^α f₂ exists at t = 1, and

$${}^{C}D_{0+}^{\alpha}f_2(1) = -\frac{1}{\Gamma(1-\alpha)}\int_0^1 \frac{d\xi}{(1-\xi)^{\alpha}} = -\frac{1}{(1-\alpha)\Gamma(1-\alpha)}.$$

Remark 3.6

Pointwise multiplication of the derivative f′ of a function f defined on [a, ∞[ by the function (t − a)^{1−α} does not produce the physical properties we expect from a fractional derivative. We show this by comparing T_a^α f(t) = (t − a)^{1−α} f′(t) with the Riemann-Liouville and Caputo fractional derivatives of the sine and hyperbolic sine functions. Similar differences show up with the cosine and hyperbolic cosine functions. Notice here that the Riemann-Liouville fractional derivative coincides with the Caputo derivative for each of these functions. We see the great difference in behaviour between T_a^α and the classical fractional operators:

Example 3.3

Let g₁(t) = sin t. Then (T_0^α g₁)(t) = t^{1−α} cos t. We can calculate

$$D_{0+}^{\alpha}g_1(t) = {}^{C}D_{0+}^{\alpha}g_1(t) = \frac{1}{\Gamma(1-\alpha)}\int_0^t \frac{\cos(t-\xi)}{\xi^{\alpha}}\,d\xi.$$

Notice that (T_0^α g₁)(t) grows unboundedly with t. Contrarily, the fractional derivatives D_{0+}^α g₁ and ^C D_{0+}^α g₁ are bounded. To see this, let t > 1. We have

$$\left|\int_0^1 \frac{\cos(t-\xi)}{\xi^{\alpha}}\,d\xi\right| \le \int_0^1 \frac{d\xi}{\xi^{\alpha}} = \frac{1}{1-\alpha}. \qquad (3.6)$$

Also, integrating by parts,

$$\int_1^t \frac{\cos(t-\xi)}{\xi^{\alpha}}\,d\xi = \sin(t-1) - \alpha \int_1^t \frac{\sin(t-\xi)}{\xi^{1+\alpha}}\,d\xi, \qquad (3.7)$$

and we have

$$\left|\int_1^t \frac{\sin(t-\xi)}{\xi^{1+\alpha}}\,d\xi\right| \le \int_1^t \frac{d\xi}{\xi^{1+\alpha}} = \frac{1}{\alpha}\left(1 - \frac{1}{t^{\alpha}}\right) < \frac{1}{\alpha}. \qquad (3.8)$$

The boundedness of D_{0+}^α g₁ and ^C D_{0+}^α g₁ follows from (3.6), (3.7), and (3.8). See Figure 1.

Figure 1

The behavior of T_0^α g₁ is very different from that of D_{0+}^α g₁ and ^C D_{0+}^α g₁.
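The contrast in Example 3.3 can be reproduced numerically. In the sketch below (our own quadrature, α = 1/2; not from [1] or [2]), the substitution ξ = u^{1/(1−α)} removes the integrable singularity at ξ = 0 before the midpoint rule is applied:

```python
import math

# The fractional derivative of sin stays within the a priori bound from
# (3.6)-(3.8), while (T_0^alpha sin)(t) = t**(1 - alpha) * cos(t) eventually
# exceeds any bound.
alpha = 0.5

def frac_deriv_sin(t, n=20000):
    # int_0^t cos(t - xi) * xi**(-alpha) dxi via xi = u**(1/(1 - alpha)),
    # which turns the weight xi**(-alpha) dxi into du / (1 - alpha).
    p = 1 / (1 - alpha)
    upper = t**(1 - alpha)
    h = upper / n
    s = sum(math.cos(t - ((i + 0.5) * h)**p) for i in range(n)) * h
    return s * p / math.gamma(1 - alpha)

bound = (1 / (1 - alpha) + 2) / math.gamma(1 - alpha)   # from (3.6)-(3.8)
ts = [1, 10, 25, 50]
frac_vals = [frac_deriv_sin(t) for t in ts]
conf_vals = [t**(1 - alpha) * math.cos(t) for t in ts]
print(frac_vals)  # all within the bound
print(conf_vals)  # amplitude grows like t**(1 - alpha)
```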

Example 3.4

Let g₂(t) = sinh t. Then (T_0^α g₂)(t) = t^{1−α} cosh t. On the other hand,

$$D_{0+}^{\alpha}g_2(t) = {}^{C}D_{0+}^{\alpha}g_2(t) = \frac{1}{\Gamma(1-\alpha)}\int_0^t \frac{\cosh(t-\xi)}{\xi^{\alpha}}\,d\xi.$$

The function T_0^α g₂ grows much faster than the fractional derivative. To prove this, we compute

$$\begin{aligned}
\lim_{t\to\infty}\frac{D_{0+}^{\alpha}g_2(t)}{T_0^{\alpha}g_2(t)}
&= \frac{1}{\Gamma(1-\alpha)}\lim_{t\to\infty}\frac{\int_0^t \cosh(t-\xi)\,\xi^{-\alpha}\,d\xi}{t^{1-\alpha}\cosh t}
= \frac{1}{\Gamma(1-\alpha)}\lim_{t\to\infty}\frac{\int_0^t \frac{1}{\xi^{\alpha}}\big(\cosh\xi-\tanh t\,\sinh\xi\big)\,d\xi}{t^{1-\alpha}}\\
&= \frac{1}{(1-\alpha)\Gamma(1-\alpha)}\left(\lim_{t\to\infty}\frac{1}{\cosh t}-\lim_{t\to\infty}\frac{\int_0^t \frac{\sinh\xi}{\xi^{\alpha}}\,d\xi}{t^{-\alpha}\cosh^2 t}\right)
= -\frac{1}{(1-\alpha)\Gamma(1-\alpha)}\lim_{t\to\infty}\frac{\int_0^t \frac{\sinh\xi}{\xi^{\alpha}}\,d\xi}{t^{-\alpha}\cosh^2 t}\\
&= -\frac{1}{(1-\alpha)\Gamma(1-\alpha)}\lim_{t\to\infty}\frac{1}{-\dfrac{\alpha\cosh^2 t}{t\sinh t}+2\cosh t} = 0.
\end{aligned}$$

See Figure 2.

Figure 2

The behavior of T_0^α g₂ is very different from that of D_{0+}^α g₂ and ^C D_{0+}^α g₂.
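The vanishing ratio in Example 3.4 is also visible numerically (our own quadrature, α = 1/2, with the same singularity-removing substitution as before):

```python
import math

# The ratio D_{0+}^alpha sinh(t) / (T_0^alpha sinh)(t) decays to 0:
# t**(1 - alpha) * cosh(t) grows strictly faster than the fractional derivative.
alpha = 0.5

def frac_deriv_sinh(t, n=20000):
    # int_0^t cosh(t - xi) * xi**(-alpha) dxi via xi = u**(1/(1 - alpha)).
    p = 1 / (1 - alpha)
    upper = t**(1 - alpha)
    h = upper / n
    s = sum(math.cosh(t - ((i + 0.5) * h)**p) for i in range(n)) * h
    return s * p / math.gamma(1 - alpha)

ratios = [frac_deriv_sinh(t) / (t**(1 - alpha) * math.cosh(t)) for t in (2, 8, 18, 32)]
print(ratios)  # decreasing toward 0
```

For α = 1/2 the decay is roughly like 1/√t, consistent with the limit computed above.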

Remarks 3.1 through 3.6 show the insignificance of the results in [1]. We illustrate how the proofs presented in [1] reduce to trivial exercises of calculus. For example:

Theorem 3.2

([1], Theorem 2.11) Assume f, g : ]a, ∞[ → ℝ are (left) α-differentiable functions, where 0 < α ≤ 1. Let h(t) = f(g(t)). Then h is (left) α-differentiable, and for all t > a such that g(t) ≠ 0 we have

$$(T_a^{\alpha}h)(t) = (T_a^{\alpha}f)(g(t)) \cdot (T_a^{\alpha}g)(t) \cdot g(t)^{\alpha-1}. \qquad (3.9)$$

First of all, the conclusion (3.9) of Theorem 3.2 is incorrect for general a; it is correct only when a = 0. We consider this case.

As noted in Remark 3.1, if f, g are (left) α-differentiable on ]a, ∞[, then they are actually differentiable on ]a, ∞[. Moreover, by the identities (3.5),

$$(T_0^{\alpha}f)(g(t)) \cdot (T_0^{\alpha}g)(t) \cdot g(t)^{\alpha-1}
= g(t)^{1-\alpha} f'(g(t)) \cdot t^{1-\alpha} g'(t) \cdot g(t)^{\alpha-1}
= t^{1-\alpha} f'(g(t))\, g'(t)
= t^{1-\alpha} \big(f(g(t))\big)'
= (T_0^{\alpha}h)(t).$$
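The computation for a = 0 is confirmed by a one-line numerical check (illustrative choices f = sin, g(t) = t² + 1, α = 0.4; ours, not from [1]):

```python
import math

# Chain rule (3.9) with a = 0, which by (3.5) is t**(1 - alpha) * (f o g)'(t) rewritten.
alpha, t = 0.4, 1.7
g = lambda s: s**2 + 1
dg = lambda s: 2 * s

# Left-hand side: T_0^alpha (f o g)(t) = t**(1 - alpha) * f'(g(t)) * g'(t)
lhs = t**(1 - alpha) * math.cos(g(t)) * dg(t)
# Right-hand side of (3.9): (T f)(g(t)) * (T g)(t) * g(t)**(alpha - 1)
rhs = g(t)**(1 - alpha) * math.cos(g(t)) * (t**(1 - alpha) * dg(t)) * g(t)**(alpha - 1)
print(lhs, rhs)  # identical up to rounding
```

The powers of g(t) cancel exactly, which is why the “α-chain rule” is nothing more than the ordinary chain rule dressed up with the factor t^{1−α}.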

Both the statement and proof of the next theorem ([1], Theorem 4.1) we investigate are incorrect. This implies that Proposition 4.2 and Examples 4.1 through 4.3 in [1] are also incorrect.

Theorem 3.3

([1], Theorem 4.1) Assume f is infinitely α-differentiable, for some 0 < α < 1, in a neighborhood of a point t₀. Then f has the fractional power series expansion:

$$f(t) = \sum_{k=0}^{\infty} \frac{(T_{t_0}^{\alpha}f)^{(k)}(t_0)}{\alpha^{k}\,k!}\,(t - t_0)^{k\alpha}, \quad t_0 < t < t_0 + R^{1/\alpha},\ R > 0, \qquad (3.10)$$

where (T_{t₀}^α f)^{(k)} means the application of T_{t₀}^α k times.

The proof in [1] begins with writing

$$f(t) = c_0 + c_1 (t-t_0)^{\alpha} + c_2 (t-t_0)^{2\alpha} + c_3 (t-t_0)^{3\alpha} + \dots, \qquad (3.11)$$

and proceeds by applying T_{t₀}^α to both sides of (3.11), then evaluating both sides at t₀, and repeating the process k times. The coefficients c_k are inaccurately calculated as:

$$c_k = \frac{(T_{t_0}^{\alpha}f)^{(k)}(t_0)}{\alpha^{k}\,k!}.$$

By Remark 3.1, the assumption that f is infinitely α-differentiable is equivalent to assuming that f is infinitely differentiable.

Using (3.5), we get

$$\begin{aligned}
(T_{t_0}^{\alpha}f)^{(1)}(t) &= (t-t_0)^{1-\alpha}f'(t),\\
(T_{t_0}^{\alpha}f)^{(2)}(t) &= (1-\alpha)(t-t_0)^{1-2\alpha}f'(t) + (t-t_0)^{2-2\alpha}f''(t),\\
(T_{t_0}^{\alpha}f)^{(3)}(t) &= (1-\alpha)(1-2\alpha)(t-t_0)^{1-3\alpha}f'(t) + 3(1-\alpha)(t-t_0)^{2-3\alpha}f''(t) + (t-t_0)^{3-3\alpha}f'''(t),\\
(T_{t_0}^{\alpha}f)^{(4)}(t) &= (1-\alpha)(1-2\alpha)(1-3\alpha)(t-t_0)^{1-4\alpha}f'(t) + (1-\alpha)\big(3(1-\alpha)+4(1-2\alpha)\big)(t-t_0)^{2-4\alpha}f''(t)\\
&\quad + 6(1-\alpha)(t-t_0)^{3-4\alpha}f'''(t) + (t-t_0)^{4-4\alpha}f^{(4)}(t),\\
&\;\;\vdots\\
(T_{t_0}^{\alpha}f)^{(k)}(t) &= \sum_{j=1}^{k-1} a_{j,k}(\alpha)\, f^{(j)}(t)\,(t-t_0)^{j-k\alpha} + f^{(k)}(t)\,(t-t_0)^{k-k\alpha}, \quad k \ge 2, \qquad (3.12)
\end{aligned}$$

where a_{1,k}(α) = ∏_{j=1}^{k−1}(1 − jα), and a_{j,k}(α), 2 ≤ j ≤ k − 1, are also constants that depend only on α. At first glance we observe the following.
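The first lines of (3.12) are straightforward to cross-check numerically: iterating the operator g ↦ (t − t₀)^{1−α} g′(t), with derivatives taken by central differences, reproduces the stated k = 2 expansion. A sketch with illustrative choices t₀ = 0, f = sin and α = 0.3:

```python
import math

# Iterating T g(t) = t**(1 - alpha) * g'(t) twice should reproduce the k = 2
# line of (3.12): (1 - alpha) * t**(1 - 2a) * f'(t) + t**(2 - 2a) * f''(t).
alpha = 0.3

def T(func, t, h=1e-6):
    # conformable derivative at t0 = 0, derivative by central differences
    return t**(1 - alpha) * (func(t + h) - func(t - h)) / (2 * h)

T1 = lambda t: T(math.sin, t)   # first conformable derivative
T2 = lambda t: T(T1, t)         # second application

for t in (0.5, 1.0, 2.0):
    closed = (1 - alpha) * t**(1 - 2 * alpha) * math.cos(t) - t**(2 - 2 * alpha) * math.sin(t)
    print(t, T2(t), closed)  # agree to within finite-difference error
```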

Remark 3.7

Since lim_{t→t₀+} (t − t₀)^{1−α} f′(t) = 0 for any continuously differentiable f and every α < 1, the coefficient of (t − t₀)^α in the series (3.10) is zero for any smooth function. Similarly, if α < 1/2, then lim_{t→t₀+} (T_{t₀}^α f)^{(1)}(t) = lim_{t→t₀+} (T_{t₀}^α f)^{(2)}(t) = 0. Consequently, the coefficients of (t − t₀)^α and of (t − t₀)^{2α} are both zero for any smooth f. Generally, given α ∈ ]0, 1[, there exists n > 1 such that α < 1/n, and, strangely enough, the coefficients of (t − t₀)^{kα}, 1 ≤ k ≤ n, are all zero, regardless of the function f.

We prove in Proposition 3.1 below that the proof presented in [1] is incorrect and the series (3.10) does not make sense.

We immediately realize from (3.12) that the infinite differentiability of f does not guarantee that the series (3.10) makes sense. We need infinitely many more smallness restrictions on the derivatives of f near t0. Precisely, we have the following proposition.

Proposition 3.1

Given a function f and α ∈ ]0, 1[, the expansion (3.10) does not make sense unless the infinitely many limits

$$\lim_{t \to t_0+} \frac{\displaystyle\sum_{j=1}^{k-1} a_{j,k}(\alpha)\,(t-t_0)^{j-1} f^{(j)}(t)}{(t-t_0)^{k\alpha-1}}, \quad k > \frac{1}{\alpha}, \qquad (3.13)$$

exist.

Examples 4.1 through 4.3 in [1], which apply the generally incorrect expansion (3.11), are conveniently chosen to be functions of the form f(t) = g((t − t₀)^α/α). They seem to work because (T_{t₀}^α f)^{(k)}(t) = g^{(k)}((t − t₀)^α/α). In fact, the series (3.11) that works for the function t ↦ e^{(t−t₀)^α/α} of Example 4.1 in [1] fails, for any α ∈ ]0, 1[, for the infinitely differentiable function h(t) := e^{(t−s)^α/α} on ]s, ∞[, with s < t₀. Verifiably, if α > 1/2, then (T_{t₀}^α h)^{(2)}(t₀) does not exist because

$$\lim_{t \to t_0+} \frac{h'(t)}{(t-t_0)^{2\alpha-1}} = \lim_{t \to t_0+} \frac{(t-s)^{\alpha-1}\, e^{(t-s)^{\alpha}/\alpha}}{(t-t_0)^{2\alpha-1}} = +\infty.$$

If α > 1/3, then (T_{t₀}^α h)^{(3)}(t₀) does not exist since

$$\lim_{t \to t_0+} \frac{(1-\alpha)(1-2\alpha)\,h'(t) + 3(1-\alpha)(t-t_0)\,h''(t)}{(t-t_0)^{3\alpha-1}} = \operatorname{sgn}(1-2\alpha)\cdot\infty.$$

In fact, if α > 1/n for some n ≥ 2, one can show that (T_{t₀}^α h)^{(k)}(t₀) does not exist for any k ≥ n. Hence, for the smooth function h, the coefficients c_k of (t − t₀)^{kα} in the expansion (3.11), k ≥ 2, satisfy
$$c_k = \begin{cases} 0, & k < 1/\alpha;\\ \pm\infty, & k > 1/\alpha.\end{cases}$$
Even more, the series (3.11) fails for the simplest analytic function e^t. An analogous argument applies to Examples 4.2 and 4.3 in [1].
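The divergence responsible for the non-existence of (T_{t₀}^α h)^{(2)}(t₀) is visible numerically. With the illustrative choices s = −1, t₀ = 0 and α = 3/4 > 1/2 (ours, not from [1]):

```python
import math

# For h(t) = exp((t - s)**alpha / alpha), the quantity h'(t) / (t - t0)**(2*alpha - 1)
# must converge for (T_{t0}^alpha h)^(2)(t0) to exist; instead it blows up as t -> t0+.
alpha, s = 0.75, -1.0
dh = lambda t: (t - s)**(alpha - 1) * math.exp((t - s)**alpha / alpha)

vals = [dh(t) / t**(2 * alpha - 1) for t in (1e-2, 1e-4, 1e-6, 1e-8)]
print(vals)  # growing without bound
```

Here h′(t) tends to a nonzero constant as t → 0+, while (t − t₀)^{2α−1} → 0, so the quotient diverges exactly as the text claims.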

Acknowledgement

The author is very grateful to professor Virginia Kiryakova for her encouraging and valuable comments.

References

[1] T. Abdeljawad, On conformable fractional calculus. J. Comput. Appl. Math. 279 (2015), 57–66; doi:10.1016/j.cam.2014.10.016.

[2] A.A. Abdelhakim and J.A.T. Machado, A critical analysis of the conformable derivative. Nonlinear Dynamics (2019), First Online: 02 Jan. 2019, 11 pp.; doi:10.1007/s11071-018-04741-5.

[3] D.R. Anderson and D.J. Ulness, Properties of the Katugampola fractional derivative with potential application in quantum mechanics. J. Math. Phys. 56 (2015), 063502; doi:10.1063/1.4922018.

[4] U.N. Katugampola, A new fractional derivative with classical properties. E-print arXiv:1410.6535 (2014), 8 pp.

[5] R. Khalil, M. Al Horani, A. Yousef and M. Sababheh, A new definition of fractional derivative. J. Comput. Appl. Math. 264 (2014), 65–70; doi:10.1016/j.cam.2014.01.002.

[6] F. Mainardi, Fractional Calculus and Waves in Linear Viscoelasticity: An Introduction to Mathematical Models. World Scientific, 2010; doi:10.1142/p614.