Consider the following conformable heat equation

$$D_{t}^{\alpha}u(x,t)-u_{xx}(x,t)=f(x,t),\quad (x,t)\in \Omega\times (0,T],$$(6)

$$u(x,t)=0,\quad (x,t)\in \partial\Omega\times (0,T],$$(7)

$$u(x,T)=g(x),\quad x\in \Omega,$$(8)

where *f*(*x*, *t*) ∈ *C*(0, *T*; *L*^{2}(*Ω*)) and *g*(*x*) ∈ *L*^{2}(*Ω*). Given the data at the final time *t* = *T*, the goal of the inverse problem is to recover *u*(*x*, *t*) for 0 ≤ *t* < *T*. Unfortunately, this inverse problem is usually ill-posed in the sense of Hadamard, that is, it violates at least one of the following conditions:

– *Existence*: a solution of the problem exists.

– *Uniqueness*: the solution is unique.

– *Stability*: the solution depends continuously on the data, i.e., any small error in the given data leads to a correspondingly small error in the solution.

Problems satisfying all three conditions are called well-posed. We will show that the conformable backward heat problem is ill-posed.

First, let us make precise what a solution of Problem (6)–(8) is. We call a function *u* ∈ *C*^{2, 1}((0, *a*) × (0, *T*); *L*^{2}(*Ω*)) a solution of Problem (6)–(8) if

$$D_{t}^{\alpha}\langle u(\cdot,t),w\rangle-\langle u_{xx}(\cdot,t),w\rangle=\langle f(\cdot,t),w\rangle$$(9)

for all functions *w* ∈ *L*^{2}(*Ω*). In fact, it suffices to choose *w* in the orthogonal basis $\left\{\sin\left(\frac{n\pi}{a}x\right)\right\}_{n=1}^{\infty}$, and then (9) reduces to

$$u_{n}(t)=e^{k_{n}\left(\frac{T^{\alpha}-t^{\alpha}}{\alpha}\right)}g_{n}-\int_{t}^{T}s^{\alpha-1}e^{k_{n}\left(\frac{s^{\alpha}-t^{\alpha}}{\alpha}\right)}f_{n}(s)\,ds,$$

and as a result, the solution of (6) - (8) can be represented by

$$u(x,t)=\sum_{n=1}^{\infty}u_{n}(t)\sin\left(\frac{n\pi}{a}x\right)=\sum_{n=1}^{\infty}\left(e^{k_{n}\left(\frac{T^{\alpha}-t^{\alpha}}{\alpha}\right)}g_{n}-\int_{t}^{T}s^{\alpha-1}e^{k_{n}\left(\frac{s^{\alpha}-t^{\alpha}}{\alpha}\right)}f_{n}(s)\,ds\right)\sin\left(\frac{n\pi}{a}x\right),$$(10)

where *k*_{n} = (*n π*/*a*)^{2} and

$$\begin{aligned}g_{n}&=\frac{2}{a}\int_{0}^{a}g(x)\sin\left(\frac{n\pi}{a}x\right)dx,\\ f_{n}(t)&=\frac{2}{a}\int_{0}^{a}f(x,t)\sin\left(\frac{n\pi}{a}x\right)dx,\\ u_{n}(t)&=\frac{2}{a}\int_{0}^{a}u(x,t)\sin\left(\frac{n\pi}{a}x\right)dx.\end{aligned}$$(11)
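The coefficient formula for *u*_{n}(*t*) can be checked numerically: assuming the standard conformable derivative *D*_{t}^{α} *u*(*t*) = *t*^{1−α} *u*′(*t*) (which is consistent with the exponential form of the solution above), *u*_{n} should satisfy *D*_{t}^{α} *u*_{n} + *k*_{n} *u*_{n} = *f*_{n} with terminal value *u*_{n}(*T*) = *g*_{n}. The sketch below uses illustrative values of *α*, *T*, *k*_{n}, *g*_{n} and a hypothetical source coefficient *f*_{n}, approximating the conformable derivative by a central difference and the integral by the trapezoidal rule.

```python
import math

# Sanity check (illustrative values, not from the paper): the coefficient
# formula u_n(t) should satisfy the conformable ODE
#     D_t^alpha u_n(t) + k_n u_n(t) = f_n(t),  u_n(T) = g_n,
# with the conformable derivative D_t^alpha u(t) = t^(1-alpha) u'(t).
alpha, T, k_n, g_n = 0.7, 1.0, 4.0, 1.0

def f_n(t):
    return math.cos(t)  # hypothetical source coefficient

def u_n(t, m=20000):
    # trapezoidal rule for the integral from t to T in the formula above
    h = (T - t) / m
    integral = 0.0
    for i in range(m + 1):
        s = t + i * h
        w = 0.5 if i in (0, m) else 1.0
        integral += w * s ** (alpha - 1) * math.exp(k_n * (s ** alpha - t ** alpha) / alpha) * f_n(s)
    integral *= h
    return math.exp(k_n * (T ** alpha - t ** alpha) / alpha) * g_n - integral

# central-difference approximation of D_t^alpha u_n at an interior time
t, h = 0.5, 1e-4
conformable_deriv = t ** (1 - alpha) * (u_n(t + h) - u_n(t - h)) / (2 * h)
residual = conformable_deriv + k_n * u_n(t) - f_n(t)
assert abs(residual) < 1e-3       # ODE holds up to discretization error
assert abs(u_n(T) - g_n) < 1e-12  # terminal condition holds
```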

Note that for 0 ≤ *t* < *T* the factor $e^{k_{n}\left(\frac{T^{\alpha}-t^{\alpha}}{\alpha}\right)}$ tends to infinity as *n* → ∞. Hence any high-frequency error in the data *g* is amplified without bound, which causes the instability of the solution.
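As a short numerical illustration (with hypothetical parameter values, not the paper's numerics), the amplification factor already becomes astronomically large at moderate frequencies:

```python
import math

# Illustration (hypothetical parameters): the amplification factor
# e^{k_n (T^alpha - t^alpha)/alpha}, k_n = (n*pi/a)^2, explodes as the
# frequency index n grows, so noise of size delta in g_n appears in
# u_n(t) with size delta * amplification(n).
a, T, t, alpha = math.pi, 1.0, 0.5, 0.8

def amplification(n):
    k_n = (n * math.pi / a) ** 2
    return math.exp(k_n * (T ** alpha - t ** alpha) / alpha)

factors = [amplification(n) for n in (1, 2, 4, 8)]
assert all(f2 > f1 for f1, f2 in zip(factors, factors[1:]))  # strictly growing
assert amplification(8) > 1e10  # already astronomically large at n = 8
```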

In this paper, we apply the quasi-boundary value method, with a small modification, to regularize (6)–(8). Rather than using the original data, we consider Problem (6)–(8) with adjusted data, chosen so that the adjusted problem is well-posed and approximates the original one. Consider the following problem

$$D_{t}^{\alpha}u^{\epsilon,\tau}(x,t)-u_{xx}^{\epsilon,\tau}(x,t)=f^{\epsilon,\tau}(x,t),\quad (x,t)\in \Omega\times (0,T],$$(12)

$$u^{\epsilon,\tau}(x,t)=0,\quad (x,t)\in \partial\Omega\times (0,T],$$(13)

$$u^{\epsilon,\tau}(x,T)=g^{\epsilon,\tau}(x),\quad x\in \Omega,$$(14)

where

$$\begin{aligned}f^{\epsilon,\tau}(x,t)&=\sum_{n=1}^{\infty}\frac{e^{-k_{n}\frac{T^{\alpha}+\tau}{\alpha}}}{\epsilon k_{n}+e^{-k_{n}\frac{T^{\alpha}+\tau}{\alpha}}}f_{n}(t)\sin\left(\frac{n\pi}{a}x\right),\\ g^{\epsilon,\tau}(x)&=\sum_{n=1}^{\infty}\frac{e^{-k_{n}\frac{T^{\alpha}+\tau}{\alpha}}}{\epsilon k_{n}+e^{-k_{n}\frac{T^{\alpha}+\tau}{\alpha}}}g_{n}\sin\left(\frac{n\pi}{a}x\right).\end{aligned}$$(15)
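The effect of this adjustment can be seen numerically: the multiplier applied to *f*_{n} and *g*_{n} in (15) stays in (0, 1] and decays rapidly in *n*, so it damps exactly the high-frequency modes that the factor in (10) amplifies. A minimal sketch with illustrative parameter values:

```python
import math

# Sketch (parameter values are illustrative): the multiplier applied to
# f_n and g_n in (15) is bounded by 1 and decays with the frequency n.
a, T, alpha, tau, eps = math.pi, 1.0, 0.8, 0.1, 1e-3

def filter_factor(n):
    k_n = (n * math.pi / a) ** 2
    decay = math.exp(-k_n * (T ** alpha + tau) / alpha)
    return decay / (eps * k_n + decay)

values = [filter_factor(n) for n in (1, 2, 4, 8)]
assert all(0.0 < v <= 1.0 for v in values)     # bounded multiplier
assert values == sorted(values, reverse=True)  # decays with frequency
```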

#### Lemma 1.1

*Let* 0 ≤ *t* ≤ *T*, *τ* > 0, *ε* ∈ 𝔻 := $\left(0,\frac{T^{\alpha}+\tau}{\alpha}\right)$ *and x* > 0. *Then*, *for α* ∈ (0, 1) *the following inequality holds*

$$\frac{e^{-\frac{t^{\alpha}+\tau}{\alpha}x}}{\epsilon x+e^{-\frac{T^{\alpha}+\tau}{\alpha}x}}\le (\alpha\epsilon)^{\frac{t^{\alpha}-T^{\alpha}}{T^{\alpha}+\tau}}\left(\frac{T^{\alpha}+\tau}{1+\ln\left(\frac{T^{\alpha}+\tau}{\alpha\epsilon}\right)}\right)^{\frac{T^{\alpha}-t^{\alpha}}{T^{\alpha}+\tau}}.$$(16)

#### Proof

For any *ε* ∈ 𝔻, *x* > 0, *α* ∈ (0, 1) and *T* > 0, the function

$$w(x)=\frac{1}{\epsilon x+e^{-\frac{T^{\alpha}+\tau}{\alpha}x}}$$

attains its maximum at $x=\ln\left(\frac{T^{\alpha}+\tau}{\alpha\epsilon}\right)\Big/\left(\frac{T^{\alpha}+\tau}{\alpha}\right)$, which is found by setting the derivative of the denominator equal to zero. Therefore,

$$w(x)=\frac{1}{\epsilon x+e^{-\frac{T^{\alpha}+\tau}{\alpha}x}}\le w\left(\ln\left(\frac{T^{\alpha}+\tau}{\alpha\epsilon}\right)\Big/\frac{T^{\alpha}+\tau}{\alpha}\right)=\frac{T^{\alpha}+\tau}{\alpha\epsilon\left(1+\ln\left(\frac{T^{\alpha}+\tau}{\alpha\epsilon}\right)\right)}.$$(17)

Then, we obtain the following estimate

$$\begin{aligned}\frac{e^{-\frac{t^{\alpha}+\tau}{\alpha}x}}{\epsilon x+e^{-\frac{T^{\alpha}+\tau}{\alpha}x}}&=\frac{e^{-\frac{t^{\alpha}+\tau}{\alpha}x}}{\left(\epsilon x+e^{-\frac{T^{\alpha}+\tau}{\alpha}x}\right)^{\frac{T^{\alpha}-t^{\alpha}}{T^{\alpha}+\tau}}\left(\epsilon x+e^{-\frac{T^{\alpha}+\tau}{\alpha}x}\right)^{\frac{t^{\alpha}+\tau}{T^{\alpha}+\tau}}}\\ &\le \frac{e^{-\frac{t^{\alpha}+\tau}{\alpha}x}}{e^{-\frac{t^{\alpha}+\tau}{\alpha}x}}\cdot\frac{1}{\left(\epsilon x+e^{-\frac{T^{\alpha}+\tau}{\alpha}x}\right)^{\frac{T^{\alpha}-t^{\alpha}}{T^{\alpha}+\tau}}}\\ &\le (\alpha\epsilon)^{\frac{t^{\alpha}-T^{\alpha}}{T^{\alpha}+\tau}}\left(\frac{T^{\alpha}+\tau}{1+\ln\left(\frac{T^{\alpha}+\tau}{\alpha\epsilon}\right)}\right)^{\frac{T^{\alpha}-t^{\alpha}}{T^{\alpha}+\tau}}.\end{aligned}$$

The proof is complete. □
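Although the proof is elementary, the inequality in Lemma 1.1 is easy to mis-transcribe, so a numerical spot check is reassuring. The sketch below evaluates both sides on a small grid of (*x*, *t*) values, using illustrative values of *T*, *α*, *τ* and *ε* (the requirement *ε* ∈ 𝔻 is enforced by an assertion):

```python
import math

# Numerical spot check of inequality (16) on a small grid (illustrative
# values; eps must lie in D = (0, (T^alpha + tau)/alpha)).
T, alpha, tau, eps = 1.0, 0.8, 0.1, 1e-2
c = (T ** alpha + tau) / alpha
assert 0.0 < eps < c  # eps is in D

def lhs(x, t):
    return math.exp(-(t ** alpha + tau) / alpha * x) / (eps * x + math.exp(-c * x))

def rhs(t):
    p = (T ** alpha - t ** alpha) / (T ** alpha + tau)
    # note (T^alpha + tau)/(alpha*eps) equals c/eps
    return (alpha * eps) ** (-p) * ((T ** alpha + tau) / (1.0 + math.log(c / eps))) ** p

for tt in (0.0, 0.25, 0.5, 0.75, 1.0):
    for x in (0.5, 1.0, 2.0, 5.0, 10.0):
        assert lhs(x, tt) <= rhs(tt) * (1 + 1e-12)
```

At *t* = *T* the exponent vanishes and the bound reduces to 1, matching the fact that the filtered quantity is bounded by the data itself at the final time.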

The rest of the paper is organized as follows. In Section 2, we study the well-posedness of Problem (12)–(14) and provide an error estimate between the solutions of the two problems. Section 3 provides a numerical example illustrating the efficiency of our method.
