A Maple implementation of the approximations (4.4) and (4.5) of the function *LW* (0, *-e*^{-t})

Euler_W0 := proc(x)

local knot, f_knot;

# a, b and n are assumed to be assigned beforehand; LambertW is Maple's built-in Lambert W function

knot := a;

f_knot := evalf(LambertW(0, -exp(-a)));

while (b-a)/n < x-knot do

knot := knot+(b-a)/n; f_knot := f_knot + (b-a)/n/(1+f_knot) - (b-a)/n

od;

RETURN( f_knot - f_knot/(1+f_knot)*(x-knot) )

end:
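Since the procedure needs the Lambert W function only for its starting value, it can be transcribed into plain Python for readers without Maple. The sketch below is our own transcription, not part of the original text: the helper `w0_neg_exp` (a name we introduce) recovers *W*(0, -e^{-t}) by bisecting the equivalent equation x - ln x = t on (0, 1], and `euler_w0` mirrors the Maple loop above.

```python
import math

def w0_neg_exp(t, tol=1e-12):
    """W(0, -e^{-t}) = -x1, where x1 in (0, 1] solves x - ln x = t.
    x - ln x is decreasing on (0, 1], so plain bisection works."""
    lo, hi = 1e-300, 1.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if mid - math.log(mid) > t:
            lo = mid          # f(mid) > t and f decreasing: root is to the right
        else:
            hi = mid
    return -0.5 * (lo + hi)

def euler_w0(x, a, b, n):
    """Euler-method approximation of W(0, -e^{-x}) on [a, b] with n steps,
    transcribing the Maple procedure above."""
    h = (b - a) / n
    knot = a
    f_knot = w0_neg_exp(a)
    while h < x - knot:
        knot += h
        # h/(1+f) - h equals h * (-f/(1+f)), the Euler increment
        f_knot += h / (1.0 + f_knot) - h
    # linear extrapolation from the last knot
    return f_knot - f_knot / (1.0 + f_knot) * (x - knot)
```

For example, `euler_w0(3.0, 1.5, 5.0, 100000)` should agree with `w0_neg_exp(3.0)` to roughly the accuracy predicted by Theorem 4.1.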

#### Proof of Lemma 3.1

It holds that

$$\frac{\text{d}}{\text{d}x}[\phantom{\rule{thinmathspace}{0ex}}x-\mathrm{ln}\phantom{\rule{thinmathspace}{0ex}}x\phantom{\rule{thinmathspace}{0ex}}]=1-\frac{1}{x}\left\{\begin{array}{ll}>0;\phantom{\rule{1em}{0ex}}& x\in (1,+\mathrm{\infty}),\\ <0;\phantom{\rule{1em}{0ex}}& x\in (0,1),\end{array}\right.$$

so that *x*-ln *x* attains its global minimum 1 at *x*= 1, and

$\underset{x\to +\mathrm{\infty}}{lim}\left[x-\mathrm{ln}x\right]=\underset{x\to {0}_{+}}{lim}\left[x-\mathrm{ln}x\right]=\mathrm{\infty}.$

It is evident that for all *t* ≥ 1 there exist solutions *x*_{1}(*t*) ∈ (0, 1] and *x*_{2}(*t*) ∈ [1, ∞) of the equation *x*-ln *x*= *t*. For the sake of simplicity, we will use the notation *x*_{1} and *x*_{2} instead of *x*_{1} (*t*) and *x*_{2} (*t*) below.

Moreover, the equation *x*-ln *x*= *t* is equivalent to the equation (*-x*)*e*^{-x} = *-e*^{-t}, so that, according to the definition of *LW*(*t*), we have *-x* = *LW*(*-e*^{-t}), cf. (3.8). Now it is sufficient to determine which branch of the multifunction *LW*(*t*) contains each solution. Since *x*_{1} ∈ (0, 1] and *x*_{2} ∈ [1, +∞), it holds that

${x}_{1}=-LW(0,-{\mathrm{e}}^{-t})\mathit{\hspace{1em}}\mathrm{and}\mathit{\hspace{1em}}{x}_{2}=-LW(-1,-{\mathrm{e}}^{-t})$

and the proof is complete. □
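The two solutions guaranteed by Lemma 3.1 are easy to exhibit numerically. The following sketch (the function name and the bisection approach are ours, assuming only t ≥ 1) finds x_{1} ∈ (0, 1] and x_{2} ∈ [1, ∞) and lets one verify that (-x_{i})e^{-x_{i}} = -e^{-t}:

```python
import math

def roots_of_x_minus_lnx(t, tol=1e-12):
    """Return the two solutions (x1, x2) of x - ln x = t for t >= 1:
    x1 in (0, 1] on the decreasing piece, x2 in [1, oo) on the increasing one."""
    # left root: x - ln x is decreasing on (0, 1]
    lo, hi = 1e-300, 1.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if mid - math.log(mid) > t:
            lo = mid
        else:
            hi = mid
    x1 = 0.5 * (lo + hi)
    # right root: x - ln x is increasing on [1, oo); f(2t+2) > t for t >= 1
    lo, hi = 1.0, 2.0 * t + 2.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if mid - math.log(mid) < t:
            lo = mid
        else:
            hi = mid
    x2 = 0.5 * (lo + hi)
    return x1, x2
```

Both roots satisfy x e^{-x} = e^{-t}, i.e. -x_{1} and -x_{2} are the two real values of the Lambert W multifunction at -e^{-t}.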

#### Proof of Lemma 3.2

The real function *F*(*a, b*) = *be*^{b}-a is continuously differentiable in both variables. From the properties of the Lambert W-function we know that, for any *a*_{0} ∈ (-e^{-1}, 0), the terms *b*_{01} = *LW* (0, *a*_{0}) and *b*_{02} = *LW* (-1, *a*_{0}) are real, and (*a*_{0}, *b*_{01}) and (*a*_{0}, *b*_{02}) are solutions of the equation *F*(*a, b*) = 0. Moreover, $\frac{\partial F}{\partial b}(a,b)={\mathrm{e}}^{b}+b{\mathrm{e}}^{b}$, so that $\frac{\partial F}{\partial b}({a}_{0},{b}_{01})\ne 0\ne \frac{\partial F}{\partial b}({a}_{0},{b}_{02})$ because *b*_{01} ≠ -1 ≠ *b*_{02} (⇔ *a*_{0} ≠ -e^{-1}).

Applying the implicit function theorem at points (*a*_{0}, *b*_{01}) and (*a*_{0}, *b*_{02}), *a*_{0} ∈ (-e^{-1}, 0), we find that there exist continuously differentiable functions *b*_{i}: (-e^{-1}, 0) → ℝ, *i* = 1, 2, satisfying the equality *F*(*a, b*_{i}(*a*)) = 0, *i* = 1, 2 for *a* ∈ (-e^{-1}, 0). In fact, *b*_{1}(*a*) = *LW* (0, *a*) and *b*_{2}(*a*) = *LW* (-1, *a*), i.e., the functions *LW* (0, *-e*^{-x}) and *LW* (-1, *-e*^{-x}) are continuously differentiable on the interval (1, +∞).

Differentiation of the equality (3.8) leads to

$\left[\frac{\text{d}}{\text{d}z}LW(k,z)\right]{\mathrm{e}}^{LW(k,z)}+LW(k,z){\mathrm{e}}^{LW(k,z)}\left[\frac{\text{d}}{\text{d}z}LW(k,z)\right]=1,$

from which

$\frac{\text{d}}{\text{d}z}LW(k,z)=\frac{1}{{\mathrm{e}}^{LW(k,z)}+\underbrace{LW(k,z)\phantom{\rule{thinmathspace}{0ex}}{\mathrm{e}}^{LW(k,z)}}_{z}}.$

Multiplying both sides by $\frac{-LW(k,z)}{LW(k,z)}$ we get

$-\frac{\text{d}}{\text{d}z}LW(k,z)=\frac{-LW(k,z)}{\underbrace{LW(k,z)\phantom{\rule{thinmathspace}{0ex}}{\mathrm{e}}^{LW(k,z)}}_{z}+z\phantom{\rule{thinmathspace}{0ex}}LW(k,z)},$

so that

$\left[\frac{\text{d}}{\text{d}z}LW(k,z)\right]\cdot (-z)=\frac{-LW(k,z)}{1+LW(k,z)}.$(7.1)

Substituting *z* = -e^{-x} and $-z={\mathrm{e}}^{-x}=\frac{\text{d}z}{\text{d}x}$ into (7.1) we finally obtain

$\left[\frac{\text{d}}{\text{d}z}LW(k,z)\right]\cdot \frac{\text{d}z}{\text{d}x}=\frac{-LW(k,-{\mathrm{e}}^{-x})}{1+LW(k,-{\mathrm{e}}^{-x})},$

with the left-hand side being $\frac{\text{d}}{\text{d}x}LW(k,-{\mathrm{e}}^{-x})$, which finishes the proof. □
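As a sanity check of the closed-form derivative, one can compare it with a central finite difference for the *k* = 0 branch. The helper below recomputes *W*(0, -e^{-t}) by bisection (our naming; this is an illustration, not the paper's code):

```python
import math

def w0_neg_exp(t, tol=1e-14):
    """W(0, -e^{-t}) = -x1, where x1 in (0, 1] solves x - ln x = t."""
    lo, hi = 1e-300, 1.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if mid - math.log(mid) > t else (lo, mid)
    return -0.5 * (lo + hi)

# central finite difference of x |-> W(0, -e^{-x}) at x = 3
x, h = 3.0, 1e-6
fd = (w0_neg_exp(x + h) - w0_neg_exp(x - h)) / (2.0 * h)

# the closed form -W/(1 + W) from Lemma 3.2
w = w0_neg_exp(x)
closed_form = -w / (1.0 + w)
```

The two values should agree to several significant digits, and both should be positive, consistent with the *k* = 0 branch being increasing in *x*.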

**Note 6.** By Lemmas 3.1 and 3.2 it is easy to derive the distribution function *F*_{Y}(*t*) and the density *f*_{Y}(*t*) of the random variable *Y* = *X*-ln (*X*), where *X* ∼ Exp(1) and *P*(*X* < *x*) = 1-e^{-x}. Indeed,

$${F}_{Y}(t)=P(X-\mathrm{ln}\phantom{\rule{thinmathspace}{0ex}}X<t)=P({x}_{1}<X<{x}_{2})$$

where *x*_{1} and *x*_{2} are the real numbers guaranteed by Lemma 3.1. We can therefore write

$$\begin{array}{rl}{F}_{Y}(t)& =P(-LW(0,-{\text{e}}^{-t})<X<-LW(-1,-{\text{e}}^{-t}))\\ & ={\text{e}}^{LW(0,-{\text{e}}^{-t})}-{\text{e}}^{LW(-1,-{\text{e}}^{-t})},\end{array}$$(7.2)

from which for all *t* ∈ [1, +∞)

$$\begin{array}{rl}{f}_{Y}(t)=\frac{\text{d}}{\text{d}t}{F}_{Y}(t)=& \frac{\text{d}}{\text{d}t}\left[{\text{e}}^{LW(0,-{\text{e}}^{-t})}-{\text{e}}^{LW(-1,-{\text{e}}^{-t})}\right]\\ =& \frac{LW(-1,-{\text{e}}^{-t})}{1+LW(-1,-{\text{e}}^{-t})}\phantom{\rule{thinmathspace}{0ex}}{\text{e}}^{LW(-1,-{\text{e}}^{-t})}\\ & -\frac{LW(0,-{\text{e}}^{-t})}{1+LW(0,-{\text{e}}^{-t})}\phantom{\rule{thinmathspace}{0ex}}{\text{e}}^{LW(0,-{\text{e}}^{-t})},\end{array}$$

which is exactly expression (3.7).
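Formula (7.2) can be cross-checked by simulation. In the sketch below (our own code; `root` and `F_Y` are names we introduce), both Lambert branches are evaluated through the two roots of x - ln x = t, and the resulting e^{W(0, ·)} - e^{W(-1, ·)} is compared with the empirical frequency of the event X - ln X < t for X ∼ Exp(1):

```python
import math
import random

def root(t, lo, hi, increasing, tol=1e-12):
    """Bisect x - ln x = t on [lo, hi], where the function is monotone."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if (mid - math.log(mid) < t) == increasing:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def F_Y(t):
    """Distribution function of Y = X - ln X, X ~ Exp(1), via (7.2):
    F_Y(t) = e^{W(0,-e^{-t})} - e^{W(-1,-e^{-t})} = e^{-x1} - e^{-x2}."""
    x1 = root(t, 1e-300, 1.0, increasing=False)
    x2 = root(t, 1.0, 2.0 * t + 2.0, increasing=True)
    return math.exp(-x1) - math.exp(-x2)

# Monte Carlo estimate of P(X - ln X < 2)
random.seed(0)
N = 200_000
hits = 0
for _ in range(N):
    x = random.expovariate(1.0)
    if x - math.log(x) < 2.0:
        hits += 1
empirical = hits / N
```

With 200 000 samples the empirical frequency should match `F_Y(2.0)` to about three decimal places.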

#### Proof of Lemma 4.1

a) We use mathematical induction to show that *y*(*t*_{i}) < 0 for all *i* = 0, 1,…,*n*, and *y*(.) is increasing on the intervals [*t*_{i}, *t*_{i+1}]. As *y*(.) is piecewise linear, the assertion will follow.

1° Let *i* = 0, then *y*(*t*_{0}) = *LW* (0, *-e*^{-a}) < 0 and for all *t* ∈ [*t*_{0}, *t*_{1}]

$\frac{\text{d}}{\text{d}t}y(t)=\frac{-LW(0,-{\mathrm{e}}^{-a})}{1+LW(0,-{\mathrm{e}}^{-a})}>0,$

i.e., *y*(.) is increasing on the interval [*t*_{0}, *t*_{1}].

2° Let *i* > 0 and $h=\frac{b-a}{n}$. Then $y({t}_{i})=y({t}_{i-1})+\frac{-y({t}_{i-1})}{1+y({t}_{i-1})}\cdot h$ and it is evident that *y* (*t*_{i}) < 0 if 1+*y*(*t*_{i-1}) > *h*. According to the induction assumption, *y*(*t*_{i-1}) ≥ *y*(*t*_{0}) > -1, so that it is sufficient to take

$n>\frac{b-a}{1+y({t}_{0})}=\frac{b-a}{1+u({t}_{0})}.$

According to the induction assumption, *y*(*t*_{i}) ≥ *y*(*t*_{0}) > -1. Moreover, $y({t}_{i+1})=y({t}_{i})+\frac{-y({t}_{i})}{1+y({t}_{i})}\cdot h$ and we have just shown that *y*(*t*_{i}) < 0, from which *y*(*t*_{i+1}) > *y*(*t*_{i}). Therefore, *y*(.) is increasing on the interval [*t*_{i}, *t*_{i+1}].

b) This proof also uses mathematical induction. We will show that for all *t* ∈ (*t*_{i}, *t*_{i+1}], *i* = 0, 1, …, *n*-1, it holds that *y*(*t*) > *u*(*t*).

1° Let *i* = 0. Then, from the definition of *y*(*t*), we have *y*(*t*_{0}) = *u*(*t*_{0}) and ${\frac{\text{d}y(t)}{\text{d}t}|}_{t={t}_{0}}=\frac{-u({t}_{0})}{1+u({t}_{0})}={\frac{\text{d}u(t)}{\text{d}t}|}_{t={t}_{0}}$. From the strict concavity of *u*(.) and linearity of *y*(.) on the interval [*t*_{0}, *t*_{1}] we get for all *t* ∈(*t*_{0}, *t*_{1}] that *y*(*t*) > *u*(*t*).

2° Let *i* > 0 and assume that *y*(*t*) ≤ *u*(*t*) for a certain *t* ∈ (*t*_{i}, *t*_{i+1}]. Then, from the induction assumption, we have *y*(*t*_{i}) > *u*(*t*_{i}). From the continuity of *y*(.) and *u*(.) and from the nonlinearity of *u*(.) there exists *T* ∈ (*t*_{i}, *t*_{i+1}], such that (*T, u*(*T*)) = (*T, y*(*T*)), i.e., the graphs of the functions *u*(.) and *y*(.) intersect and

$\frac{-y({t}_{i})}{1+y({t}_{i})}={\frac{\text{d}y(t)}{\text{d}t}|}_{t=T}\le {\frac{\text{d}u(t)}{\text{d}t}|}_{t=T}.$(7.3)

Note that *y*(.) is increasing, *u*(*T*) = *y*(*T*), and the induction assumption implies *u*(*t*_{i}) < *y*(*t*_{i}). These facts yield that there exists *T*_{0} ∈ (*t*_{i}, T) such that *u*(*T*_{0}) = *y*(*t*_{i}). Further,

$\frac{-y({t}_{i})}{1+y({t}_{i})}=\frac{-u({T}_{0})}{1+u({T}_{0})}={\frac{\text{d}u(t)}{\text{d}t}|}_{t={T}_{0}}$

and together with (7.3) we obtain

${\frac{\text{d}u(t)}{\text{d}t}|}_{t={T}_{0}}\le {\frac{\text{d}u(t)}{\text{d}t}|}_{t=T},$

which contradicts the strict decrease of the first derivative, because $\frac{{\text{d}}^{2}}{\text{d}{t}^{2}}u(t)<0$ on [*a, b*]. □

#### Proof of Lemma 4.2

Since *u*(.) is increasing, it is sufficient to take *x* > *y* and prove the inequality without the absolute value. Lagrange’s mean value theorem assures that there exists *t*_{0} ∈ [*y, x*] such that

$u(x)-u(y)={\frac{\text{d}u(t)}{\text{d}t}|}_{t={t}_{0}}\cdot (x-y).$(7.4)

Since $\frac{{\text{d}}^{2}}{\text{d}{t}^{2}}u(t)<0$ for all *t* ∈ [*a, b*], we have

${\frac{\text{d}u(t)}{\text{d}t}|}_{t=a}>{\frac{\text{d}u(t)}{\text{d}t}|}_{t={t}_{0}},$(7.5)

so that, using Lemma 3.2, we get

${\frac{\text{d}u(t)}{\text{d}t}|}_{t=a}=\frac{-u(a)}{1+u(a)}=c.$(7.6)

Finally, from (7.4), (7.5) and (7.6), we get the assertion of Lemma 4.2. □
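The Lipschitz bound of Lemma 4.2, with the constant *c* from (7.6), can be spot-checked numerically for u(t) = *LW*(0, -e^{-t}); the sketch below (our naming; the interval [a, b] = [1.5, 5] is an arbitrary choice with a > 1) tests random pairs of points:

```python
import math
import random

def w0_neg_exp(t, tol=1e-13):
    """W(0, -e^{-t}) = -x1, where x1 in (0, 1] solves x - ln x = t."""
    lo, hi = 1e-300, 1.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if mid - math.log(mid) > t else (lo, mid)
    return -0.5 * (lo + hi)

a, b = 1.5, 5.0
u_a = w0_neg_exp(a)
c = -u_a / (1.0 + u_a)        # the Lipschitz constant from (7.6)

# |u(x) - u(y)| <= c |x - y| on random pairs (small slack for bisection noise)
random.seed(1)
ok = all(
    abs(w0_neg_exp(x) - w0_neg_exp(y)) <= c * abs(x - y) + 1e-9
    for x, y in ((random.uniform(a, b), random.uniform(a, b)) for _ in range(100))
)
```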

#### Proof of Theorem 4.1

Let us denote by *d*_{i} the difference between *y*(.) and *u*(.) at the grid points (knots), i.e., *d*_{i} = *y* (*t*_{i}) - *u* (*t*_{i}). Using Lagrange’s mean value theorem, we have, for *i* ≥ 0,

$$\begin{array}{rl}{d}_{i+1}& =y({t}_{i+1})-u({t}_{i+1})\\ & =y({t}_{i})+\frac{-y({t}_{i})}{1+y({t}_{i})}\phantom{\rule{thinmathspace}{0ex}}h-\left(u({t}_{i})+{\frac{\text{d}u(t)}{\text{d}t}|}_{t=T}\cdot h\right)\\ & ={d}_{i}+\left(\frac{u(T)}{1+u(T)}-\frac{y({t}_{i})}{1+y({t}_{i})}\right)h\\ & ={d}_{i}+\frac{u(T)-y({t}_{i})}{(1+y({t}_{i}))\phantom{\rule{thinmathspace}{0ex}}(1+u(T))}\phantom{\rule{thinmathspace}{0ex}}h,\phantom{\rule{2em}{0ex}}T\in [{t}_{i},{t}_{i+1}].\end{array}$$(7.7)

For *i* = 0 we use the facts that *y*(*t*_{0}) = *u*(*t*_{0}), *d*_{0} = 0, *u*(.) is increasing, and *u*(*t*_{0}) > -1; hence

$${d}_{1}=\frac{u(T)-u({t}_{0})}{(1+u({t}_{0}))\phantom{\rule{thinmathspace}{0ex}}\left(1+u(T)\right)}\phantom{\rule{thinmathspace}{0ex}}h<\frac{u(T)-u({t}_{0})}{(1+u({t}_{0}){)}^{2}}\phantom{\rule{thinmathspace}{0ex}}h.$$(7.8)

Now, again using Lagrange’s mean value theorem, we get

$$\begin{array}{rl}{d}_{1}& <\frac{\frac{\text{d}}{\text{d}s}u(s)\phantom{\rule{thinmathspace}{0ex}}(T-{t}_{0})}{(1+u({t}_{0}){)}^{2}}\phantom{\rule{thinmathspace}{0ex}}h=\frac{-\frac{u(s)}{1+u(s)}\phantom{\rule{thinmathspace}{0ex}}(T-{t}_{0})}{(1+u({t}_{0}){)}^{2}}h\\ & \le \frac{-u({t}_{0})}{(1+u({t}_{0}){)}^{3}}\phantom{\rule{thinmathspace}{0ex}}{h}^{2}=\frac{-u({t}_{0})}{(1+u({t}_{0}){)}^{3}}\phantom{\rule{thinmathspace}{0ex}}{\left(\frac{b-a}{n}\right)}^{2},\phantom{\rule{2em}{0ex}}s\in [{t}_{0},T].\end{array}$$

Let 0 < *i* < *n* and *u*(*T*) - *y*(*t*_{i}) > 0. Then, using the fact that both functions *u*(.) and *y*(.) are increasing, *y*(*t*) ≥ *u*(*t*) > -1 for all *t* ∈ [*a, b*] (this follows from Lemma 4.1) and formula (7.7), we obtain that

$$\begin{array}{rl}{d}_{i+1}& \le {d}_{i}+\frac{u(T)-y({t}_{i})}{(1+y({t}_{0}))\phantom{\rule{thinmathspace}{0ex}}(1+u({t}_{0}))}\phantom{\rule{thinmathspace}{0ex}}h\\ & ={d}_{i}+\frac{u(T)-y({t}_{i})}{(1+u({t}_{0}){)}^{2}}\phantom{\rule{thinmathspace}{0ex}}h<{d}_{i}+\frac{u({t}_{i+1})-u({t}_{i})}{(1+u({t}_{0}){)}^{2}}\phantom{\rule{thinmathspace}{0ex}}h\end{array}$$

and by the inequality

$$u({t}_{i+1})-u({t}_{i})<c({t}_{i+1}-{t}_{i})=c\cdot h,$$

which follows directly from Lemma 4.2, we have

$${d}_{i+1}<{d}_{i}+B\phantom{\rule{thinmathspace}{0ex}}{h}^{2},$$(7.9)

where $B=\frac{c}{{\left(1+u({t}_{0})\right)}^{2}}$ is a positive constant. If 0 < *i* < *n* and *u*(*T*) - *y*(*t*_{i}) ≤ 0, then we obtain (7.9) immediately, since by (7.7): *d*_{i+1} ≤ *d*_{i} < *d*_{i} + *B h*^{2}.

An iterative use of (7.9) leads to

$$\begin{array}{rl}{d}_{i+1}& <[{d}_{i-1}+B\phantom{\rule{thinmathspace}{0ex}}{h}^{2}]+B\phantom{\rule{thinmathspace}{0ex}}{h}^{2}<\cdots <{d}_{1}+iB{h}^{2}<{d}_{1}+nB{h}^{2}\\ & ={d}_{1}+nB{\left(\frac{b-a}{n}\right)}^{2}={d}_{1}+B\frac{(b-a{)}^{2}}{n}\end{array}$$(7.10)

From (7.8) it follows that ${lim}_{n\to +\mathrm{\infty}}{d}_{1}=0$, which ensures that the last term in (7.10) converges to 0 for *n*→ + ∞. Since it does not depend on *i*, we have

$$\mathrm{\forall}\epsilon >0\phantom{\rule{thinmathspace}{0ex}}\mathrm{\exists}{n}_{0}\phantom{\rule{thinmathspace}{0ex}}\mathrm{\forall}n>{n}_{0}\phantom{\rule{thinmathspace}{0ex}}\mathrm{\forall}i\in \{0,1,\dots ,n\}\phantom{\rule{1em}{0ex}}\text{it holds}\phantom{\rule{1em}{0ex}}{d}_{i}<\epsilon .$$(7.11)

The function *y*(.) is linear on each interval [*t*_{i}, *t*_{i+1}], *i* = 0, 1,…, *n*-1, and *u*(.) is concave. Therefore, the function *y*(*t*) - *u*(*t*) is convex on each [*t*_{i}, *t*_{i+1}], and since it is continuous, it can be shown easily that its supremum is attained at one of the grid points *t*_{i} or *t*_{i+1}. Lemma 4.1 ensures that *y*(*t*) ≥ *u*(*t*) for all *t* ∈ [*a, b*]. It follows that

$$\begin{array}{rl}\underset{t\in [a,b]}{sup}\left|y(t)-u(t)\right|& =\underset{t\in [a,b]}{max}\left(y(t)-u(t)\right)=\underset{i\in \{0,1,\dots ,n-1\}}{max}[\underset{t\in [{t}_{i},{t}_{i+1}]}{max}\left(y(t)-u(t)\right)]\\ & =\underset{i\in \{0,1,\dots ,n-1\}}{max}\left[max\{{d}_{i},{d}_{i+1}\}\right]=\underset{i\in \{0,1,\dots ,n\}}{max}{d}_{i}\end{array}$$

Finally, using (7.11) we get

$$\mathrm{\forall}\epsilon >0\phantom{\rule{thinmathspace}{0ex}}\mathrm{\exists}{n}_{0}\phantom{\rule{thinmathspace}{0ex}}\mathrm{\forall}n>{n}_{0}\phantom{\rule{1em}{0ex}}\underset{t\in [a,b]}{sup}\left|y(t)-u(t)\right|<\epsilon ,$$

which proves the uniform convergence. □
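The uniform convergence of Theorem 4.1, and the O(1/*n*) behaviour suggested by (7.10), can be observed numerically. The sketch below is our own illustration ([a, b] = [1.5, 5] is an arbitrary choice with a > 1, so that 1 + u(t_{0}) > 0); it measures the worst error of the Euler approximation over a sample grid for two values of *n*:

```python
import math

def w0_neg_exp(t, tol=1e-13):
    """W(0, -e^{-t}) = -x1, where x1 in (0, 1] solves x - ln x = t."""
    lo, hi = 1e-300, 1.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if mid - math.log(mid) > t else (lo, mid)
    return -0.5 * (lo + hi)

def euler_w0(x, a, b, n):
    """Euler approximation of W(0, -e^{-x}) on [a, b] with n steps."""
    h = (b - a) / n
    knot, f = a, w0_neg_exp(a)
    while h < x - knot:
        knot += h
        f += h / (1.0 + f) - h
    return f - f / (1.0 + f) * (x - knot)

def max_error(a, b, n, samples=50):
    """Worst deviation from the bisection reference over an evaluation grid."""
    xs = [a + (b - a) * k / samples for k in range(samples + 1)]
    return max(abs(euler_w0(x, a, b, n) - w0_neg_exp(x)) for x in xs)

a, b = 1.5, 5.0
errs = [max_error(a, b, n) for n in (500, 5000)]
```

Multiplying *n* by 10 should shrink the worst error by roughly the same factor, consistent with a first-order method.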

#### Proof of Lemma 4.3

a) The proof follows the approach used in the proof of Lemma 4.1 a). In step 1° it is sufficient to use the fact that

$$\frac{-LW(-1,-{\text{e}}^{-a})}{1+LW(-1,-{\text{e}}^{-a})}<0,$$

and in step 2° the fact that

$$\frac{-y({t}_{i})}{1+y({t}_{i})}<0.$$

b) We proceed analogously to the proof of Lemma 4.1 b). In step 1°, the only difference is that we use the strict convexity of *u*(.). In step 2° we can show easily that ${\frac{\text{d}u(t)}{\text{d}t}|}_{t={T}_{0}}\ge {\frac{\text{d}u(t)}{\text{d}t}|}_{t=T}$, which contradicts the strict increase of the first derivative, because $\frac{{\text{d}}^{2}}{\text{d}{t}^{2}}u(t)>0$ for all *t* ∈ [*a, b*]. □

#### Proof of Lemma 4.4

Since *u*(.) is decreasing, it is enough to take *x* > *y* and prove that *u*(*y*)-*u*(*x*) < *c*(*x-y*). Similarly as in the proof of Lemma 4.2 we get

$\text{there exists}\phantom{\rule{thinmathspace}{0ex}}{t}_{0}\in [y,x]:\phantom{\rule{thinmathspace}{0ex}}u(y)-u(x)={\frac{\text{d}u(t)}{\text{d}t}|}_{t={t}_{0}}\cdot (y-x).$(7.12)

Since $\frac{{\text{d}}^{2}}{\text{d}{t}^{2}}u(t)>0$ for all *t* ∈ [*a, b*], it is true that

$$\frac{\text{d}u(t)}{\text{d}t}{|}_{t=a}<\frac{\text{d}u(t)}{\text{d}t}{|}_{t={t}_{0}}.$$(7.13)

Moreover, according to Lemma 3.2 we have

$\frac{\text{d}u(t)}{\text{d}t}{|}_{t=a}=\frac{-u(a)}{1+u(a)}=:-c.$(7.14)

Substituting (7.13) and (7.14) into (7.12), we finally get

$$u(y)-u(x)=-{\frac{\text{d}u(t)}{\text{d}t}|}_{t={t}_{0}}\cdot (x-y)<-{\frac{\text{d}u(t)}{\text{d}t}|}_{t=a}\cdot (x-y)=c\phantom{\rule{thinmathspace}{0ex}}(x-y),$$

which completes the proof. □

#### Proof of Theorem 4.2

In this proof we again denote by *d*_{i} the difference between *u*(.) and *y*(.) at the grid points (knots); now, however, *d*_{i} = *u*(*t*_{i})-*y*(*t*_{i}). Then analogously to the proof of Theorem 4.1 we have

${d}_{i+1}={d}_{i}+\frac{y({t}_{i})-u(T)}{\left(1+y({t}_{i})\right)\left(1+u(T)\right)}h,T\in [{t}_{i},{t}_{i+1}],i\ge 0.$(7.15)

If *i* = 0, then *y*(*t*_{0}) = *u*(*t*_{0}) and *d*_{0} = 0. Further, *u*(.) is decreasing and *u*(*t*) < -1 for all *t* ∈ [*a, b*]. Similarly to the proof of Theorem 4.1 we have

$${d}_{1}<\frac{u({t}_{0})}{(1+u({t}_{0}){)}^{3}}\phantom{\rule{thinmathspace}{0ex}}{\left(\frac{b-a}{n}\right)}^{2}.$$(7.16)

Let 0 < *i* < *n* and *y*(*t*_{i}) - *u*(*T*) > 0. Then, using the fact that both functions *u*(.) and *y*(.) are decreasing, *y*(*t*) < *u*(*t*) < -1 for all *t* ∈ [*a, b*] (see Lemma 4.3), and (7.15), similarly to the proof of Theorem 4.1 we obtain the estimate

${d}_{i+1}\le {d}_{i}+\frac{y({t}_{i})-u({t}_{i+1})}{{\left(1+u({t}_{0})\right)}^{2}}h.$

Finally, we use the inequality *u*(*t*_{i}) - *u*(*t*_{i+1}) < *c*(*t*_{i+1} - *t*_{i}) = *c* ⋅ *h*, which follows directly from Lemma 4.4, to get

$${d}_{i+1}<{d}_{i}+B\phantom{\rule{thinmathspace}{0ex}}{h}^{2},$$

where $B=\frac{c}{{\left(1+u({t}_{0})\right)}^{2}}$ is a positive constant. Further, we follow the same approach as in the proof of Theorem 4.1. □
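For the branch *k* = -1 treated by Theorem 4.2, the analogous Euler procedure differs only in its starting value. The sketch below is our own pure-Python illustration (the helper names are ours); the initial value comes from the root x_{2} ≥ 1 of x - ln x = a:

```python
import math

def wm1_neg_exp(t, tol=1e-13):
    """W(-1, -e^{-t}) = -x2, where x2 >= 1 solves x - ln x = t
    (x - ln x is increasing on [1, oo), and f(2t+2) > t for t >= 1)."""
    lo, hi = 1.0, 2.0 * t + 2.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if mid - math.log(mid) < t else (lo, mid)
    return -0.5 * (lo + hi)

def euler_wm1(x, a, b, n):
    """Euler approximation of W(-1, -e^{-x}) on [a, b] with n steps.
    The recurrence is identical to the k = 0 case (Lemma 3.2 gives the
    same derivative formula -u/(1+u) for both branches); only the
    initial value changes."""
    h = (b - a) / n
    knot, f = a, wm1_neg_exp(a)
    while h < x - knot:
        knot += h
        f += h / (1.0 + f) - h
    return f - f / (1.0 + f) * (x - knot)
```

Here the iterates stay below -1 and decrease, mirroring Lemma 4.3.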
