
# Mathematica Slovaca

Editor-in-Chief: Pulmannová, Sylvia

6 Issues per year

IMPACT FACTOR 2017: 0.314
5-year IMPACT FACTOR: 0.462

CiteScore 2017: 0.46

SCImago Journal Rank (SJR) 2017: 0.339
Source Normalized Impact per Paper (SNIP) 2017: 0.845

Mathematical Citation Quotient (MCQ) 2017: 0.26

Online ISSN: 1337-2211

Volume 68, Issue 5

# Approximation of Information Divergences for Statistical Learning with Applications

Milan Stehlík
• Institute of Applied Statistics and Linz Institute of Technology, Johannes Kepler University, Altenberger Strasse 69, A-4040 Linz, Austria; Institute of Statistics, University of Valparaíso, Gran Bretana 1111, Valparaíso, Chile

/ Ján Somorčík
• Department of Applied Mathematics and Statistics, Comenius University, Mlynská Dolina, SK-842 48 Bratislava, Slovakia

/ Luboš Střelec
• Department of Statistics and Operation Analysis, Mendel University in Brno, Zemědělská 1, CZ-613 00 Brno, Czech Republic

/ Jaromír Antoch
• Department of Probability and Mathematical Statistics, Charles University, Sokolovská 83, CZ-186 75 Praha, Czech Republic
Published Online: 2018-10-20 | DOI: https://doi.org/10.1515/ms-2017-0177

## Abstract

In this paper we give a partial response to one of the most important statistical questions, namely, what optimal statistical decisions are and how they are related to (statistical) information theory. We exemplify the necessity of understanding the structure of information divergences and their approximations, which may in particular be understood through deconvolution. Deconvolution of information divergences is illustrated in the exponential family of distributions, leading to the optimal tests in the Bahadur sense. We provide a new approximation of I-divergences using the Fourier transformation, saddle point approximation, and uniform convergence of the Euler polygons. Uniform approximation of deconvoluted parts of I-divergences is also discussed. Our approach is illustrated on a real data example.

MSC 2010: 62E17; 62F03; 65L20; 33E30

We would like to extend our gratitude for the support from Fondecyt Proyecto Regular No. 1151441 and LIT-2016-1-SEE-023. This work was also supported by Grants P403/15/09663S and GA16-07089S of the Czech Science Foundation, and grant VEGA No. 2/0047/15. Support from the BELSPO IAP P7/06 StUDyS network is also prominently acknowledged. The authors are very grateful to the Editor, Associate Editor and anonymous referees for their valuable comments and extremely careful reading.

## 1 Introduction

It is well known that one of the most important statistical applications of information theory is testing of statistical hypotheses, and that deconvolution of information divergences can lead to the optimal statistical inference. We illustrate this fact by the deconvolution of information divergence in the exponential family (see [12] for details), which results in tests optimal in the Bahadur sense.

Let us consider a statistical model with N independent observations y1, …, yN, which are distributed according to the gamma densities

$f(y_i\mid\boldsymbol{\vartheta})=\begin{cases}\dfrac{{\gamma_i(\boldsymbol{\vartheta})}^{v_i}}{\Gamma(v_i)}\,y_i^{v_i-1}\exp\{-\gamma_i(\boldsymbol{\vartheta})\,y_i\}, & y_i>0,\\ 0, & y_i\le 0,\end{cases}$(1.1)

where $\Gamma(t)=\int_0^{\infty}x^{t-1}\mathrm{e}^{-x}\,\mathrm{d}x$ denotes the Gamma function, ϑ = (ϑ1, …, ϑp)T ∈ Θ is a vector of unknown scale parameters, which are the parameters of interest, and v = (v1, …, vN)T is a vector of known shape parameters. The parameter space Θ is an open subset of ℝp, γi ∈ C2(Θ), and the matrix of first-order derivatives of the mapping γ = (γ1, …, γN)T has full rank on Θ. This type of model is motivated, e.g., by modeling the time intervals between N+1 successive random events in a Poisson process and testing its homogeneity; see [1] for details.

For v1 = v2 = … = vN =: v the model (1.1) belongs to a regular exponential family of N-variate densities

$\exp\{-\psi(\boldsymbol{y})+t^{T}(\boldsymbol{y})\,\boldsymbol{\gamma}-\kappa(\boldsymbol{\gamma})\},$(1.2)

where Y = (Y1, …, YN)T, $\psi(\boldsymbol{y})=(1-v)\sum_{i=1}^{N}\ln(y_i)$, $\kappa(\boldsymbol{\gamma})=N\ln(\Gamma(v))-v\sum_{i=1}^{N}\ln(\gamma_i)$, and t(y) = −y is the sufficient statistic for the canonical parameter γ ∈ 𝒢. The “covering property” (see [5] for details)

$\{t(\boldsymbol{y}):\boldsymbol{y}\in\mathsf{Y}\subseteq\mathbb{R}^{N}\}\subseteq\{E_{\boldsymbol{\gamma}}\,t(\boldsymbol{y}):\boldsymbol{\gamma}\in\mathcal{G}\},$

together with the relation

$E_{\boldsymbol{\gamma}}\,t(\boldsymbol{y})=\frac{\partial\kappa(\boldsymbol{\gamma})}{\partial\boldsymbol{\gamma}},$

enable us to associate with each value of t(y) a value $\hat{\boldsymbol{\gamma}}_{\boldsymbol{y}}\in\mathcal{G}$, which satisfies

$\left.\frac{\partial\kappa(\boldsymbol{\gamma})}{\partial\boldsymbol{\gamma}}\right|_{\boldsymbol{\gamma}=\hat{\boldsymbol{\gamma}}_{\boldsymbol{y}}}=t(\boldsymbol{y}).$(1.3)

It follows from (1.3) that $\hat{\boldsymbol{\gamma}}_{\boldsymbol{y}}$ is the MLE of the canonical parameter γ in the family (1.2). This allows us to define the I-divergence of the observed vector y in the sense of [5] as

$I_N(\boldsymbol{y},\boldsymbol{\gamma})=I(\hat{\boldsymbol{\gamma}}_{\boldsymbol{y}},\boldsymbol{\gamma}).$

Recall that I (γ⋆, γ) is the Kullback-Leibler divergence between the densities with parameters γ⋆ and γ. The I-divergence has many nice geometrical properties. For our purposes, let us just mention the Pythagorean relation, i.e., for all $\mathbit{\gamma },\overline{\mathbit{\gamma }},{\mathbit{\gamma }}^{\star }\in \mathrm{int}\left(\mathcal{G}\right)$ such that $\left({E}_{\overline{\mathbit{\gamma }}}t\left(\mathbit{y}\right)-{E}_{{\mathbit{\gamma }}^{\mathbf{\star }}}t\left(\mathbit{y}\right){\right)}^{T}\left({\mathbit{\gamma }}^{\star }-\mathbit{\gamma }\right)=0$, it holds that

$I(\bar{\boldsymbol{\gamma}},\boldsymbol{\gamma})=I(\bar{\boldsymbol{\gamma}},\boldsymbol{\gamma}^{\star})+I(\boldsymbol{\gamma}^{\star},\boldsymbol{\gamma}),$

where int(𝒢) denotes the interior of the set 𝒢. Recall that the Pythagorean relation can be used for constructing the density of the MLE in a regular exponential family, see [6] for details.

The use of I-divergence(s) has nice statistical consequences. Let us consider, e.g., the test statistic λ1 for the likelihood ratio test (LRT) of the hypothesis H0: γ = γ0 against H1: γ ≠ γ0, and the test statistic λ2 for the LRT of the homogeneity hypothesis $\tilde{H}_0$: γ1 = … = γN in the family of densities (1.1). Then we have the following interesting relation for every vector of canonical parameters $\boldsymbol{\gamma}_0=(\gamma_0,\dots,\gamma_0)\in\mathcal{G}$,

$I_N(\boldsymbol{y},\boldsymbol{\gamma}_0)=-\ln\lambda_1+\left[-\ln\lambda_2\mid\gamma_1=\dots=\gamma_N\right],$(1.4)

where the variables −ln λ1 and −ln λ2|γ1 = … = γN (the latter being the test statistic −ln λ2 under $\tilde{H}_0$) are independent. Notice that the deconvolution (1.4) of IN is a consequence of Theorem 4 in [11]. Both tests are asymptotically optimal in the Bahadur sense ([8, 9]). For details on homogeneity testing for selected members of the exponential family see, e.g., papers [14] and [15]. A general deconvolution of IN is explained, using geometric integration, in [13].
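The identity (1.4) can be checked numerically in the simplest case v = 1 (exponential observations). The following sketch (an illustration, not the authors' code) uses the explicit forms of −ln λ1, cf. (5.3), and of −ln λ2 under homogeneity, both obtained by standard LRT algebra:

```python
import math
import random

random.seed(1)
N, gamma0 = 5, 2.0
y = [random.expovariate(gamma0) for _ in range(N)]
S = sum(y)

# I_N(y, gamma_0): sum of per-observation divergences I_1 for Exp(gamma), v = 1
I_N = sum(gamma0 * yi - math.log(gamma0 * yi) - 1 for yi in y)

# -ln lambda_1: LRT statistic for H0: gamma = gamma_0 (cf. (5.3))
neg_ln_l1 = gamma0 * S - N * math.log(gamma0 * S) - N + N * math.log(N)

# -ln lambda_2 under gamma_1 = ... = gamma_N: homogeneity LRT statistic
neg_ln_l2 = -sum(math.log(yi) for yi in y) + N * math.log(S) - N * math.log(N)

# deconvolution (1.4): the two parts add up to I_N exactly
assert abs(I_N - (neg_ln_l1 + neg_ln_l2)) < 1e-10
assert neg_ln_l1 >= 0 and neg_ln_l2 >= 0
```

The equality here is exact (an algebraic identity), not merely asymptotic, which is what makes the deconvolution statistically useful.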

This paper is organized as follows. In Section 2 we study the approximation of the density of I1 by means of the Fourier transformation. In Section 3 we derive the corresponding saddle point approximation. In Section 4 we derive an approximation of I1 by numerical methods applicable to differential equations. Section 5, about likelihood ratio tests, follows. Finally, Section 6 illustrates our approach on a simple example based on a real problem and data. A concise discussion concludes the paper.

## 2 Approximation of the density of I1 by Fourier transformation

Suppose $X\sim \mathrm{Exp}\left(\gamma \right)$, i.e., X is exponentially distributed. The basic building block of the I-divergence IN is the random variable I1(X, γ). For γ = 1 it can be shown easily that I1(X, 1) = X − ln(X) − 1. Before studying the properties of I1 and IN, we will focus on the random variable Y = X − ln(X). To find the exact cumulative distribution function of Y, we need the Lambert W-function, studied in detail in [11], for example. Here we outline an approach enabling us to use the Fourier transformation for approximating the density fY of the random variable Y.

Before deriving the characteristic function of Y, we will start with an approximation of the characteristic function ${\phi }_{X}\left(t\right)=E{\mathrm{e}}^{\text{i}tX}$ of $X\sim \mathrm{Exp}\left(1\right)$. Suppose that we can use an expansion ${\phi }_{X}\left(t\right)=\sum _{n=0}^{\mathrm{\infty }}{a}_{n}{t}^{n}$. Then we can approximate the density ${f}_{X}\left(x\right)$ by $\frac{1}{2\pi }\underset{-\mathrm{\infty }}{\overset{+\mathrm{\infty }}{\int }}{\mathrm{e}}^{-\text{i}tx}\left(\sum _{n=0}^{{n}_{0}}{a}_{n}{t}^{n}\right)\text{d}t$.

We must distinguish between two basic cases, i.e.,

$\phi_X(t)=\sum_{n=0}^{\infty}(\mathrm{i}t)^n \quad\text{for } |t|<1, \qquad \phi_X(t)=\sum_{n=1}^{\infty}\frac{-1}{(\mathrm{i}t)^n} \quad\text{for } |t|>1.$(2.1)

Let us take only finitely many terms from the series φX(t), say n0, which is even, and approximate the right-hand side of the so-called “backward transformation”

${f}_{X}\left(x\right)=\frac{1}{2\pi }\underset{-\mathrm{\infty }}{\overset{+\mathrm{\infty }}{\int }}{\mathrm{e}}^{-\text{i}tx}{\phi }_{X}\left(t\right)\text{d}t.$(2.2)

By straightforward algebra for all t ∈ (-1, 1) we get an approximation

$\frac{1}{2\pi }\underset{-1}{\overset{1}{\int }}\left[{\mathrm{e}}^{-\text{i}tx}\sum _{n=0}^{{n}_{0}-1}{\left(\text{i}t\right)}^{n}\right]\text{d}t=\frac{1}{\pi }\sum _{n=0}^{\frac{{n}_{0}}{2}-1}{\left(-1\right)}^{n}\underset{0}{\overset{1}{\int }}\left[{t}^{2n}\mathrm{cos}\left(tx\right)+{t}^{2n+1}\mathrm{sin}\left(tx\right)\right]\text{d}t$(2.3)

and analogously for all t ∈ (-∞, -1) ∪ (1, +∞) we get

$\frac{1}{2\pi }{\int }_{\text{R}\setminus \left(-1,1\right)}\left[{\mathrm{e}}^{-\text{i}tx}\sum _{n=1}^{{n}_{0}}-\frac{1}{{\left(\text{i}t\right)}^{n}}\right]\text{d}t=\frac{1}{\pi }\sum _{n=1}^{\frac{{n}_{0}}{2}}{\left(-1\right)}^{n+1}\underset{1}{\overset{\mathrm{\infty }}{\int }}\left[\frac{1}{{t}^{2n}}\mathrm{cos}\left(tx\right)+\frac{1}{{t}^{2n-1}}\mathrm{sin}\left(tx\right)\right]\text{d}t.$(2.4)

Hence, for n0 large enough, (2.2) is approximately equal to the sum of (2.3) and (2.4). A graphical representation of this approximation was prepared using Mathematica v. 11.1 and is plotted in Figure 1. The integrals were calculated numerically. The exact density is marked with a thick line.

Figure 1

Density of the random variable X ~ Exp (1) and its approximation (2.3)+(2.4) for n0 = 4,10,16.
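The sum (2.3)+(2.4) is straightforward to evaluate numerically. The following Python sketch (SciPy assumed; the paper's figures were made in Mathematica, and the truncation of the outer integral at a finite T is our own simplification) reproduces the approximation and compares it with the exact density e^{-x}:

```python
import math
from scipy import integrate

def f_approx(x, n0=16, T=200.0):
    """Approximate the Exp(1) density at x by (2.3) + (2.4);
    the infinite upper limit in (2.4) is truncated at T."""
    inner = sum(
        (-1) ** n * integrate.quad(
            lambda t, n=n: t ** (2 * n) * math.cos(t * x)
                           + t ** (2 * n + 1) * math.sin(t * x), 0, 1)[0]
        for n in range(n0 // 2))
    outer = sum(
        (-1) ** (n + 1) * integrate.quad(
            lambda t, n=n: math.cos(t * x) / t ** (2 * n)
                           + math.sin(t * x) / t ** (2 * n - 1),
            1, T, limit=400)[0]
        for n in range(1, n0 // 2 + 1))
    return (inner + outer) / math.pi

# for n0 = 16 the approximation is already close to the exact density e^{-x}
for x in (0.5, 1.0, 2.0):
    assert abs(f_approx(x) - math.exp(-x)) < 0.1
```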

Let us now turn back to the random variable Y = X–ln (X) and its characteristic function

${\phi }_{Y}\left(t\right)=\underset{0}{\overset{+\mathrm{\infty }}{\int }}{\mathrm{e}}^{\text{i}t\left(x-\mathrm{ln}\left(x\right)\right)}{\mathrm{e}}^{-x}\text{d}x,$

and notice that an attempt to express φY(t) in series form is more complicated than in the case of φX(t). Therefore, we do not compute φY(t) directly, but concentrate on its n-th derivative, for whose existence we only need to prove that the random variable Y = X − ln(X) has a finite n-th moment, i.e., that $\int_0^{+\infty}(x-\ln(x))^n\,\mathrm{e}^{-x}\,\mathrm{d}x$ is a real number. It holds that

$\int_0^{+\infty}(x-\ln(x))^n\,\mathrm{e}^{-x}\,\mathrm{d}x=\sum_{k=0}^{n}(-1)^{n-k}\binom{n}{k}\int_0^{+\infty}x^k\,(\ln(x))^{n-k}\,\mathrm{e}^{-x}\,\mathrm{d}x\ \overset{u=\ln(x)}{=}\ \sum_{k=0}^{n}(-1)^{n-k}\binom{n}{k}\int_{-\infty}^{+\infty}u^{n-k}\,\mathrm{e}^{(k+1)u-\mathrm{e}^{u}}\,\mathrm{d}u.$(2.5)

Since there exists u0 > 0 such that for all u > u0 it holds that $\left(k+1\right)u-{\mathrm{e}}^{u}<-u$, then

${\mathrm{e}}^{\left(k+1\right)u-{\mathrm{e}}^{u}}<{\mathrm{e}}^{-u}\mathit{ }\text{for all}u>{u}_{0},$

and, simultaneously,

$\underset{{u}_{0}}{\overset{+\mathrm{\infty }}{\int }}{u}^{n-k}{\mathrm{e}}^{-u}\text{d}u<+\mathrm{\infty }\mathit{ }\text{for all}k=0,\mathrm{\dots },n.$(2.6)

Finally, from (2.5) and (2.6) we get the desired outcome in the form

$\sum_{k=0}^{n}\int_0^{+\infty}u^{n-k}\,\mathrm{e}^{(k+1)u-\mathrm{e}^{u}}\,\mathrm{d}u\in\mathbb{R}.$

Similarly, according to [7, Chapter VI, Theorem 7], we have

$\phi_Y^{(n)}(0)=\mathrm{i}^n\int_0^{+\infty}(x-\ln(x))^n\,\mathrm{e}^{-x}\,\mathrm{d}x=\mathrm{i}^n\sum_{l=0}^{n}\binom{n}{l}(-1)^{n-l}\int_0^{+\infty}x^l\,(\ln(x))^{n-l}\,\mathrm{e}^{-x}\,\mathrm{d}x=\mathrm{i}^n\sum_{l=0}^{n}\binom{n}{l}(-1)^{n-l}\,\Gamma^{(n-l)}(l+1),$(2.7)

where the last equality follows from the identity

$\Gamma^{(l)}(t):=\frac{\mathrm{d}^l}{\mathrm{d}t^l}\,\Gamma(t)=\int_0^{\infty}x^{t-1}\,(\ln(x))^{l}\,\mathrm{e}^{-x}\,\mathrm{d}x,\quad t>0.$

Thus, φY(t) can be formally written in the form

$\phi_Y(t)=\sum_{n=0}^{\infty}\frac{\phi_Y^{(n)}(0)}{n!}\,t^n=\sum_{n=0}^{\infty}\underbrace{\frac{1}{n!}\sum_{l=0}^{n}\binom{n}{l}(-1)^{n-l}\,\Gamma^{(n-l)}(l+1)}_{a(n)}\,(\mathrm{i}t)^n.$(2.8)

Using the first n0 terms of the series (2.8) and applying the backward transformation (2.2), we obtain

$\frac{1}{2\pi}\int_{-\infty}^{\infty}\mathrm{e}^{-\mathrm{i}tx}\sum_{n=0}^{n_0-1}a(n)\,(\mathrm{i}t)^n\,\mathrm{d}t=\frac{1}{\pi}\sum_{n=0}^{\frac{n_0}{2}-1}(-1)^{n}\int_0^{\infty}\left[a(2n)\,t^{2n}\cos(tx)+a(2n+1)\,t^{2n+1}\sin(tx)\right]\mathrm{d}t.$(2.9)

However, the integrals in (2.9) are divergent and a flaw of this procedure is that (2.8) is not valid for each t ∈ ℝ. Therefore, we are interested in the limit behavior of the sequence $\sqrt[n]{|a\left(n\right)|}$. Using a computer, it is possible to calculate $\sqrt[n]{|a\left(n\right)|}$ for at least a few n. From the analysis of the obtained results we formulate the following hypothesis,

$\underset{n\to +\mathrm{\infty }}{lim}\sqrt[n]{|a\left(n\right)|}=1.$(2.10)

The well-known Cauchy-Hadamard theorem from complex analysis would guarantee the absolute convergence of the series (2.8) for |t| < 1. However, we could neither confirm the relationship (2.10), nor express φY(t) in series form for |t| > 1. As suggested by an anonymous referee, the relation (2.10) is quite difficult to confirm, but one can easily find a bound for $|a(n)|=|\phi_Y^{(n)}(0)|/n!$ and show that ${\limsup}_{n\to\infty}\sqrt[n]{|a(n)|}\le 1$. Hence, the radius of convergence is at least 1.
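The computation of $\sqrt[n]{|a(n)|}$ for a few n can be sketched as follows (a Python illustration, SciPy assumed; it uses the moment representation a(n) = E(X − ln X)^n / n!, which follows from (2.7)-(2.8), rather than derivatives of the gamma function):

```python
import math
from scipy import integrate

def a(n):
    """a(n) = E[(X - ln X)^n] / n! for X ~ Exp(1), cf. (2.8).
    The integral is split at 1 because of the logarithmic singularity at 0."""
    f = lambda x: (x - math.log(x)) ** n * math.exp(-x)
    moment = integrate.quad(f, 0, 1)[0] + integrate.quad(f, 1, math.inf)[0]
    return moment / math.factorial(n)

roots = [a(n) ** (1.0 / n) for n in range(1, 13)]

# a(1) = E[X - ln X] = 1 + Euler-Mascheroni constant
assert abs(a(1) - 1.5772156649) < 1e-6
# the n-th roots stay bounded, in line with limsup <= 1 (up to small-n effects)
assert all(0 < r < 2 for r in roots)
```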

## 3 Saddle-point approximations

In this section we use the saddle point approximation and the framework developed in [5, Chapter 3] to derive an approximation of the density of the random variable Y = X - ln (X), where X ~ Exp(1).

Let us first consider a family of densities of the random variable X:

$f\left(x|\gamma \right)={\mathrm{e}}^{-\mathrm{ln}x+\left(\mathrm{ln}x-x\right)\gamma -\kappa \left(\gamma \right)},x\in \left(0,\mathrm{\infty }\right),\gamma \in 𝒢=\left(0,\mathrm{\infty }\right),$

where using the identity $1=\underset{0}{\overset{\mathrm{\infty }}{\int }}f\left(x|\gamma \right)\text{d}x$ we get

$\mathrm{e}^{\kappa(\gamma)}=\int_0^{\infty}\mathrm{e}^{-\ln x+(\ln x-x)\gamma}\,\mathrm{d}x=\int_0^{\infty}x^{\gamma-1}\,\mathrm{e}^{-x\gamma}\,\mathrm{d}x=\frac{1}{\gamma^{\gamma}}\int_0^{\infty}u^{\gamma-1}\,\mathrm{e}^{-u}\,\mathrm{d}u=\frac{\Gamma(\gamma)}{\gamma^{\gamma}},$

which means that

$\kappa(\gamma)=\ln\Gamma(\gamma)-\gamma\ln\gamma.$(3.1)

Note that $\mathrm{e}^{-x}=\left.\mathrm{e}^{-\ln x+(\ln x-x)\gamma-\kappa(\gamma)}\right|_{\gamma=1}$. Hence, the saddle-point approximation qT(t|γ) of the density of the random variable T = ln(X) − X, where X has the density f(x|γ), can help us to approximate the density of the random variable Y = X − ln(X), where X ∼ Exp(1). The required approximation has the form qT(−t|1).

Let us check the regularity assumptions a) – f) from [5, Chapter 3]. Assumptions a) – c) are obviously satisfied because the sample space of the random variable X is (0, ∞), the parameter space is 𝚯 = (0, ∞), and γ: (0, ∞) → (0, ∞) is the identity mapping. Since t(x) = ln(x) − x, we have $\frac{\mathrm{d}}{\mathrm{d}x}t(x)=\frac{1}{x}-1$, so that d) follows. Further, $\kappa(\gamma)<\infty$ for all $\gamma\in(0,\infty)=\mathcal{G}$, so Assumption e) follows. Finally,

$\left\{\frac{\partial}{\partial\gamma}\kappa(\gamma);\ \gamma\in(0,\infty)\right\}=\left\{\frac{\Gamma'(\gamma)}{\Gamma(\gamma)}-\ln\gamma-1;\ \gamma\in(0,\infty)\right\}=(-\infty,-1),$

which implies the validity of Assumption f), since t(x) = ln (x) - x ≤ -1 for x > 0.

The random variable T = ln(X) − X, where X has the density f(x|γ), can be shown to have a density of the form $\mathrm{e}^{-\varphi(t)+t\gamma-\kappa(\gamma)}$, where the function φ(.) is usually hard to determine explicitly. For the MLE $\hat\gamma(t)$ of γ define

$\Sigma_{\hat\gamma(t)}:=\left.\frac{\partial^2}{\partial\gamma^2}\kappa(\gamma)\right|_{\gamma=\hat\gamma(t)}=\frac{\Gamma''(\hat\gamma(t))\,\Gamma(\hat\gamma(t))-\left[\Gamma'(\hat\gamma(t))\right]^2}{\left[\Gamma(\hat\gamma(t))\right]^2}-\frac{1}{\hat\gamma(t)}.$(3.2)

Then for the I-divergence defined as

$I\left(\gamma ,{\gamma }^{*}\right):={E}_{\gamma }\left[\mathrm{ln}\frac{f\left(x|\gamma \right)}{f\left(x|{\gamma }^{*}\right)}\right],$(3.3)

it can be shown easily by means of (3.1) that

$I(\hat\gamma(t),\gamma)=(\hat\gamma(t)-\gamma)\,t-\ln\Gamma(\hat\gamma(t))+\hat\gamma(t)\ln\hat\gamma(t)+\ln\Gamma(\gamma)-\gamma\ln\gamma,$

from which we finally get

$I(\hat\gamma(t),1)=(\hat\gamma(t)-1)\,t-\ln\Gamma(\hat\gamma(t))+\hat\gamma(t)\ln\hat\gamma(t).$(3.4)

Now, it remains to determine $\stackrel{^}{\gamma }\left(t\right)$. From the equation ${\frac{\partial }{\partial \gamma }\kappa \left(\gamma \right)|}_{\gamma =\stackrel{^}{\gamma }\left(t\right)}=t$ (cf. (1.3)) we have

$\frac{\Gamma'(\hat\gamma(t))}{\Gamma(\hat\gamma(t))}-\ln\hat\gamma(t)-1=t.$(3.5)

Notice that the solution of (3.5) cannot be found in a closed form and must be evaluated numerically.
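A numerical solution of (3.5) can be sketched as follows (Python with SciPy assumed; the bracketing interval is our own choice, justified by the range (−∞, −1) of the left-hand side established above):

```python
import math
from scipy.optimize import brentq
from scipy.special import digamma

def gamma_hat(t):
    """MLE gamma_hat(t) solving psi(g) - ln(g) - 1 = t, i.e. equation (3.5).
    The left-hand side increases from -infinity to -1 on (0, infinity),
    so a unique root exists for every t < -1."""
    return brentq(lambda g: digamma(g) - math.log(g) - 1 - t, 1e-8, 1e8)

# check the residual of (3.5) at the computed root
gh = gamma_hat(-2.0)
assert abs(digamma(gh) - math.log(gh) - 1 - (-2.0)) < 1e-9
```

Note that the admissible arguments t = t(x) = ln(x) − x never exceed −1, consistent with the range condition in Assumption f).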

Substituting (3.2) and (3.4) into the saddle-point approximation formula (see [5, Chapter 3])

${q}_{T}\left(t|\gamma \right)=\frac{1}{\sqrt{2\pi }}{\left(det{\mathrm{\Sigma }}_{\stackrel{^}{\gamma }\left(t\right)}\right)}^{-1/2}{\mathrm{e}}^{-I\left(\stackrel{^}{\gamma }\left(t\right),\gamma \right)},$

we get

$q_T(t\mid 1)=\frac{1}{\sqrt{2\pi}}\frac{1}{\sqrt{\Sigma_{\hat\gamma(t)}}}\,\mathrm{e}^{-I(\hat\gamma(t),1)}=\frac{1}{\sqrt{2\pi}}\left(\frac{\Gamma''(\hat\gamma(t))\Gamma(\hat\gamma(t))-\left[\Gamma'(\hat\gamma(t))\right]^2}{\Gamma(\hat\gamma(t))^2}-\frac{1}{\hat\gamma(t)}\right)^{-1/2}\mathrm{e}^{(1-\hat\gamma(t))t+\ln\Gamma(\hat\gamma(t))-\hat\gamma(t)\ln\hat\gamma(t)}=\frac{1}{\sqrt{2\pi}}\frac{\left[\Gamma(\hat\gamma(t))\right]^2\,\hat\gamma(t)^{\frac{1}{2}-\hat\gamma(t)}}{\sqrt{\hat\gamma(t)\left[\Gamma''(\hat\gamma(t))\Gamma(\hat\gamma(t))-\left[\Gamma'(\hat\gamma(t))\right]^2\right]-\Gamma(\hat\gamma(t))^2}}\,\mathrm{e}^{(1-\hat\gamma(t))t}.$

Note 1.

In mathematical software, the derivatives of the gamma function are usually not implemented, unlike the digamma and polygamma functions

$\psi(t)=\frac{\mathrm{d}}{\mathrm{d}t}\ln\Gamma(t)=\frac{\Gamma'(t)}{\Gamma(t)}\quad\text{and}\quad\psi_n(t)=\frac{\mathrm{d}^n}{\mathrm{d}t^n}\psi(t).$

Note that equation (3.5) now has the form

$\mathrm{\psi }\left(\stackrel{^}{\gamma }\left(t\right)\right)-\mathrm{ln}\stackrel{^}{\gamma }\left(t\right)-1=t$

and finally (3.2) turns into ${\mathrm{\Sigma }}_{\stackrel{^}{\gamma }\left(t\right)}={\mathrm{\psi }}_{1}\left(\stackrel{^}{\gamma }\left(t\right)\right)-\frac{1}{\stackrel{^}{\gamma }\left(t\right)}$, which enables us to conclude that

$q_T(t\mid 1)=\frac{1}{\sqrt{2\pi}}\frac{1}{\sqrt{\psi_1(\hat\gamma(t))-\frac{1}{\hat\gamma(t)}}}\,\mathrm{e}^{(1-\hat\gamma(t))t+\ln\Gamma(\hat\gamma(t))-\hat\gamma(t)\ln\hat\gamma(t)}.$(3.6)
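Formula (3.6), together with a numerical solution of (3.5), can be sketched in Python (SciPy assumed; tolerances and evaluation points are our own choices); the exact density (3.7) below serves as a rough sanity check:

```python
import math
from scipy.optimize import brentq
from scipy.special import digamma, polygamma, gammaln, lambertw

def q_T(t):
    """Saddle-point approximation (3.6) of the density of T = ln(X) - X at t < -1."""
    g = brentq(lambda x: digamma(x) - math.log(x) - 1 - t, 1e-8, 1e8)  # (3.5)
    sigma = polygamma(1, g) - 1 / g                                    # (3.2)
    return math.exp((1 - g) * t + gammaln(g) - g * math.log(g)) \
        / math.sqrt(2 * math.pi * sigma)

def f_Y(y):
    """Exact density (3.7) of Y = X - ln(X), X ~ Exp(1), for y >= 1."""
    w0 = lambertw(-math.exp(-y), 0).real
    w1 = lambertw(-math.exp(-y), -1).real
    return w1 / (1 + w1) * math.exp(w1) - w0 / (1 + w0) * math.exp(w0)

# the density of Y at y is approximated by q_T(-y); check rough agreement
for y in (1.5, 2.0, 2.5):
    assert 1 / 3 < q_T(-y) / f_Y(y) < 3
```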

The exact density of the random variable Y = X - ln (X), marked by a solid line in Figure 2, has the form (see Note 6 in the Appendix)

${f}_{Y}\left(t\right)=\frac{LW\left(-1,-{\mathrm{e}}^{-t}\right)}{1+LW\left(-1,-{\mathrm{e}}^{-t}\right)}{\mathrm{e}}^{LW\left(-1,-{\mathrm{e}}^{-t}\right)}-\frac{LW\left(0,-{\mathrm{e}}^{-t}\right)}{1+LW\left(0,-{\mathrm{e}}^{-t}\right)}{\mathrm{e}}^{LW\left(0,-{\mathrm{e}}^{-t}\right)},$(3.7)

for t ∈ [1, +∞), where LW (k, t) is the k-th branch of the complex multifunction called Lambert W-function. Recall that LW (z) is defined as the solution of equation

$LW\left(z\right){\mathrm{e}}^{LW\left(z\right)}=z,z\in C,$(3.8)

and notice that (3.7) is a special case of a general expression for the density derived in [11].

Figure 2

Density of the random variable Y = X − log(X) (X ~ Exp(1)) given by (3.7) and its approximation qT(−t | 1) given by (3.6).

Since equation (3.8) has infinitely many solutions, LW (t) is a multifunction. Real values are contained in branches LW (0, t), t ∈ (-e -1, ∞), and LW (-1, t), t ∈ (-e-1, 0]. The ranges of the corresponding functional values are [-1, +∞) for LW (0, t) and (-∞, -1] for LW (-1, t). For more details on the Lambert W-function, see [2] and [11].
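The two real branches are available in SciPy as `lambertw`; the following sketch (an illustration with an arbitrarily chosen t) verifies the defining relation (3.8) and, at the same time, Lemma 3.1 below:

```python
import math
from scipy.special import lambertw

t = 2.5
x1 = -lambertw(-math.exp(-t), 0).real   # smaller root, in (0, 1]
x2 = -lambertw(-math.exp(-t), -1).real  # larger root, in [1, +inf)

# both roots satisfy x - ln(x) = t, cf. Lemma 3.1
assert abs(x1 - math.log(x1) - t) < 1e-9
assert abs(x2 - math.log(x2) - t) < 1e-9
assert 0 < x1 <= 1 <= x2
```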

The following two Lemmas, describing this function’s basic properties, will be needed below.

#### Lemma 3.1.

The equation x − ln(x) = t in the variable x has, for all t ∈ [1, +∞), two real solutions x1, x2 such that 0 < x1 ≤ x2, and it holds that

$x_1=-LW(0,-\mathrm{e}^{-t})\quad\text{and}\quad x_2=-LW(-1,-\mathrm{e}^{-t}).$

Proof.

See Appendix.

#### Lemma 3.2.

For k = 0, 1, the function LW (k, -e-x) is continuously differentiable in the variable x ∈ (1, +∞), and it holds that

$\frac{\text{d}}{\text{d}x}\left[LW\left(k,-{\mathrm{e}}^{-x}\right)\right]=-\frac{LW\left(k,-{\mathrm{e}}^{-x}\right)}{1+LW\left(k,-{\mathrm{e}}^{-x}\right)}.$

Proof.

See Appendix.

## 4 Approximation by Euler polygons

To calculate the values of the density fY of the random variable Y = X − ln(X), where X ~ Exp(1), the exact form (3.7) can be used. However, an approximation of it can also be useful, for example, when we cannot determine the values of the Lambert W-function, which is not implemented in some common statistical software such as Microsoft Excel, SPSS or Minitab.

The basic idea of our approximation is to replace the values LW(−1, −e−t) and LW(0, −e−t) in (3.7) by their approximations y−1(t) and y0(t), which can be obtained by numerically solving the differential equation

$\frac{\text{d}}{\text{d}t}y\left(t\right)=-\frac{y\left(t\right)}{1+y\left(t\right)},t\in \left[a,b\right],a>1,$

which is satisfied by both LW (-1, -e-t) and LW (0, -e-t), see Lemma 3.2.

To find the required solution of this equation, Euler's forward method (also called Euler's polygon method) can be used, providing a solution in the form of a continuous piecewise linear function. More precisely, we first divide the interval [a, b] into n subintervals [ti, ti+1], i = 0, 1, …, n−1, with t0 = a and ti+1 − ti = (b−a)/n =: h. We put

${y}_{k}\left({t}_{0}\right)=LW\left(k,-{\mathrm{e}}^{-{t}_{0}}\right),k=-1,0.$(4.1)

Consequently, we proceed from the point t0 to the right by step h. At every point ti, i = 0,1,…, n-1, it holds that

${\frac{\text{d}}{\text{d}t}{y}_{k}\left(t\right)|}_{t={t}_{i}}=-\frac{{y}_{k}\left({t}_{i}\right)}{1+{y}_{k}\left({t}_{i}\right)}.$(4.2)

However, since the function yk (t) is linear on the interval [ti,ti+1], we have

${\frac{\text{d}}{\text{d}t}{y}_{k}\left(t\right)|}_{t={t}_{i}}=\frac{{y}_{k}\left({t}_{i+1}\right)-{y}_{k}\left({t}_{i}\right)}{h}.$(4.3)

Notice that the right hand side of the last equality is sometimes called “the forward-difference”.

Consequently, if we substitute (4.3) into (4.2), we get

$\frac{{y}_{k}\left({t}_{i+1}\right)-{y}_{k}\left({t}_{i}\right)}{h}=-\frac{{y}_{k}\left({t}_{i}\right)}{1+{y}_{k}\left({t}_{i}\right)},i=0,1,2,\mathrm{\dots },n-1,$

from which

${y}_{k}\left({t}_{i+1}\right)={y}_{k}\left({t}_{i}\right)-\frac{{y}_{k}\left({t}_{i}\right)}{1+{y}_{k}\left({t}_{i}\right)}h={y}_{k}\left({t}_{i}\right)+\frac{h}{1+{y}_{k}\left({t}_{i}\right)}-h,i=0,1,\mathrm{\dots },n-1.$(4.4)

Equations (4.1) and (4.4) give us values yk (t0), yk (t1), yk (t2), etc. Finally, from the linearity of yk (t) on each interval [ti, ti+1], we have

${y}_{k}\left(t\right)={y}_{k}\left({t}_{i}\right)+\frac{{y}_{k}\left({t}_{i+1}\right)-{y}_{k}\left({t}_{i}\right)}{h}\left(t-{t}_{i}\right)={y}_{k}\left({t}_{i}\right)-\frac{{y}_{k}\left({t}_{i}\right)}{1+{y}_{k}\left({t}_{i}\right)}\left(t-{t}_{i}\right),t\in \left[{t}_{i},{t}_{i+1}\right].$(4.5)

The above-described idea for computing yk (t) can be easily implemented; see Appendix. In Figure 3 we illustrate our approach on the interval [a, b] = [1.1, 5] for different numbers n of the grid points ti, n ∈ {10, 13, 25}. This enables us to compare the suggested approximation with the exact values of LW (0, -e-t). Parallel to that, in Figure 4 we can compare the quality of the approximations of LW (-1, -e-t) with the exact values for n ∈ {10, 13, 25} on [a, b] = [1.1, 5].
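A minimal sketch of the scheme (4.1)+(4.4) in Python (not the Appendix implementation; SciPy's `lambertw` is used only for the initial value and the exact reference):

```python
import math
from scipy.special import lambertw

def euler_lw(k, a, b, n):
    """Euler polygon (4.1)+(4.4) for y_k(t) approximating LW(k, -e^{-t})
    on [a, b]; returns the value at the right endpoint b."""
    h = (b - a) / n
    y = lambertw(-math.exp(-a), k).real  # initial value (4.1)
    for _ in range(n):
        y -= y / (1 + y) * h             # forward step (4.4)
    return y

a, b = 1.1, 5.0
for k in (0, -1):
    exact = lambertw(-math.exp(-b), k).real
    err_coarse = abs(euler_lw(k, a, b, 100) - exact)
    err_fine = abs(euler_lw(k, a, b, 2000) - exact)
    assert err_fine < 0.02
    assert err_fine < err_coarse  # refining the grid improves the approximation
```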

Figure 3

Plot of LW (0, -e-t) and its approximation (4.5) for n ∈ {10, 13, 25}.

Figure 4

Plot of LW (-1, -e-t) and its approximation (4.5) for n ∈ {10, 13, 25}.

Substituting the approximations y−1(t) and y0(t) of LW(−1, −e−t) and LW(0, −e−t) into (3.7), we get the approximation ffn(t) of the exact density fY(t) on the interval [a, b] in the form

${\mathrm{𝑓𝑓}}_{n}\left(t\right)=\frac{{y}_{-1}\left(t\right)}{1+{y}_{-1}\left(t\right)}{\mathrm{e}}^{{y}_{-1}\left(t\right)}-\frac{{y}_{0}\left(t\right)}{1+{y}_{0}\left(t\right)}{\mathrm{e}}^{{y}_{0}\left(t\right)}.$(4.6)

For an illustration of the quality of this approximation on [a, b] = [1.1, 5], see Figure 5.

Figure 5

Exact density fY(t) of the random variable Y = X − log(X) (X ~ Exp(1)) given by (3.7), and its approximation (4.6) for n ∈ {10, 13, 25}.
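Combining the Euler approximations with (4.6) gives the following sketch (Python, SciPy assumed; the grid size and evaluation points are our own choices):

```python
import math
from scipy.special import lambertw

def ff_n(t_eval, a=1.1, b=5.0, n=2000):
    """Approximation (4.6) of f_Y at t_eval in [a, b]: run the Euler scheme
    (4.1)+(4.4) for both branches up to t_eval, finish with the linear
    piece (4.5), and plug the values into (3.7)."""
    h = (b - a) / n
    steps = int((t_eval - a) / h)
    vals = {}
    for k in (0, -1):
        y = lambertw(-math.exp(-a), k).real            # (4.1)
        for _ in range(steps):
            y -= y / (1 + y) * h                       # (4.4)
        y -= y / (1 + y) * (t_eval - (a + steps * h))  # (4.5)
        vals[k] = y
    y1, y0 = vals[-1], vals[0]
    return y1 / (1 + y1) * math.exp(y1) - y0 / (1 + y0) * math.exp(y0)

def f_Y(t):
    """Exact density (3.7)."""
    w0 = lambertw(-math.exp(-t), 0).real
    w1 = lambertw(-math.exp(-t), -1).real
    return w1 / (1 + w1) * math.exp(w1) - w0 / (1 + w0) * math.exp(w0)

for t in (1.5, 2.0, 3.0):
    assert abs(ff_n(t) - f_Y(t)) < 0.01
```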

To examine the approximation (4.6) in more depth, we first focus on LW (0, -e-t). To simplify the notation, let us, for the moment, denote

$u\left(t\right):=LW\left(0,-{\mathrm{e}}^{-t}\right)\mathit{ }\mathrm{and}\mathit{ }y\left(t\right):={y}_{0}\left(t\right),$

and, without proof, use the fact that −1 < u(t) < 0 holds for all t ∈ [a, b]. From Lemma 3.2 it follows that $\frac{\mathrm{d}}{\mathrm{d}t}u(t)=-\frac{u(t)}{1+u(t)}>0$ for all t ∈ [a, b], i.e., u(.) is an increasing function on [a, b]. Moreover, $\frac{\mathrm{d}^2}{\mathrm{d}t^2}u(t)=\frac{u(t)}{(1+u(t))^3}<0$ for all t ∈ [a, b], i.e., u(.) is strictly concave on [a, b]. We will need the following two Lemmas for the subsequent proofs.

#### Lemma 4.1.

If the number n of grid points is “large enough”, then:

1. y⁢(⋅) is increasing on [a,b] and, moreover, y⁢(t)<0 for all t∈[a,b].

2. y⁢(t) approximates u(t) from above, i.e., for all t∈(a,b] it holds that y⁢(t) > u⁢(t).

Proof.

See Appendix. ∎

#### Lemma 4.2.

There is a constant c > 0 such that for all x, y ∈ [a, b] it holds that |u(x) - u(y)| < c |x-y|, i.e., u (.) is Lipschitz continuous.

Proof.

See Appendix. ∎

And this is our main result about the accuracy of the approximation y0(t):

#### Theorem 4.1.

Let 1 < a < b < + ∞. Then y0 (t) uniformly converges to LW (0, -e-t) on the interval [a, b] as the number of grid points n → + ∞.

#### Proof.

See Appendix. ∎

Note 2. Recall that Picard's existence theorem, one of the basic results in the theory of ordinary differential equations, gives conditions that guarantee the existence and uniqueness of solutions of the Cauchy initial value problem

$\frac{\text{d}}{\text{d}t}x\left(t\right)=f\left(t,x\left(t\right)\right),x\left({t}_{0}\right)={x}_{0}.$

It states that there exist both an interval I such that t0 ∈ I, and a solution x: I → ℝ. For the existence of the solution it is sufficient that f: ℝ × ℝ → ℝ is Lipschitz continuous in the second variable.

The proof of Picard's existence theorem is usually based on the uniform convergence of the Euler polygons. Unfortunately, we cannot rely on this approach in the proof of Theorem 4.1, despite the fact that the Lipschitz continuity of the function $f(t,x)=-\frac{x}{1+x}$ can be easily proven, because Theorem 4.1 guarantees global convergence of the Euler polygons on the entire interval [a, b], not just on a small neighborhood [a, a + ε] of the point a.

Now we focus on LW (-1, -e-t) and show convergence analogous to that in Theorem 4.1. We will use procedures and Lemmas similar to those used in the proof of Theorem 4.1. Therefore, it is advantageous, for the moment, to use analogous notation as well, i.e.,

$u\left(t\right):=LW\left(-1,-{\mathrm{e}}^{-t}\right)\mathit{ }\mathrm{and}\mathit{ }y\left(t\right):={y}_{-1}\left(t\right).$

Without proof, we will use the following properties of the Lambert W-function: for all t ∈ [a, b] it holds that u(t) < −1, $\frac{\mathrm{d}}{\mathrm{d}t}u(t)=-\frac{u(t)}{1+u(t)}<0$ and $\frac{\mathrm{d}^2}{\mathrm{d}t^2}u(t)=\frac{u(t)}{(1+u(t))^3}>0$, i.e., u(.) is a decreasing and strictly convex function. We will also need the following two Lemmas.

#### Lemma 4.3.

If the number n of the grid points is “large enough”, then:

1. y⁢(⋅) is decreasing on [a,b].

2. y(t) approximates u⁢(t) from below, i.e., for all t∈(a,b] it holds that y⁢(t)<u⁢(t).

Proof.

See Appendix. ∎

#### Lemma 4.4.

There is a constant c > 0 such that for all x, y ∈ [a, b] it holds that |u(x) − u(y)| < c|x − y|, i.e., u(.) is Lipschitz continuous.

Proof.

See Appendix. ∎

This is our main result about the accuracy of the approximation y-1 (t):

#### Theorem 4.2.

Let 1 < a < b < + ∞. Then y-1 (t) uniformly converges to LW (-1, -e-t) on the interval [a, b] as the number of the grid points n → +∞.

Proof.

See Appendix. ∎

Finally, Theorems 4.1 and 4.2 allow us to make an explicit statement about the quality of the approximation ffn (t) (see (4.6)) of the exact density fY (t) given by (3.7).

#### Theorem 4.3.

Let 1 < a < b < +∞. Then ffn(t) converges uniformly on the interval [a, b] to fY(t) as the number of grid points n → +∞.

#### Proof.

The assertion follows from the uniform convergence of y0(t) and y−1(t) to LW(0, −e−t) and LW(−1, −e−t), from the continuity of the function $\frac{z}{1+z}\mathrm{e}^{z}$, and from the fact that there exists a lower bound of LW(0, −e−t) on the interval [a, b] that is greater than −1, and an upper bound of LW(−1, −e−t) on [a, b] that is smaller than −1. ∎

## 5 Likelihood ratio tests

First recall that if the random variables X1, …, XN∼Exp (γ) are independent, then the maximum likelihood estimator of the parameter γ has the form

$\hat\gamma=\frac{1}{\frac{1}{N}\sum_{i=1}^{N}X_i}=\frac{N}{\sum_{i=1}^{N}X_i}.$(5.1)

Moreover, $Z_N=\gamma\sum_{i=1}^{N}X_i$ is governed by the gamma distribution Γ(N, 1) with the distribution function

$D_N(x)=P(Z_N\le x)=\begin{cases}\dfrac{1}{\Gamma(N)}\displaystyle\int_0^{x}t^{N-1}\,\mathrm{e}^{-t}\,\mathrm{d}t, & x>0,\\ 0, & x\le 0.\end{cases}$

Using integration by parts, we can calculate DN(x) using

${D}_{N}\left(x\right)=1-{\mathrm{e}}^{-x}\left(1+\frac{x}{1!}+\mathrm{\dots }+\frac{{x}^{N-1}}{\left(N-1\right)!}\right)\mathit{ }\text{for}x>0.$(5.2)
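Formula (5.2) is easy to verify against a library implementation of the gamma distribution (a Python sketch, SciPy assumed):

```python
import math
from scipy.stats import gamma as gamma_dist

def D_N(x, N):
    """Distribution function of Gamma(N, 1) via the partial-sum formula (5.2)."""
    if x <= 0:
        return 0.0
    return 1 - math.exp(-x) * sum(x ** k / math.factorial(k) for k in range(N))

# (5.2) agrees with the gamma CDF (shape N, scale 1) to machine precision
for N in (1, 3, 10):
    for x in (0.5, 2.0, 7.5):
        assert abs(D_N(x, N) - gamma_dist.cdf(x, N)) < 1e-12
```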

If we test the hypothesis H0: γ = γ0 against H1: γ ≠ γ0, and λ(X) denotes the corresponding likelihood ratio, the test statistic of the associated LRT equals

$-\mathrm{ln}\lambda \left(𝑿\right)={\gamma }_{0}\sum _{i=1}^{N}{X}_{i}-N\mathrm{ln}\left({\gamma }_{0}\sum _{i=1}^{N}{X}_{i}\right)-N+N\mathrm{ln}N.$(5.3)
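As a sanity check of (5.3) (our own sketch), the statistic vanishes exactly when γ0 Σ Xi = N, i.e., when γ0 coincides with the MLE (5.1), and is positive otherwise:

```python
import math

def neg_log_lambda(xs, gamma0):
    """-ln lambda(X) from (5.3) for testing H0: gamma = gamma0
    on an i.i.d. Exp(gamma) sample xs."""
    N = len(xs)
    s = gamma0 * sum(xs)
    return s - N * math.log(s) - N + N * math.log(N)
```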

Applying the same procedure as used for the derivation of (3.7) and slightly generalizing Lemma 3.1, we can show that the distribution function of -ln λ (X), i.e., ${\overline{F}}_{N}\left(x\right)=P\left(-\mathrm{ln}\lambda \left(𝑿\right)\le x\right)$, has, under the hypothesis H0, the form

${\overline{F}}_{N}\left(x\right)=\left\{\begin{array}{cc}{D}_{N}\left(-N\cdot LW\left(-1,-{\mathrm{e}}^{-1-\frac{x}{N}}\right)\right)-{D}_{N}\left(-N\cdot LW\left(0,-{\mathrm{e}}^{-1-\frac{x}{N}}\right)\right),\hfill & x>0,\hfill \\ 0,\hfill & x\le 0.\hfill \end{array}$

Note 3. If N = 1 and γ0 = 1, then -ln λ(X) = X - ln X - 1, where X ∼ Exp(1), and the corresponding distribution function has the form

${\overline{F}}_{1}\left(x\right)=\left\{\begin{array}{cc}{D}_{1}\left(-LW\left(-1,-{\mathrm{e}}^{-1-x}\right)\right)-{D}_{1}\left(-LW\left(0,-{\mathrm{e}}^{-1-x}\right)\right)\hfill & \hfill \\ ={\mathrm{e}}^{LW\left(0,-{\mathrm{e}}^{-1-x}\right)}-{\mathrm{e}}^{LW\left(-1,-{\mathrm{e}}^{-1-x}\right)},\hfill & x>0,\hfill \\ 0,\hfill & x\le 0.\hfill \end{array}$

Hence, the random variable X - ln X has the distribution function ${\mathrm{e}}^{LW\left(0,-{\mathrm{e}}^{-x}\right)}-{\mathrm{e}}^{LW\left(-1,-{\mathrm{e}}^{-x}\right)}$ on the interval [1, +∞) and zero otherwise, which is the same result as (7.2) in Note 6.
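The closed form for the law of X - ln X can be spot-checked by simulation; below is a sketch with our own naming, the bisection implementing the root-finding of Lemma 3.1:

```python
import math, random

def cdf_x_minus_lnx(t):
    """P(X - ln X < t) for X ~ Exp(1): equals e^{-x1} - e^{-x2},
    where x1 <= 1 <= x2 solve x - ln x = t (Lemma 3.1)."""
    if t <= 1.0:
        return 0.0
    f = lambda x: x - math.log(x) - t
    def bisect(lo, hi):
        for _ in range(100):
            mid = 0.5 * (lo + hi)
            if f(lo) * f(mid) <= 0.0:
                hi = mid
            else:
                lo = mid
        return 0.5 * (lo + hi)
    x1 = bisect(1e-12, 1.0)
    x2 = bisect(1.0, t + math.log(t) + 1.0)
    return math.exp(-x1) - math.exp(-x2)

# Monte Carlo comparison at t = 2
random.seed(1)
n = 200_000
count = 0
for _ in range(n):
    x = random.expovariate(1.0)
    if x - math.log(x) < 2.0:
        count += 1
emp = count / n
```

With 200 000 replications the empirical frequency agrees with the closed form to within Monte Carlo error.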

For testing H0: γ = γ0 against H1: γ ≠ γ0, one usually uses the LRT and the Wilks’ test statistic -2 ln λ(X), which is well known to be asymptotically distributed as ${\chi }_{1}^{2}$, see [16], with the corresponding asymptotic ${\chi }_{1}^{2}$ critical values. The exact critical values are usually not used.

A natural question arises as to how good the asymptotic approach is, i.e., how much the asymptotic critical values differ from the exact ones. Therefore, we will, in more detail, focus on the exact distribution of the Wilks’ test statistic under the given conditions. To that purpose, first recall that for x > 0:

${F}_{N}\left(x\right):=P\left(-2\mathrm{ln}\lambda \left(𝑿\right)<x\right)={D}_{N}\left(-N\cdot LW\left(-1,-{\mathrm{e}}^{-1-\frac{x}{2N}}\right)\right)-{D}_{N}\left(-N\cdot LW\left(0,-{\mathrm{e}}^{-1-\frac{x}{2N}}\right)\right).$(5.4)

Note that the corresponding density can be found following the approach outlined in Note 6.

Moreover, let us denote the distribution function of ${\chi }_{1}^{2}$ by CHI1(x). Since FN(x) converges to CHI1(x) from below as N → +∞, the corresponding asymptotic critical values based on CHI1(x) are smaller than the exact ones based on FN(x). Consequently, we will try to approximate the function FN(x) from above to get closer to its graph than CHI1(x). To approximate the function LW (k, -e-t) for k = -1, 0, we will use the method of the Euler polygons described in Section 4. There are two options:

a) To derive an approximation of the exact density and integrate it to obtain the approximation of the distribution function (5.4).

b) To directly approximate the individual terms appearing in (5.4).

Ad a). This approach is not suitable, since Lemmas 4.1 and 4.3 show that y-1(x) and y0(x) are approximations of LW (-1, -e-x) and LW (0, -e-x) from below and from above, respectively. However, we need an approximation of the exact density from above, because the approximating distribution function must be enclosed between FN(x) and CHI1(x). It can be shown that to ensure this, we would need the function $\frac{{z}^{N}}{1+z}{\mathrm{e}}^{Nz}$ to be decreasing on the interval (-∞, 0) for N odd and, on the contrary, increasing on the same interval for N even. Unfortunately, the opposite is true, since

$\frac{\text{d}}{\text{d}z}\left(\frac{{z}^{N}}{1+z}{\mathrm{e}}^{Nz}\right)=\frac{{\mathrm{e}}^{Nz}}{{\left(1+z\right)}^{2}}\left(N{z}^{2}+\left(2N-1\right)z+N\right){z}^{N-1},$

which is (for z<0) positive for odd N and negative for even N.

Ad b). The direct approximation of the individual terms in (5.4) appears to be a better idea. According to Lemmas 4.1 and 4.3, we have

${y}_{-1}\left(1+\frac{x}{2N}\right)\le LW\left(-1,-{\mathrm{e}}^{-1-\frac{x}{2N}}\right)\mathit{ }\text{and}\mathit{ }{y}_{0}\left(1+\frac{x}{2N}\right)\ge LW\left(0,-{\mathrm{e}}^{-1-\frac{x}{2N}}\right),$

from which

$-N\cdot {y}_{-1}\left(1+\frac{x}{2N}\right)\ge -N\cdot LW\left(-1,-{\mathrm{e}}^{-1-\frac{x}{2N}}\right)\mathit{ }\text{and}\mathit{ }-N\cdot {y}_{0}\left(1+\frac{x}{2N}\right)\le -N\cdot LW\left(0,-{\mathrm{e}}^{-1-\frac{x}{2N}}\right).$

Moreover, the distribution function DN(.) is non-decreasing, so that

${F}_{N}\left(x\right)={D}_{N}\left(-N\cdot LW\left(-1,-{\mathrm{e}}^{-1-\frac{x}{2N}}\right)\right)-{D}_{N}\left(-N\cdot LW\left(0,-{\mathrm{e}}^{-1-\frac{x}{2N}}\right)\right)\le {D}_{N}\left(-N\cdot {y}_{-1}\left(1+\frac{x}{2N}\right)\right)-{D}_{N}\left(-N\cdot {y}_{0}\left(1+\frac{x}{2N}\right)\right)=:F{F}_{N}^{n}\left(x\right),$

where $F{F}_{N}^{n}\left(x\right)$ denotes an approximation of FN (x) obtained when n grid points are used for the evaluation of the corresponding Lambert W-functions on the interval [a, b]. The required inequality ${F}_{N}\left(x\right)\le F{F}_{N}^{n}\left(x\right)$ holds, but only for a sufficiently large number n of grid points, because we have used Lemmas 4.1 and 4.3.

Note that for each fixed N, the sequence of functions $\left\{F{F}_{N}^{n}\left(x\right){\right\}}_{n=1}^{+\mathrm{\infty }}$ converges uniformly to FN (x). This follows directly from Theorems 4.1 and 4.2 and from the continuity of DN (x). Recall that if we want to calculate $F{F}_{N}^{n}\left(x\right)$ on the interval [c, d], then we need to choose the appropriate a and b needed to determine y-1 (x) and y0 (x). It is evident that we can simply take

$a=1+\frac{c}{2N}\mathit{ }\text{and}\mathit{ }b=1+\frac{d}{2N}.$(5.5)

In [11] one can look up the exact α critical values based on FN (x) for α ∈ {0.005, 0.01, 0.02, 0.05} and N ∈ {1, 2, 3, 4, 5} (see Table 1). To obtain approximate critical values, it is sufficient to approximate the distribution functions FN (x) just on a certain subinterval to be able to determine the critical values for the most common α values. Note that the smallest exact critical value in Table 1 is 3.968 and the largest is 8.853; therefore we will compute the approximations $F\phantom{\rule{-2.5pt}{0ex}}{F}_{N}^{n}\left(x\right)$ just on the interval [c, d] = [3.8; 8.85] for N ∈ {1, 2, 3, 4, 5}. It means that it is sufficient to choose the same a, b for all five values of N, i.e., to set $a=1+\frac{3.8}{2×5}=1.38$ and $b=1+\frac{8.85}{2×1}\approx 5.43$ (cf. (5.5)).

Since $F{F}_{N}^{n}\left(x\right)>{F}_{N}\left(x\right)$ for all x ∈ [c, d], the approximate critical values obtained from $F{F}_{N}^{n}\left(x\right)$ are smaller than the exact critical values. Recall that the critical values based on $F{F}_{N}^{n}\left(x\right)$ can be obtained as a numerical solution of the equation $1-F{F}_{N}^{n}\left(x\right)=\alpha$. The approximate critical values calculated for n = 60 are presented in Table 1.
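To illustrate such a critical-value computation, the sketch below (our own code, using the exact F₁ from (5.4) rather than the Euler-polygon approximation FF) bisects 1 - F₁(x) = α for N = 1, α = 0.05; the result indeed exceeds the asymptotic χ²₁ quantile 3.841:

```python
import math

def roots(t):
    # x1 <= 1 <= x2 solving x - ln x = t (Lemma 3.1)
    f = lambda x: x - math.log(x) - t
    def bisect(lo, hi):
        for _ in range(100):
            mid = 0.5 * (lo + hi)
            if f(lo) * f(mid) <= 0.0:
                hi = mid
            else:
                lo = mid
        return 0.5 * (lo + hi)
    return bisect(1e-12, 1.0), bisect(1.0, t + math.log(t) + 1.0)

def F1(x):
    """Exact distribution function (5.4) of -2 ln lambda(X) for N = 1."""
    if x <= 0.0:
        return 0.0
    x1, x2 = roots(1.0 + x / 2.0)
    return math.exp(-x1) - math.exp(-x2)   # = D_1(x2) - D_1(x1)

def critical_value(alpha, lo=0.0, hi=50.0):
    # bisection on the monotone function F1: solve 1 - F1(x) = alpha
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if F1(mid) < 1.0 - alpha:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```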

Table 1

Critical values: exact (based on FN (x)), approximate (based on $F\phantom{\rule{-2.5pt}{0ex}}{F}_{N}^{60}\left(x\right)$), and asymptotic (based on CHI1 (x))

Finally, the asymptotic critical values based on CHI1 (x) are also shown in Table 1 for the purpose of comparison. We can see that the approximate critical values obtained from $F{F}_{N}^{n}\left(x\right)$ always lie between the exact and the asymptotic critical values.

Using a computer, it is also possible to “experimentally” determine the smallest n0 for which $F{F}_{N}^{{n}_{0}}\left(x\right)$ is enclosed between FN (x) and CHI1 (x) for all x ∈ [c, d] (see Table 2). This means that for all n > n0 the approximation based on $F{F}_{N}^{n}\left(x\right)$ is more accurate than the asymptotic approximation based on CHI1 (x). The values of n0 reported in Table 2 were the motivation to set n = 60 when constructing Table 1.

Table 2

Smallest n0 for which $F\phantom{\rule{-2.5pt}{0ex}}{F}_{N}^{{n}_{0}}\left(x\right)$ is enclosed between FN (x) and CHI1 (x) for all x ∈ [c, d]

When calculating the approximations $F\phantom{\rule{-2.5pt}{0ex}}{F}_{N}^{n}\left(x\right)$ on the interval [c, d] for a particular N, we can reach good accuracy for even smaller n, provided a and b have been set exactly according to (5.5). However, notice that for Tables 1 and 2, we chose the same a = 1.38 and b = 5.43 for all N. This is useful, e.g., for the construction of Table 1, because we need to know y-1 (t) and y0 (t) only on a fixed interval [a, b] for n = 60, regardless of the value of N.

Note 4. If we want to determine n0 exactly, it is evident that we should find the maximal error of the approximation of LW (-1, -e-t) and LW (0, -e-t) when using y-1 (t) and y0 (t), i.e.,

$\underset{t\in \left[a,b\right]}{sup}|{y}_{-1}\left(t\right)-LW\left(-1,-{\mathrm{e}}^{-t}\right)|\mathit{ }\mathrm{and}\mathit{ }\underset{t\in \left[a,b\right]}{sup}|{y}_{0}\left(t\right)-LW\left(0,-{\mathrm{e}}^{-t}\right)|.$

Upper bounds of these errors were derived in the proofs of Theorems 4.1 and 4.2. According to (.8) and (.10), the upper bound of the latter maximal error is

${d}_{1}^{\prime }+B\frac{{\left(b-a\right)}^{2}}{n}$

where $B=\frac{c}{{\left(1+LW\left(0,-{\mathrm{e}}^{-a}\right)\right)}^{2}}$, ${d}_{1}^{\prime }=\frac{-LW\left(0,-{\mathrm{e}}^{-a}\right)}{{\left(1+LW\left(0,-{\mathrm{e}}^{-a}\right)\right)}^{3}}{\left(\frac{b-a}{n}\right)}^{2}$, and, according to Lemma 4.2, $c=\frac{-LW\left(0,-{\mathrm{e}}^{-a}\right)}{1+LW\left(0,-{\mathrm{e}}^{-a}\right)}$. Taking a = 1.38, b = 5.43 (as in Table 1) and n0 = 20, the upper bound is 1.191, which is a very rough estimate. Thus, the resulting estimate of the smallest n0 would be too high, no matter how accurately we are able to estimate the change of the value of DN (x) when its argument changes only negligibly.
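This bound is easy to reproduce numerically (a sketch of ours; the branch-0 value LW(0, -e⁻ᵃ) is again obtained through the small root of x - ln x = a):

```python
import math

a, b, n0 = 1.38, 5.43, 20

# LW(0, -e^{-a}) = -x1, x1 the small root of x - ln x = a (bisection)
f = lambda x: x - math.log(x) - a
lo, hi = 1e-12, 1.0
for _ in range(100):
    mid = 0.5 * (lo + hi)
    if f(lo) * f(mid) <= 0.0:
        hi = mid
    else:
        lo = mid
w0 = -0.5 * (lo + hi)                     # LW(0, -e^{-a}) ~ -0.3609

c = -w0 / (1.0 + w0)                      # Lipschitz constant of Lemma 4.2
B = c / (1.0 + w0) ** 2
d1 = -w0 / (1.0 + w0) ** 3 * ((b - a) / n0) ** 2
bound = d1 + B * (b - a) ** 2 / n0        # the upper bound of Note 4
```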

Note 5. When practically using the exact distribution function FN (x) or its approximation $F{F}_{N}^{n}\left(x\right)$, it is not surprising that the main numerical problem is not the computation of the functional values of the Lambert W-function or its approximation, but the computation of the functional values of DN(x), especially for large values of N.

## 6 Example

Imagine a company producing different products. At any given moment the company concentrates on just one product, and it starts production of another one only after the completion of the previous one.

Assume that the production times of individual pieces of a given product can be described by independent random variables with small dispersion. Assume moreover that the overall production time of one product can be described by the Erlang distribution Erl(k, γ), which is a special case of the gamma distribution with an integer parameter k. It is well known that the corresponding cumulative distribution function is Dk (γx) (see (5.2)). Different estimators of the parameters and their properties have been thoroughly studied in the literature; for more details see, e.g., [3], [4] or [10].

Suppose the company, which currently has m back-orders, receives a new order. The company is now interested in estimating the time in which it is able to process the new order with a prescribed reliability δ, say δ = 0.99. We will call this time T “the δ due time”. If we denote the production time of the ith product by Xi, we are interested in such a time T for which

$P\left({X}_{1}+{X}_{2}+\mathrm{\dots }+{X}_{m+1}<T\right)=\delta .$

Since Xi ∼ Erl(k, γ) are independent, X1+X2+ … +Xm+1 ∼ Erl(k(m+1), γ), and the required T can be calculated from the equation (cf. (5.2))

$1-{\mathrm{e}}^{-\gamma T}\sum _{i=0}^{k\left(m+1\right)-1}\frac{{\left(\gamma T\right)}^{i}}{i!}=\delta .$

It is evident that, provided the values of the parameters k and γ are known, T is the δ-quantile of Erl((m+1)k, γ). It can easily be shown that an equivalent expression is

$T={d}_{\left(m+1\right)k}\left(\delta \right)/\gamma ,$(6.1)

where d(m+1)k (δ) is the δ-quantile of Erl((m+1)k, 1). When the parameters are not known, we have to replace them by some kind of estimates $\stackrel{^}{k}$ and $\stackrel{^}{\gamma }$.

As an illustration of our approach, we will use real data representing production times in hours presented in Table 3. This data forms a subset of a larger data set studied in [10]. The MLEs of k and γ are $\stackrel{^}{k}=18$ and $\stackrel{^}{\gamma }=12.4042$, respectively (see [4] for details). Moreover, we assume m = 4. Replacing k and γ in (6.1) by $\stackrel{^}{k}$ and $\stackrel{^}{\gamma }$, respectively, and putting δ= 0.99, we obtain $\stackrel{^}{T}=9.1524$.
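The value T̂ = 9.1524 can be reproduced by bisecting D₍₍m+1₎k₎(γT) = δ on γT; the sketch below (our own code) evaluates the sum (5.2) in log-space via `math.lgamma` to stay numerically safe at shape (m+1)k = 90:

```python
import math

def D(N, x):
    # Gamma(N, 1) distribution function, i.e. the finite sum (5.2),
    # with each term computed in log-space for numerical safety
    if x <= 0.0:
        return 0.0
    lx = math.log(x)
    return 1.0 - sum(math.exp(-x + i * lx - math.lgamma(i + 1))
                     for i in range(N))

k_hat, g_hat, m, delta = 18, 12.4042, 4, 0.99
N = (m + 1) * k_hat                       # Erlang shape 90

lo, hi = 0.0, 500.0                       # bracket for z = gamma * T
for _ in range(100):                      # D(N, .) is monotone: bisection
    mid = 0.5 * (lo + hi)
    if D(N, mid) < delta:
        lo = mid
    else:
        hi = mid
T_hat = 0.5 * (lo + hi) / g_hat           # the delta due time estimate
```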

Table 3

Observed data.

The company might also be interested in testing the hypothesis H0: T = T0 against the alternative H1: T > T0, where T0 is the δ due time of delivery of the (m+1)st product as required by the customer. If we can assume that the parameter k is known, say from past experience, these hypotheses may be replaced by hypotheses on the parameter γ, more precisely, H0: γ = γ(T0) against the alternative H1: γ < γ(T0), where ${\gamma }_{0}=\gamma \left({T}_{0}\right)={d}_{\left(m+1\right)k}\left(\delta \right)/{T}_{0}$ (cf. (6.1)). Put, for example, T0 = 8 hours. Then the corresponding γ(8) is 14.1910. By straightforward algebra, the Wilks’ test statistic of the appropriate LRT turns out to be of the form

$-2\mathrm{ln}\lambda \left(𝑿\right)=2\left({\gamma }_{0}\sum _{i=1}^{N}{X}_{i}-Nk\mathrm{ln}\left({\gamma }_{0}\sum _{i=1}^{N}{X}_{i}\right)-Nk+Nk\mathrm{ln}\left(Nk\right)\right),$

i.e., very similar to (5.3). Since the sum of independent exponentially distributed random variables is governed by the Erlang distribution, it is easy to show that the corresponding distribution function of -2 ln λ(X) under H0 is FNk (x) (cf. (5.4)), i.e., F10·18 (x) = F180 (x) in our case. For our data we obtain -2 ln λ(X) = 3.4110, which is smaller than the exact, the approximate, and also the asymptotic critical value corresponding to α = 0.05 – see Table 4, which was obtained in a similar manner as Table 1. Hence, neither the exact, nor the approximate, nor the asymptotic version of the LRT rejects H0 at the significance level 5%.

Table 4

Critical values: exact (based on F180 (x)), approximate (based on $F{F}_{180}^{100}\left(x\right)$), and asymptotic (based on CHI1 (x)).

Another problem of the company is its possible liability to pay a penalty for a delay of delivery. It is therefore appropriate to determine the power of the considered LRT if the real δ due time of delivery is T0+1, i.e., the real γ is γ(T0+1) = γ(8+1) = 12.6142 in our case. Similarly to Note 6 (see Appendix) it can be shown that the distribution function of -2 ln λ(X) under a general value of γ has, for all x > 0, the form

${D}_{Nk}\left(-Nk\cdot \frac{\gamma }{{\gamma }_{0}}\cdot LW\left(-1,-{\mathrm{e}}^{-1-\frac{x}{2Nk}}\right)\right)-{D}_{Nk}\left(-Nk\cdot \frac{\gamma }{{\gamma }_{0}}\cdot LW\left(0,-{\mathrm{e}}^{-1-\frac{x}{2Nk}}\right)\right),$

so that the power of the considered LRT is

$1-{D}_{Nk}\left(-Nk\cdot \frac{\gamma \left({T}_{0}+1\right)}{{\gamma }_{0}}\cdot LW\left(-1,-{\mathrm{e}}^{-1-\frac{{c}_{\alpha ,Nk}}{2Nk}}\right)\right)+{D}_{Nk}\left(-Nk\cdot \frac{\gamma \left({T}_{0}+1\right)}{{\gamma }_{0}}\cdot LW\left(0,-{\mathrm{e}}^{-1-\frac{{c}_{\alpha ,Nk}}{2Nk}}\right)\right),$

where cα, Nk is the critical value of the test derived either from FNk (x), $F{F}_{Nk}^{n}\left(x\right)$ or from CHI1 (x) (see Table 4). For α = 0.05 we obtain the powers 0.3596, 0.3597, and 0.3600 of the exact, the approximate, and the asymptotic version, respectively, of the LRT at the nominal significance level 5%.
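The reported powers can be checked numerically. The sketch below (our own code) uses the asymptotic χ²₁ critical value 3.8415 (the exact and approximate critical values of Table 4 are not reproduced here) and evaluates both Lambert branches through the roots of x - ln x = t:

```python
import math

def roots(t):
    # x1 <= 1 <= x2 solving x - ln x = t (Lemma 3.1)
    f = lambda x: x - math.log(x) - t
    def bisect(lo, hi):
        for _ in range(100):
            mid = 0.5 * (lo + hi)
            if f(lo) * f(mid) <= 0.0:
                hi = mid
            else:
                lo = mid
        return 0.5 * (lo + hi)
    return bisect(1e-12, 1.0), bisect(1.0, t + math.log(t) + 1.0)

def D(N, x):
    # Gamma(N, 1) distribution function via (5.2), log-space terms
    if x <= 0.0:
        return 0.0
    lx = math.log(x)
    return 1.0 - sum(math.exp(-x + i * lx - math.lgamma(i + 1))
                     for i in range(N))

Nk = 180                                  # N = 10 observations, k = 18
gamma0, gamma1 = 14.1910, 12.6142         # gamma(8) and gamma(9)
c_asym = 3.8415                           # asymptotic chi^2_1 critical value

x1, x2 = roots(1.0 + c_asym / (2 * Nk))   # -LW(0, .) and -LW(-1, .)
r = gamma1 / gamma0
power = 1.0 - D(Nk, Nk * r * x2) + D(Nk, Nk * r * x1)
```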

## 7 Conclusions

Approximations of information divergences and their decompositions can be a very useful tool for both theory and practice. In particular, they are related to the likelihood ratio tests. In this paper we provide several approximations and discuss their properties. Applications to precision of likelihood ratio tests of production times are given.

## References

• [1]

ANTOCH, J.—JARUŠKOVÁ, D.: Testing a homogeneity of stochastic processes, Kybernetika 41 (2007), 415–430.

• [2]

CORLESS, R. M.—GONNET, G. H.—HARE, D. E. G.—JEFFREY, D. J.: On Lambert’s W Function. Technical Report CS-93-03, Department of Computer Science, University of Waterloo.

• [3]

JOHNSON, N. L.—KOTZ, S.—BALAKRISHNAN, N.: Continuous Univariate Distributions, Volume 1, 2nd ed., J. Wiley, New York, 1994.

• [4]

MILLER, G. K.: Maximum likelihood estimation for the Erlang integer parameter, Statist. Probab. Lett. 43 (1999), 335–341.

• [5]

PÁZMAN, A.: Nonlinear Statistical Models, Kluwer Academic Publishers, Dordrecht, 1993.

• [6]

PÁZMAN, A.: The density of the parameter estimators when the observations are distributed exponentially, Metrika 44 (1996), 9–26.

• [7]

RÉNYI, A.: Wahrscheinlichkeitsrechnung mit einem Anhang über Informationstheorie, VEB Deutscher Verlag der Wissenschaften, Berlin, 1962.

• [8]

RUBLÍK, F.: On optimality of the likelihood ratio tests in the sense of exact slopes. Part 1: General case, Kybernetika 25 (1989), 13–25.

• [9]

RUBLÍK, F.: On optimality of the likelihood ratio tests in the sense of exact slopes. Part 2: Application to individual distributions, Kybernetika 25 (1989), 117–135.

• [10]

STEHLÍK, M.: Exact likelihood ratio tests of the scale in the Gamma family, Tatra Mt. Math. Publ. 26 (2003), 381–390.

• [11]

STEHLÍK, M.: Distributions of exact tests in the exponential family, Metrika 57 (2003), 145–164.

• [12]

STEHLÍK, M.: Decompositions of information divergences: Recent development, open problems and applications. In: AIP Conf. Proc. 1493, 2012, pp. 972–976.

• [13]

STEHLÍK, M.—ECONOMOU, P.—KISEĽÁK, J.—RICHTER, W. D.: Kullback-Leibler life time testing, Appl. Math. Comput. 240 (2014), 122–139.

• [14]

STEHLÍK, M.—OSOSKOV, G. A.: Efficient testing of the homogeneity, scale parameters and number of components in the Rayleigh mixture, JINR Rapid Communications E-11-2003-116, 2003.

• [15]

STEHLÍK, M.—WAGNER, H.: Exact likelihood ratio testing for homogeneity of the exponential distribution, Comm. Statist. Simulation Comput. 40 (2011), 663–684.

• [16]

WILKS, S. S.: Mathematical Statistics, J. Wiley and Sons, New York – London, 1962.

## Appendix

A Maple implementation of the approximations (4.4) and (4.5) of the function LW (0, -e-t):

Euler_W0 :=

proc(x)

local knot, f_knot;

knot := a;

f_knot := evalf(W(0, -exp(-a)));

while (b-a)/n < x - knot do

knot := knot + (b-a)/n; f_knot := f_knot + (b-a)/n/(1+f_knot) - (b-a)/n

od;

RETURN( f_knot - f_knot/(1+f_knot)*(x-knot) )

end
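For readers without Maple, here is an equivalent Python sketch of the same Euler polygon (our own port; the true branch-0 value is again obtained by bisection). As Lemma 4.1 predicts, the polygon stays slightly above LW(0, -e⁻ᵗ):

```python
import math

def w0(t):
    # LW(0, -e^{-t}) = -x1, x1 the small root of x - ln x = t
    f = lambda x: x - math.log(x) - t
    lo, hi = 1e-12, 1.0
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
    return -0.5 * (lo + hi)

def euler_w0(x, a=1.38, b=5.43, n=60):
    """Euler polygon y0 for u' = -u/(1+u), u(a) = LW(0, -e^{-a});
    a direct port of the Maple proc above."""
    h = (b - a) / n
    knot, f_knot = a, w0(a)
    while h < x - knot:
        knot += h
        f_knot += h / (1.0 + f_knot) - h   # Euler step: h * (-u/(1+u))
    # linear interpolation on the last subinterval
    return f_knot - f_knot / (1.0 + f_knot) * (x - knot)
```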

#### Proof of Lemma 3.1

It holds that

$\frac{\text{d}}{\text{d}x}\left[x-\mathrm{ln}x\right]=1-\frac{1}{x}\left\{\begin{array}{cc}>0,\hfill & x\in \left(1,+\mathrm{\infty }\right),\hfill \\ <0,\hfill & x\in \left(0,1\right),\hfill \end{array}$

so that x-ln x attains its global minimum 1 at x= 1, and

$\underset{x\to +\mathrm{\infty }}{lim}\left[x-\mathrm{ln}x\right]=\underset{x\to {0}_{+}}{lim}\left[x-\mathrm{ln}x\right]=\mathrm{\infty }.$

It is evident that for all t ≥ 1 there exist solutions x1(t) ∈ (0, 1] and x2(t) ∈ [1, ∞) of the equation x-ln x= t. For the sake of simplicity, we will use the notation x1 and x2 instead of x1 (t) and x2 (t) below.

Moreover, the equation x-ln x = t is equivalent to the equation (-x)e-x = -e-t, so that, according to the definition of the Lambert W-function, we have -x = LW(-e-t), cf. (3.8). Now it is sufficient to determine in which branch of the multifunction LW the solutions are contained. Since x1 ∈ (0, 1] and x2 ∈ [1, +∞), it holds that

${x}_{1}=-LW\left(0,-{\mathrm{e}}^{-t}\right)\mathit{ }\mathrm{and}\mathit{ }{x}_{2}=-LW\left(-1,-{\mathrm{e}}^{-t}\right)$

and the proof is complete. □

#### Proof of Lemma 3.2

The real function F(a, b) = beb-a is continuously differentiable in both variables. From the properties of the Lambert W-function we know that, for any a0 ∈ (-e-1, 0), the terms b01 = LW (0, a0) and b02 = LW (-1, a0) are real, and (a0, b01) and (a0, b02) are solutions of the equation F(a, b) = 0. Moreover, $\frac{\partial F}{\partial b}\left(a,b\right)={\mathrm{e}}^{b}+b{\mathrm{e}}^{b}$, so that $\frac{\partial F}{\partial b}\left({a}_{0},{b}_{01}\right)\ne 0\ne \frac{\partial F}{\partial b}\left({a}_{0},{b}_{02}\right)$ because b01 ≠ -1 ≠ b02 (⇔ a0 ≠ -e-1).

Applying the implicit function theorem at the points (a0, b01) and (a0, b02), a0 ∈ (-e-1, 0), we find that there exist continuously differentiable functions bi: (-e-1, 0) → ℝ, i = 1, 2, satisfying the equality F(a, bi(a)) = 0, i = 1, 2, for a ∈ (-e-1, 0). However, b1(a) = LW (0, a) and b2(a) = LW (-1, a), i.e., the functions LW (0, -e-x) and LW (-1, -e-x) are continuously differentiable on the interval (1, +∞).

Differentiation of the equality (3.8) leads to

$\left[\frac{\text{d}}{\text{d}z}LW\left(k,z\right)\right]{\mathrm{e}}^{LW\left(k,z\right)}+LW\left(k,z\right){\mathrm{e}}^{LW\left(k,z\right)}\left[\frac{\text{d}}{\text{d}z}LW\left(k,z\right)\right]=1,$

from which

$\frac{\text{d}}{\text{d}z}LW\left(k,z\right)=\frac{1}{{\mathrm{e}}^{LW\left(k,z\right)}+\underset{z}{\underset{⏟}{LW\left(k,z\right){\mathrm{e}}^{LW\left(k,z\right)}}}}.$

Multiplying the numerator and the denominator by LW(k, z) and changing the sign of both sides, we get

$-\frac{\text{d}}{\text{d}z}LW\left(k,z\right)=\frac{-LW\left(k,z\right)}{\underset{z}{\underset{⏟}{LW\left(k,z\right){\mathrm{e}}^{LW\left(k,z\right)}}}+zLW\left(k,z\right)},$

so that

$\left[\frac{\text{d}}{\text{d}z}LW\left(k,z\right)\right]\cdot \left(-z\right)=\frac{-LW\left(k,z\right)}{1+LW\left(k,z\right)}.$(7.1)

Substituting z = -e-x and $-z={\mathrm{e}}^{-x}=\frac{\text{d}z}{\text{d}x}$ into (7.1) we finally obtain

$\left[\frac{\text{d}}{\text{d}z}LW\left(k,z\right)\right]\cdot \frac{\text{d}z}{\text{d}x}=\frac{-LW\left(k,-{\mathrm{e}}^{-x}\right)}{1+LW\left(k,-{\mathrm{e}}^{-x}\right)},$

with the left-hand side being $\frac{\text{d}}{\text{d}x}LW\left(k,-{\mathrm{e}}^{-x}\right)$, which finishes the proof. □

Note 6. By Lemmas 3.1 and 3.2 it is easy to derive the distribution function FY(t) and the density fY(t) of the random variable Y = X-ln (X), where X ∼ Exp(1) and P(X < x) = 1-e-x. Indeed,

${F}_{Y}\left(t\right)=P\left(X-\mathrm{ln}X<t\right)=P\left({x}_{1}<X<{x}_{2}\right)={\mathrm{e}}^{-{x}_{1}}-{\mathrm{e}}^{-{x}_{2}},$

where x1 and x2 are the real numbers guaranteed by Lemma 3.1. We can therefore write

${F}_{Y}\left(t\right)=P\left(-LW\left(0,-{\mathrm{e}}^{-t}\right)<X<-LW\left(-1,-{\mathrm{e}}^{-t}\right)\right)={\mathrm{e}}^{LW\left(0,-{\mathrm{e}}^{-t}\right)}-{\mathrm{e}}^{LW\left(-1,-{\mathrm{e}}^{-t}\right)},$(7.2)

from which for all t ∈ [1, +∞)

${f}_{Y}\left(t\right)=\frac{\text{d}}{\text{d}t}{F}_{Y}\left(t\right)=\frac{\text{d}}{\text{d}t}\left({\mathrm{e}}^{LW\left(0,-{\mathrm{e}}^{-t}\right)}-{\mathrm{e}}^{LW\left(-1,-{\mathrm{e}}^{-t}\right)}\right)=\frac{LW\left(-1,-{\mathrm{e}}^{-t}\right)}{1+LW\left(-1,-{\mathrm{e}}^{-t}\right)}{\mathrm{e}}^{LW\left(-1,-{\mathrm{e}}^{-t}\right)}-\frac{LW\left(0,-{\mathrm{e}}^{-t}\right)}{1+LW\left(0,-{\mathrm{e}}^{-t}\right)}{\mathrm{e}}^{LW\left(0,-{\mathrm{e}}^{-t}\right)},$

being the expression (3.7).

#### Proof of Lemma 4.1

a) We use mathematical induction to show that y(ti) < 0 for all i = 0, 1,…,n, and y(.) is increasing on the intervals [ti, ti+1]. As y(.) is piecewise linear, the assertion will follow.

1° Let i = 0, then y(t0) = LW (0, -e-a) < 0 and for all t ∈ [t0, t1]

$\frac{\text{d}}{\text{d}t}y\left(t\right)=\frac{-LW\left(0,-{\mathrm{e}}^{-a}\right)}{1+LW\left(0,-{\mathrm{e}}^{-a}\right)}>0,$

i.e., y(.) is increasing on the interval [t0, t1].

2° Let i > 0 and $h=\frac{b-a}{n}$. Then $y\left({t}_{i}\right)=y\left({t}_{i-1}\right)+\frac{-y\left({t}_{i-1}\right)}{1+y\left({t}_{i-1}\right)}\cdot h$ and it is evident that y (ti) < 0 if 1+y(ti-1) > h. According to the induction assumption, y(ti-1) ≥ y(t0) > -1, so that it is sufficient to take

$n>\frac{b-a}{1+y\left({t}_{0}\right)}=\frac{b-a}{1+u\left({t}_{0}\right)}.$

According to the induction assumption, y(ti) ≥ y(t0) > -1. Moreover, $y\left({t}_{i+1}\right)=y\left({t}_{i}\right)+\frac{-y\left({t}_{i}\right)}{1+y\left({t}_{i}\right)}\cdot h$ and we have just shown that y(ti) < 0, from which y(ti+1) > y(ti). Therefore, y(.) is increasing on the interval [ti, ti+1].

b) This proof also uses mathematical induction. We will show that for all t ∈ (ti, ti+1], i = 0, 1, …, n-1, it holds that y(t) > u(t).

1° Let i = 0. Then, from the definition of y(t), we have y(t0) = u(t0) and ${\frac{\text{d}y\left(t\right)}{\text{d}t}|}_{t={t}_{0}}=\frac{-u\left({t}_{0}\right)}{1+u\left({t}_{0}\right)}={\frac{\text{d}u\left(t\right)}{\text{d}t}|}_{t={t}_{0}}$. From the strict concavity of u(.) and linearity of y(.) on the interval [t0, t1] we get for all t ∈(t0, t1] that y(t) > u(t).

2° Let i > 0 and assume that y(t) ≤ u(t) for a certain t ∈ (ti, ti+1]. Then, from the induction assumption, we have y(ti) > u(ti). From the continuity of y(.) and u(.) and from the nonlinearity of u(.) there exists T ∈ (ti, ti+1], such that (T, u(T)) = (T, y(T)), i.e., the graphs of the functions u(.) and y(.) intersect and

$\frac{-y\left({t}_{i}\right)}{1+y\left({t}_{i}\right)}={\frac{\text{d}y\left(t\right)}{\text{d}t}|}_{t=T}\le {\frac{\text{d}u\left(t\right)}{\text{d}t}|}_{t=T}.$(7.3)

Note that y(.) is increasing, u(T) = y(T), and the induction assumption implies u(ti) < y(ti). These facts yield that there exists T0 ∈ (ti, T) such that u(T0) = y(ti). Further,

$\frac{-y\left({t}_{i}\right)}{1+y\left({t}_{i}\right)}=\frac{-u\left({T}_{0}\right)}{1+u\left({T}_{0}\right)}={\frac{\text{d}u\left(t\right)}{\text{d}t}|}_{t={T}_{0}}$

and together with (7.3) we obtain

${\frac{\text{d}u\left(t\right)}{\text{d}t}|}_{t={T}_{0}}\le {\frac{\text{d}u\left(t\right)}{\text{d}t}|}_{t=T},$

being a contradiction with the fact that the first derivative is decreasing, because $\frac{{\text{d}}^{2}}{\text{d}{t}^{2}}u\left(t\right)<0$ on [a, b]. □

#### Proof of Lemma 4.2

Since u(.) is increasing, it is sufficient to take x > y and prove the inequality without the absolute value. Lagrange’s mean value theorem assures that there exists t0 ∈ [y, x] such that

$u\left(x\right)-u\left(y\right)={\frac{\text{d}u\left(t\right)}{\text{d}t}|}_{t={t}_{0}}\cdot \left(x-y\right).$(7.4)

Since $\frac{{\text{d}}^{2}}{\text{d}{t}^{2}}u\left(t\right)<0$ for all t ∈ [a, b], then

${\frac{\text{d}u\left(t\right)}{\text{d}t}|}_{t=a}>{\frac{\text{d}u\left(t\right)}{\text{d}t}|}_{t={t}_{0}},$(7.5)

so that, using Lemma 3.2, we get

${\frac{\text{d}u\left(t\right)}{\text{d}t}|}_{t=a}=\frac{-u\left(a\right)}{1+u\left(a\right)}=c.$(7.6)

Finally, from (7.4), (7.5) and (7.6), we get the assertion of Lemma 4.2. □

#### Proof of Theorem 4.1

Let us denote by di the difference between the grid points (knots), i.e., di = y (ti) - u (ti). Using Lagrange’s mean value theorem, we have, for i ≥ 0,

${d}_{i+1}=y\left({t}_{i+1}\right)-u\left({t}_{i+1}\right)=y\left({t}_{i}\right)+\frac{-y\left({t}_{i}\right)}{1+y\left({t}_{i}\right)}h-\left(u\left({t}_{i}\right)+{\frac{\text{d}u\left(t\right)}{\text{d}t}|}_{t=T}\cdot h\right)={d}_{i}+\left(\frac{u\left(T\right)}{1+u\left(T\right)}-\frac{y\left({t}_{i}\right)}{1+y\left({t}_{i}\right)}\right)h={d}_{i}+\frac{u\left(T\right)-y\left({t}_{i}\right)}{\left(1+y\left({t}_{i}\right)\right)\left(1+u\left(T\right)\right)}h,\mathit{ }T\in \left[{t}_{i},{t}_{i+1}\right].$(7.7)

For i = 0 we use the facts that y(t0) = u(t0), d0 = 0, u(.) is increasing, and u(t0) > -1; hence

${d}_{1}=\frac{u\left(T\right)-u\left({t}_{0}\right)}{\left(1+u\left({t}_{0}\right)\right)\left(1+u\left(T\right)\right)}h<\frac{u\left(T\right)-u\left({t}_{0}\right)}{{\left(1+u\left({t}_{0}\right)\right)}^{2}}h.$(7.8)

Now, again using Lagrange’s mean value theorem, we get

${d}_{1}\le \frac{\frac{\text{d}u\left(s\right)}{\text{d}s}\left(T-{t}_{0}\right)}{{\left(1+u\left({t}_{0}\right)\right)}^{2}}h=\frac{-u\left(s\right)}{1+u\left(s\right)}\cdot \frac{T-{t}_{0}}{{\left(1+u\left({t}_{0}\right)\right)}^{2}}h\le \frac{-u\left({t}_{0}\right)}{{\left(1+u\left({t}_{0}\right)\right)}^{3}}{h}^{2}=\frac{-u\left({t}_{0}\right)}{{\left(1+u\left({t}_{0}\right)\right)}^{3}}{\left(\frac{b-a}{n}\right)}^{2},\mathit{ }s\in \left[{t}_{0},T\right].$

Let 0 < i < n and u(T) - y(ti) > 0. Then, using the fact that both functions u(.) and y(.) are increasing, y(t) ≥ u(t) > -1 for all t ∈ [a, b] (this follows from Lemma 4.1) and formula (7.7), we obtain that

${d}_{i+1}\le {d}_{i}+\frac{u\left(T\right)-y\left({t}_{i}\right)}{\left(1+y\left({t}_{0}\right)\right)\left(1+u\left({t}_{0}\right)\right)}h={d}_{i}+\frac{u\left(T\right)-y\left({t}_{i}\right)}{{\left(1+u\left({t}_{0}\right)\right)}^{2}}h$

and by the inequality

$u\left({t}_{i+1}\right)-u\left({t}_{i}\right)<c\left({t}_{i+1}-{t}_{i}\right)=ch,$

which follows directly from Lemma 4.2, we have

${d}_{i+1}<{d}_{i}+B{h}^{2},$(7.9)

where $B=\frac{c}{{\left(1+u\left({t}_{0}\right)\right)}^{2}}$ is a positive constant. If 0 < i < n and u(T) - y(ti) ≤ 0, then we obtain (7.9) immediately, since by (7.7): di+1 ≤ di < di + Bh2.

An iterative use of (7.9) leads to

${d}_{i+1}<\left[{d}_{i-1}+B{h}^{2}\right]+B{h}^{2}<\mathrm{\cdots }<{d}_{1}+iB{h}^{2}\le {d}_{1}+nB{\left(\frac{b-a}{n}\right)}^{2}={d}_{1}+B\frac{{\left(b-a\right)}^{2}}{n}.$(7.10)

From (7.8) it follows that ${lim}_{n\to +\mathrm{\infty }}{d}_{1}=0$, which ensures that the last term in (7.10) converges to 0 for n→ + ∞. Since it does not depend on i, we have

$\mathrm{\forall }\epsilon >0\mathit{ }\mathrm{\exists }{n}_{0}\mathit{ }\mathrm{\forall }n>{n}_{0}\mathit{ }\mathrm{\forall }i\in \left\{0,1,\mathrm{\dots },n\right\}\mathit{ }\text{it holds that}\mathit{ }{d}_{i}<\epsilon .$(7.11)

The function y(.) is linear on each interval [ti, ti+1], i = 0, 1,…, n, and u(.) is concave. Therefore, the function y(t) - u(t) is convex on each [ti, ti+1], and since it is continuous, it can be shown easily, that its supremum is attained on one of the grid points ti or ti+1. Lemma 4.1 ensures that y(t) ≥ u(t) for all t ∈ [a, b]. It follows that

$\underset{t\in \left[a,b\right]}{sup}\left(y\left(t\right)-u\left(t\right)\right)=\underset{t\in \left[a,b\right]}{max}\left(y\left(t\right)-u\left(t\right)\right)=\underset{i\in \left\{0,1,\mathrm{\dots },n-1\right\}}{max}\left[\underset{t\in \left[{t}_{i},{t}_{i+1}\right]}{max}\left(y\left(t\right)-u\left(t\right)\right)\right]=\underset{i\in \left\{0,1,\mathrm{\dots },n-1\right\}}{max}max\left\{{d}_{i},{d}_{i+1}\right\}=\underset{i\in \left\{0,1,\mathrm{\dots },n\right\}}{max}{d}_{i}.$

Finally, using (7.11) we get

$\mathrm{\forall }\epsilon >0\mathit{ }\mathrm{\exists }{n}_{0}\mathit{ }\mathrm{\forall }n>{n}_{0}\mathit{ }\underset{t\in \left[a,b\right]}{sup}\left(y\left(t\right)-u\left(t\right)\right)<\epsilon ,$

which proves the uniform convergence. □

#### Proof of Lemma 4.3

a) The proof follows the approach used in the proof of Lemma 4.1 a). In step 1∘ it is sufficient to use the fact that

$−LW(−1,−e−a)1+LW(−1,−e−a)<0,$

and in step 2∘ the fact that

$−y(ti)1+y(ti)<0.$

b) We proceed analogously to the proof of Lemma 4.1 b). In step 1∘, the only difference is that we use the strict convexity of u(.). In step 2∘ we can show easily that ${\frac{\text{d}u\left(t\right)}{\text{d}t}|}_{t={T}_{0}}\ge {\frac{\text{d}u\left(t\right)}{\text{d}t}|}_{t=T}$, which is the contradiction with the growth of the first derivative, because $\frac{{\text{d}}^{2}}{\text{d}{t}^{2}}u\left(t\right)>0$ for all t ∈ [a, b]. □

#### Proof of Lemma 4.4

Since u(.) is decreasing, it is enough to take y < x and prove that u(y) - u(x) < c(x - y). Similarly as in the proof of Lemma 4.2 we get

$\text{there exists}{t}_{0}\in \left[y,x\right]:u\left(y\right)-u\left(x\right)={\frac{\text{d}u\left(t\right)}{\text{d}t}|}_{t={t}_{0}}\cdot \left(y-x\right).$(7.12)

Since $\frac{{\text{d}}^{2}}{\text{d}{t}^{2}}u\left(t\right)>0$ for all t ∈ [a, b], it is true that

${\frac{\text{d}u\left(t\right)}{\text{d}t}|}_{t=a}<{\frac{\text{d}u\left(t\right)}{\text{d}t}|}_{t={t}_{0}}.$(7.13)

Moreover, according to Lemma 3.2 we have

$\frac{\text{d}u\left(t\right)}{\text{d}t}{|}_{t=a}=\frac{-u\left(a\right)}{1+u\left(a\right)}=:-c.$(7.14)

Substituting (7.13) and (7.14) into (7.12), we finally get

$u\left(y\right)-u\left(x\right)=-{\frac{\text{d}u\left(t\right)}{\text{d}t}|}_{t={t}_{0}}\cdot \left(x-y\right)<-{\frac{\text{d}u\left(t\right)}{\text{d}t}|}_{t=a}\cdot \left(x-y\right)=c\left(x-y\right),$

which completes the proof. □

#### Proof of Theorem 4.2

In this proof we again denote by di the difference between the grid points (knots); however, now di = u(ti)-y(ti). Then analogously to the proof of Theorem 4.1 we have

${d}_{i+1}={d}_{i}+\frac{y\left({t}_{i}\right)-u\left(T\right)}{\left(1+y\left({t}_{i}\right)\right)\left(1+u\left(T\right)\right)}h,T\in \left[{t}_{i},{t}_{i+1}\right],i\ge 0.$(7.15)

If i = 0, then y(t0) = u(t0) and d0 = 0. Further, u(.) is decreasing and u(t) < -1 for all t ∈ [a, b]. Similarly to the proof of Theorem 4.1 we have

${d}_{1}=\frac{u\left({t}_{0}\right)-u\left(T\right)}{\left(1+u\left({t}_{0}\right)\right)\left(1+u\left(T\right)\right)}h\le \frac{u\left({t}_{0}\right)-u\left(T\right)}{{\left(1+u\left({t}_{0}\right)\right)}^{2}}h\le \frac{u\left({t}_{0}\right)}{{\left(1+u\left({t}_{0}\right)\right)}^{3}}{\left(\frac{b-a}{n}\right)}^{2}.$(7.16)

Let 0 < i < n and y(ti) - u(T) > 0. Then, using the fact that both functions u(.) and y(.) are decreasing, y(t) < u(t) < -1 for all t ∈ [a, b] (see Lemma 4.3), and (7.15), similarly to the proof of Theorem 4.1 we get the following inequality:

${d}_{i+1}\le {d}_{i}+\frac{y\left({t}_{i}\right)-u\left({t}_{i+1}\right)}{{\left(1+u\left({t}_{0}\right)\right)}^{2}}h.$

Finally, we use the fact that y(ti) < u(ti) together with the inequality u(ti) - u(ti+1) < c(ti+1 - ti) = ch, which follows directly from Lemma 4.4, to get

${d}_{i+1}<{d}_{i}+\frac{c{h}^{2}}{{\left(1+u\left({t}_{0}\right)\right)}^{2}}={d}_{i}+B{h}^{2},$

where $B=\frac{c}{{\left(1+u\left({t}_{0}\right)\right)}^{2}}>0$ is a positive constant. Further, we follow the same approach as in the proof of Theorem 4.1. □

## About the article

E-mail Milan.Stehlik@uv.cl

Accepted: 2017-09-22

Published Online: 2018-10-20

Published in Print: 2018-10-25

Communicated by Gejza Wimmer

Citation Information: Mathematica Slovaca, Volume 68, Issue 5, Pages 1149–1172, ISSN (Online) 1337-2211, ISSN (Print) 0139-9918,


© 2018 Mathematical Institute Slovak Academy of Sciences.