Open Access. Published by De Gruyter Open, April 23, 2016.

Refinement of the Jensen integral inequality

Silvestru Sever Dragomir, Muhammad Adil Khan and Addisalem Abathun
From the journal Open Mathematics

Abstract

In this paper we give a refinement of Jensen’s integral inequality and its generalization for linear functionals. We also present some applications in Information Theory.

MSC 2010: 26D15; 94A17

1 Introduction

Let C be a convex subset of the linear space X and let f be a convex function on C. If p = (p_1, ..., p_n) is a probability sequence and x = (x_1, ..., x_n) ∈ C^n, then

(1) $f\left(\sum_{i=1}^{n} p_i x_i\right) \le \sum_{i=1}^{n} p_i f(x_i)$

is well known in the literature as Jensen’s inequality.

The Lebesgue integral version of the Jensen inequality is given below:

Theorem 1.1

Let (Ω, Λ, μ) be a measure space with 0 < μ(Ω) < ∞ and let ϕ : I → ℝ be a convex function defined on an open interval I in ℝ. If f : Ω → I is such that f, ϕ∘f ∈ L(Ω, Λ, μ), then

(2) $\phi\left(\frac{1}{\mu(\Omega)}\int_{\Omega} f\,d\mu\right) \le \frac{1}{\mu(\Omega)}\int_{\Omega} \phi(f)\,d\mu.$

In the case when ϕ is strictly convex on I, equality holds in (2) if and only if f is constant almost everywhere on Ω.

The Jensen inequality for convex functions plays a crucial role in the Theory of Inequalities, since other inequalities, such as the arithmetic mean-geometric mean inequality, the Hölder and Minkowski inequalities, the Ky Fan inequality, etc., can be obtained as particular cases of it.

There is an extensive literature devoted to Jensen's inequality concerning different generalizations, refinements, counterparts and converse results; see, for example, [1-9].

In this paper we give a refinement of Jensen's integral inequality and its generalization for linear functionals. We also present some applications in Information Theory, for example for the Kullback-Leibler, total variation and Karl Pearson χ²-divergences.

2 Main results

Let (Ω, Λ, μ) be a measure space with 0 < μ(Ω) < ∞ and let L(Ω, Λ, μ) = {f : Ω → ℝ : f is μ-measurable and ∫_Ω f(t) dμ(t) < ∞} be the corresponding Lebesgue space. Consider the set 𝔖 = {ω ∈ Λ : μ(ω) ≠ 0 and μ(ω̄) = μ(Ω \ ω) ≠ 0}, and let ϕ : (a, b) → ℝ be a convex function defined on an open interval (a, b). If f : Ω → (a, b) is such that f, ϕ∘f ∈ L(Ω, Λ, μ), then for any set ω ∈ 𝔖 define the functional

(3) $\digamma(\phi, f; \omega) = \frac{\mu(\omega)}{\mu(\Omega)}\,\phi\left(\frac{1}{\mu(\omega)}\int_{\omega} f\,d\mu\right) + \frac{\mu(\bar{\omega})}{\mu(\Omega)}\,\phi\left(\frac{1}{\mu(\bar{\omega})}\int_{\bar{\omega}} f\,d\mu\right).$

We give the following refinement of Jensen’s inequality.

Theorem 2.1

Let (Ω, Λ, μ) be a measure space with 0 < μ(Ω) < ∞ and let ϕ : (a, b) → ℝ be a convex function defined on an open interval (a, b). If f : Ω → (a, b) is such that f, ϕ∘f ∈ L(Ω, Λ, μ), then for any set ω ∈ 𝔖 we have

(4) $\phi\left(\frac{1}{\mu(\Omega)}\int_{\Omega} f\,d\mu\right) \le \digamma(\phi, f; \omega) \le \frac{1}{\mu(\Omega)}\int_{\Omega} \phi(f)\,d\mu.$
Proof

For any ω ∈ 𝔖 we have

$\phi\left(\frac{1}{\mu(\Omega)}\int_{\Omega} f\,d\mu\right) = \phi\left[\frac{\mu(\omega)}{\mu(\Omega)}\left(\frac{1}{\mu(\omega)}\int_{\omega} f\,d\mu\right) + \frac{\mu(\bar{\omega})}{\mu(\Omega)}\left(\frac{1}{\mu(\bar{\omega})}\int_{\bar{\omega}} f\,d\mu\right)\right].$

Therefore by the convexity of the function ϕ we get

(5) $\phi\left(\frac{1}{\mu(\Omega)}\int_{\Omega} f\,d\mu\right) \le \frac{\mu(\omega)}{\mu(\Omega)}\,\phi\left(\frac{1}{\mu(\omega)}\int_{\omega} f\,d\mu\right) + \frac{\mu(\bar{\omega})}{\mu(\Omega)}\,\phi\left(\frac{1}{\mu(\bar{\omega})}\int_{\bar{\omega}} f\,d\mu\right) = \digamma(\phi, f; \omega).$

Also, for any ω ∈ 𝔖, by the Jensen inequality we have

(6) $\digamma(\phi, f; \omega) = \frac{\mu(\omega)}{\mu(\Omega)}\,\phi\left(\frac{1}{\mu(\omega)}\int_{\omega} f\,d\mu\right) + \frac{\mu(\bar{\omega})}{\mu(\Omega)}\,\phi\left(\frac{1}{\mu(\bar{\omega})}\int_{\bar{\omega}} f\,d\mu\right) \le \frac{1}{\mu(\Omega)}\int_{\omega} \phi(f)\,d\mu + \frac{1}{\mu(\Omega)}\int_{\bar{\omega}} \phi(f)\,d\mu = \frac{1}{\mu(\Omega)}\int_{\Omega} \phi(f)\,d\mu.$

From (5) and (6) we have (4). □
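Before moving on, here is a minimal numerical sketch (ours, not part of the original argument) that checks (4) on a finite measure space with the counting measure; the choice ϕ(x) = x² and the random f are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

# Finite measure space: Omega = {0, ..., n-1} with the counting measure,
# so mu(Omega) = n and integrals are plain sums.
n = 10
f = rng.uniform(-1.0, 1.0, size=n)   # f : Omega -> (a, b) = (-1, 1)
phi = lambda x: x ** 2               # a convex function on (-1, 1)

def F(mask):
    """The functional (3) for omega given as a boolean mask."""
    m, mbar = mask.sum(), n - mask.sum()
    return (m / n) * phi(f[mask].mean()) + (mbar / n) * phi(f[~mask].mean())

lhs = phi(f.mean())                  # phi of the average of f
rhs = phi(f).mean()                  # average of phi(f)

# Check (4) for every omega whose complement is also nonempty.
for k in range(1, 2 ** n - 1):
    mask = np.array([(k >> i) & 1 for i in range(n)], dtype=bool)
    mid = F(mask)
    assert lhs <= mid + 1e-12 and mid <= rhs + 1e-12

print(f"phi(mean f) = {lhs:.6f} <= F(phi, f; omega) <= mean phi(f) = {rhs:.6f}")
```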

Remark 2.2

We observe that the inequality (4) can be written in the equivalent form

$\inf_{\omega \in \mathfrak{S}} \digamma(\phi, f; \omega) \ge \phi\left(\frac{1}{\mu(\Omega)}\int_{\Omega} f\,d\mu\right)$

and

$\frac{1}{\mu(\Omega)}\int_{\Omega} \phi(f)\,d\mu \ge \sup_{\omega \in \mathfrak{S}} \digamma(\phi, f; \omega).$
Remark 2.3

If we also allow the choices ω = ∅ or ω = Ω (with the convention that the term in (3) corresponding to a set of measure zero is omitted), then Ϝ(ϕ, f; ω) is equal to the left-hand side of (2), and in this case (5) holds trivially.

In particular, the Riemann integral version can be stated as follows.

Corollary 2.4

Let ϕ : [a, b] → ℝ be a convex function defined on the interval [a, b]. If f : [c, d] → [a, b] and p : [c, d] → ℝ₊ are such that f, fp and (ϕ∘f)p are all integrable on [c, d], then we have

$\inf_{x \in (c,d)}\left[\frac{x-c}{d-c}\,\phi\left(\frac{1}{x-c}\int_{c}^{x} p(t)f(t)\,dt\right) + \frac{d-x}{d-c}\,\phi\left(\frac{1}{d-x}\int_{x}^{d} p(t)f(t)\,dt\right)\right] \ge \phi\left(\frac{1}{d-c}\int_{c}^{d} p(t)f(t)\,dt\right),$

$\frac{1}{d-c}\int_{c}^{d} p(t)\,\phi(f(t))\,dt \ge \sup_{x \in (c,d)}\left[\frac{x-c}{d-c}\,\phi\left(\frac{1}{x-c}\int_{c}^{x} p(t)f(t)\,dt\right) + \frac{d-x}{d-c}\,\phi\left(\frac{1}{d-x}\int_{x}^{d} p(t)f(t)\,dt\right)\right].$

As a simple consequence of Theorem 2.1 we can obtain a refinement of the Hermite-Hadamard inequality:

Corollary 2.5

If ϕ : [a, b] → ℝ is a convex function defined on the interval [a, b], then for any [c, d] ⊆ [a, b] we have

$\phi\left(\frac{d+c}{2}\right) \le \inf_{x \in [c,d]}\left[\frac{x-c}{d-c}\,\phi\left(\frac{x+c}{2}\right) + \frac{d-x}{d-c}\,\phi\left(\frac{d+x}{2}\right)\right],$

$\frac{1}{d-c}\int_{c}^{d} \phi(t)\,dt \ge \sup_{x \in [c,d]}\left[\frac{x-c}{d-c}\,\phi\left(\frac{x+c}{2}\right) + \frac{d-x}{d-c}\,\phi\left(\frac{d+x}{2}\right)\right].$
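A quick numerical sketch of Corollary 2.5 (our illustration; ϕ = exp and [c, d] = [0, 1] are arbitrary choices):

```python
import numpy as np

phi = np.exp                # convex
c, d = 0.0, 1.0

xs = np.linspace(c, d, 1001)
bracket = ((xs - c) / (d - c)) * phi((xs + c) / 2) + ((d - xs) / (d - c)) * phi((d + xs) / 2)

midpoint = phi((c + d) / 2)
ts = np.linspace(c, d, 200001)
integral_mean = phi(ts).mean()      # ~ (1/(d-c)) * integral of phi over [c, d]

assert midpoint <= bracket.min() + 1e-12       # first inequality of Corollary 2.5
assert bracket.max() <= integral_mean + 1e-6   # second inequality of Corollary 2.5
print(midpoint, bracket.min(), bracket.max(), integral_mean)
```

Note that the bracket equals ϕ((d + c)/2) at x = c and at x = d, so the infimum in the first inequality is attained at the endpoints.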

3 Further generalization

Let E be a nonempty set, 𝔄 be an algebra of subsets of E, and L be a linear class of real-valued functions f : E → ℝ having the properties:

L1: f, g ∈ L ⇒ (αf + βg) ∈ L for all α, β ∈ ℝ;

L2: 1 ∈ L, i.e., if f(t) = 1 for all t ∈ E, then f ∈ L;

L3: f ∈ L, E₁ ∈ 𝔄 ⇒ f·χ_{E₁} ∈ L,

where χ_{E₁} is the indicator function of E₁. It follows from L2 and L3 that χ_{E₁} ∈ L for every E₁ ∈ 𝔄.

A positive isotonic linear functional A : L → ℝ is a functional satisfying the following properties:

A1: A(αf + βg) = αA(f) + βA(g) for f, g ∈ L and α, β ∈ ℝ;

A2: f ∈ L, f(t) ≥ 0 on E ⇒ A(f) ≥ 0.

Let A be a fixed positive isotonic linear functional with A(1) = 1. It follows from L3 that for every E₁ ∈ 𝔄 such that A(χ_{E₁}) > 0, the functional A_{E₁} is well defined by $A_{E_1}(f) = \frac{A(f \cdot \chi_{E_1})}{A(\chi_{E_1})}$ for all f ∈ L. Furthermore, we observe that

$A(\chi_{E_1}) + A(\chi_{E \setminus E_1}) = 1,$

(7) $A(f) = A(f \cdot \chi_{E_1}) + A(f \cdot \chi_{E \setminus E_1}).$

Jessen (see [10, p. 47]) gave the following generalization of Jensen's inequality for convex functions.

Theorem 3.1

Let L satisfy L1 and L2 on a nonempty set E, and let ϕ : [a, b] → ℝ be a continuous convex function. If A is a positive linear functional with A(1) = 1, then for all f ∈ L such that ϕ(f) ∈ L we have A(f) ∈ [a, b] and

(8) $\phi(A(f)) \le A(\phi(f)).$

The following refinement of (8) holds.

Theorem 3.2

Under the above assumptions, if ϕ : [a, b] → ℝ is a continuous convex function, then

(9) $\phi(A(f)) \le \bar{D}(A, f, \phi; E_1) \le A(\phi(f)),$

where

$\bar{D}(A, f, \phi; E_1) = A(\chi_{E_1})\,\phi\left(\frac{A(f \cdot \chi_{E_1})}{A(\chi_{E_1})}\right) + A(\chi_{E \setminus E_1})\,\phi\left(\frac{A(f \cdot \chi_{E \setminus E_1})}{A(\chi_{E \setminus E_1})}\right)$

for every nonempty set E₁ ∈ 𝔄 such that 0 < A(χ_{E₁}) < 1.

Proof

By definition,

$\bar{D}(A, f, \phi; E_1) = A(\chi_{E_1})\,\phi\left(\frac{A(f \cdot \chi_{E_1})}{A(\chi_{E_1})}\right) + A(\chi_{E \setminus E_1})\,\phi\left(\frac{A(f \cdot \chi_{E \setminus E_1})}{A(\chi_{E \setminus E_1})}\right) = A(\chi_{E_1})\,\phi(A_{E_1}(f)) + A(\chi_{E \setminus E_1})\,\phi(A_{E \setminus E_1}(f)).$

Applying the inequality (8) to the functionals A_{E₁} and A_{E∖E₁} we obtain

(10) $\bar{D}(A, f, \phi; E_1) \le A(\phi(f) \cdot \chi_{E_1}) + A(\phi(f) \cdot \chi_{E \setminus E_1}) = A(\phi(f)).$

This proves the second inequality in (9).

The first inequality in (9) follows from the definition of a convex function and the identity (7). □
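To see Theorem 3.2 in action, here is a minimal sketch (ours) with the discrete functional A(g) = Σᵢ wᵢ g(i) on E = {0, ..., n − 1}, which is linear, positive and satisfies A(1) = 1; the weights, the values of f and the choice ϕ(x) = −log x are arbitrary.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(1)

n = 8
w = rng.dirichlet(np.ones(n))           # w_i > 0, sum w_i = 1, hence A(1) = 1
fvals = rng.uniform(0.5, 2.0, size=n)   # values of f on E
phi = lambda x: -np.log(x)              # continuous convex on [0.5, 2]

A = lambda g: float(np.dot(w, g))       # positive isotonic linear functional

for r in range(1, n):                   # every E1 with 0 < A(chi_E1) < 1
    for E1 in combinations(range(n), r):
        chi = np.zeros(n)
        chi[list(E1)] = 1.0             # indicator function of E1
        a1, a2 = A(chi), A(1.0 - chi)
        D = a1 * phi(A(fvals * chi) / a1) + a2 * phi(A(fvals * (1.0 - chi)) / a2)
        assert phi(A(fvals)) <= D + 1e-12 and D <= A(phi(fvals)) + 1e-12

print("phi(A(f)) <= D(A, f, phi; E1) <= A(phi(f)) for every E1: OK")
```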

4 Applications for Csiszár divergence measures

Let (Ω, Λ, μ) be a probability measure space. Consider the set of all density functions with respect to μ, S ≔ {p | p : Ω → ℝ, p(s) > 0, ∫_Ω p(s) dμ(s) = 1}.

Csiszár introduced the concept of f-divergence, for a convex function f : (0, ∞) → (−∞, ∞) (cf. [11]; see also [12]), by

$I_f(q, p) = \int_{\Omega} p(s)\, f\left(\frac{q(s)}{p(s)}\right) d\mu(s), \qquad p, q \in S.$
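For a finite Ω with the counting measure, I_f reduces to a sum; a minimal sketch (our code, with arbitrary example distributions):

```python
import numpy as np

def f_divergence(f, q, p):
    """Discrete Csiszar f-divergence: I_f(q, p) = sum_s p(s) * f(q(s) / p(s)).

    f should be convex on (0, infinity); p and q are strictly positive
    probability vectors of the same length.
    """
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    return float(np.sum(p * f(q / p)))

p = np.array([0.2, 0.3, 0.5])
q = np.array([0.4, 0.4, 0.2])

print(f_divergence(lambda u: np.abs(u - 1.0), q, p))   # total variation V(q, p)
print(f_divergence(lambda u: u * np.log(u), q, p))     # Kullback-Leibler D_KL(q, p)
print(f_divergence(lambda u: (u - 1.0) ** 2, q, p))    # Karl Pearson chi^2-divergence
```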

By appropriately choosing the convex function f, various divergences can be derived. We list some important f-divergences that play a significant role in Information Theory and Statistics.

(i) The class of χ-divergences: The f-divergences in this class are generated by the family of functions

$f_{\alpha}(u) = |u - 1|^{\alpha}, \qquad u \ge 0 \ \text{and} \ \alpha \ge 1,$

for which

$I_{f_\alpha}(q, p) = \int_{\Omega} p^{1-\alpha}(s)\,\left|q(s) - p(s)\right|^{\alpha}\,d\mu(s).$

For α = 1 it gives the total variation distance

$V(q, p) = \int_{\Omega} \left|q(s) - p(s)\right|\,d\mu(s).$

For α = 2 it gives the Karl Pearson χ²-divergence,

$I_{\chi^2}(q, p) = \int_{\Omega} \frac{\left[q(s) - p(s)\right]^2}{p(s)}\,d\mu(s).$

(ii) α-order Rényi entropy: For α > 1 let

$f(t) = t^{\alpha}, \qquad t > 0.$

Then I_f gives the α-order entropy

$D_{\alpha}(q, p) = \int_{\Omega} q^{\alpha}(s)\, p^{1-\alpha}(s)\,d\mu(s).$

(iii) Harmonic distance: Let

$f(t) = \frac{2t}{1+t}, \qquad t > 0.$

Then I_f gives the harmonic distance

$D_H(q, p) = \int_{\Omega} \frac{2\,p(s)\,q(s)}{p(s) + q(s)}\,d\mu(s).$

(iv) Kullback-Leibler divergence: Let

$f(t) = t \log t, \qquad t > 0.$

Then the f-divergence functional gives rise to the Kullback-Leibler distance [13]

$D_{KL}(q, p) = \int_{\Omega} q(s) \log\left(\frac{q(s)}{p(s)}\right) d\mu(s).$

A one-parameter generalization of the Kullback-Leibler relative information [13] was studied in a different way by Cressie and Read [14].

(v) Jeffreys divergence: Let

$f(t) = (t - 1)\ln t, \qquad t > 0.$

Then the f-divergence functional gives the Jeffreys divergence

$J(q, p) = \int_{\Omega} \left(q(s) - p(s)\right) \ln\left(\frac{q(s)}{p(s)}\right) d\mu(s).$

(vi) The Dichotomy class: This class is generated by the family of functions g_α : (0, ∞) → ℝ,

(11) $g_{\alpha}(u) = \begin{cases} u - 1 - \log u, & \alpha = 0; \\ \frac{1}{\alpha(1-\alpha)}\left[\alpha u + 1 - \alpha - u^{\alpha}\right], & \alpha \in \mathbb{R} \setminus \{0, 1\}; \\ 1 - u + u \log u, & \alpha = 1. \end{cases}$

This class gives, for particular values of α, some important divergences. For instance, for α = 1/2 it provides a distance, namely, the Hellinger distance.

There are various other divergences used in Information Theory and Statistics, such as Arimoto-type divergences, Matusita's divergence, the Puri-Vincze divergences, etc. (cf. [15], [16]). An application of Theorem 1.1 is the following result, given by Csiszár and Körner (cf. [17]).

Theorem 4.1

Let f : [0, ∞) → ℝ be a convex function and let p, q ∈ S. Then the following inequality is valid:

(12) $I_f(q, p) \ge f(1).$
Theorem 4.2

Let f : [0, ∞) → ℝ be a convex function. Then for any p, q ∈ S and any ω ∈ 𝔖 we have

(13) $I_f(q, p) \ge \mu(\omega)\, f\left(\frac{1}{\mu(\omega)}\int_{\omega} q(s)\,d\mu(s)\right) + \mu(\bar{\omega})\, f\left(\frac{1}{\mu(\bar{\omega})}\int_{\bar{\omega}} q(s)\,d\mu(s)\right) \ge f(1).$
Proof

Applying Theorem 2.1 with ϕ = f, with q(s)/p(s) in place of f(s) and with p(s) dμ(s) in place of dμ(s), we deduce (13). □
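A numerical sanity check of (13) (ours). One note on the reading: since the proof substitutes p(s) dμ(s) for dμ(s), the weight of ω in the middle term of (13) is taken below in the substituted measure, i.e. as the p-mass ∫_ω p(s) dμ(s); this is our interpretation, and the choices of f, p, q are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(2)

# Finite probability space; p and q hold the point masses of p*dmu and q*dmu.
n = 8
p = rng.dirichlet(np.ones(n))
q = rng.dirichlet(np.ones(n))
f = lambda t: t * np.log(t)          # convex on (0, infinity), f(1) = 0

I_f = float(np.sum(p * f(q / p)))    # discrete I_f(q, p)

for k in range(1, 2 ** n - 1):       # every omega with nontrivial complement
    mask = np.array([(k >> i) & 1 for i in range(n)], dtype=bool)
    P = p[mask].sum()                # mass of omega under the substituted measure
    Q = q[mask].sum()                # integral of q over omega
    middle = P * f(Q / P) + (1 - P) * f((1 - Q) / (1 - P))
    assert I_f + 1e-12 >= middle >= f(1.0) - 1e-12

print("(13) holds for every omega: OK")
```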

Proposition 4.3

Let p, q ∈ S. Then we have

(14) $V(q, p) \ge 2 \sup_{\omega \in \mathfrak{S}}\left|\int_{\omega} q(s)\,d\mu(s) - \mu(\omega)\right| \;(\ge 0).$
Proof

By putting f(x) = |x − 1|, x ≥ 0, in Theorem 4.2 and noting that ∫_ω̄ q(s) dμ(s) − μ(ω̄) = −(∫_ω q(s) dμ(s) − μ(ω)), we get (14). □
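In the discrete setting of the previous sketch (and with the same reading of μ(ω) as the p-mass of ω), the bound (14) can be checked directly; in fact the supremum is attained at ω = {s : q(s) > p(s)}, where (14) holds with equality.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(3)

n = 6
p = rng.dirichlet(np.ones(n))
q = rng.dirichlet(np.ones(n))

V = float(np.sum(np.abs(q - p)))     # total variation distance V(q, p)

# sup over proper nonempty subsets omega of |Q(omega) - P(omega)|
sup = max(abs(q[list(w)].sum() - p[list(w)].sum())
          for r in range(1, n) for w in combinations(range(n), r))

assert V >= 2 * sup - 1e-12          # inequality (14)
print(V, 2 * sup)                    # the two values coincide here
```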

Proposition 4.4

For any p, q ∈ S,

(15) $I_{\chi^2}(q, p) \ge \sup_{\omega \in \mathfrak{S}}\left\{\frac{\left(\int_{\omega} q(s)\,d\mu(s) - \mu(\omega)\right)^2}{\mu(\omega)\left(1 - \mu(\omega)\right)}\right\} \ge 4 \sup_{\omega \in \mathfrak{S}}\left\{\left(\int_{\omega} q(s)\,d\mu(s) - \mu(\omega)\right)^2\right\} \;(\ge 0).$
Proof

By making use of the function f(t) = (t − 1)², t > 0, in Theorem 4.2 we get

$\int_{\Omega} p(s)\left(\frac{q(s)}{p(s)} - 1\right)^2 d\mu(s) \ge \sup_{\omega \in \mathfrak{S}}\left\{\mu(\omega)\left(\frac{1}{\mu(\omega)}\int_{\omega} q(s)\,d\mu(s) - 1\right)^2 + \mu(\bar{\omega})\left(\frac{1}{\mu(\bar{\omega})}\int_{\bar{\omega}} q(s)\,d\mu(s) - 1\right)^2\right\} \;(\ge 0),$

i.e.

$\int_{\Omega} \frac{\left(q(s) - p(s)\right)^2}{p(s)}\,d\mu(s) \ge \sup_{\omega \in \mathfrak{S}}\left\{\frac{\left(\int_{\omega} q(s)\,d\mu(s) - \mu(\omega)\right)^2}{\mu(\omega)\left(1 - \mu(\omega)\right)}\right\} \;(\ge 0).$

Since, by the arithmetic-geometric mean inequality, we have

$\mu(\omega)\left(1 - \mu(\omega)\right) \le \frac{1}{4}\left[\mu(\omega) + \left(1 - \mu(\omega)\right)\right]^2 = \frac{1}{4},$

therefore

$\frac{\left(\int_{\omega} q(s)\,d\mu(s) - \mu(\omega)\right)^2}{\mu(\omega)\left(1 - \mu(\omega)\right)} \ge 4\left(\int_{\omega} q(s)\,d\mu(s) - \mu(\omega)\right)^2 \;(\ge 0). \qquad \square$
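A numerical check of (15), in the same discrete setting and with the same reading of μ(ω) as in the sketch after Theorem 4.2 (our code):

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(4)

n = 6
p = rng.dirichlet(np.ones(n))
q = rng.dirichlet(np.ones(n))

chi2 = float(np.sum((q - p) ** 2 / p))   # Karl Pearson chi^2-divergence

sup1 = sup2 = 0.0
for r in range(1, n):
    for w in combinations(range(n), r):
        P, Q = p[list(w)].sum(), q[list(w)].sum()
        sup1 = max(sup1, (Q - P) ** 2 / (P * (1 - P)))
        sup2 = max(sup2, (Q - P) ** 2)

assert chi2 >= sup1 - 1e-12 >= 4 * sup2 - 1e-12   # the two inequalities of (15)
print(chi2, sup1, 4 * sup2)
```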

Proposition 4.5

For any p, q ∈ S and any ω ∈ 𝔖, we have:

(16) $D_{KL}(q, p) \ge \ln\left[\left(\frac{1 - \int_{\omega} q(s)\,d\mu(s)}{1 - \mu(\omega)}\right)^{1 - \int_{\omega} q(s)\,d\mu(s)} \cdot \left(\frac{\int_{\omega} q(s)\,d\mu(s)}{\mu(\omega)}\right)^{\int_{\omega} q(s)\,d\mu(s)}\right] \;(\ge 0).$
Proof

By putting f(t) = t ln t in Theorem 4.2 one can get the first inequality in (16).

To prove the second inequality, we utilize the inequality between the geometric mean and harmonic mean,

$x^{\alpha} y^{1-\alpha} \ge \frac{1}{\frac{\alpha}{x} + \frac{1-\alpha}{y}}, \qquad x, y > 0, \ \alpha \in [0, 1];$

we have for

$x = \frac{\int_{\omega} q(s)\,d\mu(s)}{\mu(\omega)}, \qquad y = \frac{1 - \int_{\omega} q(s)\,d\mu(s)}{1 - \mu(\omega)} \qquad \text{and} \qquad \alpha = \int_{\omega} q(s)\,d\mu(s)$

that

$\left(\frac{1 - \int_{\omega} q(s)\,d\mu(s)}{1 - \mu(\omega)}\right)^{1 - \int_{\omega} q(s)\,d\mu(s)} \cdot \left(\frac{\int_{\omega} q(s)\,d\mu(s)}{\mu(\omega)}\right)^{\int_{\omega} q(s)\,d\mu(s)} \ge 1,$

for any ω ∈ 𝔖, which implies the second inequality in (16). □
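A numerical check of (16), same setting and conventions as before (our sketch); the right-hand side is the binary Kullback-Leibler divergence between (Q, 1 − Q) and (P, 1 − P), where Q = ∫_ω q dμ and P is the p-mass of ω.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(5)

n = 6
p = rng.dirichlet(np.ones(n))
q = rng.dirichlet(np.ones(n))

dkl = float(np.sum(q * np.log(q / p)))   # Kullback-Leibler distance D_KL(q, p)

for r in range(1, n):
    for w in combinations(range(n), r):
        P, Q = p[list(w)].sum(), q[list(w)].sum()
        bound = (1 - Q) * np.log((1 - Q) / (1 - P)) + Q * np.log(Q / P)
        assert dkl >= bound - 1e-12 >= -1e-12   # both inequalities of (16)

print("(16) holds for every omega: OK")
```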

Proposition 4.6

For any p, q ∈ S, we have:

(17) $J(q, p) \ge \ln\left(\sup_{\omega \in \mathfrak{S}}\left\{\left[\frac{\left(1 - \int_{\omega} q(s)\,d\mu(s)\right)\mu(\omega)}{\left(1 - \mu(\omega)\right)\int_{\omega} q(s)\,d\mu(s)}\right]^{\mu(\omega) - \int_{\omega} q(s)\,d\mu(s)}\right\}\right) \ge \sup_{\omega \in \mathfrak{S}}\left(\frac{2\left(\mu(\omega) - \int_{\omega} q(s)\,d\mu(s)\right)^2}{\int_{\omega} q(s)\,d\mu(s) + \mu(\omega) - 2\,\mu(\omega)\int_{\omega} q(s)\,d\mu(s)}\right) \ge 0.$
Proof

By putting f(x) = (x - 1) ln(x), x > 0 in Theorem 4.2 we have

$\int_{\Omega} p(s)\left(\frac{q(s)}{p(s)} - 1\right)\ln\left(\frac{q(s)}{p(s)}\right) d\mu(s) \ge \sup_{\omega \in \mathfrak{S}}\left(\mu(\omega)\left(\frac{\int_{\omega} q(s)\,d\mu(s)}{\mu(\omega)} - 1\right)\ln\left(\frac{\int_{\omega} q(s)\,d\mu(s)}{\mu(\omega)}\right) + \mu(\bar{\omega})\left(\frac{\int_{\bar{\omega}} q(s)\,d\mu(s)}{\mu(\bar{\omega})} - 1\right)\ln\left(\frac{\int_{\bar{\omega}} q(s)\,d\mu(s)}{\mu(\bar{\omega})}\right)\right) = \sup_{\omega \in \mathfrak{S}}\left(\left(\int_{\omega} q(s)\,d\mu(s) - \mu(\omega)\right)\ln\left(\frac{\int_{\omega} q(s)\,d\mu(s)}{\mu(\omega)}\right) + \left(\int_{\bar{\omega}} q(s)\,d\mu(s) - \mu(\bar{\omega})\right)\ln\left(\frac{\int_{\bar{\omega}} q(s)\,d\mu(s)}{\mu(\bar{\omega})}\right)\right),$

that is

$J(q, p) \ge \sup_{\omega \in \mathfrak{S}}\left(\left(\mu(\omega) - \int_{\omega} q(s)\,d\mu(s)\right)\ln\left(\frac{1 - \int_{\omega} q(s)\,d\mu(s)}{1 - \mu(\omega)}\right) - \left(\mu(\omega) - \int_{\omega} q(s)\,d\mu(s)\right)\ln\left(\frac{\int_{\omega} q(s)\,d\mu(s)}{\mu(\omega)}\right)\right),$

proving the first inequality in (17).

Utilizing the elementary inequality for positive numbers,

$\frac{\ln b - \ln a}{b - a} \ge \frac{2}{a + b}, \qquad a, b > 0, \ a \ne b,$

we have

$\begin{aligned} \left(\mu(\omega) - \int_{\omega} q\,d\mu\right)&\left[\ln\left(\frac{1 - \int_{\omega} q\,d\mu}{1 - \mu(\omega)}\right) - \ln\left(\frac{\int_{\omega} q\,d\mu}{\mu(\omega)}\right)\right] \\ &= \frac{\left(\mu(\omega) - \int_{\omega} q\,d\mu\right)^2}{\mu(\omega)\left(1 - \mu(\omega)\right)} \cdot \frac{\ln\left(\frac{1 - \int_{\omega} q\,d\mu}{1 - \mu(\omega)}\right) - \ln\left(\frac{\int_{\omega} q\,d\mu}{\mu(\omega)}\right)}{\frac{1 - \int_{\omega} q\,d\mu}{1 - \mu(\omega)} - \frac{\int_{\omega} q\,d\mu}{\mu(\omega)}} \\ &\ge \frac{\left(\mu(\omega) - \int_{\omega} q\,d\mu\right)^2}{\mu(\omega)\left(1 - \mu(\omega)\right)} \cdot \frac{2}{\frac{1 - \int_{\omega} q\,d\mu}{1 - \mu(\omega)} + \frac{\int_{\omega} q\,d\mu}{\mu(\omega)}} = \frac{2\left(\mu(\omega) - \int_{\omega} q\,d\mu\right)^2}{\int_{\omega} q\,d\mu + \mu(\omega) - 2\,\mu(\omega)\int_{\omega} q\,d\mu} \ge 0, \end{aligned}$

for each ω ∈ 𝔖, giving the second inequality in (17). □
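A numerical check of (17), same setting and conventions as before (our sketch):

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(6)

n = 6
p = rng.dirichlet(np.ones(n))
q = rng.dirichlet(np.ones(n))

J = float(np.sum((q - p) * np.log(q / p)))   # Jeffreys divergence J(q, p)

sup1, sup2 = -np.inf, 0.0
for r in range(1, n):
    for w in combinations(range(n), r):
        P, Q = p[list(w)].sum(), q[list(w)].sum()
        sup1 = max(sup1, (P - Q) * np.log((1 - Q) * P / ((1 - P) * Q)))
        sup2 = max(sup2, 2 * (P - Q) ** 2 / (Q + P - 2 * Q * P))

# J >= log-form bound >= rational bound >= 0, as in (17)
assert J >= sup1 - 1e-12 and sup1 >= sup2 - 1e-12 >= -1e-12
print(J, sup1, sup2)
```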

Proposition 4.7

For any p, q ∈ S, we have:

(18) $D_{\alpha}(q, p) \ge \sup_{\omega \in \mathfrak{S}}\left[\left(\mu(\omega)\right)^{1-\alpha}\left(\int_{\omega} q(s)\,d\mu(s)\right)^{\alpha} + \left(1 - \mu(\omega)\right)^{1-\alpha}\left(1 - \int_{\omega} q(s)\,d\mu(s)\right)^{\alpha}\right] \ge 1.$
Proof

By putting f(x) = x^α, α > 1, x > 0, in Theorem 4.2 we get the required inequalities. □
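Finally, a numerical check of (18), same setting and conventions as before (our sketch; the choice α = 1.5 is arbitrary):

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(7)

n, alpha = 6, 1.5
p = rng.dirichlet(np.ones(n))
q = rng.dirichlet(np.ones(n))

D_alpha = float(np.sum(q ** alpha * p ** (1 - alpha)))   # alpha-order divergence

sup = 0.0
for r in range(1, n):
    for w in combinations(range(n), r):
        P, Q = p[list(w)].sum(), q[list(w)].sum()
        sup = max(sup, P ** (1 - alpha) * Q ** alpha
                       + (1 - P) ** (1 - alpha) * (1 - Q) ** alpha)

assert D_alpha >= sup - 1e-12 >= 1 - 1e-12   # both inequalities of (18)
print(D_alpha, sup)
```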

Acknowledgement

The authors express their sincere thanks to the referees for their careful reading of the manuscript and very helpful suggestions that improved the manuscript.

References

[1] Adil Khan M., Anwar M., Jakšetić J. and Pečarić J., On some improvements of the Jensen inequality with some applications, J. Inequal. Appl. 2009 (2009), Article ID 323615, 15 pages. doi:10.1155/2009/323615

[2] Adil Khan M., Khan G. A., Ali T., Batbold T. and Kilicman A., Further refinement of Jensen's type inequalities for the function defined on the rectangle, Abstr. Appl. Anal. 2013 (2013), Article ID 214123, 1-8.

[3] Adil Khan M., Khan G. A., Ali T. and Kilicman A., On the refinement of Jensen's inequality, Appl. Math. Comput. 262 (1) (2015), 128-135. doi:10.1016/j.amc.2015.04.012

[4] Beesack P. R. and Pečarić J., On Jessen's inequality for convex functions, J. Math. Anal. Appl. 110 (1985), 536-552. doi:10.1016/0022-247X(85)90315-4

[5] Dragomir S. S., A refinement of Jensen's inequality with applications for f-divergence measures, Taiwanese J. Math. 14 (1) (2010), 153-164. doi:10.11650/twjm/1500405733

[6] Dragomir S. S., A new refinement of Jensen's inequality in linear spaces with applications, Math. Comput. Modelling 52 (2010), 1497-1505. doi:10.1016/j.mcm.2010.05.035

[7] Dragomir S. S., Some refinements of Jensen's inequality, J. Math. Anal. Appl. 168 (2) (1992), 518-522. doi:10.1016/0022-247X(92)90177-F

[8] Dragomir S. S., A further improvement of Jensen's inequality, Tamkang J. Math. 25 (1) (1994), 29-36. doi:10.5556/j.tkjm.25.1994.4422

[9] Mićić Hot J., Pečarić J. and Perić J., Refined Jensen's operator inequality with condition on spectra, Oper. Matrices 7 (2) (2013), 293-308. doi:10.7153/oam-07-17

[10] Pečarić J., Proschan F. and Tong Y. L., Convex Functions, Partial Orderings and Statistical Applications, Academic Press, New York, 1992.

[11] Csiszár I., Information measures: a critical survey, Trans. 7th Prague Conf. on Info. Th., Volume B, Academia, Prague, 1978, 73-86.

[12] Pardo M. C. and Vajda I., On asymptotic properties of information-theoretic divergences, IEEE Trans. Inform. Theory 49 (3) (2003), 1860-1868. doi:10.1109/TIT.2003.813509

[13] Kullback S. and Leibler R. A., On information and sufficiency, Ann. Math. Statist. 22 (1951), 79-86. doi:10.1214/aoms/1177729694

[14] Cressie N. and Read T. R. C., Multinomial goodness-of-fit tests, J. Roy. Statist. Soc. Ser. B 46 (1984), 440-464. doi:10.1111/j.2517-6161.1984.tb01318.x

[15] Kafka P., Österreicher F. and Vincze I., On powers of f-divergences defining a distance, Studia Sci. Math. Hungar. 26 (4) (1991), 415-422.

[16] Liese F. and Vajda I., Convex Statistical Distances, Teubner-Texte zur Mathematik [Teubner Texts in Mathematics] 95, BSB B. G. Teubner Verlagsgesellschaft, Leipzig, 1987.

[17] Csiszár I. and Körner J., Information Theory: Coding Theorems for Discrete Memoryless Systems, Academic Press, New York, 1981.

Received: 2015-9-7
Accepted: 2015-12-28
Published Online: 2016-4-23
Published in Print: 2016-1-1

© 2016 Dragomir et al., published by De Gruyter Open

This work is licensed under the Creative Commons Attribution-NonCommercial-NoDerivatives 3.0 License.
