
Open Mathematics

formerly Central European Journal of Mathematics

Editor-in-Chief: Gianazza, Ugo / Vespri, Vincenzo


Volume 14, Issue 1 (Jan 2016)


Refinement of the Jensen integral inequality

Silvestru Sever Dragomir (corresponding author), School of Computer Science and Mathematics, Victoria University of Technology, PO Box 14428, MCMC 8001, Victoria, Australia / Muhammad Adil Khan / Addisalem Abathun
Published Online: 2016-04-23 | DOI: https://doi.org/10.1515/math-2016-0020

Abstract

In this paper we give a refinement of Jensen’s integral inequality and its generalization for linear functionals. We also present some applications in Information Theory.

Keywords: Convex functions; Jensen’s inequality; f-divergences

MSC 2010: 26D15; 94A17

1 Introduction

Let C be a convex subset of the linear space X and f be a convex function on C. If p = (p₁, ..., pₙ) is a probability sequence and x = (x₁, ..., xₙ) ∈ Cⁿ, then
$$f\left(\sum_{i=1}^{n} p_i x_i\right)\ \le\ \sum_{i=1}^{n} p_i f(x_i) \tag{1}$$

is well known in the literature as Jensen’s inequality.
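
As a quick numerical illustration (ours, not part of the paper's argument), the following Python sketch checks (1) for a random probability sequence and the illustrative convex choice f = exp:

```python
# A minimal numerical check of Jensen's inequality (1); the convex
# function exp and all names here are illustrative choices.
import numpy as np

rng = np.random.default_rng(0)
p = rng.random(5)
p /= p.sum()                         # probability sequence p_1, ..., p_n
x = rng.uniform(-2.0, 2.0, size=5)   # points x_1, ..., x_n in C = R

f = np.exp                           # any convex function works here
lhs = f(np.dot(p, x))                # f(sum_i p_i x_i)
rhs = np.dot(p, f(x))                # sum_i p_i f(x_i)
assert lhs <= rhs
print(f"{lhs:.6f} <= {rhs:.6f}")
```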

The Lebesgue integral version of the Jensen inequality is given below:

Theorem 1.1. Let (Ω, Λ, μ) be a measure space with 0 < μ(Ω) < ∞ and let φ : I → ℝ be a convex function defined on an open interval I in ℝ. If f : Ω → I is such that f, φ∘f ∈ L(Ω, Λ, μ), then
$$\varphi\left(\frac{1}{\mu(\Omega)}\int_{\Omega} f\,d\mu\right)\ \le\ \frac{1}{\mu(\Omega)}\int_{\Omega}\varphi(f)\,d\mu. \tag{2}$$

In the case when φ is strictly convex on I, equality holds in (2) if and only if f is constant μ-almost everywhere on Ω.

The Jensen inequality for convex functions plays a crucial role in the Theory of Inequalities, since other inequalities, such as the arithmetic mean–geometric mean inequality, the Hölder and Minkowski inequalities and the Ky Fan inequality, can be obtained as particular cases of it.

There is an extensive literature devoted to Jensen’s inequality concerning different generalizations, refinements, counterparts and converse results; see, for example, [1–9].

In this paper we give a refinement of Jensen’s integral inequality and its generalization for linear functionals. We also present some applications in Information Theory, for example to the Kullback–Leibler, total variation and Karl Pearson χ²-divergences.

2 Main results

Let (Ω, Λ, μ) be a measure space with 0 < μ(Ω) < ∞ and let L(Ω, Λ, μ) = {f : Ω → ℝ : f is μ-measurable and ∫_Ω |f(t)| dμ(t) < ∞} be the corresponding Lebesgue space. Consider the set 𝔖 = {ω ∈ Λ : μ(ω) ≠ 0 and μ(ω̄) = μ(Ω ∖ ω) ≠ 0} and let φ : (a, b) → ℝ be a convex function defined on an open interval (a, b). If f : Ω → (a, b) is such that f, φ∘f ∈ L(Ω, Λ, μ), then for any set ω ∈ 𝔖 define the functional
$$F(\varphi,f;\omega)=\frac{\mu(\omega)}{\mu(\Omega)}\,\varphi\left(\frac{1}{\mu(\omega)}\int_{\omega} f\,d\mu\right)+\frac{\mu(\bar{\omega})}{\mu(\Omega)}\,\varphi\left(\frac{1}{\mu(\bar{\omega})}\int_{\bar{\omega}} f\,d\mu\right). \tag{3}$$

We give the following refinement of Jensen’s inequality.

Theorem 2.1. Let (Ω, Λ, μ) be a measure space with 0 < μ(Ω) < ∞ and let φ : (a, b) → ℝ be a convex function defined on an open interval (a, b). If f : Ω → (a, b) is such that f, φ∘f ∈ L(Ω, Λ, μ), then for any set ω ∈ 𝔖 we have
$$\varphi\left(\frac{1}{\mu(\Omega)}\int_{\Omega} f\,d\mu\right)\ \le\ F(\varphi,f;\omega)\ \le\ \frac{1}{\mu(\Omega)}\int_{\Omega}\varphi(f)\,d\mu. \tag{4}$$

Proof. For any ω ∈ 𝔖 we have
$$\varphi\left(\frac{1}{\mu(\Omega)}\int_{\Omega} f\,d\mu\right)=\varphi\left[\frac{\mu(\omega)}{\mu(\Omega)}\left(\frac{1}{\mu(\omega)}\int_{\omega} f\,d\mu\right)+\frac{\mu(\bar{\omega})}{\mu(\Omega)}\left(\frac{1}{\mu(\bar{\omega})}\int_{\bar{\omega}} f\,d\mu\right)\right].$$

Therefore, by the convexity of the function φ, we get
$$\varphi\left(\frac{1}{\mu(\Omega)}\int_{\Omega} f\,d\mu\right)\ \le\ \frac{\mu(\omega)}{\mu(\Omega)}\,\varphi\left(\frac{1}{\mu(\omega)}\int_{\omega} f\,d\mu\right)+\frac{\mu(\bar{\omega})}{\mu(\Omega)}\,\varphi\left(\frac{1}{\mu(\bar{\omega})}\int_{\bar{\omega}} f\,d\mu\right)=F(\varphi,f;\omega). \tag{5}$$

Also, for any ω ∈ 𝔖, by the Jensen inequality applied on ω and on ω̄ we have
$$F(\varphi,f;\omega)=\frac{\mu(\omega)}{\mu(\Omega)}\,\varphi\left(\frac{1}{\mu(\omega)}\int_{\omega} f\,d\mu\right)+\frac{\mu(\bar{\omega})}{\mu(\Omega)}\,\varphi\left(\frac{1}{\mu(\bar{\omega})}\int_{\bar{\omega}} f\,d\mu\right)\ \le\ \frac{1}{\mu(\Omega)}\int_{\omega}\varphi(f)\,d\mu+\frac{1}{\mu(\Omega)}\int_{\bar{\omega}}\varphi(f)\,d\mu=\frac{1}{\mu(\Omega)}\int_{\Omega}\varphi(f)\,d\mu. \tag{6}$$

From (5) and (6) we have (4). □

We observe that the inequality (4) can be written in the equivalent form
$$\inf_{\omega\in\mathfrak{S}} F(\varphi,f;\omega)\ \ge\ \varphi\left(\frac{1}{\mu(\Omega)}\int_{\Omega} f\,d\mu\right)$$

and

$$\frac{1}{\mu(\Omega)}\int_{\Omega}\varphi(f)\,d\mu\ \ge\ \sup_{\omega\in\mathfrak{S}} F(\varphi,f;\omega).$$

If we also admit the choices ω = ∅ and ω = Ω, then F(φ, f; ω) is equal to the left-hand side of (2), and in this case (5) holds trivially.
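
To see the refinement (4) at work, the sketch below evaluates the functional (3) on a finite measure space Ω = {0, ..., n−1} for random subsets ω ∈ 𝔖. This discretization and the choice φ(t) = t log t are our own illustrative assumptions:

```python
# Toy verification of the sandwich (4) on a finite measure space.
import numpy as np

rng = np.random.default_rng(1)
n = 8
mu = rng.random(n) + 0.1                 # atom weights mu({i}) > 0
fv = rng.uniform(0.5, 3.0, size=n)       # values of f : Omega -> (a, b)
phi = lambda t: t * np.log(t)            # convex on (0, infty)

def F(mask):
    """The functional (3) for omega = {i : mask[i] is True}."""
    w, wbar = mu[mask].sum(), mu[~mask].sum()
    m1 = (mu[mask] * fv[mask]).sum() / w        # mean of f over omega
    m2 = (mu[~mask] * fv[~mask]).sum() / wbar   # mean over the complement
    return (w * phi(m1) + wbar * phi(m2)) / mu.sum()

lhs = phi((mu * fv).sum() / mu.sum())    # phi of the global mean
rhs = (mu * phi(fv)).sum() / mu.sum()    # global mean of phi(f)
for _ in range(100):
    mask = rng.random(n) < 0.5
    if mask.any() and (~mask).any():     # omega and its complement non-null
        assert lhs - 1e-12 <= F(mask) <= rhs + 1e-12
print("refinement (4) verified on random subsets")
```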

In particular, a Riemann integral version can be given as follows.

Let φ : [a, b] → ℝ be a convex function defined on the interval [a, b]. If f : [c, d] → [a, b] and p : [c, d] → ℝ₊ are such that f, fp and (φ∘f)p are all integrable on [c, d], then we have
$$\inf_{x\in(c,d)}\left[\frac{x-c}{d-c}\,\varphi\left(\frac{1}{x-c}\int_{c}^{x} p(t)f(t)\,dt\right)+\frac{d-x}{d-c}\,\varphi\left(\frac{1}{d-x}\int_{x}^{d} p(t)f(t)\,dt\right)\right]\ \ge\ \varphi\left(\frac{1}{d-c}\int_{c}^{d} p(t)f(t)\,dt\right),$$
$$\frac{1}{d-c}\int_{c}^{d} p(t)\,\varphi(f(t))\,dt\ \ge\ \sup_{x\in(c,d)}\left[\frac{x-c}{d-c}\,\varphi\left(\frac{1}{x-c}\int_{c}^{x} p(t)f(t)\,dt\right)+\frac{d-x}{d-c}\,\varphi\left(\frac{1}{d-x}\int_{x}^{d} p(t)f(t)\,dt\right)\right].$$

As a simple consequence of Theorem 2.1 we can obtain a refinement of the Hermite–Hadamard inequality:

If φ : [a, b] → ℝ is a convex function defined on the interval [a, b], then for any [c, d] ⊆ [a, b] we have
$$\varphi\left(\frac{c+d}{2}\right)\ \le\ \inf_{x\in[c,d]}\left[\frac{x-c}{d-c}\,\varphi\left(\frac{x+c}{2}\right)+\frac{d-x}{d-c}\,\varphi\left(\frac{d+x}{2}\right)\right],$$
$$\frac{1}{d-c}\int_{c}^{d}\varphi(t)\,dt\ \ge\ \sup_{x\in[c,d]}\left[\frac{x-c}{d-c}\,\varphi\left(\frac{x+c}{2}\right)+\frac{d-x}{d-c}\,\varphi\left(\frac{d+x}{2}\right)\right].$$
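
A quick numerical check of this Hermite–Hadamard refinement, with the illustrative choice φ(t) = t² on [c, d] = [0, 1] (ours, not from the paper):

```python
# Sanity check of the Hermite-Hadamard refinement for phi(t) = t^2.
import numpy as np

phi = lambda t: t ** 2
c, d = 0.0, 1.0
xs = np.linspace(c, d, 1001)[1:-1]       # interior points x in (c, d)

Fx = ((xs - c) / (d - c)) * phi((xs + c) / 2) \
     + ((d - xs) / (d - c)) * phi((d + xs) / 2)

left = phi((c + d) / 2)                  # = 1/4
right = 1.0 / 3.0                        # (1/(d-c)) * int_0^1 t^2 dt
assert left <= Fx.min() and Fx.max() <= right
print(f"{left:.4f} <= {Fx.min():.4f}, {Fx.max():.4f} <= {right:.4f}")
```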

3 Further generalization

Let E be a nonempty set, 𝔄 be an algebra of subsets of E, and L be a linear class of real-valued functions f : E → ℝ having the properties:

L1 : f, g ∈ L ⇒ (αf + βg) ∈ L for all α, β ∈ ℝ;

L2 : 1 ∈ L, i.e., if f(t) = 1 for all t ∈ E, then f ∈ L;

L3 : f ∈ L, E₁ ∈ 𝔄 ⇒ f·χ_{E₁} ∈ L,

where χ_{E₁} is the indicator function of E₁. It follows from L2 and L3 that χ_{E₁} ∈ L for every E₁ ∈ 𝔄.

A positive isotonic linear functional A : L → ℝ is a functional satisfying the following properties:

A1 : A(αf + βg) = αA(f) + βA(g) for f, g ∈ L, α, β ∈ ℝ;

A2 : f ∈ L, f(t) ≥ 0 on E ⇒ A(f) ≥ 0.

It follows from L3 that, for a fixed positive isotonic linear functional A with A(1) = 1 and for every E₁ ∈ 𝔄 such that A(χ_{E₁}) > 0, the functional A_{E₁} is defined as
$$A_{E_1}(f)=\frac{A(f\cdot\chi_{E_1})}{A(\chi_{E_1})}\qquad\text{for all } f\in L.$$
Furthermore, we observe that
$$A(\chi_{E_1})+A(\chi_{E\setminus E_1})=1,\qquad A(f)=A(f\cdot\chi_{E_1})+A(f\cdot\chi_{E\setminus E_1}). \tag{7}$$
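
A concrete instance of such a functional is the weighted sum A(f) = Σ_t w(t) f(t) on a finite set E; the sketch below (an illustration of ours, with made-up names) verifies the identity (7) for it:

```python
# A discrete positive isotonic linear functional with A(1) = 1.
import numpy as np

rng = np.random.default_rng(2)
E = np.arange(6)                          # ground set E = {0, ..., 5}
w = rng.random(6)
w /= w.sum()                              # positive weights, so A(1) = 1

def A(g):
    """A(g) = sum_t w(t) g(t); satisfies A1 and A2."""
    return float(np.dot(w, g))

chi = np.isin(E, [0, 2, 3]).astype(float) # indicator of a subset E1
f = rng.uniform(1.0, 4.0, size=6)

# identity (7): A(chi_E1) + A(chi_{E\E1}) = 1 and A(f) splits accordingly
assert np.isclose(A(chi) + A(1 - chi), 1.0)
assert np.isclose(A(f), A(f * chi) + A(f * (1 - chi)))
print("identity (7) holds; A_E1(f) =", A(f * chi) / A(chi))
```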

Jessen (see [10, p. 47]) gave the following generalization of Jensen’s inequality for convex functions.

Let L satisfy L1 and L2 on a nonempty set E, and let φ : [a, b] → ℝ be a continuous convex function. If A is a positive linear functional with A(1) = 1, then for all f ∈ L such that φ(f) ∈ L we have A(f) ∈ [a, b] and
$$\varphi(A(f))\ \le\ A(\varphi(f)). \tag{8}$$

The following refinement of (8) holds.

Under the above assumptions, if φ : [a, b] → ℝ is a continuous convex function, then
$$\varphi(A(f))\ \le\ \bar{D}(A,f,\varphi;E_1)\ \le\ A(\varphi(f)), \tag{9}$$

where
$$\bar{D}(A,f,\varphi;E_1)=A(\chi_{E_1})\,\varphi\left(\frac{A(f\cdot\chi_{E_1})}{A(\chi_{E_1})}\right)+A(\chi_{E\setminus E_1})\,\varphi\left(\frac{A(f\cdot\chi_{E\setminus E_1})}{A(\chi_{E\setminus E_1})}\right)$$

for every nonempty set E₁ ∈ 𝔄 such that 0 < A(χ_{E₁}) < 1.

Proof. We can write
$$\bar{D}(A,f,\varphi;E_1)=A(\chi_{E_1})\,\varphi\left(\frac{A(f\cdot\chi_{E_1})}{A(\chi_{E_1})}\right)+A(\chi_{E\setminus E_1})\,\varphi\left(\frac{A(f\cdot\chi_{E\setminus E_1})}{A(\chi_{E\setminus E_1})}\right),$$
that is,
$$\bar{D}(A,f,\varphi;E_1)=A(\chi_{E_1})\,\varphi(A_{E_1}(f))+A(\chi_{E\setminus E_1})\,\varphi(A_{E\setminus E_1}(f)).$$

Applying the inequality (8) to the functionals A_{E₁} and A_{E∖E₁} we obtain
$$\bar{D}(A,f,\varphi;E_1)\ \le\ A(\varphi(f)\cdot\chi_{E_1})+A(\varphi(f)\cdot\chi_{E\setminus E_1})=A(\varphi(f)). \tag{10}$$

This proves the second inequality in (9).

The first inequality in (9) follows from the definition of a convex function and the identity (7). □
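
The sandwich (9) is also easy to check numerically. The sketch below uses a discrete functional as in the previous snippet, with the illustrative convex choice φ = exp:

```python
# Toy check of the refinement (9) for random subsets E1.
import numpy as np

rng = np.random.default_rng(3)
w = rng.random(6)
w /= w.sum()                              # A(g) = sum w*g, so A(1) = 1
f = rng.uniform(0.0, 2.0, size=6)
phi = np.exp

A = lambda g: float(np.dot(w, g))
for _ in range(50):
    chi = (rng.random(6) < 0.5).astype(float)   # indicator of a random E1
    if 0.0 < A(chi) < 1.0:                      # 0 < A(chi_E1) < 1
        D = A(chi) * phi(A(f * chi) / A(chi)) \
            + A(1 - chi) * phi(A(f * (1 - chi)) / A(1 - chi))
        assert phi(A(f)) <= D + 1e-12 and D <= A(phi(f)) + 1e-12
print("refinement (9) verified on random subsets E1")
```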

4 Applications for Csiszár divergence measures

Let (Ω, Λ, μ) be a probability measure space. Consider the set of all density functions on μ, S ≔ {p | p : Ω → ℝ, p(s) > 0, ∫_Ω p(s) dμ(s) = 1}.

Csiszár introduced the concept of f-divergence for a convex function f : (0, ∞) → (−∞, ∞) (cf. [11]; see also [12]) by
$$I_f(q,p)=\int_{\Omega} p(s)\,f\left(\frac{q(s)}{p(s)}\right)d\mu(s),\qquad p,q\in S.$$
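
For discrete distributions (densities with respect to a counting measure), I_f is a plain weighted sum. The sketch below is our own illustration; the three choices of f correspond to divergences that reappear in the list that follows:

```python
# Discrete Csiszar f-divergence: I_f(q, p) = sum_s p(s) f(q(s)/p(s)).
import numpy as np

def f_divergence(q, p, f):
    q, p = np.asarray(q, float), np.asarray(p, float)
    return float(np.sum(p * f(q / p)))   # requires p(s) > 0

p = np.array([0.2, 0.3, 0.5])
q = np.array([0.4, 0.4, 0.2])
tv = f_divergence(q, p, lambda u: np.abs(u - 1))    # total variation
chi2 = f_divergence(q, p, lambda u: (u - 1) ** 2)   # Pearson chi^2
kl = f_divergence(q, p, lambda u: u * np.log(u))    # Kullback-Leibler
print(tv, chi2, kl)
```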

By appropriately choosing the convex function f, various divergences can be derived. We give below some important f-divergences, which play a significant role in Information Theory and Statistics.

(i) The class of χ-divergences: The f-divergences in this class are generated by the family of functions
$$f_\alpha(u)=|u-1|^{\alpha},\qquad u\ge 0,\ \alpha\ge 1,$$
for which
$$I_{f_\alpha}(q,p)=\int_{\Omega} p^{1-\alpha}(s)\,|q(s)-p(s)|^{\alpha}\,d\mu(s).$$

For α = 1 it gives the total variation distance
$$V(q,p)=\int_{\Omega}|q(s)-p(s)|\,d\mu(s).$$

For α = 2 it gives the Karl Pearson χ²-divergence
$$I_{\chi^2}(q,p)=\int_{\Omega}\frac{\left[q(s)-p(s)\right]^{2}}{p(s)}\,d\mu(s).$$

(ii) α-order Rényi entropy: For α > 1 let
$$f(t)=t^{\alpha},\qquad t>0.$$

Then I_f gives the α-order entropy
$$D_{\alpha}(q,p)=\int_{\Omega} q^{\alpha}(s)\,p^{1-\alpha}(s)\,d\mu(s).$$

(iii) Harmonic distance: Let
$$f(t)=\frac{2t}{1+t},\qquad t>0.$$

Then I_f gives the harmonic distance
$$D_H(q,p)=\int_{\Omega}\frac{2p(s)q(s)}{p(s)+q(s)}\,d\mu(s).$$

(iv) Kullback–Leibler: Let
$$f(t)=t\log t,\qquad t>0.$$

Then the f-divergence functional gives rise to the Kullback–Leibler distance [13],
$$D_{KL}(q,p)=\int_{\Omega} q(s)\log\left(\frac{q(s)}{p(s)}\right)d\mu(s).$$

A one-parametric generalization of the Kullback–Leibler relative information [13] was studied in a different way by Cressie and Read [14].

(v) Jeffreys divergence: Let
$$f(t)=(t-1)\log t,\qquad t>0.$$

Then the f-divergence functional gives the Jeffreys divergence
$$J(q,p)=\int_{\Omega}\left(q(s)-p(s)\right)\ln\left(\frac{q(s)}{p(s)}\right)d\mu(s).$$

(vi) The Dichotomy class: This class is generated by the family of functions g_α : (0, ∞) → ℝ,
$$g_\alpha(u)=\begin{cases} u-1-\log u, & \alpha=0;\\[2pt] \dfrac{1}{\alpha(1-\alpha)}\left[\alpha u+1-\alpha-u^{\alpha}\right], & \alpha\in\mathbb{R}\setminus\{0,1\};\\[2pt] 1-u+u\log u, & \alpha=1. \end{cases} \tag{11}$$

This class gives, for particular values of α, some important divergences. For instance, for α = 1/2 it provides the Hellinger distance.

There are various other divergences in Information Theory and Statistics, such as the Arimoto-type divergences, Matusita’s divergence and the Puri–Vincze divergences (cf. [15], [16]), used in a variety of problems. An application of Theorem 1.1 is the following result given by Csiszár and Körner (cf. [17]).

Theorem 4.1. Let f : [0, ∞) → ℝ be a convex function and let p, q ∈ S. Then the following inequality is valid:
$$I_f(q,p)\ \ge\ f(1). \tag{12}$$

Theorem 4.2. Let f : [0, ∞) → ℝ be a convex function. Then for any p and q in S and any ω ∈ 𝔖 we have
$$I_f(q,p)\ \ge\ \mu(\omega)\,f\left(\frac{1}{\mu(\omega)}\int_{\omega} q(s)\,d\mu(s)\right)+\mu(\bar{\omega})\,f\left(\frac{1}{\mu(\bar{\omega})}\int_{\bar{\omega}} q(s)\,d\mu(s)\right)\ \ge\ f(1). \tag{13}$$

Proof. By taking φ = f, replacing f by q/p and replacing dμ(s) by p(s) dμ(s) in Theorem 2.1, we deduce (13). (After this substitution, μ(ω) in (13) and in the corollaries below is understood with respect to the new measure, i.e. μ(ω) = ∫_ω p(s) dμ(s).) □
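
A discrete sanity check of (13) follows. In line with the substitution made in the proof, μ(ω) is read as the p-measure of ω, i.e. Σ_{s∈ω} p(s) in the discrete setting; this reading, like the code itself, is our own illustration:

```python
# Toy check of the refinement (13) with f(t) = t log t (so I_f = KL).
import numpy as np

rng = np.random.default_rng(4)
n = 6
p = rng.random(n); p /= p.sum()          # discrete densities
q = rng.random(n); q /= q.sum()
f = lambda x: x * np.log(x)              # convex on (0, infty), f(1) = 0

I_f = np.sum(p * f(q / p))               # Csiszar f-divergence
for _ in range(100):
    m = rng.random(n) < 0.5              # a random subset omega
    if m.any() and (~m).any():
        P, Q = p[m].sum(), q[m].sum()    # p- and q-measure of omega
        mid = P * f(Q / P) + (1 - P) * f((1 - Q) / (1 - P))
        assert I_f + 1e-12 >= mid >= f(1.0) - 1e-12
print("refinement (13) verified on random subsets")
```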

Let p, q ∈ S. Then we have
$$V(q,p)\ \ge\ 2\sup_{\omega\in\mathfrak{S}}\left|\int_{\omega} q(s)\,d\mu(s)-\mu(\omega)\right|\ (\ge 0). \tag{14}$$

Proof. By putting f(x) = |x − 1|, x ≥ 0, in Theorem 4.2 we get (14). □
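
Numerically the bound (14) is sharp: for discrete p, q the supremum is attained at ω = {s : q(s) ≥ p(s)}. An illustrative sketch (same reading of μ(ω) as above):

```python
# Check of (14): V(q, p) >= 2 |Q(omega) - P(omega)| over all subsets.
import numpy as np
from itertools import product

p = np.array([0.1, 0.4, 0.2, 0.3])
q = np.array([0.3, 0.2, 0.1, 0.4])
V = np.abs(q - p).sum()                  # total variation distance

best = 0.0
for mask in product([False, True], repeat=4):
    m = np.array(mask)
    if m.any() and (~m).any():
        best = max(best, 2 * abs(q[m].sum() - p[m].sum()))
assert V >= best - 1e-12
print(f"V = {V:.3f}, 2*sup = {best:.3f}")   # equal: the sup is attained
```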

For any p, q ∈ S,
$$I_{\chi^2}(q,p)\ \ge\ \sup_{\omega\in\mathfrak{S}}\left\{\frac{\left(\int_{\omega} q(s)\,d\mu(s)-\mu(\omega)\right)^{2}}{\mu(\omega)(1-\mu(\omega))}\right\}\ \ge\ 4\sup_{\omega\in\mathfrak{S}}\left\{\left(\int_{\omega} q(s)\,d\mu(s)-\mu(\omega)\right)^{2}\right\}\ (\ge 0). \tag{15}$$

Proof. Making use of the function f(x) = (x − 1)² in Theorem 4.2 we get
$$\int_{\Omega} p(s)\left(\frac{q(s)}{p(s)}-1\right)^{2}d\mu(s)\ \ge\ \sup_{\omega\in\mathfrak{S}}\left\{\mu(\omega)\left(\frac{1}{\mu(\omega)}\int_{\omega} q(s)\,d\mu(s)-1\right)^{2}+\mu(\bar{\omega})\left(\frac{1}{\mu(\bar{\omega})}\int_{\bar{\omega}} q(s)\,d\mu(s)-1\right)^{2}\right\}\ (\ge 0),$$
that is,
$$\int_{\Omega}\frac{\left(q(s)-p(s)\right)^{2}}{p(s)}\,d\mu(s)\ \ge\ \sup_{\omega\in\mathfrak{S}}\left\{\frac{\left(\int_{\omega} q(s)\,d\mu(s)-\mu(\omega)\right)^{2}}{\mu(\omega)(1-\mu(\omega))}\right\}\ (\ge 0).$$

Since, by the arithmetic–geometric mean inequality, we have
$$\mu(\omega)(1-\mu(\omega))\ \le\ \frac{1}{4}\left[\mu(\omega)+(1-\mu(\omega))\right]^{2}=\frac{1}{4},$$

it follows that
$$\frac{\left(\int_{\omega} q(s)\,d\mu(s)-\mu(\omega)\right)^{2}}{\mu(\omega)(1-\mu(\omega))}\ \ge\ 4\left(\int_{\omega} q(s)\,d\mu(s)-\mu(\omega)\right)^{2}\ (\ge 0).\ \square$$

For any p, q ∈ S and any ω ∈ 𝔖 we have
$$D_{KL}(q,p)\ \ge\ \ln\left[\left(\frac{1-\int_{\omega} q(s)\,d\mu(s)}{1-\mu(\omega)}\right)^{1-\int_{\omega} q(s)\,d\mu(s)}\cdot\left(\frac{\int_{\omega} q(s)\,d\mu(s)}{\mu(\omega)}\right)^{\int_{\omega} q(s)\,d\mu(s)}\right]\ (\ge 0). \tag{16}$$

Proof. By putting f(t) = t ln t in Theorem 4.2 one gets the first inequality in (16).

To prove the second inequality, we utilize the inequality between the geometric mean and the harmonic mean,
$$x^{\alpha}y^{1-\alpha}\ \ge\ \frac{1}{\frac{\alpha}{x}+\frac{1-\alpha}{y}},\qquad x,y>0,\ \alpha\in[0,1];$$

taking
$$x=\frac{\int_{\omega} q(s)\,d\mu(s)}{\mu(\omega)},\qquad y=\frac{1-\int_{\omega} q(s)\,d\mu(s)}{1-\mu(\omega)}\qquad\text{and}\qquad\alpha=\int_{\omega} q(s)\,d\mu(s),$$

we obtain
$$\left(\frac{1-\int_{\omega} q(s)\,d\mu(s)}{1-\mu(\omega)}\right)^{1-\int_{\omega} q(s)\,d\mu(s)}\cdot\left(\frac{\int_{\omega} q(s)\,d\mu(s)}{\mu(\omega)}\right)^{\int_{\omega} q(s)\,d\mu(s)}\ \ge\ 1$$

for any ω ∈ 𝔖, which implies the second inequality in (16). □
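
Discretely, the right-hand side of (16) is the binary Kullback–Leibler divergence between (Q, 1−Q) and (P, 1−P), with Q the q-measure of ω and P read as in the proof of Theorem 4.2. An illustrative check of ours:

```python
# Toy check of the lower bound (16) on random subsets omega.
import numpy as np

rng = np.random.default_rng(6)
n = 5
p = rng.random(n); p /= p.sum()
q = rng.random(n); q /= q.sum()

D_KL = np.sum(q * np.log(q / p))
for _ in range(100):
    m = rng.random(n) < 0.5
    if m.any() and (~m).any():
        P, Q = p[m].sum(), q[m].sum()
        bound = np.log(((1 - Q) / (1 - P)) ** (1 - Q) * (Q / P) ** Q)
        assert D_KL + 1e-12 >= bound >= -1e-12
print("bound (16) verified on random subsets")
```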

For any p, q ∈ S we have
$$J(q,p)\ \ge\ \ln\left(\sup_{\omega\in\mathfrak{S}}\left\{\left[\frac{\left(1-\int_{\omega} q(s)\,d\mu(s)\right)\mu(\omega)}{\left(1-\mu(\omega)\right)\int_{\omega} q(s)\,d\mu(s)}\right]^{\mu(\omega)-\int_{\omega} q(s)\,d\mu(s)}\right\}\right)\ \ge\ \sup_{\omega\in\mathfrak{S}}\left(\frac{2\left(\mu(\omega)-\int_{\omega} q(s)\,d\mu(s)\right)^{2}}{\int_{\omega} q(s)\,d\mu(s)+\mu(\omega)-2\,\mu(\omega)\int_{\omega} q(s)\,d\mu(s)}\right)\ \ge\ 0. \tag{17}$$

Proof. By putting f(x) = (x − 1) ln x, x > 0, in Theorem 4.2 we have
$$\int_{\Omega} p(s)\left(\frac{q(s)}{p(s)}-1\right)\ln\left(\frac{q(s)}{p(s)}\right)d\mu(s)\ \ge\ \sup_{\omega\in\mathfrak{S}}\left(\mu(\omega)\left(\frac{1}{\mu(\omega)}\int_{\omega} q\,d\mu-1\right)\ln\left(\frac{1}{\mu(\omega)}\int_{\omega} q\,d\mu\right)+\mu(\bar{\omega})\left(\frac{1}{\mu(\bar{\omega})}\int_{\bar{\omega}} q\,d\mu-1\right)\ln\left(\frac{1}{\mu(\bar{\omega})}\int_{\bar{\omega}} q\,d\mu\right)\right)$$
$$=\sup_{\omega\in\mathfrak{S}}\left(\left(\int_{\omega} q\,d\mu-\mu(\omega)\right)\ln\left(\frac{\int_{\omega} q\,d\mu}{\mu(\omega)}\right)+\left(\int_{\bar{\omega}} q\,d\mu-\mu(\bar{\omega})\right)\ln\left(\frac{\int_{\bar{\omega}} q\,d\mu}{\mu(\bar{\omega})}\right)\right),$$

that is,
$$J(q,p)\ \ge\ \sup_{\omega\in\mathfrak{S}}\left(\left(\mu(\omega)-\int_{\omega} q\,d\mu\right)\ln\left(\frac{1-\int_{\omega} q\,d\mu}{1-\mu(\omega)}\right)-\left(\mu(\omega)-\int_{\omega} q\,d\mu\right)\ln\left(\frac{\int_{\omega} q\,d\mu}{\mu(\omega)}\right)\right),$$

proving the first inequality in (17).

Utilizing the elementary inequality for positive numbers
$$\frac{\ln b-\ln a}{b-a}\ \ge\ \frac{2}{a+b},\qquad a,b>0,\ a\ne b,$$

we have
$$\left(\mu(\omega)-\int_{\omega} q\,d\mu\right)\left[\ln\left(\frac{1-\int_{\omega} q\,d\mu}{1-\mu(\omega)}\right)-\ln\left(\frac{\int_{\omega} q\,d\mu}{\mu(\omega)}\right)\right]=\left(\mu(\omega)-\int_{\omega} q\,d\mu\right)\cdot\frac{\ln\left(\frac{1-\int_{\omega} q\,d\mu}{1-\mu(\omega)}\right)-\ln\left(\frac{\int_{\omega} q\,d\mu}{\mu(\omega)}\right)}{\frac{1-\int_{\omega} q\,d\mu}{1-\mu(\omega)}-\frac{\int_{\omega} q\,d\mu}{\mu(\omega)}}\times\left[\frac{1-\int_{\omega} q\,d\mu}{1-\mu(\omega)}-\frac{\int_{\omega} q\,d\mu}{\mu(\omega)}\right]$$
$$=\frac{\left(\mu(\omega)-\int_{\omega} q\,d\mu\right)^{2}}{\mu(\omega)\left(1-\mu(\omega)\right)}\cdot\frac{\ln\left(\frac{1-\int_{\omega} q\,d\mu}{1-\mu(\omega)}\right)-\ln\left(\frac{\int_{\omega} q\,d\mu}{\mu(\omega)}\right)}{\frac{1-\int_{\omega} q\,d\mu}{1-\mu(\omega)}-\frac{\int_{\omega} q\,d\mu}{\mu(\omega)}}\ \ge\ \frac{\left(\mu(\omega)-\int_{\omega} q\,d\mu\right)^{2}}{\mu(\omega)\left(1-\mu(\omega)\right)}\cdot\frac{2}{\frac{1-\int_{\omega} q\,d\mu}{1-\mu(\omega)}+\frac{\int_{\omega} q\,d\mu}{\mu(\omega)}}=\frac{2\left(\mu(\omega)-\int_{\omega} q\,d\mu\right)^{2}}{\int_{\omega} q\,d\mu+\mu(\omega)-2\,\mu(\omega)\int_{\omega} q\,d\mu}\ \ge\ 0,$$

for each ω ∈ 𝔖, giving the second inequality in (17). □

For any p, q ∈ S we have
$$D_{\alpha}(q,p)\ \ge\ \sup_{\omega\in\mathfrak{S}}\left[\left(\mu(\omega)\right)^{1-\alpha}\left(\int_{\omega} q(s)\,d\mu(s)\right)^{\alpha}+\left(1-\mu(\omega)\right)^{1-\alpha}\left(1-\int_{\omega} q(s)\,d\mu(s)\right)^{\alpha}\right]\ \ge\ 1. \tag{18}$$

Proof. By putting f(x) = x^α, α > 1, x > 0, in Theorem 4.2 we get the required inequalities. □
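
Finally, an illustrative discrete check of (18) with α = 2, under the same reading of μ(ω) as before:

```python
# Toy check of the refinement (18) for the alpha-order divergence.
import numpy as np

rng = np.random.default_rng(5)
n = 5
p = rng.random(n); p /= p.sum()
q = rng.random(n); q /= q.sum()
alpha = 2.0

D = np.sum(q ** alpha * p ** (1 - alpha))        # D_alpha(q, p)
for _ in range(100):
    m = rng.random(n) < 0.5
    if m.any() and (~m).any():
        P, Q = p[m].sum(), q[m].sum()
        mid = (P ** (1 - alpha)) * Q ** alpha \
              + ((1 - P) ** (1 - alpha)) * (1 - Q) ** alpha
        assert D + 1e-12 >= mid >= 1.0 - 1e-12
print("refinement (18) verified on random subsets")
```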

Acknowledgement

The authors express their sincere thanks to the referees for their careful reading of the manuscript and very helpful suggestions that improved the manuscript.

References

  • [1] Adil Khan M., Anwar M., Jakšetić J. and Pečarić J., On some improvements of the Jensen inequality with some applications, J. Inequal. Appl., 2009 (2009), Article ID 323615, 15 pages.

  • [2] Adil Khan M., Khan G. A., Ali T., Batbold T. and Kilicman A., Further refinement of Jensen’s type inequalities for the function defined on the rectangle, Abstr. Appl. Anal., 2013 (2013), Article ID 214123, 1-8.

  • [3] Adil Khan M., Khan G. A., Ali T. and Kilicman A., On the refinement of Jensen’s inequality, Appl. Math. Comput., 262 (1) (2015), 128-135.

  • [4] Beesack P. R. and Pečarić J., On Jessen’s inequality for convex functions, J. Math. Anal. Appl., 110 (1985), 536-552.

  • [5] Dragomir S. S., A refinement of Jensen’s inequality with applications for f-divergence measures, Taiwanese J. Math., 14 (1) (2010), 153-164.

  • [6] Dragomir S. S., A new refinement of Jensen’s inequality in linear spaces with applications, Math. Comput. Modelling, 52 (2010), 1497-1505.

  • [7] Dragomir S. S., Some refinements of Jensen’s inequality, J. Math. Anal. Appl., 168 (2) (1992), 518-522.

  • [8] Dragomir S. S., A further improvement of Jensen’s inequality, Tamkang J. Math., 25 (1) (1994), 29-36.

  • [9] Mićić-Hot J., Pečarić J. and Perić J., Refined Jensen’s operator inequality with condition on spectra, Oper. Matrices, 7 (2) (2013), 293-308.

  • [10] Pečarić J., Proschan F. and Tong Y. L., Convex Functions, Partial Orderings and Statistical Applications, Academic Press, New York, 1992.

  • [11] Csiszár I., Information measures: a critical survey, Trans. 7th Prague Conf. on Info. Th., Volume B, Academia Prague, (1978), 73-86.

  • [12] Pardo M. C. and Vajda I., On asymptotic properties of information-theoretic divergences, IEEE Trans. Inform. Theory, 49 (3) (2003), 1860-1868.

  • [13] Kullback S. and Leibler R. A., On information and sufficiency, Ann. Math. Statist., 22 (1951), 79-86.

  • [14] Cressie N. and Read T. R. C., Multinomial goodness-of-fit tests, J. Roy. Statist. Soc. Ser. B, 46 (1984), 440-464.

  • [15] Kafka P., Österreicher F. and Vincze I., On powers of f-divergences defining a distance, Studia Sci. Math. Hungar., 26 (4) (1991), 415-422.

  • [16] Liese F. and Vajda I., Convex Statistical Distances, Teubner-Texte zur Mathematik [Teubner Texts in Mathematics], 95, BSB B. G. Teubner Verlagsgesellschaft, Leipzig, 1987.

  • [17] Csiszár I. and Körner J., Information Theory: Coding Theorems for Discrete Memoryless Systems, Academic Press, New York, 1981.

About the article

Received: 2015-09-07

Accepted: 2015-12-28

Published Online: 2016-04-23

Published in Print: 2016-01-01


Citation Information: Open Mathematics, ISSN (Online) 2391-5455, DOI: https://doi.org/10.1515/math-2016-0020.


© 2016 Dragomir et al., published by De Gruyter Open. This work is licensed under the Creative Commons Attribution-NonCommercial-NoDerivatives 3.0 License (CC BY-NC-ND 3.0).
