Uncertainty orders on the sublinear expectation space

Dejian Tian (corresponding author), School of Sciences, China University of Mining and Technology, China

Long Jiang, School of Sciences, China University of Mining and Technology, China
Published Online: 2016-04-23 | DOI: https://doi.org/10.1515/math-2016-0023

Abstract

In this paper, we introduce some definitions of uncertainty orders for random vectors in a sublinear expectation space. It is well known that, under some continuity conditions, each sublinear expectation 𝔼 has a robust representation as the supremum of a family of probability measures. We describe uncertainty orders from two different viewpoints. The first is the sublinear operator viewpoint: after giving definitions such as monotonic orders, convex orders and increasing convex orders, we use these uncertainty orders to derive characterizations of maximal distributions, G-normal distributions and G-distributions, which are the most important random vectors in sublinear expectation space theory. The second is the viewpoint of probability measures: we establish some characterizations of uncertainty orders from this viewpoint and build connections with the theory of risk measures.

Keywords: Uncertainty order; Sublinear expectation; Choquet integral; Quantile function; Risk measure

MSC 2010: 62C86; 91B06; 90B50

1 Introduction

Nonlinear expectations have been a prominent topic in mathematical economics ever since the famous Allais paradox. Typical examples are the monetary risk measures introduced in [1, 2], the Choquet expectations developed in [3], and the g-expectations and G-expectations established and studied in [4-10].

The theory of sublinear expectation, an important special case of nonlinear expectation, is not based on a given (linear) probability space. It is well known that, under some mild continuity conditions, each sublinear expectation 𝔼 has a robust representation as the supremum of a family of probability measures. It thus provides a new way to describe uncertainty.

In a financial context, when facing risks one often uses stochastic orders to compare portfolios. In this case, a probability measure is given on the set of scenarios, so we can focus on the resulting payoff or loss distributions. The comparison of financial risks plays an important role for both regulators and agents in financial markets. Within the framework of a classical probability space, and based on the expected utility theory developed in [11], many elegant results about stochastic orders have been obtained. For example, it has been shown that the monotonic order is equivalent to saying that one risk position is preferred over the other by all decision makers whose utility functions are increasing and for which the expectations exist. For more details and properties of stochastic orders, see [12, 13].

Motivated by the classical stochastic orders, the aim of this paper is to explore uncertainty orders for random vectors in a sublinear expectation space. These uncertainty orders provide useful criteria for comparing the degrees of uncertainty of two random vectors on a sublinear expectation space.

We establish the uncertainty orders in the sublinear expectation space from two different viewpoints. The first is the sublinear operator viewpoint. We give definitions of uncertainty orders such as monotonic orders, convex orders and increasing convex orders, and then use them to derive characterizations of maximal distributions, G-normal distributions and G-distributions, which are the most important random vectors in the theory of sublinear expectation spaces.

From the other viewpoint, we study the uncertainty orders through the family of probability measures 𝒫 induced by the sublinear expectation 𝔼. We can define a capacity space from 𝒫 and then obtain some characterizations with the help of recent results about capacity orders obtained in [14, 15]. Besides, we also characterize the uncertainty orders by distortion functions.

The paper is organized as follows. In Section 2, we first introduce some preliminaries about the sublinear expectation space, then give the definitions of uncertainty orders from the sublinear operator viewpoint and establish characterizations for maximal distributions, G-normal distributions and G-distributions. In Section 3, we present the other viewpoint on characterizations of uncertainty orders in the sublinear expectation space, using the terminology of capacity spaces. We conclude the paper in Section 4.

2 Characterizations of uncertainty orders from the sublinear operator viewpoint

We first present some preliminaries of sublinear expectation space theory. Then we give the definitions of uncertainty orders in the sublinear expectation space. More details for the first subsection can be found in [6, 8].

2.1 Sublinear expectation spaces

Let Ω be a given set and let 𝓗 be a linear space of real valued functions defined on Ω such that c ∈ 𝓗 for all constants c and ∣X∣ ∈ 𝓗 if X ∈ 𝓗. The space 𝓗 can be considered as the space of random variables. We further suppose that if X_1, ... , X_n ∈ 𝓗, then φ(X_1, ... , X_n) ∈ 𝓗 for each φ ∈ C_{l.Lip}(ℝ^n), where C_{l.Lip}(ℝ^n) denotes the linear space of functions φ satisfying
$$|\varphi(x) - \varphi(y)| \le C(1 + |x|^m + |y|^m)|x - y|, \quad \forall x, y \in \mathbb{R}^n,$$

where the constant C > 0 and the integer m depend only on φ.

Definition 2.1: (see [8]). A sublinear expectation 𝔼 on 𝓗 is a functional 𝔼 : 𝓗 → ℝ satisfying the following properties: for all X, Y ∈ 𝓗, we have

  • (i)

    Monotonicity: 𝔼[X] ≥ 𝔼[Y] if XY.

  • (ii)

    Constant preserving: 𝔼[c] = c for c ∈ ℝ.

  • (iii)

    Subadditivity: 𝔼[X + Y] ≤ 𝔼[X] + 𝔼[Y].

  • (iv)

    Positive homogeneity: 𝔼[λX] = λ𝔼[X] for λ ≥ 0.

The triple (Ω, 𝓗, 𝔼) is called a sublinear expectation space.

Let X = (X_1, ... , X_n), X_i ∈ 𝓗, denoted by X ∈ 𝓗^n, be a given n-dimensional random vector on a sublinear expectation space (Ω, 𝓗, 𝔼). Define a functional on C_{l.Lip}(ℝ^n) by
$$\mathbb{F}_X[\varphi] := \mathbb{E}[\varphi(X)], \quad \varphi \in C_{l.Lip}(\mathbb{R}^n).$$
The triple (ℝ^n, C_{l.Lip}(ℝ^n), 𝔽_X) forms a sublinear expectation space, and 𝔽_X is called the distribution of X. Let Y be another n-dimensional random vector on (Ω, 𝓗, 𝔼). We write X =_d Y if 𝔼[φ(X)] = 𝔼[φ(Y)] for all φ ∈ C_{l.Lip}(ℝ^n).
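As a purely illustrative aside (not part of the original paper), the representation of a sublinear expectation as a supremum of linear expectations, recalled in Remark 2.6 below, can be checked numerically. The following Python sketch builds 𝔼 as the supremum over a small, made-up finite family of probability measures on a finite Ω and spot-checks properties (i)-(iv) of Definition 2.1; all names and parameter choices are our own.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy setup: Omega has 5 states, H is all functions on Omega,
# and E is the supremum of expectations over a finite family of measures.
n_states = 5
measures = [rng.dirichlet(np.ones(n_states)) for _ in range(3)]  # family {P_1, P_2, P_3}

def E(X):
    """Sublinear expectation: sup over the family of linear expectations."""
    return max(float(np.dot(p, X)) for p in measures)

X = rng.normal(size=n_states)
Y = rng.normal(size=n_states)
lam, c = 2.5, 1.7

# (i) Monotonicity: Z >= Y pointwise implies E[Z] >= E[Y]
Z = Y + np.abs(rng.normal(size=n_states))
assert E(Z) >= E(Y)
# (ii) Constant preserving
assert abs(E(c * np.ones(n_states)) - c) < 1e-12
# (iii) Subadditivity
assert E(X + Y) <= E(X) + E(Y) + 1e-12
# (iv) Positive homogeneity
assert abs(E(lam * X) - lam * E(X)) < 1e-12

print("E[X] =", E(X), " -E[-X] =", -E(-X))  # upper and lower expectations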

Definition 2.2: (see [6, 8]). Let (Ω, 𝓗, 𝔼) be a sublinear expectation space. A random vector Y ∈ 𝓗^n is said to be independent from another random vector X ∈ 𝓗^m under 𝔼 if for each test function φ ∈ C_{l.Lip}(ℝ^{m+n}) we have
$$\mathbb{E}[\varphi(X, Y)] = \mathbb{E}\big[\mathbb{E}[\varphi(x, Y)]_{x=X}\big].$$
Y is said to be weakly independent from X under 𝔼 if the above equality holds for test functions taken only among, instead of C_{l.Lip}(ℝ^{m+n}), the functions of the form
$$\varphi(x, y) = \psi_0(y) + \psi_1(y)|x| + \psi_2(y)|x|^2, \quad \psi_i \in C_{l.Lip}(\mathbb{R}^n),\ i = 0, 1, 2.$$
Let X, X̄ be two n-dimensional random vectors on a sublinear expectation space (Ω, 𝓗, 𝔼). X̄ is called an independent copy of X if X̄ =_d X and X̄ is independent from X.

Definition 2.3: (see [8]). Let (Ω, 𝓗, 𝔼) be a sublinear expectation space. An n-dimensional random vector η on (Ω, 𝓗, 𝔼) is called maximally distributed if
$$a\eta + b\bar\eta \stackrel{d}{=} (a+b)\eta \quad \text{for } a, b \ge 0,$$
where η̄ is an independent copy of η. In particular, for n = 1, we denote η =_d N([μ_, μ̄] × {0}), where μ_ = −𝔼[−η] and μ̄ = 𝔼[η].

An n-dimensional random vector X on (Ω, 𝓗, 𝔼) is called G-normally distributed if
$$aX + b\bar X \stackrel{d}{=} \sqrt{a^2 + b^2}\,X \quad \text{for } a, b \ge 0,$$
where X̄ is an independent copy of X. In particular, for n = 1, we denote X =_d N({0} × [σ_², σ̄²]), where σ_² = −𝔼[−X²], σ̄² = 𝔼[X²] and σ̄ ≥ σ_ ≥ 0.

A pair of n-dimensional random vectors (X, η) on (Ω, 𝓗, 𝔼) is called G-distributed if
$$(aX + b\bar X,\ a^2\eta + b^2\bar\eta) \stackrel{d}{=} \big(\sqrt{a^2 + b^2}\,X,\ (a^2 + b^2)\eta\big) \quad \text{for } a, b \ge 0,$$
where (X̄, η̄) is an independent copy of (X, η). For n = 1, we denote (X, η) =_d N([μ_, μ̄] × [σ_², σ̄²]), where μ_, μ̄, σ_² and σ̄² are defined as above.
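To make Definition 2.3 more concrete in dimension one, the following sketch (our illustration, not from the paper) evaluates 𝔼[φ(·)] for a maximal distribution and a G-normal distribution using the explicit formulas recalled later in the proofs of Theorems 2.7 and 2.8: for a maximal distribution, 𝔼[φ(η)] is the maximum of φ over [μ_, μ̄]; for a G-normal distribution, 𝔼[φ(X)] is a classical normal expectation with variance σ̄² when φ is convex and σ_² when φ is concave. All parameter values below are made up.

```python
import numpy as np

# One-dimensional illustration of Definition 2.3, using the explicit formulas
# recalled later in the proofs of Theorems 2.7 and 2.8:
#   * maximal distribution N([mu_lo, mu_hi] x {0}):
#       E[phi(eta)] = max_{mu_lo <= y <= mu_hi} phi(y);
#   * G-normal distribution N({0} x [sig_lo^2, sig_hi^2]):
#       E[phi(X)] is a classical N(0, sig_hi^2) expectation for convex phi,
#       and a classical N(0, sig_lo^2) expectation for concave phi.
# The parameter values are made up for the example.
mu_lo, mu_hi = -1.0, 2.0
sig_lo, sig_hi = 0.5, 1.5
rng = np.random.default_rng(1)

def E_maximal(phi, n_grid=10_001):
    ys = np.linspace(mu_lo, mu_hi, n_grid)
    return float(phi(ys).max())

def E_gnormal(phi, shape="convex", n_mc=400_000):
    sig = sig_hi if shape == "convex" else sig_lo
    z = rng.normal(size=n_mc)
    return float(phi(sig * z).mean())          # Monte Carlo approximation

# Increasing phi: for the maximal distribution, E[phi(eta)] = phi(mu_hi).
phi_inc = np.tanh
print(E_maximal(phi_inc), np.tanh(mu_hi))

# Convex phi(x) = x^+: closed form sig_hi/sqrt(2*pi), cf. the proof of Theorem 2.8.
phi_plus = lambda x: np.maximum(x, 0.0)
print(E_gnormal(phi_plus, "convex"), sig_hi / np.sqrt(2 * np.pi))
```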

2.2 Characterizations from the sublinear operator viewpoint

Throughout the rest of the paper, we interpret risk positions as loss random vectors. Motivated by the definitions of stochastic orders in a probability space, we give the definitions of uncertainty orders in a sublinear expectation space. We then use these uncertainty orders to derive characterizations for maximal distributions, G-normal distributions and G-distributions.

Definition 2.4: Let (Ω, 𝓗, 𝔼) be a sublinear expectation space. Let X, Y be two n-dimensional random vectors on (Ω, 𝓗, 𝔼).

  • (i)

    X is said to precede Y in the monotonic order sense under 𝔼, denoted by X ⪯_mon^𝔼 Y, if for all increasing functions φ ∈ C_{l.Lip}(ℝ^n) we have
$$\mathbb{E}[\varphi(X)] \le \mathbb{E}[\varphi(Y)] \quad\text{and}\quad -\mathbb{E}[-\varphi(X)] \le -\mathbb{E}[-\varphi(Y)]. \tag{1}$$

  • (ii)

    X is said to precede Y in the convex order sense under 𝔼, denoted by X ⪯_con^𝔼 Y, if (1) holds for all convex functions φ ∈ C_{l.Lip}(ℝ^n).

  • (iii)

    X is said to precede Y in the increasing convex order sense under 𝔼, denoted by X ⪯_icon^𝔼 Y, if (1) holds for all increasing convex functions φ ∈ C_{l.Lip}(ℝ^n).

Remark 2.5: From the above definitions of uncertainty orders, we see that the uncertainty orders only involve the distributions of the random vectors X and Y; thus we may also consider random vectors X and Y from different sublinear expectation spaces.

Remark 2.6: Compared with the classical stochastic orders, here we impose the extra condition −𝔼[−φ(X)] ≤ −𝔼[−φ(Y)] on the uncertainty orders; in a linear probability space this condition is redundant.

Recalling Lemma 3.4 (the representation theorem) in Chapter I of [8], we see that it is reasonable to define the uncertainty orders as above. In fact, condition (1) is equivalent to
$$\max_{\theta\in\Theta} E_\theta[\varphi(X)] \le \max_{\theta\in\Theta} E_\theta[\varphi(Y)] \quad\text{and}\quad \min_{\theta\in\Theta} E_\theta[\varphi(X)] \le \min_{\theta\in\Theta} E_\theta[\varphi(Y)], \tag{2}$$
where Θ is a family of probability measures on (ℝ^n, 𝓑(ℝ^n)). In this sense, X ⪯_mon^𝔼 Y means that the best and the worst expectations, over the family of linear expectations {E_θ : θ ∈ Θ}, of the loss φ(X) are both less than or equal to those of φ(Y), for all increasing functions φ ∈ C_{l.Lip}(ℝ^n).

For any loss random variable X ∈ 𝓗 on a sublinear expectation space (Ω, 𝓗, 𝔼), we have the following four characteristic parameters:
$$\underline{\mu} = -\mathbb{E}[-X], \quad \bar{\mu} = \mathbb{E}[X], \quad \underline{\sigma}^2 = -\mathbb{E}[-X^2], \quad \bar{\sigma}^2 = \mathbb{E}[X^2].$$
The intervals [μ_, μ̄] and [σ_², σ̄²] characterize the mean-uncertainty and the variance-uncertainty of X, respectively. From Definition 2.4 of the uncertainty orders, we easily obtain that for another loss random variable Y in (Ω, 𝓗, 𝔼), with mean-uncertainty interval [v_, v̄] and variance-uncertainty interval [ρ_², ρ̄²],

  • if X ⪯_mon^𝔼 Y or X ⪯_icon^𝔼 Y, then μ̄ ≤ v̄ and μ_ ≤ v_;

  • if X ⪯_con^𝔼 Y, then μ̄ = v̄, μ_ = v_, σ̄² ≤ ρ̄² and σ_² ≤ ρ_².

In the following three theorems we show that, for some particular distributions, the above necessary conditions are also sufficient. These distributions are very important in sublinear expectation space theory.

Theorem 2.7: Let η =_d N([μ_, μ̄] × {0}) and ξ =_d N([v_, v̄] × {0}) be two maximal distributions on a sublinear expectation space (Ω, 𝓗, 𝔼). Then we have

  • (i)

    η ⪯_mon^𝔼 ξ ⟺ η ⪯_icon^𝔼 ξ ⟺ μ̄ ≤ v̄ and μ_ ≤ v_.

  • (ii)

    η ⪯_con^𝔼 ξ ⟺ μ̄ = v̄ and μ_ = v_, i.e., η =_d ξ.

Proof: (i) From the definitions of the monotonic order and the increasing convex order, it is obvious that η ⪯_mon^𝔼 ξ implies η ⪯_icon^𝔼 ξ.

If η ⪯_icon^𝔼 ξ, choosing the increasing convex function φ(x) = x, which belongs to C_{l.Lip}(ℝ), we obtain from the definitions of the increasing convex order and of maximal distributions that
$$\bar{\mu} = \mathbb{E}[\eta] \le \mathbb{E}[\xi] = \bar{v} \quad\text{and}\quad \underline{\mu} = -\mathbb{E}[-\eta] \le -\mathbb{E}[-\xi] = \underline{v}.$$
If μ̄ ≤ v̄ and μ_ ≤ v_, then for any increasing function φ ∈ C_{l.Lip}(ℝ), from the equivalent definition of the maximal distribution (Definition 1.1 in Chapter II of [8]), we have
$$\mathbb{E}[\varphi(\eta)] = \max_{\underline{\mu} \le y \le \bar{\mu}} \varphi(y) = \varphi(\bar{\mu}) \le \varphi(\bar{v}) = \max_{\underline{v} \le y \le \bar{v}} \varphi(y) = \mathbb{E}[\varphi(\xi)]$$
and
$$-\mathbb{E}[-\varphi(\eta)] = \min_{\underline{\mu} \le y \le \bar{\mu}} \varphi(y) = \varphi(\underline{\mu}) \le \varphi(\underline{v}) = \min_{\underline{v} \le y \le \bar{v}} \varphi(y) = -\mathbb{E}[-\varphi(\xi)].$$
Thus η ⪯_mon^𝔼 ξ.

(ii) It is obvious that η =_d ξ implies η ⪯_con^𝔼 ξ. For the other direction, choosing the convex functions φ(x) = x and φ(x) = −x respectively, we easily obtain μ̄ = v̄ and μ_ = v_ from the definitions of ⪯_con^𝔼 and of maximal distributions. □

Theorem 2.8: Let X =_d N({0} × [σ_², σ̄²]) and Y =_d N({0} × [ρ_², ρ̄²]) be two G-normal distributions on a sublinear expectation space (Ω, 𝓗, 𝔼). Then we have

  • (i)

    X ⪯_mon^𝔼 Y ⟺ σ̄² = ρ̄² and σ_² = ρ_², i.e., X =_d Y.

  • (ii)

    X ⪯_con^𝔼 Y ⟺ X ⪯_icon^𝔼 Y ⟺ σ̄² ≤ ρ̄² and σ_² ≤ ρ_².

Proof: (i) That X =_d Y implies X ⪯_mon^𝔼 Y is obvious.

Suppose X ⪯_mon^𝔼 Y. Recall from [8] that 𝔼[φ(·)] can be explicitly calculated for G-normal distributions whenever φ ∈ C_{l.Lip}(ℝ) is convex or concave. For the increasing convex function φ(x) = x⁺, which belongs to C_{l.Lip}(ℝ), we have
$$\mathbb{E}[\varphi(X)] = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{+\infty} \varphi(\bar{\sigma} y)\exp\Big(-\frac{y^2}{2}\Big)dy = \frac{\bar{\sigma}}{\sqrt{2\pi}}\int_{0}^{+\infty} y\exp\Big(-\frac{y^2}{2}\Big)dy = \frac{\bar{\sigma}}{\sqrt{2\pi}}.$$
Similarly we obtain $\mathbb{E}[\varphi(Y)] = \frac{\bar{\rho}}{\sqrt{2\pi}}$. Since σ̄, ρ̄ ≥ 0, we have σ̄² ≤ ρ̄² by the definition of ⪯_mon^𝔼. On the other hand,
$$-\mathbb{E}[-\varphi(X)] = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{+\infty} \varphi(\underline{\sigma} y)\exp\Big(-\frac{y^2}{2}\Big)dy = \frac{\underline{\sigma}}{\sqrt{2\pi}}\int_{0}^{+\infty} y\exp\Big(-\frac{y^2}{2}\Big)dy = \frac{\underline{\sigma}}{\sqrt{2\pi}}.$$
Similarly we can get $-\mathbb{E}[-\varphi(Y)] = \frac{\underline{\rho}}{\sqrt{2\pi}}$. Since σ_, ρ_ ≥ 0, we have σ_² ≤ ρ_² from the definition of ⪯_mon^𝔼.

Taking the increasing concave function φ(x) = −x⁻ and using the same arguments as for φ(x) = x⁺, we derive σ̄² ≥ ρ̄² and σ_² ≥ ρ_². We conclude from the above that σ̄² = ρ̄² and σ_² = ρ_², i.e., X =_d Y.

(ii) Clearly we have X ⪯_con^𝔼 Y ⟹ X ⪯_icon^𝔼 Y. Repeating the arguments of part (i) with the increasing convex function φ(x) = x⁺, we have X ⪯_icon^𝔼 Y ⟹ σ̄² ≤ ρ̄² and σ_² ≤ ρ_².

It only remains to show that σ̄² ≤ ρ̄² and σ_² ≤ ρ_² ⟹ X ⪯_con^𝔼 Y. For any convex function φ ∈ C_{l.Lip}(ℝ), consider the following G-heat equation for X:
$$\frac{\partial u}{\partial t} - \frac{1}{2}\Big(\bar{\sigma}^2\Big(\frac{\partial^2 u}{\partial x^2}\Big)^+ - \underline{\sigma}^2\Big(\frac{\partial^2 u}{\partial x^2}\Big)^-\Big) = 0, \quad u|_{t=0} = \varphi. \tag{3}$$
We have that $u(t,x) := \mathbb{E}[\varphi(x + \sqrt{t}X)]$, (t, x) ∈ [0, ∞) × ℝ, is the unique viscosity solution of (3). Furthermore, one can check that u(t, x) is convex in x. Thus the G-heat equation (3) becomes
$$\frac{\partial u}{\partial t} - \frac{1}{2}\bar{\sigma}^2\Big(\frac{\partial^2 u}{\partial x^2}\Big)^+ = 0, \quad u|_{t=0} = \varphi. \tag{4}$$
Similarly, $v(t,x) := \mathbb{E}[\varphi(x + \sqrt{t}Y)]$, (t, x) ∈ [0, ∞) × ℝ, is the unique viscosity solution of the G-heat equation
$$\frac{\partial v}{\partial t} - \frac{1}{2}\bar{\rho}^2\Big(\frac{\partial^2 v}{\partial x^2}\Big)^+ = 0, \quad v|_{t=0} = \varphi. \tag{5}$$
Since σ̄² ≤ ρ̄², the comparison theorem for viscosity solutions of (4) and (5) (see, for example, Theorem 2.6 in Appendix C of [8]) yields
$$u(t,x) \le v(t,x), \quad (t,x) \in [0, +\infty) \times \mathbb{R}.$$
In particular, taking (t, x) = (1, 0), we have
$$\mathbb{E}[\varphi(X)] \le \mathbb{E}[\varphi(Y)]. \tag{6}$$
Since −φ is a concave function, we can similarly show that $m(t,x) := \mathbb{E}[-\varphi(x + \sqrt{t}X)]$ and $n(t,x) := \mathbb{E}[-\varphi(x + \sqrt{t}Y)]$, (t, x) ∈ [0, ∞) × ℝ, are the unique viscosity solutions of the G-heat equations
$$\frac{\partial m}{\partial t} + \frac{1}{2}\underline{\sigma}^2\Big(\frac{\partial^2 m}{\partial x^2}\Big)^- = 0, \quad m|_{t=0} = -\varphi, \qquad\text{and}\qquad \frac{\partial n}{\partial t} + \frac{1}{2}\underline{\rho}^2\Big(\frac{\partial^2 n}{\partial x^2}\Big)^- = 0, \quad n|_{t=0} = -\varphi. \tag{7}$$
Since σ_² ≤ ρ_², the comparison theorem for viscosity solutions of the equations in (7), with (t, x) = (1, 0), gives
$$\mathbb{E}[-\varphi(X)] \ge \mathbb{E}[-\varphi(Y)], \quad\text{i.e.,}\quad -\mathbb{E}[-\varphi(X)] \le -\mathbb{E}[-\varphi(Y)]. \tag{8}$$
Combining (6) with (8), we get X ⪯_con^𝔼 Y. The proof is complete. □
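The comparison argument above rests on the G-heat equation (3). As a rough numerical illustration (ours, not part of the paper), the following sketch solves (3) with an explicit finite-difference scheme on a truncated domain, using Dirichlet boundary data given by φ. It approximately reproduces the closed-form value 𝔼[φ(X)] = σ̄/√(2π) for φ(x) = x⁺ and lets one see the monotonicity in σ̄ that the comparison theorem formalizes. The domain size, grid and time step are ad hoc choices for the example.

```python
import numpy as np

def g_heat_value(phi, sig_lo, sig_hi, T=1.0, L=8.0, nx=401, dt=2.5e-4):
    """Explicit finite-difference sketch for the G-heat equation
    u_t = 0.5*(sig_hi^2*(u_xx)^+ - sig_lo^2*(u_xx)^-),  u(0, .) = phi,
    on the truncated domain [-L, L] with Dirichlet data phi at the boundary.
    Returns an approximation of u(T, 0) = E[phi(X)] for X ~ N({0} x [sig_lo^2, sig_hi^2])."""
    x = np.linspace(-L, L, nx)
    dx = x[1] - x[0]
    assert dt <= dx**2 / max(sig_hi, 1e-12)**2, "explicit scheme stability condition"
    u = phi(x)
    for _ in range(int(round(T / dt))):
        uxx = np.zeros_like(u)
        uxx[1:-1] = (u[2:] - 2 * u[1:-1] + u[:-2]) / dx**2
        G = 0.5 * (sig_hi**2 * np.maximum(uxx, 0) - sig_lo**2 * np.maximum(-uxx, 0))
        u = u + dt * G
        u[0], u[-1] = phi(x[0]), phi(x[-1])   # crude boundary condition, fine for large L
    return u[nx // 2]                          # value at x = 0

phi_plus = lambda x: np.maximum(x, 0.0)
for sig_hi in (1.0, 1.5):
    val = g_heat_value(phi_plus, sig_lo=0.5, sig_hi=sig_hi)
    print(sig_hi, val, sig_hi / np.sqrt(2 * np.pi))   # values increase with sig_hi
```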

Theorem 2.9: Let (η, X) =_d N([μ_, μ̄] × [σ_², σ̄²]) and (ξ, Y) =_d N([v_, v̄] × [ρ_², ρ̄²]) be two G-distributions on a sublinear expectation space (Ω, 𝓗, 𝔼). Moreover, suppose that η is weakly independent from X and that ξ is weakly independent from Y. Then we have

  • (i)

    (η, X) ⪯_mon^𝔼 (ξ, Y) if and only if μ̄ ≤ v̄, μ_ ≤ v_, σ̄² = ρ̄² and σ_² = ρ_².

  • (ii)

    (η, X) ⪯_con^𝔼 (ξ, Y) if and only if μ̄ = v̄, μ_ = v_, σ̄² ≤ ρ̄² and σ_² ≤ ρ_².

  • (iii)

    (η, X) ⪯_icon^𝔼 (ξ, Y) if and only if μ̄ ≤ v̄, μ_ ≤ v_, σ̄² ≤ ρ̄² and σ_² ≤ ρ_².

Proof: The “only if” parts follow by combining Theorem 2.7 and Theorem 2.8. For example, if (η, X) ⪯_mon^𝔼 (ξ, Y), then η ⪯_mon^𝔼 ξ and X ⪯_mon^𝔼 Y, and the stated conditions follow from Theorem 2.7 and Theorem 2.8.

For the converse implications, the key tool is again the comparison theorem for viscosity solutions of G-equations. We only show case (iii); cases (i) and (ii) are verified by analogous arguments.

Assume μ̄ ≤ v̄, μ_ ≤ v_, σ̄² ≤ ρ̄² and σ_² ≤ ρ_². For any increasing convex function φ ∈ C_{l.Lip}(ℝ²), by Proposition 1.10 in Chapter II of [8], $u(t,x,y) := \mathbb{E}[\varphi(x + \sqrt{t}X,\ y + t\eta)]$, (t, x, y) ∈ [0, ∞) × ℝ × ℝ, is the unique viscosity solution of the following G-equation for (η, X):
$$\frac{\partial u}{\partial t} - G\Big(\frac{\partial u}{\partial y}, \frac{\partial^2 u}{\partial x^2}\Big) = 0, \quad u|_{t=0} = \varphi, \tag{9}$$
where $G(p, a) = \mathbb{E}\big[\tfrac{1}{2}aX^2 + p\eta\big]$. Since η is weakly independent from X, we have
$$G(p, a) = \mathbb{E}\Big[\frac{1}{2}aX^2 + p\eta\Big] = \mathbb{E}\Big[\frac{1}{2}aX^2\Big] + \mathbb{E}[p\eta] = \frac{1}{2}\big(\bar{\sigma}^2 a^+ - \underline{\sigma}^2 a^-\big) + \big(\bar{\mu} p^+ - \underline{\mu} p^-\big).$$
On the other hand, since φ is an increasing convex function in C_{l.Lip}(ℝ²), u(t, x, y) is convex in x and increasing in y. Thus (9) becomes
$$\frac{\partial u}{\partial t} - \Big(\frac{1}{2}\bar{\sigma}^2\Big(\frac{\partial^2 u}{\partial x^2}\Big)^+ + \bar{\mu}\Big(\frac{\partial u}{\partial y}\Big)^+\Big) = 0, \quad u|_{t=0} = \varphi. \tag{10}$$
Similarly, $v(t,x,y) := \mathbb{E}[\varphi(x + \sqrt{t}Y,\ y + t\xi)]$, (t, x, y) ∈ [0, ∞) × ℝ × ℝ, is the unique viscosity solution of
$$\frac{\partial v}{\partial t} - \Big(\frac{1}{2}\bar{\rho}^2\Big(\frac{\partial^2 v}{\partial x^2}\Big)^+ + \bar{v}\Big(\frac{\partial v}{\partial y}\Big)^+\Big) = 0, \quad v|_{t=0} = \varphi. \tag{11}$$
Since μ̄ ≤ v̄ and σ̄² ≤ ρ̄², the comparison theorem for viscosity solutions of (10) and (11), with (t, x, y) = (1, 0, 0), yields
$$\mathbb{E}[\varphi(X, \eta)] \le \mathbb{E}[\varphi(Y, \xi)]. \tag{12}$$
Since φ is increasing and convex, we can similarly show that $m(t,x,y) := \mathbb{E}[-\varphi(x + \sqrt{t}X,\ y + t\eta)]$ and $n(t,x,y) := \mathbb{E}[-\varphi(x + \sqrt{t}Y,\ y + t\xi)]$, (t, x, y) ∈ [0, ∞) × ℝ × ℝ, are the unique viscosity solutions of the G-equations
$$\frac{\partial m}{\partial t} + \frac{1}{2}\underline{\sigma}^2\Big(\frac{\partial^2 m}{\partial x^2}\Big)^- + \underline{\mu}\Big(\frac{\partial m}{\partial y}\Big)^- = 0, \quad m|_{t=0} = -\varphi, \qquad\text{and}\qquad \frac{\partial n}{\partial t} + \frac{1}{2}\underline{\rho}^2\Big(\frac{\partial^2 n}{\partial x^2}\Big)^- + \underline{v}\Big(\frac{\partial n}{\partial y}\Big)^- = 0, \quad n|_{t=0} = -\varphi. \tag{13}$$
Since μ_ ≤ v_ and σ_² ≤ ρ_², the comparison theorem for viscosity solutions of the equations in (13), with (t, x, y) = (1, 0, 0), yields
$$-\mathbb{E}[-\varphi(X, \eta)] \le -\mathbb{E}[-\varphi(Y, \xi)]. \tag{14}$$
Combining (12) with (14), we obtain (η, X) ⪯_icon^𝔼 (ξ, Y). The proof is complete. □

Remark 2.10: In the classical linear expectation framework, for stochastic order results on normal distributions the reader can refer to [12] and [16]. We list the results as follows. Let X =_d N(μ, σ²) and Y =_d N(v, ρ²) be two normal distributions on a probability space (Ω, ℱ, P). Then we have

  • X ⪯_mon^P Y if and only if μ ≤ v and σ² = ρ²;

  • X ⪯_con^P Y if and only if μ = v and σ² ≤ ρ²;

  • X ⪯_icon^P Y if and only if μ ≤ v and σ² ≤ ρ².

Hence, our results generalize the classical results.

Remark 2.11: Theorem 2.9 may look like a mere combination of the classical stochastic order results for the normal distributions X =_d N(μ̄, σ̄²), Y =_d N(v̄, ρ̄²) and X =_d N(μ_, σ_²), Y =_d N(v_, ρ_²). However, it cannot be understood in this way, because a G-distribution is not simply a collection of a family of normal distributions; see [8].

In this subsection, we have introduced some uncertainty orders in the sublinear expectation space. Here we only consider random variables with some special distributions; it is not easy to characterize other distributions. For more properties and computations, the reader can refer to [17, 18].

3 Characterizations of uncertainty orders from the probability measures viewpoint

In this section, we first list some properties of capacities and quantile functions. We then recall recent results on uncertainty orders in a capacity space obtained in [14, 15], and establish some characterizations by distortion functions. Finally, we derive characterizations of uncertainty orders in the capacity space induced by a sublinear expectation space.

3.1 Quantile functions and risk measures

Let (Ω, ℱ) be a measurable space; for simplicity we only consider bounded random variables. Let L^∞ = L^∞(Ω, ℱ) be the space of bounded ℱ-measurable functions, endowed with the supremum norm ∥·∥.

First, we introduce some properties of set functions μ : ℱ → [0, 1], following [3]:

  • Monotonicity: if A, B ∈ ℱ and A ⊆ B, then μ(A) ≤ μ(B);

  • Normalization: μ(∅) = 0 and μ(Ω) = 1;

  • Continuity from below: if A_n, A ∈ ℱ and A_n ↑ A, then μ(A_n) ↑ μ(A);

  • Continuity from above: if A_n, A ∈ ℱ and A_n ↓ A, then μ(A_n) ↓ μ(A).

Now we introduce the definitions of capacity and Choquet integral (see, for instance, [19], [3]).

Definition 3.1: A set function μ : ℱ → [0, 1] is called a capacity if it is monotonic, normalized and continuous from below and continuous from above.

Definition 3.2: Let μ be a capacity and X ∈ L^∞. Define
$$\mu(X) := \int_{-\infty}^{0}\big(\mu(X > t) - 1\big)\,dt + \int_{0}^{+\infty}\mu(X > t)\,dt.$$
We call μ(X) the Choquet integral of X with respect to the capacity μ.

Let μ be a capacity and X ∈ L^∞. Put
$$G_{\mu, X}(x) := \mu(X > x).$$
We call G_{μ,X} the decreasing distribution function of X with respect to μ. Taking into account the continuity from below of the capacity μ, we derive that G_{μ,X} is right continuous. We now introduce the definition of quantile functions of X with respect to μ, following [3].

Definition 3.3: ([3]). Let μ be a capacity and X ∈ L^∞. We say that Ğ_{μ,X} is a quantile function of X under the capacity μ if for all λ ∈ (0, 1),
$$\breve{G}^{+}_{\mu, X}(\lambda) := \sup\{x \mid G_{\mu, X}(x) \ge \lambda\} \ \ge\ \breve{G}_{\mu, X}(\lambda) \ \ge\ \sup\{x \mid G_{\mu, X}(x) > \lambda\} =: \breve{G}^{-}_{\mu, X}(\lambda).$$
Ğ⁺_{μ,X}(λ) and Ğ⁻_{μ,X}(λ) are called the upper and lower (1 − λ)-quantiles of X with respect to μ.

Remark 3.4: It is easy to verify that the lower and upper (1 − λ)-quantiles of the distribution of X with respect to μ can also be represented as
$$\breve{G}^{-}_{\mu, X}(\lambda) = \inf\{x \mid \mu(X > x) \le \lambda\}, \qquad \breve{G}^{+}_{\mu, X}(\lambda) = \inf\{x \mid \mu(X > x) < \lambda\}. \tag{15}$$
Any two quantile functions coincide for all levels λ except on at most a countable set. We also have the following properties of the quantile functions of X under the capacity μ (see Chapter 1 and Chapter 4 of [3]). Note that (iv) of Lemma 3.5 holds here because the capacity is continuous from below and from above; the reader can adapt the proof of Lemma A.23 in [12], see also Remark 2.4 in [15].

Lemma 3.5: Let μ be a capacity and X ∈ L^∞. Then we have

  • (i)

    Ğμ, X (·) is a decreasing function;

  • (ii)

    If ν is another capacity and Y ∈ L^∞ such that G_{ν,Y} ≥ G_{μ,X}, then Ğ_{ν,Y} ≥ Ğ_{μ,X} except on at most a countable set;

  • (iii)

    $\mu(X) = \int_0^1 \breve{G}_{\mu, X}(t)\,dt$;

  • (iv)

    If u is an increasing function, then Ğ_{μ, u(X)} = u ∘ Ğ_{μ, X} except on at most a countable set.

Note that the VaR of a financial position X ∈ L^∞ under a given probability measure P on (Ω, ℱ) is a quantile function of the distribution of X. We give the following definitions, which generalize the definitions of VaR and AVaR under an a priori given probability measure. These definitions can also be found in [14, 15], where Grigorova used different notations, but the economic interpretations are the same.

Definition 3.6: Let μ be a capacity and X ∈ L^∞ a loss random variable. For λ ∈ (0, 1), we define the Value at Risk at confidence level λ of X under the capacity μ as
$$\mathrm{VaR}_{\mu, \lambda}(X) := \breve{G}^{-}_{\mu, X}(\lambda) = \inf\{x \mid \mu(X > x) \le \lambda\}.$$

Definition 3.7: The Average Value at Risk under a capacity μ at level λ ∈ (0, 1] of a loss position X ∈ L^∞ is given by
$$\mathrm{AVaR}_{\mu, \lambda}(X) := \frac{1}{\lambda}\int_0^\lambda \mathrm{VaR}_{\mu, t}(X)\,dt.$$

Remark 3.8: For any X ∈ L^∞, Definition 3.6 can also be used to define VaR_{μ,0}(X) and VaR_{μ,1}(X); note that VaR_{μ,0}(X) ≤ sup X and VaR_{μ,1}(X) = −∞. In the sequel, we will often use the following equivalence, which holds for all x ∈ ℝ and λ ∈ [0, 1]:
$$\mathrm{VaR}_{\mu, \lambda}(X) \le x \iff G_{\mu, X}(x) \le \lambda. \tag{16}$$
Since any two quantile functions coincide for all levels λ except on at most a countable set, the AVaR under a capacity μ at λ ∈ (0, 1] of a financial position X ∈ L^∞ can be defined through any quantile function of X under μ, i.e.,
$$\mathrm{AVaR}_{\mu, \lambda}(X) = \frac{1}{\lambda}\int_0^\lambda \breve{G}_{\mu, X}(t)\,dt.$$
We now list some concepts of uncertainty orders under a capacity introduced by Grigorova; see Definition 2.7 and Definition 3.1 in [15].
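As a small numerical illustration of Definitions 3.6 and 3.7 (ours, not from the paper), the sketch below works on a finite sample space, takes μ to be the upper envelope of a made-up family of two probability measures (the setting of Section 3.2), tabulates the decreasing distribution function G_{μ,X}, and computes VaR_{μ,λ} by a grid search and AVaR_{μ,λ} by a midpoint Riemann sum.

```python
import numpy as np

# Toy illustration of Definitions 3.6 and 3.7 on a finite sample space.
# The capacity is the upper envelope of a made-up family of two probability
# measures (the setting of Section 3.2); all numbers are illustrative only.
X = np.array([-1.0, 0.5, 1.0, 2.0, 3.5, 5.0])            # a loss position
family = [np.array([0.3, 0.2, 0.2, 0.1, 0.1, 0.1]),
          np.array([0.1, 0.1, 0.2, 0.2, 0.2, 0.2])]

def mu(event):
    """Capacity mu(A) = sup_Q Q(A) over the family."""
    return max(float(q[event].sum()) for q in family)

grid = np.linspace(-2.0, 6.0, 3201)                      # x-grid covering the range of X
G_on_grid = np.array([mu(X > x) for x in grid])          # G_{mu,X}(x) = mu(X > x)

def VaR(lam):
    """VaR_{mu,lam}(X) = inf{x : mu(X > x) <= lam}, approximated on the grid."""
    ok = G_on_grid <= lam
    return grid[ok][0] if ok.any() else np.inf

def AVaR(lam, n=2000):
    """AVaR_{mu,lam}(X) = (1/lam) * integral_0^lam VaR_{mu,t}(X) dt (midpoint rule)."""
    ts = (np.arange(n) + 0.5) * lam / n
    return float(np.mean([VaR(t) for t in ts]))

lam = 0.25
print("G_{mu,X}(1.0) =", mu(X > 1.0))
print("VaR  =", VaR(lam))                                # 3.5 for this example
print("AVaR =", AVaR(lam))                               # about 4.7 for this example
```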

Definition 3.9: Let μ be a capacity. Let X and Y be two loss positions in L^∞.

  • (i)

    X is said to precede Y in the increasing convex ordering under μ, denoted by X ⪯_icon^μ Y, if for all increasing and convex functions ϕ : ℝ → ℝ, μ(ϕ(X)) ≤ μ(ϕ(Y)).

  • (ii)

    X is said to precede Y in the monotone ordering under μ, denoted by X ⪯_mon^μ Y, if for all increasing functions ϕ : ℝ → ℝ, μ(ϕ(X)) ≤ μ(ϕ(Y)).

Some characterizations of these uncertainty orders were considered in [14, 15]; see Propositions 2.6-2.8 and Proposition 3.1 in [15].

Proposition 3.10: ([15]). Let μ be a capacity and X, Y ∈ L^∞. The following statements hold.

  • (i)

    Y ⪯_icon^μ X ⟺ μ((X − b)⁺) ≥ μ((Y − b)⁺) for any b ∈ ℝ.

  • (ii)

    Y ⪯_icon^μ X ⟺ AVaR_{μ,λ}(X) ≥ AVaR_{μ,λ}(Y) for any λ ∈ (0, 1).

  • (iii)

    Y ⪯_mon^μ X ⟺ G_{μ,X}(x) ≥ G_{μ,Y}(x) for all x ∈ ℝ ⟺ VaR_{μ,λ}(X) ≥ VaR_{μ,λ}(Y) for all λ ∈ (0, 1).

Remark 3.11: In Proposition 3.10 we have used our notation; in fact, γ_X(t) in Proposition 2.6 of [15] is equal to our Ğ_{μ,−X}(1 − t) for all t ∈ (0, 1), so (ii) and (iii) of our claims follow by a simple transformation.

Here we give another characterization of the uncertainty orders ⪯_mon^μ and ⪯_icon^μ in terms of distortion functions. Distortion functions play an important role in mathematical finance; we refer to [20] for decision choice under risk, [21] for insurance premiums and [12] for risk measures. Motivated by [22], we characterize the uncertainty orders ⪯_mon^μ and ⪯_icon^μ by distortion functions. We first recall the definition of a distortion function, which can be found in any of the literature referred to above.

Definition 3.12: A distortion function is a non-decreasing function g : [0, 1] → [0, 1] such that g(0) = 0 and g(1) = 1. The distortion risk measure associated with a distortion function g and a capacity μ is denoted by g∘μ(·) and is defined as
$$g\circ\mu(X) = \int_{-\infty}^{0}\big(g[\mu(X > x)] - 1\big)\,dx + \int_{0}^{+\infty} g[\mu(X > x)]\,dx, \quad X \in L^\infty.$$
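The following sketch (again a made-up finite example of ours, not from the paper) evaluates the distortion risk measure g∘μ(X) by numerically integrating the defining formula, and checks the identity g∘μ = AVaR_{μ,λ} for g(x) = min(x/λ, 1), which is equation (18) in the proof of Proposition 3.13 below.

```python
import numpy as np

# Toy check of Definition 3.12 and of identity (18) from the proof of
# Proposition 3.13: for g(x) = min(x/lam, 1), the distortion risk measure
# g∘mu equals AVaR_{mu,lam}. The capacity and the loss X are made up.
X = np.array([-1.0, 0.5, 1.0, 2.0, 3.5, 5.0])
family = [np.array([0.3, 0.2, 0.2, 0.1, 0.1, 0.1]),
          np.array([0.1, 0.1, 0.2, 0.2, 0.2, 0.2])]
mu = lambda event: max(float(q[event].sum()) for q in family)   # mu(A) = sup_Q Q(A)

def distortion_risk(g, lo=-10.0, hi=10.0, n=40001):
    """g∘mu(X) = int_{-inf}^0 (g[mu(X>x)] - 1) dx + int_0^inf g[mu(X>x)] dx (truncated)."""
    xs = np.linspace(lo, hi, n)
    dx = xs[1] - xs[0]
    vals = np.array([g(mu(X > x)) for x in xs])
    return np.sum(vals[xs < 0] - 1.0) * dx + np.sum(vals[xs >= 0]) * dx

def AVaR(lam, n=4000):
    """AVaR_{mu,lam}(X) computed from Definitions 3.6 and 3.7."""
    grid = np.linspace(-2.0, 6.0, 3201)
    G_on_grid = np.array([mu(X > x) for x in grid])
    def VaR(t):
        ok = G_on_grid <= t
        return grid[ok][0] if ok.any() else np.inf
    ts = (np.arange(n) + 0.5) * lam / n
    return float(np.mean([VaR(t) for t in ts]))

lam = 0.25
g_lam = lambda u: min(u / lam, 1.0)
print(distortion_risk(g_lam), AVaR(lam))   # both approximately 4.7 in this example
```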

Proposition 3.13: Let μ be a capacity and let X, Y ∈ L^∞ be two losses. Then:

  • (i)

    Y ⪯_mon^μ X ⟺ g∘μ(X) ≥ g∘μ(Y) for all distortion functions g.

  • (ii)

    Y ⪯_icon^μ X ⟺ g∘μ(X) ≥ g∘μ(Y) for all concave distortion functions g.

Proof: (i) The “⟹” implication follows immediately from G_{μ,X}(x) ≥ G_{μ,Y}(x) and the non-decreasing property of any distortion function.

“⟸” For λ ∈ (0, 1), let the distortion function g be defined by g(x) = I_{x>λ}, x ∈ [0, 1]. By the translation invariance of VaR_{μ,λ}(·) and g∘μ(·), we may assume without loss of generality that X ≥ 0. Then by (16), we have
$$g[\mu(X > x)] = \begin{cases} 1, & \text{if } \mathrm{VaR}_{\mu, \lambda}(X) > x, \\ 0, & \text{if } \mathrm{VaR}_{\mu, \lambda}(X) \le x. \end{cases}$$
Hence g∘μ(X) = VaR_{μ,λ}(X). Then by Proposition 3.10(iii), we have Y ⪯_mon^μ X.

(ii) “⟸” For any continuous distortion function g, by Fubini's theorem, we get
$$g\circ\mu(X) = \int_{[0,1]} \mathrm{VaR}_{\mu, t}(X)\,dg(t).$$
For λ ∈ (0, 1), taking
$$g(x) = \min\Big(\frac{x}{\lambda}, 1\Big), \quad x \in [0, 1], \tag{17}$$
we see that g is a continuous concave distortion function, and furthermore
$$g\circ\mu(X) = \frac{1}{\lambda}\int_0^\lambda \mathrm{VaR}_{\mu, t}(X)\,dt = \mathrm{AVaR}_{\mu, \lambda}(X). \tag{18}$$
Then by Proposition 3.10, we have Y ⪯_icon^μ X.

“⟹” A concave distortion function may fail to be continuous only at the point 0; hence, without loss of generality, we only need to show g∘μ(X) ≥ g∘μ(Y) for all continuous concave distortion functions g.

First we prove the claim for continuous concave piecewise linear distortion functions. As constructed in [22], any such function can be written as
$$g(x) = \sum_{i=1}^{n} \alpha_i(\beta_i - \beta_{i+1})\min(x/\alpha_i, 1),$$
where 0 = α_0 < α_1 < ⋯ < α_{n−1} < α_n = 1 and β_1 > β_2 > ⋯ > β_n > β_{n+1} = 0. By (17) and (18), g∘μ(·) can be represented as
$$g\circ\mu(\cdot) = \sum_{i=1}^{n} \alpha_i(\beta_i - \beta_{i+1})\,\mathrm{AVaR}_{\mu, \alpha_i}(\cdot).$$
By Proposition 3.10, we thus obtain g∘μ(X) ≥ g∘μ(Y) for all continuous concave piecewise linear distortion functions g.

For an arbitrary continuous concave distortion function g, there exists an increasing sequence of continuous concave piecewise linear distortion functions g_n such that lim_{n→∞} g_n(x) = g(x) for all x ∈ [0, 1]. Since g_n∘μ(X) ≥ g_n∘μ(Y) for all n, by the monotone convergence theorem we have g∘μ(X) ≥ g∘μ(Y). The proof is complete. □

Proposition 3.14: If μ is a submodular capacity, then for any concave distortion function g, g∘μ(−·) is a coherent risk measure on L^∞. In particular, for λ ∈ (0, 1], AVaR_{μ,λ}(−·) is a coherent risk measure.

Proof: It is known that, for any submodular capacity ν, the Choquet integral ν(−·) is a coherent risk measure on L^∞ (see [12], Theorem 4.88). So the point is to show that ν(A) := g(μ(A)), A ∈ ℱ, is a submodular capacity.

Suppose that g is concave. Take A, B ∈ ℱ with μ(A) ≤ μ(B). We show that
$$g(\mu(A)) - g(\mu(A \cap B)) \ \ge\ g(\mu(A \cup B)) - g(\mu(B)).$$
Set r := μ(A ∪ B) − μ(B); by the submodularity of μ we have μ(A) − μ(A ∩ B) ≥ μ(A ∪ B) − μ(B) = r ≥ 0. The claim is trivial if r = 0. For r > 0, the concavity of g yields
$$\frac{\nu(A) - \nu(A \cap B)}{\mu(A) - \mu(A \cap B)} \ \ge\ \frac{\nu(A \cup B) - \nu(B)}{\mu(A \cup B) - \mu(B)}.$$
Multiplying both sides by (μ(A) − μ(A ∩ B)) and using μ(A) − μ(A ∩ B) ≥ μ(A ∪ B) − μ(B) derives the result.

Finally, let g be defined by (17); then AVaR_{μ,λ}(·) = g∘μ(·) for this continuous concave distortion function. Hence AVaR_{μ,λ}(−·) is a coherent risk measure. The proof is complete. □
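The key step in the proof above is that composing a submodular capacity with a concave distortion preserves submodularity. The brute-force check below (an illustration of ours under made-up data) enumerates all pairs of events on a four-point space; the base capacity is itself built as a concave distortion of a probability measure, which is a standard way to produce a submodular capacity.

```python
import numpy as np
from itertools import combinations, chain

# Brute-force check of the key step in the proof of Proposition 3.14:
# if mu is submodular and g is concave, then A -> g(mu(A)) is submodular.
# Here mu is a concave distortion of a probability measure on a 4-point
# space; everything is a made-up toy example.
omega = range(4)
P = np.array([0.1, 0.2, 0.3, 0.4])

g0 = lambda t: np.sqrt(t)          # concave distortion defining mu
g  = lambda t: t ** 0.3            # another concave distortion

def subsets(s):
    s = list(s)
    return chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))

def mu(A):
    return float(g0(P[list(A)].sum())) if A else 0.0

def nu(A):                          # nu = g(mu(.))
    return float(g(mu(A))) if A else 0.0

def is_submodular(c):
    for A in subsets(omega):
        for B in subsets(omega):
            A_, B_ = set(A), set(B)
            if c(A_ | B_) + c(A_ & B_) > c(A_) + c(B_) + 1e-12:
                return False
    return True

print("mu submodular:   ", is_submodular(mu))
print("g(mu) submodular:", is_submodular(nu))
```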

3.2 Characterizations from the probability measures viewpoint

Consider a sublinear expectation space (Ω, 𝓗, 𝔼). In this subsection, let Ω be a complete separable metric space equipped with a distance d, let 𝓑(Ω) be the Borel σ-algebra of Ω, and let 𝒨 be the collection of all probability measures on (Ω, 𝓑(Ω)). Let B_b(Ω) be the space of all bounded 𝓑(Ω)-measurable real functions, and assume that 𝓗 = B_b(Ω). For the sublinear operator 𝔼, we assume that there exists a family of probability measures 𝒫 ⊂ 𝒨 such that
$$\mathbb{E}[X] = \sup_{Q \in \mathcal{P}} E_Q[X], \quad X \in \mathcal{H}.$$

We further assume that the induced set function
$$\mu(A) := \mathbb{E}[I_A] = \sup_{Q \in \mathcal{P}} E_Q[I_A] = \sup_{Q \in \mathcal{P}} Q(A), \quad A \in \mathcal{B}(\Omega),$$

is a capacity. Thus (Ω, 𝓑(Ω), μ) becomes a capacity space.

Kervarec [23] defined robust VaR and AVaR with respect to a family of probability measures; [24] also introduced similar notions under a family of absolutely continuous probability measures. For any X ∈ 𝓗 and λ ∈ (0, 1), we define
$$\mathrm{VaR}_{\mathcal{P}, \lambda}(X) := \sup_{Q \in \mathcal{P}} \mathrm{VaR}_{Q, \lambda}(X), \tag{19}$$
$$\mathrm{AVaR}_{\mathcal{P}, \lambda}(X) := \sup_{Q \in \mathcal{P}} \mathrm{AVaR}_{Q, \lambda}(X), \tag{20}$$

where $\mathrm{VaR}_{Q, \lambda}(X) = \inf\{x \mid Q(X > x) \le \lambda\}$ and $\mathrm{AVaR}_{Q, \lambda}(X) = \frac{1}{\lambda}\int_0^\lambda \mathrm{VaR}_{Q, t}(X)\,dt$ are the classical definitions (see pages 177-179 in [12]).

Remark 3.15: It can be verified that if μ is the supremum of the family of probability measures 𝒫, then for all X ∈ 𝓗 and λ ∈ (0, 1),
$$\mathrm{VaR}_{\mathcal{P}, \lambda}(X) = \mathrm{VaR}_{\mu, \lambda}(X) \quad\text{and}\quad \mathrm{AVaR}_{\mathcal{P}, \lambda}(X) = \mathrm{AVaR}_{\mu, \lambda}(X).$$
In the case of AVaR under a given probability measure P, it is known that AVaR_{P,λ}(−·) is a coherent risk measure (see [12], Theorem 4.47); for coherent risk measures, we refer to [1] and [2]. From the above equalities, we see that AVaR_{μ,λ}(−·) is a coherent risk measure. Moreover, from Theorem 4.47 in [12], we obtain that for any λ ∈ (0, 1] and X ∈ 𝓗,
$$\mathrm{AVaR}_{\mathcal{P}, \lambda}(X) = \sup_{Q \in \mathcal{P}} \max_{R \in \mathcal{Q}_\lambda} E_R[X], \quad\text{where } \mathcal{Q}_\lambda = \Big\{R \in \mathcal{M} \;\Big|\; R \ll Q \text{ and } \frac{dR}{dQ} \le \frac{1}{\lambda},\ Q\text{-a.s.}\Big\}.$$
Finally, from the characterization results for uncertainty orders in Propositions 3.10-3.13, we conclude with the following theorems.
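The first equality in Remark 3.15 can be illustrated numerically. The sketch below (a made-up finite example, not from the paper) computes the robust VaR of (19) as a supremum of classical VaRs and compares it with the VaR under the upper-envelope capacity μ; the two coincide up to grid resolution.

```python
import numpy as np

# Numerical sketch of Remark 3.15 on a finite sample space: the robust VaR
# (19), i.e. sup_Q VaR_{Q,lambda}(X), coincides with VaR under the capacity
# mu = sup_Q Q. The family of measures and the loss X are made up.
X = np.array([-1.0, 0.5, 1.0, 2.0, 3.5, 5.0])
family = [np.array([0.3, 0.2, 0.2, 0.1, 0.1, 0.1]),
          np.array([0.1, 0.1, 0.2, 0.2, 0.2, 0.2]),
          np.array([0.05, 0.05, 0.1, 0.2, 0.3, 0.3])]

grid = np.linspace(-2.0, 6.0, 16001)

def var_from_tail(tail_prob, lam):
    """inf{x : tail_prob(x) <= lam}, with tail_prob evaluated on the grid."""
    ok = tail_prob <= lam
    return grid[ok][0] if ok.any() else np.inf

lam = 0.25
tails_Q = [np.array([float(q[X > x].sum()) for x in grid]) for q in family]
tail_mu = np.max(tails_Q, axis=0)                          # mu(X > x) = sup_Q Q(X > x)

var_robust = max(var_from_tail(t, lam) for t in tails_Q)   # definition (19)
var_capacity = var_from_tail(tail_mu, lam)                 # Definition 3.6 with mu
print(var_robust, var_capacity)                            # should coincide
```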

Theorem 3.16: Let μ be the capacity induced by a family of probability measures 𝒫, which is determined by the sublinear expectation 𝔼. Let X, Y ∈ 𝓗. The following statements are equivalent.

  • (i)

    μ(𝜙(X)) ≥ μ(𝜙(Y)) for all increasing and convex functions 𝜙 : ℝ → ℝ.

  • (ii)

    μ((X − b)⁺) ≥ μ((Y − b)⁺) for any b ∈ ℝ.

  • (iii)

    AVaR𝒫, λ(X) ≥ AVaR𝒫, λ(Y) for any λ ∈ (0, 1).

  • (iv)

    g∘μ(X) ≥ g∘μ(Y) for all concave distortion functions g.

Theorem 3.17: Let μ be the capacity induced by a family of probability measures 𝒫, which is determined by the sublinear expectation 𝔼. Let X, Y ∈ 𝓗. The following statements are equivalent.

  • (i)

    μ(𝜙(X)) ≥ μ(𝜙(Y)) for all increasing functions 𝜙 : ℝ → ℝ.

  • (ii)

    Gμ, X(x) ≥ Gμ, Y(x), ∀x ∈ ℝ.

  • (iii)

    VaR𝒫, λ(X) ≥ VaR𝒫, λ(Y), for any λ ∈ (0, 1).

  • (iv)

    g∘μ(X) ≥ g∘μ(Y) for all distortion functions g.

4 Conclusions

This paper considers uncertainty orders on the sublinear expectation space from two different viewpoints. It is worth noting that, in general, the sublinear expectation does not coincide with the Choquet integral with respect to the capacity induced by the sublinear expectation; the reader can refer to [25] and [26] for more details. Under the first viewpoint we only consider some special distributions; other plausible formulations are left to future study.

Footnotes

  • Competing interests: The authors declare that they have no competing interests.

  • Authors’ contributions: All authors contributed equally and significantly in writing this article. All authors read and approved the final manuscript.

Acknowledgement

The authors are very grateful to the editor and an anonymous referee for several constructive and insightful comments on how to improve the paper. This work was supported by the National Natural Science Foundation of China (11371362), the Natural Science Foundation of Jiangsu Province (BK20150167).

References

  • [1]

    Artzner, P., Delbaen, F., Eber, J. M. and Heath, D. Coherent measures of risk. Math. Financ., 9, 203-228, 1999.

  • [2]

    Delbaen, F. Coherent risk measures on general probability spaces. In: Sandmann, K., Schönbucher, P.J. (Eds.), Advances in Finance and Stochastics: Essays in Honour of Dieter Sondermann. Springer, New York, pp. 1-37, 2002.

  • [3]

    Denneberg, D. Non-additive Measure and Integral. Kluwer Academic Publishers, 1994.

  • [4]

    Peng, S. BSDE and related g-expectation. Pitman Research Notes in Mathematics Series, No. 364, N. El Karoui and L. Mazliak (eds.), 141-159, 1997.

  • [5]

    Peng, S. G-Expectation, G-Brownian Motion and Related Stochastic Calculus of Itô’s type. In Stochastic Analysis and Applications, Abel Symposium, Abel Symposia 2, Benth et al. (eds.), Springer-Verlag, 541-567, 2006.

  • [6]

    Peng, S. G-Brownian motion and dynamic risk measure under volatility uncertainty. In arXiv:0711.2834v1 [math.PR], 2007.

  • [7]

    Peng, S. Multi-dimensional G-Brownian motion and related stochastic calculus under G-expectation. Stoch. Proc. Appl., 118(12), 2223-2253, 2008.

  • [8]

    Peng, S. Nonlinear Expectations and Stochastic Calculus under Uncertainty with Robust Central Limit Theorem and G-Brownian Motion. In arXiv:1002.4546v1 [math.PR], 2010.

  • [9]

    He, K., Hu, M., and Chen, Z. The relationship between risk measures and Choquet expectations in the framework of g-expectations. Stat. Probabil. Lett., 79, 508-512, 2009.

  • [10]

    Jiang, L. Convexity, Translation Invariance and Subadditivity for g-Expectations and Related Risk Measures. Ann. Appl. Probab., 18, 1245-1258, 2008.

  • [11]

    von Neumann, J., Morgenstern, O. Theory of Games and Economic Behavior. 2nd edn. Princeton University Press, 1947.

  • [12]

    Föllmer, H. and Schied, A. Stochastic Finance: An Introduction in Discrete Time. Walter de Gruyter, 2004.

  • [13]

    Shaked, M. and Shanthikumar, J. G. Stochastic Orders. Springer, 2007.

  • [14]

    Grigorova, M. Stochastic orderings with respect to a capacity and an application to a financial optimization problem. HAL : hal-00614716, version 1, 2011.

  • [15]

    Grigorova, M. Stochastic dominance with respect to a capacity and risk measures. HAL: hal-00639667, version 1, 2011.

  • [16]

    Müller, A. Stochastic ordering of multivariate normal distributions. Ann. Inst. Statist. Math., 53 (3), 567-575, 2001.

  • [17]

    Hu, M. Explicit solutions of the G-heat equation for a class of initial conditions. Nonlinear Analysis, 75, 6588-6595, 2012.

  • [18]

    Song, Y. A note on G-normal distribution. In arXiv:1410.8225, 2014.

  • [19]

    Choquet, G. Theory of capacities. Ann. Inst. Fourier (Grenoble), 5, 131-295, 1953.

  • [20]

    Yaari, M. The Dual Theory of Choice under Risk. Econometrica, 55, 95-115, 1987.

  • [21]

    Wang, S. Premium calculation by transforming the layer premium density. ASTIN Bulletin, 26, 71-92, 1996.

  • [22]

    Dhaene, J., Vanduffel, S., Goovaerts, M.J., Kaas, R., Tang, Q., Vyncke, D. Risk measures and comonotonicity: A review. Stochastic Models, 22, 573-606, 2006.

  • [23]

    Kervarec, M. Étude des modèles non dominés en mathématiques financières. Thèse de Doctorat en Mathématiques, Université d'Évry, 2008.

  • [24]

    Föllmer, H., Knispel, T. Entropic risk measures: coherence vs. convexity, model ambiguity, and robust large deviations. Stochastics and Dynamics, 11(2-3), 333-351, 2011.

  • [25]

    Huber, P., Strassen, V. Minimax tests and the Neyman-Pearson lemma for capacities. Ann. Statist., 1, 252-263, 1973.

  • [26]

    Chen, Z., Chen, T., Davison, M. Choquet expectation and Peng’s g-expectation. The Annals of Probability, 33, 1179-1199, 2005.

About the article

Received: 2014-05-08

Accepted: 2016-01-08

Published Online: 2016-04-23

Published in Print: 2016-01-01



Citation Information: Open Mathematics, ISSN (Online) 2391-5455, DOI: https://doi.org/10.1515/math-2016-0023.

© 2016 Tian and Jiang, published by De Gruyter Open. This work is licensed under the Creative Commons Attribution-NonCommercial-NoDerivatives 3.0 License. (CC BY-NC-ND 3.0)
