# Open Mathematics

### formerly Central European Journal of Mathematics

Editor-in-Chief: Gianazza, Ugo / Vespri, Vincenzo

1 Issue per year

IMPACT FACTOR 2016 (Open Mathematics): 0.682
IMPACT FACTOR 2016 (Central European Journal of Mathematics): 0.489

CiteScore 2016: 0.62

SCImago Journal Rank (SJR) 2016: 0.454
Source Normalized Impact per Paper (SNIP) 2016: 0.850

Mathematical Citation Quotient (MCQ) 2016: 0.23

Open Access | Online ISSN: 2391-5455

# Uncertainty orders on the sublinear expectation space

Dejian Tian
/ Long Jiang
Published Online: 2016-04-23 | DOI: https://doi.org/10.1515/math-2016-0023

## Abstract

In this paper, we introduce some definitions of uncertainty orders for random vectors in a sublinear expectation space. It is well known that, under some continuity conditions, each sublinear expectation 𝔼 has a robust representation as the supremum of a family of probability measures. We describe uncertainty orders from two different viewpoints. One is the sublinear operator viewpoint. After giving definitions such as monotonic orders, convex orders and increasing convex orders, we use these uncertainty orders to derive characterizations for maximal distributions, G-normal distributions and G-distributions, which are the most important random vectors in sublinear expectation space theory. On the other hand, we also establish some characterizations of uncertainty orders from the viewpoint of probability measures and build some connections with the theory of risk measures.

MSC 2010: 62C86; 91B06; 90B50

## 1 Introduction

Nonlinear expectations have been a prominent topic in mathematical economics ever since the famous Allais paradox. Typical examples are the monetary risk measures introduced in [1, 2], the Choquet expectations developed in [3], and the g-expectations and G-expectations established and studied in [4–10].

The theory of sublinear expectation, an important special class of nonlinear expectations, is not based on a given (linear) probability space. It is well known that, under some mild continuity conditions, each sublinear expectation 𝔼 has a robust representation as the supremum of a subset of probability measures. It thus provides a new way to describe uncertainties.

In a financial context, when facing risks one often uses stochastic orders to compare portfolios. In this case, a probability measure is given on the set of scenarios, so we can focus on the resulting payoff or loss distributions. The comparison of financial risks plays an important role for both regulators and agents in financial markets. Under the framework of a classical probability space, based on the expected utility theory developed by [11], many elegant results about stochastic orders have been obtained. For example, in stochastic order theory it has been shown that the monotonic order is equivalent to saying that one risk position is preferred over the other by all decision makers with increasing utility functions for which the expectations exist. For more details and properties of stochastic orders, see [12, 13].

Motivated by the classical stochastic orders, the object of this paper is to explore uncertainty orders for random vectors in a sublinear expectation space. These uncertainty orders provide useful criteria for comparing the degrees of uncertainty of two random vectors on the sublinear expectation space.

We establish the uncertainty orders in the sublinear expectation space from two different viewpoints. One is from the sublinear operator viewpoint. We give some definitions of uncertainty orders such as monotonic orders, convex orders and increasing convex orders. Then we use these uncertainty orders to derive the characterizations for maximal distributions, G-normal distributions and G-distributions, which are the most important random vectors in the theory of sublinear expectation space.

On the other hand, we study the uncertainty orders in the sublinear expectation space by a family of probability measures 𝒫 induced by the sublinear expectation 𝔼. We can define the capacity space from 𝒫, then we get some characterizations with the help of the recent results about capacity orders obtained by [14, 15] in the capacity space. Besides, we also give the characterization of uncertainty orders by distortion functions.

The paper is organized as follows. In Section 2, we first introduce some preliminaries about the sublinear expectation space. Then we give the definitions of uncertainty orders from the sublinear operator viewpoint, and establish some characterizations for maximal distributions, G-normal distributions and G-distributions. In Section 3, we introduce the other viewpoint of characterizations for uncertainty orders in the sublinear expectation space by using some terminologies of a capacity space. We conclude the paper in Section 4.

## 2 Characterizations of uncertainty orders from the sublinear operator viewpoint

We first present some preliminaries of sublinear expectation space theory. Then we give the definitions of uncertainty orders in the sublinear expectation space. More details for the first subsection can be found in [6, 8].

## 2.1 Sublinear expectation spaces

Let Ω be a given set and let 𝓗 be a linear space of real valued functions defined on Ω such that c ∈ 𝓗 for all constants c and ∣X∣ ∈ 𝓗 if X ∈ 𝓗. The space 𝓗 can be considered as the space of random variables. We further suppose that if X1, ... , Xn ∈ 𝓗, then φ(X1, ... , Xn) ∈ 𝓗 for each φ ∈ Cl.Lip(ℝn), where Cl.Lip(ℝn) denotes the linear space of functions φ satisfying $|\varphi(x)-\varphi(y)| \le C(1+|x|^m+|y|^m)\,|x-y|, \quad \forall x, y \in \mathbb{R}^n,$

where the constant C > 0 and the integer m ∈ ℕ depend only on φ.

Definition (see [8]). A sublinear expectation 𝔼 on 𝓗 is a functional 𝔼 : 𝓗 → ℝ satisfying the following properties: for all X, Y ∈ 𝓗, we have

• (i)

Monotonicity: 𝔼[X] ≥ 𝔼[Y] if XY.

• (ii)

Constant preserving: 𝔼[c] = c for c ∈ ℝ.

• (iii)

Subadditivity: 𝔼[X + Y] ≤ 𝔼[X] + 𝔼[Y].

• (iv)

Positive homogeneity: 𝔼[λX] = λ𝔼[X] for λ ≥ 0.

The triple (Ω, 𝓗, 𝔼) is called a sublinear expectation space.
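To make properties (i)–(iv) concrete, here is a minimal numerical sketch, not taken from the paper: the three-point sample space and the two measures `P1`, `P2` are assumed illustrative data, and the sublinear expectation is realized as the supremum of the two corresponding linear expectations.

```python
# Illustrative sketch: a sublinear expectation on a finite sample space,
# represented as the supremum of two linear expectations (assumed toy data).
OMEGA = [0, 1, 2]
P1 = {0: 0.2, 1: 0.3, 2: 0.5}
P2 = {0: 0.5, 1: 0.3, 2: 0.2}

def linear_E(P, X):
    """Classical expectation of X (a dict omega -> value) under measure P."""
    return sum(P[w] * X[w] for w in OMEGA)

def sublinear_E(X):
    """E[X] = sup over the family {P1, P2} of linear expectations."""
    return max(linear_E(P, X) for P in (P1, P2))

X = {0: -1.0, 1: 0.0, 2: 2.0}
Y = {0: 1.0, 1: 1.0, 2: 3.0}           # Y >= X pointwise
c = {w: 7.0 for w in OMEGA}            # a constant "random variable"
XplusY = {w: X[w] + Y[w] for w in OMEGA}
lam = 3.0
lamX = {w: lam * X[w] for w in OMEGA}

assert sublinear_E(X) <= sublinear_E(Y)                       # (i) monotonicity
assert abs(sublinear_E(c) - 7.0) < 1e-12                      # (ii) constants
assert sublinear_E(XplusY) <= sublinear_E(X) + sublinear_E(Y) + 1e-12  # (iii)
assert abs(sublinear_E(lamX) - lam * sublinear_E(X)) < 1e-12  # (iv) pos. homog.
```

Note that subadditivity may hold with equality for particular X and Y; in general the supremum of linear expectations is only subadditive, not additive.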

Let X = (X1, ... , Xn), Xi ∈ 𝓗, denoted by X ∈ 𝓗n, be a given n-dimensional random vector on a sublinear expectation space (Ω, 𝓗, 𝔼). Define a function on Cl.Lip(ℝn) by $\mathbb{F}_X[\varphi] := \mathbb{E}[\varphi(X)], \quad \forall \varphi \in C_{l.Lip}(\mathbb{R}^n).$

The triple (ℝn, Cl.Lip(ℝn), 𝔽X) forms a sublinear expectation space, and 𝔽X is called the distribution of X. Let Y be another n-dimensional random vector on (Ω, 𝓗, 𝔼); we denote $X\stackrel{d}{=}Y$ if 𝔼[φ(X)] = 𝔼[φ(Y)] for all φ ∈ Cl.Lip(ℝn).

Definition (see [6, 8]). Let (Ω, 𝓗, 𝔼) be a sublinear expectation space. A random vector Y ∈ 𝓗n is said to be independent from another random vector X ∈ 𝓗m under 𝔼 if for each test function φ ∈ Cl.Lip(ℝm+n) we have $\mathbb{E}[\varphi(X, Y)] = \mathbb{E}\big[\mathbb{E}[\varphi(x, Y)]_{x=X}\big].$

And Y is said to be weakly independent from X under 𝔼 if the above test functions φ are taken, instead of in Cl.Lip(ℝm+n), only among functions of the form $\varphi(x, y) = \psi_0(y) + \psi_1(y)|x| + \psi_2(y)|x|^2, \quad \psi_i \in C_{l.Lip}(\mathbb{R}^n),\ i = 0, 1, 2.$

Let X, X̅ be two n-dimensional random vectors on a sublinear expectation space (Ω, 𝓗, 𝔼). X̅ is called an independent copy of X if $X\stackrel{d}{=}\overline{X}$ and X̅ is independent from X.

Definition (see [8]). Let (Ω, 𝓗, 𝔼) be a sublinear expectation space. An n-dimensional random vector η on (Ω, 𝓗, 𝔼) is called maximal distributed if $a\eta + b\bar{\eta} \stackrel{d}{=} (a+b)\eta \quad \text{for } a, b \ge 0,$

where η̅ is an independent copy of η. In particular, for n = 1, we denote $\eta \stackrel{d}{=}N\left(\left[\underset{_}{\mu },\overline{\mu }\right]×\left\{0\right\}\right)$, where μ̲ = −𝔼[−η] and μ̅ = 𝔼[η].

An n-dimensional random vector X on (Ω, 𝓗, 𝔼) is called G-normal distributed if $aX + b\bar{X} \stackrel{d}{=} \sqrt{a^2+b^2}\,X \quad \text{for } a, b \ge 0,$

where X̅ is an independent copy of X. In particular, for n = 1, we denote $X\stackrel{d}{=}N\left(\left\{0\right\}×\left[{\underset{_}{\sigma }}^{2},{\overline{\sigma }}^{2}\right]\right)$, where σ̲2 = −𝔼[−X2], σ̅2 = 𝔼[X2] and σ̅ ≥ σ̲ ≥ 0.

A pair of n-dimensional random vectors (X, η) on (Ω, 𝓗, 𝔼) is called G-distributed if $(aX + b\bar{X},\ a^2\eta + b^2\bar{\eta}) \stackrel{d}{=} (\sqrt{a^2+b^2}\,X,\ (a^2+b^2)\eta) \quad \text{for } a, b \ge 0,$

where (X̅, η̅) is an independent copy of (X, η). For n = 1, we denote $\left(X,\eta \right)\stackrel{d}{=}N\left(\left[\underset{_}{\mu },\overline{\mu }\right]×\left[{\underset{_}{\sigma }}^{2},{\overline{\sigma }}^{2}\right]\right)$, where μ̲, μ̅, σ̲2 and σ̅2 are defined as above.

## 2.2 Characterizations from the sublinear operator viewpoint

Throughout the following paper, we interpret the risk positions as loss random vectors. Motivated by the definitions of stochastic orders in a probability space, we give the definitions of uncertainty orders in a sublinear expectation space. We then use these uncertainty orders to derive the characterizations for maximal distributions, G-normal distributions and G-distributions.

Definition 2.4. Let (Ω, 𝓗, 𝔼) be a sublinear expectation space. Let X, Y be two n-dimensional random vectors on (Ω, 𝓗, 𝔼).

• (i)

X is said to precede Y in the monotonic order sense under 𝔼, denoted by $X{\le }_{mon}^{\mathbb{E}}Y$, if for all increasing functions φ ∈ Cl.Lip(ℝn), we have $\mathbb{E}[\varphi(X)] \le \mathbb{E}[\varphi(Y)] \quad \text{and} \quad -\mathbb{E}[-\varphi(X)] \le -\mathbb{E}[-\varphi(Y)].$ (1)

• (ii)

X is said to precede Y in the convex order sense under 𝔼, denoted by $X{\le }_{con}^{\mathbb{E}}Y$, if (1) holds for all convex functions φ ∈ Cl.Lip(ℝn).

• (iii)

X is said to precede Y in the increasing convex order sense under 𝔼, denoted by $X{\le }_{icon}^{\mathbb{E}}Y$, if (1) holds for all increasing convex functions φ ∈ Cl.Lip(ℝn).

From the definitions of uncertainty orders above, we can see that the uncertainty orders only involve the distributions of the random vectors X and Y; thus we may also take X and Y from different sublinear expectation spaces.

Compared with the stochastic orders, here we impose the extra restriction −𝔼[−φ(X)] ≤ −𝔼[−φ(Y)] on the uncertainty orders, which would be redundant in a linear probability space.

Recalling Lemma 3.4 (the representation theorem) in Chapter I of [8], we see that it is reasonable to define the uncertainty orders as above. In fact, condition (1) is equivalent to $\max_{\theta\in\Theta} E_\theta[\varphi(X)] \le \max_{\theta\in\Theta} E_\theta[\varphi(Y)] \quad \text{and} \quad \min_{\theta\in\Theta} E_\theta[\varphi(X)] \le \min_{\theta\in\Theta} E_\theta[\varphi(Y)],$ (2)

where Θ is a family of probability measures on (ℝn, 𝓑(ℝn)). In this sense, $X{\le }_{mon}^{\mathbb{E}}Y$ means that the best and worst expectations, over the family of linear expectations {Eθ : θ ∈ Θ}, of the loss φ(X) are both less than those of φ(Y), for all increasing functions φ ∈ Cl.Lip(ℝn).

For any loss random variable X ∈ 𝓗 of a sublinear expectation space (Ω, 𝓗, 𝔼), we have the following four typical parameters: $\underline{\mu} = -\mathbb{E}[-X], \quad \bar{\mu} = \mathbb{E}[X], \quad \underline{\sigma}^2 = -\mathbb{E}[-X^2], \quad \bar{\sigma}^2 = \mathbb{E}[X^2].$

The intervals [μ̲, μ̅] and [σ̲2, σ̅2] characterize the mean-uncertainty and the variance-uncertainty of X respectively.
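As a hedged numerical illustration (the two discrete laws below are assumed toy data, standing in for the family of distributions of X under the representation of 𝔼), these four parameters reduce to best-case and worst-case classical moments:

```python
# Sketch: E represented as the max over two discrete laws of X (assumed data);
# the four parameters are best/worst classical first and second moments.
laws = [                                  # each law: list of (value, prob)
    [(-1.0, 0.25), (1.0, 0.75)],          # mean 0.5,  second moment 1
    [(-2.0, 0.4375), (2.0, 0.5625)],      # mean 0.25, second moment 4
]

def E_theta(law, f):
    return sum(p * f(x) for x, p in law)

def E(f):                                 # sublinear expectation of f(X)
    return max(E_theta(law, f) for law in laws)

mu_bar = E(lambda x: x)                   # upper mean
mu_low = -E(lambda x: -x)                 # lower mean
sig2_bar = E(lambda x: x * x)             # upper second-moment parameter
sig2_low = -E(lambda x: -x * x)           # lower second-moment parameter
print(mu_low, mu_bar, sig2_low, sig2_bar)  # -> 0.25 0.5 1.0 4.0
```

The mean-uncertainty interval here is [0.25, 0.5] and the variance-uncertainty interval is [1, 4].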

From Definition 2.4 of the uncertainty orders, we easily obtain that, for another loss random variable Y in (Ω, 𝓗, 𝔼) with mean-uncertainty interval [v̲, v̅] and variance-uncertainty interval [ρ̲2, ρ̅2],

• if $X{\le }_{mon}^{\mathbb{E}}Y$ or $X{\le }_{icon}^{\mathbb{E}}Y$, then μ̅ ≤ v̅ and μ̲ ≤ v̲;

• if $X{\le }_{con}^{\mathbb{E}}Y$, then μ̅ = v̅, μ̲ = v̲, σ̅2 ≤ ρ̅2 and σ̲2 ≤ ρ̲2.

In the following three theorems we show that, for some particular distributions, the above necessary conditions are also sufficient. These distributions are the most important ones in sublinear expectation space theory.

Theorem 2.7. Let $\eta \stackrel{d}{=}N\left(\left[\underset{_}{\mu },\overline{\mu }\right]×\left\{0\right\}\right)$ and $\xi \stackrel{d}{=}N\left(\left[\underset{_}{v},\overline{v}\right]×\left\{0\right\}\right)$ be two maximal distributions on a sublinear expectation space (Ω, 𝓗, 𝔼). Then we have

• (i)

$\eta {\le }_{mon}^{\mathbb{E}}\xi ⇔\eta {\le }_{icon}^{\mathbb{E}}\xi ⇔\overline{\mu }\le \overline{v}$ and μ̲ ≤ v̲.

• (ii)

$\eta {\le }_{con}^{\mathbb{E}}\xi ⇔\overline{\mu }=\overline{v}$ and μ̲ = v̲, i.e., $\eta \stackrel{d}{=}\xi$.

Proof. (i) From the definitions of the monotonic order and the increasing convex order, it is obvious that $\eta {\le }_{mon}^{\mathbb{E}}\xi$ implies $\eta {\le }_{icon}^{\mathbb{E}}\xi$.

If $\eta {\le }_{icon}^{\mathbb{E}}\xi$, then choosing the increasing convex function φ(x) = x, which belongs to Cl.Lip(ℝ), we obtain from the definitions of the increasing convex order and maximal distributions that $\bar{\mu} = \mathbb{E}[\eta] \le \mathbb{E}[\xi] = \bar{v} \quad \text{and} \quad \underline{\mu} = -\mathbb{E}[-\eta] \le -\mathbb{E}[-\xi] = \underline{v}.$

If μ̅ ≤ v̅ and μ̲ ≤ v̲, then for any increasing function φ ∈ Cl.Lip(ℝ), from the equivalent definition of the maximal distribution (Definition 1.1 in Chapter II of [8]), we have $\mathbb{E}[\varphi(\eta)] = \max_{\underline{\mu} \le y \le \bar{\mu}} \varphi(y) = \varphi(\bar{\mu}) \le \varphi(\bar{v}) = \max_{\underline{v} \le y \le \bar{v}} \varphi(y) = \mathbb{E}[\varphi(\xi)],$

and $-\mathbb{E}[-\varphi(\eta)] = \min_{\underline{\mu} \le y \le \bar{\mu}} \varphi(y) = \varphi(\underline{\mu}) \le \varphi(\underline{v}) = \min_{\underline{v} \le y \le \bar{v}} \varphi(y) = -\mathbb{E}[-\varphi(\xi)].$

Thus we have $\eta {\le }_{mon}^{\mathbb{E}}\xi$.

(ii) It is obvious that $\eta \stackrel{d}{=}\xi$ implies $\eta {\le }_{con}^{\mathbb{E}}\xi$. As for the other direction, choosing the convex functions φ(x) = x and φ(x) = −x respectively, we easily obtain μ̅ = v̅ and μ̲ = v̲ by the definitions of ${\le }_{con}^{\mathbb{E}}$ and maximal distributions. The proof is complete. □
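The proof rests on the identity 𝔼[φ(η)] = max of φ over [μ̲, μ̅] for a maximal distribution. The sketch below checks the endpoint comparison of part (i) numerically; the intervals, the test function and the grid search are all assumed illustrative choices, not from the paper.

```python
# Sketch of Theorem 2.7(i): for maximal distributions, E[phi(eta)] is the max
# of phi over the mean-uncertainty interval, so the monotonic order reduces to
# comparing interval endpoints. Grid search stands in for the exact maximum.
def E_maximal(phi, lo, hi, n=1001):
    """E[phi(eta)] = max_{lo <= y <= hi} phi(y) (grid approximation)."""
    return max(phi(lo + (hi - lo) * k / (n - 1)) for k in range(n))

mu_lo, mu_hi = 0.0, 1.0      # eta ~ N([0, 1] x {0})
v_lo, v_hi = 0.5, 2.0        # xi  ~ N([0.5, 2] x {0}): endpoints dominate
phi = lambda y: y ** 3 + y   # an increasing test function

# For increasing phi the max sits at the right endpoint, the min at the left:
assert E_maximal(phi, mu_lo, mu_hi) == phi(mu_hi)
# Monotonic order: upper and lower expectations are both ordered.
assert E_maximal(phi, mu_lo, mu_hi) <= E_maximal(phi, v_lo, v_hi)
assert -E_maximal(lambda y: -phi(y), mu_lo, mu_hi) <= \
       -E_maximal(lambda y: -phi(y), v_lo, v_hi)
```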

Theorem 2.8. Let $X\stackrel{d}{=}N\left(\left\{0\right\}×\left[{\underset{_}{\sigma }}^{2},{\overline{\sigma }}^{2}\right]\right)$ and $Y\stackrel{d}{=}N\left(\left\{0\right\}×\left[{\underset{_}{\rho }}^{2},{\overline{\rho }}^{2}\right]\right)$ be two G-normal distributions on a sublinear expectation space (Ω, 𝓗, 𝔼). Then we have

• (i)

$X{\le }_{mon}^{\mathbb{E}}Y⇔{\overline{\sigma }}^{2}={\overline{\rho }}^{2}$ and σ̲2 = ρ̲2, i.e., $X\stackrel{d}{=}Y$.

• (ii)

$X{\le }_{con}^{\mathbb{E}}Y⇔X{\le }_{icon}^{\mathbb{E}}Y⇔{\overline{\sigma }}^{2}\le {\overline{\rho }}^{2}$ and σ̲2 ≤ ρ̲2.

Proof. (i) That $X\stackrel{d}{=}Y$ implies $X{\le }_{mon}^{\mathbb{E}}Y$ is obvious.

Suppose $X{\le }_{mon}^{\mathbb{E}}Y$. Recall from [8] that 𝔼[φ(·)] can be explicitly calculated for G-normal distributions whenever φ ∈ Cl.Lip(ℝ) is convex or concave. For the increasing convex function φ(x) = x+, which belongs to Cl.Lip(ℝ), we have $\mathbb{E}[\varphi(X)] = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{+\infty} \varphi(\bar{\sigma} y)\exp\!\Big(-\frac{y^2}{2}\Big)\,dy = \frac{\bar{\sigma}}{\sqrt{2\pi}}\int_{0}^{+\infty} y \exp\!\Big(-\frac{y^2}{2}\Big)\,dy = \frac{1}{\sqrt{2\pi}}\bar{\sigma}.$

Similarly we obtain $\mathbb{E}\left[\phi \left(Y\right)\right]=\frac{1}{\sqrt{2\pi }}\overline{\rho }$. Since σ̅, ρ̅ ≥ 0, we have σ̅2 ≤ ρ̅2 by the definition of ${\le }_{mon}^{\mathbb{E}}$. On the other hand, we have $-\mathbb{E}[-\varphi(X)] = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{+\infty} \varphi(\underline{\sigma} y)\exp\!\Big(-\frac{y^2}{2}\Big)\,dy = \frac{\underline{\sigma}}{\sqrt{2\pi}}\int_{0}^{+\infty} y \exp\!\Big(-\frac{y^2}{2}\Big)\,dy = \frac{1}{\sqrt{2\pi}}\underline{\sigma}.$

Similarly we can get $-\mathbb{E}\left[-\phi \left(Y\right)\right]=\frac{1}{\sqrt{2\pi }}\underset{_}{\rho }$. Since σ̲, ρ̲ ≥ 0, we have σ̲2 ≤ ρ̲2 from the definition of ${\le }_{mon}^{\mathbb{E}}$.

Taking the increasing concave function φ(x) = −x−, we can derive σ̅2 ≥ ρ̅2 and σ̲2 ≥ ρ̲2 by the same arguments as for φ(x) = x+.

We conclude from the above that σ̅2 = ρ̅2 and σ̲2 = ρ̲2, i.e., $X\stackrel{d}{=}Y$.
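The Gaussian computation in this part of the proof, 𝔼[X⁺] = σ̅/√(2π), is easy to confirm numerically; the trapezoid quadrature and the value σ̅ = 1.7 below are assumed illustrative choices.

```python
import math

# Sketch: reproduce E[phi(X)] = sigma_bar / sqrt(2*pi) for phi(x) = x^+ by
# numerical quadrature of the Gaussian integral appearing in the proof.
def E_plus_part(sigma, n=200000, b=12.0):
    """Trapezoid rule for (1/sqrt(2 pi)) * int (sigma*y)^+ exp(-y^2/2) dy."""
    h = b / n                                 # (sigma*y)^+ vanishes for y < 0
    f = lambda y: sigma * y * math.exp(-y * y / 2.0) / math.sqrt(2.0 * math.pi)
    s = 0.5 * (f(0.0) + f(b)) + sum(f(k * h) for k in range(1, n))
    return s * h

sigma_bar = 1.7
exact = sigma_bar / math.sqrt(2.0 * math.pi)
assert abs(E_plus_part(sigma_bar) - exact) < 1e-6
```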

(ii) Clearly we have $X{\le }_{con}^{\mathbb{E}}Y⇒X{\le }_{icon}^{\mathbb{E}}Y$. Repeating the arguments in part (i) with the increasing convex function φ(x) = x+, we obtain that $X{\le }_{icon}^{\mathbb{E}}Y$ implies σ̅2 ≤ ρ̅2 and σ̲2 ≤ ρ̲2.

It remains to show that σ̅2 ≤ ρ̅2 and σ̲2 ≤ ρ̲2 imply $X{\le }_{con}^{\mathbb{E}}Y$.

For any convex function φ ∈ Cl.Lip(ℝ), consider the following G-heat equation for X: $\frac{\partial u}{\partial t} - \frac{1}{2}\Big(\bar{\sigma}^2\Big(\frac{\partial^2 u}{\partial x^2}\Big)^{\!+} - \underline{\sigma}^2\Big(\frac{\partial^2 u}{\partial x^2}\Big)^{\!-}\Big) = 0, \qquad u|_{t=0} = \varphi.$ (3)

We have that $u(t,x) := \mathbb{E}\left[\phi \left(x+\sqrt{t}X\right)\right]$, (t, x) ∈ [0, ∞) × ℝ, is the unique viscosity solution of (3). Furthermore, we can check that u(t, x) is convex in x. Thus the G-heat equation (3) becomes $\frac{\partial u}{\partial t} - \frac{1}{2}\bar{\sigma}^2\Big(\frac{\partial^2 u}{\partial x^2}\Big)^{\!+} = 0, \qquad u|_{t=0} = \varphi.$ (4)

Similarly we obtain that $v(t,x) := \mathbb{E}\left[\phi \left(x+\sqrt{t}Y\right)\right]$, (t, x) ∈ [0, ∞) × ℝ, is the unique viscosity solution of the following G-heat equation: $\frac{\partial v}{\partial t} - \frac{1}{2}\bar{\rho}^2\Big(\frac{\partial^2 v}{\partial x^2}\Big)^{\!+} = 0, \qquad v|_{t=0} = \varphi.$ (5)

Since σ̅2 ≤ ρ̅2, by the comparison theorem for the viscosity solutions of (4) and (5) (see, for example, Theorem 2.6 in Appendix C of [8]), we derive that $u(t, x) \le v(t, x), \quad (t, x) \in [0, +\infty) \times \mathbb{R}.$

In particular, taking (t, x) = (1, 0), we have $E[φ(X)]≤E[φ(Y)].$(6)

Since −φ is a concave function, we can similarly show that $m(t,x) := \mathbb{E}\left[-\phi \left(x+\sqrt{t}X\right)\right]$ and $n(t,x) := \mathbb{E}\left[-\phi \left(x+\sqrt{t}Y\right)\right]$, (t, x) ∈ [0, ∞) × ℝ, are the unique viscosity solutions of the following G-heat equations respectively: $\frac{\partial m}{\partial t} + \frac{1}{2}\underline{\sigma}^2\Big(\frac{\partial^2 m}{\partial x^2}\Big)^{\!-} = 0, \quad m|_{t=0} = -\varphi, \qquad \text{and} \qquad \frac{\partial n}{\partial t} + \frac{1}{2}\underline{\rho}^2\Big(\frac{\partial^2 n}{\partial x^2}\Big)^{\!-} = 0, \quad n|_{t=0} = -\varphi.$ (7)

Using the facts that σ̲2 ≤ ρ̲2 and the comparison theorem for the viscosity solutions of (7), and setting (t, x) = (1, 0), we have $-\mathbb{E}[-\varphi(X)] \le -\mathbb{E}[-\varphi(Y)].$ (8)

By combining (6) with (8), we get $X{\le }_{con}^{\mathbb{E}}Y$. The proof is complete. □
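The comparison argument can be visualized with a rough finite-difference sketch; the explicit Euler scheme, the grid sizes, and the convex initial datum φ(x) = |x| are all assumed choices, not from the paper. Since the solution stays convex, equation (4) is simulated directly, and σ̅ ≤ ρ̅ forces u(1, 0) ≤ v(1, 0).

```python
# Rough explicit finite-difference sketch of u_t = (1/2) sigma^2 (u_xx)^+
# with u(0, .) = phi, evaluated at (t, x) = (1, 0). Assumed discretization.
def g_heat(sigma, phi, L=8.0, nx=161, T=1.0, nt=4000):
    dx = 2.0 * L / (nx - 1)
    dt = T / nt                      # dt * sigma^2 / dx^2 ~ 0.06: stable
    u = [phi(-L + i * dx) for i in range(nx)]
    for _ in range(nt):
        new = u[:]
        for i in range(1, nx - 1):
            uxx = (u[i - 1] - 2.0 * u[i] + u[i + 1]) / (dx * dx)
            new[i] = u[i] + dt * 0.5 * sigma * sigma * max(uxx, 0.0)
        u = new
    return u[(nx - 1) // 2]          # value at x = 0, t = T

phi = lambda x: abs(x)               # convex Lipschitz initial datum
u0 = g_heat(1.0, phi)                # sigma_bar = 1
v0 = g_heat(1.5, phi)                # rho_bar = 1.5 >= sigma_bar
assert u0 <= v0 + 1e-9               # comparison theorem, numerically
```

For this convex φ the two values approximate the classical Gaussian expectations √(2/π)·σ̅ and √(2/π)·ρ̅ respectively.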

Theorem 2.9. Let $\left(\eta ,X\right)\stackrel{d}{=}N\left(\left[\underset{_}{\mu },\overline{\mu }\right]×\left[{\underset{_}{\sigma }}^{2},{\overline{\sigma }}^{2}\right]\right)$ and $\left(\xi ,Y\right)\stackrel{d}{=}N\left(\left[\underset{_}{v},\overline{v}\right]×\left[{\underset{_}{\rho }}^{2},{\overline{\rho }}^{2}\right]\right)$ be two G-distributions on a sublinear expectation space (Ω, 𝓗, 𝔼). Moreover, suppose η is weakly independent from X and ξ is weakly independent from Y. Then we have

• (i)

$\left(\eta ,X\right){\le }_{mon}^{\mathbb{E}}\left(\xi ,Y\right)$, if and only if μ̅ ≤ v̅, μ̲ ≤ v̲, σ̅2 = ρ̅2 and σ̲2 = ρ̲2.

• (ii)

$\left(\eta ,X\right){\le }_{con}^{\mathbb{E}}\left(\xi ,Y\right)$, if and only if μ̅ = v̅, μ̲ = v̲, σ̅2 ≤ ρ̅2 and σ̲2 ≤ ρ̲2.

• (iii)

$\left(\eta ,X\right){\le }_{icon}^{\mathbb{E}}\left(\xi ,Y\right)$, if and only if μ̅ ≤ v̅, μ̲ ≤ v̲, σ̅2 ≤ ρ̅2 and σ̲2 ≤ ρ̲2.

Proof. The “only if” parts are combinations of the results of Theorems 2.7 and 2.8. For example, if $\left(\eta ,X\right){\le }_{mon}^{\mathbb{E}}\left(\xi ,Y\right)$, then we can derive that $\eta {\le }_{mon}^{\mathbb{E}}\xi$ and $X{\le }_{mon}^{\mathbb{E}}Y$; thus from Theorems 2.7 and 2.8 we get the results.

For the proof of the converse implications, the key ideas are both the applications of the comparison theorem of the viscosity solutions to G-equations. We only show the case (iii). Cases (i) and (ii) are verified by an analogous argument.

Assume μ̅ ≤ v̅, μ̲ ≤ v̲, σ̅2 ≤ ρ̅2 and σ̲2 ≤ ρ̲2. For any increasing convex function φ ∈ Cl.Lip(ℝ2), by Proposition 1.10 in Chapter II of [8], we have that $u\left(t,x,y\right):=\mathbb{E}\left[\phi \left(x+\sqrt{t}X,y+t\eta \right)\right]$, (t, x, y) ∈ [0, ∞) × ℝ × ℝ, is the unique viscosity solution of the following G-equation for (η, X): $\frac{\partial u}{\partial t} - G\Big(\frac{\partial u}{\partial y}, \frac{\partial^2 u}{\partial x^2}\Big) = 0, \qquad u|_{t=0} = \varphi,$ (9)

where $G\left(p,a\right)=\mathbb{E}\left[\frac{1}{2}a{X}^{2}+p\eta \right]$.

Since η is weakly independent from X, we have $G(p, a) = \mathbb{E}\Big[\frac{1}{2}aX^2 + p\eta\Big] = \mathbb{E}\Big[\frac{1}{2}aX^2\Big] + \mathbb{E}[p\eta] = \frac{1}{2}\big(\bar{\sigma}^2 a^+ - \underline{\sigma}^2 a^-\big) + \big(\bar{\mu} p^+ - \underline{\mu} p^-\big).$

On the other hand, since φ is an increasing convex function in Cl.Lip(ℝ2), we can check that u(t, x, y) is convex in x and increasing in y. Thus (9) becomes $\frac{\partial u}{\partial t} - \Big(\frac{1}{2}\bar{\sigma}^2\Big(\frac{\partial^2 u}{\partial x^2}\Big)^{\!+} + \bar{\mu}\Big(\frac{\partial u}{\partial y}\Big)^{\!+}\Big) = 0, \qquad u|_{t=0} = \varphi.$ (10)

Similarly we obtain that $v\left(t,x,y\right):=\mathbb{E}\left[\phi \left(x+\sqrt{t}Y,y+t\xi \right)\right]$, (t, x, y) ∈ [0, +∞) × ℝ × ℝ, is the unique viscosity solution of the following G-equation: $\frac{\partial v}{\partial t} - \Big(\frac{1}{2}\bar{\rho}^2\Big(\frac{\partial^2 v}{\partial x^2}\Big)^{\!+} + \bar{v}\Big(\frac{\partial v}{\partial y}\Big)^{\!+}\Big) = 0, \qquad v|_{t=0} = \varphi.$ (11)

Since μ̅ ≤ v̅ and σ̅2 ≤ ρ̅2, by the comparison theorem for the viscosity solutions of (10) and (11), setting (t, x, y) = (1, 0, 0), we have $\mathbb{E}[\varphi(X, \eta)] \le \mathbb{E}[\varphi(Y, \xi)].$ (12)

Since φ is an increasing convex function, we can similarly obtain that $m\left(t,x,y\right):=\mathbb{E}\left[-\phi \left(x+\sqrt{t}X,y+t\eta \right)\right]$ and $n\left(t,x,y\right):=\mathbb{E}\left[-\phi \left(x+\sqrt{t}Y,y+t\xi \right)\right]$, (t, x, y) ∈ [0, ∞) × ℝ × ℝ, are the unique viscosity solutions of the following G-equations respectively: $\frac{\partial m}{\partial t} + \frac{1}{2}\underline{\sigma}^2\Big(\frac{\partial^2 m}{\partial x^2}\Big)^{\!-} + \underline{\mu}\Big(\frac{\partial m}{\partial y}\Big)^{\!-} = 0, \quad m|_{t=0} = -\varphi, \qquad \text{and} \qquad \frac{\partial n}{\partial t} + \frac{1}{2}\underline{\rho}^2\Big(\frac{\partial^2 n}{\partial x^2}\Big)^{\!-} + \underline{v}\Big(\frac{\partial n}{\partial y}\Big)^{\!-} = 0, \quad n|_{t=0} = -\varphi.$ (13)

Using the facts that μ̲ ≤ v̲ and σ̲2 ≤ ρ̲2, by the comparison theorem for the viscosity solutions of (13), setting (t, x, y) = (1, 0, 0), we have $-\mathbb{E}[-\varphi(X, \eta)] \le -\mathbb{E}[-\varphi(Y, \xi)].$ (14)

By combining (12) with (14), we obtain $\left(\eta ,X\right){\le }_{icon}^{\mathbb{E}}\left(\xi ,Y\right)$. The proof is complete. □

In the classical linear expectation setting, for stochastic order results on normal distributions the reader can refer to [12] and [16]. We list the results as follows. Let $X\stackrel{d}{=}N\left(\mu ,{\sigma }^{2}\right)$ and $Y\stackrel{d}{=}N\left(v,{\rho }^{2}\right)$ be two normal distributions on a probability space (Ω, ℱ, P). Then we have

• $X{\le }_{mon}^{P}Y$, if and only if μ ≤ v and σ2 = ρ2;

• $X{\le }_{con}^{P}Y$, if and only if μ = v and σ2 ≤ ρ2;

• $X{\le }_{icon}^{P}Y$, if and only if μ ≤ v and σ2 ≤ ρ2.

Hence, our results generalize the classical results.

Theorem 2.9 may look like a mere combination of the stochastic order results for the two pairs of normal distributions $X\stackrel{d}{=}N\left(\overline{\mu },{\overline{\sigma }}^{2}\right)$, $Y\stackrel{d}{=}N\left(\overline{v},{\overline{\rho }}^{2}\right)$ and ${X}^{\prime }\stackrel{d}{=}N\left(\underset{_}{\mu },{\underset{_}{\sigma }}^{2}\right)$, ${Y}^{\prime }\stackrel{d}{=}N\left(\underset{_}{v},{\underset{_}{\rho }}^{2}\right)$. However, it cannot be understood in this way, because a G-distribution is not a simple collection of a family of normal distributions; see [8].

In this subsection we introduced some uncertainty orders in the sublinear expectation space and characterized them only for some random variables with special distributions; it is not easy to characterize other distributions. For more properties and computations, readers can refer to [17, 18].

## 3 Characterizations of uncertainty orders from the probability measures viewpoint

In this section, we first list some properties of capacities and quantile functions. We then introduce the recent results on uncertainty orders in the capacity space obtained by [14, 15]. We also establish some characterizations by distortion functions. Finally, we derive the characterizations of uncertainty orders in the capacity space induced by the sublinear expectation space.

## 3.1 Quantile functions and risk measures

Let (Ω, ℱ) be a measurable space; for simplicity we only consider the situation of bounded random variables. Let L∞ = L∞(Ω, ℱ) be the space of bounded ℱ-measurable functions, endowed with the supremum norm ∥·∥∞.

Firstly, we introduce some properties of set functions μ : ℱ → [0, 1] (see [3]):

• Monotonicity: if A, B ∈ ℱ and AB, then μ(A) ≤ μ(B);

• Normalization: μ(∅) = 0 and μ(Ω) = 1;

• Continuous from below: if An, A ∈ ℱ, and AnA, then μ(An) ↑ μ(A);

• Continuous from above: if An, A ∈ ℱ, and AnA, then μ(An) ↓ μ(A).

Now we introduce the definitions of capacity and Choquet integral (see, for instance, [19], [3]).

A set function μ : ℱ → [0, 1] is called a capacity if it is monotonic, normalized and continuous from below and continuous from above.

Let μ be a capacity and X ∈ L∞, and define μ(X) by $\mu(X) := \int_{-\infty}^{0}\big(\mu(X > t) - 1\big)\,dt + \int_{0}^{+\infty}\mu(X > t)\,dt.$

We call μ(X) the Choquet integral of X with respect to the capacity μ.

Let μ be a capacity and X ∈ L∞. Put $G_{\mu,\,X}(x) := \mu(X > x).$

We call Gμ, X the decreasing distribution function of X with respect to μ. Taking into account the continuity from below of the capacity μ, we derive that Gμ, X is right continuous. We introduce the definition of quantile functions of X with respect to μ following [3].

Definition ([3]). Let μ be a capacity and X ∈ L∞. We say that Ğμ, X is a quantile function of X under the capacity μ if for all λ ∈ (0, 1), $\check{G}^{+}_{\mu,\,X}(\lambda) := \sup\{x \mid G_{\mu,\,X}(x) \ge \lambda\} \;\ge\; \check{G}_{\mu,\,X}(\lambda) \;\ge\; \sup\{x \mid G_{\mu,\,X}(x) > \lambda\} =: \check{G}^{-}_{\mu,\,X}(\lambda).$

Here ${\check{G}}^{+}_{\mu,\,X}(\lambda)$ and ${\check{G}}^{-}_{\mu,\,X}(\lambda)$ are called the upper and lower (1 − λ)-quantiles of X with respect to μ.

It is easy to verify that the lower and upper (1 − λ)-quantiles of the distribution of X with respect to μ can also be represented as $\check{G}^{-}_{\mu,\,X}(\lambda) = \inf\{x \mid \mu(X > x) \le \lambda\}, \qquad \check{G}^{+}_{\mu,\,X}(\lambda) = \inf\{x \mid \mu(X > x) < \lambda\}.$ (15)

Any two quantile functions coincide at all levels λ except on at most a countable set. We also have the following properties of the quantile functions of X under the capacity μ (see Chapters 1 and 4 of [3]). Note that (iv) of Lemma 3.5 holds here because the capacity is continuous from below and from above; the reader can adapt the proof of Lemma A.23 in [12], see also Remark 2.4 in [15].

Lemma 3.5. Let μ be a capacity and X ∈ L∞. We have

• (i)

Ğμ, X (·) is a decreasing function;

• (ii)

If v is another capacity and Y ∈ L∞ such that Gv, Y ≤ Gμ, X, then Ğv, Y ≤ Ğμ, X except on at most a countable set;

• (iii)

$\mu \left[X\right]={\int }_{0}^{1}{\stackrel{˘}{G}}_{\mu ,X}\left(t\right)dt;$

• (iv)

If u is an increasing function, then Ğμ, u(X) = u ∘ Ğμ, X except on at most a countable set.

Note that the VaR of a financial position X ∈ L∞ under a given probability measure P on (Ω, ℱ) is a quantile function of the distribution of X. We give the following definitions, which generalize the definitions of VaR and AVaR under a priori probability measure. These definitions can also be found in [14, 15], where Grigorova used different notations but the economic implications are the same.

Definition 3.6. Let μ be a capacity and X ∈ L∞ a loss random variable. For λ ∈ (0, 1), we define the Value at Risk with confidence level λ of X under the capacity μ as $VaR_{\mu,\,\lambda}(X) := \check{G}^{-}_{\mu,\,X}(\lambda) = \inf\{x \mid \mu(X > x) \le \lambda\}.$

The Average Value at Risk under a capacity μ at λ ∈ (0, 1] of a loss position X ∈ L∞ is given by $AVaR_{\mu,\,\lambda}(X) := \frac{1}{\lambda}\int_{0}^{\lambda} VaR_{\mu,\,t}(X)\,dt.$

For any X ∈ L∞, Definition 3.6 can also be used to define VaRμ, 0(X) and VaRμ, 1(X): we set VaRμ, 0(X) := sup X and VaRμ, 1(X) := −∞. In the sequel, we will often use the following equivalence relation, which holds for all x ∈ ℝ and λ ∈ [0, 1]: $VaR_{\mu,\,\lambda}(X) \le x \iff G_{\mu,\,X}(x) \le \lambda.$ (16)

Since any two quantiles coincide at all levels λ except on at most a countable set, the AVaR under a capacity μ at λ ∈ (0, 1] of a financial position X ∈ L∞ can be defined by any quantile function of X under μ, i.e., $AVaR_{\mu,\,\lambda}(X) = \frac{1}{\lambda}\int_{0}^{\lambda} \check{G}_{\mu,\,X}(t)\,dt.$
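As a concrete toy illustration of Definition 3.6 (the losses and the two-measure family below are assumed data, not from the paper), the capacity is the maximum of two discrete measures, VaR is read off the capacity tail, and AVaR averages VaR over levels:

```python
xs = [0.0, 1.0, 2.0]                 # possible losses (assumed toy data)
P1 = [0.7, 0.2, 0.1]
P2 = [0.2, 0.2, 0.6]

def mu_tail(x):
    """mu(X > x) = sup_Q Q(X > x) over the two-measure family."""
    return max(sum(p for v, p in zip(xs, P) if v > x) for P in (P1, P2))

def VaR(lam):
    """VaR_{mu,lam}(X) = inf{x : mu(X > x) <= lam}, by grid scan (sketch)."""
    grid = [k / 100.0 for k in range(0, 201)]
    return min(x for x in grid if mu_tail(x) <= lam)

def AVaR(lam, n=200):
    """AVaR_{mu,lam}(X) = (1/lam) * integral over (0, lam] of VaR_{mu,t}."""
    return sum(VaR(lam * (k + 0.5) / n) for k in range(n)) / n

# Under P2 alone, P2(X > 1) = 0.6 > 0.25, so the capacity pushes VaR to 2:
assert VaR(0.25) == 2.0
assert AVaR(0.25) >= VaR(0.25) - 1e-9   # AVaR dominates VaR at the same level
```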

We list some concepts of uncertainty orders under a capacity introduced by Grigorova. See Definition 2.7 and Definition 3.1 in [15].

Let μ be a capacity, and let X and Y be two loss positions in L∞.

• (i)

X is said to precede Y in the increasing convex ordering under μ, denoted by $X{\le }_{icon}^{\mu }Y$, if for all increasing and convex functions ϕ : ℝ → ℝ, $μ(ϕ(X))≤μ(ϕ(Y)).$

• (ii)

X is said to precede Y in the monotone ordering under μ, denoted by $X{\le }_{mon}^{\mu }Y$, if for all increasing functions ϕ : ℝ → ℝ, $μ(ϕ(X))≤μ(ϕ(Y)).$

Some characterizations of these uncertainty orders were considered in [14, 15]; see Propositions 2.6–2.8 and Proposition 3.1 in [15].

Proposition 3.10 ([15]). Let μ be a capacity and X, Y ∈ L∞. The following statements hold.

• (i)

$Y{\le }_{icon}^{\mu }X⇔\mu \left({\left(X-b\right)}^{+}\right)\ge \mu \left({\left(Y-b\right)}^{+}\right)$ for any b ∈ ℝ.

• (ii)

$Y{\le }_{icon}^{\mu }X⇔AVa{R}_{\mu ,\text{λ}}\left(X\right)\ge AVa{R}_{\mu ,\text{λ}}\left(Y\right)$ for any λ ∈ (0, 1).

• (iii)

$Y{\le }_{mon}^{\mu }X⇔{G}_{\mu ,X}\left(x\right)\ge {G}_{\mu ,Y}\left(x\right),\forall x\in ℝ⇔Va{R}_{\mu ,\text{λ}}\left(X\right)\ge Va{R}_{\mu ,\text{λ}}\left(Y\right),\forall \text{λ}\in \left(0,1\right).$

In Proposition 3.10 we use our own notations. In fact, γX(t) in Proposition 2.6 of [15] equals our Ğμ, −X(1 − t) for all t ∈ (0, 1); thus (ii) and (iii) of our claims follow by a simple transformation.

Here we give another characterization of the uncertainty orders ${\le }_{mon}^{\mu }$ and ${\le }_{icon}^{\mu }$ by distortion functions. Distortion functions play an important role in the field of mathematical finance; we refer to [20] for decision choice under risk, [21] for insurance premiums and [12] for risk measures.

Motivated by [22], we give the characterizations of the uncertainty orders ${\le }_{mon}^{\mu }$ and ${\le }_{icon}^{\mu }$ in terms of distortion functions. We first recall the definition of distortion functions, which can be found in any of the literature referred to above.

A distortion function is defined as a non-decreasing function g :[0, 1] → [0, 1] such that g(0) = 0 and g(1) = 1.

The distortion risk measure associated with a distortion function g and a capacity μ is denoted by g ∘ μ(·) and is defined as $g \circ \mu(X) = \int_{-\infty}^{0}\big(g[\mu(X > x)] - 1\big)\,dx + \int_{0}^{+\infty} g[\mu(X > x)]\,dx, \quad \forall X \in L^{\infty}.$

Let μ be a capacity. For two loss positions X, Y ∈ L∞, we have

• (i)

$Y{\le }_{mon}^{\mu }X⇔g\circ \mu \left(X\right)\ge g\circ \mu \left(Y\right)$ for all distortion functions g.

• (ii)

$Y{\le }_{icon}^{\mu }X⇔g\circ \mu \left(X\right)\ge g\circ \mu \left(Y\right)$ for all concave distortion functions g.

Proof. (i) The “⟹” implication follows immediately from Gμ, X(x) ≥ Gμ, Y(x) and the non-decreasing property of any distortion function.

“⟸” For λ ∈ (0, 1), let the distortion function g be defined by g(x) = I{x > λ}, x ∈ [0, 1]. By the translation invariance of VaRμ, λ(·) and g ∘ μ(·), we may assume without loss of generality that X ≥ 0; then by (16), we have $g[\mu(X > x)] = \begin{cases} 1, & \text{if } VaR_{\mu,\,\lambda}(X) > x, \\ 0, & \text{if } VaR_{\mu,\,\lambda}(X) \le x. \end{cases}$

Hence g ∘ μ(X) = VaRμ, λ(X). Then by Proposition 3.10(iii), we have $Y{\le }_{mon}^{\mu }X$.

(ii) “⟸” For any continuous distortion function g, by Fubini’s theorem we get that $g \circ \mu(X) = \int_{(0,\,1)} VaR_{\mu,\,t}(X)\,dg(t).$

For λ ∈ (0, 1), taking $g(x) = \min\!\big(\tfrac{x}{\lambda},\, 1\big), \quad x \in [0, 1],$ (17)

we see that g is a continuous concave distortion function. Furthermore, we find that $g \circ \mu(X) = \frac{1}{\lambda}\int_{0}^{\lambda} VaR_{\mu,\,t}(X)\,dt = AVaR_{\mu,\,\lambda}(X).$ (18)

Then by Proposition 3.10, we have $Y{\le }_{icon}^{\mu }X$.

“⟹” Consider now an arbitrary concave distortion function g (a concave distortion function may fail to be continuous at the point 0). Without loss of generality, we only need to show g ∘ μ(X) ≥ g ∘ μ(Y) for all continuous concave distortion functions g.

First we prove that the claim holds for continuous concave piecewise linear distortion functions. As constructed in [22], any such function can be written as $g(x) = \sum_{i=1}^{n} \alpha_i(\beta_i - \beta_{i+1})\min(x/\alpha_i,\, 1),$

where 0 = α0 < α1 < ⋯ < αn−1 < αn = 1 and β1 > β2 > ⋯ > βn > βn+1 = 0. By (17) and (18), g ∘ μ(·) can be represented as $g \circ \mu(\cdot) = \sum_{i=1}^{n} \alpha_i(\beta_i - \beta_{i+1})\,AVaR_{\mu,\,\alpha_i}(\cdot).$

By Proposition 3.10, we thus obtain gμ(X) ≥ gμ(Y) for all continuous concave piecewise linear distortion functions g.

For any continuous concave distortion function g, there exists an increasing sequence of continuous concave piecewise linear distortion functions gn such that limn→∞ gn(x) = g(x) for all x ∈ [0, 1]. Since gn ∘ μ(X) ≥ gn ∘ μ(Y) for all n, by the monotone convergence theorem we have g ∘ μ(X) ≥ g ∘ μ(Y). The proof is complete. □
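Identity (18) can be checked numerically on a toy example; the three-point loss distribution below is assumed data, and a single probability measure is used as a (trivial) capacity.

```python
# Sketch of (18): with the concave distortion g(x) = min(x/lambda, 1), the
# distortion measure g∘mu equals AVaR_{mu,lambda}. Toy discrete data assumed.
xs = [0.0, 1.0, 3.0]                     # nonnegative losses
probs = [0.5, 0.3, 0.2]                  # a single measure used as capacity
lam = 0.25
g = lambda u: min(u / lam, 1.0)

def tail(x):
    return sum(p for v, p in zip(xs, probs) if v > x)

def g_circ_mu(n=20000, b=3.0):           # X >= 0, so only [0, b] contributes
    h = b / n
    return sum(g(tail((k + 0.5) * h)) for k in range(n)) * h

def VaR(t):
    return min(x for x in xs if tail(x) <= t)

def AVaR(n=20000):
    return sum(VaR(lam * (k + 0.5) / n) for k in range(n)) / n

assert abs(g_circ_mu() - AVaR()) < 1e-2  # both approximate 2.6
```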

Remark 3.15. If μ is a submodular capacity, then for any concave distortion function g, g ∘ μ(−·) is a coherent risk measure on L∞. In particular, for λ ∈ (0, 1], AVaRμ, λ(−·) is a coherent risk measure.

Proof. It is known that, for any submodular capacity v, v(−·) is a coherent risk measure on L∞ (see [12], Theorem 4.88). So the point is to show that $v(A) := g(\mu(A)), \quad \forall A \in \mathcal{F},$

is a submodular capacity.

Suppose that g is concave. Take A, B ∈ 𝓕 with μ(A) ≤ μ(B). We show that $g(\mu(A)) - g(\mu(A \cap B)) \ge g(\mu(A \cup B)) - g(\mu(B)).$

Set $r := \mu(A \cup B) - \mu(B)$, and note that by the submodularity of μ we have $\mu(A) - \mu(A \cap B) \ge \mu(A \cup B) - \mu(B) = r.$

The case r = 0 is trivial. For r > 0, the concavity of g yields that $\frac{v(A) - v(A \cap B)}{\mu(A) - \mu(A \cap B)} \ge \frac{v(A \cup B) - v(B)}{\mu(A \cup B) - \mu(B)}.$

Multiplying both sides by (μ(A) − μ(A ∩ B)) derives the result.

Let g be defined by (17); we then find that AVaRμ, λ(·) = g ∘ μ(·) for this continuous concave distortion function. Hence AVaRμ, λ(−·) is a coherent risk measure. The proof is complete. □

## 3.2 Characterizations from the probability measures viewpoint

Given a sublinear expectation space (Ω, 𝓗, 𝔼). In this subsection, let Ω be a complete separable metric space equipped with the distance d, let 𝓑(Ω) be the Borel σ-algebra of Ω, and let 𝒨 be the collection of all probability measures on (Ω, 𝓑(Ω)). Let Bb(Ω) be the space of all bounded 𝓑(Ω)-measurable real functions, and assume that 𝓗 = Bb(Ω). For the sublinear operator 𝔼, we assume that there exists a family of probability measures 𝒫 ⊂ 𝒨 such that $\mathbb{E}[X] = \sup_{Q \in \mathcal{P}} E_Q[X], \quad \forall X \in \mathcal{H}.$

We further assume that the induced set function μ, $\mu(A) := \mathbb{E}[I_A] = \sup_{Q \in \mathcal{P}} E_Q[I_A] = \sup_{Q \in \mathcal{P}} Q(A), \quad \forall A \in \mathcal{B}(\Omega),$

is a capacity. Thus (Ω, 𝓑(Ω), μ) becomes a capacity space.

Kervarec defined the robust VaR and AVaR using a family of probability measures in [23]; [24] also introduced similar notions under a family of absolutely continuous probability measures. For any X ∈ 𝓗 and λ ∈ (0, 1), we define $VaR_{\mathcal{P},\,\lambda}(X) := \sup_{Q \in \mathcal{P}} VaR_{Q,\,\lambda}(X),$ (19) $AVaR_{\mathcal{P},\,\lambda}(X) := \sup_{Q \in \mathcal{P}} AVaR_{Q,\,\lambda}(X),$ (20)

where VaRQ, λ(X) = inf{x|Q(X > x) ≤ λ} and $AVa{R}_{Q,\text{λ}}\left(X\right)=\frac{1}{\text{λ}}{\int }_{0}^{\text{λ}}Va{R}_{Q,t}\left(X\right)dt$ are the classical definitions (see Pages 177-179 in [12]).

It can be verified that if μ is the supremum of a family of probability measures, then for all X ∈ 𝓗 and λ ∈ (0, 1), $\mathrm{VaR}_{\mathcal{P},\lambda}(X)=\mathrm{VaR}_{\mu,\lambda}(X)\quad\text{and}\quad \mathrm{AVaR}_{\mathcal{P},\lambda}(X)=\mathrm{AVaR}_{\mu,\lambda}(X)$ hold.
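The VaR identity can be checked numerically. In the sketch below the family of measures is our own toy choice, and we take the capacity-based definition VaR_{μ,λ}(X) = inf{x : μ(X > x) ≤ λ} (an assumption on our part about the earlier notation) and compare it with (19).

```python
# toy family: two measures on a 3-point space (our own choice)
X = {0: 1.0, 1: 2.0, 2: 3.0}
P1 = {0: 0.6, 1: 0.2, 2: 0.2}
P2 = {0: 0.2, 1: 0.2, 2: 0.6}
vals = sorted(set(X.values()))

def tail(p, x):  # Q(X > x)
    return sum(p[w] for w in p if X[w] > x)

def var_q(p, lam):  # VaR_{Q,lam}(X) = inf{x : Q(X > x) <= lam}
    return min(v for v in vals if tail(p, v) <= lam)

def var_P(lam):   # robust VaR, definition (19)
    return max(var_q(p, lam) for p in (P1, P2))

def var_mu(lam):  # capacity-based VaR: inf{x : mu(X > x) <= lam}
    return min(v for v in vals if max(tail(P1, v), tail(P2, v)) <= lam)

agree = all(var_P(l) == var_mu(l) for l in [0.1, 0.3, 0.45, 0.6, 0.9])
```

The agreement reflects that each tail t ↦ Q(X > t) is decreasing, so the set where the supremum of the tails drops below λ is the intersection of the individual sets, and its infimum is the supremum of the individual infima.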

In the case of AVaR under a given probability measure P, it is known that AVaRP, λ(−·) is a coherent risk measure (see [12], Theorem 4.47). For coherent risk measures, we refer to [1] and [2]. From Remark 3.15, we see that AVaRμ, λ(−·) is a coherent risk measure. Moreover, from Theorem 4.47 in [12], we obtain that for any λ ∈ (0, 1] and X ∈ 𝓗, $\mathrm{AVaR}_{\mathcal{P},\lambda}(X)=\sup_{Q\in\mathcal{P}}\,\max_{R\in\mathcal{Q}_{\lambda}}E_{R}[X],$

where $\mathcal{Q}_{\lambda}=\left\{R\in\mathcal{M}\;\middle|\;R\ll Q\ \text{and}\ \frac{dR}{dQ}\le\frac{1}{\lambda},\ Q\text{-a.s.}\right\}.$
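This dual representation can be checked for a single discrete Q: the maximizing R piles as much mass as the density bound 1/λ allows onto the largest outcomes. A sketch under a toy uniform Q of our own choosing:

```python
# verify AVaR_{Q,lam}(X) = max_{R in Q_lam} E_R[X] for one discrete Q
values = [1.0, 2.0, 3.0, 4.0]
probs = [0.25] * 4       # Q uniform on four states (toy choice)
lam = 0.5

# greedy maximizer: R-mass on each state is capped at probs[i] / lam,
# so pile the allowed mass onto the largest outcomes first
cap = [p / lam for p in probs]
order = sorted(range(len(values)), key=lambda i: -values[i])
remaining, best = 1.0, 0.0
for i in order:
    m = min(cap[i], remaining)
    best += m * values[i]
    remaining -= m
# best is max_{R in Q_lam} E_R[X]

# AVaR via the quantile integral (1/lam) * int_0^lam VaR_{Q,t}(X) dt
def tail(x):  # Q(X > x)
    return sum(p for v, p in zip(values, probs) if v > x)
def var(t):  # VaR_{Q,t}(X)
    return min(v for v in values if tail(v) <= t)
N = 10000
avar = sum(var((i + 0.5) * lam / N) for i in range(N)) / N
```

With λ = 0.5 the bound allows R-probability at most 0.5 per state, so the optimum puts 0.5 on the outcome 4 and 0.5 on the outcome 3, matching the quantile integral.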

Finally, from the characterization results for uncertainty orders in Propositions 3.10–3.13, we conclude the following theorems.

Let μ be a capacity induced by a family of probability measures 𝒫, which is determined by the sublinear expectation 𝔼, and let X, Y ∈ 𝓗. The following statements are equivalent.

• (i)

μ(𝜙(X)) ≥ μ(𝜙(Y)) for all increasing and convex functions 𝜙 : ℝ → ℝ.

• (ii)

μ((X − b)+) ≥ μ((Y − b)+) for any b ∈ ℝ.

• (iii)

AVaR𝒫, λ(X) ≥ AVaR𝒫, λ(Y) for any λ ∈ (0, 1).

• (iv)

gμ(X) ≥ gμ(Y) for all concave distortion functions g.
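A crude consistency check of statements (ii) and (iii) above, on a toy example of our own: interpreting μ(Z) for a random variable Z as the sublinear expectation sup_{Q∈𝒫} E_Q[Z] (an assumption on our part), we take X ≥ Y pointwise and confirm that both the stop-loss quantities and the robust AVaRs come out ordered.

```python
# toy family on a 3-point space; X >= Y pointwise (our own illustrative choice)
X = {0: 1.0, 1: 2.0, 2: 3.0}
Y = {0: 0.5, 1: 2.0, 2: 2.5}
P1 = {0: 0.6, 1: 0.2, 2: 0.2}
P2 = {0: 0.2, 1: 0.2, 2: 0.6}

def sup_exp(Z):  # sup_Q E_Q[Z] over the family {P1, P2}
    return max(sum(p[w] * Z[w] for w in p) for p in (P1, P2))

def stop_loss(Z, b):  # sup_Q E_Q[(Z - b)^+], the quantity in (ii)
    return sup_exp({w: max(Z[w] - b, 0.0) for w in Z})

def avar_q(p, Z, lam, N=2000):  # AVaR_{Q,lam}(Z) via the quantile integral
    vals = sorted(set(Z.values()))
    def tail(x):
        return sum(p[w] for w in p if Z[w] > x)
    def var(t):
        return min(v for v in vals if tail(v) <= t)
    return sum(var((i + 0.5) * lam / N) for i in range(N)) / N

def avar_P(Z, lam):  # robust AVaR, definition (20), the quantity in (iii)
    return max(avar_q(p, Z, lam) for p in (P1, P2))

bs = [i / 10 for i in range(-10, 40)]
stop_loss_ordered = all(stop_loss(X, b) >= stop_loss(Y, b) - 1e-9 for b in bs)
avar_ordered = all(avar_P(X, l) >= avar_P(Y, l) - 1e-9 for l in [0.2, 0.5, 0.8])
```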

Let μ be a capacity induced by a family of probability measures 𝒫, which is determined by the sublinear expectation 𝔼, and let X, Y ∈ 𝓗. The following statements are equivalent.

• (i)

μ(𝜙(X)) ≥ μ(𝜙(Y)) for all increasing functions 𝜙 : ℝ → ℝ.

• (ii)

Gμ, X(x) ≥ Gμ, Y(x), ∀x ∈ ℝ.

• (iii)

VaR𝒫, λ(X) ≥ VaR𝒫, λ(Y), for any λ ∈ (0, 1).

• (iv)

gμ(X) ≥ gμ(Y) for all distortion functions g.
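The implication (ii) ⇒ (iii) of this theorem can be sanity-checked numerically, assuming (as the surrounding notation suggests, though this is our reading) that G_{μ,X}(x) = μ(X > x) is the capacity tail function. The family of measures below is a toy choice.

```python
# toy check: mu(X > x) >= mu(Y > x) for all x implies the robust VaRs are ordered
X = {0: 1.0, 1: 2.0, 2: 3.0}
Y = {0: 0.5, 1: 2.0, 2: 2.5}   # Y <= X pointwise, so the tails are ordered
P1 = {0: 0.6, 1: 0.2, 2: 0.2}
P2 = {0: 0.2, 1: 0.2, 2: 0.6}

def tail(p, Z, x):  # Q(Z > x)
    return sum(p[w] for w in p if Z[w] > x)

def mu_tail(Z, x):  # mu(Z > x) = sup_Q Q(Z > x)
    return max(tail(P1, Z, x), tail(P2, Z, x))

def var_P(Z, lam):  # robust VaR, definition (19)
    vals = sorted(set(Z.values()))
    return max(min(v for v in vals if tail(p, Z, v) <= lam) for p in (P1, P2))

xs = [i / 100 for i in range(-100, 400)]
tails_ordered = all(mu_tail(X, x) >= mu_tail(Y, x) for x in xs)
vars_ordered = all(var_P(X, l) >= var_P(Y, l) for l in [0.1, 0.3, 0.45, 0.6, 0.9])
```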

## 4 Conclusions

This paper considers uncertainty orders on the sublinear expectation space from two different viewpoints. It is worth noting that the sublinear expectation does not, in general, equal the Choquet integral with respect to the capacity induced by the sublinear expectation; the reader can refer to [25] and [26] for more details. We only consider some special distributions from the first viewpoint, and other plausible formulations are left to future study.

## Acknowledgement

The authors are very grateful to the editor and an anonymous referee for several constructive and insightful comments on how to improve the paper. This work was supported by the National Natural Science Foundation of China (11371362) and the Natural Science Foundation of Jiangsu Province (BK20150167).

## References

• [1]

Artzner, P., Delbaen, F., Eber, J. M. and Heath, D. Coherent measures of risk. Math. Financ., 9, 203-228, 1999. Google Scholar

• [2]

Delbaen, F. Coherent risk measures on general probability spaces. In: Sandmann, K., Schönbucher, P.J. (Eds.), Advances in Finance and Stochastics: Essays in Honour of Dieter Sondermann. Springer, New York, pp. 1-37, 2002. Google Scholar

• [3]

• [4]

Peng, S. BSDE and related g-expectation. Pitman Research Notes in Mathematics Series, No. 364, N. El Karoui and L. Mazliak (eds.), 141-159, 1997. Google Scholar

• [5]

Peng, S. G-Expectation, G-Brownian Motion and Related Stochastic Calculus of Itô's type. In Stochastic Analysis and Applications, Abel Symposium, Abel Symposia 2, Benth et al. (eds.), Springer-Verlag, 541-567, 2006. Google Scholar

• [6]

Peng, S. G-Brownian motion and dynamic risk measure under volatility uncertainty. In arXiv:0711.2834v1 [math.PR], 2007. Google Scholar

• [7]

Peng, S. Multi-dimensional G-Brownian motion and related stochastic calculus under G-expectation. Stoch. Proc. Appl., 118(12), 2223-2253, 2008.Google Scholar

• [8]

Peng, S. Nonlinear Expectations and Stochastic Calculus under Uncertainty with Robust Central Limit Theorem and G-Brownian Motion. In arXiv:1002.4546v1 [math.PR], 2010. Google Scholar

• [9]

He, K., Hu, M., and Chen, Z. The relationship between risk measures and Choquet expectations in the framework of g-expectations. Stat. Probabil. Lett., 79, 508-512, 2009. Google Scholar

• [10]

Jiang, L. Convexity, Translation Invariance and Subadditivity for g-Expectations and Related Risk Measures. Ann. Appl. Probab., 18, 1245-1258, 2008. Google Scholar

• [11]

von Neumann, J., Morgenstern, O. Theory of Games and Economic Behavior. 2nd edn. Princeton University Press, 1947. Google Scholar

• [12]

Föllmer, H. and Schied, A. Stochastic Finance, An Introduction in Discrete Time. Walter de Gruyter, 2004. Google Scholar

• [13]

Shaked, M. and Shanthikumar, J. G. Stochastic Orders. Springer, 2007. Google Scholar

• [14]

Grigorova, M. Stochastic orderings with respect to a capacity and an application to a financial optimization problem. HAL : hal-00614716, version 1, 2011. Google Scholar

• [15]

Grigorova, M. Stochastic dominance with respect to a capacity and risk measures. HAL: hal-00639667, version 1, 2011. Google Scholar

• [16]

Müller, A. Stochastic ordering of multivariate normal distributions. Ann. Inst. Statist. Math., 53 (3), 567-575, 2001. Google Scholar

• [17]

Hu, M. Explicit solutions of the G-heat equation for a class of initial conditions. Nonlinear Analysis, 75, 6588-6595, 2012. Google Scholar

• [18]

Song, Y. A note on G-normal distribution. In arXiv:1410.8225, 2014. Google Scholar

• [19]

Choquet, G. Theory of capacities. Ann. Inst. Fourier (Grenoble), 5, 131-295, 1953. Google Scholar

• [20]

Yaari, M. The Dual Theory of Choice under Risk. Econometrica, 55, 95-115, 1987. Google Scholar

• [21]

Wang, S. Premium calculation by transforming the layer premium density. ASTIN Bulletin, 26, 71-92, 1996. Google Scholar

• [22]

Dhaene, J., Vanduffel, S., Goovaerts, M.J., Kaas, R., Tang, Q., Vyncke, D. Risk measures and comonotonicity: A review. Stochastic Models, 22, 573-606, 2006. Google Scholar

• [23]

Kervarec, M. Etude des modèles non dominés en mathématiques financières. Thèse de Doctorat en Mathématiques, Université d'Évry, 2008. Google Scholar

• [24]

Föllmer, H., Knispel, T. Entropic risk measures: coherence vs. convexity, model ambiguity, and robust large deviations. Stochastics and Dynamics, 11(2-3), 333-351, 2011. Google Scholar

• [25]

Huber, P., Strassen, V. Minimax tests and the Neyman-Pearson lemma for capacities. Ann. Statist., 1, 251-263, 1973. Google Scholar

• [26]

Chen, Z., Chen, T., Davison, M. Choquet expectation and Peng's g-expectation. The Annals of Probability, 33, 1179-1199, 2005. Google Scholar

Accepted: 2016-01-08

Published Online: 2016-04-23

Published in Print: 2016-01-01

Competing interests: The authors declare that they have no competing interests.

Authors’ contributions: All authors contributed equally and significantly in writing this article. All authors read and approved the final manuscript.

Citation Information: Open Mathematics, ISSN (Online) 2391-5455,
