Throughout the following paper, we interpret the risk positions as loss random vectors. Motivated by the definitions of stochastic orders in a probability space, we give the definitions of uncertainty orders in a sublinear expectation space. We then use these uncertainty orders to derive the characterizations for maximal distributions, *G*-normal distributions and *G*-distributions.

**Definition 2.4:** *Let* (Ω, 𝓗, 𝔼) *be a sublinear expectation space. Let X, Y be two n-dimensional random vectors on* (Ω, 𝓗, 𝔼).

*(i)*

*X is said to precede Y in the monotonic order sense under* 𝔼, *denoted by $X{\le}_{mon}^{\mathbb{E}}Y$, if for all increasing functions φ* ∈ *C*_{l.Lip} (ℝ^{n}), we have
$$\mathbb{E}[\phi (X)]\le \mathbb{E}[\phi (Y)]\quad \text{and}\quad -\mathbb{E}[-\phi (X)]\le -\mathbb{E}[-\phi (Y)].$$(1)
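Definition 2.4 can be explored numerically if one represents the sublinear expectation in its robust form 𝔼[·] = sup_P E_P[·] over a family of scenarios. The sketch below is only an illustration under that assumption: each random variable is modeled by two Gaussian Monte Carlo scenarios (the scenario parameters and the test function `tanh` are arbitrary choices), and condition (1) is checked for one increasing function.

```python
import numpy as np

rng = np.random.default_rng(0)

# A sublinear expectation in robust form: E[Z] = max over scenarios of E_P[Z].
# Each scenario is a crude discrete model given by a Monte Carlo sample cloud.
scenarios_X = [rng.normal(0.0, s, 100_000) for s in (0.8, 1.0)]
scenarios_Y = [rng.normal(0.5, s, 100_000) for s in (0.8, 1.0)]  # Y shifted up

def E(scenarios, phi):
    """Sublinear expectation: supremum of the scenario-wise means."""
    return max(phi(z).mean() for z in scenarios)

phi = np.tanh  # one increasing test function

# Condition (1): E[phi(X)] <= E[phi(Y)]  and  -E[-phi(X)] <= -E[-phi(Y)].
lhs_upper = E(scenarios_X, phi) <= E(scenarios_Y, phi)
lhs_lower = -E(scenarios_X, lambda z: -phi(z)) <= -E(scenarios_Y, lambda z: -phi(z))
print(lhs_upper and lhs_lower)  # True
```

Note that, unlike the classical case, both inequalities in (1) must be checked, since 𝔼 and the conjugate expectation -𝔼[-·] differ under sublinearity.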

*(ii)*

*X is said to precede Y in the convex order sense under* 𝔼, *denoted by $X{\le}_{con}^{\mathbb{E}}Y$, if (1) holds for all convex functions φ* ∈ *C*_{l.Lip} (ℝ^{n}).

*(iii)*

*X is said to precede Y in the increasing convex order sense under* 𝔼, *denoted by $X{\le}_{icon}^{\mathbb{E}}Y$, if (1) holds for all increasing convex functions φ* ∈ *C*_{l.Lip} (ℝ^{n}).

*–*

*if $X{\le}_{mon}^{\mathbb{E}}Y$ or $X{\le}_{icon}^{\mathbb{E}}Y$, then $\overline{\mu}\le \overline{v}$ and $\underset{\_}{\mu}\le \underset{\_}{v}$.*

*–*

*if $X{\le}_{con}^{\mathbb{E}}Y$, then $\overline{\mu}=\overline{v}$, $\underset{\_}{\mu}=\underset{\_}{v}$, ${\overline{\sigma}}^{2}\le {\overline{\rho}}^{2}$ and ${\underset{\_}{\sigma}}^{2}\le {\underset{\_}{\rho}}^{2}$.*

*In the following three theorems we show that, for some particular distributions which are central to the theory of sublinear expectation spaces, the above necessary conditions are also sufficient.*

**Theorem 2.7:** *Let* $\eta \stackrel{d}{=}N([\underset{\_}{\mu},\overline{\mu}]\times \{0\})$ and $\xi \stackrel{d}{=}N([\underset{\_}{v},\overline{v}]\times \{0\})$ *be two maximal distributions on a sublinear expectation space* (Ω, 𝓗, 𝔼). *Then we have*

*(i)*

*$\eta {\le}_{mon}^{\mathbb{E}}\xi \iff \eta {\le}_{icon}^{\mathbb{E}}\xi \iff \overline{\mu}\le \overline{v}$ and $\underset{\_}{\mu}\le \underset{\_}{v}$.*

*(ii)*

*$\eta {\le}_{con}^{\mathbb{E}}\xi \iff \overline{\mu}=\overline{v}$ and $\underset{\_}{\mu}=\underset{\_}{v}$, i.e., $\eta \stackrel{d}{=}\xi $.*

**Proof:** *(i) From the definitions of the monotonic order and the increasing convex order, it is obvious that $\eta {\le}_{mon}^{\mathbb{E}}\xi $ implies $\eta {\le}_{icon}^{\mathbb{E}}\xi $.*

*If $\eta {\le}_{icon}^{\mathbb{E}}\xi $, choosing the increasing convex function φ*(*x*) = *x*, *which belongs to C*_{l.Lip} (ℝ), *from the definitions of the increasing convex order and maximal distributions, we have*
$$\begin{array}{ccc}\overline{\mu}=\mathbb{E}[\eta ]\le \mathbb{E}[\xi ]=\overline{v}& \text{and}& \underset{\_}{\mu}=-\mathbb{E}[-\eta ]\le -\mathbb{E}[-\xi ]=\underset{\_}{v}.\end{array}$$

Conversely, if *μ̅* ≤ *v̅* and $\underset{\_}{\mu}\le \underset{\_}{v}$, then for any increasing function *φ* ∈ *C*_{l.Lip} (ℝ), from the equivalent definition of the maximal distribution (Definition 1.1 in Chapter II of [8]), we have
$$\mathbb{E}[\phi (\eta )]=\underset{\underset{\_}{\mu}\le y\le \overline{\mu}}{\mathrm{max}}\phi (y)=\phi (\overline{\mu})\le \phi (\overline{v})=\underset{\underset{\_}{v}\le y\le \overline{v}}{\mathrm{max}}\phi (y)=\mathbb{E}[\phi (\xi )],$$and
$$-\mathbb{E}[-\phi (\eta )]=\underset{\underset{\_}{\mu}\le y\le \overline{\mu}}{\mathrm{min}}\phi (y)=\phi (\underset{\_}{\mu})\le \phi (\underset{\_}{v})=\underset{\underset{\_}{v}\le y\le \overline{v}}{\mathrm{min}}\phi (y)=-\mathbb{E}[-\phi (\xi )].$$Thus we have $\eta {\le}_{mon}^{\mathbb{E}}\xi $.

(ii) It is obvious that $\eta \stackrel{d}{=}\xi $ implies $\eta {\le}_{con}^{\mathbb{E}}\xi $. As for the other direction, choosing the convex functions *φ*(*x*) = *x* and *φ*(*x*) = -*x* respectively, we can easily obtain *μ̅* = *v̅* and $\underset{\_}{\mu}=\underset{\_}{v}$ by the definitions of ${\le}_{con}^{\mathbb{E}}$ and maximal distributions. The proof is complete. □
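The maximal-distribution formula 𝔼[φ(η)] = max_{μ̲ ≤ y ≤ μ̄} φ(y) used in this proof makes Theorem 2.7(i) easy to check numerically. The sketch below is a grid-based illustration (the intervals [0, 1] and [0.5, 2] and the sample test functions are arbitrary choices, not from the text): it evaluates both sides of (1) for a few increasing functions.

```python
import numpy as np

def E_maximal(phi, mu_low, mu_up, grid=10_001):
    """E[phi(eta)] for a maximal distribution eta ~ N([mu_low, mu_up] x {0}):
    the maximum of phi over the mean interval."""
    y = np.linspace(mu_low, mu_up, grid)
    return phi(y).max()

def neg_E_neg(phi, mu_low, mu_up):
    """-E[-phi(eta)]: the minimum of phi over the mean interval."""
    return -E_maximal(lambda y: -phi(y), mu_low, mu_up)

# eta ~ N([0, 1] x {0}) and xi ~ N([0.5, 2] x {0}): here mu_up <= v_up and
# mu_low <= v_low, so Theorem 2.7(i) predicts eta <=_mon xi.
tests = [np.tanh, lambda y: y, lambda y: np.minimum(y, 3.0)]  # increasing
ok = all(
    E_maximal(phi, 0.0, 1.0) <= E_maximal(phi, 0.5, 2.0)
    and neg_E_neg(phi, 0.0, 1.0) <= neg_E_neg(phi, 0.5, 2.0)
    for phi in tests
)
print(ok)  # True
```

Since φ is increasing, the maximum and minimum are attained at the interval endpoints, which is exactly the endpoint comparison carried out in the proof.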

**Theorem 2.8:** *Let* $X\stackrel{d}{=}N(\{0\}\times [{\underset{\_}{\sigma}}^{2},{\overline{\sigma}}^{2}])$ and $Y\stackrel{d}{=}N(\{0\}\times [{\underset{\_}{\rho}}^{2},{\overline{\rho}}^{2}])$ *be two G-normal distributions on a sublinear expectation space* (Ω, 𝓗, 𝔼). *Then we have*

*(i)*

*$X{\le}_{mon}^{\mathbb{E}}Y\iff {\overline{\sigma}}^{2}={\overline{\rho}}^{2}$ and ${\underset{\_}{\sigma}}^{2}={\underset{\_}{\rho}}^{2}$, i.e., $X\stackrel{d}{=}Y$.*

*(ii)*

*$X{\le}_{con}^{\mathbb{E}}Y\iff X{\le}_{icon}^{\mathbb{E}}Y\iff {\overline{\sigma}}^{2}\le {\overline{\rho}}^{2}$ and ${\underset{\_}{\sigma}}^{2}\le {\underset{\_}{\rho}}^{2}$.*

**Proof:** *(i) That $X\stackrel{d}{=}Y$ implies $X{\le}_{mon}^{\mathbb{E}}Y$ is obvious.*

*Suppose $X{\le}_{mon}^{\mathbb{E}}Y$. Recall from [8] that* 𝔼[*φ*(·)] *can be calculated explicitly for G-normal distributions when φ* ∈ *C*_{l.Lip} (ℝ) *is a convex or concave function. For the increasing convex function φ*(*x*) = *x*^{+}, *which belongs to C*_{l.Lip} (ℝ), *we thus have*
$$\mathbb{E}[\phi (X)]=\frac{1}{\sqrt{2\pi}}{\displaystyle \underset{-\infty}{\overset{+\infty}{\int}}\phi (\overline{\sigma}y)\text{e}\text{x}\text{p}(-\frac{{y}^{2}}{2})\text{d}y}=\frac{\overline{\sigma}}{\sqrt{2\pi}}{\displaystyle \underset{0}{\overset{+\infty}{\int}}y\text{e}\text{x}\text{p}(-\frac{{y}^{2}}{2})\text{d}y}=\frac{1}{\sqrt{2\pi}}\overline{\sigma}.$$Similarly we obtain $\mathbb{E}[\phi (Y)]=\frac{1}{\sqrt{2\pi}}\overline{\rho}$. Since *σ̅*, *ρ̅* ≥ 0, we have *σ̅*^{2} ≤ *ρ̅*^{2} by the definition of ${\le}_{mon}^{\mathbb{E}}$. On the other hand, we have
$$-\mathbb{E}[-\phi (X)]=\frac{1}{\sqrt{2\pi}}{\displaystyle \underset{-\infty}{\overset{+\infty}{\int}}\phi (\underset{\_}{\sigma}y)\text{exp}(-\frac{{y}^{2}}{2})\text{d}y}=\frac{\underset{\_}{\sigma}}{\sqrt{2\pi}}{\displaystyle \underset{0}{\overset{+\infty}{\int}}y\text{exp}(-\frac{{y}^{2}}{2})\text{d}y}=\frac{1}{\sqrt{2\pi}}\underset{\_}{\sigma}.$$Similarly we can get $-\mathbb{E}[-\phi (Y)]=\frac{1}{\sqrt{2\pi}}\underset{\_}{\rho}$. Since $\underset{\_}{\sigma},\underset{\_}{\rho}\ge 0$, we have ${\underset{\_}{\sigma}}^{2}\le {\underset{\_}{\rho}}^{2}$ from the definition of ${\le}_{mon}^{\mathbb{E}}$.

Taking the increasing concave function *φ*(*x*) = -*x*^{−}, we can derive *σ̅*^{2} ≥ *ρ̅*^{2} and ${\underset{\_}{\sigma}}^{2}\ge {\underset{\_}{\rho}}^{2}$ using the same arguments as for *φ*(*x*) = *x*^{+}. We conclude from the above that *σ̅*^{2} = *ρ̅*^{2} and ${\underset{\_}{\sigma}}^{2}={\underset{\_}{\rho}}^{2}$, *i.e*., $X\stackrel{d}{=}Y$.

(ii) Clearly we have $X{\le}_{con}^{\mathbb{E}}Y\Rightarrow X{\le}_{icon}^{\mathbb{E}}Y$. Repeating the arguments in part (i) with the increasing convex function *φ*(*x*) = *x*^{+}, we have $X{\le}_{icon}^{\mathbb{E}}Y\Rightarrow {\overline{\sigma}}^{2}\le {\overline{\rho}}^{2}$ and ${\underset{\_}{\sigma}}^{2}\le {\underset{\_}{\rho}}^{2}$.

It remains to show that ${\overline{\sigma}}^{2}\le {\overline{\rho}}^{2}$ and ${\underset{\_}{\sigma}}^{2}\le {\underset{\_}{\rho}}^{2}$ imply $X{\le}_{con}^{\mathbb{E}}Y$. For any convex function *φ* ∈ *C*_{l.Lip} (ℝ), consider the following *G*-heat equation for *X*
$$\{\begin{array}{l}\frac{\partial u}{\partial t}-\frac{1}{2}\left({\overline{\sigma}}^{2}{(\frac{{\partial}^{2}u}{\partial {x}^{2}})}^{+}-{\underset{\_}{\sigma}}^{2}{(\frac{{\partial}^{2}u}{\partial {x}^{2}})}^{-}\right)=0,\\ u{|}_{t=0}=\phi .\end{array}$$(3)We have that $u(t,x):=\mathbb{E}[\phi (x+\sqrt{t}X)]$, (*t*, *x*) ∈ [0, ∞) × ℝ, is the unique viscosity solution of (3). Furthermore, we can check that *u*(*t*, *x*) is convex in *x*. Thus the above *G*-heat equation (3) becomes
$$\left\{\begin{array}{l}\frac{\mathrm{\partial}u}{\mathrm{\partial}t}-\frac{1}{2}{\overline{\sigma}}^{2}(\frac{{\mathrm{\partial}}^{2}u}{\mathrm{\partial}{x}^{2}}{)}^{+}=0,\\ u{|}_{t=0}=\phi .\end{array}\right.$$(4)Similarly we obtain $v(t,x):=\mathbb{E}[\phi (x+\sqrt{t}Y)]$, (*t*, *x*) ∈ [0, ∞) × ℝ, is the unique viscosity solution of the following *G*-heat equation
$$\{\begin{array}{c}\frac{\mathrm{\partial}v}{\mathrm{\partial}t}-\frac{1}{2}{\overline{\rho}}^{2}(\frac{{\mathrm{\partial}}^{2}v}{\mathrm{\partial}{x}^{2}}{)}^{+}=0,\hfill \\ v{|}_{t=0}=\phi .\hfill \end{array}$$(5)Since *σ̅*^{2} ≤ *ρ̅*^{2}, by the comparison theorem for the viscosity solutions of (4) and (5) (for example, see Theorem 2.6 in Appendix C of [8]), we derive that
$$\begin{array}{cc}u(t,x)\le v(t,x),& (t,x)\in [0,+\infty )\times \mathbb{R}.\end{array}$$In particular, taking (*t*, *x*) = (1, 0), we have
$$\mathbb{E}[\phi (X)]\le \mathbb{E}[\phi (Y)].$$(6)Since -*φ* is a concave function, we can similarly show that $m(t,x):=\mathbb{E}[-\phi (x+\sqrt{t}X)]$ and $n(t,x):=\mathbb{E}[-\phi (x+\sqrt{t}Y)]$, (*t*, *x*) ∈ [0, ∞) × ℝ, are the unique viscosity solutions of the following *G*-heat equations respectively
$$\begin{array}{ccc}\{\begin{array}{c}\frac{\mathrm{\partial}m}{\mathrm{\partial}t}+\frac{1}{2}{\underset{\_}{\sigma}}^{2}(\frac{{\mathrm{\partial}}^{2}m}{\mathrm{\partial}{x}^{2}}{)}^{-}=0,\hfill \\ m{|}_{t=0}=-\phi .\hfill \end{array}& \mathrm{a}\mathrm{n}\mathrm{d}& \{\begin{array}{c}\frac{\mathrm{\partial}n}{\mathrm{\partial}t}+\frac{1}{2}{\underset{\_}{\rho}}^{2}(\frac{{\mathrm{\partial}}^{2}n}{\mathrm{\partial}{x}^{2}}{)}^{-}=0,\hfill \\ n{|}_{t=0}=-\phi .\hfill \end{array}\end{array}$$(7)Since ${\underset{\_}{\sigma}}^{2}\le {\underset{\_}{\rho}}^{2}$, by the comparison theorem for the viscosity solutions of (7), setting (*t*, *x*) = (1, 0) we have
$$-\mathbb{E}[-\phi (X)]\le -\mathbb{E}[-\phi (Y)].$$(8)By combining (6) with (8), we get $X{\le}_{con}^{\mathbb{E}}Y$. The proof is complete. □
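The explicit Gaussian computations in this proof can be verified by quadrature. The sketch below checks, under the stated calculation rule for *G*-normal distributions (the upper variance governs convex test functions and the lower variance their concave negatives), that 𝔼[*X*^{+}] = *σ̅*/√(2π) and -𝔼[-*X*^{+}] = σ̲/√(2π); the values σ̲ = 0.6, σ̅ = 1.3 are illustrative choices only.

```python
import numpy as np

def gauss_mean(phi, sigma, n=200_001, cut=10.0):
    """Classical E[phi(sigma * Z)], Z ~ N(0, 1), by trapezoidal quadrature."""
    y = np.linspace(-cut, cut, n)
    f = phi(sigma * y) * np.exp(-y**2 / 2) / np.sqrt(2 * np.pi)
    dy = y[1] - y[0]
    return float(np.sum((f[1:] + f[:-1]) * 0.5) * dy)

# G-normal X with variance interval [sig_low^2, sig_up^2], phi(x) = x^+:
#   E[phi(X)]   should equal sig_up  / sqrt(2*pi)  (phi convex)
#  -E[-phi(X)]  should equal sig_low / sqrt(2*pi)  (-phi concave)
sig_low, sig_up = 0.6, 1.3
pos = lambda x: np.maximum(x, 0.0)

upper = gauss_mean(pos, sig_up)   # classical mean under the upper variance
lower = gauss_mean(pos, sig_low)  # classical mean under the lower variance
print(np.isclose(upper, sig_up / np.sqrt(2 * np.pi), atol=1e-6))   # True
print(np.isclose(lower, sig_low / np.sqrt(2 * np.pi), atol=1e-6))  # True
```

The same quadrature with σ̅ ≤ ρ̅ and σ̲ ≤ ρ̲ immediately reproduces the two inequalities that yield ${\overline{\sigma}}^{2}\le {\overline{\rho}}^{2}$ and ${\underset{\_}{\sigma}}^{2}\le {\underset{\_}{\rho}}^{2}$.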

**Theorem 2.9:** *Let* $(\eta ,X)\stackrel{d}{=}N([\underset{\_}{\mu},\overline{\mu}]\times [{\underset{\_}{\sigma}}^{2},{\overline{\sigma}}^{2}])$ and $(\xi ,Y)\stackrel{d}{=}N([\underset{\_}{v},\overline{v}]\times [{\underset{\_}{\rho}}^{2},{\overline{\rho}}^{2}])$ *be two G-distributions on a sublinear expectation space* (Ω, 𝓗, 𝔼). *Moreover, suppose that η is weakly independent from X and that ξ is weakly independent from Y. Then we have*

*(i)*

*$(\eta ,X){\le}_{mon}^{\mathbb{E}}(\xi ,Y)$ if and only if* *μ̅* ≤ *v̅*, $\underset{\_}{\mu}\le \underset{\_}{v}$, *σ̅*^{2} = *ρ̅*^{2} *and* ${\underset{\_}{\sigma}}^{2}={\underset{\_}{\rho}}^{2}$.

*(ii)*

*$(\eta ,X){\le}_{con}^{\mathbb{E}}(\xi ,Y)$ if and only if* *μ̅* = *v̅*, $\underset{\_}{\mu}=\underset{\_}{v}$, *σ̅*^{2} ≤ *ρ̅*^{2} *and* ${\underset{\_}{\sigma}}^{2}\le {\underset{\_}{\rho}}^{2}$.

*(iii)*

*$(\eta ,X){\le}_{icon}^{\mathbb{E}}(\xi ,Y)$ if and only if* *μ̅* ≤ *v̅*, $\underset{\_}{\mu}\le \underset{\_}{v}$, *σ̅*^{2} ≤ *ρ̅*^{2} *and* ${\underset{\_}{\sigma}}^{2}\le {\underset{\_}{\rho}}^{2}$.

**Proof:** *The "only if" parts follow by combining the results of Theorem 2.7 and Theorem 2.8. For example, if $(\eta ,X){\le}_{mon}^{\mathbb{E}}(\xi ,Y)$, then we can derive that $\eta {\le}_{mon}^{\mathbb{E}}\xi $ and $X{\le}_{mon}^{\mathbb{E}}Y$, and the results follow from Theorem 2.7 and Theorem 2.8.*

*For the converse implications, the key idea is again an application of the comparison theorem for viscosity solutions of G-equations. We only show case (iii); cases (i) and (ii) are verified by an analogous argument.*

*Assume μ̅* ≤ *v̅*, $\underset{\_}{\mu}\le \underset{\_}{v}$, *σ̅*^{2} ≤ *ρ̅*^{2} *and* ${\underset{\_}{\sigma}}^{2}\le {\underset{\_}{\rho}}^{2}$. *For any increasing convex function φ* ∈ *C*_{l.Lip} (ℝ^{2}), *by Proposition 1.10 in Chapter II of [8], we have that $u(t,x,y):=\mathbb{E}[\phi (x+\sqrt{t}X,y+t\eta )]$,* (*t*, *x*, *y*) ∈ [0, ∞)×ℝ×ℝ, *is the unique viscosity solution of the following G-equation for* (*η*, *X*)
$$\{\begin{array}{l}\frac{\partial u}{\partial t}-G(\frac{\partial u}{\partial y},\frac{{\partial}^{2}u}{\partial {x}^{2}})=0,\\ u{|}_{t=0}=\phi ,\end{array}$$(9)where $G(p,a)=\mathbb{E}[\frac{1}{2}a{X}^{2}+p\eta ]$. Since *η* is weakly independent from *X*, we have
$$G(p,a)=\mathbb{E}[\frac{1}{2}a{X}^{2}+p\eta ]=\mathbb{E}[\frac{1}{2}a{X}^{2}]+\mathbb{E}[p\eta ]=\frac{1}{2}({\overline{\sigma}}^{2}{a}^{+}-{\underset{\_}{\sigma}}^{2}{a}^{-})+(\overline{\mu}{p}^{+}-\underset{\_}{\mu}{p}^{-}).$$On the other hand, since *φ* is an increasing convex function in *C*_{l.Lip} (ℝ^{2}), we can get that *u*(*t*, *x*, *y*) is convex in *x* and increasing in *y*. Thus (9) becomes
$$\{\begin{array}{l}\frac{\partial u}{\partial t}-\left(\frac{1}{2}{\overline{\sigma}}^{2}{(\frac{{\partial}^{2}u}{\partial {x}^{2}})}^{+}+\overline{\mu}{(\frac{\partial u}{\partial y})}^{+}\right)=0,\\ u{|}_{t=0}=\phi .\end{array}$$(10)Similarly we obtain $v(t,x,y):=\mathbb{E}[\phi (x+\sqrt{t}Y,y+t\xi )],\phantom{\rule{thinmathspace}{0ex}}(t,x,y)\in [0,+\mathrm{\infty})\times \mathbb{R}\times \mathbb{R},$ is the unique viscosity solution of the following *G*-equation
$$\{\begin{array}{l}\frac{\partial v}{\partial t}-\left(\frac{1}{2}{\overline{\rho}}^{2}{(\frac{{\partial}^{2}v}{\partial {x}^{2}})}^{+}+\overline{v}{(\frac{\partial v}{\partial y})}^{+}\right)=0,\\ v{|}_{t=0}=\phi .\end{array}$$(11)Since *μ̅* ≤ *v̅* and *σ̅*^{2} ≤ *ρ̅*^{2}, by the comparison theorem for the viscosity solutions of (10) and (11), setting (*t*, *x*, *y*) = (1, 0, 0), we have
$$\mathbb{E}[\phi (X,\eta )]\le \mathbb{E}[\phi (Y,\xi )].$$(12)Since -*φ* is a decreasing concave function, we can similarly obtain that $m(t,x,y):=\mathbb{E}[-\phi (x+\sqrt{t}X,y+t\eta )]$ and $n(t,x,y):=\mathbb{E}[-\phi (x+\sqrt{t}Y,y+t\xi )]$, (*t*, *x*, *y*) ∈ [0, ∞) × ℝ × ℝ, are the unique viscosity solutions of the following *G*-equations respectively
$$\begin{array}{lll}\left\{\begin{array}{l}\frac{\mathrm{\partial}m}{\mathrm{\partial}t}+\frac{1}{2}{\underset{\_}{\sigma}}^{2}(\frac{{\mathrm{\partial}}^{2}m}{\mathrm{\partial}{x}^{2}}{)}^{-}+\underset{\_}{\mu}(\frac{\mathrm{\partial}m}{\mathrm{\partial}y}{)}^{-}=0,\\ m{|}_{t=0}=-\phi .\end{array}\right.& \mathrm{a}\mathrm{n}\mathrm{d}& \left\{\begin{array}{l}\frac{\mathrm{\partial}n}{\mathrm{\partial}t}+\frac{1}{2}{\underset{\_}{\rho}}^{2}(\frac{{\mathrm{\partial}}^{2}n}{\mathrm{\partial}{x}^{2}}{)}^{-}+\underset{\_}{v}(\frac{\mathrm{\partial}n}{\mathrm{\partial}y}{)}^{-}=0,\\ n{|}_{t=0}=-\phi .\end{array}\right.\end{array}$$(13)Since $\underset{\_}{\mu}\le \underset{\_}{v}$ and ${\underset{\_}{\sigma}}^{2}\le {\underset{\_}{\rho}}^{2}$, by the comparison theorem for the viscosity solutions of (13), setting (*t*, *x*, *y*) = (1, 0, 0), we have
$$-\mathbb{E}[-\phi (X,\eta )]\le -\mathbb{E}[-\phi (Y,\xi )].$$(14)By combining (12) with (14), we obtain $(\eta ,X){\le}_{icon}^{\mathbb{E}}(\xi ,Y)$. The proof is complete. □
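The comparison-theorem step used throughout these proofs can be illustrated with a crude explicit finite-difference scheme for the reduced *G*-heat equations (4) and (5). This is only a numerical sketch: the grid sizes, time horizon, and the convex initial datum |*x*| are arbitrary choices, and the scheme is valid while the discrete solution stays convex. It exhibits *u*(*t*, ·) ≤ *v*(*t*, ·) when *σ̅*^{2} ≤ *ρ̅*^{2}.

```python
import numpy as np

def g_heat(phi_vals, sigma_up, dx, dt, steps):
    """Explicit finite differences for u_t = 0.5 * sigma_up^2 * (u_xx)^+,
    the reduced G-heat equation for convex data, with frozen boundary values."""
    u = phi_vals.copy()
    for _ in range(steps):
        uxx = (u[2:] - 2 * u[1:-1] + u[:-2]) / dx**2
        u[1:-1] += dt * 0.5 * sigma_up**2 * np.maximum(uxx, 0.0)
    return u

# Solutions driven by sigma_up = 1.0 (for X) and rho_up = 1.5 (for Y) from the
# same convex initial datum phi(x) = |x|; the comparison theorem predicts
# u(t, x) <= v(t, x) everywhere.
x = np.linspace(-10.0, 10.0, 401)
dx = x[1] - x[0]
dt = 0.2 * dx**2 / 1.5**2   # CFL-stable time step for the larger volatility
steps = int(1.0 / dt)       # integrate up to t ~ 1
phi = np.abs(x)

u = g_heat(phi, 1.0, dx, dt, steps)
v = g_heat(phi, 1.5, dx, dt, steps)
print(bool(np.all(u <= v + 1e-12)))  # True
```

In particular, the grid values at *x* = 0 and *t* ≈ 1 approximate 𝔼[φ(*X*)] and 𝔼[φ(*Y*)], which is exactly how inequality (6) is extracted from the PDE comparison.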

*–*

*$X{\le}_{mon}^{P}Y$ if and only if* *μ* ≤ *v* *and* *σ*^{2} = *ρ*^{2},

*–*

*$X{\le}_{con}^{P}Y$ if and only if* *μ* = *v* *and* *σ*^{2} ≤ *ρ*^{2},

*–*

*$X{\le}_{icon}^{P}Y$ if and only if* *μ* ≤ *v* *and* *σ*^{2} ≤ *ρ*^{2}.

*Hence, our results generalize the classical results*.
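The classical characterizations above can be sanity-checked in the same spirit: for normal distributions under a single probability measure P with μ ≤ v and σ² = ρ², every increasing test function should satisfy E[φ(X)] ≤ E[φ(Y)]. The following quadrature sketch uses illustrative parameters and test functions of our own choosing.

```python
import numpy as np

z = np.linspace(-8.0, 8.0, 100_001)
w = np.exp(-z**2 / 2) / np.sqrt(2 * np.pi)   # standard normal density
dz = z[1] - z[0]

def E_normal(phi, mu, sigma):
    """Classical E[phi(mu + sigma * Z)], Z ~ N(0, 1), by trapezoidal quadrature."""
    f = phi(mu + sigma * z) * w
    return float(np.sum((f[1:] + f[:-1]) * 0.5) * dz)

# X ~ N(0, 1) and Y ~ N(0.5, 1): mu <= v and sigma^2 = rho^2, so the classical
# characterization gives X <=_mon^P Y, i.e. every increasing phi respects the order.
increasing = [np.tanh, lambda x: x, lambda x: np.maximum(x, 0.0)]
ok = all(E_normal(phi, 0.0, 1.0) <= E_normal(phi, 0.5, 1.0) for phi in increasing)
print(ok)  # True
```

Note that under a linear expectation the two inequalities of condition (1) coincide, so a single check per test function suffices, which is precisely the sense in which the sublinear results generalize the classical ones.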