Statistics & Risk Modeling 38 (2021), no. 1-2, pp. 25–46. ISSN 2193-1402 / 2196-7040. De Gruyter Oldenbourg. DOI: 10.1515/strm-2020-0037

On the elicitability of range value at risk

Tobias Fissler (tobias.fissler@wu.ac.at, ORCID 0000-0002-6541-7347), Department of Finance, Accounting and Statistics, Institute for Statistics and Mathematics, Vienna University of Economics and Business (WU), Welthandelsplatz 1, 1020 Vienna, Austria

Johanna F. Ziegel (johanna.ziegel@stat.unibe.ch, ORCID 0000-0002-5916-9746), Department of Mathematics and Statistics, Institute of Mathematical Statistics and Actuarial Science, University of Bern, Alpeneggstrasse 22, 3012 Bern, Switzerland

Received December 3, 2020; revised June 24, 2021; accepted August 13, 2021.

© 2021 Walter de Gruyter GmbH, Berlin/Boston. This work is licensed under the Creative Commons Attribution 4.0 International License.

Abstract

The debate of which quantitative risk measure to choose in practice has mainly focused on the dichotomy between value at risk (VaR) and expected shortfall (ES). Range value at risk (RVaR) is a natural interpolation between VaR and ES, constituting a tradeoff between the sensitivity of ES and the robustness of VaR, turning it into a practically relevant risk measure on its own. Hence, there is a need to statistically assess, compare and rank the predictive performance of different RVaR models, tasks subsumed under the term “comparative backtesting” in finance. This is best done in terms of strictly consistent loss or scoring functions, i.e., functions which are minimized in expectation by the correct risk measure forecast. Much like ES, RVaR does not admit strictly consistent scoring functions, i.e., it is not elicitable. Mitigating this negative result, we show that a triplet of RVaR with two VaR-components is elicitable. We characterize all strictly consistent scoring functions for this triplet. Additional properties of these scoring functions are examined, including the diagnostic tool of Murphy diagrams. The results are illustrated with a simulation study, and we put our approach in perspective with respect to the classical approach of trimmed least squares regression.

Keywords: backtesting, consistency, expected shortfall, point forecasts, scoring functions, trimmed mean

MSC 2010: 62C99, 62G35, 62P05, 91G70

Funding: Tobias Fissler is grateful to the Department of Mathematics at Imperial College London, which funded his fellowship during which most of the work on this paper was done. Johanna Ziegel is grateful for financial support from the Swiss National Science Foundation.
Introduction

In the field of quantitative risk management, the last one or two decades have seen a lively debate about which monetary risk measure  would be best in (regulatory) practice. The debate mainly focused on the dichotomy between value at risk ( VaR β {\operatorname{VaR}_{\beta}} ) on the one hand and expected shortfall ( ES β {\operatorname{ES}_{\beta}} ) on the other hand, at some probability level β ( 0 , 1 ) {\beta\in(0,1)} (see Section 2 for definitions). Mirroring the historical joust between median and mean as centrality measures in classical statistics, VaR β {\operatorname{VaR}_{\beta}} , basically a quantile, is esteemed for its robustness, while ES β {\operatorname{ES}_{\beta}} , a tail expectation, is deemed attractive due to its sensitivity and the fact that it satisfies the axioms of a coherent risk measure . We refer the reader to [15, 17] for comprehensive academic discussions, and to  for a regulatory perspective in banking.

Cont, Deguest and Scandolo  considered the issue of statistical robustness of risk measure estimates in the sense of . They showed that a risk measure cannot be both robust and coherent. As a compromise, they propose the risk measure “range value at risk”, RVaR α , β {\operatorname{RVaR}_{\alpha,\beta}} at probability levels 0 < α < β < 1 {0<\alpha<\beta<1} . It is defined as the average of all VaR γ {\operatorname{VaR}_{\gamma}} with γ between α and β (see Section 2 for definitions). As limiting cases, one obtains RVaR β , β = VaR β {\operatorname{RVaR}_{\beta,\beta}=\operatorname{VaR}_{\beta}} and RVaR 0 , β = ES β {\operatorname{RVaR}_{0,\beta}=\operatorname{ES}_{\beta}} , which presents RVaR α , β {\operatorname{RVaR}_{\alpha,\beta}} as a natural interpolation of VaR β {\operatorname{VaR}_{\beta}} and ES β {\operatorname{ES}_{\beta}} . Quantifying its robustness in terms of the breakdown point and following the arguments provided in [33, p. 59], RVaR α , β {\operatorname{RVaR}_{\alpha,\beta}} has a breakdown point of min { α , 1 - β } {\min\{\alpha,1-\beta\}} , placing it between the very robust VaR β {\operatorname{VaR}_{\beta}} (with a breakdown point of min { β , 1 - β } {\min\{\beta,1-\beta\}} ) and the entirely non-robust ES β {\operatorname{ES}_{\beta}} (breakdown point 0). This means it is a robust – and hence not coherent – risk measure, unless it degenerates to RVaR 0 , β = ES β {\operatorname{RVaR}_{0,\beta}=\operatorname{ES}_{\beta}} (or if 0 α < β = 1 {0\leq\alpha<\beta=1} ). Moreover, RVaR {\operatorname{RVaR}} belongs to the wide class of distortion risk measures [55, 52]. For further contributions to robustness in the context of risk measures, we refer the reader to [37, 38, 36, 16, 56]. Since the influential article , RVaR has gained increasing attention in the risk management literature – see [13, 14] for extensive studies – as well as in econometrics  where RVaR sometimes has the alternative denomination interquantile expectation. 
For the symmetric case β = 1 - α > 1 2 {\beta=1-\alpha>\frac{1}{2}} , RVaR α , 1 - α {\operatorname{RVaR}_{\alpha,1-\alpha}} is known under the term α-trimmed mean in classical statistics and it constitutes an alternative to and interpolation of the mean and the median as centrality measures; see  for a recent study and a multivariate extension of the trimmed mean. It is closely connected to the α-Winsorized mean; see (2.3).
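The relation between RVaR and the trimmed mean is easy to check numerically. The following sketch (our illustration, not from the paper; sample size, level and grid size are arbitrary choices) approximates RVaR of an empirical distribution by averaging empirical quantiles over a fine grid of levels, as in the definition of RVaR as an integral of VaR, and compares the result with the α-trimmed mean computed directly:

```python
import math, random, statistics

def empirical_var(xs_sorted, g):
    # lower quantile VaR_g of the empirical distribution:
    # the ceil(n*g)-th order statistic
    return xs_sorted[math.ceil(len(xs_sorted) * g) - 1]

def rvar(sample, alpha, beta, grid=200_000):
    # RVaR_{alpha,beta}: average of VaR_gamma over gamma in (alpha, beta),
    # approximated here with a midpoint rule on a fine grid of levels
    xs = sorted(sample)
    return statistics.fmean(
        empirical_var(xs, alpha + (beta - alpha) * (i + 0.5) / grid)
        for i in range(grid))

random.seed(1)
y = [random.gauss(0, 1) for _ in range(10_000)]
a = 0.1
k = int(a * len(y))
trimmed = statistics.fmean(sorted(y)[k:len(y) - k])   # 10%-trimmed mean
print(rvar(y, a, 1 - a), trimmed)   # the two numbers should nearly agree
```

For an empirical distribution with nα an integer, the integral of the quantile function over (α, 1−α) equals the average of the middle order statistics exactly, so the only discrepancy comes from the midpoint discretization.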

How is it possible to evaluate the predictive performance of point forecasts X t {X_{t}} for a statistical functional T, such as the mean, median or a risk measure, of the (conditional) distribution of a quantity of interest Y t {Y_{t}} ? It is commonly measured in terms of the average realized score 1 n t = 1 n S ( X t , Y t ) {\frac{1}{n}\sum_{t=1}^{n}S(X_{t},Y_{t})} for some loss or scoring function S, using the orientation the smaller the better. Consequently, the loss function S should be strictly consistent for T in that T ( F ) = arg min x S ( x , y ) d F ( y ) {T(F)=\operatornamewithlimits{arg\,min}_{x}\int S(x,y)\,\mathrm{d}F(y)} : correct predictions are honored and encouraged in the long run, e.g., the squared loss S ( x , y ) = ( x - y ) 2 {S(x,y)=(x-y)^{2}} is consistent for the mean, and the absolute loss S ( x , y ) = | x - y | {S(x,y)=\lvert x-y\rvert} is consistent for the median. If a functional admits a strictly consistent score, it is called elicitable [44, 39, 27]. By definition, elicitable functionals allow for M-estimation and have natural estimation paradigms in regression frameworks [11, Section 2] such as quantile regression [35, 34] or expectile regression . Elicitability is crucial for meaningful forecast evaluation [18, 41, 27]. In the context of probabilistic forecasts with distributional forecasts F t {F_{t}} or density forecasts f t {f_{t}} , (strictly) consistent scoring functions are often referred to as (strictly) proper rules such as the log-score S ( f , y ) = - log f ( y ) {S(f,y)=-\log f(y)} (see ). In quantitative finance, and particularly in the debate about which risk measure is best in practice, elicitability has gained considerable attention [17, 57, 9]. Especially, the role of elicitability for backtesting purposes has been highly debated [27, 1, 2]. It has been clarified that elicitability is central for comparative backtesting [24, 43]. 
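The role of strict consistency can be illustrated with the two standard examples just mentioned. In the sketch below (ours; the distribution, sample size and candidate grid are arbitrary choices), minimizing the average realized score over a grid of candidate reports for a skewed sample recovers the sample mean under the squared loss and the sample median under the absolute loss:

```python
import random, statistics

random.seed(7)
# skewed sample, so that mean (~1.0) and median (~0.69) differ clearly
y = [random.expovariate(1.0) for _ in range(20_000)]

def avg_score(score, x):
    # average realized score (1/n) * sum_t S(x, y_t)
    return statistics.fmean(score(x, yi) for yi in y)

squared = lambda x, yi: (x - yi) ** 2     # strictly consistent for the mean
absolute = lambda x, yi: abs(x - yi)      # consistent for the median

grid = [i * 0.05 for i in range(0, 61)]   # candidate reports in [0, 3]
best_sq = min(grid, key=lambda x: avg_score(squared, x))
best_ab = min(grid, key=lambda x: avg_score(absolute, x))
print(best_sq, best_ab)   # near the sample mean and sample median, resp.
```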
On the other hand, if one strives to validate forecasts, (strict) identification functions are crucial. Much like scoring functions, they are functions in the forecast and the observation, which, however, vanish in expectation at (and only at) the correct report. Thus, they can be used to check (conditional) calibration [26, 43].

Not all functionals are elicitable or identifiable. Osband  showed that an elicitable or identifiable functional necessarily has convex level sets (CxLS): If T ( F 0 ) = T ( F 1 ) = t {T(F_{0})=T(F_{1})=t} for two distributions F 0 , F 1 {F_{0},F_{1}} , then T ( F λ ) = t {T(F_{\lambda})=t} where F λ = ( 1 - λ ) F 0 + λ F 1 {F_{\lambda}=(1-\lambda)F_{0}+\lambda F_{1}} , λ ( 0 , 1 ) {\lambda\in(0,1)} . Variance and ES generally do not have CxLS [53, 27], therefore failing to be elicitable and identifiable. The revelation principle [44, 27, 19] asserts that any bijection of an elicitable/identifiable functional is elicitable/identifiable. This implies that the pair (mean, variance) – being a bijection of the first two moments – is elicitable and identifiable despite the variance failing to be so. Similarly, Fissler and Ziegel  showed that the pair ( VaR β , ES β ) {(\operatorname{VaR}_{\beta},\operatorname{ES}_{\beta})} is elicitable and identifiable, with the structural difference that the revelation principle is not applicable in this instance. This is followed by the more general finding that the minimal expected score and its minimizer are always jointly elicitable [6, 25].

Recently, Wang and Wei [51, Theorem 5.3] showed that RVaR α , β {\operatorname{RVaR}_{\alpha,\beta}} , 0 < α < β < 1 {0<\alpha<\beta<1} , similarly to ES α {\operatorname{ES}_{\alpha}} , fails to have CxLS as a standalone measure, which rules out its elicitability and identifiability. In contrast, they observe that the identity

\operatorname{RVaR}_{\alpha,\beta}=\frac{\beta\operatorname{ES}_{\beta}-\alpha\operatorname{ES}_{\alpha}}{\beta-\alpha},\quad 0<\alpha<\beta<1,\tag{1.1}

which holds if ES α {\operatorname{ES}_{\alpha}} and ES β {\operatorname{ES}_{\beta}} are finite, and the CxLS property of the pairs ( VaR α , ES α ) {(\operatorname{VaR}_{\alpha},\operatorname{ES}_{\alpha})} , ( VaR β , ES β ) {(\operatorname{VaR}_{\beta},\operatorname{ES}_{\beta})} implies the CxLS property of the triplet ( VaR α , VaR β , RVaR α , β ) {(\operatorname{VaR}_{\alpha},\operatorname{VaR}_{\beta},\operatorname{RVaR}_{% \alpha,\beta})} (see [51, Example 4.6]). This raises the question whether this triplet is elicitable and identifiable or not. By invoking the elicitability and identifiability of ( VaR α , ES α ) {(\operatorname{VaR}_{\alpha},\operatorname{ES}_{\alpha})} , identity (1.1) and the revelation principle establish the elicitability and identifiability of the quadruples ( VaR α , VaR β , ES α , RVaR α , β ) {(\operatorname{VaR}_{\alpha},\operatorname{VaR}_{\beta},\operatorname{ES}_{% \alpha},\operatorname{RVaR}_{\alpha,\beta})} and ( VaR α , VaR β , ES β , RVaR α , β ) {(\operatorname{VaR}_{\alpha},\operatorname{VaR}_{\beta},\operatorname{ES}_{% \beta},\operatorname{RVaR}_{\alpha,\beta})} . This approach has already been used in the context of regression in .
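Identity (1.1) can be verified directly on an empirical distribution. In the sketch below (ours; all parameter choices are illustrative), ES and RVaR are computed as averages of order statistics, recalling that in this paper's sign convention ES averages the lower tail; for nα and nβ integers, (1.1) then holds exactly up to floating-point error:

```python
import random, statistics

random.seed(42)
n = 10_000
xs = sorted(random.gauss(0, 1) for _ in range(n))

alpha, beta = 0.05, 0.25
ka, kb = int(n * alpha), int(n * beta)

# ES_gamma = RVaR_{0,gamma}: average of the lowest n*gamma order statistics
es_a = statistics.fmean(xs[:ka])
es_b = statistics.fmean(xs[:kb])

# RVaR_{alpha,beta}: average of the order statistics between the two quantiles
lhs = statistics.fmean(xs[ka:kb])

# right-hand side of identity (1.1)
rhs = (beta * es_b - alpha * es_a) / (beta - alpha)
print(lhs, rhs)   # identical up to floating-point rounding
```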

Improving this result, we show that the triplet ( VaR α , VaR β , RVaR α , β ) {(\operatorname{VaR}_{\alpha},\operatorname{VaR}_{\beta},\operatorname{RVaR}_{% \alpha,\beta})} is elicitable (Theorem 3.3) and identifiable (Proposition 3.1) under weak regularity conditions. Practically, our results open the way to model validation, to meaningful forecast performance comparison, and in particular to comparative backtests, of this triplet, as well as to a regression framework. Theoretically, they show that the elicitation complexity [39, 25] or elicitation order  of RVaR α , β {\operatorname{RVaR}_{\alpha,\beta}} is at most 3. Moreover, requiring only VaR-forecasts besides the RVaR-forecast is particularly advantageous in comparison to additionally requiring ES-forecasts since the triplet ( VaR α ( F ) , VaR β ( F ) , RVaR α , β ( F ) ) {(\operatorname{VaR}_{\alpha}(F),\operatorname{VaR}_{\beta}(F),\operatorname{% RVaR}_{\alpha,\beta}(F))} , 0 < α < β < 1 {0<\alpha<\beta<1} , exists and is finite for any distribution F, whereas ES α ( F ) {\operatorname{ES}_{\alpha}(F)} and ES β ( F ) {\operatorname{ES}_{\beta}(F)} are only finite if the (left) tail of the gains-and-loss distribution F is integrable. As RVaR α , β {\operatorname{RVaR}_{\alpha,\beta}} is used often for robustness purposes, safeguarding against outliers and heavy-tailedness, this advantage is important.

We would like to point out the structural difference between the elicitability result of

(\operatorname{VaR}_{\alpha},\operatorname{VaR}_{\beta},\operatorname{RVaR}_{\alpha,\beta})

provided in this paper and the one concerning ( VaR α , ES α ) {(\operatorname{VaR}_{\alpha},\operatorname{ES}_{\alpha})} in  as well as the more general results of [25, 6]. While ES α {\operatorname{ES}_{\alpha}} corresponds to the negative of a minimum of an expected score which is strictly consistent for VaR α {\operatorname{VaR}_{\alpha}} , it turns out that RVaR α , β {\operatorname{RVaR}_{\alpha,\beta}} can be represented as the negative of a scaled difference of minima of expected strictly consistent scoring functions for VaR α {\operatorname{VaR}_{\alpha}} and VaR β {\operatorname{VaR}_{\beta}} ; see equations (3.1) and (3.2). As a consequence, the class of strictly consistent scoring functions for the triplet ( VaR α , VaR β , RVaR α , β ) {(\operatorname{VaR}_{\alpha},\operatorname{VaR}_{\beta},\operatorname{RVaR}_{% \alpha,\beta})} turns out to be less flexible than the one for ( VaR α , ES α ) {(\operatorname{VaR}_{\alpha},\operatorname{ES}_{\alpha})} ; see Remark 3.9 for details. In particular, there is essentially no translation invariant or positively homogeneous scoring function which is strictly consistent for ( VaR α , VaR β , RVaR α , β ) {(\operatorname{VaR}_{\alpha},\operatorname{VaR}_{\beta},\operatorname{RVaR}_{% \alpha,\beta})} ; see Section 4.

The paper is organized as follows. In Section 2, we introduce the relevant notation and definitions concerning RVaR, scoring functions and elicitability. The main results are presented in Section 3, establishing the elicitability of the triplet ( VaR α , VaR β , RVaR α , β ) {(\operatorname{VaR}_{\alpha},\operatorname{VaR}_{\beta},\operatorname{RVaR}_{\alpha,\beta})} (Theorem 3.3) and characterizing the class of strictly consistent scoring functions (Theorem 3.7), exploiting the identifiability result of Proposition 3.1. Section 4 shows that there are essentially no strictly consistent scoring functions for ( VaR α , VaR β , RVaR α , β ) {(\operatorname{VaR}_{\alpha},\operatorname{VaR}_{\beta},\operatorname{RVaR}_{\alpha,\beta})} which are positively homogeneous or translation invariant. In Section 5, we establish a mixture representation of the strictly consistent scoring functions in the spirit of . This result allows us to compare forecasts simultaneously with respect to all consistent scoring functions in terms of Murphy diagrams. We demonstrate the applicability of our results and compare the discrimination ability of different scoring functions in a simulation study presented in Section 6. The paper concludes in Section 7 with a discussion of our results in the context of M-estimation and a comparison with other suggestions from the statistical literature, namely variants of a trimmed least squares procedure [35, 49, 47].

Notation and definitions Definition of range value at risk

There are different sign conventions in the literature on risk measures. In this paper, we use the following convention: if a random variable Y models the gains and losses, then positive values of Y represent gains and negative values of Y losses. Moreover, if ρ is a risk measure, we assume that ρ ( Y ) {\rho(Y)\in\mathbb{R}} corresponds to the maximal amount of money one can withdraw such that the position Y - ρ ( Y ) {Y-\rho(Y)} is still acceptable. Hence, negative values of ρ correspond to risky positions. In the sequel, let 0 {\mathcal{F}_{0}} be the class of probability distribution functions on {\mathbb{R}} . Recall that the α-quantile, α [ 0 , 1 ] {\alpha\in[0,1]} , of F 0 {F\in\mathcal{F}_{0}} is defined as the set q α ( F ) = { x F ( x - ) α F ( x ) } {q_{\alpha}(F)=\{x\in\mathbb{R}\mid F(x-)\leq\alpha\leq F(x)\}} , where F ( x - ) := lim t x F ( t ) {F(x-):=\lim_{t\uparrow x}F(t)} .

Definition 2.1.

Value at risk of F 0 {F\in\mathcal{F}_{0}} at level α [ 0 , 1 ] {\alpha\in[0,1]} is defined by VaR α ( F ) = inf q α ( F ) {\operatorname{VaR}_{\alpha}(F)=\inf q_{\alpha}(F)} .

For any α [ 0 , 1 ] {\alpha\in[0,1]} we introduce the following subclasses of 0 {\mathcal{F}_{0}} :

\mathcal{F}^{\alpha}=\bigl\{F\in\mathcal{F}_{0}\mid q_{\alpha}(F)=\{\operatorname{VaR}_{\alpha}(F)\}\bigr\},\quad\mathcal{F}^{(\alpha)}=\bigl\{F\in\mathcal{F}_{0}\mid F(\operatorname{VaR}_{\alpha}(F))=\alpha\bigr\}.

Distributions F ( α ) {F\in\mathcal{F}^{(\alpha)}} have at least one solution to the equation F ( x ) = α {F(x)=\alpha} ; distributions F α {F\in\mathcal{F}^{\alpha}} have at most one solution to the equation F ( x ) = α {F(x)=\alpha} .

Definition 2.2.

Range value at risk of F 0 {F\in\mathcal{F}_{0}} at levels 0 α β 1 {0\leq\alpha\leq\beta\leq 1} is defined by

\operatorname{RVaR}_{\alpha,\beta}(F)=\begin{dcases}\frac{1}{\beta-\alpha}\int_{\alpha}^{\beta}\operatorname{VaR}_{\gamma}(F)\,\mathrm{d}\gamma&\text{if }\alpha<\beta,\\ \operatorname{VaR}_{\beta}(F)&\text{if }\alpha=\beta.\end{dcases}

Note that lim α β RVaR α , β ( F ) = VaR β ( F ) = RVaR β , β ( F ) {\lim_{\alpha\uparrow\beta}\operatorname{RVaR}_{\alpha,\beta}(F)=\operatorname% {VaR}_{\beta}(F)=\operatorname{RVaR}_{\beta,\beta}(F)} . The definition of RVaR and the fact that γ VaR γ ( F ) {\gamma\mapsto\operatorname{VaR}_{\gamma}(F)} is increasing imply that

\operatorname{VaR}_{\alpha}(F)\leq\operatorname{RVaR}_{\alpha,\beta}(F)\leq\operatorname{VaR}_{\beta}(F).

For 0 < α β < 1 {0<\alpha\leq\beta<1} and F 0 {F\in\mathcal{F}_{0}} one obtains that (i) RVaR α , β ( F ) {\operatorname{RVaR}_{\alpha,\beta}(F)\in\mathbb{R}} ; (ii) RVaR 0 , β ( F ) { - } {\operatorname{RVaR}_{0,\beta}(F)\in\mathbb{R}\cup\{-\infty\}} and it is finite if and only if - 0 | y | d F ( y ) < {\int_{-\infty}^{0}\lvert y\rvert\,\mathrm{d}F(y)<\infty} ; and (iii) RVaR α , 1 ( F ) { } {\operatorname{RVaR}_{\alpha,1}(F)\in\mathbb{R}\cup\{\infty\}} and it is finite if and only if 0 | y | d F ( y ) < {\int_{0}^{\infty}\lvert y\rvert\,\mathrm{d}F(y)<\infty} . Moreover, RVaR 0 , 1 ( F ) {\operatorname{RVaR}_{0,1}(F)} exists if and only if

\int_{-\infty}^{0}\lvert y\rvert\,\mathrm{d}F(y)<\infty\quad\text{or}\quad\int_{0}^{\infty}\lvert y\rvert\,\mathrm{d}F(y)<\infty

and then coincides with y d F ( y ) { ± } {\int y\,\mathrm{d}F(y)\in\mathbb{R}\cup\{\pm\infty\}} . For α < β {\alpha<\beta} and provided that RVaR α , β ( F ) {\operatorname{RVaR}_{\alpha,\beta}(F)} exists, it holds that

\begin{split}\operatorname{RVaR}_{\alpha,\beta}(F)=\frac{1}{\beta-\alpha}\biggl(&\int_{(\operatorname{VaR}_{\alpha}(F),\operatorname{VaR}_{\beta}(F)]}y\,\mathrm{d}F(y)\\ &+\operatorname{VaR}_{\alpha}(F)\bigl(F(\operatorname{VaR}_{\alpha}(F))-\alpha\bigr)-\operatorname{VaR}_{\beta}(F)\bigl(F(\operatorname{VaR}_{\beta}(F))-\beta\bigr)\biggr)\end{split}\tag{2.2}

using the usual conventions F ( - ) = 0 {F(-\infty)=0} , F ( ) = 1 {F(\infty)=1} and 0 = 0 ( - ) = 0 {0\cdot\infty=0\cdot(-\infty)=0} . If F ( α ) ( β ) {F\in\mathcal{F}^{(\alpha)}\cap\mathcal{F}^{(\beta)}} , then the correction terms in the second line of (2.2) vanish, yielding

\operatorname{RVaR}_{\alpha,\beta}(F)=\frac{\mathbb{E}_{F}(Y\mathbb{1}\{\operatorname{VaR}_{\alpha}(F)<Y\leq\operatorname{VaR}_{\beta}(F)\})}{\beta-\alpha},

which justifies an alternative name for RVaR, namely interquantile expectation.
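As a sanity check of this interquantile-expectation representation, the following sketch (ours, not from the paper; distribution and levels are illustrative) compares a Monte Carlo evaluation of the right-hand side for a standard normal distribution with the closed form RVaR_{α,β} = (φ(Φ⁻¹(α)) − φ(Φ⁻¹(β)))/(β−α), which follows from Definition 2.2 by the substitution γ = Φ(z):

```python
import random, statistics

nd = statistics.NormalDist()
alpha, beta = 0.05, 0.25

# closed form for N(0,1): RVaR = (pdf(z_alpha) - pdf(z_beta)) / (beta - alpha)
z_a, z_b = nd.inv_cdf(alpha), nd.inv_cdf(beta)
closed = (nd.pdf(z_a) - nd.pdf(z_b)) / (beta - alpha)

# Monte Carlo of E[Y * 1{VaR_alpha < Y <= VaR_beta}] / (beta - alpha)
random.seed(0)
y = [random.gauss(0, 1) for _ in range(200_000)]
mc = statistics.fmean(yi if z_a < yi <= z_b else 0.0 for yi in y) / (beta - alpha)
print(mc, closed)   # both approximately -1.07
```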

Definition 2.3.

Expected shortfall of F 0 {F\in\mathcal{F}_{0}} at level α ( 0 , 1 ) {\alpha\in(0,1)} is defined by

\operatorname{ES}_{\alpha}(F)=\operatorname{RVaR}_{0,\alpha}(F)\in\mathbb{R}\cup\{-\infty\}.

Hence, provided that ES α ( F ) {\operatorname{ES}_{\alpha}(F)} and ES β ( F ) {\operatorname{ES}_{\beta}(F)} are finite, one obtains identity (1.1). If F has a finite left tail ( - 0 | y | d F ( y ) < {\int_{-\infty}^{0}\lvert y\rvert\,\mathrm{d}F(y)<\infty} ), then one could use the right-hand side of (1.1) as a definition of RVaR α , β ( F ) {\operatorname{RVaR}_{\alpha,\beta}(F)} . However, in line with our discussion in the introduction, RVaR α , β ( F ) {\operatorname{RVaR}_{\alpha,\beta}(F)} always exists and is finite for 0 < α < β < 1 {0<\alpha<\beta<1} even if the right-hand side of (1.1) is not defined.

Interestingly, [14, Theorem 2] establish that RVaR {\operatorname{RVaR}} can be written as an inf-convolution of VaR {\operatorname{VaR}} and ES {\operatorname{ES}} at appropriate levels. This result amounts to a sup-convolution in our sign convention. Also note that our parametrization of RVaR α , β {\operatorname{RVaR}_{\alpha,\beta}} differs from theirs.

Now, for α ( 0 , 1 2 ) {\alpha\in(0,\frac{1}{2})} , RVaR α , 1 - α {\operatorname{RVaR}_{\alpha,1-\alpha}} corresponds to the α-trimmed mean and has a close connection to the α-Winsorized mean W α {W_{\alpha}} (see [33, pp. 57–59]) via

W_{\alpha}(F):=(1-2\alpha)\operatorname{RVaR}_{\alpha,1-\alpha}(F)+\alpha\operatorname{VaR}_{\alpha}(F)+\alpha\operatorname{VaR}_{1-\alpha}(F),\quad\alpha\in\Bigl(0,\frac{1}{2}\Bigr).\tag{2.3}
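This identity can be checked numerically, e.g., for the standard exponential distribution, where the quantile function and the RVaR integral are available in closed form; the Winsorized mean itself is obtained by clamping the observations to the interval between the two quantiles before averaging. The sketch below is our illustration; the distribution and the level a = 0.1 are arbitrary choices:

```python
import math, random, statistics

a = 0.1
q = lambda p: -math.log(1 - p)          # Exp(1) quantile function
q_lo, q_hi = q(a), q(1 - a)

# RVaR_{a,1-a} for Exp(1): G is an antiderivative of -log(1 - g)
G = lambda g: (1 - g) * math.log(1 - g) + g
rvar = (G(1 - a) - G(a)) / (1 - 2 * a)

# right-hand side of the Winsorized-mean identity
w_identity = (1 - 2 * a) * rvar + a * q_lo + a * q_hi

# direct Monte Carlo: Winsorizing clamps Y to [q_lo, q_hi] before averaging
random.seed(3)
w_mc = statistics.fmean(min(max(random.expovariate(1.0), q_lo), q_hi)
                        for _ in range(300_000))
print(w_identity, w_mc)   # both approximately 0.905
```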

Elicitability and scoring functions

Using the decision-theoretic framework of [21, 27], we introduce the following notation. Let 0 {\mathcal{F}\subseteq\mathcal{F}_{0}} be some generic subclass and let 𝖠 k {\mathsf{A}\subseteq\mathbb{R}^{k}} be an action domain. Whenever we consider a functional T : 𝖠 {T\colon\mathcal{F}\to\mathsf{A}} , we tacitly assume that T ( F ) {T(F)} is well-defined for all F {F\in\mathcal{F}} and is an element of 𝖠 {\mathsf{A}} . Then T ( ) {T(\mathcal{F})} corresponds to the image { T ( F ) 𝖠 F } {\{T(F)\in\mathsf{A}\mid F\in\mathcal{F}\}} . For any subset M k {M\subseteq\mathbb{R}^{k}} we denote with int ( M ) {\operatorname{int}(M)} the largest open subset of M. Moreover, conv ( M ) {\operatorname{conv}(M)} denotes the convex hull of the set M.

We say that a function a : {a\colon\mathbb{R}\to\mathbb{R}} is {\mathcal{F}} -integrable if it is measurable and | a ( y ) | d F ( y ) < {\int\lvert a(y)\rvert\,\mathrm{d}F(y)<\infty} for all F {F\in\mathcal{F}} . Similarly, a function g : 𝖠 × {g\colon\mathsf{A}\times\mathbb{R}\to\mathbb{R}} is called {\mathcal{F}} -integrable if g ( x , ) : {g(x,\cdot\,)\colon\mathbb{R}\to\mathbb{R}} is {\mathcal{F}} -integrable for all x 𝖠 {x\in\mathsf{A}} . If g is {\mathcal{F}} -integrable, we define the map

\bar{g}\colon\mathsf{A}\times\mathcal{F}\to\mathbb{R},\quad\bar{g}(x,F):=\int g(x,y)\,\mathrm{d}F(y).

If g : 𝖠 × {g\colon\mathsf{A}\times\mathbb{R}\to\mathbb{R}} is sufficiently smooth in its first argument, we denote the m-th partial derivative of g ( , y ) {g(\,\cdot\,,y)} by m g ( , y ) {\partial_{m}g(\,\cdot\,,y)} .

Definition 2.4.

A map S : 𝖠 × {S\colon\mathsf{A}\times\mathbb{R}\to\mathbb{R}} is an {\mathcal{F}} -consistent scoring function for T : 𝖠 {T\colon\mathcal{F}\to\mathsf{A}} if it is {\mathcal{F}} -integrable and if S ¯ ( T ( F ) , F ) S ¯ ( x , F ) {\bar{S}(T(F),F)\leq\bar{S}(x,F)} for all x 𝖠 {x\in\mathsf{A}} and F {F\in\mathcal{F}} . It is strictly {\mathcal{F}} -consistent for T if it is consistent and if S ¯ ( T ( F ) , F ) = S ¯ ( x , F ) {\bar{S}(T(F),F)=\bar{S}(x,F)} implies that x = T ( F ) {x=T(F)} for all x 𝖠 {x\in\mathsf{A}} and for all F {F\in\mathcal{F}} . A functional T : 𝖠 {T\colon\mathcal{F}\to\mathsf{A}} is elicitable on {\mathcal{F}} if it possesses a strictly {\mathcal{F}} -consistent scoring function.

Definition 2.5.

Two scoring functions S , S ~ : 𝖠 × {S,\widetilde{S}\colon\mathsf{A}\times\mathbb{R}\to\mathbb{R}} are equivalent if there is some a : {a\colon\mathbb{R}\to\mathbb{R}} and some λ > 0 {\lambda>0} such that S ~ ( x , y ) = λ S ( x , y ) + a ( y ) {\widetilde{S}(x,y)=\lambda S(x,y)+a(y)} for all ( x , y ) 𝖠 × {(x,y)\in\mathsf{A}\times\mathbb{R}} . They are proportional if they are equivalent with a 0 {a\equiv 0} .

This equivalence relation preserves (strict) consistency: If S is (strictly) {\mathcal{F}} -consistent for T and if a is {\mathcal{F}} -integrable, then S ~ {\widetilde{S}} is also (strictly) {\mathcal{F}} -consistent for T. Closely related to the concept of elicitability is the notion of identifiability.

Definition 2.6.

A map V : 𝖠 × k {V\colon\mathsf{A}\times\mathbb{R}\to\mathbb{R}^{k}} is an {\mathcal{F}} -identification function for T : 𝖠 {T\colon\mathcal{F}\to\mathsf{A}} if it is {\mathcal{F}} -integrable and if V ¯ ( T ( F ) , F ) = 0 {\bar{V}(T(F),F)=0} for all F {F\in\mathcal{F}} . It is a strict {\mathcal{F}} -identification function for T if additionally V ¯ ( x , F ) = 0 {\bar{V}(x,F)=0} implies that x = T ( F ) {x=T(F)} for all x 𝖠 {x\in\mathsf{A}} and for all F {F\in\mathcal{F}} . A functional T : 𝖠 {T\colon\mathcal{F}\to\mathsf{A}} is identifiable on {\mathcal{F}} if it possesses a strict {\mathcal{F}} -identification function.

In contrast to , we consider point-valued functionals only. For a recent comprehensive study on elicitability of set-valued functionals, we refer to .

Elicitability and identifiability results

Wang and Wei [51, Theorem 5.3] showed that for 0 < α < β < 1 {0<\alpha<\beta<1} , RVaR α , β {\operatorname{RVaR}_{\alpha,\beta}} (and also the pairs ( VaR α , RVaR α , β ) {(\operatorname{VaR}_{\alpha},\operatorname{RVaR}_{\alpha,\beta})} and ( VaR β , RVaR α , β ) {(\operatorname{VaR}_{\beta},\operatorname{RVaR}_{\alpha,\beta})} ) do not have CxLS on dis {\mathcal{F}_{\mathrm{dis}}} , the class of distributions with bounded and discrete support. Hence, by invoking that CxLS are necessary both for elicitability and for identifiability, RVaR α , β {\operatorname{RVaR}_{\alpha,\beta}} and the pairs ( VaR α , RVaR α , β ) {(\operatorname{VaR}_{\alpha},\operatorname{RVaR}_{\alpha,\beta})} and ( VaR β , RVaR α , β ) {(\operatorname{VaR}_{\beta},\operatorname{RVaR}_{\alpha,\beta})} are neither elicitable nor identifiable on dis {\mathcal{F}_{\mathrm{dis}}} . Our novel contribution is that the triplet ( VaR α , VaR β , RVaR α , β ) {(\operatorname{VaR}_{\alpha},\operatorname{VaR}_{\beta},\operatorname{RVaR}_{% \alpha,\beta})} , however, is elicitable and identifiable, subject to mild conditions. We use the notation S α ( x , y ) = ( 𝟙 { y x } - α ) x - 𝟙 { y x } y {S_{\alpha}(x,y)=(\mathbb{1}\{y\leq x\}-\alpha)x-\mathbb{1}\{y\leq x\}y} and recall that S α {S_{\alpha}} is {\mathcal{F}} -consistent for VaR α {\operatorname{VaR}_{\alpha}} if - 0 | y | d F ( y ) < {\int_{-\infty}^{0}\lvert y\rvert\,\mathrm{d}F(y)<\infty} for all F {F\in\mathcal{F}} , and strictly {\mathcal{F}} -consistent if furthermore α {\mathcal{F}\subseteq\mathcal{F}^{\alpha}} (see ).

Proposition 3.1.

For 0 < α < β < 1 {0<\alpha<\beta<1} , the map V : R 3 × R R 3 {V\colon\mathbb{R}^{3}\times\mathbb{R}\to\mathbb{R}^{3}} defined by

V(x_{1},x_{2},x_{3},y)=\begin{pmatrix}\mathbb{1}\{y\leq x_{1}\}-\alpha\\ \mathbb{1}\{y\leq x_{2}\}-\beta\\ x_{3}+\frac{1}{\beta-\alpha}(S_{\beta}(x_{2},y)-S_{\alpha}(x_{1},y))\end{pmatrix}\tag{3.1}

is an F ( α ) F ( β ) {\mathcal{F}^{(\alpha)}\cap\mathcal{F}^{(\beta)}} -identification function for ( VaR α , VaR β , RVaR α , β ) {(\operatorname{VaR}_{\alpha},\operatorname{VaR}_{\beta},\operatorname{RVaR}_{% \alpha,\beta})} , which is strict on F α F ( α ) F β F ( β ) {\mathcal{F}^{\alpha}\cap\mathcal{F}^{(\alpha)}\cap\mathcal{F}^{\beta}\cap% \mathcal{F}^{(\beta)}} .

Proof.

The proof is standard, observing that

\bar{V}_{3}(\operatorname{VaR}_{\alpha}(F),\operatorname{VaR}_{\beta}(F),x_{3},F)=x_{3}-\operatorname{RVaR}_{\alpha,\beta}(F),

which follows from the representation (2.2). ∎
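A quick numerical check of Proposition 3.1 (our sketch, not part of the paper; the distribution and levels are illustrative): evaluating V at the true triplet for a standard normal distribution, all three components vanish on average. The closed form for the normal RVaR used below is a standard computation, not taken from the text:

```python
import random, statistics

nd = statistics.NormalDist()
alpha, beta = 0.05, 0.25

def S(level, x, y):
    # quantile loss S_level(x, y) = (1{y<=x} - level)*x - 1{y<=x}*y
    ind = 1.0 if y <= x else 0.0
    return (ind - level) * x - ind * y

def V(x1, x2, x3, y):
    # identification function of Proposition 3.1
    return ((1.0 if y <= x1 else 0.0) - alpha,
            (1.0 if y <= x2 else 0.0) - beta,
            x3 + (S(beta, x2, y) - S(alpha, x1, y)) / (beta - alpha))

# true report for N(0,1); t3 is the closed-form RVaR_{alpha,beta}
t1, t2 = nd.inv_cdf(alpha), nd.inv_cdf(beta)
t3 = (nd.pdf(t1) - nd.pdf(t2)) / (beta - alpha)

random.seed(11)
ys = [random.gauss(0, 1) for _ in range(200_000)]
means = [statistics.fmean(V(t1, t2, t3, y)[i] for y in ys) for i in range(3)]
print(means)   # all three sample means close to zero
```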

Remark 3.2.

The benefits of the identifiability result of Proposition 3.1 are two-fold. First, it facilitates (conditional) calibration backtests in the spirit of . There, the null hypothesis is that a sequence of forecasts ( X 1 , t , X 2 , t , X 3 , t ) {(X_{1,t},X_{2,t},X_{3,t})} , measurable with respect to the most recent information 𝒜 t - 1 {\mathcal{A}_{t-1}} , is correctly specified in the sense that

(X_{1,t},X_{2,t},X_{3,t})=\bigl(\operatorname{VaR}_{\alpha}(Y_{t}\mid\mathcal{A}_{t-1}),\operatorname{VaR}_{\beta}(Y_{t}\mid\mathcal{A}_{t-1}),\operatorname{RVaR}_{\alpha,\beta}(Y_{t}\mid\mathcal{A}_{t-1})\bigr).

By exploiting the strict identification property of V in (3.1), this null hypothesis corresponds to

\mathbb{E}\bigl(V(X_{1,t},X_{2,t},X_{3,t},Y_{t})\mid\mathcal{A}_{t-1}\bigr)=0.

Clearly, such a conditional backtest can be conducted using any strict identification function. By invoking [19, Proposition 3.2.1], any strict α ( α ) β ( β ) {\mathcal{F}^{\alpha}\cap\mathcal{F}^{(\alpha)}\cap\mathcal{F}^{\beta}\cap% \mathcal{F}^{(\beta)}} -identification function for ( VaR α , VaR β , RVaR α , β ) {(\operatorname{VaR}_{\alpha},\operatorname{VaR}_{\beta},\operatorname{RVaR}_{% \alpha,\beta})} is given by

H ( x 1 , x 2 , x 3 ) V ( x 1 , x 2 , x 3 , y ) , H(x_{1},x_{2},x_{3})V(x_{1},x_{2},x_{3},y),

where V is given in (3.1) and H : 3 3 × 3 {H\colon\mathbb{R}^{3}\to\mathbb{R}^{3\times 3}} is a matrix-valued function whose determinant does not vanish.

Second, Proposition 3.1 enables the characterization result of strictly consistent scoring functions presented in Theorem 3.7.

The following theorem establishes a rich class of (strictly) consistent scoring functions S : 3 × {S\colon\mathbb{R}^{3}\times\mathbb{R}\to\mathbb{R}} for ( VaR α , VaR β , RVaR α , β ) {(\operatorname{VaR}_{\alpha},\operatorname{VaR}_{\beta},\operatorname{RVaR}_{% \alpha,\beta})} . By a priori assuming the forecasts to be bounded with values in some cube [ c min , c max ] 3 {[c_{\min},c_{\max}]^{3}} , - c min < c max {-\infty\leq c_{\min}<c_{\max}\leq\infty} (here and throughout the paper, we make the tacit convention that [ c min , c max ] := [ c min , c max ] {[c_{\min},c_{\max}]:=[c_{\min},c_{\max}]\cap\mathbb{R}} if c min = - {c_{\min}=-\infty} or c max = {c_{\max}=\infty} ), the class gets even broader.

Theorem 3.3.

For 0 < α < β < 1 {0<\alpha<\beta<1} , the map S : [ c min , c max ] 3 × R R {S\colon[c_{\min},c_{\max}]^{3}\times\mathbb{R}\to\mathbb{R}} defined by

S ( x 1 , x 2 , x 3 , y ) = ( 𝟙 { y x 1 } - α ) g 1 ( x 1 ) - 𝟙 { y x 1 } g 1 ( y ) + ( 𝟙 { y x 2 } - β ) g 2 ( x 2 ) - 𝟙 { y x 2 } g 2 ( y ) \displaystyle S(x_{1},x_{2},x_{3},y)=(\mathbb{1}\{y\leq x_{1}\}-\alpha)g_{1}(x% _{1})-\mathbb{1}\{y\leq x_{1}\}g_{1}(y)+(\mathbb{1}\{y\leq x_{2}\}-\beta)g_{2}% (x_{2})-\mathbb{1}\{y\leq x_{2}\}g_{2}(y) + ϕ ( x 3 ) ( x 3 + 1 β - α ( S β ( x 2 , y ) - S α ( x 1 , y ) ) ) - ϕ ( x 3 ) + a ( y ) \displaystyle +\phi^{\prime}(x_{3})\Bigl{(}x_{3}+\frac{1}{\beta-\alpha% }(S_{\beta}(x_{2},y)-S_{\alpha}(x_{1},y))\Bigr{)}-\phi(x_{3})+a(y)

is an F {\mathcal{F}} -consistent scoring function for T = ( VaR α , VaR β , RVaR α , β ) {T=(\operatorname{VaR}_{\alpha},\operatorname{VaR}_{\beta},\operatorname{RVaR}% _{\alpha,\beta})} if the following conditions hold:

(i) {\phi\colon[c_{\min},c_{\max}]\to\mathbb{R}} is convex with subgradient {\phi^{\prime}} .

(ii) For all {x_{3}\in[c_{\min},c_{\max}]} the functions

G_{1,x_{3}}\colon[c_{\min},c_{\max}]\to\mathbb{R},\quad x_{1}\mapsto g_{1}(x_{1})-x_{1}\phi^{\prime}(x_{3})/(\beta-\alpha),\quad(3.4)
G_{2,x_{3}}\colon[c_{\min},c_{\max}]\to\mathbb{R},\quad x_{2}\mapsto g_{2}(x_{2})+x_{2}\phi^{\prime}(x_{3})/(\beta-\alpha),\quad(3.5)

are increasing.

(iii) {y\mapsto a(y)-\mathbb{1}\{y\leq x_{1}\}g_{1}(y)-\mathbb{1}\{y\leq x_{2}\}g_{2}(y)} is {\mathcal{F}} -integrable for all {x_{1},x_{2}\in[c_{\min},c_{\max}]} .

If moreover ϕ is strictly convex and the functions {G_{1,x_{3}}} and {G_{2,x_{3}}} in (3.4) and (3.5) are strictly increasing for all {x_{3}\in[c_{\min},c_{\max}]} , then S is strictly {\mathcal{F}^{\alpha}\cap\mathcal{F}^{\beta}} -consistent for T.

Proof.

Let ( x 1 , x 2 , x 3 ) 𝖠 {(x_{1},x_{2},x_{3})\in\mathsf{A}} , F {F\in\mathcal{F}} and ( t 1 , t 2 , t 3 ) := T ( F ) {(t_{1},t_{2},t_{3}):=T(F)} . Then, since G 1 , x 3 {G_{1,x_{3}}} is increasing,

[ c min , c max ] × ( x 1 , y ) S ( x 1 , x 2 , x 3 , y ) [c_{\min},c_{\max}]\times\mathbb{R}\ni(x_{1}^{\prime},y)\mapsto S(x_{1}^{% \prime},x_{2},x_{3},y)

is {\mathcal{F}} -consistent for VaR α {\operatorname{VaR}_{\alpha}} and it is strictly α {\mathcal{F}^{\alpha}} -consistent if G 1 , x 3 {G_{1,x_{3}}} is strictly increasing. Similar comments apply to the map [ c min , c max ] × ( x 2 , y ) S ( t 1 , x 2 , x 3 , y ) {[c_{\min},c_{\max}]\times\mathbb{R}\ni(x_{2}^{\prime},y)\mapsto S(t_{1},x_{2}% ^{\prime},x_{3},y)} . Hence,

0\leq\bar{S}(x_{1},x_{2},x_{3},F)-\bar{S}(t_{1},x_{2},x_{3},F)+\bar{S}(t_{1},x_{2},x_{3},F)-\bar{S}(t_{1},t_{2},x_{3},F)=\bar{S}(x_{1},x_{2},x_{3},F)-\bar{S}(t_{1},t_{2},x_{3},F),

with a strict inequality under the conditions for strict consistency and if ( x 1 , x 2 ) ( t 1 , t 2 ) {(x_{1},x_{2})\neq(t_{1},t_{2})} . Finally,

\bar{S}(t_{1},t_{2},x_{3},F)-\bar{S}(t_{1},t_{2},t_{3},F)=\phi^{\prime}(x_{3})(x_{3}-t_{3})-\phi(x_{3})+\phi(t_{3})\geq 0\quad(3.6)

since ϕ is convex. If ϕ is strictly convex and if x 3 t 3 {x_{3}\neq t_{3}} , the inequality in (3.6) is strict. ∎
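To illustrate Theorem 3.3 numerically, one can implement S of the form (3.3) for a concrete choice of building blocks and verify that the empirical mean score over a large sample is smallest at the true triplet. The sketch below uses {g_{1}(x)=x} , {g_{2}(x)=x} , {\phi^{\prime}(x_{3})=(\beta-\alpha)\tanh(x_{3}/2)} (the choice behind {S_{1}} in Example 3.6) and {a(y)=0} ; all function names and the candidate forecasts are our own choices, not the paper's.

```python
import math
import random

# Hedged sketch of the scoring function S in (3.3) with g1 = g2 = id,
# phi'(x) = (beta-alpha)*tanh(x/2) and a(y) = 0; names are ours.

def S_q(level, x, y):
    """Standard quantile score S_level(x, y) = (1{y<=x}-level)x - 1{y<=x}y."""
    ind = 1.0 if y <= x else 0.0
    return (ind - level) * x - ind * y

def phi(x, ba):
    """Antiderivative of (beta-alpha)*tanh(x/2)."""
    return ba * (2.0 * math.log(math.exp(x) + 1.0) - x)

def phi_prime(x, ba):
    return ba * math.tanh(x / 2.0)

def S(alpha, beta, x1, x2, x3, y):
    ba = beta - alpha
    g_part = S_q(alpha, x1, y) + S_q(beta, x2, y)   # g1 = g2 = identity
    rvar_part = (phi_prime(x3, ba)
                 * (x3 + (S_q(beta, x2, y) - S_q(alpha, x1, y)) / ba)
                 - phi(x3, ba))
    return g_part + rvar_part

random.seed(2)
alpha, beta = 0.1, 0.9
# True triplet for the standard normal: RVaR_{0.1,0.9} = 0 by symmetry.
truth = (-1.2815515655446004, 1.2815515655446004, 0.0)
candidates = [truth,
              (-1.0, 1.2815515655446004, 0.0),   # VaR_alpha perturbed
              (-1.2815515655446004, 1.6, 0.0),   # VaR_beta perturbed
              (-1.2815515655446004, 1.2815515655446004, 0.4)]  # RVaR perturbed
ys = [random.gauss(0.0, 1.0) for _ in range(200_000)]
scores = [sum(S(alpha, beta, *c, y) for y in ys) / len(ys) for c in candidates]
print(scores)  # scores[0] is the smallest
```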

Remark 3.4.

Provided condition (iii) in Theorem 3.3 holds and if ϕ is strictly convex and G 1 , x 3 {G_{1,x_{3}}} and G 2 , x 3 {G_{2,x_{3}}} are strictly increasing, then S given in (3.3) is still strictly {\mathcal{F}} -consistent in the RVaR {\operatorname{RVaR}} -component for general 0 {\mathcal{F}\subseteq\mathcal{F}_{0}} . That is, for F {F\in\mathcal{F}} ,

arg min x 𝖠 0 S ¯ ( x , F ) = q α ( F ) × q β ( F ) × { RVaR α , β ( F ) } . \operatornamewithlimits{arg\,min}_{x\in\mathsf{A}_{0}}\bar{S}(x,F)=q_{\alpha}(% F)\times q_{\beta}(F)\times\{\operatorname{RVaR}_{\alpha,\beta}(F)\}.

By making use of (2.3) and the revelation principle [44, 27, 19], Theorem 3.3 also provides a rich class of strictly consistent scoring functions for ( VaR α , VaR 1 - α , W α ) {(\operatorname{VaR}_{\alpha},\operatorname{VaR}_{1-\alpha},W_{\alpha})} , where W α {W_{\alpha}} is the α-Winsorized mean. The following proposition is useful to construct examples; see Section 6.

Proposition 3.5.

Let S be of the form (3.3) with a (strictly) convex and non-constant function ϕ, and functions g 1 {g_{1}} , g 2 {g_{2}} such that the functions at (3.4) and (3.5) are (strictly) increasing and condition (iii) of Theorem 3.3 is satisfied. Then the following assertions hold:

(i) The subgradient {\phi^{\prime}} of ϕ is necessarily bounded, and the one-sided derivatives of {g_{1}} and {g_{2}} are necessarily bounded from below.

(ii) S is proportional to a scoring function {\tilde{S}} of the form (3.3) with a (strictly) convex function {\tilde{\phi}} such that {\tilde{\phi}^{\prime}} is bounded with

\beta-\alpha=-\inf_{x\in[c_{\min},c_{\max}]}\tilde{\phi}^{\prime}(x)=\sup_{x\in[c_{\min},c_{\max}]}\tilde{\phi}^{\prime}(x),

and strictly increasing functions g ~ 1 {\tilde{g}_{1}} , g ~ 2 {\tilde{g}_{2}} such that their one-sided derivatives are bounded from below by one and such that the functions at ( 3.4 ) and ( 3.5 ) are (strictly) increasing and condition (iii) of Theorem 3.3 is satisfied.

Proof.

(i) The proof is similar to the one of [21, Corollary 5.5]: condition (ii) implies that for any

x 1 , x 1 , x 2 , x 2 , x 3 [ c min , c max ] x_{1},x_{1}^{\prime},x_{2},x_{2}^{\prime},x_{3}\in[c_{\min},c_{\max}]

with x 1 < x 1 {x_{1}<x_{1}^{\prime}} and x 2 < x 2 {x_{2}<x_{2}^{\prime}} it holds that

-\infty<-\frac{g_{2}(x^{\prime}_{2})-g_{2}(x_{2})}{x_{2}^{\prime}-x_{2}}\leq\frac{\phi^{\prime}(x_{3})}{\beta-\alpha}\leq\frac{g_{1}(x_{1}^{\prime})-g_{1}(x_{1})}{x_{1}^{\prime}-x_{1}}<\infty.

Therefore, ϕ {\phi^{\prime}} is bounded, and the one-sided derivative of g 1 {g_{1}} is bounded from below by sup x 3 ϕ ( x 3 ) / ( β - α ) {\sup_{x_{3}}\phi^{\prime}(x_{3})/(\beta-\alpha)} , while the one-sided derivative of g 2 {g_{2}} is bounded from below by - inf x 3 ϕ ( x 3 ) / ( β - α ) {-\inf_{x_{3}}\phi^{\prime}(x_{3})/(\beta-\alpha)} .

(ii) For any c {c\in\mathbb{R}} , if we replace ϕ by ϕ ^ : x ϕ ( x ) + c x {\widehat{\phi}:x\mapsto\phi(x)+cx} , g 1 {g_{1}} by g ^ 1 : x g 1 ( x ) + c x / ( β - α ) {\widehat{g}_{1}:x\mapsto g_{1}(x)+cx/(\beta-\alpha)} , and g 2 {g_{2}} by g ^ 2 : x g 2 ( x ) - c x / ( β - α ) {\widehat{g}_{2}:x\mapsto g_{2}(x)-cx/(\beta-\alpha)} in formula (3.3) for S, then S does not change. Also, ϕ ^ {\widehat{\phi}} is (strictly) convex if and only if ϕ is (strictly) convex. Furthermore, conditions (ii) and (iii) of Theorem 3.3 hold for ( ϕ , g 1 , g 2 ) {(\phi,g_{1},g_{2})} if and only if they hold for ( ϕ ^ , g ^ 1 , g ^ 2 ) {(\widehat{\phi},\widehat{g}_{1},\widehat{g}_{2})} . By part (i) of the proposition, ϕ {\phi^{\prime}} is bounded. Therefore, we can assume without loss of generality that

-\inf_{x\in[c_{\min},c_{\max}]}\phi^{\prime}(x)=\sup_{x\in[c_{\min},c_{\max}]}\phi^{\prime}(x)=\lambda>0

since ϕ is non-constant. The claim then follows by setting {\tilde{S}=\frac{\beta-\alpha}{\lambda}S} . ∎
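The normalization in part (ii) can be carried out numerically: center a given bounded increasing {\phi^{\prime}} by a constant shift c (which leaves S unchanged once {g_{1}} , {g_{2}} are adjusted accordingly) and rescale so that the resulting subgradient has supremum {\beta-\alpha} . A minimal sketch, with {\phi^{\prime}} chosen as a shifted arctangent purely for illustration (our choice, not the paper's):

```python
import math

# Hedged sketch of the normalization in Proposition 3.5 (ii): centre a bounded,
# increasing phi' so that -inf phi' = sup phi' = lam, then rescale by
# (beta-alpha)/lam so that the supremum becomes beta - alpha. Names are ours.

alpha, beta = 0.1, 0.9
grid = [x / 100.0 for x in range(-500, 501)]   # stand-in for [c_min, c_max]

phi_p = [math.atan(x) + 0.3 for x in grid]     # bounded, increasing, off-centre

# Step 1: centre (the constant shift c leaves S unchanged when g1, g2 are
# adjusted as in the proof above).
c = -(max(phi_p) + min(phi_p)) / 2.0
centred = [v + c for v in phi_p]
lam = max(centred)                              # = -min(centred) after centring

# Step 2: rescale the score (and hence phi') by (beta - alpha)/lam.
scale = (beta - alpha) / lam
tilde = [scale * v for v in centred]

print(max(tilde), -min(tilde))  # both approximately beta - alpha = 0.8
```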

Example 3.6.

Proposition 3.5 in combination with Theorem 3.3 yields a straightforward recipe to generate (strictly) consistent scoring functions for ( VaR α , VaR β , RVaR α , β ) {(\operatorname{VaR}_{\alpha},\operatorname{VaR}_{\beta},\operatorname{RVaR}_{% \alpha,\beta})} . The main degree of flexibility is the choice of ϕ. For practical purposes, it can be easier to start with the choice of ϕ {\phi^{\prime}} , which should be a (strictly) increasing and bounded function. A rich source for such functions is the class of (strictly increasing) cumulative distribution functions, which can easily be scaled to have an infimum of - ( β - α ) {-(\beta-\alpha)} and a supremum of β - α {\beta-\alpha} . Then ϕ can be obtained by integrating ϕ {\phi^{\prime}} . The simplest choice for g 1 {g_{1}} and g 2 {g_{2}} is the identity, i.e., g 1 ( x 1 ) = x 1 {g_{1}(x_{1})=x_{1}} and g 2 ( x 2 ) = x 2 {g_{2}(x_{2})=x_{2}} . The only remaining degree of flexibility is then to add consistent scoring functions for VaR α {\operatorname{VaR}_{\alpha}} or for VaR β {\operatorname{VaR}_{\beta}} . Table 1 contains some examples for choices of ϕ {\phi^{\prime}} . For illustrative purposes, let us discuss the score S 1 {S_{1}} from Table 1 more closely. Just as in the case of S 3 {S_{3}} , but less obviously so, the corresponding ϕ {\phi^{\prime}} is motivated by a distribution function. In this case, it is the logistic distribution e x 3 / ( 1 + e x 3 ) {\mathrm{e}^{x_{3}}/(1+\mathrm{e}^{x_{3}})} . Proper translation and scaling according to Proposition 3.5 leads to

ϕ ( x 3 ) = ( β - α ) ( 2 e x 3 1 + e x 3 - 1 ) = ( β - α ) e x 3 - 1 e x 3 + 1 = ( β - α ) tanh ( x 3 2 ) . \phi^{\prime}(x_{3})=(\beta-\alpha)\Bigl{(}\frac{2\mathrm{e}^{x_{3}}}{1+% \mathrm{e}^{x_{3}}}-1\Bigr{)}=(\beta-\alpha)\frac{\mathrm{e}^{x_{3}}-1}{% \mathrm{e}^{x_{3}}+1}=(\beta-\alpha)\tanh\Bigl{(}\frac{x_{3}}{2}\Bigr{)}.

An antiderivative of ϕ {\phi^{\prime}} is given by ϕ ( x 3 ) = ( β - α ) ( 2 log ( e x 3 + 1 ) - x 3 ) {\phi(x_{3})=(\beta-\alpha)(2\log(\mathrm{e}^{x_{3}}+1)-x_{3})} . Therefore, upon choosing a ( y ) = 2 y {a(y)=2y} , the explicit form of S 1 {S_{1}} reads

S_{1}(x_{1},x_{2},x_{3},y)=(\mathbb{1}\{y\leq x_{1}\}-\alpha)x_{1}+\mathbb{1}\{y>x_{1}\}y+(\mathbb{1}\{y\leq x_{2}\}-\beta)x_{2}+\mathbb{1}\{y>x_{2}\}y+2(\beta-\alpha)\Bigl(x_{3}\frac{\mathrm{e}^{x_{3}}}{\mathrm{e}^{x_{3}}+1}-\log(\mathrm{e}^{x_{3}}+1)\Bigr)+\frac{\mathrm{e}^{x_{3}}-1}{\mathrm{e}^{x_{3}}+1}\bigl((\mathbb{1}\{y\leq x_{2}\}-\beta)x_{2}-\mathbb{1}\{y\leq x_{2}\}y-(\mathbb{1}\{y\leq x_{1}\}-\alpha)x_{1}+\mathbb{1}\{y\leq x_{1}\}y\bigr).

The particular choice of a ( y ) = 2 y {a(y)=2y} can be beneficial with regard to integrability conditions: With this choice, S 1 {S_{1}} is F-integrable if and only if the right tail of F is integrable, i.e., if 0 y d F ( y ) < {\int_{0}^{\infty}y\,\mathrm{d}F(y)<\infty} . In a risk management context with our sign convention, the right tail corresponds to the gains, which are commonly less heavy-tailed than the losses. While ϕ {\phi^{\prime}} appearing in S 2 {S_{2}} can easily be integrated with an antiderivative of

\phi(x_{3})=(\beta-\alpha)\Bigl(\frac{2}{\pi}\Bigr)\Bigl(x_{3}\arctan(x_{3})-\frac{\log(x_{3}^{2}+1)}{2}\Bigr),

the antiderivative of {\phi^{\prime}} for {S_{3}} admits no closed form in terms of elementary functions and therefore requires numerical evaluation. The scoring function {S_{4}} , where {\phi^{\prime}} is an increasing piecewise linear function which is strictly increasing only on {[c_{1},c_{2}]} , is in the spirit of the Huber loss [32, p. 79]. It is only strictly consistent on {[c_{1},c_{2}]^{3}} , but remains consistent on all of {\mathbb{R}^{3}} .

Table 1: Examples of scoring functions. In all cases we choose {g_{1}(x_{1})=x_{1}} and {g_{2}(x_{2})=x_{2}} . The parameters {c_{1},c_{2}\in\mathbb{R}} satisfy {c_{1}<c_{2}} , and Φ is the cumulative distribution function of a standard normal law.

Scoring function | {\phi^{\prime}(x_{3})}
{S_{1}} | {(\beta-\alpha)\tanh(x_{3}/2)}
{S_{2}} | {(\beta-\alpha)(2/\pi)\arctan(x_{3})}
{S_{3}} | {(\beta-\alpha)(2\Phi(x_{3})-1)}
{S_{4}} | {(\beta-\alpha)\bigl(-\mathbb{1}\{x_{3}<c_{1}\}+\mathbb{1}\{x_{3}>c_{2}\}+\mathbb{1}\{c_{1}\leq x_{3}\leq c_{2}\}\,2(x_{3}-(c_{1}+c_{2})/2)/(c_{2}-c_{1})\bigr)}
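The four choices of {\phi^{\prime}} from Table 1 are easy to sanity-check numerically: each should be increasing and take values in {[-(\beta-\alpha),\beta-\alpha]} . A small sketch (with {c_{1}=-1} , {c_{2}=1} for {S_{4}} , our choice; Φ is expressed via the error function):

```python
import math

# Hedged numerical sanity check of the four phi' choices from Table 1; each
# should be increasing and bounded in absolute value by beta - alpha.
# The grid and the values c1 = -1, c2 = 1 are our assumptions.

alpha, beta = 0.1, 0.9
ba = beta - alpha
c1, c2 = -1.0, 1.0

def Phi(x):
    """Standard normal cdf via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

choices = {
    "S1": lambda x: ba * math.tanh(x / 2.0),
    "S2": lambda x: ba * (2.0 / math.pi) * math.atan(x),
    "S3": lambda x: ba * (2.0 * Phi(x) - 1.0),
    "S4": lambda x: ba * (-1.0 if x < c1 else
                          (1.0 if x > c2
                           else 2.0 * (x - (c1 + c2) / 2.0) / (c2 - c1))),
}

grid = [x / 50.0 for x in range(-300, 301)]
for name, f in choices.items():
    vals = [f(x) for x in grid]
    assert all(-ba <= v <= ba for v in vals)                         # bounded
    assert all(vals[i] <= vals[i + 1] for i in range(len(vals) - 1)) # increasing
print("all four phi' are increasing and bounded by beta - alpha")
```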

Striving for a full characterization of the class of strictly consistent scoring functions for

T = ( VaR α , VaR β , RVaR α , β ) , T=(\operatorname{VaR}_{\alpha},\operatorname{VaR}_{\beta},\operatorname{RVaR}_% {\alpha,\beta}),

we shall next establish the counterpart of Theorem 3.3, providing necessary conditions for strict consistency. The main tool to derive such necessary conditions is Osband's principle, originating from the seminal dissertation of Osband; see also  for an accessible intuition. We use the precise technical formulation of [21, Theorem 3.2]. Unsurprisingly, necessary conditions for strictly {\mathcal{F}} -consistent scores for T can only be obtained for action domains {\mathsf{A}\subseteq\mathbb{R}^{3}} such that the surjectivity condition {\mathsf{A}=\{T(F)\colon F\in\mathcal{F}\}} holds. By invoking inequality (2.1), any such action domain is necessarily a subset of

𝖠 0 := { ( x 1 , x 2 , x 3 ) 3 x 1 x 3 x 2 } , \mathsf{A}_{0}:=\bigl{\{}(x_{1},x_{2},x_{3})\in\mathbb{R}^{3}\mid x_{1}\leq x_% {3}\leq x_{2}\bigr{\}},

which we therefore call the maximal sensible action domain. Issuing forecasts for T outside of {\mathsf{A}_{0}} , thus violating (2.1), would be irrational, comparable to, say, negative variance forecasts. Still, the scoring functions of the form (3.3) allow for the evaluation of forecasts violating (2.1). Besides the surjectivity assumption and further richness assumptions on the class of distributions {\mathcal{F}} , we need to impose smoothness conditions on the expected score so as to exploit first-order conditions stemming from the minimization problem of strict consistency; see Section A for the detailed technical formulations and  for a discussion of these conditions.

We introduce the class cont α α {\mathcal{F}^{\alpha}_{\mathrm{cont}}\subset\mathcal{F}^{\alpha}} of distributions in α {\mathcal{F}^{\alpha}} which are continuously differentiable (and therefore also in ( α ) {\mathcal{F}^{(\alpha)}} ). For any 𝖠 3 {\mathsf{A}\subseteq\mathbb{R}^{3}} , we denote the projections on the r-th component by

𝖠 r := { x r there exists  ( z 1 , z 2 , z 3 ) 𝖠 , z r = x r } , r { 1 , 2 , 3 } . \mathsf{A}^{\prime}_{r}:=\bigl{\{}x_{r}\in\mathbb{R}\mid\text{there exists }(z% _{1},z_{2},z_{3})\in\mathsf{A},\,z_{r}=x_{r}\bigr{\}},\quad r\in\{1,2,3\}.

For any x 3 𝖠 3 {x_{3}\in\mathsf{A}^{\prime}_{3}} and m { 1 , 2 } {m\in\{1,2\}} , let

𝖠 m , x 3 := { x m there exists  ( z 1 , z 2 , z 3 ) 𝖠 , z m = x m , z 3 = x 3 } . \mathsf{A}^{\prime}_{m,x_{3}}:=\bigl{\{}x_{m}\in\mathbb{R}\mid\text{there % exists }(z_{1},z_{2},z_{3})\in\mathsf{A},\,z_{m}=x_{m},\,z_{3}=x_{3}\bigr{\}}.

Theorem 3.7.

Let F F cont α {\mathcal{F}\subseteq\mathcal{F}^{\alpha}_{\mathrm{cont}}} , 0 < α < β < 1 {0<\alpha<\beta<1} , T = ( VaR α , VaR β , RVaR α , β ) : F A A 0 {T=(\operatorname{VaR}_{\alpha},\operatorname{VaR}_{\beta},\operatorname{RVaR}% _{\alpha,\beta})\colon\mathcal{F}\to\mathsf{A}\subseteq\mathsf{A}_{0}} , and let V = ( V 1 , V 2 , V 3 ) {V=(V_{1},V_{2},V_{3})^{\intercal}} be defined at (3.1). If Assumptions (V1) and (F1) hold and ( V 1 , V 2 ) {(V_{1},V_{2})^{\intercal}} satisfies Assumption (V4), then any strictly F {\mathcal{F}} -consistent scoring function S : A × R R {S\colon\mathsf{A}\times\mathbb{R}\to\mathbb{R}} for T that satisfies Assumptions (VS1) and (S2) is necessarily of the form (3.3) almost everywhere, where the functions G r , x 3 : A r , x 3 R {G_{r,x_{3}}\colon\mathsf{A}^{\prime}_{r,x_{3}}\to\mathbb{R}} , r { 1 , 2 } {r\in\{1,2\}} , x 3 A 3 {x_{3}\in\mathsf{A}^{\prime}_{3}} , in (3.4) and (3.5) are strictly increasing and ϕ : A 3 R {\phi\colon\mathsf{A}^{\prime}_{3}\to\mathbb{R}} is strictly convex.

Proof.

First note that V satisfies Assumption (V3) on cont α {\mathcal{F}\subseteq\mathcal{F}^{\alpha}_{\mathrm{cont}}} . Let F {F\in\mathcal{F}} with derivative f and let x int ( 𝖠 ) {x\in\operatorname{int}(\mathsf{A})} . Then one obtains

\bar{V}_{3}(x,F)=x_{3}+\frac{1}{\beta-\alpha}\biggl(x_{2}(F(x_{2})-\beta)-x_{1}(F(x_{1})-\alpha)-\int_{x_{1}}^{x_{2}}yf(y)\,\mathrm{d}y\biggr).

The partial derivatives of V are given by

\partial_{1}\bar{V}_{1}(x,F)=f(x_{1}),\qquad\partial_{2}\bar{V}_{2}(x,F)=f(x_{2}),
\partial_{1}\bar{V}_{3}(x,F)=-\frac{F(x_{1})-\alpha}{\beta-\alpha},\qquad\partial_{2}\bar{V}_{3}(x,F)=\frac{F(x_{2})-\beta}{\beta-\alpha},\qquad\partial_{3}\bar{V}_{3}(x,F)=1,

and r V ¯ 1 ( x , F ) {\partial_{r}\bar{V}_{1}(x,F)} and m V ¯ 2 ( x , F ) {\partial_{m}\bar{V}_{2}(x,F)} vanish for r { 2 , 3 } {r\in\{2,3\}} and m { 1 , 3 } {m\in\{1,3\}} . Applying [21, Theorem 3.2] yields the existence of continuously differentiable functions h l m : int ( 𝖠 ) {h_{lm}\colon\operatorname{int}(\mathsf{A})\to\mathbb{R}} , l , m { 1 , 2 , 3 } {l,m\in\{1,2,3\}} , such that

m S ¯ ( x , F ) = i = 1 3 h m i ( x ) V ¯ i ( x , F ) \partial_{m}\bar{S}(x,F)=\sum_{i=1}^{3}h_{mi}(x)\bar{V}_{i}(x,F)

for m { 1 , 2 , 3 } {m\in\{1,2,3\}} . Since we assume that S ¯ ( , F ) {\bar{S}(\,\cdot\,,F)} is twice continuously differentiable for any F {F\in\mathcal{F}} , the second-order partial derivatives need to commute. Let t = T ( F ) {t=T(F)} . Then 1 2 S ¯ ( t , F ) = 2 1 S ¯ ( t , F ) {\partial_{1}\partial_{2}\bar{S}(t,F)=\partial_{2}\partial_{1}\bar{S}(t,F)} is equivalent to

h 21 ( t ) f ( t 1 ) = h 12 ( t ) f ( t 2 ) . h_{21}(t)f(t_{1})=h_{12}(t)f(t_{2}).

This needs to hold for all F {F\in\mathcal{F}} . The variation in the densities implied by Assumption (V4) in combination with the surjectivity of T yields that h 12 h 21 0 {h_{12}\equiv h_{21}\equiv 0} on int ( 𝖠 ) {\operatorname{int}(\mathsf{A})} . Similarly, evaluating 1 3 S ¯ ( x , F ) = 3 1 S ¯ ( x , F ) {\partial_{1}\partial_{3}\bar{S}(x,F)=\partial_{3}\partial_{1}\bar{S}(x,F)} and 2 3 S ¯ ( x , F ) = 3 2 S ¯ ( x , F ) {\partial_{2}\partial_{3}\bar{S}(x,F)=\partial_{3}\partial_{2}\bar{S}(x,F)} at x = t = T ( F ) {x=t=T(F)} yields

h 13 ( t ) = h 31 ( t ) f ( t 1 ) , h 23 ( t ) = h 32 ( t ) f ( t 2 ) . h_{13}(t)=h_{31}(t)f(t_{1}),\quad h_{23}(t)=h_{32}(t)f(t_{2}).

By using again Assumption (V4) as well as the surjectivity of T, this implies that

h 13 h 31 h 23 h 32 0 . h_{13}\equiv h_{31}\equiv h_{23}\equiv h_{32}\equiv 0.

So we are left with characterizing h m m {h_{mm}} for m { 1 , 2 , 3 } {m\in\{1,2,3\}} . Note that Assumption (V1) implies that for any x = ( x 1 , x 2 , x 3 ) int ( 𝖠 ) {x=(x_{1},x_{2},x_{3})\in\operatorname{int}(\mathsf{A})} there are two distributions F 1 , F 2 {F_{1},F_{2}\in\mathcal{F}} such that

( F 1 ( x 1 ) - α , F 1 ( x 2 ) - β ) and ( F 2 ( x 1 ) - α , F 2 ( x 2 ) - β ) (F_{1}(x_{1})-\alpha,F_{1}(x_{2})-\beta)^{\intercal}\quad\text{and}\quad(F_{2}% (x_{1})-\alpha,F_{2}(x_{2})-\beta)^{\intercal}

are linearly independent. Then the requirement that

1 2 S ¯ ( x , F ) = 1 h 22 ( x ) ( F ( x 2 ) - β ) = 2 h 11 ( x ) ( F ( x 1 ) - α ) = 2 1 S ¯ ( x , F ) \partial_{1}\partial_{2}\bar{S}(x,F)=\partial_{1}h_{22}(x)(F(x_{2})-\beta)=% \partial_{2}h_{11}(x)(F(x_{1})-\alpha)=\partial_{2}\partial_{1}\bar{S}(x,F)

for all x int ( 𝖠 ) {x\in\operatorname{int}(\mathsf{A})} and for all F {F\in\mathcal{F}} implies that 1 h 22 2 h 11 0 {\partial_{1}h_{22}\equiv\partial_{2}h_{11}\equiv 0} . Starting with 1 3 S ¯ ( x , F ) = 3 1 S ¯ ( x , F ) {\partial_{1}\partial_{3}\bar{S}(x,F)=\partial_{3}\partial_{1}\bar{S}(x,F)} implies that

\partial_{1}h_{33}(x)\bar{V}_{3}(x,F)=\Bigl(\partial_{3}h_{11}(x)+\frac{h_{33}(x)}{\beta-\alpha}\Bigr)\bar{V}_{1}(x,F).

Again, Assumption (V1) implies that there are F 1 , F 2 {F_{1},F_{2}\in\mathcal{F}} such that

( V ¯ 1 ( x , F 1 ) , V ¯ 3 ( x , F 1 ) ) and ( V ¯ 1 ( x , F 2 ) , V ¯ 3 ( x , F 2 ) ) (\bar{V}_{1}(x,F_{1}),\bar{V}_{3}(x,F_{1}))^{\intercal}\quad\text{and}\quad(% \bar{V}_{1}(x,F_{2}),\bar{V}_{3}(x,F_{2}))^{\intercal}

are linearly independent. Hence, we obtain that 1 h 33 0 {\partial_{1}h_{33}\equiv 0} and 3 h 11 - h 33 / ( β - α ) {\partial_{3}h_{11}\equiv-h_{33}/(\beta-\alpha)} . With the same argumentation and starting from 2 3 S ¯ ( x , F ) = 3 2 S ¯ ( x , F ) {\partial_{2}\partial_{3}\bar{S}(x,F)=\partial_{3}\partial_{2}\bar{S}(x,F)} , one can show that 2 h 33 0 {\partial_{2}h_{33}\equiv 0} and 3 h 22 h 33 / ( β - α ) {\partial_{3}h_{22}\equiv h_{33}/(\beta-\alpha)} . This means there exist functions

\eta_{1}\colon\bigl\{(x_{1},x_{3})\in\mathbb{R}^{2}\mid\text{there exists }(z_{1},z_{2},z_{3})\in\operatorname{int}(\mathsf{A}),\,x_{1}=z_{1},\,x_{3}=z_{3}\bigr\}\to\mathbb{R},
\eta_{2}\colon\bigl\{(x_{2},x_{3})\in\mathbb{R}^{2}\mid\text{there exists }(z_{1},z_{2},z_{3})\in\operatorname{int}(\mathsf{A}),\,x_{2}=z_{2},\,x_{3}=z_{3}\bigr\}\to\mathbb{R},
\eta_{3}\colon\operatorname{int}(\mathsf{A})^{\prime}_{3}\to\mathbb{R},

and some z int ( 𝖠 ) 3 {z\in\operatorname{int}(\mathsf{A})^{\prime}_{3}} such that for any x = ( x 1 , x 2 , x 3 ) int ( 𝖠 ) {x=(x_{1},x_{2},x_{3})\in\operatorname{int}(\mathsf{A})} it holds that

h_{33}(x)=\eta_{3}(x_{3}),
h_{11}(x)=\eta_{1}(x_{1},x_{3})=-\frac{1}{\beta-\alpha}\int_{z}^{x_{3}}\eta_{3}(u)\,\mathrm{d}u+\zeta_{1}(x_{1}),
h_{22}(x)=\eta_{2}(x_{2},x_{3})=\frac{1}{\beta-\alpha}\int_{z}^{x_{3}}\eta_{3}(u)\,\mathrm{d}u+\zeta_{2}(x_{2}),

where ζ r : int ( 𝖠 ) r {\zeta_{r}\colon\operatorname{int}(\mathsf{A})^{\prime}_{r}\to\mathbb{R}} , r { 1 , 2 } {r\in\{1,2\}} . Due to the fact that any component of T is mixture-continuous

(For convex {\mathcal{F}} , a functional {T\colon\mathcal{F}\to\mathbb{R}^{k}} is called mixture-continuous if for any {F,G\in\mathcal{F}} the map {[0,1]\ni\lambda\mapsto T((1-\lambda)F+\lambda G)} is continuous.)

and since {\mathcal{F}} is convex and T surjective, the projection int ( 𝖠 ) 3 {\operatorname{int}(\mathsf{A})^{\prime}_{3}} is an open interval. Hence, [ min ( z , x 3 ) , max ( z , x 3 ) ] int ( 𝖠 ) 3 {[\min(z,x_{3}),\max(z,x_{3})]\subset\operatorname{int}(\mathsf{A})^{\prime}_{% 3}} . Due to Assumptions (V3) and (S2), [21, Theorem 3.2] implies that η 1 , η 2 , η 3 {\eta_{1},\eta_{2},\eta_{3}} are locally Lipschitz continuous.

The above calculations imply that the Hessian of the expected score, i.e., {\nabla^{2}\bar{S}(x,F)} , at its minimizer {x=t=T(F)} , is a diagonal matrix with entries {\eta_{1}(t_{1},t_{3})f(t_{1})} , {\eta_{2}(t_{2},t_{3})f(t_{2})} and {\eta_{3}(t_{3})} . As a second-order condition, {\nabla^{2}\bar{S}(t,F)} must be positive semi-definite. By invoking the surjectivity of T once again, this shows that {\eta_{1},\eta_{2},\eta_{3}\geq 0} .

More to the point, invoking the continuous differentiability of the expected score and the fact that S is strictly {\mathcal{F}} -consistent for T, one obtains that for any {F\in\mathcal{F}} with {t=T(F)} and for any {v\in\mathbb{R}^{3}} , {v\neq 0} , there exists an {\varepsilon>0} such that {\frac{\mathrm{d}}{\mathrm{d}s}\bar{S}(t+sv,F)} is negative for all {s\in(-\varepsilon,0)} , zero for {s=0} , and positive for all {s\in(0,\varepsilon)} . For {v=e_{3}=(0,0,1)^{\intercal}} , this means that for any {F\in\mathcal{F}} with {t=T(F)} there is an {\varepsilon>0} such that {\frac{\mathrm{d}}{\mathrm{d}s}\bar{S}(t+se_{3},F)=\eta_{3}(t_{3}+s)s} has the same sign as s for all {s\in(-\varepsilon,\varepsilon)} . Therefore, {\eta_{3}(t_{3}+s)>0} for all {s\in(-\varepsilon,\varepsilon)\setminus\{0\}} . Using the surjectivity of T and invoking a compactness argument, one sees that {\eta_{3}} attains the value 0 only finitely many times on any compact interval. Recall that {\operatorname{int}(\mathsf{A})^{\prime}_{3}} is an open interval. Hence, it can be approximated by an increasing sequence of compact intervals.
Therefore, η 3 - 1 ( { 0 } ) {\eta_{3}^{-1}(\{0\})} is at most countable, and therefore a Lebesgue null set. With similar arguments, one can show that for any x 3 int ( 𝖠 ) 3 {x_{3}\in\operatorname{int}(\mathsf{A})^{\prime}_{3}} the sets

{ x 1 there exists  ( z 1 , z 2 , z 3 ) int ( 𝖠 ) , x 1 = z 1 , x 3 = z 3 , η 1 ( x 1 , x 3 ) = 0 } , \displaystyle\bigl{\{}x_{1}\in\mathbb{R}\mid\text{there exists }(z_{1},z_{2},z% _{3})\in\operatorname{int}(\mathsf{A}),\,x_{1}=z_{1},\,x_{3}=z_{3},\,\eta_{1}(% x_{1},x_{3})=0\bigr{\}}, { x 2 [ x 3 , ) there exists  ( z 1 , z 2 , z 3 ) int ( 𝖠 ) , x 2 = z 2 , x 3 = z 3 , η 2 ( x 2 , x 3 ) = 0 } \displaystyle\bigl{\{}x_{2}\in[x_{3},\infty)\mid\text{there exists }(z_{1},z_{% 2},z_{3})\in\operatorname{int}(\mathsf{A}),\,x_{2}=z_{2},\,x_{3}=z_{3},\,\eta_% {2}(x_{2},x_{3})=0\bigr{\}}

are at most countable, and therefore also Lebesgue null sets.

Finally, using [23, Proposition 1 in the supplement] (recognizing that V is locally bounded), one obtains that S is almost everywhere of the form (3.3). Moreover, it holds almost everywhere that ϕ ′′ = η 3 {\phi^{\prime\prime}=\eta_{3}} and g m = ζ m {g_{m}^{\prime}=\zeta_{m}} for m { 1 , 2 } {m\in\{1,2\}} . Hence, ϕ is strictly convex and the functions at (3.4) and (3.5) are strictly increasing. ∎

Combining Theorems 3.3 and 3.7, one can show that the scoring functions given at (3.3) are essentially the only strictly consistent scoring functions for the triplet ( VaR α , VaR β , RVaR α , β ) {(\operatorname{VaR}_{\alpha},\operatorname{VaR}_{\beta},\operatorname{RVaR}_{% \alpha,\beta})} on the action domain

𝖠 = { ( x 1 , x 2 , x 3 ) 3 c min x 1 x 3 x 2 c max } . \mathsf{A}=\bigl{\{}(x_{1},x_{2},x_{3})\in\mathbb{R}^{3}\mid c_{\min}\leq x_{1% }\leq x_{3}\leq x_{2}\leq c_{\max}\bigr{\}}.

Corollary 3.8.

Let

𝖠 = { ( x 1 , x 2 , x 3 ) 3 c min x 1 x 3 x 2 c max } \mathsf{A}=\bigl{\{}(x_{1},x_{2},x_{3})\in\mathbb{R}^{3}\mid c_{\min}\leq x_{1% }\leq x_{3}\leq x_{2}\leq c_{\max}\bigr{\}}

for some {-\infty\leq c_{\min}<c_{\max}\leq\infty} . Under the conditions of Theorem 3.7, a scoring function {S\colon\mathsf{A}\times\mathbb{R}\to\mathbb{R}} is strictly {\mathcal{F}} -consistent for {T=(\operatorname{VaR}_{\alpha},\operatorname{VaR}_{\beta},\operatorname{RVaR}_{\alpha,\beta})} , {0<\alpha<\beta<1} , if and only if it is almost everywhere of the form (3.3) satisfying conditions (i)–(iii) of Theorem 3.3. Moreover, the function {\phi^{\prime}\colon[c_{\min},c_{\max}]\to\mathbb{R}} is necessarily bounded.

Proof.

For the proof it suffices to show that for r { 1 , 2 } {r\in\{1,2\}} , G r , x 3 {G_{r,x_{3}}} defined in (3.4) and (3.5) is not only increasing on 𝖠 r , x 3 {\mathsf{A}_{r,x_{3}}^{\prime}} for any x 3 𝖠 3 {x_{3}\in\mathsf{A}_{3}^{\prime}} , but on 𝖠 r = [ c min , c max ] {\mathsf{A}_{r}^{\prime}=[c_{\min},c_{\max}]} . For x 3 [ c min , c max ] = 𝖠 3 {x_{3}\in[c_{\min},c_{\max}]=\mathsf{A}_{3}^{\prime}} , we have 𝖠 1 , x 3 = [ c min , x 3 ] {\mathsf{A}_{1,x_{3}}^{\prime}=[c_{\min},x_{3}]} and 𝖠 2 , x 3 = [ x 3 , c max ] {\mathsf{A}_{2,x_{3}}^{\prime}=[x_{3},c_{\max}]} . Let x 3 𝖠 3 {x_{3}\in\mathsf{A}^{\prime}_{3}} and x 1 , x 1 𝖠 1 {x_{1},x_{1}^{\prime}\in\mathsf{A}^{\prime}_{1}} with x 1 < x 1 {x_{1}<x_{1}^{\prime}} . If x 1 , x 1 𝖠 1 , x 3 {x_{1},x_{1}^{\prime}\in\mathsf{A}^{\prime}_{1,x_{3}}} , there is nothing to show. If however x 3 < x 1 {x_{3}<x_{1}^{\prime}} , then x 1 , x 1 𝖠 1 , x 1 {x_{1},x^{\prime}_{1}\in\mathsf{A}^{\prime}_{1,x^{\prime}_{1}}} . This means that

0\leq g_{1}(x^{\prime}_{1})-g_{1}(x_{1})-(x^{\prime}_{1}-x_{1})\frac{\phi^{\prime}(x_{1}^{\prime})}{\beta-\alpha}\leq g_{1}(x^{\prime}_{1})-g_{1}(x_{1})-(x^{\prime}_{1}-x_{1})\frac{\phi^{\prime}(x_{3})}{\beta-\alpha},

where the second inequality stems from the fact that ϕ {\phi^{\prime}} is increasing. If the function G 1 , x 1 {G_{1,x^{\prime}_{1}}} is strictly increasing, then the first inequality is strict. The argument for G 2 , x 3 {G_{2,x_{3}}} works analogously. ∎

Remark 3.9.

Note the structural difference between Theorems 3.3 and 3.7 on the one hand and [25, Theorem 1], [6, Proposition 4.14] and in particular [21, Theorem 5.2 and Corollary 5.5] on the other hand. Our functional of interest, {\operatorname{RVaR}_{\alpha,\beta}} with {0<\alpha<\beta<1} , is not a minimum of an expected scoring function (a Bayes risk), but a difference of two such minima. Indeed, while {\operatorname{ES}_{\beta}(F)=-\frac{1}{\beta}\bar{S}_{\beta}(\operatorname{VaR}_{\beta}(F),F)} , we have that

\operatorname{RVaR}_{\alpha,\beta}(F)=-\frac{1}{\beta-\alpha}\bigl(\bar{S}_{\beta}(\operatorname{VaR}_{\beta}(F),F)-\bar{S}_{\alpha}(\operatorname{VaR}_{\alpha}(F),F)\bigr).

This structural difference is reflected in the minus sign appearing at (3.4). In particular, it means that the functions {g_{1}} and {g_{2}} cannot vanish identically if we want to ensure strict consistency of S, whereas the corresponding function in [21, Theorem 5.2] may well be set to zero. [25, Theorem 2] generalizes our results and presents an elicitability result for any linear combination of Bayes risks.
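The identity above can be illustrated on the empirical distribution of a simulated sample: the difference of empirical Bayes risks of the two quantile scores, evaluated at empirical quantiles, recovers the trimmed mean between those quantiles up to O(1/n) boundary terms. The following sketch is ours; all variable names and the distributional choice are assumptions.

```python
import random

# Hedged numerical illustration of the identity in Remark 3.9: for the
# empirical distribution of a sample,
#   -(S_bar_beta(VaR_beta) - S_bar_alpha(VaR_alpha)) / (beta - alpha)
# agrees with the trimmed mean between the two empirical quantiles,
# up to O(1/n) boundary effects.

random.seed(3)
alpha, beta = 0.05, 0.75
n = 100_000
ys = sorted(random.gauss(0.0, 1.0) for _ in range(n))

t1 = ys[int(alpha * n)]   # empirical alpha-quantile (VaR_alpha)
t2 = ys[int(beta * n)]    # empirical beta-quantile (VaR_beta)

def S_bar(level, x):
    """Empirical Bayes risk of the quantile score at report x."""
    return sum(((1.0 if y <= x else 0.0) - level) * x
               - (1.0 if y <= x else 0.0) * y for y in ys) / n

rvar_via_scores = -(S_bar(beta, t2) - S_bar(alpha, t1)) / (beta - alpha)

middle = [y for y in ys if t1 < y <= t2]
rvar_direct = sum(middle) / n / (beta - alpha)

# The two values nearly coincide (population value for N(0,1): about -0.307).
print(rvar_via_scores, rvar_direct)
```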

Translation invariance and homogeneity

There are many choices for the functions {g_{1}} , {g_{2}} and ϕ appearing in the formula for the scoring function S at (3.3). Often, these choices can be narrowed down by imposing secondary desirable criteria on S. For example, since {T=(\operatorname{VaR}_{\alpha},\operatorname{VaR}_{\beta},\operatorname{RVaR}_{\alpha,\beta})} is translation equivariant (meaning that {T(F_{Y+z})=T(F_{Y})+z} for any constant {z\in\mathbb{R}} ) and positively homogeneous of degree 1 (meaning that {T(F_{cY})=cT(F_{Y})} for any {c>0} ), it would make sense if the forecast ranking were invariant under a joint translation of the forecasts and the observations on the one hand, and under a joint scaling of the forecasts and the observations on the other hand. The former requires translation invariance of the score differences, i.e.,

S ( x 1 + z , x 2 + z , x 3 + z , y + z ) - S ( x 1 + z , x 2 + z , x 3 + z , y + z ) = S ( x 1 , x 2 , x 3 , y ) - S ( x 1 , x 2 , x 3 , y ) S(x_{1}+z,x_{2}+z,x_{3}+z,y+z)-S(x^{\prime}_{1}+z,x^{\prime}_{2}+z,x^{\prime}_% {3}+z,y+z)=S(x_{1},x_{2},x_{3},y)-S(x_{1}^{\prime},x_{2}^{\prime},x_{3}^{% \prime},y)

for all ( x 1 , x 2 , x 3 ) , ( x 1 , x 2 , x 3 ) 𝖠 {(x_{1},x_{2},x_{3}),(x_{1}^{\prime},x_{2}^{\prime},x_{3}^{\prime})\in\mathsf{% A}} and y , z {y,z\in\mathbb{R}} . On the other hand, it would require positively homogeneous score differences, that is, there is some b {b\in\mathbb{R}} such that

\[
S(cx_{1},cx_{2},cx_{3},cy)-S(cx_{1}',cx_{2}',cx_{3}',cy)=c^{b}\bigl(S(x_{1},x_{2},x_{3},y)-S(x_{1}',x_{2}',x_{3}',y)\bigr)
\]

for all $(x_{1},x_{2},x_{3}),(x_{1}',x_{2}',x_{3}')\in\mathsf{A}$, $y\in\mathbb{R}$ and all $c>0$. While translation invariance seems particularly important when RVaR is used as a location parameter, i.e., when $\alpha=1-\beta<\frac{1}{2}$, corresponding to the $\alpha$-trimmed mean, positively homogeneous score differences are relevant in a risk management context: the forecast ranking should not depend on the unit in which the risk measures and the gains and losses are reported, be it, say, in Euros or in Euro Cents. We also refer to [45, 43, 22] for further motivation. This section establishes that, unfortunately, there are no strictly consistent scoring functions for $(\operatorname{VaR}_{\alpha},\operatorname{VaR}_{\beta},\operatorname{RVaR}_{\alpha,\beta})$ which admit translation invariant or positively homogeneous score differences under practically relevant settings.
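For intuition, the pinball (quantile) score $S_{\alpha}(x,y)=(\mathbb{1}\{y\leq x\}-\alpha)x-\mathbb{1}\{y\leq x\}y$, recalled before Theorem 5.1, satisfies both requirements: its score differences are translation invariant and positively homogeneous of degree $b=1$. The following Python sketch (illustrative only; the helper names are our own) spot-checks the two defining identities:

```python
def pinball(x, y, alpha):
    # S_alpha(x, y) = (1{y <= x} - alpha) * x - 1{y <= x} * y
    ind = 1.0 if y <= x else 0.0
    return (ind - alpha) * x - ind * y

def translation_invariant_diff(score, x, xp, y, z, tol=1e-9):
    # Does S(x+z, y+z) - S(x'+z, y+z) equal S(x, y) - S(x', y)?
    lhs = score(x + z, y + z) - score(xp + z, y + z)
    rhs = score(x, y) - score(xp, y)
    return abs(lhs - rhs) < tol

def homogeneous_diff(score, x, xp, y, c, b, tol=1e-9):
    # Does S(cx, cy) - S(cx', cy) equal c**b * (S(x, y) - S(x', y))?
    lhs = score(c * x, c * y) - score(c * xp, c * y)
    rhs = c ** b * (score(x, y) - score(xp, y))
    return abs(lhs - rhs) < tol

alpha = 0.05
s = lambda x, y: pinball(x, y, alpha)
for (x, xp, y, z, c) in [(2.0, 1.0, 0.3, 7.5, 3.0), (-1.0, 0.5, 2.2, -4.0, 0.25)]:
    assert translation_invariant_diff(s, x, xp, y, z)
    assert homogeneous_diff(s, x, xp, y, c, b=1)
```

Propositions 4.1 and 4.2 below show that, in contrast, no strictly consistent scoring function for the triplet $(\operatorname{VaR}_{\alpha},\operatorname{VaR}_{\beta},\operatorname{RVaR}_{\alpha,\beta})$ can pass such checks on the relevant action domains.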

If one is interested in scoring functions with an action domain of the form

\[
\mathsf{A}=\bigl\{x\in\mathbb{R}^{3}\mid c_{\min}\leq x_{1}\leq x_{3}\leq x_{2}\leq c_{\max}\bigr\}
\]

possessing the additional property of translation invariant score differences, the only sensible choice is $c_{\min}=-\infty$, $c_{\max}=\infty$, amounting to the maximal action domain $\mathsf{A}_{0}$. Similarly, for scoring functions with positively homogeneous score differences, the most interesting choices of action domain are

\[
\mathsf{A}=\mathsf{A}_{0},\qquad
\mathsf{A}=\mathsf{A}_{0}^{+}=\bigl\{(x_{1},x_{2},x_{3})\in\mathbb{R}^{3}\mid 0\leq x_{1}\leq x_{3}\leq x_{2}\bigr\},\qquad
\mathsf{A}=\mathsf{A}_{0}^{-}=\bigl\{(x_{1},x_{2},x_{3})\in\mathbb{R}^{3}\mid x_{1}\leq x_{3}\leq x_{2}\leq 0\bigr\}.
\]

Proposition 4.1 (Translation invariance).

Under the conditions of Theorem 3.7 there are no strictly $\mathcal{F}$-consistent scoring functions for $(\operatorname{VaR}_{\alpha},\operatorname{VaR}_{\beta},\operatorname{RVaR}_{\alpha,\beta})$, $0<\alpha<\beta<1$, on $\mathsf{A}_{0}$ with translation invariant score differences.

Proof.

By using Theorem 3.7, any strictly $\mathcal{F}$-consistent scoring function for the functional

\[
T=(\operatorname{VaR}_{\alpha},\operatorname{VaR}_{\beta},\operatorname{RVaR}_{\alpha,\beta})
\]

must be of the form (3.3), where in particular $\phi$ is strictly convex and twice differentiable and $\phi'$ is bounded. Assume that $S$ has translation invariant score differences. This means that the function

\[
\Psi\colon\mathbb{R}\times\mathsf{A}_{0}\times\mathsf{A}_{0}\times\mathbb{R}\to\mathbb{R}
\]

defined by

\[
\Psi(z,x,x',y)=S(x_{1}+z,x_{2}+z,x_{3}+z,y+z)-S(x_{1}'+z,x_{2}'+z,x_{3}'+z,y+z)-S(x_{1},x_{2},x_{3},y)+S(x_{1}',x_{2}',x_{3}',y)
\]

vanishes. Then, for all $x\in\mathsf{A}_{0}$ and for all $z,y\in\mathbb{R}$,

\[
0=\frac{\mathrm{d}}{\mathrm{d}x_{3}}\Psi(z,x,x',y)=\bigl(\phi''(x_{3}+z)-\phi''(x_{3})\bigr)\Bigl(x_{3}+\frac{1}{\beta-\alpha}\bigl(S_{\beta}(x_{2},y)-S_{\alpha}(x_{1},y)\bigr)\Bigr).
\]

Therefore, $\phi''$ needs to be constant. Since $\phi$ is strictly convex, this means that $\phi'(x_{3})=dx_{3}+d'$ with $d>0$. But since $\mathsf{A}'_{3}=\mathbb{R}$, $\phi'$ is unbounded, which is a contradiction. ∎
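For completeness, here is a sketch of the differentiation step behind the display in the proof. It assumes (consistently with the derivative shown, though the exact form of (3.3) is not restated here) that the $x_{3}$-dependent part of $S$ reads $\phi'(x_{3})(x_{3}+c)-\phi(x_{3})$ with $c=\frac{1}{\beta-\alpha}\bigl(S_{\beta}(x_{2},y)-S_{\alpha}(x_{1},y)\bigr)$:

```latex
% Product rule; the two phi'(x_3) terms cancel:
\begin{align*}
\frac{\mathrm{d}}{\mathrm{d}x_{3}}\bigl[\phi'(x_{3})(x_{3}+c)-\phi(x_{3})\bigr]
  &= \phi''(x_{3})(x_{3}+c)+\phi'(x_{3})-\phi'(x_{3})\\
  &= \phi''(x_{3})(x_{3}+c).
\end{align*}
% For the translated term, S_gamma(x+z, y+z) = S_gamma(x, y) - gamma z implies
% c shifts to c - z, so (x_3 + z) + (c - z) = x_3 + c, and the same factor
% (x_3 + c) multiplies phi''(x_3 + z), yielding the displayed difference.
```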

The proof of Proposition 4.1 closely follows the one of [22, Proposition 4.10]. The fact that the latter assertion entails a positive result has the following background: the strictly consistent scoring function for $(\operatorname{VaR}_{\alpha},\operatorname{ES}_{\alpha})$ given in [22, Proposition 4.10] works only on a very restricted action domain. To guarantee strict consistency on such an action domain, one would need a refinement of Theorem 3.3 in the spirit of [23, Proposition 2 of the supplement]. However, since such a positive result on a quite restricted action domain is practically irrelevant, we dispense with such a refinement and only state the relevant negative result here.

Proposition 4.2 (Positive homogeneity).

Under the conditions of Theorem 3.7 there are no strictly $\mathcal{F}$-consistent scoring functions for $(\operatorname{VaR}_{\alpha},\operatorname{VaR}_{\beta},\operatorname{RVaR}_{\alpha,\beta})$, $0<\alpha<\beta<1$, on $\mathsf{A}\in\{\mathsf{A}_{0},\mathsf{A}_{0}^{+},\mathsf{A}_{0}^{-}\}$ with positively homogeneous score differences.

Proof.

By using Theorem 3.7, any strictly $\mathcal{F}$-consistent scoring function for the functional

\[
T=(\operatorname{VaR}_{\alpha},\operatorname{VaR}_{\beta},\operatorname{RVaR}_{\alpha,\beta})
\]

must be of the form (3.3), where in particular $\phi$ is strictly convex and twice differentiable and $\phi'$ is bounded. Assume that $S$ has positively homogeneous score differences of some degree $b\in\mathbb{R}$. This means that the function $\Psi\colon(0,\infty)\times\mathsf{A}\times\mathsf{A}\times\mathbb{R}\to\mathbb{R}$ defined by

\[
\Psi(c,x,x',y)=S(cx,cy)-S(cx',cy)-c^{b}S(x,y)+c^{b}S(x',y)
\]

vanishes. Therefore, for all $x\in\mathsf{A}$, for all $y\in\mathbb{R}$ and all $c>0$,

\[
0=\frac{\mathrm{d}}{\mathrm{d}x_{3}}\Psi(c,x,x',y)=\bigl(c^{2}\phi''(cx_{3})-c^{b}\phi''(x_{3})\bigr)\Bigl(x_{3}+\frac{1}{\beta-\alpha}\bigl(S_{\beta}(x_{2},y)-S_{\alpha}(x_{1},y)\bigr)\Bigr).
\tag{4.1}
\]

For the sake of brevity, we only consider the case $\mathsf{A}=\mathsf{A}_{0}^{-}$, the other cases being similar. Equation (4.1) implies that $\phi''(-x_{3})=\phi''(-1)x_{3}^{b-2}$ for any $x_{3}>0$. Due to the strict convexity of $\phi$, we need $\phi''(-1)>0$. However, for $b\geq 1$ we have $\inf_{x_{3}>0}\phi'(-x_{3})=-\infty$, and for $b\leq 1$ we have $\sup_{x_{3}>0}\phi'(-x_{3})=\infty$. Hence, $\phi'$ cannot be bounded. ∎
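To make the final step explicit (our own computation, spelling out the proof's last claim): the relation $\phi''(-x)=\phi''(-1)x^{b-2}$ means $\frac{\mathrm{d}}{\mathrm{d}x}\phi'(-x)=-\phi''(-1)x^{b-2}$, so integrating from $1$ to $x$ gives

```latex
\phi'(-x) \;=\;
\begin{cases}
  \phi'(-1) - \phi''(-1)\,\dfrac{x^{b-1}-1}{b-1}, & b \neq 1,\\[2mm]
  \phi'(-1) - \phi''(-1)\log x, & b = 1.
\end{cases}
```

Since $\phi''(-1)>0$, for $b>1$ this tends to $-\infty$ as $x\to\infty$, for $b<1$ it tends to $+\infty$ as $x\downarrow 0$, and for $b=1$ the logarithm is unbounded in both directions; in every case $\phi'$ fails to be bounded.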

Remark 4.3.

The negative result of Proposition 4.2 should be compared with the results of Nolde and Ziegel [43, Theorem C.3] characterizing homogeneous strictly consistent scoring functions for the pair $(\operatorname{VaR}_{\beta},\operatorname{ES}_{\beta})$. Since they use a different sign convention for $\operatorname{VaR}$ and $\operatorname{ES}$ than we do in this paper, their choice of the action domain $\mathbb{R}\times(0,\infty)$ corresponds to our choice $\mathsf{A}_{0}^{-}$. When interpreting $\operatorname{RVaR}_{\alpha,\beta}$ as a risk measure, negative values of $\operatorname{RVaR}$ are the more interesting and relevant ones, using our sign convention. Inspecting the proofs of Proposition 4.2 and of Proposition 3.5 (i), one makes the following observation: for $b\geq 1$, Nolde and Ziegel state an impossibility result for their choice of action domain. In fact, the problem occurring in our context is that $\phi'$ is not bounded from below. In Proposition 3.5, this property is implied by the fact that the function $G_{2,x_{3}}$ at (3.5) is increasing. Exactly such a condition is also present for strictly consistent scoring functions for the pair $(\operatorname{VaR}_{\beta},\operatorname{ES}_{\beta})$; see [21, Theorem 5.2]. On the other hand, the complication for $b<1$ stems from the fact that $\phi'$ is not bounded from above. This condition is related to the monotonicity of $G_{1,x_{3}}$ at (3.4), and it is not present for strictly consistent scoring functions for the pair $(\operatorname{VaR}_{\beta},\operatorname{ES}_{\beta})$. Correspondingly, there can be homogeneous and strictly consistent scoring functions for $b<1$ for this pair, while this is not possible for the triplet $(\operatorname{VaR}_{\alpha},\operatorname{VaR}_{\beta},\operatorname{RVaR}_{\alpha,\beta})$.

Mixture representation of scoring functions

When forecasts are compared and ranked with respect to consistent scoring functions, one has to be aware that in the presence of non-nested information sets, model mis-specification and/or finite samples, the ranking may depend on the chosen consistent scoring function. In the specific case of $(\operatorname{VaR}_{\alpha},\operatorname{VaR}_{\beta},\operatorname{RVaR}_{\alpha,\beta})$, the forecast ranking may depend on the specific choice of the functions $g_{1}$, $g_{2}$ and $\phi$ appearing in Theorem 3.3. A possible remedy to this problem is to compare forecasts simultaneously with respect to all consistent scoring functions in terms of Murphy diagrams as introduced by Ehm, Gneiting, Jordan and Krüger. Murphy diagrams are based on the fact that the class of all consistent scoring functions can be characterized as a class of mixtures of elementary scoring functions that depend on a low-dimensional parameter. The following theorem provides such a mixture representation for the scoring functions at (3.3). The applicability is illustrated in Section 6. Recall that $S_{\alpha}(x,y)=(\mathbb{1}\{y\leq x\}-\alpha)x-\mathbb{1}\{y\leq x\}y$.
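To illustrate the mixture idea numerically, the first elementary score $L^{1}_{v}$ of Theorem 5.1 below can be implemented directly. Choosing, purely for illustration, the Lebesgue measure $\mathrm{d}H_{1}(v)=\mathrm{d}v$ as mixing measure, the mixture recovers the pinball-type score $(\mathbb{1}\{y\leq x_{1}\}-\alpha)(x_{1}-y)$; the Python sketch below (with hypothetical helper names) checks this with a Riemann sum:

```python
def elementary_L1(v, x1, y, alpha):
    # L^1_v(x1, y) = (1{y <= x1} - alpha) * (1{v <= x1} - 1{v <= y})
    return (float(y <= x1) - alpha) * (float(v <= x1) - float(v <= y))

def mixture_over_v(x1, y, alpha, lo=-10.0, hi=10.0, n=200001):
    # Riemann-sum approximation of the mixture integral of L^1_v over v
    # with dH1(v) = dv; the integrand is piecewise constant in v.
    dv = (hi - lo) / (n - 1)
    return sum(elementary_L1(lo + k * dv, x1, y, alpha) for k in range(n)) * dv

x1, y, alpha = 1.3, -0.7, 0.1
approx = mixture_over_v(x1, y, alpha)
exact = (float(y <= x1) - alpha) * (x1 - y)   # pinball-type score
assert abs(approx - exact) < 1e-3
```

Plotting $v\mapsto\bar{L}^{1}_{v}$, the average of $L^{1}_{v}$ over all forecast cases, for several competing $\operatorname{VaR}_{\alpha}$ forecasts produces one panel of a Murphy diagram: one forecast dominates another if its curve lies below the other's for every $v$.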

Theorem 5.1.

Let $0<\alpha<\beta<1$. Any scoring function

\[
S\colon[c_{\min},c_{\max}]^{3}\times\mathbb{R}\to\mathbb{R}
\]

of the form (3.3) with $a\colon\mathbb{R}\to\mathbb{R}$ chosen such that $S(y,y,y,y)=0$ can be written as

\[
S(x_{1},x_{2},x_{3},y)=\int L^{1}_{v}(x_{1},y)\,\mathrm{d}H_{1}(v)+\int L^{2}_{v}(x_{2},y)\,\mathrm{d}H_{2}(v)+\int L^{3}_{v}(x_{1},x_{2},x_{3},y)\,\mathrm{d}H_{3}(v),
\]

where

\[
L^{1}_{v}(x_{1},y)=(\mathbb{1}\{y\leq x_{1}\}-\alpha)\bigl(\mathbb{1}\{v\leq x_{1}\}-\mathbb{1}\{v\leq y\}\bigr),
\]