CC BY 4.0 license. Open Access. Published online by De Gruyter, September 25, 2021.

On the elicitability of range value at risk

Tobias Fissler ORCID logo and Johanna F. Ziegel ORCID logo

Abstract

The debate of which quantitative risk measure to choose in practice has mainly focused on the dichotomy between value at risk (VaR) and expected shortfall (ES). Range value at risk (RVaR) is a natural interpolation between VaR and ES, constituting a tradeoff between the sensitivity of ES and the robustness of VaR, turning it into a practically relevant risk measure on its own. Hence, there is a need to statistically assess, compare and rank the predictive performance of different RVaR models, tasks subsumed under the term “comparative backtesting” in finance. This is best done in terms of strictly consistent loss or scoring functions, i.e., functions which are minimized in expectation by the correct risk measure forecast. Much like ES, RVaR does not admit strictly consistent scoring functions, i.e., it is not elicitable. Mitigating this negative result, we show that a triplet of RVaR with two VaR-components is elicitable. We characterize all strictly consistent scoring functions for this triplet. Additional properties of these scoring functions are examined, including the diagnostic tool of Murphy diagrams. The results are illustrated with a simulation study, and we put our approach in perspective with respect to the classical approach of trimmed least squares regression.

1 Introduction

In the field of quantitative risk management, the last one or two decades have seen a lively debate about which monetary risk measure [3] would be best in (regulatory) practice. The debate mainly focused on the dichotomy between value at risk (VaRβ) on the one hand and expected shortfall (ESβ) on the other hand, at some probability level β ∈ (0,1) (see Section 2 for definitions). Mirroring the historical joust between median and mean as centrality measures in classical statistics, VaRβ, basically a quantile, is esteemed for its robustness, while ESβ, a tail expectation, is deemed attractive due to its sensitivity and the fact that it satisfies the axioms of a coherent risk measure [3]. We refer the reader to [15, 17] for comprehensive academic discussions, and to [58] for a regulatory perspective in banking.

Cont, Deguest and Scandolo [8] considered the issue of statistical robustness of risk measure estimates in the sense of [30]. They showed that a risk measure cannot be both robust and coherent. As a compromise, they propose the risk measure "range value at risk", RVaRα,β at probability levels 0 < α < β < 1. It is defined as the average of all VaRγ with γ between α and β (see Section 2 for definitions). As limiting cases, one obtains RVaRβ,β = VaRβ and RVaR0,β = ESβ, which presents RVaRα,β as a natural interpolation of VaRβ and ESβ. Quantifying its robustness in terms of the breakdown point and following the arguments provided in [33, p. 59], RVaRα,β has a breakdown point of min{α, 1-β}, placing it between the very robust VaRβ (with a breakdown point of min{β, 1-β}) and the entirely non-robust ESβ (breakdown point 0). This means it is a robust – and hence not coherent – risk measure, unless it degenerates to RVaR0,β = ESβ (or if 0 ≤ α < β = 1). Moreover, RVaR belongs to the wide class of distortion risk measures [55, 52]. For further contributions to robustness in the context of risk measures, we refer the reader to [37, 38, 36, 16, 56]. Since the influential article [8], RVaR has gained increasing attention in the risk management literature – see [13, 14] for extensive studies – as well as in econometrics [5], where RVaR sometimes goes by the alternative name interquantile expectation. For the symmetric case β = 1-α > 1/2, RVaRα,1-α is known under the term α-trimmed mean in classical statistics, and it constitutes an alternative to, and an interpolation of, the mean and the median as centrality measures; see [40] for a recent study and a multivariate extension of the trimmed mean. It is closely connected to the α-Winsorized mean; see (2.3).

How can one evaluate the predictive performance of point forecasts Xt for a statistical functional T, such as the mean, median or a risk measure, of the (conditional) distribution of a quantity of interest Yt? It is commonly measured in terms of the average realized score (1/n) ∑_{t=1}^n S(Xt,Yt) for some loss or scoring function S, using the orientation "the smaller the better". Consequently, the loss function S should be strictly consistent for T in the sense that T(F) = argmin_x ∫ S(x,y) dF(y): correct predictions are honored and encouraged in the long run. E.g., the squared loss S(x,y) = (x-y)² is consistent for the mean, and the absolute loss S(x,y) = |x-y| is consistent for the median. If a functional admits a strictly consistent score, it is called elicitable [44, 39, 27]. By definition, elicitable functionals allow for M-estimation and have natural estimation paradigms in regression frameworks [11, Section 2] such as quantile regression [35, 34] or expectile regression [42]. Elicitability is crucial for meaningful forecast evaluation [18, 41, 27]. In the context of probabilistic forecasts with distributional forecasts Ft or density forecasts ft, (strictly) consistent scoring functions are often referred to as (strictly) proper scoring rules, such as the log-score S(f,y) = -log f(y) (see [29]). In quantitative finance, and particularly in the debate about which risk measure is best in practice, elicitability has gained considerable attention [17, 57, 9]. Especially, the role of elicitability for backtesting purposes has been highly debated [27, 1, 2]. It has been clarified that elicitability is central for comparative backtesting [24, 43]. On the other hand, if one strives to validate forecasts, (strict) identification functions are crucial. Much like scoring functions, they are functions of the forecast and the observation which, however, vanish in expectation at (and only at) the correct report. Thus, they can be used to check (conditional) calibration [26, 43].
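To make strict consistency concrete, here is a minimal numerical sketch (ours, not from the paper): minimizing the average realized score over a grid of candidate reports recovers the mean under the squared loss and the median under the absolute loss. All variable names are illustrative.

```python
import numpy as np

# Illustration of strict consistency: minimizing the average realized score
# over candidate reports x recovers the target functional.
rng = np.random.default_rng(0)
y = rng.exponential(size=50_000)  # Exp(1): mean 1, median log(2) ~ 0.693

grid = np.linspace(0.0, 3.0, 601)

# squared loss S(x, y) = (x - y)^2: strictly consistent for the mean
x_mean = grid[np.argmin([np.mean((x - y) ** 2) for x in grid])]

# absolute loss S(x, y) = |x - y|: consistent for the median
x_median = grid[np.argmin([np.mean(np.abs(x - y)) for x in grid])]
```

Because the exponential distribution is skewed, the two minimizers separate clearly, illustrating that different scoring functions elicit different functionals.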

Not all functionals are elicitable or identifiable. Osband [44] showed that an elicitable or identifiable functional necessarily has convex level sets (CxLS): If T(F0) = T(F1) = t for two distributions F0, F1, then T(Fλ) = t where Fλ = (1-λ)F0 + λF1, λ ∈ (0,1). Variance and ES generally do not have CxLS [53, 27], therefore failing to be elicitable and identifiable. The revelation principle [44, 27, 19] asserts that any bijection of an elicitable/identifiable functional is elicitable/identifiable. This implies that the pair (mean, variance) – being a bijection of the first two moments – is elicitable and identifiable despite the variance failing to be so. Similarly, Fissler and Ziegel [21] showed that the pair (VaRβ,ESβ) is elicitable and identifiable, with the structural difference that the revelation principle is not applicable in this instance. This is followed by the more general finding that the minimal expected score and its minimizer are always jointly elicitable [6, 25].

Recently, Wang and Wei [51, Theorem 5.3] showed that RVaRα,β, 0<α<β<1, similarly to ESα, fails to have CxLS as a standalone measure, which rules out its elicitability and identifiability. In contrast, they observe that the identity

(1.1)  RVaRα,β = (β ESβ - α ESα)/(β - α),  0 < α < β < 1,

which holds if ESα and ESβ are finite, together with the CxLS property of the pairs (VaRα,ESα) and (VaRβ,ESβ), implies the CxLS property of the triplet (VaRα,VaRβ,RVaRα,β) (see [51, Example 4.6]). This raises the question whether this triplet is elicitable and identifiable or not. By invoking the elicitability and identifiability of (VaRα,ESα), identity (1.1) and the revelation principle establish the elicitability and identifiability of the quadruples (VaRα,VaRβ,ESα,RVaRα,β) and (VaRα,VaRβ,ESβ,RVaRα,β). This approach has already been used in the context of regression in [5].

Improving on this result, we show that the triplet (VaRα,VaRβ,RVaRα,β) is elicitable (Theorem 3.3) and identifiable (Proposition 3.1) under weak regularity conditions. Practically, our results open the way to model validation, to meaningful forecast performance comparison and, in particular, to comparative backtests of this triplet, as well as to a regression framework. Theoretically, they show that the elicitation complexity [39, 25] or elicitation order [21] of RVaRα,β is at most 3. Moreover, requiring only VaR-forecasts besides the RVaR-forecast is particularly advantageous in comparison to additionally requiring ES-forecasts, since the triplet (VaRα(F),VaRβ(F),RVaRα,β(F)), 0 < α < β < 1, exists and is finite for any distribution F, whereas ESα(F) and ESβ(F) are only finite if the (left) tail of the gains-and-losses distribution F is integrable. As RVaRα,β is often used for robustness purposes, safeguarding against outliers and heavy-tailedness, this advantage is important.

We would like to point out the structural difference between the elicitability result of

(VaRα,VaRβ,RVaRα,β)

provided in this paper and the one concerning (VaRα,ESα) in [21] as well as the more general results of [25, 6]. While ESα corresponds to the negative of a minimum of an expected score which is strictly consistent for VaRα, it turns out that RVaRα,β can be represented as the negative of a scaled difference of minima of expected strictly consistent scoring functions for VaRα and VaRβ; see equations (3.1) and (3.2). As a consequence, the class of strictly consistent scoring functions for the triplet (VaRα,VaRβ,RVaRα,β) turns out to be less flexible than the one for (VaRα,ESα); see Remark 3.9 for details. In particular, there is essentially no translation invariant or positively homogeneous scoring function which is strictly consistent for (VaRα,VaRβ,RVaRα,β); see Section 4.

The paper is organized as follows. In Section 2, we introduce the relevant notation and definitions concerning RVaR, scoring functions and elicitability. The main results are presented in Section 3, establishing the elicitability of the triplet (VaRα,VaRβ,RVaRα,β) (Theorem 3.3) and characterizing the class of strictly consistent scoring functions (Theorem 3.7), exploiting the identifiability result of Proposition 3.1. Section 4 shows that there are basically no strictly consistent scoring functions for (VaRα,VaRβ,RVaRα,β) which are positively homogeneous or translation invariant. In Section 5, we establish a mixture representation of the strictly consistent scoring functions in the spirit of [12]. This result allows one to compare forecasts simultaneously with respect to all consistent scoring functions in terms of Murphy diagrams. We demonstrate the applicability of our results and compare the discrimination ability of different scoring functions in a simulation study presented in Section 6. The paper finishes in Section 7 with a discussion of our results in the context of M-estimation, comparing them to other suggestions in the statistical literature, namely variants of a trimmed least squares procedure [35, 49, 47].

2 Notation and definitions

2.1 Definition of range value at risk

There are different sign conventions in the literature on risk measures. In this paper, we use the following convention: if a random variable Y models the gains and losses, then positive values of Y represent gains and negative values of Y losses. Moreover, if ρ is a risk measure, we assume that ρ(Y) corresponds to the maximal amount of money one can withdraw such that the position Y - ρ(Y) is still acceptable. Hence, negative values of ρ correspond to risky positions. In the sequel, let ℱ0 be the class of probability distribution functions on ℝ. Recall that the α-quantile, α ∈ [0,1], of F ∈ ℱ0 is defined as the set qα(F) = {x ∈ ℝ : F(x-) ≤ α ≤ F(x)}, where F(x-) := lim_{t↑x} F(t).

Definition 2.1.

Value at risk of F ∈ ℱ0 at level α ∈ [0,1] is defined by VaRα(F) = inf qα(F).

For any α ∈ [0,1] we introduce the following subclasses of ℱ0:

ℱα := {F ∈ ℱ0 : qα(F) = {VaRα(F)}},  ℱ(α) := {F ∈ ℱ0 : F(VaRα(F)) = α}.

Distributions F ∈ ℱ(α) have at least one solution to the equation F(x) = α; distributions F ∈ ℱα have at most one solution to the equation F(x) = α.

Definition 2.2.

Range value at risk of F ∈ ℱ0 at levels 0 ≤ α ≤ β ≤ 1 is defined by

RVaRα,β(F) = (1/(β-α)) ∫_α^β VaRγ(F) dγ  if α < β,  and  RVaRα,β(F) = VaRβ(F)  if α = β.

Note that lim_{α↑β} RVaRα,β(F) = VaRβ(F) = RVaRβ,β(F). The definition of RVaR and the fact that γ ↦ VaRγ(F) is increasing imply that

(2.1)  VaRα(F) ≤ RVaRα,β(F) ≤ VaRβ(F).

For 0 < α ≤ β < 1 and F ∈ ℱ0 one obtains that (i) RVaRα,β(F) ∈ ℝ; (ii) RVaR0,β(F) ∈ ℝ ∪ {-∞}, and it is finite if and only if ∫_{-∞}^0 |y| dF(y) < ∞; and (iii) RVaRα,1(F) ∈ ℝ ∪ {∞}, and it is finite if and only if ∫_0^∞ |y| dF(y) < ∞. Moreover, RVaR0,1(F) exists if and only if

∫_{-∞}^0 |y| dF(y) < ∞  or  ∫_0^∞ |y| dF(y) < ∞

and then coincides with ∫ y dF(y) ∈ ℝ ∪ {±∞}. For α < β and provided that RVaRα,β(F) exists, it holds that

(2.2)  RVaRα,β(F) = (1/(β-α)) ( ∫_{(VaRα(F),VaRβ(F)]} y dF(y) + VaRα(F)(F(VaRα(F)) - α) - VaRβ(F)(F(VaRβ(F)) - β) ),

using the usual conventions F(-∞) = 0, F(∞) = 1 and 0·∞ = 0·(-∞) = 0. If F ∈ ℱ(α) ∩ ℱ(β), then the correction terms in the second line of (2.2) vanish, yielding

RVaRα,β(F) = 𝔼F(Y 𝟙{VaRα(F) < Y ≤ VaRβ(F)}) / (β - α),

which justifies the alternative name interquantile expectation for RVaR.
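The interquantile-expectation formula suggests a simple empirical estimator, sketched below (ours, with illustrative names) under the assumption of a continuous sample, so that the correction terms in (2.2) vanish.

```python
import numpy as np

def empirical_rvar(y, a, b):
    """RVaR_{a,b} of the empirical distribution of a continuous sample:
    the mean of the observations between the a- and b-quantiles."""
    lo, hi = np.quantile(y, [a, b])  # VaR_a and VaR_b of the empirical law
    return y[(y > lo) & (y <= hi)].mean()

rng = np.random.default_rng(1)
y = rng.uniform(0.0, 1.0, 500_000)
# For U(0,1), VaR_g = g, so RVaR_{0.1,0.9} is the average of g over (0.1, 0.9).
r = empirical_rvar(y, 0.1, 0.9)
```

For a uniform sample the estimate is close to 0.5, the average of the quantile function over (0.1, 0.9).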

Definition 2.3.

Expected shortfall of F ∈ ℱ0 at level α ∈ (0,1) is defined by

ESα(F) = RVaR0,α(F) ∈ ℝ ∪ {-∞}.

Hence, provided that ESα(F) and ESβ(F) are finite, one obtains identity (1.1). If F has a finite left tail (∫_{-∞}^0 |y| dF(y) < ∞), then one could use the right-hand side of (1.1) as a definition of RVaRα,β(F). However, in line with our discussion in the introduction, RVaRα,β(F) always exists and is finite for 0 < α < β < 1 even if the right-hand side of (1.1) is not defined.
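Identity (1.1) can be checked numerically. The following sketch (ours, with illustrative variable names) computes ES as a lower-tail mean in the paper's sign convention and compares both sides of (1.1) on a Gaussian sample.

```python
import numpy as np

# Numerical check of identity (1.1): RVaR_{a,b} = (b ES_b - a ES_a) / (b - a),
# with ES_g = RVaR_{0,g} computed as a lower-tail mean.
rng = np.random.default_rng(2)
y = rng.standard_normal(1_000_000)
a, b = 0.05, 0.10

q_a, q_b = np.quantile(y, [a, b])
es_a = y[y <= q_a].mean()                   # ES_a = RVaR_{0,a}
es_b = y[y <= q_b].mean()                   # ES_b = RVaR_{0,b}
rvar_ab = y[(y > q_a) & (y <= q_b)].mean()  # RVaR_{a,b}, continuous case
rhs = (b * es_b - a * es_a) / (b - a)       # right-hand side of (1.1)
```

Both sides agree up to the discretization error of the empirical quantiles.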

Interestingly, [14, Theorem 2] establishes that RVaR can be written as an inf-convolution of VaR and ES at appropriate levels. This result amounts to a sup-convolution in our sign convention. Also note that our parametrization of RVaRα,β differs from theirs.

Now, for α ∈ (0, 1/2), RVaRα,1-α corresponds to the α-trimmed mean and has a close connection to the α-Winsorized mean Wα (see [33, pp. 57–59]) via

(2.3)  Wα(F) := (1 - 2α) RVaRα,1-α(F) + α VaRα(F) + α VaR1-α(F),  α ∈ (0, 1/2).
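Relation (2.3) can likewise be verified on data. The sketch below (illustrative, assuming a continuous sample) compares the right-hand side of (2.3) with the Winsorized mean computed directly by clipping the sample at the two quantiles.

```python
import numpy as np

# Numerical check of (2.3): the a-Winsorized mean equals
# (1 - 2a) * RVaR_{a,1-a} + a * VaR_a + a * VaR_{1-a}.
rng = np.random.default_rng(3)
y = rng.standard_exponential(500_000)
a = 0.1

q_lo, q_hi = np.quantile(y, [a, 1 - a])
trimmed = y[(y > q_lo) & (y <= q_hi)].mean()              # RVaR_{a,1-a}: a-trimmed mean
w_identity = (1 - 2 * a) * trimmed + a * q_lo + a * q_hi  # right-hand side of (2.3)
w_direct = np.clip(y, q_lo, q_hi).mean()                  # Winsorized mean, computed directly
```

The two computations agree up to boundary-counting effects of order 1/n.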

2.2 Elicitability and scoring functions

Using the decision-theoretic framework of [21, 27], we introduce the following notation. Let ℱ ⊆ ℱ0 be some generic subclass and let 𝖠 ⊆ ℝ^k be an action domain. Whenever we consider a functional T: ℱ → 𝖠, we tacitly assume that T(F) is well-defined for all F ∈ ℱ and is an element of 𝖠. Then T(ℱ) corresponds to the image {T(F) ∈ 𝖠 : F ∈ ℱ}. For any subset M ⊆ ℝ^k we denote by int(M) the largest open subset of M. Moreover, conv(M) denotes the convex hull of the set M.

We say that a function a: ℝ → ℝ is ℱ-integrable if it is measurable and ∫ |a(y)| dF(y) < ∞ for all F ∈ ℱ. Similarly, a function g: 𝖠 × ℝ → ℝ is called ℱ-integrable if g(x,·): ℝ → ℝ is ℱ-integrable for all x ∈ 𝖠. If g is ℱ-integrable, we define the map

g¯: 𝖠 × ℱ → ℝ,  g¯(x,F) := ∫ g(x,y) dF(y).

If g: 𝖠 × ℝ → ℝ is sufficiently smooth in its first argument, we denote the m-th partial derivative of g(·,y) by ∂mg(·,y).

Definition 2.4.

A map S: 𝖠 × ℝ → ℝ is an ℱ-consistent scoring function for T: ℱ → 𝖠 if it is ℱ-integrable and if S¯(T(F),F) ≤ S¯(x,F) for all x ∈ 𝖠 and F ∈ ℱ. It is strictly ℱ-consistent for T if it is consistent and if S¯(T(F),F) = S¯(x,F) implies that x = T(F) for all x ∈ 𝖠 and for all F ∈ ℱ. A functional T: ℱ → 𝖠 is elicitable on ℱ if it possesses a strictly ℱ-consistent scoring function.

Definition 2.5.

Two scoring functions S, S~: 𝖠 × ℝ → ℝ are equivalent if there is some a: ℝ → ℝ and some λ > 0 such that S~(x,y) = λS(x,y) + a(y) for all (x,y) ∈ 𝖠 × ℝ. They are proportional if they are equivalent with a ≡ 0.

This equivalence relation preserves (strict) consistency: if S is (strictly) ℱ-consistent for T and if a is ℱ-integrable, then S~ is also (strictly) ℱ-consistent for T. Closely related to the concept of elicitability is the notion of identifiability.

Definition 2.6.

A map V: 𝖠 × ℝ → ℝ^k is an ℱ-identification function for T: ℱ → 𝖠 if it is ℱ-integrable and if V¯(T(F),F) = 0 for all F ∈ ℱ. It is a strict ℱ-identification function for T if additionally V¯(x,F) = 0 implies that x = T(F) for all x ∈ 𝖠 and for all F ∈ ℱ. A functional T: ℱ → 𝖠 is identifiable on ℱ if it possesses a strict ℱ-identification function.

In contrast to [27], we consider point-valued functionals only. For a recent comprehensive study on elicitability of set-valued functionals, we refer to [20].

3 Elicitability and identifiability results

Wang and Wei [51, Theorem 5.3] showed that for 0 < α < β < 1, RVaRα,β (and also the pairs (VaRα,RVaRα,β) and (VaRβ,RVaRα,β)) does not have CxLS on ℱdis, the class of distributions with bounded and discrete support. Hence, by invoking that CxLS are necessary both for elicitability and for identifiability, RVaRα,β and the pairs (VaRα,RVaRα,β) and (VaRβ,RVaRα,β) are neither elicitable nor identifiable on ℱdis. Our novel contribution is that the triplet (VaRα,VaRβ,RVaRα,β), however, is elicitable and identifiable, subject to mild conditions. We use the notation Sα(x,y) = (𝟙{y≤x} - α)x - 𝟙{y≤x}y and recall that Sα is ℱ-consistent for VaRα if ∫_{-∞}^0 |y| dF(y) < ∞ for all F ∈ ℱ, and strictly ℱ-consistent if furthermore ℱ ⊆ ℱα (see [27]).
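As a quick illustration of the consistency of Sα (our own sketch, assuming a continuous distribution), minimizing the sample average of Sα over a grid of reports recovers the empirical α-quantile.

```python
import numpy as np

def s_bar(x, y, a):
    """Sample average of S_a(x, y) = (1{y <= x} - a) x - 1{y <= x} y."""
    ind = (y <= x).astype(float)
    return np.mean((ind - a) * x - ind * y)

rng = np.random.default_rng(4)
y = rng.standard_normal(200_000)
a = 0.05
grid = np.linspace(-3.0, 0.0, 601)
# The minimizer should be close to the 5% quantile of N(0,1), about -1.645.
x_hat = grid[np.argmin([s_bar(x, y, a) for x in grid])]
```

The minimizer lands next to the empirical 5% quantile, in line with the strict consistency of Sα for VaRα.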

Proposition 3.1.

For 0 < α < β < 1, the map V: ℝ³ × ℝ → ℝ³ defined by

(3.1)  V(x1,x2,x3,y) = ( 𝟙{y≤x1} - α,  𝟙{y≤x2} - β,  x3 + (1/(β-α))(Sβ(x2,y) - Sα(x1,y)) )

is an ℱ(α) ∩ ℱ(β)-identification function for (VaRα,VaRβ,RVaRα,β), which is strict on ℱα ∩ ℱ(α) ∩ ℱβ ∩ ℱ(β).

Proof.

The proof is standard, observing that

(3.2)V¯3(VaRα(F),VaRβ(F),x3,F)=x3-RVaRα,β(F),

which follows from the representation (2.2). ∎
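The identification property of V in (3.1) can be illustrated empirically: its sample mean is approximately zero at the empirical triplet and deviates once a component is misreported. The following sketch and its helper names are ours.

```python
import numpy as np

def s_lvl(x, y, lvl):
    """The VaR score S_lvl(x, y) = (1{y <= x} - lvl) x - 1{y <= x} y used inside V."""
    ind = (y <= x).astype(float)
    return (ind - lvl) * x - ind * y

def v_bar(x1, x2, x3, y, a, b):
    """Sample mean of the identification function V from (3.1)."""
    v1 = np.mean((y <= x1).astype(float) - a)
    v2 = np.mean((y <= x2).astype(float) - b)
    v3 = x3 + np.mean(s_lvl(x2, y, b) - s_lvl(x1, y, a)) / (b - a)
    return np.array([v1, v2, v3])

rng = np.random.default_rng(5)
y = rng.standard_normal(500_000)
a, b = 0.05, 0.95
t1, t2 = np.quantile(y, [a, b])
t3 = y[(y > t1) & (y <= t2)].mean()       # RVaR_{a,b} of the sample
v_true = v_bar(t1, t2, t3, y, a, b)       # approximately (0, 0, 0)
v_off = v_bar(t1, t2, t3 + 0.5, y, a, b)  # the third component flags the error
```

Misreporting the RVaR component shifts exactly the third coordinate of the mean identification function, matching (3.2).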

Remark 3.2.

The benefits of the identifiability result of Proposition 3.1 are two-fold. First, it facilitates (conditional) calibration backtests in the spirit of [43]. There, the null hypothesis is that a sequence of forecasts (X1,t,X2,t,X3,t), measurable with respect to the most recent information 𝒜t-1, is correctly specified in the sense that

(X1,t,X2,t,X3,t)=(VaRα(Yt|𝒜t-1),VaRβ(Yt|𝒜t-1),RVaRα,β(Yt|𝒜t-1)).

By exploiting the strict identification property of V in (3.1), this null hypothesis corresponds to

𝔼(V(X1,t,X2,t,X3,t,Yt)|𝒜t-1)=0.

Clearly, such a conditional backtest can be conducted using any strict identification function. By invoking [19, Proposition 3.2.1], any strict ℱα ∩ ℱ(α) ∩ ℱβ ∩ ℱ(β)-identification function for (VaRα,VaRβ,RVaRα,β) is given by

H(x1,x2,x3) V(x1,x2,x3,y),

where V is given in (3.1) and H: ℝ³ → ℝ^{3×3} is a matrix-valued function whose determinant does not vanish.

Second, Proposition 3.1 enables the characterization result of strictly consistent scoring functions presented in Theorem 3.7.

The following theorem establishes a rich class of (strictly) consistent scoring functions S: ℝ³ × ℝ → ℝ for (VaRα,VaRβ,RVaRα,β). By a priori assuming the forecasts to be bounded with values in some cube [cmin,cmax]³, -∞ ≤ cmin < cmax ≤ ∞ (here and throughout the paper, we make the tacit convention that [cmin,cmax] := (-∞,cmax] if cmin = -∞ and [cmin,cmax] := [cmin,∞) if cmax = ∞), the class becomes even broader.

Theorem 3.3.

For 0 < α < β < 1, the map S: [cmin,cmax]³ × ℝ → ℝ defined by

S(x1,x2,x3,y) = (𝟙{y≤x1} - α)g1(x1) - 𝟙{y≤x1}g1(y) + (𝟙{y≤x2} - β)g2(x2) - 𝟙{y≤x2}g2(y)
(3.3)  + ϕ′(x3)(x3 + (1/(β-α))(Sβ(x2,y) - Sα(x1,y))) - ϕ(x3) + a(y)

is an ℱ-consistent scoring function for T = (VaRα,VaRβ,RVaRα,β) if the following conditions hold:

  1. (i)

    ϕ: [cmin,cmax] → ℝ is convex with subgradient ϕ′.

  2. (ii)

    For all x3 ∈ [cmin,cmax] the functions

    (3.4)  G1,x3: [cmin,cmax] → ℝ,  x1 ↦ g1(x1) - x1ϕ′(x3)/(β-α),
    (3.5)  G2,x3: [cmin,cmax] → ℝ,  x2 ↦ g2(x2) + x2ϕ′(x3)/(β-α),

    are increasing.

  3. (iii)

    y ↦ a(y) - 𝟙{y≤x1}g1(y) - 𝟙{y≤x2}g2(y) is ℱ-integrable for all x1, x2 ∈ [cmin,cmax].

If moreover ϕ is strictly convex and the functions G1,x3 and G2,x3 in (3.4) and (3.5) are strictly increasing for all x3 ∈ [cmin,cmax], then S is strictly ℱα ∩ ℱβ-consistent for T.

Proof.

Let (x1,x2,x3) ∈ 𝖠, F ∈ ℱ and (t1,t2,t3) := T(F). Then, since G1,x3 is increasing, the map

[cmin,cmax] × ℝ ∋ (x1,y) ↦ S(x1,x2,x3,y)

is ℱ-consistent for VaRα, and it is strictly ℱα-consistent if G1,x3 is strictly increasing. Similar comments apply to the map [cmin,cmax] × ℝ ∋ (x2,y) ↦ S(t1,x2,x3,y). Hence,

0 ≤ S¯(x1,x2,x3,F) - S¯(t1,x2,x3,F) + S¯(t1,x2,x3,F) - S¯(t1,t2,x3,F)
= S¯(x1,x2,x3,F) - S¯(t1,t2,x3,F),

with a strict inequality under the conditions for strict consistency and if (x1,x2) ≠ (t1,t2). Finally,

(3.6)  S¯(t1,t2,x3,F) - S¯(t1,t2,t3,F) = ϕ′(x3)(x3 - t3) - ϕ(x3) + ϕ(t3) ≥ 0

since ϕ is convex. If ϕ is strictly convex and if x3 ≠ t3, the inequality in (3.6) is strict. ∎

Remark 3.4.

Provided condition (iii) in Theorem 3.3 holds, and if ϕ is strictly convex and G1,x3 and G2,x3 are strictly increasing, then S given in (3.3) is still strictly consistent in the RVaR-component for general ℱ ⊆ ℱ0. That is, for F ∈ ℱ,

argmin_{x∈𝖠} S¯(x,F) = qα(F) × qβ(F) × {RVaRα,β(F)}.

By making use of (2.3) and the revelation principle [44, 27, 19], Theorem 3.3 also provides a rich class of strictly consistent scoring functions for (VaRα,VaR1-α,Wα), where Wα is the α-Winsorized mean. The following proposition is useful to construct examples; see Section 6.

Proposition 3.5.

Let S be of the form (3.3) with a (strictly) convex and non-constant function ϕ, and functions g1, g2 such that the functions at (3.4) and (3.5) are (strictly) increasing and condition (iii) of Theorem 3.3 is satisfied. Then the following assertions hold:

  1. (i)

    The subgradient ϕ′ of ϕ is necessarily bounded, and the one-sided derivatives of g1 and g2 are necessarily bounded from below.

  2. (ii)

    S is proportional to a scoring function S~ of the form (3.3) with a (strictly) convex function ϕ~ such that ϕ~′ is bounded with

    β - α = -inf_{x∈[cmin,cmax]} ϕ~′(x) = sup_{x∈[cmin,cmax]} ϕ~′(x),

    and strictly increasing functions g~1, g~2 such that their one-sided derivatives are bounded from below by one and such that the functions at (3.4) and (3.5) are (strictly) increasing and condition (iii) of Theorem 3.3 is satisfied.

Proof.

(i) The proof is similar to the one of [21, Corollary 5.5]: condition (ii) implies that for any

x1, x1′, x2, x2′, x3 ∈ [cmin,cmax]

with x1 < x1′ and x2 < x2′ it holds that

-∞ < -(g2(x2′) - g2(x2))/(x2′ - x2) ≤ ϕ′(x3)/(β-α) ≤ (g1(x1′) - g1(x1))/(x1′ - x1) < ∞.

Therefore, ϕ′ is bounded, and the one-sided derivative of g1 is bounded from below by sup_{x3} ϕ′(x3)/(β-α), while the one-sided derivative of g2 is bounded from below by -inf_{x3} ϕ′(x3)/(β-α).

(ii) For any c ∈ ℝ, if we replace ϕ by ϕ^: x ↦ ϕ(x) + cx, g1 by g^1: x ↦ g1(x) + cx/(β-α), and g2 by g^2: x ↦ g2(x) - cx/(β-α) in formula (3.3) for S, then S does not change. Also, ϕ^ is (strictly) convex if and only if ϕ is (strictly) convex. Furthermore, conditions (ii) and (iii) of Theorem 3.3 hold for (ϕ,g1,g2) if and only if they hold for (ϕ^,g^1,g^2). By part (i) of the proposition, ϕ′ is bounded. Therefore, we can assume without loss of generality that

-inf_{x∈[cmin,cmax]} ϕ′(x) = sup_{x∈[cmin,cmax]} ϕ′(x) = λ > 0

since ϕ is non-constant. Then the argument follows by setting S~ = ((β-α)/λ)S. ∎

Example 3.6.

Proposition 3.5 in combination with Theorem 3.3 yields a straightforward recipe to generate (strictly) consistent scoring functions for (VaRα,VaRβ,RVaRα,β). The main degree of flexibility is the choice of ϕ. For practical purposes, it can be easier to start with the choice of ϕ′, which should be a (strictly) increasing and bounded function. A rich source of such functions is the class of (strictly increasing) cumulative distribution functions, which can easily be scaled to have an infimum of -(β-α) and a supremum of β-α. Then ϕ can be obtained by integrating ϕ′. The simplest choice for g1 and g2 is the identity, i.e., g1(x1) = x1 and g2(x2) = x2. The only remaining degree of flexibility is then to add consistent scoring functions for VaRα or for VaRβ. Table 1 contains some examples for choices of ϕ′. For illustrative purposes, let us discuss the score S1 from Table 1 more closely. Just as in the case of S3, but less obviously so, the corresponding ϕ′ is motivated by a distribution function. In this case, it is the logistic distribution x3 ↦ e^{x3}/(1+e^{x3}). Proper translation and scaling according to Proposition 3.5 leads to

ϕ′(x3) = (β-α)(2e^{x3}/(1+e^{x3}) - 1) = (β-α)(e^{x3} - 1)/(e^{x3} + 1) = (β-α) tanh(x3/2).

An antiderivative of ϕ′ is given by ϕ(x3) = (β-α)(2 log(e^{x3} + 1) - x3). Therefore, upon choosing a(y) = 2y, the explicit form of S1 reads

S1(x1,x2,x3,y) = (𝟙{y≤x1} - α)x1 + 𝟙{y>x1}y + (𝟙{y≤x2} - β)x2 + 𝟙{y>x2}y
+ 2(β-α)(x3 e^{x3}/(e^{x3}+1) - log(e^{x3}+1))
+ ((e^{x3}-1)/(e^{x3}+1)) ((𝟙{y≤x2} - β)x2 - 𝟙{y≤x2}y - (𝟙{y≤x1} - α)x1 + 𝟙{y≤x1}y).

The particular choice of a(y) = 2y can be beneficial with regard to integrability conditions: with this choice, S1 is ℱ-integrable if and only if the right tail of F is integrable, i.e., if ∫_0^∞ y dF(y) < ∞. In a risk management context with our sign convention, the right tail corresponds to the gains, which are commonly less heavy-tailed than the losses. While ϕ′ appearing in S2 can easily be integrated, with antiderivative

ϕ(x3) = (β-α)(2/π)(x3 arctan(x3) - log(x3² + 1)/2),

the antiderivative of ϕ′ for S3 has no closed form, therefore requiring numerical integration. The scoring function S4, where ϕ′ is an increasing piecewise linear function which is strictly increasing only on [c1,c2], is in the spirit of the Huber loss [32, p. 79]. It is only strictly consistent for forecasts in [c1,c2]³, but remains consistent on all of ℝ³.

Table 1

Examples of scoring functions. In all cases we choose g1(x1) = x1 and g2(x2) = x2. The parameters c1, c2 satisfy c1 < c2, and Φ is the cumulative distribution function of a standard normal law.

Scoring function    ϕ′(x3)
S1    (β-α) tanh(x3/2)
S2    (β-α)(2/π) arctan(x3)
S3    (β-α)(2Φ(x3) - 1)
S4    (β-α)(-𝟙{x3<c1} + 𝟙{x3>c2} + 𝟙{c1≤x3≤c2}·2(x3 - (c1+c2)/2)/(c2-c1))
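As a sanity check on Example 3.6, the sketch below (ours, with illustrative names) implements the explicit score S1 (with g1 = g2 = id and a(y) = 2y) and verifies on simulated data that the average score at the empirical triplet undercuts the average score at perturbed reports.

```python
import numpy as np

def s1(x1, x2, x3, y, a, b):
    """Average of the score S1 from Example 3.6 over the sample y."""
    i1 = (y <= x1).astype(float)
    i2 = (y <= x2).astype(float)
    sa = (i1 - a) * x1 - i1 * y            # S_a(x1, y)
    sb = (i2 - b) * x2 - i2 * y            # S_b(x2, y)
    phi_p = (b - a) * np.tanh(x3 / 2.0)    # phi'(x3)
    phi = (b - a) * (2.0 * np.log(np.exp(x3) + 1.0) - x3)
    return np.mean(sa + sb + phi_p * (x3 + (sb - sa) / (b - a)) - phi + 2.0 * y)

rng = np.random.default_rng(6)
y = rng.standard_normal(400_000)
a, b = 0.05, 0.95
t1, t2 = np.quantile(y, [a, b])
t3 = y[(y > t1) & (y <= t2)].mean()

score_truth = s1(t1, t2, t3, y, a, b)
score_worse = min(s1(t1 - 0.3, t2, t3, y, a, b),
                  s1(t1, t2 + 0.3, t3, y, a, b),
                  s1(t1, t2, t3 + 0.3, y, a, b))
```

The average score at the empirical triplet is smaller than at any of the three perturbed reports, in line with the strict consistency asserted by Theorem 3.3.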

Striving for a full characterization of the class of strictly consistent scoring functions for

T=(VaRα,VaRβ,RVaRα,β),

we shall next establish the counterpart of Theorem 3.3, providing necessary conditions for strict consistency. The main tool to derive such necessary conditions is Osband's principle, originating from the seminal dissertation of Osband [44]; see also [27] for an accessible intuition. We use the precise technical formulation of [21, Theorem 3.2]. It is no wonder that necessary conditions for strictly ℱ-consistent scores for T can only be obtained for action domains 𝖠 ⊆ ℝ³ such that the surjectivity condition 𝖠 = {T(F) : F ∈ ℱ} holds. By invoking inequality (2.1), any such action domain is necessarily a subset of

𝖠0 := {(x1,x2,x3) ∈ ℝ³ : x1 ≤ x3 ≤ x2},

which we therefore call the maximal sensible action domain. Issuing forecasts for T outside of 𝖠0, thus violating (2.1), would be irrational, comparable to, say, negative variance forecasts. Still, the scoring functions of the form (3.3) allow for the evaluation of forecasts violating (2.1). Besides the surjectivity assumption and further richness assumptions on the class of distributions ℱ, we need to impose smoothness conditions on the expected score in order to exploit first-order conditions stemming from the minimization problem of strict consistency; see Section A for the detailed technical formulations and [21] for a discussion of these conditions.

We introduce the class ℱcontα ⊆ ℱα of distributions in ℱα which are continuously differentiable (and therefore also in ℱ(α)). For any 𝖠 ⊆ ℝ³, we denote the projection on the r-th component by

𝖠r := {xr ∈ ℝ : there exists (z1,z2,z3) ∈ 𝖠, zr = xr},  r ∈ {1,2,3}.

For any x3 ∈ 𝖠3 and m ∈ {1,2}, let

𝖠m,x3 := {xm ∈ ℝ : there exists (z1,z2,z3) ∈ 𝖠, zm = xm, z3 = x3}.

Theorem 3.7.

Let ℱ ⊆ ℱcontα, 0 < α < β < 1, T = (VaRα,VaRβ,RVaRα,β): ℱ → 𝖠 ⊆ 𝖠0, and let V = (V1,V2,V3) be defined at (3.1). If Assumptions (V1) and (F1) hold and (V1,V2) satisfies Assumption (V4), then any strictly ℱ-consistent scoring function S: 𝖠 × ℝ → ℝ for T that satisfies Assumptions (VS1) and (S2) is necessarily of the form (3.3) almost everywhere, where the functions Gr,x3: 𝖠r,x3 → ℝ, r ∈ {1,2}, x3 ∈ 𝖠3, in (3.4) and (3.5) are strictly increasing and ϕ: 𝖠3 → ℝ is strictly convex.

Proof.

First note that V satisfies Assumption (V3) on ℱcontα. Let F ∈ ℱ with derivative f and let x ∈ int(𝖠). Then one obtains

V¯3(x,F) = x3 + (1/(β-α)) (x2(F(x2) - β) - x1(F(x1) - α) - ∫_{x1}^{x2} y f(y) dy).

The partial derivatives of V are given by

∂1V¯1(x,F) = f(x1),
∂2V¯2(x,F) = f(x2),
∂1V¯3(x,F) = -(F(x1) - α)/(β-α),
∂2V¯3(x,F) = (F(x2) - β)/(β-α),
∂3V¯3(x,F) = 1,

and ∂rV¯1(x,F) and ∂mV¯2(x,F) vanish for r ∈ {2,3} and m ∈ {1,3}. Applying [21, Theorem 3.2] yields the existence of continuously differentiable functions hlm: int(𝖠) → ℝ, l,m ∈ {1,2,3}, such that

∂mS¯(x,F) = ∑_{i=1}^{3} hmi(x) V¯i(x,F)

for m ∈ {1,2,3}. Since we assume that S¯(·,F) is twice continuously differentiable for any F ∈ ℱ, the second-order partial derivatives need to commute. Let t = T(F). Then ∂1∂2S¯(t,F) = ∂2∂1S¯(t,F) is equivalent to

h21(t) f(t1) = h12(t) f(t2).

This needs to hold for all F ∈ ℱ. The variation in the densities implied by Assumption (V4) in combination with the surjectivity of T yields that h12 ≡ h21 ≡ 0 on int(𝖠). Similarly, evaluating ∂1∂3S¯(x,F) = ∂3∂1S¯(x,F) and ∂2∂3S¯(x,F) = ∂3∂2S¯(x,F) at x = t = T(F) yields

h13(t) = h31(t) f(t1),  h23(t) = h32(t) f(t2).

By using again Assumption (V4) as well as the surjectivity of T, this implies that

h13 ≡ h31 ≡ h23 ≡ h32 ≡ 0.

So we are left with characterizing hmm for m ∈ {1,2,3}. Note that Assumption (V1) implies that for any x = (x1,x2,x3) ∈ int(𝖠) there are two distributions F1, F2 ∈ ℱ such that

(F1(x1) - α, F1(x2) - β)  and  (F2(x1) - α, F2(x2) - β)

are linearly independent. Then the requirement that

∂1∂2S¯(x,F) = ∂1h22(x)(F(x2) - β) = ∂2h11(x)(F(x1) - α) = ∂2∂1S¯(x,F)

for all x ∈ int(𝖠) and for all F ∈ ℱ implies that ∂1h22 ≡ ∂2h11 ≡ 0. Starting with ∂1∂3S¯(x,F) = ∂3∂1S¯(x,F), one obtains

∂1h33(x) V¯3(x,F) = (∂3h11(x) + h33(x)/(β-α)) V¯1(x,F).

Again, Assumption (V1) implies that there are F1, F2 ∈ ℱ such that

(V¯1(x,F1), V¯3(x,F1))  and  (V¯1(x,F2), V¯3(x,F2))

are linearly independent. Hence, we obtain that ∂1h33 ≡ 0 and ∂3h11 ≡ -h33/(β-α). With the same argumentation, starting from ∂2∂3S¯(x,F) = ∂3∂2S¯(x,F), one can show that ∂2h33 ≡ 0 and ∂3h22 ≡ h33/(β-α). This means there exist functions

η1: {(x1,x3) ∈ ℝ² : there exists (z1,z2,z3) ∈ int(𝖠), x1 = z1, x3 = z3} → ℝ,
η2: {(x2,x3) ∈ ℝ² : there exists (z1,z2,z3) ∈ int(𝖠), x2 = z2, x3 = z3} → ℝ,
η3: int(𝖠)3 → ℝ,

and some z ∈ int(𝖠)3 such that for any x = (x1,x2,x3) ∈ int(𝖠) it holds that

h33(x) = η3(x3),
h11(x) = η1(x1,x3) = -(1/(β-α)) ∫_z^{x3} η3(s) ds + ζ1(x1),
h22(x) = η2(x2,x3) = (1/(β-α)) ∫_z^{x3} η3(s) ds + ζ2(x2),

where ζr: int(𝖠)r → ℝ, r ∈ {1,2}. Due to the fact that any component of T is mixture-continuous and since ℱ is convex and T surjective, the projection int(𝖠)3 is an open interval. Hence, [min(z,x3), max(z,x3)] ⊆ int(𝖠)3. Due to Assumptions (V3) and (S2), [21, Theorem 3.2] implies that η1, η2, η3 are locally Lipschitz continuous.

The above calculations imply that the Hessian ∇²S¯(x,F) of the expected score at its minimizer x = t = T(F) is a diagonal matrix with entries η1(t1,t3)f(t1), η2(t2,t3)f(t2) and η3(t3). As a second-order condition, ∇²S¯(t,F) must be positive semi-definite. By invoking the surjectivity of T once again, this shows that η1, η2, η3 ≥ 0. More to the point, invoking the continuous differentiability of the expected score and the fact that S is strictly ℱ-consistent for T, one obtains that for any F ∈ ℱ with t = T(F) and for any v ∈ ℝ³, v ≠ 0, there exists an ε > 0 such that (d/ds) S¯(t+sv,F) is negative for all s ∈ (-ε,0), zero for s = 0, and positive for all s ∈ (0,ε). For v = e3 = (0,0,1), this means that for any F ∈ ℱ with t = T(F) there is an ε > 0 such that (d/ds) S¯(t+se3,F) = η3(t3+s)s has the same sign as s for all s ∈ (-ε,ε). Therefore, η3(t3+s) > 0 for all s ∈ (-ε,ε) ∖ {0}. Using the surjectivity of T and invoking a compactness argument, η3 attains the value 0 only finitely many times on any compact interval. Recall that int(𝖠)3 is an open interval. Hence, it can be approximated by an increasing sequence of compact intervals. Therefore, η3⁻¹({0}) is at most countable, and therefore a Lebesgue null set. With similar arguments, one can show that for any x3 ∈ int(𝖠)3 the sets

{x1 ∈ ℝ : there exists (z1,z2,z3) ∈ int(𝖠), x1 = z1, x3 = z3, η1(x1,x3) = 0},
{x2 ∈ [x3,∞) : there exists (z1,z2,z3) ∈ int(𝖠), x2 = z2, x3 = z3, η2(x2,x3) = 0}

are at most countable, and therefore also Lebesgue null sets.

Finally, using [23, Proposition 1 in the supplement] (recognizing that V is locally bounded), one obtains that S is almost everywhere of the form (3.3). Moreover, it holds almost everywhere that ϕ'' = η3 and gm′ = ζm for m ∈ {1,2}. Hence, ϕ is strictly convex and the functions at (3.4) and (3.5) are strictly increasing. ∎

Combining Theorems 3.3 and 3.7, one can show that the scoring functions given at (3.3) are essentially the only strictly consistent scoring functions for the triplet (VaRα,VaRβ,RVaRα,β) on the action domain

𝖠 = {(x1,x2,x3) ∈ ℝ³ : cmin ≤ x1 ≤ x3 ≤ x2 ≤ cmax}.

Corollary 3.8.

Let

𝖠 = {(x1,x2,x3) ∈ ℝ³ : cmin ≤ x1 ≤ x3 ≤ x2 ≤ cmax}

for some -∞ ≤ cmin < cmax ≤ ∞. Under the conditions of Theorem 3.7, a scoring function S: 𝖠 × ℝ → ℝ is strictly ℱ-consistent for T = (VaRα,VaRβ,RVaRα,β), 0 < α < β < 1, if and only if it is of the form (3.3) almost everywhere, satisfying conditions (i)–(iii). Moreover, the function ϕ: [cmin,cmax] → ℝ is necessarily bounded.

Proof.

For the proof it suffices to show that for r∈{1,2}, the function Gr,x3 defined in (3.4) and (3.5) is increasing not only on 𝖠r,x3 for any x3∈𝖠3, but on all of 𝖠r=[cmin,cmax]. For x3∈[cmin,cmax]=𝖠3, we have 𝖠1,x3=[cmin,x3] and 𝖠2,x3=[x3,cmax]. Let x3∈𝖠3 and x1,x1′∈𝖠1 with x1<x1′. If x1,x1′∈𝖠1,x3, there is nothing to show. If, however, x3<x1′, then x1,x1′∈𝖠1,x1′. This means that

0 ≤ g1(x1′)−g1(x1)−(x1′−x1)ϕ′(x1′)/(β−α)
≤ g1(x1′)−g1(x1)−(x1′−x1)ϕ′(x3)/(β−α),

where the second inequality stems from the fact that ϕ′ is increasing. If the function G1,x1′ is strictly increasing, then the first inequality is strict. The argument for G2,x3 works analogously. ∎

Remark 3.9.

Note the structural difference of Theorems 3.3 and 3.7 to [25, Theorem 1], [6, Proposition 4.14] and in particular [21, Theorem 5.2 and Corollary 5.5]. Our functional of interest, RVaRα,β with 0<α<β<1, is not a minimum of an expected scoring function – or Bayes risk – but a difference of minima of two expected scoring functions. Indeed, while ESβ(F)=−(1/β)S¯β(VaRβ(F),F), we have that

RVaRα,β(F)=−(1/(β−α))(S¯β(VaRβ(F),F)−S¯α(VaRα(F),F)).

This structural difference is reflected in the minus sign appearing at (3.4). In particular, it means that the functions g1 and g2 cannot vanish identically if we want to ensure strict consistency of S, whereas the corresponding function in [21, Theorem 5.2] may well be set to zero. [25, Theorem 2] generalizes our results and presents an elicitability result for arbitrary linear combinations of Bayes risks.
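The difference-of-Bayes-risks identity is easy to check numerically. The following sketch (variable names are ours; the reference distribution is standard normal and we use the pinball loss Sα(x,y)=(𝟙{y≤x}−α)x−𝟙{y≤x}y) compares the identity with the closed form of RVaRα,β for 𝒩(0,1):

```python
import numpy as np
from scipy.stats import norm

alpha, beta = 0.1, 0.5
q_a, q_b = norm.ppf(alpha), norm.ppf(beta)

rng = np.random.default_rng(42)
y = rng.standard_normal(200_000)

def pinball(level, x, y):
    # S_level(x, y) = (1{y <= x} - level) * x - 1{y <= x} * y
    ind = (y <= x).astype(float)
    return (ind - level) * x - ind * y

# Bayes risks: expected pinball scores evaluated at the true quantiles
bayes_a = pinball(alpha, q_a, y).mean()
bayes_b = pinball(beta, q_b, y).mean()

# RVaR via the difference-of-Bayes-risks identity ...
rvar_identity = -(bayes_b - bayes_a) / (beta - alpha)

# ... versus the closed form for N(0,1): (phi(q_a) - phi(q_b)) / (beta - alpha)
rvar_closed = (norm.pdf(q_a) - norm.pdf(q_b)) / (beta - alpha)
print(rvar_identity, rvar_closed)
```

Both numbers agree up to Monte Carlo error.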

4 Translation invariance and homogeneity

There are many choices for the functions g1, g2 and ϕ appearing in the formula for the scoring function S at (3.3). Often, these choices can be narrowed down by imposing secondary desirable criteria on S. For example, T=(VaRα,VaRβ,RVaRα,β) is translation equivariant (meaning that T(FY+z)=T(FY)+z for any constant z∈ℝ) and positively homogeneous of degree 1 (meaning that T(FcY)=cT(FY) for any c>0), so it would make sense if the forecast ranking were also invariant under a joint translation of the forecasts and the observations on the one hand, and a joint scaling of the forecasts and the observations on the other hand. This would require translation invariance of the score differences on the one hand, i.e.,

S(x1+z,x2+z,x3+z,y+z)−S(x1′+z,x2′+z,x3′+z,y+z)=S(x1,x2,x3,y)−S(x1′,x2′,x3′,y)

for all (x1,x2,x3),(x1′,x2′,x3′)∈𝖠 and y,z∈ℝ. On the other hand, it would require positively homogeneous score differences, that is, there is some b∈ℝ such that

S(cx1,cx2,cx3,cy)−S(cx1′,cx2′,cx3′,cy)=c^b(S(x1,x2,x3,y)−S(x1′,x2′,x3′,y))

for all (x1,x2,x3),(x1′,x2′,x3′)∈𝖠, y∈ℝ and for all c>0. While translation invariance seems to be particularly important when RVaR is used as a location parameter, i.e., when α=1−β<1/2, corresponding to the α-trimmed mean, positively homogeneous score differences are relevant in a risk management context: the forecast ranking should not depend on the unit in which the risk measures and the gains and losses are reported, be it, say, in Euros or in Euro Cents. We also refer to [45, 43, 22] for further motivation. This section establishes that, unfortunately, there are no strictly consistent scoring functions for (VaRα,VaRβ,RVaRα,β) which admit translation invariant or positively homogeneous score differences in practically relevant settings.

If one is interested in scoring functions with an action domain of the form

𝖠={x∈ℝ3 : cmin≤x1≤x3≤x2≤cmax}

possessing the additional property of translation invariant score differences, the only sensible choice is cmin=−∞, cmax=∞, amounting to the maximal action domain 𝖠0. Similarly, for scoring functions with positively homogeneous score differences, the most interesting choices for action domains are

𝖠=𝖠0,
𝖠=𝖠0+={(x1,x2,x3)∈ℝ3 : 0≤x1≤x3≤x2},
𝖠=𝖠0−={(x1,x2,x3)∈ℝ3 : x1≤x3≤x2≤0}.

Proposition 4.1 (Translation invariance).

Under the conditions of Theorem 3.7, there are no strictly ℱ-consistent scoring functions for (VaRα,VaRβ,RVaRα,β), 0<α<β<1, on 𝖠0 with translation invariant score differences.

Proof.

By Theorem 3.7, any strictly ℱ-consistent scoring function for the functional

T=(VaRα,VaRβ,RVaRα,β)

must be of the form (3.3), where in particular ϕ is strictly convex and twice differentiable and ϕ′ is bounded. Assume that S has translation invariant score differences. That means that the function

Ψ:ℝ×𝖠0×𝖠0×ℝ→ℝ

defined by

Ψ(z,x,x′,y)=S(x1+z,x2+z,x3+z,y+z)−S(x1′+z,x2′+z,x3′+z,y+z)
−S(x1,x2,x3,y)+S(x1′,x2′,x3′,y)

vanishes. Then, for all x,x′∈𝖠0 and for all z,y∈ℝ,

0=(∂/∂x3)Ψ(z,x,x′,y)=(ϕ′′(x3+z)−ϕ′′(x3))(x3+(1/(β−α))(Sβ(x2,y)−Sα(x1,y))).

Therefore, ϕ′′ needs to be constant. Since ϕ is strictly convex, this means that ϕ′(x3)=dx3+d′ for some constants d>0 and d′∈ℝ. But since 𝖠3=ℝ, ϕ′ is then unbounded, which is a contradiction. ∎

The proof of Proposition 4.1 closely follows the one of [22, Proposition 4.10]. The fact that the latter assertion entails a positive result has the following background: The strictly consistent scoring function for (VaRα,ESα) given in [22, Proposition 4.10] works only on a very restricted action domain. To guarantee strict consistency on such an action domain, one would need a refinement of Theorem 3.3 in the spirit of [23, Proposition 2 of the supplement]. However, since such a positive result on a quite restricted action domain is practically irrelevant, we dispense with such a refinement and only state the relevant negative result here.

Proposition 4.2 (Positive homogeneity).

Under the conditions of Theorem 3.7, there are no strictly ℱ-consistent scoring functions for (VaRα,VaRβ,RVaRα,β), 0<α<β<1, on 𝖠∈{𝖠0,𝖠0+,𝖠0−} with positively homogeneous score differences.

Proof.

By Theorem 3.7, any strictly ℱ-consistent scoring function for the functional

T=(VaRα,VaRβ,RVaRα,β)

must be of the form (3.3), where in particular ϕ is strictly convex and twice differentiable and ϕ′ is bounded. Assume that S has positively homogeneous score differences of some degree b∈ℝ. That means that the function Ψ:(0,∞)×𝖠×𝖠×ℝ→ℝ defined by

Ψ(c,x,x′,y)=S(cx,cy)−S(cx′,cy)−c^b S(x,y)+c^b S(x′,y)

vanishes. Therefore, for all x,x′∈𝖠, for all y∈ℝ and all c>0,

(4.1)  0=(∂/∂x3)Ψ(c,x,x′,y)=(c²ϕ′′(cx3)−c^b ϕ′′(x3))(x3+(1/(β−α))(Sβ(x2,y)−Sα(x1,y))).

For the sake of brevity, we only consider the case 𝖠=𝖠0−, the other cases being similar. Equation (4.1) implies that ϕ′′(−x3)=ϕ′′(−1)x3^(b−2) for any x3>0. Due to the strict convexity of ϕ, we need ϕ′′(−1)>0. However, for b≥1 we have inf_{x3>0} ϕ′(−x3)=−∞, and for b≤1 we have sup_{x3>0} ϕ′(−x3)=∞. Hence, ϕ′ cannot be bounded. ∎

Remark 4.3.

The negative result of Proposition 4.2 should be compared with the results of Nolde and Ziegel [43, Theorem C.3] characterizing homogeneous strictly consistent scoring functions for the pair (VaRβ,ESβ). Since they use a different sign convention for VaR and ES than we do in this paper, their choice of the action domain ℝ×(0,∞) corresponds to our choice 𝖠0−. When interpreting RVaRα,β as a risk measure, negative values of RVaR are the more interesting and relevant ones under our sign convention. Inspecting the proofs of Proposition 4.2 and of Proposition 3.5 (i), one makes the following observation: for b≥1, Nolde and Ziegel [43] state an impossibility result for their choice of action domain. In fact, the problem occurring in our context is that ϕ′ is not bounded from below. In Proposition 3.5, this property is implied by the fact that the function G2,x3 at (3.5) is increasing. And it is exactly such a condition that is also present for strictly consistent scoring functions for the pair (VaRβ,ESβ); see [21, Theorem 5.2]. On the other hand, the complication for b<1 stems from the fact that ϕ′ is not bounded from above. This condition is related to the monotonicity of G1,x3 at (3.4). Such a condition is not present for strictly consistent scoring functions for the pair (VaRβ,ESβ). Correspondingly, there can be homogeneous and strictly consistent scoring functions with b<1 for this pair [43], while this is not possible for the triplet (VaRα,VaRβ,RVaRα,β).

5 Mixture representation of scoring functions

When forecasts are compared and ranked with respect to consistent scoring functions, one has to be aware that in the presence of non-nested information sets, model mis-specification and/or finite samples, the ranking may depend on the chosen consistent scoring function [46]. In the specific case of (VaRα,VaRβ,RVaRα,β), the forecast ranking may depend on the specific choice of the functions g1, g2 and ϕ appearing in Theorem 3.3. A possible remedy to this problem is to compare forecasts simultaneously with respect to all consistent scoring functions in terms of Murphy diagrams as introduced by Ehm, Gneiting, Jordan and Krüger [12]. Murphy diagrams are based on the fact that the class of all consistent scoring functions can be characterized as a class of mixtures of elementary scoring functions that depend on a low-dimensional parameter. The following theorem provides such a mixture representation for the scoring functions at (3.3). The applicability is illustrated in Section 6. Recall that Sα(x,y)=(𝟙{y≤x}−α)x−𝟙{y≤x}y.

Theorem 5.1.

Let 0<α<β<1. Any scoring function

S:[cmin,cmax]³×ℝ→ℝ

of the form (3.3) with a:ℝ→ℝ chosen such that S(y,y,y,y)=0 can be written as

(5.1)  S(x1,x2,x3,y)=∫Lv1(x1,y)dH1(v)+∫Lv2(x2,y)dH2(v)+∫Lv3(x1,x2,x3,y)dH3(v),

where

Lv1(x1,y)=(𝟙{y≤x1}−α)(𝟙{v≤x1}−𝟙{v≤y}),
Lv2(x2,y)=(𝟙{y≤x2}−β)(𝟙{v≤x2}−𝟙{v≤y}),
Lv3(x1,x2,x3,y)=(1/(β−α))(𝟙{v>x3}(Sα(x1,y)+αy)+𝟙{v≤x3}(Sβ(x2,y)+βy))+(𝟙{v≤x3}−𝟙{v≤y})(v−y),

and H1, H2 are locally finite measures on [cmin,cmax] and H3 is a finite measure on [cmin,cmax]. If H3 puts positive mass on all open intervals, then S is strictly consistent. Conversely, for any choice of measures H1,H2,H3 with the above restrictions, we obtain a scoring function of the form (3.3).
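The elementary scores are straightforward to implement. The sketch below (variable names are ours; we write the final factor of Lv3 as (v−y), which makes the score vanish for x1=x2=x3=y) checks on simulated standard normal data that Lv3 does not prefer a forecast with a distorted RVaR component over the true triplet:

```python
import numpy as np
from scipy.stats import norm

alpha, beta = 0.1, 0.9
rng = np.random.default_rng(1)
y = rng.standard_normal(200_000)

def S_pin(level, x, y):
    # pinball score S_level(x, y) = (1{y <= x} - level) x - 1{y <= x} y
    ind = (y <= x).astype(float)
    return (ind - level) * x - ind * y

def L3(v, x1, x2, x3, y):
    # elementary score L_v^3 from Theorem 5.1, final factor (v - y)
    i3 = float(v <= x3)
    iy = (v <= y).astype(float)
    return ((1 - i3) * (S_pin(alpha, x1, y) + alpha * y)
            + i3 * (S_pin(beta, x2, y) + beta * y)) / (beta - alpha) \
           + (i3 - iy) * (v - y)

q_a, q_b = norm.ppf(alpha), norm.ppf(beta)
rvar = (norm.pdf(q_a) - norm.pdf(q_b)) / (beta - alpha)   # = 0 here by symmetry

v = 0.2
score_true = L3(v, q_a, q_b, rvar, y).mean()
score_off = L3(v, q_a, q_b, rvar + 0.5, y).mean()         # distorted RVaR component
print(score_true, score_off)
```

The average score of the distorted forecast exceeds that of the true triplet, as consistency requires.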

Proof.

An increasing function h:[cmin,cmax]→ℝ can always be written as

(5.2)  h(x)=∫(𝟙{v≤x}−𝟙{v≤z})dH(v)+C,  x∈[cmin,cmax],

for some locally finite measure H, some z∈[cmin,cmax] and some C∈ℝ. The function h is strictly increasing if and only if H is strictly positive, i.e., it puts positive mass on all open non-empty intervals. Furthermore, the one-sided derivatives of h are bounded below by λ>0 if and only if H(A)≥λℒ(A) for all Borel sets A⊆[cmin,cmax], where ℒ is the Lebesgue measure on ℝ.
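As a quick numerical sanity check of the representation (5.2), the following sketch (entirely our own choices: h(x)=x³ on [−1,1], H with Lebesgue density h′, z=0 and C=0) reconstructs h from the mixture of indicators:

```python
import numpy as np

# Check (5.2) for h(x) = x^3 on [-1, 1]: take H(dv) = h'(v) dv = 3 v^2 dv,
# z = 0 and C = h(0) = 0; the mixture of indicators then reproduces h.
v = np.linspace(-1.0, 1.0, 200_001)      # integration grid for the measure H
dH = 3.0 * v**2 * (v[1] - v[0])          # H(dv) = 3 v^2 dv (Riemann weights)

def h_mix(x, z=0.0, C=0.0):
    # h(x) = integral of (1{v <= x} - 1{v <= z}) dH(v) + C
    return np.sum(((v <= x).astype(float) - (v <= z).astype(float)) * dH) + C

xs = np.linspace(-1.0, 1.0, 21)
err = max(abs(h_mix(x) - x**3) for x in xs)
print(err)
```

The reconstruction error is of the order of the grid spacing.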

Using the arguments from Proposition 3.5, it is no loss of generality to show the assertion for a score S such that λ(β−α)=−inf_x ϕ′(x)=sup_x ϕ′(x) and such that the one-sided derivatives of g1, g2 are bounded from below by λ>0.

Then there is a measure H3 on [cmin,cmax] with H3([cmin,cmax])=2λ(β−α), which is strictly positive if and only if ϕ is strictly convex, such that for all x3∈[cmin,cmax] we have

ϕ′(x3)=∫𝟙{v≤x3}dH3(v)−λ(β−α)=∫(𝟙{v≤x3}−1/2)dH3(v).

Using Fubini’s theorem, we find that

ϕ(x3)−ϕ(y)=∫(𝟙{w≤x3}−𝟙{w≤y})ϕ′(w)dw
=∫∫(𝟙{w≤x3}−𝟙{w≤y})(𝟙{v≤w}−1/2)dH3(v)dw
=∫∫(𝟙{w≤x3}−𝟙{w≤y})𝟙{v≤w}dw dH3(v)−(1/2)∫(x3−y)dH3(v)
=∫(𝟙{v≤x3}(x3−v)−𝟙{v≤y}(y−v)−(1/2)(x3−y))dH3(v).

By using (3.3), (5.2) and Proposition 3.5, it is straightforward to check that a scoring function of the form (3.3) can be written as in (5.1) with Lv3 replaced by

L̃v3(x1,x2,x3,y)=(𝟙{v≤x3}−1/2)(x3+(1/(β−α))(Sβ(x2,y)−Sα(x1,y)))−(1/2)|x3−v|+(1/2)|y−v|,

and with locally finite measures H̃1, H̃2 on [cmin,cmax] instead of H1, H2 such that H̃i(A)≥λℒ(A) for i=1,2 and for all Borel sets A, and with the measure H3 from above. We can write H̃i=Hi+λℒ, i=1,2, for some locally finite measures Hi, i=1,2. Integrating v↦Lv1 with respect to λℒ, we obtain the function λ(Sα(x1,y)+αy), and analogously for Lv2. Using that H3([cmin,cmax])=2λ(β−α) yields the claim with

Lv3(x1,x2,x3,y)=(1/(2(β−α)))(Sβ(x2,y)+βy+Sα(x1,y)+αy)
+(𝟙{v≤x3}−1/2)(x3+(1/(β−α))(Sβ(x2,y)−Sα(x1,y)))−(1/2)|x3−v|+(1/2)|y−v|,

which is equal to the formula given in the statement of the theorem. The scoring functions Lv1 and Lv2 are consistent for VaR at level α and β, respectively. The scoring function Lv3 is of the form (3.3) with

g1(x)=g2(x)=x/(2β−2α)  and  ϕ(x)=|x−v|/2,

which renders it a consistent scoring function for (VaRα,VaRβ,RVaRα,β). The converse statement follows by direct computations. ∎

6 Simulations

This simulation study illustrates the usage of consistent scoring functions for the triplet

(VaRα,VaRβ,RVaRα,β)

when comparing the predictive performances of different forecasts for this triplet, e.g., in the context of comparative backtests [43]. We use the scoring functions given in Table 1 and discussed in Example 3.6. The only modification is that in the cases of S1, S2, S3 we additionally scale the functions ϕ (and therefore also ϕ′), working with ϕ̃(x3)=ϕ((β−α)x3). This choice has performed better in our simulations. We illustrate the discrimination ability of the suggested scoring functions with a slightly extended version of a simulation example of [28], which has also been considered in [24]. It features a cross-sectional setup. Similar simulation studies can also be performed in an autoregressive time series framework.


Figure 1

Murphy diagrams for α=1-β=0.1. Plots of expected elementary scores Lv1, Lv2, Lv3 in terms of v for the three forecasters described in the text. For the second forecaster, the curves correspond to σ=0.3,0.5,0.8 from bottom to top.


Figure 2

Murphy diagrams for α=0.01, β=0.05. Plots of expected elementary scores Lv1, Lv2, Lv3 in terms of v for the three forecasters described in the text. For the second forecaster, the curves correspond to σ=0.3,0.5,0.8 from bottom to top.

Let us first introduce the data generating process. To this end, let (Wt,Zt,ut), t∈ℕ, be an i.i.d. sequence with a centered Gaussian distribution and diagonal covariance matrix with diagonal entries (1,σ2,1). The variables Wt and Zt will play the role of explanatory variables, and ut is an unobservable error term. Let Yt=Wt+ut be our sequence of observations. Therefore, Yt∼𝒩(0,2) and Yt|Wt∼𝒩(Wt,1), while Zt is completely uninformative since it is independent of Yt. Suppose we have three different forecasters who provide point forecasts, aiming at correctly specifying T=(VaRα,VaRβ,RVaRα,β) of the (conditional) distribution of Yt. The first forecaster has access to the explanatory variables Wt, Zt and issues correctly specified conditional risk measure forecasts

T̂t(1)=T(FYt|Wt,Zt)=T(FYt|Wt)
=(Wt+Φ⁻¹(α), Wt+Φ⁻¹(β), Wt−(1/(β−α))(φ(Φ⁻¹(β))−φ(Φ⁻¹(α))))

for the time point t, where φ and Φ denote the density and distribution function of the standard normal distribution, respectively. The second forecaster also has access to Wt and Zt. However, they use a wrong model, issuing (correct) forecasts for Y~t=Wt+Zt+ut rather than for Yt. That means

T̂t(2)=T(FY~t|Wt,Zt)=(T̂1,t(1)+Zt, T̂2,t(1)+Zt, T̂3,t(1)+Zt).

The third forecaster is uninformed and makes correct unconditional predictions:

T̂t(3)=T(FYt)=(√2·Φ⁻¹(α), √2·Φ⁻¹(β), −(√2/(β−α))(φ(Φ⁻¹(β))−φ(Φ⁻¹(α)))).
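The three forecasters are easy to simulate. The following sketch (variable names and the particular evaluation are ours) scores them with the elementary scores Lv3 of Theorem 5.1, averaged over a grid of v as in a Murphy diagram; in line with consistency, forecaster 1 attains the smallest average score:

```python
import numpy as np
from scipy.stats import norm

alpha, beta = 0.1, 0.9
N = 100_000
rng = np.random.default_rng(7)
W = rng.standard_normal(N)
Z = 0.8 * rng.standard_normal(N)               # sigma = 0.8
Y = W + rng.standard_normal(N)                 # Y_t = W_t + u_t

q_a, q_b = norm.ppf(alpha), norm.ppf(beta)
rvar01 = -(norm.pdf(q_b) - norm.pdf(q_a)) / (beta - alpha)   # RVaR of N(0,1)

# the three forecasters described in the text
f1 = (W + q_a, W + q_b, W + rvar01)            # correct conditional forecast
f2 = tuple(comp + Z for comp in f1)            # wrong model: shifted by the noise Z
s2 = np.sqrt(2.0)
f3 = (np.full(N, s2 * q_a), np.full(N, s2 * q_b), np.full(N, s2 * rvar01))  # unconditional

def S_pin(level, x, y):
    ind = (y <= x).astype(float)
    return (ind - level) * x - ind * y

def mean_L3(v, x, y):
    # elementary score L_v^3 of Theorem 5.1, averaged over the sample
    x1, x2, x3 = x
    i3 = (v <= x3).astype(float)
    iy = (v <= y).astype(float)
    val = ((1 - i3) * (S_pin(alpha, x1, y) + alpha * y)
           + i3 * (S_pin(beta, x2, y) + beta * y)) / (beta - alpha) \
          + (i3 - iy) * (v - y)
    return val.mean()

vs = np.linspace(-4.0, 4.0, 81)
scores = {k: np.mean([mean_L3(v, f, Y) for v in vs])
          for k, f in [("f1", f1), ("f2", f2), ("f3", f3)]}
print(scores)
```

Plotting v ↦ mean_L3(v, f, Y) for each forecaster reproduces the Murphy diagrams of Figure 1.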

Applying the very definition of (strict) consistency, it holds for any (strictly) consistent scoring function for T that

𝔼(S(T̂t(1),Yt)∣Wt,Zt)≤𝔼(S(T̂t(2),Yt)∣Wt,Zt)

almost surely with a strict inequality in case of strict consistency. Therefore, also

𝔼(S(T̂t(1),Yt))≤𝔼(S(T̂t(2),Yt)).

That means that forecaster 1 should be (strictly) preferred to forecaster 2 under any (strictly) consistent scoring function. Similarly, forecaster 1 should (strictly) outperform forecaster 3 with respect to any (strictly) consistent scoring function, due to the increasing information sets, invoking [31]. Indeed, we have

𝔼(S(T̂t(1),Yt)∣Wt)≤𝔼(S(T̂t(3),Yt)∣Wt)

almost surely, such that 𝔼(S(T̂t(1),Yt))≤𝔼(S(T̂t(3),Yt)) follows, with strict inequalities when S is strictly consistent. When comparing forecasters 2 and 3, it is not a priori clear which forecaster is preferred. This will generally depend on the choice of the (strictly) consistent scoring function and on the size of the variance σ2. Recalling that the limiting case σ2→0 yields forecaster 1, forecaster 2 should be preferred for small σ2, and their performance should deteriorate as σ2 increases.

Figures 1 and 2 provide Murphy diagrams of all forecasters computed from a sample of size N=100 000, providing a good approximation of the population level. They are in line with our theoretical considerations above concerning the ranking of the three forecasts.

We also consider a setup which is closer to the stylized situation of comparative backtests in a risk management context. To this end, we compare the predictive performances using Diebold–Mariano tests [10] based on the scoring functions in Table 1 (scaled as explained previously). We consider samples of size N=250 and repeat our experiment 10 000 times. In the left panel of Table 2, we consider the case α=1−β=0.1, where RVaRα,β is a trimmed mean. We report the empirical ratio of rejections of the null hypothesis that forecaster i outperforms forecaster j, i,j∈{1,2,3}, i≠j, evaluated in terms of the score S at significance level 0.05. That is, we consider the null hypothesis 𝔼(S(T̂t(i),Yt))≤𝔼(S(T̂t(j),Yt)) for all t=1,…,N, or in short, i⪰j. Analogously, in the right panel of Table 2, we consider the case that α and β are both close to zero, namely α=0.01 and β=0.05, which is a setting that is relevant if RVaRα,β is used as a risk measure. For the scoring function S4, we have experimented a bit with the values c1 and c2 and report the results for the choices that worked best in our experiments. A systematic study on how to choose these two parameters goes beyond the scope of the present paper.
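For concreteness, here is a minimal sketch of the one-sided Diebold–Mariano test used in this comparison (plain i.i.d. variance estimate; names and the toy score series are ours; in the experiments the inputs are the realized scores of the two forecasters):

```python
import numpy as np
from scipy.stats import norm

def dm_test(score_i, score_j):
    """One-sided Diebold-Mariano test of H0: forecaster i weakly outperforms j,
    i.e. E[score_i] <= E[score_j]. Plain i.i.d. variance estimate; for serially
    dependent score differences a HAC variance estimator would be used instead."""
    d = score_i - score_j                      # score differentials
    n = d.size
    t_stat = d.mean() / np.sqrt(d.var(ddof=1) / n)
    p_value = 1.0 - norm.cdf(t_stat)           # reject H0 for large positive t
    return t_stat, p_value

# toy usage: forecaster i has systematically lower (= better) scores than j
rng = np.random.default_rng(3)
score_i = rng.standard_normal(250)
score_j = score_i + 0.3 + 0.1 * rng.standard_normal(250)
t_ij, p_ij = dm_test(score_i, score_j)   # H0 "i outperforms j": not rejected
t_ji, p_ji = dm_test(score_j, score_i)   # H0 "j outperforms i": rejected
print(p_ij, p_ji)
```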

Table 2

Power of Diebold–Mariano tests at significance level 0.05 for the scoring functions in Table 1 (suitably scaled) in the case α=1−β=0.1 (left panel) and α=0.01, β=0.05 (right panel). In the first case we chose −c1=c2=1/2 for the scoring function S4, and c1=−5, c2=1 in the second case. The null hypothesis i⪰j means that 𝔼(S(T̂t(i),Yt))≤𝔼(S(T̂t(j),Yt)) for all t=1,…,N for the scoring function specified in the column label. We chose σ=0.5 for forecaster 2.

Left panel (α=1−β=0.1):

H0     S1      S2      S3      S4
1⪰2   0       0       0       0
2⪰1   0.864   0.864   0.873   0.956
1⪰3   0       0       0       0
3⪰1   1.000   1.000   1.000   1.000
2⪰3   0       0       0       0
3⪰2   0.999   0.999   0.990   0.996

Right panel (α=0.01, β=0.05):

H0     S1      S2      S3      S4
1⪰2   0       0       0       0
2⪰1   0.675   0.671   0.670   0.522
1⪰3   0       0       0       0
3⪰1   0.992   0.992   0.994   0.817
2⪰3   0       0       0       0.002
3⪰2   0.740   0.742   0.768   0.258

For the situation in the left panel of Table 2, concerning α=1−β=0.1, we can see that forecaster 1 (2) outperforms forecaster 3 with a power of 1 (almost 1) for all scoring functions used. For a comparison of forecaster 1 and forecaster 2, the situation is more interesting: forecaster 1 outperforms forecaster 2 with regard to all scoring functions considered. The power of the tests (and the associated discrimination ability of the scoring functions) is very similar for S1, S2 and S3 (around 0.864 to 0.873). On the other hand, S4 achieves a considerably higher power of 0.956. The situation described in the right panel of Table 2, considering the parameter choice α=0.01 and β=0.05, is different. The most obvious observation is that the power is lower than in the symmetric situation depicted in the left panel for all null hypotheses. This intuitively makes sense since differences in the tail behavior are more challenging to detect than differences in the behavior of the central region of the distribution. Second, we can see that the power of the scores S1, S2 and S3 is again very similar in all situations, whereas the score S4 performs noticeably worse. This can be seen most strikingly for the null 3⪰2: the power of the scores S1, S2 and S3 is between 0.740 and 0.768, whereas S4 yields a power of only 0.258. However, as mentioned above, for a comparison between forecasters 2 and 3, it is not possible to establish a general ranking valid for all consistent scoring functions. In line with this, the dependence of the ranking on the choice of the score is reflected in the difference in power. A more detailed study and comparison of other scoring functions and other situations is deferred to future work.

7 Implications for regression

After illustrating the usage of consistent scoring functions in forecast comparison and comparative backtesting in Section 6, we would like to outline how one can use our results about the elicitability of the triplet (VaRα,VaRβ,RVaRα,β), 0<α<β<1, in a regression context. We then contrast our approach with other suggestions for regression of the α-trimmed mean (which can be generalized to RVaRα,β). The most common alternative approaches in the literature on robust statistics are the trimmed least squares approach and a two-step estimation procedure using the Huber skipped mean.

7.1 A joint regression framework for (VaRα,VaRβ,RVaRα,β)

Let (Wt,Yt)t∈ℕ be a time series with the usual notation that Yt denotes some real-valued response variable and Wt is a d-dimensional vector of regressors. Let Θ⊆ℝk be some parameter space and let M:ℝd×Θ→ℝ3 be a parametric model for T=(VaRα,VaRβ,RVaRα,β), 0<α<β<1. We assume a correct model specification, that is, we assume that there is a unique θ0∈Θ such that

(7.1)  T(FYt|Wt)=M(Wt,θ0)  ℙ-a.s. for all t∈ℕ,

where FYt|Wt denotes the conditional distribution of Yt given Wt. That means, M(Wt,θ0) models jointly the conditional VaRα, the conditional VaRβ and the conditional RVaRα,β. Let S be a strictly consistent scoring function of the form (3.3) and suppose the sequence (Wt,Yt)t∈ℕ satisfies certain mixing conditions [54, Corollary 3.48] (of which independence is a special case). Then one obtains, under additional moment conditions, that, as n→∞,

(1/n)∑_{t=1}^{n} S(M(Wt,θ),Yt)−(1/n)∑_{t=1}^{n} 𝔼[S(M(Wt,θ),Yt)] → 0  ℙ-a.s.

It is essentially this law of large numbers result which allows for consistent parameter estimation with the empirical M-estimator θ̂n=argmin_{θ∈Θ} (1/n)∑_{t=1}^{n} S(M(Wt,θ),Yt); see, e.g., [50, 33, 43, 11] for details.
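To make the idea concrete in the simplest (intercept-only, i.i.d.) case, the following sketch performs M-estimation of the triplet for a standard normal sample. It is a sketch under our own choices: the VaR components are profiled out via empirical quantiles (which minimize the pinball parts of the score), and the RVaR component minimizes a mixture of the elementary scores Lv3 of Theorem 5.1 with mixing measure the Lebesgue measure on [c,d], integrated over v in closed form; form (3.3) itself is not reproduced in this excerpt.

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import norm

alpha, beta = 0.1, 0.5
c, d = -4.0, 4.0                    # support of the mixing measure (Lebesgue on [c, d])

rng = np.random.default_rng(11)
y = rng.standard_normal(100_000)

def pinball(level, x, y):
    ind = (y <= x).astype(float)
    return (ind - level) * x - ind * y

# profile out the VaR components: empirical quantiles minimize the pinball parts
x1, x2 = np.quantile(y, alpha), np.quantile(y, beta)

# ingredients of the v-integrated elementary score L_v^3
A = (pinball(alpha, x1, y) + alpha * y).mean()
B = (pinball(beta, x2, y) + beta * y).mean()
yc = np.clip(y, c, d)

def obj(x3):
    # closed form of the integral over v in [c, d] of the sample mean of L_v^3
    return (((d - x3) * A + (x3 - c) * B) / (beta - alpha)
            + 0.5 * np.mean((x3 - y) ** 2) - 0.5 * np.mean((yc - y) ** 2))

x3_hat = minimize_scalar(obj, bounds=(c, d), method="bounded").x
rvar_true = (norm.pdf(norm.ppf(alpha)) - norm.pdf(norm.ppf(beta))) / (beta - alpha)
print(x3_hat, rvar_true)
```

The minimizer recovers RVaRα,β of 𝒩(0,1) up to sampling error.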

In summary, we can see that the complication of this procedure is that one needs to model the components VaRα and VaRβ, even if one is only interested in RVaRα,β. The advantage is that one can substantially deviate from an i.i.d. assumption on the data generating process. One can deal with serially dependent, though mixing, and non-stationary data. One only needs the semiparametric stationarity specified through equation (7.1).

7.2 Trimmed least squares

Most proposals for M-estimation and regression of RVaRα,β in the field of robust statistics focus on the α-trimmed mean, α∈(0,1/2), corresponding to RVaRα,1−α. However, they can often be extended to the general case 0<α<β<1 in a straightforward way; when this is the case, we describe the procedure in this more general form. A majority of the proposals in the literature are commonly referred to as trimmed least squares (TLS) approaches. Strictly speaking, however, TLS subsumes different, though closely related, estimation procedures.

The first one was coined by Koenker and Bassett [35] – cf. [49] – and constitutes a two-step M-estimator: in a first step, the α- and β-quantile are determined via usual M-estimation. Then all values below the former and above the latter are omitted and RVaRα,β is computed with an ordinary least squares approach. One can also express this procedure using order-statistics. By using the notation from Section 7.1, an M-estimator for RVaRα,β is given by

argmin_z (1/n)∑_{i=⌈nα⌉}^{⌊nβ⌋} (z−Y(i))².

Here, Y(1)≤⋯≤Y(n) denote the order-statistics of the sample Y1,…,Yn. While this procedure seems to work for a simplistic regression model (ignoring the regressors Wt and only modelling the intercept part), it is not clear how to use it in a more interesting regression context, where one is actually interested in the conditional distribution of Yt given Wt rather than the unconditional distribution of Yt. Moreover, since this approach uses the order-statistics of the entire sample Y1,…,Yn to implicitly estimate the α- and β-quantile, it requires that these quantiles be constant in time. Hence, heteroscedasticity (in time) can lead to problems, even if RVaRα,β is constant in time.
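In the intercept-only case this estimator is simply the mean of the retained order-statistics; a small sketch (our notation, standard normal data, so that the estimate should approach RVaRα,β):

```python
import numpy as np
from scipy.stats import norm

alpha, beta = 0.1, 0.5
rng = np.random.default_rng(5)
y = np.sort(rng.standard_normal(100_000))   # order statistics Y_(1) <= ... <= Y_(n)
n = y.size

# argmin_z (1/n) * sum_{i = ceil(n*alpha)}^{floor(n*beta)} (z - Y_(i))^2
# is attained at the mean of the retained order statistics
lo, hi = int(np.ceil(n * alpha)), int(np.floor(n * beta))
trimmed = y[lo - 1:hi].mean()               # shift by 1: the formula uses 1-based indices

rvar = (norm.pdf(norm.ppf(alpha)) - norm.pdf(norm.ppf(beta))) / (beta - alpha)
print(trimmed, rvar)
```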

A second approach is described, for example, in [47, 48] and relies on order-statistics of the squared residuals. It only seems to work for the α-trimmed mean. To be more precise, and again using the notation from above, let m:ℝd×Θ→ℝ be a one-dimensional parametric model. Again, one assumes that there is a unique correctly specified model parameter θ0∈Θ such that

(7.2)  RVaRα,1−α(FYt|Wt)=m(Wt,θ0)  ℙ-a.s. for all t∈ℕ.

For each θ∈Θ, define the residuals εt(θ):=Yt−m(Wt,θ) and the absolute residuals rt(θ):=|εt(θ)|. Denote the order-statistics of the absolute residuals by 0≤r(1)(θ)≤⋯≤r(n)(θ) for a sample of size n. Then an M-estimator is defined via

θ̂n=argmin_{θ∈Θ} (1/n)∑_{i=1}^{⌊n(1−2α)⌋} r(i)²(θ).

While this procedure appears to be fairly similar to an ordinary least squares procedure, with the respective computational advantages, one should recall that the trimming crucially depends on the choice of the parameter θ. That means that even if the model m is linear in the parameter θ, one generally obtains a non-convex objective function with several local minima. Interestingly, the trimming takes place only for residuals with large modulus. If the error distribution is symmetric, this procedure yields a consistent estimator for θ0 in an i.i.d. setting. If one wants to relax the assumption on the error distribution and is interested in modelling RVaRα,β for general 0<α<β<1 in (7.2), one could come up with the following ad hoc procedure: consider the order-statistics of the residuals ε(1)(θ)≤⋯≤ε(n)(θ). Then define an M-estimator via

θ̂n=argmin_{θ∈Θ} (1/n)∑_{i=⌈nα⌉}^{⌊nβ⌋} |ε(i)(θ)|².

This procedure takes into account the asymmetric nature of the trimming when dealing with β≠1−α, or with β=1−α and an asymmetric error distribution. However, as outlined above, this procedure can lead to problems in the presence of heteroscedasticity or general non-stationarity of the error distribution, if the conditional VaRα and VaRβ of Yt given Wt depend on Wt. We would like to point out that, at the cost of additionally modelling the α- and β-quantile, the procedure using our strictly consistent scoring functions for the triplet (VaRα,VaRβ,RVaRα,β) described in Section 7.1 does not rely on the usage of order-statistics and can in general deal with heteroscedasticity. The only degree of “stationarity” required is (7.1). This is relevant since stationarity is deemed too strong an assumption in the context of financial data; see [9].
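To illustrate the non-convexity issue, the following sketch (entirely our own construction: a linear model with standard normal errors) runs the ad hoc asymmetrically trimmed estimator from several starting points; there is no guarantee of finding the global minimum, which is precisely the practical difficulty mentioned above.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

alpha, beta = 0.1, 0.5
rng = np.random.default_rng(9)
n = 5_000
W = rng.standard_normal(n)
Y = W + rng.standard_normal(n)              # true conditional RVaR: W + RVaR of N(0,1)

lo, hi = int(np.ceil(n * alpha)), int(np.floor(n * beta))

def objective(theta):
    # asymmetrically trimmed sum of squared residuals for m(W, theta) = theta[0] + theta[1] * W;
    # the trimming set depends on theta, which makes the objective non-convex in general
    eps = np.sort(Y - theta[0] - theta[1] * W)
    return np.mean(eps[lo - 1:hi] ** 2)

rvar = (norm.pdf(norm.ppf(alpha)) - norm.pdf(norm.ppf(beta))) / (beta - alpha)
theta_true = np.array([rvar, 1.0])

# multi-start local search as a crude guard against local minima
starts = [theta_true, np.zeros(2), np.array([1.0, 0.0]), np.array([-1.0, 2.0])]
best = min((minimize(objective, s, method="Nelder-Mead") for s in starts),
           key=lambda r: r.fun)
print(best.x, best.fun, objective(theta_true))
```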

Finally, we would like to remark that there are further procedures belonging to the field of TLS. For instance, Atkinson and Cheng [4] propose an adaptive procedure where the trimming parameter is data-driven; see also [7]. However, we see no apparent way to use such procedures if one is interested in predefined trimming parameters α and β.

7.3 Connections to Huber loss and Huber skipped mean

In his seminal paper, Huber [32] introduced the famous Huber loss S(x,y)=ρ(x−y), where ρ(t)=t²/2 for |t|≤k and ρ(t)=k|t|−k²/2 for |t|>k. Huber argues that “the corresponding [M-]estimator is related to Winsorizing” [32, p. 79]. What obtained significantly less attention – maybe due to its lack of convexity – is another loss function he considers on the same page of the paper, defined as S(x,y)=ρ(x−y) with ρ(t)=t²/2 for |t|≤k and ρ(t)=k²/2 for |t|>k. He writes about it: “the corresponding [M-]estimator is a trimmed mean” (ibidem).

One could define an asymmetric version of the latter loss function by setting Sk1,k2(x,y)=ρk1,k2(x−y) with

ρk1,k2(t) = { k1²/2,  t<k1,
              t²/2,   k1≤t<k2,
              k2²/2,  t≥k2.

Assuming that F is continuous with density f, for the sake of simplicity of the argument, the corresponding first-order condition for a minimum of the expected score S¯k1,k2(x,F) is equivalent to

x = (1/(F(k2−x)−F(k1−x))) ∫_{k1−x}^{k2−x} y f(y) dy.

Now, a suggestion similar to [47, p. 876] is to consider this loss with k1=VaRβ(F) and k2=VaRα(F) stemming from some pre-estimate. However, one can see that the first-order condition is generally not solved by RVaRα,β(F). Again, if one is interested in M-estimation for the trimmed mean or, more generally, for RVaR, one should use the scoring functions introduced in this paper at (3.3).
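This claim is easy to verify numerically. In the sketch below (our construction: Y standard normal, the pre-estimates k1, k2 taken as the α- and β-quantile for illustration), the derivative of the expected score, evaluated at RVaRα,β(F), is far from zero, so the first-order condition indeed fails there:

```python
import numpy as np
from scipy.integrate import quad
from scipy.stats import norm

alpha, beta = 0.1, 0.7
k1, k2 = norm.ppf(alpha), norm.ppf(beta)   # illustrative pre-estimates with k1 < k2

def dscore(x):
    # derivative of the expected score: d/dx E[rho_{k1,k2}(x - Y)] for Y ~ N(0, 1);
    # rho' vanishes outside [k1, k2), so only y in (x - k2, x - k1] contributes
    return quad(lambda y: (x - y) * norm.pdf(y), x - k2, x - k1)[0]

rvar = (norm.pdf(norm.ppf(alpha)) - norm.pdf(norm.ppf(beta))) / (beta - alpha)
foc_at_rvar = dscore(rvar)
print(rvar, foc_at_rvar)    # clearly nonzero: RVaR does not solve the first-order condition
```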

Funding statement: Tobias Fissler is grateful to the Department of Mathematics at Imperial College London who funded his fellowship during which most of the work of this paper has been done. Johanna Ziegel is grateful for financial support from the Swiss National Science Foundation.

A Appendix

We present a list of assumptions used in Section 3. For more details about their interpretations and implications, please see [21] where they were originally introduced.

Assumption (V1).

ℱ is convex and for every x∈int(𝖠) there are F1,…,Fk+1∈ℱ such that

0∈int(conv({V¯(x,F1),…,V¯(x,Fk+1)})).

Note that if V:𝖠×ℱ→ℝk is a strict ℱ-identification function for T:ℱ→𝖠 which satisfies Assumption (V1), then for each x∈int(𝖠) there is an F∈ℱ such that T(F)=x.

Assumption (V3).

The map V¯(⋅,F) is continuously differentiable for every F∈ℱ.

Assumption (V4).

Let Assumption (V3) hold. For all r∈{1,…,k} and for all t∈int(𝖠)∩T(ℱ), there are F1,F2∈T⁻¹({t}) such that

∂lV¯l(t,F1)=∂lV¯l(t,F2) for all l∈{1,…,k}∖{r},
∂rV¯r(t,F1)≠∂rV¯r(t,F2).

Assumption (F1).

For every y∈ℝ, there exists a sequence (Fn)n∈ℕ of distributions Fn∈ℱ that converges weakly to the Dirac measure δy such that the support of Fn is contained in a compact set K for all n∈ℕ.

Assumption (VS1).

Suppose that the complement of the set

C:={(x,y)∈𝖠×ℝd : V(x,⋅) and S(x,⋅) are continuous at the point y}

has (k+d)-dimensional Lebesgue measure zero.

Assumption (S2).

For every F∈ℱ, the function S¯(⋅,F) is continuously differentiable and the gradient is locally Lipschitz continuous. Furthermore, S¯(⋅,F) is twice continuously differentiable at t=T(F)∈int(𝖠).

Acknowledgements

We would like to thank Timo Dimitriadis and Anthony C. Atkinson for insightful discussions about the topic, and Ruodu Wang, Rafael Frongillo, Tilmann Gneiting, Jana Hlavinová, and Michal Sorocin for helpful suggestions which improved an earlier version of this paper.

References

[1] C. Acerbi and B. Székely, Backtesting expected shortfall, Risk Mag. (2014), 1–33.

[2] C. Acerbi and B. Székely, General properties of backtestable statistics, preprint (2017), https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2905109.

[3] P. Artzner, F. Delbaen, J.-M. Eber and D. Heath, Coherent measures of risk, Math. Finance 9 (1999), no. 3, 203–228.

[4] A. C. Atkinson and T.-C. Cheng, Computing least trimmed squares regression with the forward search, Statist. Comput. 9 (1999), no. 4, 251–263.

[5] S. Barendse, Efficiently weighted estimation of tail and interquartile expectations, preprint (2020), https://dx.doi.org/10.2139/ssrn.2937665.

[6] J. R. Brehmer, Elicitability and its application in risk management, Master’s thesis, University of Mannheim, 2017.

[7] A. Cerioli, M. Riani, A. C. Atkinson and A. Corbellini, The power of monitoring: How to make the most of a contaminated multivariate sample, Stat. Methods Appl. 27 (2018), no. 4, 559–587.

[8] R. Cont, R. Deguest and G. Scandolo, Robustness and sensitivity analysis of risk measurement procedures, Quant. Finance 10 (2010), no. 6, 593–606.

[9] M. H. A. Davis, Verification of internal risk measure estimates, Stat. Risk Model. 33 (2016), no. 3–4, 67–93.

[10] F. X. Diebold and R. S. Mariano, Comparing predictive accuracy, J. Bus. Econom. Statist. 13 (1995), 253–263.

[11] T. Dimitriadis, T. Fissler and J. F. Ziegel, The efficiency gap, preprint (2020), https://arxiv.org/abs/2010.14146.

[12] W. Ehm, T. Gneiting, A. Jordan and F. Krüger, Of quantiles and expectiles: Consistent scoring functions, Choquet representations and forecast rankings, J. R. Stat. Soc. Ser. B. Stat. Methodol. 78 (2016), no. 3, 505–562.

[13] P. Embrechts, H. Liu, T. Mao and R. Wang, Quantile-based risk sharing with heterogeneous beliefs, Math. Program. 181 (2020), no. 2, 319–347.

[14] P. Embrechts, H. Liu and R. Wang, Quantile-based risk sharing, Oper. Res. 66 (2018), no. 4, 936–949.

[15] P. Embrechts, G. Puccetti, L. Rüschendorf, R. Wang and A. Beleraj, An academic response to Basel 3.5, Risks 2 (2014), 25–48.

[16] P. Embrechts, B. Wang and R. Wang, Aggregation-robustness and model uncertainty of regulatory risk measures, Finance Stoch. 19 (2015), no. 4, 763–790.

[17] S. Emmer, M. Kratz and D. Tasche, What is the best risk measure in practice? A comparison of standard risk measures, J. Risk 8 (2015), 31–60.

[18] J. Engelberg, C. F. Manski and J. Williams, Comparing the point predictions and subjective probability distributions of professional forecasters, J. Bus. Econom. Statist. 27 (2009), no. 1, 30–41.

[19] T. Fissler, On higher order elicitability and some limit theorems on the Poisson and Wiener space, PhD thesis, University of Bern, 2017.

[20] T. Fissler, R. Frongillo, J. Hlavinová and B. Rudloff, Forecast evaluation of quantiles, prediction intervals, and other set-valued functionals, Electron. J. Stat. 15 (2021), no. 1, 1034–1084.

[21] T. Fissler and J. F. Ziegel, Higher order elicitability and Osband’s principle, Ann. Statist. 44 (2016), no. 4, 1680–1707.

[22] T. Fissler and J. F. Ziegel, Order-sensitivity and equivariance of scoring functions, Electron. J. Stat. 13 (2019), no. 1, 1166–1211.

[23] T. Fissler and J. F. Ziegel, Correction note: Higher order elicitability and Osband’s principle, Ann. Statist. 49 (2021), no. 1, 614.

[24] T. Fissler, J. F. Ziegel and T. Gneiting, Expected shortfall is jointly elicitable with value-at-risk: Implications for backtesting, Risk Mag. (2016), 58–61.

[25] R. Frongillo and I. Kash, Elicitation complexity of statistical properties, Biometrika (2020), https://doi.org/10.1093/biomet/asaa093.

[26] R. Giacomini and H. White, Tests of conditional predictive ability, Econometrica 74 (2006), no. 6, 1545–1578.

[27] T. Gneiting, Making and evaluating point forecasts, J. Amer. Statist. Assoc. 106 (2011), no. 494, 746–762.

[28] T. Gneiting, F. Balabdaoui and A. E. Raftery, Probabilistic forecasts, calibration and sharpness, J. R. Stat. Soc. Ser. B Stat. Methodol. 69 (2007), no. 2, 243–268.

[29] T. Gneiting and A. E. Raftery, Strictly proper scoring rules, prediction, and estimation, J. Amer. Statist. Assoc. 102 (2007), no. 477, 359–378.

[30] F. R. Hampel, A general qualitative definition of robustness, Ann. Math. Statist. 42 (1971), 1887–1896.

[31] H. Holzmann and M. Eulert, The role of the information set for forecasting—with applications to risk management, Ann. Appl. Stat. 8 (2014), no. 1, 595–621.

[32] P. J. Huber, Robust estimation of a location parameter, Ann. Math. Statist. 35 (1964), 73–101.

[33] P. J. Huber and E. M. Ronchetti, Robust Statistics, 2nd ed., John Wiley & Sons, Hoboken, 2009.

[34] R. Koenker, Quantile Regression, Cambridge University, Cambridge, 2005.

[35] R. Koenker and G. Bassett, Jr., Regression quantiles, Econometrica 46 (1978), no. 1, 33–50.

[36] S. Kou, X. Peng and C. C. Heyde, External risk measures and Basel accords, Math. Oper. Res. 38 (2013), no. 3, 393–417.

[37] V. Krätschmer, A. Schied and H. Zähle, Qualitative and infinitesimal robustness of tail-dependent statistical functionals, J. Multivariate Anal. 103 (2012), 35–47.

[38] V. Krätschmer, A. Schied and H. Zähle, Comparative and qualitative robustness for law-invariant risk measures, Finance Stoch. 18 (2014), no. 2, 271–295.

[39] N. Lambert, D. M. Pennock and Y. Shoham, Eliciting properties of probability distributions, Proceedings of the 9th ACM Conference on Electronic Commerce, ACM, New York (2008), 129–138.

[40] G. Lugosi and S. Mendelson, Robust multivariate mean estimation: the optimality of trimmed mean, Ann. Statist. 49 (2021), no. 1, 393–410.

[41] A. H. Murphy and H. Daan, Forecast evaluation, Probability, Statistics and Decision Making in the Atmospheric Sciences, Westview Press, Boulder (1985), 379–437.

[42] W. K. Newey and J. L. Powell, Asymmetric least squares estimation and testing, Econometrica 55 (1987), no. 4, 819–847.

[43] N. Nolde and J. F. Ziegel, Elicitability and backtesting: Perspectives for banking regulation, Ann. Appl. Stat. 11 (2017), no. 4, 1833–1874.

[44] K. H. Osband, Providing incentives for better cost forecasting, PhD thesis, University of California, Berkeley, 1985.

[45] A. J. Patton, Data-based ranking of realised volatility estimators, J. Econometrics 161 (2011), no. 2, 284–303.

[46] A. J. Patton, Comparing possibly misspecified forecasts, J. Bus. Econom. Statist. 38 (2020), no. 4, 796–809.

[47] P. Rousseeuw, Least median of squares regression, J. Amer. Statist. Assoc. 79 (1984), no. 388, 871–880.

[48] P. Rousseeuw, Multivariate estimation with high breakdown point, Mathematical Statistics and Applications, Reidel, Dordrecht (1985), 283–297.

[49] D. Ruppert and R. J. Carroll, Trimmed least squares estimation in the linear model, J. Amer. Statist. Assoc. 75 (1980), no. 372, 828–838.

[50] A. W. van der Vaart, Asymptotic Statistics, Camb. Ser. Stat. Probab. Math. 3, Cambridge University, Cambridge, 1998.

[51] R. Wang and Y. Wei, Risk functionals with convex level sets, Math. Finance 30 (2020), no. 4, 1337–1367.

[52] S. Wang, Insurance pricing and increased limits ratemaking by proportional hazards transforms, Insurance Math. Econom. 17 (1995), no. 1, 43–54.

[53] S. Weber, Distribution-invariant risk measures, information, and dynamic consistency, Math. Finance 16 (2006), no. 2, 419–441.

[54] H. White, Asymptotic Theory for Econometricians, Academic Press, San Diego, 2001.

[55] M. E. Yaari, The dual theory of choice under risk, Econometrica 55 (1987), no. 1, 95–115.

[56] H. Zähle, A definition of qualitative robustness for general point estimators, and examples, J. Multivariate Anal. 143 (2016), 12–31.

[57] J. F. Ziegel, Coherence and elicitability, Math. Finance 26 (2016), no. 4, 901–918.

[58] Bank for International Settlements, Consultative Document: Fundamental review of the trading book: Outstanding issues, 2014.

Received: 2020-12-03
Revised: 2021-06-24
Accepted: 2021-08-13
Published Online: 2021-09-25

© 2021 Walter de Gruyter GmbH, Berlin/Boston

This work is licensed under the Creative Commons Attribution 4.0 International License.