We provide an exact simulation algorithm for bivariate Archimax copulas, including instances with negative association. In contrast to existing simulation approaches, the feasibility of our algorithm is directly linked to the availability of an exact simulation algorithm for the probability measure described by the derivative of the parameterizing Pickands dependence function. We demonstrate that this hypothesis is satisfied in many cases of interest, in particular for piece-wise constant Pickands dependence functions, which can approximate the general case to any desired level of accuracy. Finally, the algorithm can be extended to an exact simulation algorithm for bivariate copulas associated with max-infinitely divisible random vectors whose exponent measure has norm-symmetric survival function, so-called reciprocal Archimax copulas.
We recall that a bivariate copula is the restriction to the unit square of the joint distribution function of a random vector both of whose components are uniformly distributed on the unit interval. Copulas have been studied extensively, since by Sklar's theorem the distribution function of an arbitrary bivariate random vector can always be written as a copula applied to the two marginal distribution functions, so that copulas serve as an analytic tool to separate dependence and marginals. Similarly, the survival function can always be written as a copula , called survival copula, applied to the two marginal survival functions. If the margins are continuous, the copula and survival copula are even unique, see [5,17] for popular textbooks on the topic.
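In standard notation (the symbols below are ours, since different textbooks use different conventions), Sklar's decomposition and its survival analogue read:

```latex
F(x,y) \;=\; C\bigl(F_1(x),\,F_2(y)\bigr),
\qquad
\bar F(x,y) \;=\; \hat C\bigl(\bar F_1(x),\,\bar F_2(y)\bigr),
```

where $F_1, F_2$ denote the marginal distribution functions, $\bar F, \bar F_1, \bar F_2$ the corresponding survival functions, and $\hat C$ the survival copula.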
The present article deals with the specific family of bivariate Archimax copulas, which have been introduced in  with the intention of creating a large parametric family of copulas with desirable properties in the context of extreme-value theory. For detailed background on dependence properties we also refer to , and inference for Archimax copulas is investigated in . Bivariate Archimax copulas are parameterized by a pair consisting of a continuous and convex function with and , and a norm on with normalization that is orthant–monotonic, which means that . Intuitively, this means that its unit circle is axis-symmetric, so that its unit simplex lies between the ones corresponding to and . It is well-known from [1,2] that there exists a random vector on the positive orthant whose survival function is given by , . The unique survival copula of is called Archimax copula, given by
In Section 5, we will further provide an exact simulation algorithm for reciprocal Archimax copulas, which have the algebraic form
where is the same as above, and is a convex function with (just like ) and (unlike ). The nomenclature originates from the algebraic resemblance to Archimax copulas in (1) and was introduced in  for the special case . These arise as copulas of bivariate max-infinitely divisible random vectors whose exponent measure has -symmetric survival function.
The function is called Archimedean generator, since the subfamily of copulas of the form coincides with Archimedean copulas, see  for background on the matter. Similarly, will be called a generator for the reciprocal Archimax copula, since the special case coincides with bivariate reciprocal Archimedean copulas introduced in . The norm is called a stable tail dependence function, and the function for is called Pickands dependence function. (Note that is fully determined by via .) The nomenclature originates from the field of multivariate extreme-value theory, since the subfamily of copulas of the form coincides with bivariate extreme-value copulas, see  for background on the matter.
Regarding the strength of association between the components and , there are two different regimes. If the function happens to be the Laplace transform of a positive random variable , we have non-negative association, while we can have negative association in the complementary case, when is a so-called strict Archimedean generator (not a Laplace transform). The case of positive association is significantly better understood, because in this case the survival function equals that of a scale mixture of a bivariate min-stable exponential distribution, which is a well-investigated concept of positive association, see [9, Chapter 6, p. 169 ff]. In particular, for random vectors with survival function , or equivalently for extreme-value copulas , there exist exact simulation algorithms due to , which are feasible for many . Our intention is to provide an algorithm that is equally general with respect to the choice of , but is not restricted to Laplace transforms : it also works for strict generators , in particular for the important special case .
We recap the existing literature that forms the basis for our derivation. On the one hand, there is the following decomposition result due to .
(Charpentier et al. ) Let be an orthant–monotonic norm on subject to the normalization , and let be a continuous and convex function with and . Then there exists a random vector with survival function and an independent positive random variable with the property , whose distribution is uniquely determined by the function . Furthermore, the random vector has survival function , , and survival copula . Thus, the random vector has distribution function .
In words, Theorem 1.1 states that every Archimax copula arises as a scale mixture of the special Archimax copula with generator , which equals the Archimax copula with minimal association for a fixed . When simulating with survival copula , this allows us to perform two independent, subsequent simulation steps: first, simulation of , and second, simulation of with survival copula given in terms of by
On the other hand, in the original reference  the authors propose an exact simulation algorithm, which we are going to refine. According to Theorem 1.1, we may focus on the simulation of , which corresponds to the special case , and in which the method of  can be formulated as in the following theorem.
(Capéraà et al. ) Assume that the second derivative of exists and is continuous everywhere on . Let and be three independent random variables, where and
Further denote the density of by (it exists by the assumption of existing ) and define the threshold function
Under these assumptions, the random vector from Theorem 1.1 admits the stochastic representation
Our proposed algorithm below provides concrete implementation advice for simulation from . For the simulation of we use an alternative approach to that of Theorem 1.2; in particular, we do not require to exist, but we do make use of the distributional statement (3) in our derivation. The main idea is to use the well-known representation
which defines a one-to-one relationship between the parameterizing norm and the distribution of a random variable with . This representation of in terms of is called Pickands representation in extreme-value theory. The last equality in (4) shows that the right-hand version of the derivative , which is well-defined everywhere by convexity of and which we simply denote by throughout, essentially determines the distribution function of and, if it exists, the second derivative equals the density of . Whereas the analytical treatment of extreme-value copulas in terms of is traditional, the article  provides some convincing arguments as to why a treatment in terms of the cdf of (i.e., essentially ) can sometimes be more convenient. The appearance of and in (3) above thus hints at the possibility of reformulating the simulation algorithm in terms of . In fact, we manage to provide a simulation algorithm for that is exact and feasible whenever the following pre-requisites are met:
(Exact evaluation of ) We can evaluate exactly and efficiently for arbitrary .
(Exact evaluation of ) We can evaluate exactly and efficiently for arbitrary .
(Exact simulation of ) The random variable can be simulated exactly and efficiently.
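Under one common normalization of the Pickands representation (4) (conventions differ across references, so the factor of two below is an assumption on the chosen normalization, and the symbols are ours), the stable tail dependence function can be written as:

```latex
\ell(x,y) \;=\; 2\,\mathbb{E}\bigl[\max\{\,xZ,\; y(1-Z)\,\}\bigr],
\qquad Z \in [0,1], \quad \mathbb{E}[Z] = \tfrac{1}{2},
```

so that, under this normalization, the right-hand derivative of the associated Pickands dependence function equals an affine transformation of the cdf of $Z$, namely $A'(t) = 2\,\mathbb{P}(Z \le t) - 1$.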
Assumptions 1 and 2 are clearly mild, in particular as they are minimal requirements for simulation according to (3), or via the inverse Rosenblatt transformation method based on partial derivatives, to work at all. The decisive question the thoughtful reader should raise at this point is: “Why is it useful to re-formulate the known model (3) in terms of ?” To this end, we point out that due to the aforementioned Pickands representation the random variable plays a central role in extreme-value theory; in particular, the law of is tractable very often, as we demonstrate below in terms of a comprehensive survey of models. For instance,  provides two simulation algorithms for extreme-value copulas precisely under Assumption 3. While the algorithms of  are restricted to extreme-value copulas but treat the general case of -dimensional extreme-value copulas also for , the algorithm we develop is restricted to the bivariate case but can be leveraged to an algorithm for arbitrary bivariate Archimax copulas. The -dimensional generalization remains an interesting open problem, to date only solved for the special case , see .
The remainder of the article is organized as follows. Section 2 derives our simulation algorithm, while Section 3 demonstrates it by providing examples. Section 4 discusses some interesting aspects of the particular case when is symmetric, and Section 5 treats the situation of bivariate reciprocal Archimax copulas, in particular indicating that our simulation algorithm for bivariate Archimax copulas immediately implies an exact simulation algorithm for bivariate reciprocal Archimax copulas as well. Section 6 finally concludes.
2 Exact simulation from in terms of
We derive an alternative stochastic representation for the random vector in Theorem 1.1, which serves as a convenient basis for simulation under our Assumptions 1–3. One problem with the stochastic representation of Theorem 1.2 is that it requires to exist, which is restrictive. Our method will resolve this problem. The other problem is that the probability law of is rather complicated to simulate in many cases. Our proposed method replaces the requirement to simulate with the requirement to simulate from the Pickands representation (4), and we demonstrate that this is feasible in many cases.
For the sake of clarity, we recall that the survival function of is given by , and the distribution function of equals
which is itself a copula (the survival copula of ) and . Our proposed method to simulate requires the exact simulation of an absolutely continuous random variable on whose density depends on a realization of from the Pickands representation (4). As a first auxiliary step, the following lemma states the explicit form of this density. It furthermore provides the basis for an exact and efficient simulation from via the so-called rejection–acceptance sampling method, see [12, p. 234–235] for background. The idea of rejection–acceptance sampling for a (possibly complicated) density is to find an easy-to-simulate density , called comparison density, and a finite constant such that . Given this, one may iteratively simulate variates and independent until finally , and then return the resulting sample satisfying this condition as a sample from .
(Auxiliary density and associated comparison density) Let be an orthant–monotonic norm on which is normalized to satisfy , and fix . With the positive constant
defines a probability density on . Furthermore, we have that , where is a piece-wise polynomial probability density on that is given by
with normalizing constant .
We observe that the function
is a continuous distribution function on for arbitrary with and . To verify this, we need to check that is non-decreasing, which follows from the identity
and the bounds as well as . We further observe easily with similar estimates, using again the similarity of the definition of and , that the density of is bounded from above by for given in (6).□
A random variable with density can be simulated via the inversion method, see [12, p. 234–235] for background, since its cumulative distribution function
can be inverted in closed form according to
for . Consequently, Lemma 2.1 implies that a random variable with density can be simulated exactly via the rejection–acceptance method under Assumptions 1 and 2. Algorithm 1 summarizes the resulting simulation method for based on this idea.
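The inversion step can be sketched as follows (a generic illustration; the exponential distribution function is only a hypothetical stand-in, since the closed-form inverse above is specific to the comparison density):

```python
import math
import random

def sample_by_inversion(inverse_cdf):
    """Inversion method: if U is uniform on (0,1), then inverse_cdf(U)
    is an exact draw from the distribution function being inverted."""
    return inverse_cdf(random.random())

# Hypothetical stand-in: the Exponential(1) distribution function
# G(x) = 1 - exp(-x) inverts in closed form to G^{-1}(u) = -log(1 - u).
x = sample_by_inversion(lambda u: -math.log1p(-u))
```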
Algorithm 1. Simulation of via rejection–acceptance (procedure SimulateH): iteratively simulate a proposal from the comparison density given in (6) via the inversion method, together with an independent uniform variate, and return the proposal once the acceptance condition of Lemma 2.1 is satisfied.
Based on the availability of an efficient simulation of via Algorithm 1, the following theorem is the main result of the present article.
(Exact simulation of ) Let be an orthant–monotonic norm on subject to the normalization . Let and from the Pickands representation (4) of be independent random variables. Given (and independent of ), let be a random variable with density , simulated according to Algorithm 1. The random vector with survival function and distribution function has the stochastic representation
It is sufficient to prove the claim for absolutely continuous . Why? An arbitrary (possibly with atoms) with may be approximated arbitrarily well in distribution with a sequence of absolutely continuous random variables satisfying . These correspond to a sequence of stable tail dependence functions that converge point-wise, and this implies that the respective distributional statement also holds in the limit associated with . Note that this logic relies on the fact that convergence in distribution stands in one-to-one relation with the point-wise convergence , resp. .
Under the assumption of absolute continuity of , is differentiable and it is clear that has a singular component on the -simplex , has zero mass outside the simplex and has a density inside the simplex. Consequently, we may compute this density of inside the simplex by taking the partial derivative with respect to both components. To this end, we denote by the density of the random vector for arbitrary but fixed , and obtain for with
Denoting by the mass that allocates on the -simplex, by its singular and by its continuous part, we have the decomposition . We obtain for from the above expression
Furthermore, we obtain the probability by plugging in , which yields
Note that the last equality is stated only for later reference, and implicitly these computations rely on the fact that almost surely. This already proves the first part of our claimed statement.
For the second part, from Theorem 1.2 we obtain that
as well as
In the last expression we recognize the distribution function from the proof of Lemma 2.1, so that this finally implies
which establishes the claimed statement, when comparing with (3).□
As mentioned in Section 1, the simulation of an arbitrary bivariate Archimax copula requires simulation from , which we have now established, and, independently, simulation of a random radius , whose probability distribution is determined uniquely by the Williamson-2-transform . We first note that the distribution function of is given by the Williamson-inversion formula
where denotes the right-hand version of the derivative of , which exists almost everywhere by convexity. Regarding examples for feasible , we refer to . Generally speaking, rejection–acceptance sampling of based on (8) is feasible whenever exists, is analytically tractable, and the (then existing) density
can be bounded from above by a scalar multiple of an easy-to-simulate density. In contrast, we end this section by providing a simulation algorithm for the special case
where is a probability measure on , under the following assumption:
(Requirement to simulate with generator (9)) The probability measure can be simulated exactly.
A generator of the form (9) always implies . There are examples of generators of that form which are not analytically tractable at first glance, in particular for which it is not straightforward to apply direct acceptance-rejection sampling based on the density ; an example is provided below.
is a so-called Bernstein function satisfying , in particular is concave, so that equals the Williamson-2-transform of some random variable taking values in , i.e., is a proper Archimedean generator. Every that arises in this way can be simulated under Assumption 4. Why? We observe that
For fixed , the function is the distribution function of a random variable on . Furthermore, simulation of is efficient and exact via acceptance-rejection, since takes values in , has an atom at 1 with probability and a density on that is bounded from above by . Summing up, in order to simulate , the following steps need to be carried out:
Simulate , using Assumption 4.
If , return .
Otherwise, repeatedly simulate independent, until finally , and then return .
which corresponds to the measure
The associated random variable with probability distribution has density given by , where denotes the density of and has Beta distribution with density proportional to . The simulation of is thus straightforward as a mixture of logarithms of Beta distributions, see [12, p. 243] for advice on how to simulate . Computing the second derivative of and developing a rejection–acceptance sampling method for in this example appears rather inconvenient, whereas the presented method to simulate and its associated Archimax copulas only requires the evaluation of (in order to transform margins to uniforms), which is easy.
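Returning to the Williamson-inversion formula (8) above, the evaluation of the distribution function of the radius from a given generator can be sketched as follows (a minimal sketch; the Clayton-type generator is a hypothetical stand-in, `phi` denotes the Archimedean generator and `dphi` its right-hand derivative, and the bivariate inversion formula is assumed to take the form in the docstring):

```python
def williamson2_inverse(phi, dphi):
    """Distribution function of the radius R recovered from its
    Williamson-2-transform phi via the bivariate inversion formula
        F_R(r) = 1 - phi(r) + r * dphi(r),
    where dphi is the right-hand derivative of phi."""
    return lambda r: 1.0 - phi(r) + r * dphi(r)

# Hypothetical stand-in generator (Clayton type): phi(x) = (1 + x)**(-1/theta).
theta = 2.0
phi = lambda x: (1.0 + x) ** (-1.0 / theta)
dphi = lambda x: -(1.0 / theta) * (1.0 + x) ** (-1.0 / theta - 1.0)
F_R = williamson2_inverse(phi, dphi)
```

Monotonicity of the resulting distribution function follows from convexity of the generator, since (whenever the second derivative exists) the density inside equals r times the second derivative of the generator at r.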
3 Examples for feasible
We review some popular families of stable tail dependence functions , for which the simulation of the associated is viable, thereby demonstrating that our algorithm is feasible in many interesting cases.
3.1 corresponding to conditionally iid extreme-value copulas
Many (but not all, see ) parametric families of symmetric stable tail dependence functions can be considered jointly under one common umbrella, by observing that they are of the particular form
where equals the distribution function of a non-negative random variable with unit mean. In probabilistic terms, this means that
where are iid random variables. These particular stable tail dependence functions are studied in , where it is shown that a random vector with survival function is conditionally iid. The probability distribution of in this case can be simulated by
where , Bernoulli and are independent random variables. For many popular models, the distribution as well as its so-called size bias can be simulated efficiently. We provide some examples below. Note in particular that for both of the following two examples, the Hüsler-Reiss example and the Galambos example, simulation algorithms based on partial derivatives require numerical root search algorithms, while the presented algorithm is exact.
(Hüsler-Reiss family) With a parameter we consider the norm
where denotes the cdf of a standard normally distributed random variable. This is of the form with a log-normal distribution. The random variables and can be simulated according to
The required derivative of is available in closed form, given by
where equals the density of .
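As the text notes, this family is of the form with a log-normal distribution; the following sketch illustrates the unit-mean lognormal and its size bias (the parameter name `sigma` is ours and need not match the Hüsler-Reiss parameterization above):

```python
import math
import random

def lognormal_unit_mean(sigma):
    """Z = exp(N(-sigma**2/2, sigma**2)) has E[Z] = 1."""
    return math.exp(random.gauss(-0.5 * sigma * sigma, sigma))

def lognormal_size_bias(sigma):
    """Size-biasing a LogNormal(mu, sigma**2) yields LogNormal(mu + sigma**2,
    sigma**2), so the size bias of the unit-mean law is
    exp(N(sigma**2/2, sigma**2))."""
    return math.exp(random.gauss(0.5 * sigma * sigma, sigma))
```

The identity used in the second function, that size-biasing a lognormal shifts its log-mean by the log-variance, is a standard fact and makes both distributions exactly simulatable via a single Gaussian draw.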
(Galambos family) With a parameter we consider the norm
This is of the form with a Weibull distribution. (Note that in [10, Example 2] there is a typo, mistaking for .) The random variable can be simulated exactly via the inversion method and according to
where denotes the Gamma distribution with density proportional to . The required derivative of is available in closed form, given by
(Cuadras-Augé family) With a parameter we consider the norm
This is of the form with , meaning that . Furthermore, the associated size bias equals a constant in this case. The required derivative is piece-wise constant, taking the value on and the value on . The case of general piece-wise constant is further discussed in Section 3.2.
(Gumbel family ) When for some , then with associated Fréchet distribution function . In particular, may be simulated exactly via the inversion method, and the size bias may be simulated according to
where denotes the Gamma distribution with density proportional to . The required derivative is known in closed form, to wit
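The Fréchet simulation via inversion mentioned above can be sketched as follows (`alpha` is our name for the Fréchet shape parameter; the exact parameterization in the text may differ):

```python
import math
import random

def frechet_inversion(alpha):
    """Inversion method for the Frechet law F(x) = exp(-x**(-alpha)) on
    (0, inf): if U is uniform on (0,1), then (-log U)**(-1/alpha) has CDF F,
    since exp(-x**(-alpha)) = u solves to x = (-log u)**(-1/alpha)."""
    u = random.random()
    return (-math.log(u)) ** (-1.0 / alpha)
```

For the Gamma variate in the size-bias step, Python's standard library provides `random.gammavariate(alpha, beta)`, so no special-purpose sampler is needed.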
In [13,11] it is further explained how to simulate associated with convex mixtures of stable tail dependence functions of the type , and how this can be used to construct interesting and tractable new families. The following is an example in this spirit.
(Holmgren-Liouville family) Let be some random variable taking values in . Then the function
is a so-called Bernstein function. Assume further that is either absolutely continuous or discrete. In the absolutely continuous case we denote by its density, and in the discrete case we denote by the function , thus treating both the absolutely continuous case and the discrete case jointly with our notation. Similarly, we consider a random variable with associated function
This function is bounded from above by
Consequently, we can simulate exactly via acceptance-rejection, whenever we can simulate , see [12, p. 235–237]. Now we consider
which is an orthant–monotonic norm with , hence a stable tail dependence function. This norm arises as a convex probability mixture of norms of the form with , when randomizing the power , as demonstrated in [13, Example 3.6]. The associated random variable can be simulated as follows:
Simulate according to the aforementioned rejection–acceptance method (requires simulation of ).
Simulate from the continuous density
which can be done via acceptance-rejection with comparison density .
Simulate iid from the distribution function , which is straightforward by the inversion method.
Apply formula (10) in order to simulate from the simulated and a Bernoulli variable .
where denotes the beta function. This example originates from a random vector with survival function that is defined by , where are iid standard exponential random variables and, independently, is a so-called Holmgren-Liouville subordinator. The latter is a non-decreasing stochastic process with long-range dependence properties. In particular, are iid conditioned on . For the sake of completeness, we point out that is explicitly given by
In the particular special case with having a beta distribution, the derivative of in (11) is given in terms of the Digamma function , which is pre-implemented in most software packages just like the Gamma function and Beta function , to wit
3.2 Finite-valued , resp. piece-wise constant
The Pickands dependence function is piece-wise linear if and only if the associated random variable is discrete. We may approximate an arbitrary (from above, due to convexity) by the function that takes the identical values on the grid for and , and which is piece-wise linear in between these grid points. The latter approximation corresponds to some finite-valued . Since this approximation converges point-wise to by construction, converges in distribution to , and the associated copulas also converge point-wise. From this perspective, the case of finite-valued is of utmost importance.
Regarding the implementation, we recommend to apply the strategy proposed in , that is to work on the level of . To this end, we consider a Pickands dependence function that is piece-wise constant between given values , with arbitrary but fixed. According to the methodology in , we parameterize this function in terms of the vectors and subject to the conditions
Intuitively, we define piece-wise constant with for , . With , is given by
With the convenient notation , the associated random variable has distribution function . The probability mass function is given by
An arbitrary may now be approximated by a piece-wise constant one of the above form with by computing the parameters iteratively from the relation
Since exact simulation of the finite-valued is possible, as is the evaluation of and , our algorithm is feasible. Figure 1 visualizes points simulated from when is from the Hüsler-Reiss family in Example 3.1.
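Exact simulation of a finite-valued random variable reduces to an inverse-transform step on the cumulative atom probabilities, sketched below (the function names and the example grid are ours, hypothetical stand-ins for the atoms and masses derived above):

```python
import bisect
import itertools
import random

def make_discrete_sampler(atoms, probs):
    """Exact simulation of a finite-valued random variable: inverse transform
    on the cumulative probabilities of its atoms."""
    cum = list(itertools.accumulate(probs))
    total = cum[-1]
    def draw():
        # locate the first atom whose cumulative mass exceeds the uniform draw
        return atoms[bisect.bisect_left(cum, random.random() * total)]
    return draw

# Hypothetical example: atoms on an equidistant grid in [0, 1].
draw_Z = make_discrete_sampler([0.0, 0.25, 0.5, 0.75, 1.0],
                               [0.1, 0.2, 0.4, 0.2, 0.1])
```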
4 The specific case of symmetric norms
From the proof of Theorem 2.2 we recall the decomposition of the survival copula in (5) of the Archimax copula into the parts on the -simplex and inside the -simplex. To wit, the distribution function of the random vector in Theorem 2.2 admits the decomposition
where denotes the probability mass on the -simplex, denotes the cdf of a singular distribution concentrated on the -simplex and denotes the cdf of a probability distribution concentrated inside the -simplex. Note that is absolutely continuous only if the associated random variable from the Pickands representation (4) of is absolutely continuous, as assumed in the proof of Theorem 2.2. In the general case, it has singular parts concentrated on straight lines corresponding to atoms of , as can be seen in Figure 1.
In the special case of symmetric , that is if for arbitrary , there is one phenomenon that we find interesting enough to discuss in the present context. The distribution function equals the distribution function of a counter-monotonic random vector, meaning that
with marginal distribution functions and , respectively. A well-known stochastic representation for is hence given by for . Since we know that this random vector is concentrated on the -unit simplex, the norm determines the implicit relationship between and via the equation
so that determines and vice versa. Now if is symmetric, it follows from exchangeability of that . The distribution function is then uniquely and implicitly determined by via the equation
We furthermore see that
which provides an alternative way to describe the unit simplex of in terms of the distribution function . Summarizing, we obtain a correspondence between symmetric and convex and continuous distribution functions on . (An arbitrary distribution function on that is convex is always continuous on , but in general may have a jump at 1, which we exclude by saying that is continuous on all of .)
(Correspondence between symmetric and ) For a symmetric stable tail dependence function the distribution function is continuous and convex on . Conversely, any continuous and convex distribution function on implicitly defines a symmetric stable tail dependence function , which is uniquely determined by its unit simplex .
Continuity of is immediate, since a jump would imply a non-empty interval on which was constant, contradicting the fact that is the closed curve that determines the -unit simplex. To verify convexity, assume the contrary, that is, assume is not convex. Then we necessarily find two distinct and such that