Sklar’s Theorem (Sklar 1959) states that every multivariate cumulative distribution function *F* with marginals $F_1,\dots,F_n$ may be written as
$$F(x_1,\dots,x_n)=C(F_1(x_1),F_2(x_2),\dots,F_n(x_n)),\tag{1}$$

for some appropriate *n*-dimensional copula *C*. In terms of the joint probability density function *f*, for an absolutely continuous *F* with strictly increasing continuous marginals $F_1,\dots,F_n$, we have
$$f(x_1,\dots,x_n)=c_{12\dots n}(F_1(x_1),\dots,F_n(x_n))\cdot f_1(x_1)\cdots f_n(x_n).\tag{2}$$
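As a numerical illustration, eq. (2) can be checked for the bivariate Gaussian copula, whose density has a well-known closed form. The sketch below (the correlation $\rho=0.6$ and the evaluation point are arbitrary choices) confirms that the joint bivariate normal density equals the copula density evaluated at the marginal CDFs, times the product of the marginal densities:

```python
import numpy as np
from scipy.stats import norm, multivariate_normal

# Check eq. (2) for the bivariate Gaussian copula with standard normal
# marginals; rho and the evaluation point (x1, x2) are arbitrary.
rho = 0.6
x1, x2 = 0.3, -0.8

# Left-hand side: the joint bivariate normal density f(x1, x2)
joint = multivariate_normal(mean=[0.0, 0.0],
                            cov=[[1.0, rho], [rho, 1.0]]).pdf([x1, x2])

# Right-hand side: copula density c(F1(x1), F2(x2)) times the marginals.
# The Gaussian copula density in closed form, with a = Phi^{-1}(u), b = Phi^{-1}(v):
u1, u2 = norm.cdf(x1), norm.cdf(x2)
a, b = norm.ppf(u1), norm.ppf(u2)
c = np.exp(-(rho**2 * (a**2 + b**2) - 2 * rho * a * b)
           / (2 * (1 - rho**2))) / np.sqrt(1 - rho**2)
rhs = c * norm.pdf(x1) * norm.pdf(x2)

print(np.isclose(joint, rhs))  # the two sides of eq. (2) agree
```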

Consider now, for example, a trivariate random vector $X=(X_1,X_2,X_3)$. Its density can be factorized as
$$f(x_1,x_2,x_3)=f(x_3)\cdot f(x_2|x_3)\cdot f(x_1|x_2,x_3).\tag{3}$$

According to eq. (2), we can write
$$f(x_2|x_3)=c_{23}(F_2(x_2),F_3(x_3))\cdot f_2(x_2).\tag{4}$$

Similarly, it is possible to decompose the conditional density of $X_1$ given $X_2$ and $X_3$ as
$$f(x_1|x_2,x_3)=c_{13|2}(F_{1|2}(x_1|x_2),F_{3|2}(x_3|x_2))\cdot f(x_1|x_2).\tag{5}$$

Now, decomposing $f(x_1|x_2)$ in eq. (5) further, we have
$$f(x_1|x_2,x_3)=c_{13|2}(F_{1|2}(x_1|x_2),F_{3|2}(x_3|x_2))\cdot c_{12}(F_1(x_1),F_2(x_2))\cdot f_1(x_1).\tag{6}$$

Finally, from eqs. (4) and (6), the joint density function for the trivariate case can be written as
$$\begin{aligned}
f(x_1,x_2,x_3) = {} & f_1(x_1)\cdot f_2(x_2)\cdot f_3(x_3)\cdot c_{12}(F_1(x_1),F_2(x_2)) \\
& \cdot c_{23}(F_2(x_2),F_3(x_3))\cdot c_{13|2}(F_{1|2}(x_1|x_2),F_{3|2}(x_3|x_2)).
\end{aligned}\tag{7}$$
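For the Gaussian case, decomposition (7) holds exactly when all three pair-copulas are Gaussian and the parameter of $c_{13|2}$ is the partial correlation $\rho_{13\cdot 2}$ (a standard fact; the conditional copula of a multivariate normal does not depend on the conditioning value). A minimal sketch, with arbitrary correlations and evaluation point, verifying that the vine product reproduces the trivariate normal density:

```python
import numpy as np
from scipy.stats import norm, multivariate_normal

def gauss_copula_density(u, v, rho):
    # Closed-form bivariate Gaussian copula density
    a, b = norm.ppf(u), norm.ppf(v)
    return np.exp(-(rho**2 * (a**2 + b**2) - 2 * rho * a * b)
                  / (2 * (1 - rho**2))) / np.sqrt(1 - rho**2)

def h(u, v, rho):
    # Gaussian h-function: conditional CDF F(u | v)
    return norm.cdf((norm.ppf(u) - rho * norm.ppf(v)) / np.sqrt(1 - rho**2))

r12, r23, r13 = 0.5, 0.4, 0.3  # arbitrary (positive-definite) correlations
r13_2 = (r13 - r12 * r23) / np.sqrt((1 - r12**2) * (1 - r23**2))  # partial corr.

x = np.array([0.2, -0.5, 1.1])
u = norm.cdf(x)

# Right-hand side of eq. (7): marginals times c12, c23, and c_{13|2}
vine = (norm.pdf(x).prod()
        * gauss_copula_density(u[0], u[1], r12)
        * gauss_copula_density(u[1], u[2], r23)
        * gauss_copula_density(h(u[0], u[1], r12), h(u[2], u[1], r23), r13_2))

# Left-hand side: the trivariate normal density
cov = np.array([[1, r12, r13], [r12, 1, r23], [r13, r23, 1]])
joint = multivariate_normal(mean=np.zeros(3), cov=cov).pdf(x)

print(np.isclose(vine, joint))
```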

That is, the trivariate density can be factorized as a product of the three marginals, two unconditional bivariate copulas, $c_{12}$ and $c_{23}$, and a third copula $c_{13|2}$, called conditional because its arguments are conditional distribution functions.

The previous results for the trivariate case can be generalized to an *n*-dimensional vector, using the following formula:
$$f(x|\boldsymbol{\upsilon})=c_{x\upsilon_j|\boldsymbol{\upsilon}_{-j}}\bigl(F(x|\boldsymbol{\upsilon}_{-j}),F(\upsilon_j|\boldsymbol{\upsilon}_{-j})\bigr)\cdot f(x|\boldsymbol{\upsilon}_{-j}),\tag{8}$$

for a vector $\boldsymbol{\upsilon}$ of dimension *d*. Here $\upsilon_j$ is an arbitrarily chosen component of $\boldsymbol{\upsilon}$ and $\boldsymbol{\upsilon}_{-j}$ denotes the vector $\boldsymbol{\upsilon}$ with this component excluded. It follows that an *n*-dimensional multivariate density function can be decomposed into its marginal densities and a set of iteratively conditioned bivariate copulas.

The pair-copula decomposition of a multivariate density involves marginal conditional distributions of the form $F(x|\boldsymbol{\upsilon})$, computed using a formula of Joe (1996):
$$F(x|\boldsymbol{\upsilon})=\frac{\partial C_{x,\upsilon_j|\boldsymbol{\upsilon}_{-j}}\bigl(F(x|\boldsymbol{\upsilon}_{-j}),F(\upsilon_j|\boldsymbol{\upsilon}_{-j})\bigr)}{\partial F(\upsilon_j|\boldsymbol{\upsilon}_{-j})}.\tag{9}$$
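For the Gaussian copula, eq. (9) has the well-known closed form $h(u|v;\rho)=\Phi\bigl((\Phi^{-1}(u)-\rho\,\Phi^{-1}(v))/\sqrt{1-\rho^2}\bigr)$. The sketch below (with arbitrary $\rho$, $u$, $v$) checks this against a direct numerical evaluation of the partial derivative, using $\partial C(u,v)/\partial v=\int_0^u c(s,v)\,ds$:

```python
import numpy as np
from scipy.integrate import quad
from scipy.stats import norm

rho = 0.6
u, v = 0.3, 0.7  # arbitrary evaluation point in (0,1)^2

def gauss_copula_density(s, t, rho):
    # Closed-form bivariate Gaussian copula density
    a, b = norm.ppf(s), norm.ppf(t)
    return np.exp(-(rho**2 * (a**2 + b**2) - 2 * rho * a * b)
                  / (2 * (1 - rho**2))) / np.sqrt(1 - rho**2)

# Closed-form h-function (eq. (9) for the Gaussian pair-copula)
h_closed = norm.cdf((norm.ppf(u) - rho * norm.ppf(v)) / np.sqrt(1 - rho**2))

# Numerical partial derivative: dC/dv (u, v) = integral_0^u c(s, v) ds
h_numeric, _ = quad(lambda s: gauss_copula_density(s, v, rho), 0.0, u)

print(abs(h_closed - h_numeric) < 1e-6)
```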

As the number of variables grows, the number of possible pair-copula decompositions increases rapidly. To organize these possibilities, Bedford and Cooke (2001, 2002) introduced a graphical model called the regular vine (R-vine). An R-vine is a sequence of nested trees that facilitates the identification of the needed pairs of variables and their corresponding conditioning sets (we refer the reader to Bedford and Cooke 2001, 2002, for more details on general R-vines, and to Dißmann et al. 2013, for inference of R-vines). Two boundary cases, popularized by Aas et al. (2009), are the canonical vine (C-vine) and the drawable vine (D-vine). Canonical vines resemble factor models, with a particular variable playing the role of pivot (factor) in every tree. Because there is no economic reason to expect a factor structure in our data, we will focus our attention on the D-vine.

An *n*-dimensional D-vine consists of $n-1$ hierarchical trees (or levels), with path structures and increasing conditioning sets, and $n(n-1)/2$ edges, each corresponding to a pair-copula (for a more detailed description, see Aas et al. 2009). Define the index sets $\upsilon_{ij}=\{i+1,\dots,i+j-1\}$, with $\upsilon_{i1}=\varnothing$, and $w_{ij}=\{i,\upsilon_{ij},i+j\}$, for $1\le i\le n-j$, $1\le j\le n-1$. Let $\alpha$ and $\theta$ denote the parameters of the marginals and of the *n*-dimensional copula, respectively, and let $\theta_{i,i+j|\upsilon_{ij}}$ be the parameters of the copula density $c_{i,i+j|\upsilon_{ij}}$. Finally, define $\theta_{i\to i+j}=\{\theta_{s,s+t|\upsilon_{st}}:(s,s+t)\in w_{ij}\}$, with $\theta_{i\to i}=\varnothing$, and $\theta_j=\{\theta_{s,s+t|\upsilon_{st}}:|\upsilon_{st}|=j-1\}$, where $|\cdot|$ denotes cardinality; i.e. $\theta_j$ gathers all parameters at level *j* of the structure. The density $f(x_1,\dots,x_n;\alpha,\theta)$ associated with a D-vine may be written as^{1}
$$\begin{aligned}
f(x_1,\dots,x_n;\alpha,\theta) = {} & \prod_{k=1}^{n} f(x_k;\alpha_k) \\
& \cdot \prod_{j=1}^{n-1}\prod_{i=1}^{n-j} c_{i,i+j|\upsilon_{ij}}\bigl(F_{i|\upsilon_{ij}}(x_i|x_{\upsilon_{ij}};\alpha_{w_{i,j-1}},\theta_{i\to i+j-1}),\\
& \qquad F_{i+j|\upsilon_{ij}}(x_{i+j}|x_{\upsilon_{ij}};\alpha_{w_{i+1,j-1}},\theta_{i+1\to i+j});\theta_{i,i+j|\upsilon_{ij}}\bigr),
\end{aligned}\tag{10}$$

where index *j* identifies the trees, whereas *i* runs over the edges in each tree. The whole decomposition is given by the $n(n-1)/2$ pair-copulas and the marginal densities of each variable.
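To make the index bookkeeping in eq. (10) concrete, the short loop below enumerates, for $n=5$, the pair $(i, i+j)$ and the conditioning set $\upsilon_{ij}=\{i+1,\dots,i+j-1\}$ attached to each edge (plain Python, nothing assumed beyond the definitions above):

```python
# Enumerate the D-vine edges of eq. (10) for n = 5.
n = 5
edges = []
for j in range(1, n):               # tree level j = 1, ..., n-1
    for i in range(1, n - j + 1):   # edge i within tree j
        v_ij = tuple(range(i + 1, i + j))   # conditioning set (empty in tree 1)
        edges.append((i, i + j, v_ij))

# There are n(n-1)/2 = 10 pair-copulas in total
assert len(edges) == n * (n - 1) // 2

for i, k, cond in edges:
    print(f"c({i},{k} | {cond})")
```

The printed list matches the ten pair-copulas appearing in eq. (11) below, from the four unconditional copulas of tree 1 up to $c_{15|234}$ in tree 4.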

Figure D.1 in the online Appendix depicts a five-dimensional D-vine. A simple manner of decomposing the density $f(x_1,x_2,x_3,x_4,x_5;\alpha,\theta)$ is by multiplying the edges of the nested set of trees and the marginal densities *f*(·), as indicated below:
$$\begin{aligned}
& f_1(x_1;\alpha_1)\cdot f_2(x_2;\alpha_2)\cdot f_3(x_3;\alpha_3)\cdot f_4(x_4;\alpha_4)\cdot f_5(x_5;\alpha_5) \\
& \cdot c_{12}(F_1(x_1;\alpha_1),F_2(x_2;\alpha_2);\theta_{12})\cdot c_{23}(F_2(x_2;\alpha_2),F_3(x_3;\alpha_3);\theta_{23}) \\
& \cdot c_{34}(F_3(x_3;\alpha_3),F_4(x_4;\alpha_4);\theta_{34})\cdot c_{45}(F_4(x_4;\alpha_4),F_5(x_5;\alpha_5);\theta_{45}) \\
& \cdot c_{13|2}(F_{1|2}(x_1|x_2;\alpha_1,\alpha_2,\theta_{12}),F_{3|2}(x_3|x_2;\alpha_2,\alpha_3,\theta_{23});\theta_{13|2}) \\
& \cdot c_{24|3}(F_{2|3}(x_2|x_3;\alpha_2,\alpha_3,\theta_{23}),F_{4|3}(x_4|x_3;\alpha_3,\alpha_4,\theta_{34});\theta_{24|3}) \\
& \cdot c_{35|4}(F_{3|4}(x_3|x_4;\alpha_3,\alpha_4,\theta_{34}),F_{5|4}(x_5|x_4;\alpha_4,\alpha_5,\theta_{45});\theta_{35|4}) \\
& \cdot c_{14|23}\bigl(F_{1|23}(x_1|x_2,x_3;\alpha_1,\alpha_2,\alpha_3,\theta_{12},\theta_{23},\theta_{13|2}),\\
& \qquad\quad F_{4|23}(x_4|x_2,x_3;\alpha_2,\alpha_3,\alpha_4,\theta_{23},\theta_{34},\theta_{24|3});\theta_{14|23}\bigr) \\
& \cdot c_{25|34}\bigl(F_{2|34}(x_2|x_3,x_4;\alpha_2,\alpha_3,\alpha_4,\theta_{23},\theta_{34},\theta_{24|3}),\\
& \qquad\quad F_{5|34}(x_5|x_3,x_4;\alpha_3,\alpha_4,\alpha_5,\theta_{34},\theta_{45},\theta_{35|4});\theta_{25|34}\bigr) \\
& \cdot c_{15|234}\bigl(F_{1|234}(x_1|x_2,x_3,x_4;\alpha_1,\alpha_2,\alpha_3,\alpha_4,\theta_{12},\theta_{23},\theta_{34},\theta_{13|2},\theta_{24|3},\theta_{14|23}),\\
& \qquad\quad F_{5|234}(x_5|x_2,x_3,x_4;\alpha_2,\alpha_3,\alpha_4,\alpha_5,\theta_{23},\theta_{34},\theta_{45},\theta_{24|3},\theta_{35|4},\theta_{25|34});\theta_{15|234}\bigr).
\end{aligned}\tag{11}$$

## 2.1.1 Copula-based Dependence Measures and Tail Dependence in Regular Vine Copulas

Because copulas describe the dependence structure among random variables, it is natural to consider dependence measures expressible in terms of the copula function. Kendall’s tau and tail dependence^{2} are two useful copula-based dependence measures.

Kendall’s tau is defined as the difference between the probability of concordance and the probability of discordance. Let $(X_1,Y_1)$ and $(X_2,Y_2)$ be independent copies of a vector $(X,Y)$ of continuous random variables; then the population version of Kendall’s tau for *X* and *Y* is given by
$$\begin{aligned}
\tau = \tau_{X,Y} &= P[(X_1-X_2)(Y_1-Y_2)>0] - P[(X_1-X_2)(Y_1-Y_2)<0] \\
&= 4\int_0^1\!\!\int_0^1 C(u,v)\,dC(u,v) - 1,
\end{aligned}$$

where *C* is the copula of *X* and *Y*.
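As a quick numerical illustration, for the Gaussian copula Kendall’s tau has the closed form $\tau=(2/\pi)\arcsin\rho$. The sketch below compares the sample estimator from `scipy` on simulated bivariate normal data with this value (the correlation, seed, and sample size are arbitrary choices):

```python
import numpy as np
from scipy.stats import kendalltau

rng = np.random.default_rng(42)
rho = 0.7
sample = rng.multivariate_normal([0.0, 0.0],
                                 [[1.0, rho], [rho, 1.0]], size=100_000)

# Sample estimate vs. the Gaussian-copula closed form tau = (2/pi) arcsin(rho)
tau_hat, _ = kendalltau(sample[:, 0], sample[:, 1])
tau_theory = 2.0 / np.pi * np.arcsin(rho)

print(abs(tau_hat - tau_theory) < 0.01)
```

Since tau depends on the data only through the copula, the same closed form would be obtained after any strictly increasing transformation of the margins.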

Tail dependence measures dependence in extreme values, which makes it an important quantity for risk management. Let $(U_1,U_2)\sim C$. If the limit
$$\lim_{\epsilon\to 0}\Pr\left[U_1\le\epsilon\,|\,U_2\le\epsilon\right]=\lim_{\epsilon\to 0}\Pr\left[U_2\le\epsilon\,|\,U_1\le\epsilon\right]=\lim_{\epsilon\to 0}\frac{C(\epsilon,\epsilon)}{\epsilon}=\lambda_L$$

exists, then the copula *C* has lower tail dependence if $\lambda_L\in(0,1]$ and no lower tail dependence if $\lambda_L=0$. Similarly, if the limit
$$\lim_{\delta\to 1}\Pr\left[U_1>\delta\,|\,U_2>\delta\right]=\lim_{\delta\to 1}\Pr\left[U_2>\delta\,|\,U_1>\delta\right]=\lim_{\delta\to 1}\frac{1-2\delta+C(\delta,\delta)}{1-\delta}=\lambda_U$$

exists, then the copula *C* has upper tail dependence if $\lambda_U\in(0,1]$ and no upper tail dependence if $\lambda_U=0$. In other words, the lower (upper) tail dependence coefficient is the limiting probability that one variable takes an extremely low (high) value, given that the other variable has taken an extremely low (high) value.
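These limits can be evaluated numerically for standard parametric families. A minimal sketch using the Clayton copula, which is lower tail dependent with $\lambda_L=2^{-1/\theta}$, and the Gumbel copula, which is upper tail dependent with $\lambda_U=2-2^{1/\theta}$ ($\theta=2$ is an arbitrary choice):

```python
import numpy as np

theta = 2.0  # arbitrary dependence parameter for both families

def clayton(u, v):
    # Clayton copula CDF: (u^{-theta} + v^{-theta} - 1)^{-1/theta}
    return (u**-theta + v**-theta - 1.0)**(-1.0 / theta)

def gumbel(u, v):
    # Gumbel copula CDF: exp(-[(-ln u)^theta + (-ln v)^theta]^{1/theta})
    return np.exp(-((-np.log(u))**theta + (-np.log(v))**theta)**(1.0 / theta))

# Lower tail: C(eps, eps) / eps  ->  lambda_L = 2^{-1/theta} for Clayton
eps = 1e-6
lam_L = clayton(eps, eps) / eps

# Upper tail: (1 - 2 delta + C(delta, delta)) / (1 - delta)
#          ->  lambda_U = 2 - 2^{1/theta} for Gumbel
delta = 1.0 - 1e-6
lam_U = (1.0 - 2.0 * delta + gumbel(delta, delta)) / (1.0 - delta)

print(abs(lam_L - 2.0**(-1.0 / theta)) < 1e-5)
print(abs(lam_U - (2.0 - 2.0**(1.0 / theta))) < 1e-5)
```

The Gaussian copula, by contrast, has $\lambda_L=\lambda_U=0$ for any $|\rho|<1$, which is one motivation for using non-Gaussian pair-copulas in the first tree of a vine.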

Joe, Li, and Nikoloulopoulos (2010) have established interesting results concerning tail dependence in vine copulas. Their main theorem states that if the support of every pair-copula in the vine is all of $(0,1)^2$ and all the pair-copulas in level 1 have lower (upper) tail dependence, then the vine copula *C* has lower (upper) tail dependence. Moreover, if a copula *C* has multivariate lower (upper) tail dependence, then all bivariate and lower-dimensional margins have lower (upper) tail dependence. Another important finding concerns tail asymmetry: vine copulas can have different upper and lower tail dependence for each bivariate margin when asymmetric bivariate copulas with upper/lower tail dependence are used in level 1 of the vine.
