
# Journal of Causal Inference

Ed. by Imai, Kosuke / Pearl, Judea / Petersen, Maya Liv / Sekhon, Jasjeet / van der Laan, Mark J.

Volume 3, Issue 1

# On the Intersection Property of Conditional Independence and its Application to Causal Discovery

Jonas Peters
Published Online: 2014-10-28 | DOI: https://doi.org/10.1515/jci-2014-0015

## Abstract

This work investigates the intersection property of conditional independence. It states that for random variables $A, B, C$ and $X$ we have that $X \perp\!\!\!\perp A \mid B, C$ and $X \perp\!\!\!\perp B \mid A, C$ implies $X \perp\!\!\!\perp (A, B) \mid C$. Here, "$\perp\!\!\!\perp$" stands for statistical independence. Under the assumption that the joint distribution has a density that is continuous in $A, B$ and $C$, we provide necessary and sufficient conditions under which the intersection property holds. The result has direct applications to causal inference: it leads to strictly weaker conditions under which the graphical structure becomes identifiable from the joint distribution of an additive noise model.

## 1 Introduction

This paper investigates the intersection property of conditional independence. For continuous random variables $A, B, C$ and $X$ this property states that $X \perp\!\!\!\perp A \mid B, C$ and $X \perp\!\!\!\perp B \mid A, C$ implies $X \perp\!\!\!\perp (A, B) \mid C$. Here, "$\perp\!\!\!\perp$" stands for statistical independence and "$\not\perp\!\!\!\perp$" for statistical dependence (see Section 1.2 for precise definitions). The intersection property does not necessarily hold if the joint distribution does not have a density (e.g. Dawid [1]). Dawid [2] provides measure-theoretic necessary and sufficient conditions for the intersection property. In this work we assume the existence of a density; see assumption (A0) below.

It is well known that the intersection property holds if the joint distribution has a strictly positive density (e.g. Pearl [3], 1.1.5). Proposition 1 shows that if the density is not strictly positive, a weaker condition than the intersection property still holds. Corollary 1 states necessary and sufficient conditions for the intersection property. The result about strictly positive densities is contained as a special case. Drton et al. ([4], exercise 6.6) and Fink [5] develop analogous results for the discrete case.

In the remainder of this introduction we discuss the paper’s main contribution (Section 1.1) and introduce the required notation (Section 1.2).

## 1.1 Main contributions

In Section 3 we provide a sufficient and necessary condition on the density for the intersection property to hold (Corollary 1). This result is of interest in itself since the developed condition is weaker than strict positivity.

Studying the intersection property has direct applications to causal inference. Inferring causal relationships is a major challenge in science. In the last decades considerable effort has been made in order to learn causal statements from observational data. As a first step, causal discovery methods therefore estimate graphs from observational data and attach a causal meaning to these graphs (the terminology of causal inference is introduced in Section 4.1). Some causal discovery methods based on structural equation models (SEMs) require the intersection property for identification; they therefore rely on the strict positivity of the density. This is satisfied if the noise variables have full support, for example. Using the new characterization of the intersection property we can now replace the condition of strict positivity. In fact, we show in Section 4 that noise variables with a path-connected support are sufficient for identifiability of the graph (Proposition 3). This is already known for linear SEMs [6] but not for non-linear models. As an alternative, we provide a condition that excludes a specific kind of constant functions and leads to identifiability, too (Proposition 4).

In Section 2, we provide an example of an SEM that violates the intersection property. Its corresponding graph is not identifiable from the joint distribution. Consistent with the theoretical results of this work, some noise densities in the example do not have a path-connected support and the functions are partially constant. We are not aware of any causal discovery method that is able to infer the correct graph or the correct Markov equivalence class; the example therefore shows current limits of causal inference techniques. It is non-generic in the sense that it violates all sufficient assumptions mentioned in Section 4.

All proofs are provided in Appendix A.

## 1.2 Conditional independence and the intersection property

We now formally introduce the concept of conditional independence in the presence of densities and the intersection property. Let therefore $A, B, C$ and $X$ be (possibly multi-dimensional) random variables that take values in metric spaces $\mathcal{A}, \mathcal{B}, \mathcal{C}$ and $\mathcal{X}$, respectively. We first introduce assumptions regarding the existence of a density and some of its properties that appear in different parts of this paper.

• (A0) The distribution is absolutely continuous with respect to a product measure of a metric space. We denote the density by $p(\cdot)$. This can be a probability mass function or a probability density function, for example.

• (A1) The density $(a, b, c) \mapsto p(a, b, c)$ is continuous. If there is no variable $C$ (or $C$ is deterministic), then $(a, b) \mapsto p(a, b)$ is continuous.

• (A2) For each $c$ with $p(c) > 0$ the set $\mathrm{supp}_c(A, B) := \{(a, b) : p(a, b, c) > 0\}$ contains only one path-connected component (see Section 3).

• (A2′) The density $p(\cdot)$ is strictly positive.

Condition (A2′) implies (A2). We assume (A0) throughout the whole work.

In this paper we work with the following definition of conditional independence.

Definition 1 (Conditional (In)dependence). We call $X$ independent of $A$ conditional on $B$ and write $X \perp\!\!\!\perp A \mid B$ if and only if

$$p(x, a \mid b) = p(x \mid b)\, p(a \mid b) \tag{1}$$

for all $x, a, b$ such that $p(b) > 0$. Otherwise, $X$ and $A$ are dependent conditional on $B$ and we write $X \not\perp\!\!\!\perp A \mid B$.

The intersection property of conditional independence is defined as follows (e.g. Pearl [3], 1.1.5).

Definition 2 (Intersection Property). We say that the joint distribution of $X, A, B, C$ satisfies the intersection property if

$$X \perp\!\!\!\perp A \mid B, C \ \text{ and } \ X \perp\!\!\!\perp B \mid A, C \ \Rightarrow\ X \perp\!\!\!\perp (A, B) \mid C. \tag{2}$$

The intersection property (2) has been proven to hold for strictly positive densities (e.g. Pearl [3], 1.1.5). The other direction "$\Leftarrow$" is known as the "weak union" property of conditional independence [3].
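For discrete variables, Definition 1 and the intersection property can be checked numerically. The sketch below constructs a strictly positive pmf of the hypothetical product form $p(x,a,b,c) \propto f(x,c)\,g(a,b,c)$ (a construction chosen for illustration, not from the paper), for which both premises of eq. (2) hold by design, and verifies premises and conclusion via the factorization of Definition 1.

```python
import numpy as np

rng = np.random.default_rng(0)
nX, nA, nB, nC = 2, 3, 3, 2

# Strictly positive pmf p(x,a,b,c) ∝ f(x,c) g(a,b,c): the conditional of X
# given (A, B, C) depends only on C, so both premises of eq. (2) hold.
f = rng.uniform(0.5, 1.5, size=(nX, nC))
g = rng.uniform(0.5, 1.5, size=(nA, nB, nC))
P = np.einsum('xc,abc->xabc', f, g)
P /= P.sum()

def indep_X_vs(P, right, cond):
    """Test X ⊥⊥ X_right | X_cond for a pmf P with axes (X, A, B, C) = (0, 1, 2, 3)
    by checking the factorization p(x, r, s) p(s) = p(x, s) p(r, s)."""
    drop = tuple(a for a in range(4) if a not in (0,) + right + cond)
    M = P.sum(axis=drop, keepdims=True)            # p(x, r, s)
    S = M.sum(axis=(0,) + right, keepdims=True)    # p(s)
    pX = M.sum(axis=right, keepdims=True)          # p(x, s)
    pR = M.sum(axis=0, keepdims=True)              # p(r, s)
    return np.allclose(M * S, pX * pR)

assert indep_X_vs(P, (1,), (2, 3))    # X ⊥⊥ A | B, C   (premise)
assert indep_X_vs(P, (2,), (1, 3))    # X ⊥⊥ B | A, C   (premise)
assert indep_X_vs(P, (1, 2), (3,))    # X ⊥⊥ (A, B) | C (conclusion of eq. (2))
```

Since the pmf is strictly positive, the conclusion is guaranteed by the intersection property; the checks merely illustrate the three statements on a concrete example.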

## 2 Counterexample

We now give an example of a distribution that does not satisfy the intersection property (2). Since the joint distribution has a continuous density, the example shows that the intersection property requires further restrictions on the density apart from its existence. We will later use the same idea to prove Proposition 2, which shows the necessity of our new condition.

Example 1. Consider a so-called additive noise model (ANM; see Section 4.1) for random variables $X, A, B$:

$$A = N_A, \qquad B = A + N_B, \qquad X = f(B) + N_X, \tag{3}$$

where $N_A, N_B, N_X$ are jointly independent, have continuous densities and satisfy $\mathrm{supp}(N_A) := \{n : p_{N_A}(n) > 0\} = (-2, -1) \cup (1, 2)$ and $\mathrm{supp}(N_B) = \mathrm{supp}(N_X) = (-0.3, 0.3)$. Let the function $f$ be of the form

$$f(b) = \begin{cases} 10 & \text{if } b > 0.5, \\ 0 & \text{if } b < -0.5, \\ g(b) & \text{else,} \end{cases} \tag{4}$$

where the function $g$ can be chosen to make $f$ arbitrarily smooth. Some parts of this structural equation model (SEM) are summarized in Figure 1.

The distribution satisfies $X \perp\!\!\!\perp A \mid B$ and $X \perp\!\!\!\perp B \mid A$ but $X \not\perp\!\!\!\perp A$ and $X \not\perp\!\!\!\perp B$. The (intuitive) reason for this is as follows: we see $X \perp\!\!\!\perp A \mid B$ directly from eq. (3). Further, if we know that $A$ (or $B$) is positive, $X$ has to take values close to ten and thus $X \not\perp\!\!\!\perp A$ (resp. $X \not\perp\!\!\!\perp B$); but when we know that $A$ is positive, knowledge of $B$ does not provide any additional information about $X$ ($X \perp\!\!\!\perp B \mid A$). This means that the intersection property is violated. A formal proof is provided in the more general setting of Proposition 2. Within each component, however (that is, considering the areas $A, B > 0$ and $A, B < 0$ separately), we do have the independence statement $X \perp\!\!\!\perp (A, B)$; the intersection property therefore holds "locally". This observation will be formalized as the weak intersection property in Proposition 1.

It will turn out to be important that the two path-connected components of the support of A and B cannot be connected by an axis-parallel line. This motivates the notation introduced in Section 3. Remark 1 in Section 4 discusses the causal interpretation of Example 1.
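The claimed (in)dependences in Example 1 can be checked by simulation. The sketch below picks uniform noise densities (one concrete choice consistent with the stated supports; the example leaves the densities open) and uses correlation as a crude proxy for dependence.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000

# Uniform noise densities consistent with the supports required in Example 1.
N_A = rng.choice([-1.0, 1.0], size=n) * rng.uniform(1.0, 2.0, size=n)  # supp (-2,-1) ∪ (1,2)
N_B = rng.uniform(-0.3, 0.3, size=n)
N_X = rng.uniform(-0.3, 0.3, size=n)

A = N_A
B = A + N_B                              # |B| > 0.7, so the branch f = g(b) is never reached
X = np.where(B > 0.5, 10.0, 0.0) + N_X   # eq. (4) on the reachable region

# Correlation only detects linear dependence, but that suffices here:
# knowing the sign of A pins X near 10 or near 0 ...
assert abs(np.corrcoef(A, X)[0, 1]) > 0.9
# ... while within one path-connected component, X ⊥⊥ (A, B) holds:
pos = A > 0
assert abs(np.corrcoef(A[pos], X[pos])[0, 1]) < 0.02
assert abs(np.corrcoef(B[pos], X[pos])[0, 1]) < 0.02
```

The last two assertions reflect the "local" intersection property described above: restricted to the component $A, B > 0$, the variable $X$ is genuinely independent of $(A, B)$.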

Figure 1

Example 1. The plot on the left-hand side shows the support of variables $A$ and $B$ in black. The function $f$ takes values ten and zero in the areas filled with dark grey and light grey, respectively. The ANM (3) corresponds to the top graph on the right-hand side, but the distribution can also be generated by an ANM with the bottom graph, as explained in Remark 1.

## 3 Necessary and sufficient condition for the intersection property

This section characterizes the intersection property in terms of the joint density over the corresponding random variables. In particular, we state a weak intersection property (Proposition 1) that leads to a necessary and sufficient condition for the classical intersection property, see Corollary 1.

We will see that the intersection property fails in Example 1 because of the two "separated" components in Figure 1. In order to formulate our results we first require the notion of path-connectedness. A continuous mapping $\lambda : [0, 1] \to \mathcal{X}$ into a metric space $\mathcal{X}$ is called a path between $\lambda(0)$ and $\lambda(1)$ in $\mathcal{X}$. A subset $\mathcal{S} \subseteq \mathcal{X}$ is called path-connected if every pair of points in $\mathcal{S}$ can be connected by a path in $\mathcal{S}$. We can always decompose $\mathcal{X}$ into its (disjoint) path-connected components. The following definition provides a formalization of the intuition that the two components in Figure 1 are "separated".

Definition 3. (i) For each $c$ with $p(c) > 0$ we define the (not necessarily closed) support of $A$ and $B$ as

$$\mathrm{supp}_c(A, B) := \{(a, b) : p(a, b, c) > 0\}.$$

We further write for all sets $M \subseteq \mathcal{A} \times \mathcal{B}$

$$\mathrm{proj}_A(M) := \{a \in \mathcal{A} : \exists b \text{ with } (a, b) \in M\} \quad \text{and} \quad \mathrm{proj}_B(M) := \{b \in \mathcal{B} : \exists a \text{ with } (a, b) \in M\}.$$

(ii) We denote the path-connected components of $\mathrm{supp}_c(A, B)$ by $Z_i^c$, $i \in I_Z^c$, with some index set $I_Z^c$. Two path-connected components $Z_{i_1}^c$ and $Z_{i_2}^c$ are said to be coordinate-wise connected if

$$\mathrm{proj}_A(Z_{i_1}^c) \cap \mathrm{proj}_A(Z_{i_2}^c) \neq \varnothing \quad \text{or} \quad \mathrm{proj}_B(Z_{i_1}^c) \cap \mathrm{proj}_B(Z_{i_2}^c) \neq \varnothing.$$

(The intuition is that we can draw an axis-parallel line from $Z_{i_1}^c$ to $Z_{i_2}^c$.)

We then say that $Z_i^c$ and $Z_j^c$ are equivalent if and only if there exists a sequence $Z_i^c = Z_{i_1}^c, \ldots, Z_{i_m}^c = Z_j^c$ with all neighbours $Z_{i_k}^c$ and $Z_{i_{k+1}}^c$ ($k = 1, \ldots, m-1$) being coordinate-wise connected. We represent each equivalence class by the union of all its members; these unions are denoted by $U_i^c$, $i \in I_U^c$.

We further introduce a deterministic function $U^c$ of the variables $A$ and $B$. We set

$$U^c := u^c(A, B) := \begin{cases} i & \text{if } (A, B) \in U_i^c, \\ 0 & \text{if } p(A, B, c) = 0. \end{cases}$$

We have that $U^c = i$ if and only if $A \in \mathrm{proj}_A(U_i^c)$ if and only if $B \in \mathrm{proj}_B(U_i^c)$. Furthermore, the projections $\mathrm{proj}_A(U_i^c)$ are disjoint for different $i$; the same holds for $\mathrm{proj}_B(U_i^c)$.

(iii) The case where there is no variable $C$ can be treated as if $C$ was deterministic: $p(c) = 1$ for some $c$.

In Example 1 there is no variable $C$. Figure 1 shows the support $\mathrm{supp}_c(A, B)$ in black. It contains two path-connected components $Z_1^c$ and $Z_2^c$. Since they cannot be connected by axis-parallel lines, they are not equivalent; thus, one of them corresponds to the equivalence class $U_1^c$ and the other to $U_2^c$. Figure 2 shows another example that contains three equivalence classes of path-connected components; again, there is no variable $C$, so we formally introduce a deterministic variable $C$ that always takes the value $c$.
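The equivalence classes of Definition 3(ii) can be computed with a small union-find sketch when each component's projections are intervals. The interval endpoints below are stand-ins for the two components of Example 1, not data from the paper.

```python
# Hypothetical interval data standing in for the projections proj_A and proj_B of
# each path-connected component Z_i; the merging rule itself is Definition 3(ii).
def overlap(p, q):
    """Do two open intervals (lo, hi) intersect?"""
    return max(p[0], q[0]) < min(p[1], q[1])

def equivalence_classes(components):
    """components[i] = (proj_A of Z_i, proj_B of Z_i), each a (lo, hi) interval.
    Merge coordinate-wise connected components with union-find and return the
    resulting equivalence classes (the index sets of the unions U_i)."""
    parent = list(range(len(components)))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    for i, (ai, bi) in enumerate(components):
        for j, (aj, bj) in enumerate(components[:i]):
            if overlap(ai, aj) or overlap(bi, bj):   # coordinate-wise connected
                parent[find(i)] = find(j)

    classes = {}
    for i in range(len(components)):
        classes.setdefault(find(i), []).append(i)
    return list(classes.values())

# The two components of Example 1 project onto (1, 2) x (0.7, 2.3) and
# (-2, -1) x (-2.3, -0.7); no axis-parallel line connects them:
Z = [((1, 2), (0.7, 2.3)), ((-2, -1), (-2.3, -0.7))]
assert equivalence_classes(Z) == [[0], [1]]          # two classes U_1, U_2
```

For Figure 2, feeding in the projections of all blocks would merge every pair of blocks with the same filling, yielding three classes.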

Figure 2

Each block represents one path-connected component $Z_i^c$ of the support of $p(a, b)$. All blocks with the same filling are equivalent since they can be connected by axis-parallel lines (see Definition 3). There are three different fillings corresponding to the equivalence classes $U_1^c$, $U_2^c$ and $U_3^c$.

Using Definition 3 we are now able to state the two main results, Propositions 1 and 2. As a direct consequence we obtain Corollary 1 which generalizes the condition of strictly positive densities.

Proposition 1 (Weak Intersection Property). Assume (A0), (A1) and that $X \perp\!\!\!\perp A \mid B, C$ and $X \perp\!\!\!\perp B \mid A, C$. Consider now $c$ with $p(c) > 0$ and the variable $U^c$ as defined in Definition 3(ii). We then have the weak intersection property:

$$X \perp\!\!\!\perp (A, B) \mid C = c, U^c.$$

This means that

$$p(x \mid a, b, c) = p(x \mid c, u^c(a, b)) \tag{5}$$

for all $x, a, b$ with $p(a, b, c) > 0$. The values of $A, B$ do not provide additional information if we already know $U^c = u^c(A, B)$.

We call this property the weak intersection property for the following reason: if $X \perp\!\!\!\perp (A, B) \mid C$, then by definition $u^c(a, b) = i$ if and only if $(a, b) \in U_i^c$, and therefore

$$p(x \mid a, b, c) = p(x \mid c) = p\big(x \mid c, (a, b) \in U_{u^c(a,b)}^c\big) = p(x \mid c, u^c(a, b)).$$

In this sense, eq. (5) is strictly weaker than $X \perp\!\!\!\perp (A, B) \mid C$.

Furthermore, Proposition 1 includes the intersection property for positive densities as a special case. If the density is indeed strictly positive, then there is only a single path-connected component $Z_1^c$ and a single equivalence class $U_1^c$. Therefore, $U^c$ is constant and it follows from eq. (5) and Lemma 1 (see "Proof of Proposition 1" in Appendix A) that $X \perp\!\!\!\perp (A, B) \mid C$.

Proposition 2 (Failure of Intersection Property). Assume (A0), (A1) and that there exist two different sets $U_1^{c^*} \neq U_2^{c^*}$ for some $c^*$ with $p(c^*) > 0$. Then there exists a random variable $X$ such that the intersection property (2) does not hold for the joint distribution of $X, A, B, C$.

As a direct corollary from these two propositions we obtain a characterization of the intersection property in the case of continuous densities.

Corollary 1 (Intersection Property). Assume (A0) and (A1).

The intersection property (2) holds for all variables $X$ if and only if all components $Z_i^c$ are equivalent, i.e. there is only one set $U_1^c$.

In particular, this is the case if (A2) holds (there is only one path-connected component) or (A2′) holds (the density is strictly positive).

## 4 Application to causal discovery

We will first introduce some graph notation that we use for formulating the application to causal inference.

## 4.1 Notation and prerequisites

Standard graph definitions can be found in Lauritzen [7], Spirtes et al. [8] and many others. We follow the presentation of Section 1.1 in Peters et al. [9]. A graph $G = (\mathbf{V}, \mathcal{E})$ contains nodes $\mathbf{V} = \{1, \ldots, p\}$ (often identified with random variables $X_1, \ldots, X_p$) and edges $\mathcal{E} \subset \mathbf{V} \times \mathbf{V}$ between nodes. A graph $G_1 = (\mathbf{V}_1, \mathcal{E}_1)$ is called a proper subgraph of $G$ if $\mathbf{V}_1 = \mathbf{V}$ and $\mathcal{E}_1 \subset \mathcal{E}$ with $\mathcal{E}_1 \neq \mathcal{E}$. A node $i$ is called a parent of $j$ if $(i, j) \in \mathcal{E}$ and a child if $(j, i) \in \mathcal{E}$. The set of parents of $j$ is denoted by $\mathbf{PA}_j^G$, the set of its children by $\mathbf{CH}_j^G$. Two nodes $i$ and $j$ are adjacent if either $(i, j) \in \mathcal{E}$ or $(j, i) \in \mathcal{E}$. We say that there is an undirected edge between two adjacent nodes $i$ and $j$ if $(i, j) \in \mathcal{E}$ and $(j, i) \in \mathcal{E}$. An edge between two adjacent nodes is directed if it is not undirected; we then write $i \to j$ for $(i, j) \in \mathcal{E}$. Three nodes form a v-structure if one node is a child of the two others, which themselves are not adjacent. A path in $G$ is a sequence of (at least two) distinct vertices $i_1, \ldots, i_n$ such that there is an edge between $i_k$ and $i_{k+1}$ for all $k = 1, \ldots, n-1$. If $i_k \to i_{k+1}$ for all $k$ we speak of a directed path from $i_1$ to $i_n$ and call $i_n$ a descendant of $i_1$. We denote all descendants of $i$ by $\mathbf{DE}_i^G$ and all non-descendants of $i$, excluding $i$, by $\mathbf{ND}_i^G$. In this work, $i$ is neither a descendant nor a non-descendant of itself.
$G$ is called a directed acyclic graph (DAG) if all edges are directed and there is no pair of nodes $(j, k)$ such that there are directed paths from $j$ to $k$ and from $k$ to $j$. In a DAG, a path between $i_1$ and $i_n$ is blocked by a set $\mathbf{S}$ (containing neither $i_1$ nor $i_n$) whenever there is a node $i_k$ such that one of the following two possibilities holds: (1) $i_k \in \mathbf{S}$ and $i_{k-1} \to i_k \to i_{k+1}$ or $i_{k-1} \leftarrow i_k \leftarrow i_{k+1}$ or $i_{k-1} \leftarrow i_k \to i_{k+1}$; or (2) $i_{k-1} \to i_k \leftarrow i_{k+1}$ and neither $i_k$ nor any of its descendants is in $\mathbf{S}$. We say that two disjoint subsets of vertices $\mathbf{A}$ and $\mathbf{B}$ are d-separated by a third (also disjoint) subset $\mathbf{S}$ if every path between nodes in $\mathbf{A}$ and $\mathbf{B}$ is blocked by $\mathbf{S}$. A joint distribution is said to be Markov with respect to the DAG $G$ if $\mathbf{A}, \mathbf{B}$ d-separated by $\mathbf{C}$ implies $\mathbf{A} \perp\!\!\!\perp \mathbf{B} \mid \mathbf{C}$ for all disjoint sets $\mathbf{A}, \mathbf{B}, \mathbf{C}$. It is said to be faithful to the DAG $G$ if $\mathbf{A} \perp\!\!\!\perp \mathbf{B} \mid \mathbf{C}$ implies that $\mathbf{A}, \mathbf{B}$ are d-separated by $\mathbf{C}$ for all disjoint sets $\mathbf{A}, \mathbf{B}, \mathbf{C}$.
Finally, a distribution satisfies causal minimality with respect to G if it is Markov with respect to G, but not to any proper subgraph of G.
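The d-separation definition above can be checked algorithmically via the classical moralization criterion (Lauritzen [7]): $\mathbf{A}$ and $\mathbf{B}$ are d-separated by $\mathbf{S}$ if and only if they are disconnected, after removing $\mathbf{S}$, in the moralized ancestral graph of $\mathbf{A} \cup \mathbf{B} \cup \mathbf{S}$. The sketch below implements this criterion; the two toy graphs at the end are illustrative, not from the paper.

```python
from collections import defaultdict

def ancestors(dag, nodes):
    """All nodes with a directed path into `nodes`, plus `nodes` itself."""
    seen, stack = set(nodes), list(nodes)
    while stack:
        v = stack.pop()
        for u, w in dag:                  # edge u -> w
            if w == v and u not in seen:
                seen.add(u)
                stack.append(u)
    return seen

def d_separated(dag, A, B, S):
    """Moralization criterion: A and B are d-separated by S iff they are
    disconnected, after removing S, in the moralized ancestral graph."""
    keep = ancestors(dag, set(A) | set(B) | set(S))
    sub = [(u, v) for (u, v) in dag if u in keep and v in keep]
    nbrs, parents = defaultdict(set), defaultdict(set)
    for u, v in sub:
        nbrs[u].add(v)
        nbrs[v].add(u)
        parents[v].add(u)
    for v in keep:                        # moralize: marry parents of a common child
        for p in parents[v]:
            for q in parents[v]:
                if p != q:
                    nbrs[p].add(q)
    reach, stack = set(A) - set(S), [a for a in A if a not in S]
    while stack:
        v = stack.pop()
        for w in nbrs[v]:
            if w not in S and w not in reach:
                reach.add(w)
                stack.append(w)
    return not (reach & set(B))

chain = [("A", "B"), ("B", "X")]          # A -> B -> X, the first DAG of Example 1
assert d_separated(chain, {"X"}, {"A"}, {"B"})       # X d-sep A given B
assert not d_separated(chain, {"X"}, {"A"}, set())
collider = [("A", "X"), ("B", "X")]       # v-structure A -> X <- B
assert d_separated(collider, {"A"}, {"B"}, set())
assert not d_separated(collider, {"A"}, {"B"}, {"X"})  # conditioning opens the collider
```

The collider checks mirror case (2) of the blocking definition: the path $A \to X \leftarrow B$ is blocked by the empty set but unblocked once $X$ enters the conditioning set.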

In order to infer graphs from distributions, one requires assumptions that relate the joint distribution with properties of the graph, which is often assumed to be a DAG. Constraint-based or independence-based methods [3, 8] and some score-based methods [10, 11] assume the Markov condition and faithfulness. These two assumptions make the Markov equivalence class of the correct graph identifiable from the joint distribution, i.e. the skeleton and the v-structures of the graph can be inferred from the joint distribution [12].

Alternatively [6, 9, 13], we can assume an additive noise model (ANM). In these models, the joint distribution over $X_1, \ldots, X_p$ is generated by an SEM

$$X_i = f_i(X_{\mathbf{PA}_i}) + N_i, \qquad i = 1, \ldots, p, \tag{6}$$

with continuous, non-constant functions $f_i$, additive and jointly independent noise variables $N_i$ with mean zero, and sets $\mathbf{PA}_i$ that are the parents of $i$ in a DAG $G$. To simplify notation, we have identified variable $X_i$ with its index (or node) $i$. These models can be shown to satisfy the Markov condition (Pearl [3], theorem 1.4.1); the functions $f_i$ being non-constant corresponds to causal minimality (Peters et al. [9], proposition 17), which is strictly weaker than faithfulness. We now define what we mean by identifiability of the DAG in continuous ANMs. Consider a certain class of SEMs and suppose that the distribution $P = P(X_1, \ldots, X_p)$ is generated from such an SEM. We say that $G$ is identifiable from $P$ if $P$ cannot be generated by an SEM from the same class but with a different graph $H \neq G$.
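Eq. (6) also describes a generative procedure: draw the noise variables independently and evaluate the structural equations in a topological order of $G$. A minimal sketch, with a made-up three-node DAG and made-up (non-constant) functions $f_i$:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 1000

# Illustrative DAG 1 -> 2, 1 -> 3, 2 -> 3 and illustrative functions f_i;
# the noise variables N_i are independent with mean zero, as eq. (6) requires.
parents = {1: [], 2: [1], 3: [1, 2]}
f = {1: lambda: 0.0,
     2: lambda x1: np.sin(x1),
     3: lambda x1, x2: x1 ** 2 - x2}

X = {}
for i in sorted(parents):                 # 1, 2, 3 is a topological order here
    N_i = rng.normal(size=n)              # additive, jointly independent noise
    X[i] = f[i](*(X[j] for j in parents[i])) + N_i
```

Identifiability then asks the converse question: given only the samples in `X`, could the same joint distribution have arisen from an ANM with a different DAG?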

Loosely speaking, Peters et al. ([9], theorem 28) prove that

$(\ast)$ The identifiability of model classes extends from DAGs with two nodes to DAGs with an arbitrary number of variables.

## 4.2 Intersection property and causal discovery

We first revisit Example 1 and interpret it from a causal point of view.

Remark 1 (Example 1 continued). Example 1 has the following important implication for causal inference. The distribution can be generated by two different DAGs, namely $A \to B \to X$ and $X \leftarrow A \to B$; see Figure 1. The SEM (3) corresponds to the former DAG; a slightly modified version of eq. (3), in which $X = \tilde{f}(A) + N_X$ replaces the last equation, corresponds to the latter. The distribution satisfies causal minimality with respect to both DAGs. Since it violates faithfulness and the intersection property, we are not aware of any causal inference method that can recover the correct graph structure from observational data alone. Recall that Peters et al. [9] assume strictly positive densities in order to ensure the intersection property. More precisely, Example 1 shows that lemma 38 in Peters et al. [9] (see Appendix B) no longer holds when positivity is violated.
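Example 1 is stated earlier in the paper and is not reproduced here, but a distribution with the same qualitative features (the support of $A$ is not path-connected, and $f$ is constant on each component) can be simulated as follows; the concrete intervals and function values are our own illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# The support of A consists of two well-separated intervals,
# so it is not path-connected.
component = rng.integers(0, 2, n)          # which interval A falls into
A = rng.uniform(0, 1, n) + 10 * component  # support [0,1] U [10,11]
B = A + rng.uniform(-1, 1, n)              # B inherits the two components

# f is constant on each component of B's support ...
f = np.where(B < 5, 0.0, 1.0)
N_X = rng.normal(0, 0.1, n)
X = f + N_X                                # X = f(B) + N_X, as in A -> B -> X

# ... and since the component is equally a function of A,
# X = f~(A) + N_X with f~ = 0 on [0,1] and 1 on [10,11], as in X <- A -> B:
f_tilde = np.where(A < 5, 0.0, 1.0)
assert np.allclose(X, f_tilde + N_X)
```

Given $B$ (and hence the component), $A$ carries no further information about $X$, and vice versa, so $X \perp\!\!\!\perp A \mid B$ and $X \perp\!\!\!\perp B \mid A$; yet $X$ clearly depends on $(A, B)$, so the intersection property fails (here with $C$ trivial).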

In order to prove $(\ast)$, Peters et al. [9] require a strictly positive density: the key result used in the proof is proposition 29, which is proved using lemma 38, which in turn relies on the intersection property (proposition 29 and lemma 38 are provided in Appendix B). But since Corollary 1 provides a weaker assumption for the intersection property, we are now able to obtain new identifiability results.

Proposition 3. Assume that a joint distribution over $X_1, \dots, X_p$ is generated by an ANM (6). Assume further that the noise variables have continuous densities and that the support of each noise variable $N_i$, $i = 1, \dots, p$, is path-connected. Then statement $(\ast)$ holds.

Example 1 violates the assumptions of Proposition 3 since the support of $A$ is not path-connected. It has another important property, too: the function $f$ is constant on some intervals. The following proposition shows that such constancy is necessary for a violation of identifiability.

Proposition 4. Assume that a joint distribution over $X_1, \dots, X_p$ is generated by an ANM (6) with graph $G$. Let us denote the non-descendants of $X_i$ by $\mathbf{ND}_i^G$. Assume that the structural equations are non-constant in the following way: for every $X_i$, every parent $X_j \in \mathbf{PA}_i$ and every $X_{\mathbf{C}} \subseteq \mathbf{ND}_i^G \setminus \{X_j\}$, there are $(x_j, x_j', x_k, x_c)$ such that $f_i(x_j, x_k) \neq f_i(x_j', x_k)$, $p(x_j, x_k, x_c) > 0$ and $p(x_j', x_k, x_c) > 0$. Here, $x_k$ represents the value of all parents of $X_i$ except $X_j$. Then for any $\mathbf{PA}_i \setminus \{j\} \subseteq \mathbf{S} \subseteq \mathbf{ND}_i^G \setminus \{j\}$, it holds that $X_i \not\perp\!\!\!\perp X_j \mid \mathbf{S}$. Therefore, statement $(\ast)$ follows.

Proposition 4 provides an alternative way to prove identifiability. The results are summarized in Table 1.

Table 1

This table shows conditions for continuous additive noise models (ANMs) under which the directed acyclic graph is identifiable from the joint distribution. Using the characterization of the intersection property, we could weaken the condition of a strictly positive density.

## 5 Conclusions

It is possible to prove the intersection property of conditional independence for variables whose distributions do not have a strictly positive density. A necessary and sufficient condition for the intersection property is that all path-connected components of the support of the density are equivalent, that is, they can be connected by axis-parallel lines. In particular, this condition is satisfied for densities whose support is path-connected. In the general case, the intersection property still holds after conditioning on an equivalence class of path-connected components; we call this the weak intersection property. We believe that the assumption of a density that is continuous in $A$, $B$ and $C$ can be weakened even further.

This insight has a direct application in causal inference (of a rather theoretical nature, with limited implications for practical methods). In the context of continuous ANMs, we relax important conditions for identifiability of the graph from the joint distribution. Furthermore, there is some interest in uniform consistency in causal inference. For linear Gaussian SEMs, for example, the PC algorithm [8] exploits conditional independences, that is, vanishing partial correlations. Zhang and Spirtes [14] prove uniform consistency under the assumption that non-vanishing partial correlations cannot be arbitrarily close to zero (a condition referred to as "strong faithfulness"). Our work suggests that in order to prove uniform consistency for continuous ANMs, one may need to be "bounded away" from Example 1.
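As an illustration of the vanishing partial correlations mentioned above, the following is a minimal residual-based sketch of a partial-correlation computation (the helper function is our own, not an implementation from [8] or [14]):

```python
import numpy as np

def partial_corr(x, y, Z):
    """Sample partial correlation of x and y given the columns of Z,
    computed by correlating the residuals of least-squares regressions
    of x and of y on Z (with an intercept)."""
    Z1 = np.column_stack([np.ones(len(x)), Z])
    rx = x - Z1 @ np.linalg.lstsq(Z1, x, rcond=None)[0]
    ry = y - Z1 @ np.linalg.lstsq(Z1, y, rcond=None)[0]
    return np.corrcoef(rx, ry)[0, 1]

rng = np.random.default_rng(2)
n = 50_000
# Linear Gaussian chain X -> Y -> W: rho(X, W) != 0, but rho(X, W | Y) = 0.
X = rng.normal(size=n)
Y = 2.0 * X + rng.normal(size=n)
W = -1.5 * Y + rng.normal(size=n)

print(partial_corr(X, W, Y[:, None]))  # close to zero
print(np.corrcoef(X, W)[0, 1])        # clearly non-zero
```

In the linear Gaussian setting, a vanishing partial correlation is exactly a conditional independence; "strong faithfulness" requires the non-vanishing ones to stay bounded away from zero.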

## Acknowledgements

The author thanks the anonymous reviewers for their insightful and constructive comments. He further thanks Thomas Kahle and Mathias Drton for pointing out the link to algebraic statistics for discrete variables. The research leading to these results has received funding from the People Programme (Marie Curie Actions) of the European Union’s Seventh Framework Programme (FP7/2007-2013) under REA grant agreement no 326496.

## Proof of Proposition 1

We require the following well-known lemma (e.g. Dawid [15]).

Lemma 1. We have $X \perp\!\!\!\perp A \mid B$ if and only if $p(x \mid a, b) = p(x \mid b)$ for all $x, a, b$ such that $p(a, b) > 0$.

Proof (of Proposition 1). To simplify notation we write $u^c := u^c(a, b)$. By Lemma 1 we have

$$p(x \mid b, c) = p(x \mid a, b, c) = p(x \mid a, c) \qquad (7)$$

for all $x, a, b, c$ with $p(a, b, c) > 0$. As the main argument we show that

$$p(x \mid b, c) = p(x \mid \tilde{b}, c) \qquad (8)$$

for all $x, b, \tilde{b}, c$ with $b, \tilde{b} \in \mathrm{proj}_B(U_i^c)$ for the same $i$.

Step 1: we prove eq. (8) for $b, \tilde{b} \in Z_i^c$. We first show that there is a path $\lambda: t \mapsto (a(t), b(t))$ such that $p(a(t), b(t), c) > 0$ for all $0 \le t \le 1$, with $b(0) = b$ and $b(1) = \tilde{b}$. Since the interval $[0, 1]$ is compact and $\lambda$ is continuous, the path $\{(a(t), b(t)) : 0 \le t \le 1\}$ is compact, too (for notational simplicity we identify the path $\lambda$ with its image). For each point $(a(t), b(t))$ on the path, define an open ball with radius small enough that all $(a, b)$ in the ball satisfy $p(a, b, c) > 0$ (this is possible because $(a, b, c) \mapsto p(a, b, c)$ is assumed to be continuous). Because these balls are path-connected, they also lie in $Z_i^c$. They form an open cover of the path $\{(a(t), b(t)) : 0 \le t \le 1\}$, and we can thus choose a finite subset of balls, of size $n$ say, that still provides an open cover of the path. Without loss of generality let $(a(0), b(0))$ be the centre of ball 1 and $(a(1), b(1))$ be the centre of ball $n$. It suffices to show that eq. (8) holds for the centres of two neighbouring balls, say $(a_1, b_1)$ and $(a_2, b_2)$. Choose a point $(a^\ast, b^\ast)$ from the non-empty intersection of those two balls. Since $d((a_1, b_1), (a^\ast, b_1))$ and $d((a_2, b_2), (a_2, b^\ast))$ are smaller than the radii of balls 1 and 2, respectively (for the Euclidean metric $d$), we have that $p(a_1, b_1, c)$, $p(a^\ast, b_1, c)$, $p(a^\ast, b^\ast, c)$, $p(a_2, b^\ast, c)$ and $p(a_2, b_2, c)$ are all greater than zero. Therefore, using eq. (7) several times,

$$p(x \mid b_1, c) = p(x \mid a_1, c) = p(x \mid a^\ast, c) = p(x \mid b^\ast, c) = p(x \mid a_2, c) = p(x \mid b_2, c).$$

This shows eq. (8) for $b, \tilde{b} \in Z_i^c$.

Step 2: we prove eq. (8) for $b \in Z_i^c$ and $\tilde{b} \in Z_{i+1}^c$, where $Z_i^c$ and $Z_{i+1}^c$ are coordinate-wise connected (and thus equivalent). If $b^\ast \in \mathrm{proj}_B(Z_i^c) \cap \mathrm{proj}_B(Z_{i+1}^c)$, we know that $p(x \mid b, c) = p(x \mid b^\ast, c) = p(x \mid \tilde{b}, c)$ by the argument given in step 1. If $a^\ast \in \mathrm{proj}_A(Z_i^c) \cap \mathrm{proj}_A(Z_{i+1}^c)$, then there are $b_i, b_{i+1}$ such that $(a^\ast, b_i) \in Z_i^c$ and $(a^\ast, b_{i+1}) \in Z_{i+1}^c$. By eq. (7) and the argument from step 1 we have $p(x \mid b, c) = p(x \mid b_i, c) = p(x \mid b_{i+1}, c) = p(x \mid \tilde{b}, c)$.

We can now combine these two steps in order to prove the original claim from eq. (8). If $b, \tilde{b} \in \mathrm{proj}_B(U_i^c)$, then $b \in \mathrm{proj}_B(Z_1^c)$ and $\tilde{b} \in \mathrm{proj}_B(Z_n^c)$, say. Further, there is a sequence $Z_1^c, \dots, Z_n^c$ with $Z_k^c$ and $Z_{k+1}^c$ being coordinate-wise connected for $k = 1, \dots, n-1$. Combining steps 1 and 2 proves eq. (8).

Consider now $x, b, c$ such that $p(b, c) > 0$ (which implies $p(c) > 0$) and consider $u^c = i$, say. Observe further that $p(a, c) > 0$ for $a \in \mathrm{proj}_A(U_i^c)$. We thus have

$$\begin{aligned}
p(x, u^c \mid c) &= \int_a p(x, a, u^c \mid c)\, da = \int_{a \in \mathrm{proj}_A(U_i^c)} p(x, a \mid c)\, da \\
&= \int_{a \in \mathrm{proj}_A(U_i^c)} \frac{p(x, a, c)\, p(a, c)}{p(c)\, p(a, c)}\, da \\
&= \int_{a \in \mathrm{proj}_A(U_i^c)} p(x \mid a, c)\, p(a \mid c)\, da \\
&= \int_{a \in \mathrm{proj}_A(U_i^c),\, p(a, b, c) > 0} p(x \mid a, c)\, p(a \mid c)\, da + \int_{a \in \mathrm{proj}_A(U_i^c),\, p(a, b, c) = 0} p(x \mid a, c)\, p(a \mid c)\, da \\
&\overset{(7)}{=} p(x \mid b, c) \int_{a \in \mathrm{proj}_A(U_i^c),\, p(a, b, c) > 0} p(a \mid c)\, da + \int_{\mathcal{A}_b} p(x \mid a, c)\, p(a \mid c)\, da =: (\#)
\end{aligned}$$

with $\mathcal{A}_b = \{a \in \mathrm{proj}_A(U_i^c) : p(a, b, c) = 0\}$. It is the case, however, that for all $a \in \mathcal{A}_b$ there is a $\tilde{b}(a) \in \mathrm{proj}_B(U_i^c)$ with $p(a, \tilde{b}(a), c) > 0$. But since also $b \in \mathrm{proj}_B(U_i^c)$, we have $p(x \mid \tilde{b}(a), c) = p(x \mid b, c)$ by eq. (8). Ergo,

$$\begin{aligned}
(\#) &= p(x \mid b, c) \int_{a \in \mathrm{proj}_A(U_i^c),\, p(a, b, c) > 0} p(a \mid c)\, da + \int_{\mathcal{A}_b} p(x \mid a, \tilde{b}(a), c)\, p(a \mid c)\, da \\
&= p(x \mid b, c) \int_{a \in \mathrm{proj}_A(U_i^c),\, p(a, b, c) > 0} p(a \mid c)\, da + p(x \mid b, c) \int_{\mathcal{A}_b} p(a \mid c)\, da \\
&= p(x \mid b, c) \int_{a \in \mathrm{proj}_A(U_i^c)} p(a \mid c)\, da \\
&= p(x \mid b, c)\, p(u^c \mid c).
\end{aligned}$$

This implies $p(x \mid c, u^c) = p(x \mid b, c)$. Together with eq. (7) this leads to

$$p(x \mid a, b, c, u^c) = p(x \mid a, b, c) = p(x \mid c, u^c). \qquad \square$$

## Proof of Proposition 2

Proof. Define $X$ according to $X = g(C, U^C) + N_X$, where $N_X \sim \mathcal{U}([-0.1, 0.1])$ is uniformly distributed with $N_X$ independent of $(A, B, C)$. Define $g$ according to

$$g(c, u^c) = \begin{cases} 10 & \text{if } c = c^\ast \text{ and } u^{c^\ast} = 1, \\ 0 & \text{otherwise.} \end{cases}$$

Fix a value $c$ with $p(c) > 0$. We then have for all $a, b$ with $p(a, b, c) > 0$ that $p(x \mid a, b, c) = p(x \mid c, u^c) = p(x \mid a, c) = p(x \mid b, c)$ because $U^c$ can be written as a function of $A$ or of $B$. We therefore have that $X \perp\!\!\!\perp A \mid B, C$ and $X \perp\!\!\!\perp B \mid A, C$. Depending on whether $b$ is in $\mathrm{proj}_B(U_1^{c^\ast})$ or not, we have $p(x = 0 \mid b, c^\ast) = 0$ or $p(x = 10 \mid b, c^\ast) = 0$, respectively. Thus,

$$p(x = 10 \mid b, c^\ast) \cdot p(x = 0 \mid b, c^\ast) = 0, \quad \text{whereas} \quad p(x = 10 \mid c^\ast) \cdot p(x = 0 \mid c^\ast) \neq 0.$$

This shows that $X \not\perp\!\!\!\perp B \mid C = c^\ast$. Note that $(x, a, b, c) \mapsto p(x, a, b, c)$ is not necessarily continuous; see (A1). □

## Proof of Proposition 3

Proof. Since the true structure corresponds to a DAG, we can find a causal ordering, i.e. a permutation $\pi: \{1, \dots, p\} \to \{1, \dots, p\}$ such that $\mathbf{PA}_{\pi(i)} \subseteq \{\pi(1), \dots, \pi(i-1)\}$. In this ordering, $\pi(1)$ is a source node and $\pi(p)$ is a sink node. We can then rewrite the structural equation model in eq. (6) as

$$X_{\pi(i)} = \tilde{f}_{\pi(i)}(X_{\pi(1)}, \dots, X_{\pi(i-1)}) + N_{\pi(i)},$$

where the functions $\tilde{f}_i$ are the same as the $f_i$ except that they are constant in the additional input arguments.
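A causal ordering $\pi$ of this kind can be computed with a standard topological sort. The following sketch (the parent-dictionary representation and function name are our own) uses Kahn's algorithm:

```python
from collections import deque

def causal_ordering(parents):
    """Return a topological order of the nodes of a DAG, given as a dict
    node -> list of parents, such that every node appears after all of its
    parents (Kahn's algorithm)."""
    remaining = {v: set(ps) for v, ps in parents.items()}
    children = {v: [] for v in parents}
    for v, ps in parents.items():
        for p in ps:
            children[p].append(v)
    order = []
    sources = deque(v for v, ps in remaining.items() if not ps)
    while sources:
        v = sources.popleft()
        order.append(v)
        for c in children[v]:
            remaining[c].discard(v)
            if not remaining[c]:
                sources.append(c)
    if len(order) != len(parents):
        raise ValueError("graph contains a directed cycle")
    return order
```

For instance, `causal_ordering({'X1': [], 'X2': ['X1'], 'X3': ['X1', 'X2']})` places `X1` before `X2` before `X3`.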

The density of the random vector $(X_1, \dots, X_p)$ has path-connected support by the following argument: consider a one-dimensional random variable $N$ with mean zero and a (possibly multivariate) random vector $X$, both with path-connected support, and a continuous function $f$. Then the support of the random vector $(X, f(X) + N)$ is path-connected, too. Indeed, consider two points $(x_0, y_0)$ and $(x_1, y_1)$ from the support of $(X, f(X) + N)$. The path can then be constructed by concatenating three sub-paths: (1) the path between $(x_0, y_0)$ and $(x_0, f(x_0))$ ($N$'s support is path-connected), (2) the path between $(x_0, f(x_0))$ and $(x_1, f(x_1))$ on the graph of $f$ (which is path-connected due to the continuity of $f$) and (3) the path between $(x_1, f(x_1))$ and $(x_1, y_1)$, analogously to (1).
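One explicit parametrization of the concatenated path is the following sketch (our own; it assumes, as in the argument above, a path $s \mapsto x(s)$ from $x_0$ to $x_1$ inside the support of $X$, and that the vertical segments stay inside the support of $N$ shifted by $f$):

```latex
\gamma(t) =
\begin{cases}
\bigl(x_0,\; (1-3t)\,y_0 + 3t\, f(x_0)\bigr) & 0 \le t \le \tfrac{1}{3},\\[2pt]
\bigl(x(3t-1),\; f(x(3t-1))\bigr) & \tfrac{1}{3} \le t \le \tfrac{2}{3},\\[2pt]
\bigl(x_1,\; (3-3t)\, f(x_1) + (3t-2)\, y_1\bigr) & \tfrac{2}{3} \le t \le 1.
\end{cases}
```

The three cases correspond exactly to sub-paths (1), (2) and (3), and $\gamma$ is continuous because the pieces agree at $t = \tfrac{1}{3}$ and $t = \tfrac{2}{3}$.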

Therefore, the intersection property (2) holds for any disjoint sets of variables $X, A, B, C \subseteq \{X_1, \dots, X_p\}$ by Proposition 1. Thus, the statements of lemma 38, and hence of proposition 29, from Peters et al. [9] remain correct, which proves $(\ast)$ for noise variables with continuous densities and path-connected support. □

## Proof of Proposition 4

Proof. The proof is immediate: since $p(x_i \mid x_j, x_k, x_c) \neq p(x_i \mid x_j', x_k, x_c)$ (the means are not the same), the statement follows from Lemma 1.

In this case, lemma 38 might not hold, but, more importantly, proposition 29 does (both from Peters et al. [9]). This proves $(\ast)$. □

## Technical results for identifiability in additive noise models

We provide the two key results required for proving property $\left(\ast \right)$ in Section 4.1. The intersection property is used to prove the “only if” part of lemma 38, which itself is used to prove proposition 29.

Lemma 38 [9]. Consider the random vector $\mathbf{X}$ and assume that the joint distribution has a (strictly) positive density. Then the joint distribution over $\mathbf{X}$ satisfies causal minimality with respect to a DAG $G$ if and only if for all $B \in \mathbf{X}$, all $A \in \mathbf{PA}_B^G$ and all $\mathbf{S} \subseteq \mathbf{X}$ with $\mathbf{PA}_B^G \setminus \{A\} \subseteq \mathbf{S} \subseteq \mathbf{ND}_B^G \setminus \{A\}$ we have that $B \not\perp\!\!\!\perp A \mid \mathbf{S}$.

Proposition 29 [9]. Let $G$ and $G'$ be two different DAGs over variables $\mathbf{X}$. Assume that the joint distribution over $\mathbf{X}$ has a strictly positive density and satisfies the Markov condition and causal minimality with respect to $G$ and $G'$. Then there are variables $L, Y \in \mathbf{X}$ such that for the sets $\mathbf{Q} := \mathbf{PA}_L^G \setminus \{Y\}$, $\mathbf{R} := \mathbf{PA}_Y^{G'} \setminus \{L\}$ and $\mathbf{S} := \mathbf{Q} \cup \mathbf{R}$ we have:

• $Y \to L$ in $G$ and $L \to Y$ in $G'$,

• $\mathbf{S} \subseteq \mathbf{ND}_L^G \setminus \{Y\}$ and $\mathbf{S} \subseteq \mathbf{ND}_Y^{G'} \setminus \{L\}$.

## References

1. Dawid AP. Some misleading arguments involving conditional independence. J R Stat Soc Ser B 1979;41:249–52.

2. Dawid AP. Conditional independence for statistical operations. Ann Stat 1980;8:598–617.

3. Pearl J. Causality: models, reasoning, and inference, 2nd ed. New York, NY: Cambridge University Press, 2009.

4. Drton M, Sturmfels B, Sullivant S. Lectures on algebraic statistics. Volume 39 of Oberwolfach Seminars. Basel: Birkhäuser Verlag, 2009.

5. Fink A. The binomial ideal of the intersection axiom for conditional probabilities. J Algebraic Combinatorics 2011;33:455–63.

6. Shimizu S, Hoyer PO, Hyvärinen A, Kerminen AJ. A linear non-Gaussian acyclic model for causal discovery. J Mach Learn Res 2006;7:2003–30.

7. Lauritzen S. Graphical models. New York, NY: Oxford University Press, 1996.

8. Spirtes P, Glymour C, Scheines R. Causation, prediction, and search, 2nd ed. Cambridge, MA: MIT Press, 2000.

9. Peters J, Mooij JM, Janzing D, Schölkopf B. Causal discovery with continuous additive noise models. J Mach Learn Res 2014;15:2009–53.

10. Chickering DM. Optimal structure identification with greedy search. J Mach Learn Res 2002;3:507–54.

11. Heckerman D, Meek C, Cooper G. A Bayesian approach to causal discovery. In: Glymour C, Cooper G, editors. Computation, causation, and discovery. Cambridge, MA: MIT Press, 1999:141–65.

12. Verma T, Pearl J. Equivalence and synthesis of causal models. In: Bonissone PB, Henrion M, Kanal LN, Lemmer JF, editors. Proceedings of the 6th annual conference on uncertainty in artificial intelligence (UAI). San Francisco, CA: Morgan Kaufmann, 1991:255–70.

13. Hoyer PO, Janzing D, Mooij JM, Peters J, Schölkopf B. Nonlinear causal discovery with additive noise models. In: Koller D, Schuurmans D, Bengio Y, Bottou L, editors. Advances in neural information processing systems 21 (NIPS). Red Hook, NY: Curran Associates, Inc., 2009:689–96.

14. Zhang J, Spirtes P. Strong faithfulness and uniform consistency in causal inference. In: Meek C, Kjærulff U, editors. Proceedings of the 19th annual conference on uncertainty in artificial intelligence (UAI). San Francisco, CA: Morgan Kaufmann, 2003:632–9.

15. Dawid AP. Conditional independence in statistical theory. J R Stat Soc Ser B 1979;41:1–31.

## Footnotes

• 1

Formally, path-connected components are equivalence classes of points, where two points are equivalent if there exists a path in $\mathcal{X}$ connecting them. This equivalence should not be confused with the equivalence appearing in Definition 3(ii).

Published Online: 2014-10-28

Published in Print: 2015-03-01

Citation Information: Journal of Causal Inference, Volume 3, Issue 1, Pages 97–108, ISSN (Online) 2193-3685, ISSN (Print) 2193-3677.
