
# Journal of Causal Inference

Ed. by Imai, Kosuke / Pearl, Judea / Petersen, Maya Liv / Sekhon, Jasjeet / van der Laan, Mark J.

2 Issues per year

Online
ISSN
2193-3685
Volume 2, Issue 2 (Sep 2014)

# The Deductive Approach to Causal Inference

Judea Pearl
• Corresponding author
• Department of Computer Science, University of California – Los Angeles, Los Angeles, CA, 90095-1596, USA
Published Online: 2014-04-05 | DOI: https://doi.org/10.1515/jci-2014-0016

## Abstract

This paper reviews concepts, principles, and tools that have led to a coherent mathematical theory that unifies the graphical, structural, and potential outcome approaches to causal inference. The theory provides solutions to a number of pending problems in causal analysis, including questions of confounding control, policy analysis, mediation, missing data, and the integration of data from diverse studies.

## 1 Introduction

Recent advances in causal inference owe their development to two methodological principles that I call “the deductive approach.” First, a commitment to understanding what reality must be like for a statistical routine to succeed and, second, a commitment to represent reality in terms of data-generating models, rather than distributions of observed variables.

Encoded as nonparametric structural equations, these models have led to a fruitful symbiosis between graphs and counterfactuals and have unified the potential outcome framework of Neyman, Rubin, and Robins with the econometric tradition of Haavelmo, Marschak, and Heckman. In this symbiosis, counterfactuals emerge as natural byproducts of structural equations and serve to formally articulate research questions of interest. Graphical models, on the other hand, are used to encode scientific assumptions in a qualitative (i.e. nonparametric) and transparent language and to identify the logical ramifications of these assumptions, in particular their testable implications.

In Section 2, we define structural causal models (SCMs) and state the two fundamental laws of causal inference: (1) how counterfactuals and probabilities of counterfactuals are deduced from a given SCM and (2) how features of the observed data are shaped by the graphical structure of a SCM.

Section 3 reviews the challenge of identifying causal parameters and presents a complete solution to the problem of nonparametric identification of causal effects. Given data from observational studies and qualitative assumptions in the form of a graph with measured and unmeasured variables, we need to decide algorithmically whether the assumptions are sufficient for identifying causal effects of interest, what covariates should be measured, and what the statistical estimand is of the identified effect.

Section 4 summarizes mathematical results concerning nonparametric mediation, which aims to estimate the extent to which a given effect is mediated by various pathways or mechanisms. A simple set of conditions will be presented for estimating natural direct and indirect effects in nonparametric models.

Section 5 deals with the problem of “generalizability” or “external validity”: under what conditions can we take experimental results from one or several populations and apply them to another population which is potentially different from the rest. A complete solution to this problem will be presented in the form of an algorithm which decides whether a specific causal effect is transportable and, if the answer is affirmative, what measurements need be taken in the various populations and how they ought to be combined.

Finally, Section 6 describes recent work on missing data and shows that, by viewing missing data as a causal inference task, the space of problems can be partitioned into two algorithmically recognized categories: those that permit consistent recovery from missingness and those that do not.

To facilitate clarity and accessibility, the major mathematical results will be highlighted in the form of four “Summary Results” in Sections 3–6.

## 2 Counterfactuals and SCM

At the center of the structural theory of causation lies a “structural model,” M, consisting of two sets of variables, U and V, and a set F of functions that determine or simulate how values are assigned to each variable ${V}_{i}\in V$. Thus, for example, the equation ${v}_{i}={f}_{i}\left(v,u\right)$ describes a physical process by which variable ${V}_{i}$ is assigned the value ${v}_{i}={f}_{i}\left(v,u\right)$ in response to the current values, v and u, of all variables in V and U. Formally, the triplet $\langle U,V,F\rangle$ defines an SCM, and the diagram that captures the relationships among the variables is called the causal graph G (of M). The variables in U are considered “exogenous,” namely, background conditions for which no explanatory mechanism is encoded in model M. Every instantiation $U=u$ of the exogenous variables uniquely determines the values of all variables in V and, hence, if we assign a probability $P\left(u\right)$ to U, it defines a probability function $P\left(v\right)$ on V. The vector $U=u$ can also be interpreted as an experimental “unit” which can stand for an individual subject, agricultural lot, or time of day, since it describes all factors needed to make V a deterministic function of U.
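As an illustration, a minimal SCM can be coded directly as a pair of routines: one that samples the exogenous variables U, and one that solves the structural equations F for V. The variables, functions, and parameters below are purely illustrative assumptions, not taken from the paper:

```python
import random

# A minimal SCM sketch (illustrative names and equations):
# U = {u1, u2} exogenous, V = {X, Y} endogenous,
# F assigns each V_i a value from its parents and its U term.
def sample_unit(rng):
    """Draw one instantiation U = u of the exogenous variables."""
    return {"u1": rng.random() < 0.5, "u2": rng.random() < 0.3}

def solve(u):
    """Solve the structural equations for V given U = u (recursive model)."""
    x = int(u["u1"])              # x = f_X(u1)
    y = int(x == 1 or u["u2"])    # y = f_Y(x, u2)
    return {"X": x, "Y": y}

# P(u) on U induces P(v) on V: every unit u determines V uniquely.
rng = random.Random(0)
samples = [solve(sample_unit(rng)) for _ in range(10_000)]
p_y1 = sum(s["Y"] for s in samples) / len(samples)
```

Analytically, this toy model gives $P(Y=1)=P(u_1)+P(\neg u_1)P(u_2)=0.5+0.5\cdot 0.3=0.65$, which the Monte Carlo estimate approximates.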

The basic counterfactual entity in structural models is the sentence: “Y would be y had X been x in unit (or situation) $U=u$,” denoted ${Y}_{x}\left(u\right)=y$. Letting ${M}_{x}$ stand for a modified version of M, with the equation(s) of set X replaced by $X=x$, the formal definition of the counterfactual ${Y}_{x}\left(u\right)$ reads ${Y}_{x}\left(u\right)\stackrel{\mathrm{\Delta }}{=}{Y}_{{M}_{x}}\left(u\right).$(1) In words, the counterfactual ${Y}_{x}\left(u\right)$ in model M is defined as the solution for Y in the “modified” submodel ${M}_{x}$. Galles and Pearl [1] and Halpern [2] have given a complete axiomatization of structural counterfactuals, embracing both recursive and non-recursive models (see also Pearl [3, Chapter 7]).

Since the distribution $P\left(u\right)$ induces a well-defined probability on the counterfactual event ${Y}_{x}=y$, it also defines a joint distribution on all Boolean combinations of such events, for instance “${Y}_{x}=y$ AND ${Z}_{{x}^{\prime }}=z$,” which may appear contradictory, if $x\ne {x}^{\prime }$. For example, to answer retrospective questions, such as whether Y would be ${y}_{1}$ if X were ${x}_{1}$, given that in fact Y is ${y}_{0}$ and X is ${x}_{0}$, we need to compute the conditional probability $P\left({Y}_{{x}_{1}}={y}_{1}|Y={y}_{0},X={x}_{0}\right)$ which is well defined once we know the forms of the structural equations and the distribution of the exogenous variables in the model.

In general, the probability of the counterfactual sentence $P\left({Y}_{x}=y|e\right)$, where e is any propositional evidence, can be computed by the three-step process (illustrated in Pearl [3, p. 207]):

• Step 1 (abduction): Update the probability $P\left(u\right)$ to obtain $P\left(u|e\right)$.

• Step 2 (action): Replace the equations determining the variables in set X by $X=x$.

• Step 3 (prediction): Use the modified model to compute the probability of $Y=y$.

In temporal metaphors, Step 1 explains the past $\left(U\right)$ in light of the current evidence e; Step 2 bends the course of history (minimally) to comply with the hypothetical antecedent $X=x$; finally, Step 3 predicts the future $\left(Y\right)$ based on our new understanding of the past and our newly established condition, $X=x$.
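The three-step process can be sketched by rejection sampling in a toy SCM (all functions and parameters below are illustrative assumptions, not from the paper): abduction keeps the units u compatible with the evidence, action replaces the equation for X, and prediction averages Y in the modified model:

```python
import random

rng = random.Random(1)

# Toy SCM (illustrative):  X = u1;  Y = (X or u2) and u3
def sample_u(rng):
    return {"u1": rng.random() < 0.5,
            "u2": rng.random() < 0.3,
            "u3": rng.random() < 0.8}

def solve(u, do_x=None):
    x = int(u["u1"]) if do_x is None else do_x  # Step 2 (action): replace f_X by X = x
    y = int((x or u["u2"]) and u["u3"])
    return x, y

# Step 1 (abduction): keep only units consistent with evidence X=0, Y=0.
units = [sample_u(rng) for _ in range(200_000)]
posterior = [u for u in units if solve(u) == (0, 0)]

# Step 3 (prediction): set X=1 and predict Y on the updated P(u | e).
p = sum(solve(u, do_x=1)[1] for u in posterior) / len(posterior)
```

In this model the evidence forces $u_1=0$ and rules out $u_2\wedge u_3$, so analytically $P(Y_{X=1}=1\mid X=0,Y=0)=P(u_3\mid \neg(u_2\wedge u_3))=0.56/0.76\approx 0.74$.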

## 2.1 The two principles of causal inference

Before describing specific applications of the structural theory, it will be useful to summarize its implications in the form of two “principles,” from which all other results follow.

• Principle 1: “The law of structural counterfactuals.”

• Principle 2: “The law of structural independence.”

The first principle is described in eq. (1) and instructs us how to compute counterfactuals and probabilities of counterfactuals from a structural model. This, together with principle 2 will allow us (Section 3) to determine what assumptions one must make about reality in order to infer probabilities of counterfactuals from either experimental or passive observations.

Principle 2 defines how structural features of the model entail dependencies in the data. Remarkably, regardless of the functional form of the equations in the model and regardless of the distribution of the exogenous variables U, if the latter are mutually independent and the model is recursive, the distribution $P\left(v\right)$ of the endogenous variables must obey certain conditional independence relations, stated roughly as follows: whenever sets X and Y are “separated” by a set Z in the graph, X is independent of Y given Z in the probability. This “separation” condition, called d-separation [5, pp. 16–18], constitutes the link between the causal assumptions encoded in the causal graph (in the form of missing arrows) and the observed data.

Definition 1 (d-separation)

A set S of nodes is said to block a path p if either

1. p contains at least one arrow-emitting node that is in S, or

2. p contains at least one collision node that is outside S and has no descendant in S.

If S blocks all paths from set X to set Y, it is said to “d-separate X and $Y$,” and then, variables X and Y are independent given S, written $X\phantom{\rule{1pt}{0ex}}╨\phantom{\rule{1pt}{0ex}}Y|S$.

D-separation implies conditional independencies for every distribution $P\left(v\right)$ that is compatible with the causal assumptions embedded in the diagram. To illustrate, the diagram in Figure 1(a) implies ${Z}_{1}\phantom{\rule{1pt}{0ex}}╨\phantom{\rule{1pt}{0ex}}Y|\left(X,{Z}_{3},{W}_{2}\right)$, because the conditioning set $S=\left\{X,{Z}_{3},{W}_{2}\right\}$ blocks all paths between ${Z}_{1}$ and Y. The set $S=\left\{X,{Z}_{3},{W}_{3}\right\}$ however leaves the path $\left({Z}_{1},{Z}_{3},{Z}_{2},{W}_{2},Y\right)$ unblocked (by virtue of the collider at ${Z}_{3}$) and, so, the independence ${Z}_{1}\phantom{\rule{1pt}{0ex}}╨\phantom{\rule{1pt}{0ex}}Y|\left(X,{Z}_{3},{W}_{3}\right)$ is not implied by the diagram.
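The d-separation test can be sketched with the standard moralized-ancestral-graph criterion. The edge list below reconstructs Figure 1(a) from the paths cited in the text and should be treated as an assumption:

```python
from itertools import combinations

def _ancestors(dag, nodes):
    """All nodes with a directed path into `nodes`, including `nodes` itself."""
    seen, stack = set(nodes), list(nodes)
    while stack:
        n = stack.pop()
        for p, c in dag:
            if c == n and p not in seen:
                seen.add(p)
                stack.append(p)
    return seen

def d_separated(dag, xs, ys, zs):
    """Test X ╨ Y | Z: restrict to An(X ∪ Y ∪ Z), marry co-parents,
    drop directions, delete Z, and check X and Y are disconnected."""
    keep = _ancestors(dag, set(xs) | set(ys) | set(zs))
    sub = [(p, c) for p, c in dag if p in keep and c in keep]
    edges = {frozenset(e) for e in sub}
    for child in keep:                    # moralization step
        parents = [p for p, c in sub if c == child]
        for a, b in combinations(parents, 2):
            edges.add(frozenset((a, b)))
    reach, stack = set(xs), list(xs)      # reachability avoiding Z
    while stack:
        n = stack.pop()
        for e in edges:
            if n in e:
                for m in e:
                    if m != n and m not in zs and m not in reach:
                        reach.add(m)
                        stack.append(m)
    return not (reach & set(ys))

# Figure 1(a), reconstructed from the paths cited in the text (assumption):
fig1a = [("Z1", "W1"), ("W1", "X"), ("Z1", "Z3"), ("Z2", "Z3"),
         ("Z2", "W2"), ("W2", "Y"), ("Z3", "X"), ("Z3", "Y"),
         ("X", "W3"), ("W3", "Y")]
sep_good = d_separated(fig1a, {"Z1"}, {"Y"}, {"X", "Z3", "W2"})  # True
sep_bad = d_separated(fig1a, {"Z1"}, {"Y"}, {"X", "Z3", "W3"})   # False
```

The two queries reproduce the claims in the text: conditioning on $\{X,Z_3,W_2\}$ blocks every path between $Z_1$ and Y, while swapping $W_2$ for $W_3$ leaves the collider path through $Z_3$ open.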

Figure 1

(a) Graphical model illustrating d-separation and the back-door criterion. U terms are not shown explicitly. (b) Illustrating the intervention $do\left(X=x\right)$.

## 3 Intervention, identification, and causal calculus

A central question in causal analysis is that of predicting the results of interventions, such as those resulting from treatments or social programs, which we denote by the symbol $do\left(x\right)$ and define using the counterfactual ${Y}_{x}$ as $P\left(y|do\left(x\right)\right)\stackrel{\mathrm{\Delta }}{=}P\left({Y}_{x}=y\right)$(2) Figure 1(b) illustrates the submodel ${M}_{x}$ created by the atomic intervention $do\left(x\right)$; it sets the value of X to x and thus removes the influence of ${W}_{1}$ and ${Z}_{3}$ on X. We similarly define the result of conditional interventions by $P\left(y|do\left(x\right),z\right)\stackrel{\mathrm{\Delta }}{=}P\left(y,z|do\left(x\right)\right)/P\left(z|do\left(x\right)\right)=P\left({Y}_{x}=y|{Z}_{x}=z\right)$(3) $P\left(y|do\left(x\right),z\right)$ captures the z-specific effect of X on Y, that is, the effect of setting X to x among those units only for which $Z=z$.

A second important question concerns identification in partially specified models: Given a set A of qualitative causal assumptions, as embodied in the structure of the causal graph, can the controlled (post-intervention) distribution, $P\left(y|do\left(x\right)\right)$, be estimated from the available data which are governed by the pre-intervention distribution $P\left(z,x,y\right)$? In linear parametric settings, the question of identification reduces to asking whether some model parameter, $\mathrm{\beta }$, has a unique solution in terms of the parameters of P (say the population covariance matrix). In the nonparametric formulation, the notion of “has a unique solution” does not directly apply since quantities such as $Q=P\left(y|do\left(x\right)\right)$ have no parametric signature and are defined procedurally by a symbolic operation on the causal model M, as in Figure 1(b). The following definition captures the requirement that Q be estimable from the data:

Definition 2 (Identifiability) [5, p. 77]

A causal query Q is identifiable from data compatible with a causal graph G, if for any two (fully specified) models ${M}_{1}$ and ${M}_{2}$ that satisfy the assumptions in G, we have ${P}_{1}\left(v\right)={P}_{2}\left(v\right)⇒Q\left({M}_{1}\right)=Q\left({M}_{2}\right).$(4) In words, equality in the probabilities ${P}_{1}\left(v\right)$ and ${P}_{2}\left(v\right)$ induced by models ${M}_{1}$ and ${M}_{2}$, respectively, entails equality in the answers that these two models give to query Q. When this happens, Q depends on P only and should therefore be expressible in terms of the parameters of P.

When a query Q is given in the form of a do-expression, for example $Q=P\left(y|do\left(x\right),z\right)$, its identifiability can be decided systematically using an algebraic procedure known as the do-calculus [6]. It consists of three inference rules that permit us to equate interventional and observational distributions whenever certain d-separation conditions hold in the causal diagram G.

## 3.1 The rules of do-calculus

Let X, Y, Z, and W be arbitrary disjoint sets of nodes in a causal DAG G. We denote by ${G}_{\overline{X}}$ the graph obtained by deleting from G all arrows pointing to nodes in X. Likewise, we denote by ${G}_{\underset{_}{X}}$ the graph obtained by deleting from G all arrows emerging from nodes in X. To represent the deletion of both incoming and outgoing arrows, we use the notation ${G}_{\overline{X}\underset{_}{Z}}$.
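The two graph surgeries can be sketched as edge filters over a list of directed edges (a minimal sketch; the edge-list representation and the toy graph are illustrative):

```python
def G_bar(dag, X):
    """G_{X-bar}: delete all arrows pointing to nodes in X."""
    return [(p, c) for (p, c) in dag if c not in X]

def G_under(dag, X):
    """G_{X-underbar}: delete all arrows emerging from nodes in X."""
    return [(p, c) for (p, c) in dag if p not in X]

# Toy DAG (illustrative):  Z -> X -> Y,  Z -> Y
dag = [("Z", "X"), ("X", "Y"), ("Z", "Y")]
cut_in = G_bar(dag, {"X"})     # removes Z -> X
cut_out = G_under(dag, {"X"})  # removes X -> Y
```

The combined notation ${G}_{\overline{X}\underset{_}{Z}}$ corresponds to composing the two filters, `G_bar(G_under(dag, Z), X)`.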

The following three rules are valid for every interventional distribution compatible with G.

• Rule 1 (Insertion/deletion of observations): $P\left(y|do\left(x\right),z,w\right)=P\left(y|do\left(x\right),w\right)\phantom{\rule{thinmathspace}{0ex}}\phantom{\rule{thinmathspace}{0ex}}\mathrm{i}\mathrm{f}\phantom{\rule{thinmathspace}{0ex}}\left(Y\phantom{\rule{1pt}{0ex}}╨\phantom{\rule{1pt}{0ex}}Z|X,W{\right)}_{{G}_{\overline{X}}}$(5)

• Rule 2 (Action/observation exchange): $P\left(y|do\left(x\right),do\left(z\right),w\right)=P\left(y|do\left(x\right),z,w\right)\phantom{\rule{thinmathspace}{0ex}}\phantom{\rule{thinmathspace}{0ex}}\mathrm{i}\mathrm{f}\phantom{\rule{thinmathspace}{0ex}}\phantom{\rule{thinmathspace}{0ex}}\left(Y\phantom{\rule{1pt}{0ex}}╨\phantom{\rule{1pt}{0ex}}Z|X,W{\right)}_{{G}_{\overline{X}\underset{_}{Z}}}$(6)

• Rule 3 (Insertion/deletion of actions): $P\left(y|do\left(x\right),do\left(z\right),w\right)=P\left(y|do\left(x\right),w\right)\phantom{\rule{thinmathspace}{0ex}}\phantom{\rule{thinmathspace}{0ex}}\mathrm{i}\mathrm{f}\phantom{\rule{thinmathspace}{0ex}}\left(Y\phantom{\rule{1pt}{0ex}}╨\phantom{\rule{1pt}{0ex}}Z|X,W{\right)}_{{G}_{\overline{X}\overline{Z\left(W\right)}}},$(7)

where $Z\left(W\right)$ is the set of Z-nodes that are not ancestors of any W-node in ${G}_{\overline{X}}$.

To establish identifiability of a causal query Q, one needs to repeatedly apply the rules of do-calculus to Q, until an expression is obtained which no longer contains a do-operator; this renders it estimable from nonexperimental data. The do-calculus was proven to be complete for queries in the form $Q=P\left(y|do\left(x\right),z\right)$ [7, 8], which means that if Q cannot be reduced to probabilities of observables by repeated application of these three rules, such a reduction does not exist, i.e. the query is not estimable from observational studies without strengthening the assumptions.

## 3.1.1 Covariate selection: the back-door criterion

Consider an observational study where we wish to find the effect of treatment $\left(X\right)$ on outcome $\left(Y\right)$, and assume that the factors deemed relevant to the problem are structured as in Figure 1(a); some are affecting the outcome, some are affecting the treatment, and some are affecting both treatment and response. Some of these factors may be unmeasurable, such as genetic trait or lifestyle, while others are measurable, such as gender, age, and salary level. Our problem is to select a subset of these factors for measurement and adjustment such that, if we compare treated vs untreated subjects having the same values of the selected factors, we get the correct treatment effect in that subpopulation of subjects. Such a set of factors is called a “sufficient set,” “admissible set,” or a set “appropriate for adjustment” [9–11]. The following criterion, named “back-door” [12], provides a graphical method of selecting such a set of factors for adjustment.

Definition 3 (admissible sets – the back-door criterion)

A set S is admissible (or “sufficient”) for estimating the causal effect of X on Y if two conditions hold:

1. No element of S is a descendant of X.

2. The elements of S “block” all “back-door” paths from X to Y – namely, all paths that end with an arrow pointing to X.

Based on this criterion we see, for example that, in Figure 1, the sets $\left\{{Z}_{1},{Z}_{2},{Z}_{3}\right\},$ $\left\{{Z}_{1},{Z}_{3}\right\}$, $\left\{{W}_{1},{Z}_{3}\right\}$, and $\left\{{W}_{2},{Z}_{3}\right\}$ are each sufficient for adjustment, because each blocks all back-door paths between X and Y. The set $\left\{{Z}_{3}\right\}$, however, is not sufficient for adjustment, because it does not block the path $X←{W}_{1}←{Z}_{1}\to {Z}_{3}←{Z}_{2}\to {W}_{2}\to Y$.

The intuition behind the back-door criterion is as follows. The back-door paths in the diagram carry spurious associations from X to Y, while the paths directed along the arrows from X to Y carry causative associations. Blocking the former paths (by conditioning on S) ensures that the measured association between X and Y is purely causal, namely, it correctly represents the target quantity: the causal effect of X on Y. Conditions for relaxing restriction 1 are given in Pearl [3, p. 338], Shpitser et al. [13], and Pearl and Paz [14]. The implication of finding a sufficient set, S, is that stratifying on S is guaranteed to remove all confounding bias relative to the causal effect of X on Y. In other words, it renders the causal effect of X on Y identifiable, via the adjustment formula $P\left(Y=y|do\left(X=x\right)\right)=\sum _{s}P\left(Y=y|X=x,S=s\right)P\left(S=s\right)$(8) Since all factors on the right-hand side of the equation are estimable (e.g. by regression) from pre-interventional data, the causal effect can likewise be estimated from such data without bias. Moreover, the back-door criterion implies the independence $X\phantom{\rule{1pt}{0ex}}╨\phantom{\rule{1pt}{0ex}}{Y}_{x}|S$, also known as “conditional ignorability” [16], and therefore provides the scientific basis for most inferences in the potential outcome framework.

The back-door criterion allows us to write eq. (8) by inspection, after selecting a sufficient set, S, from the diagram. The selection criterion can be applied systematically to diagrams of any size and shape, thus freeing analysts from judging whether “X is conditionally ignorable given S,” a formidable mental task required in the potential-response framework. The criterion also enables the analyst to search for an optimal set of covariates – namely, a set, S, that minimizes measurement cost or sampling variability [17].
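The adjustment formula of eq. (8) can be illustrated on synthetic data with a single binary confounder S (all numeric parameters below are invented for the example): the adjusted estimate recovers the true interventional probability, while the naive conditional P(y|x) is biased upward:

```python
import random

rng = random.Random(2)
n = 200_000

# Synthetic observational data with one confounder S (illustrative numbers):
#   S ~ Bern(0.4);  X ~ Bern(0.2 + 0.6 S);  Y ~ Bern(0.1 + 0.4 X + 0.3 S)
data = []
for _ in range(n):
    s = int(rng.random() < 0.4)
    x = int(rng.random() < 0.2 + 0.6 * s)
    y = int(rng.random() < 0.1 + 0.4 * x + 0.3 * s)
    data.append((s, x, y))

def p(pred, given=lambda r: True):
    """Empirical conditional probability P(pred | given)."""
    sub = [r for r in data if given(r)]
    return sum(pred(r) for r in sub) / len(sub)

# Adjustment formula (eq. 8): P(y | do(x)) = sum_s P(y | x, s) P(s)
adj = sum(p(lambda r: r[2] == 1, lambda r: r[1] == 1 and r[0] == s)
          * p(lambda r: r[0] == s)
          for s in (0, 1))
naive = p(lambda r: r[2] == 1, lambda r: r[1] == 1)
```

By construction the true effect is $\sum_s (0.1+0.4+0.3s)P(s)=0.62$, whereas the naive conditional is inflated because treated units are disproportionately drawn from the $S=1$ stratum.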

Summary Result 1 (Identification of Interventional Expressions) Given a causal graph G containing both measured and unmeasured variables, the consistent estimability of any expression of the form $Q=P\left({y}_{1},{y}_{2},\dots ,{y}_{m}|do\left({x}_{1},{x}_{2},\dots ,{x}_{n}\right),{z}_{1},{z}_{2},\dots ,{z}_{k}\right)$ can be decided in polynomial time. If Q is estimable, then its estimand can be derived in polynomial time. Furthermore, the algorithm is complete.

The results stated in Summary Result 1 were developed in several stages over the past 20 years [6, 8, 12, 18]. Bareinboim and Pearl [19] extended the identifiability of Q to combinations of observational and experimental studies.

## 4 Mediation analysis

The nonparametric structural model for a typical mediation problem takes the form: $t={f}_{T}\left({u}_{T}\right)\phantom{\rule{1em}{0ex}}\phantom{\rule{1em}{0ex}}m={f}_{M}\left(t,{u}_{M}\right)\phantom{\rule{1em}{0ex}}\phantom{\rule{1em}{0ex}}y={f}_{Y}\left(t,m,{u}_{Y}\right)$(9) where T (treatment), M (mediator), and Y (outcome) are discrete or continuous random variables, ${f}_{T},{f}_{M}$, and ${f}_{Y}$ are arbitrary functions, and ${U}_{T}$, ${U}_{M}$, and ${U}_{Y}$ represent, respectively, omitted factors that influence $T,M,$ and Y. The triplet $U=\left({U}_{T},{U}_{M},{U}_{Y}\right)$ is a random vector that accounts for all variations between individuals. It is sometimes called a “unit,” for it offers a complete characterization of a subject’s behavior as reflected in $T,M,$ and Y. The distribution of U, denoted $P\left(U=u\right)$, uniquely determines the distribution $P\left(t,m,y\right)$ of the observed variables through the three functions in eq. (9).

Figure 2

(a) The basic nonparametric mediation model, with no confounding. (b) A confounded mediation model in which dependence exists between ${U}_{M}$ and $\left({U}_{T},{U}_{Y}\right).$

In Figure 2(a), the omitted factors are assumed to be arbitrarily distributed but mutually independent, written ${U}_{T}\phantom{\rule{1pt}{0ex}}╨\phantom{\rule{1pt}{0ex}}{U}_{M}\phantom{\rule{1pt}{0ex}}╨\phantom{\rule{1pt}{0ex}}{U}_{Y}$. In Figure 2(b), the dashed arcs connecting ${U}_{T}$ and ${U}_{M}$ (as well as ${U}_{M}$ and ${U}_{Y}$) encode the understanding that the factors in question may be dependent.

## 4.1 Natural direct and indirect effects

Using the structural model of eq. (9), four types of effects can be defined for the transition from $T=0$ to $T=1$:

## 4.1.1 Total effect

$\begin{array}{rl}TE& =E\left\{{f}_{Y}\left[1,{f}_{M}\left(1,{u}_{M}\right),{u}_{Y}\right]-{f}_{Y}\left[0,{f}_{M}\left(0,{u}_{M}\right),{u}_{Y}\right]\right\}\\ & =E\left[{Y}_{1}-{Y}_{0}\right]\\ & =E\left[Y|do\left(T=1\right)\right]-E\left[Y|do\left(T=0\right)\right]\end{array}$(10)

TE measures the expected increase in Y as the treatment changes from $T=0$ to $T=1$, while the mediator is allowed to track the change in T as dictated by the function ${f}_{M}$.

## 4.1.2 Controlled direct effect

$\begin{array}{rl}CDE\left(m\right)& =E\left\{{f}_{Y}\left[1,M=m,{u}_{Y}\right]-{f}_{Y}\left[0,M=m,{u}_{Y}\right]\right\}\\ & =E\left[{Y}_{1,m}-{Y}_{0,m}\right]\\ & =E\left[Y|do\left(T=1,M=m\right)\right]-E\left[Y|do\left(T=0,M=m\right)\right]\end{array}$(11)

CDE measures the expected increase in Y as the treatment changes from $T=0$ to $T=1$, while the mediator is set to a specified level $M=m$ uniformly over the entire population.

## 4.1.3 Natural direct effect

$\begin{array}{rl}NDE& =E\left\{{f}_{Y}\left[1,{f}_{M}\left(0,{u}_{M}\right),{u}_{Y}\right]-{f}_{Y}\left[0,{f}_{M}\left(0,{u}_{M}\right),{u}_{Y}\right]\right\}\\ & =E\left[{Y}_{1,{M}_{0}}-{Y}_{0,{M}_{0}}\right]\end{array}$(12)

NDE measures the expected increase in Y as the treatment changes from $T=0$ to $T=1$, while the mediator is set to whatever value it would have attained (for each individual) prior to the change, i.e. under $T=0$.

## 4.1.4 Natural indirect effect

$\begin{array}{rl}NIE& =E\left\{{f}_{Y}\left[0,{f}_{M}\left(1,{u}_{M}\right),{u}_{Y}\right]-{f}_{Y}\left[0,{f}_{M}\left(0,{u}_{M}\right),{u}_{Y}\right]\right\}\\ & =E\left[{Y}_{0,{M}_{1}}-{Y}_{0,{M}_{0}}\right]\end{array}$(13)

NIE measures the expected increase in Y when the treatment is held constant, at $T=0$, and M changes to whatever value it would have attained (for each individual) under $T=1$. It captures, therefore, the portion of the effect which can be explained by mediation alone, while disabling the capacity of Y to respond directly to changes in T.

We note that, in general, the total effect can be decomposed as $TE=NDE-NI{E}_{r}$(14) where $NI{E}_{r}$ stands for the natural indirect effect under the reverse transition, from $T=1$ to $T=0$. This implies that NIE is identifiable whenever NDE and TE are identifiable. In linear systems, where reversal of transitions amounts to negating the signs of their effects, we have the standard additive formula, $TE=NDE+NIE$.

We further note that TE and $CDE\left(m\right)$ are do-expressions and can, therefore, be estimated from experimental data; not so NDE and NIE. Since Summary Result 1 assures us that the identifiability of any do-expression can be determined by an effective algorithm, we will regard the identifiability of TE and $CDE\left(m\right)$ as solved problems and will focus our attention on NDE and NIE.
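The four effects, and the decomposition of eq. (14), can be checked by exact enumeration in a small binary instance of eq. (9). The functions and parameters below are an illustrative choice, not taken from the paper:

```python
from itertools import product

# Toy instance of eq. (9), exactly enumerable (illustrative choice):
#   m = f_M(t, uM) = max(t, uM),  y = f_Y(t, m, uY) = int(m or (t and uY)),
# with uM ~ Bern(0.3) and uY ~ Bern(0.5); uT plays no role in these effects.
f_M = lambda t, uM: max(t, uM)
f_Y = lambda t, m, uY: int(bool(m) or bool(t and uY))

def expect(g):
    """E over the exogenous distribution P(uM, uY)."""
    return sum(g(uM, uY) * (0.3 if uM else 0.7) * 0.5
               for uM, uY in product((0, 1), (0, 1)))

TE    = expect(lambda uM, uY: f_Y(1, f_M(1, uM), uY) - f_Y(0, f_M(0, uM), uY))
NDE   = expect(lambda uM, uY: f_Y(1, f_M(0, uM), uY) - f_Y(0, f_M(0, uM), uY))
NIE   = expect(lambda uM, uY: f_Y(0, f_M(1, uM), uY) - f_Y(0, f_M(0, uM), uY))
NIE_r = expect(lambda uM, uY: f_Y(1, f_M(0, uM), uY) - f_Y(1, f_M(1, uM), uY))
```

Here TE = 0.70, NDE = 0.35, NIE = 0.70, and NIE_r = −0.35, so eq. (14) holds (0.70 = 0.35 − (−0.35)) while the additive formula TE = NDE + NIE fails, as expected in a nonlinear model.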

## 4.2 Sufficient conditions for identifying natural effects

The following is a set of assumptions or conditions, marked A-1 to A-4, that are sufficient for identifying both direct and indirect natural effects. Each condition is communicated by a verbal description followed by its formal expression. The full set of assumptions is then followed by its graphical representation.

## 4.2.1 Assumption set A [20]

There exists a set W of measured covariates such that:

• A-1No member of W is affected by treatment.

• A-2W deconfounds the mediator–outcome relationship (holding T constant), i.e. $\left[{M}_{t}\phantom{\rule{1pt}{0ex}}╨\phantom{\rule{1pt}{0ex}}{Y}_{{t}^{\prime },m}|W\right]$

• A-3The W-specific effect of the treatment on the mediator is identifiable by some means. $\left[P\left(m|do\left(t\right),w\right)\phantom{\rule{1pt}{0ex}}\mathrm{i}\mathrm{s}\phantom{\rule{thickmathspace}{0ex}}\mathrm{i}\mathrm{d}\mathrm{e}\mathrm{n}\mathrm{t}\mathrm{i}\mathrm{f}\mathrm{i}\mathrm{a}\mathrm{b}\mathrm{l}\mathrm{e}\phantom{\rule{1pt}{0ex}}\right]$

• A-4The W-specific joint effect of {treatment + mediator} on the outcome is identifiable by some means. $\left[P\left(y|do\left(t,m\right),w\right)\phantom{\rule{1pt}{0ex}}\mathrm{i}\mathrm{s}\phantom{\rule{thickmathspace}{0ex}}\mathrm{i}\mathrm{d}\mathrm{e}\mathrm{n}\mathrm{t}\mathrm{i}\mathrm{f}\mathrm{i}\mathrm{a}\mathrm{b}\mathrm{l}\mathrm{e}\phantom{\rule{1pt}{0ex}}\right]$

## 4.2.2 Graphical version of assumption set A

There exists a set W of measured covariates such that:

• AG-1No member of W is a descendant of T.

• AG-2W blocks all back-door paths from M to Y (not traversing $T\to M$ and $T\to Y$).

• AG-3The W-specific effect of T on M is identifiable (using Summary Result 1 and possibly using experiments or auxiliary variables).

• AG-4The W-specific joint effect of $\left\{T,M\right\}$ on Y is identifiable (using Summary Result 1 and possibly using experiments or auxiliary variables).

Summary Result 2 (Identification of natural effects)

When conditions A-1 and A-2 hold, the natural direct effect is experimentally identifiable and is given by $NDE=\sum _{m}\sum _{w}\left[E\left(Y|do\left(T=1,M=m\right),W=w\right)-E\left(Y|do\left(T=0,M=m\right),W=w\right)\right]$ $P\left(M=m|do\left(T=0\right),W=w\right)P\left(W=w\right)$(15) The identifiability of the do-expressions in eq. (15) is guaranteed by conditions A-3 and A-4 and can be determined by Summary Result 1.

In the non-confounding case (Figure 2(a)), NDE reduces to the mediation formula: $NDE=\sum _{m}\left[E\left(Y|T=1,M=m\right)-E\left(Y|T=0,M=m\right)\right]P\left(M=m|T=0\right).$(16)
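The mediation formula of eq. (16) can be illustrated on simulated observational data from a non-confounded model in the style of Figure 2(a) (functions and parameters invented for the example):

```python
import random

rng = random.Random(3)

# Non-confounded toy model (illustrative):
#   T = uT ~ Bern(0.5);  M = int(T != uM), uM ~ Bern(0.3);
#   Y = int(M or (T and uY)), uY ~ Bern(0.5)
data = []
for _ in range(200_000):
    t = int(rng.random() < 0.5)
    m = int(t != (rng.random() < 0.3))
    y = int(m or (t and (rng.random() < 0.5)))
    data.append((t, m, y))

def E_y(t, m):
    """Empirical E(Y | T=t, M=m)."""
    sub = [y for (t_, m_, y) in data if t_ == t and m_ == m]
    return sum(sub) / len(sub)

def P_m(m, t):
    """Empirical P(M=m | T=t)."""
    sub = [m_ for (t_, m_, _) in data if t_ == t]
    return sum(mm == m for mm in sub) / len(sub)

# Mediation formula (eq. 16): NDE = sum_m [E(Y|T=1,m) - E(Y|T=0,m)] P(m|T=0)
nde = sum((E_y(1, m) - E_y(0, m)) * P_m(m, 0) for m in (0, 1))
```

For this model the counterfactual definition gives $NDE=P(u_M=0)P(u_Y=1)=0.35$, and the estimate from purely observational data recovers it.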

Corollary 1 If conditions A-1 and A-2 are satisfied by a set W that also deconfounds the relationships in A-3 and A-4, then the do-expressions in eq. (15) are reducible to conditional expectations, and the natural direct effect becomes: $NDE=\sum _{m}\sum _{w}\left[E\left(Y|T=1,M=m,W=w\right)-E\left(Y|T=0,M=m,W=w\right)\right]$ $P\left(M=m|T=0,W=w\right)P\left(W=w\right).$(17) It is interesting to compare assumptions A-1 to A-4 to those often cited in the literature, which are based on “sequential ignorability” [15], the dominant inferential tool in the potential outcome framework.

## 4.2.3 Assumption set B (Sequential ignorability)

There exists a set W of measured covariates such that:

• B-1W and T deconfound the mediator–outcome relationship. $\left[{Y}_{{t}^{\prime },m}\phantom{\rule{1pt}{0ex}}╨\phantom{\rule{1pt}{0ex}}{M}_{t}|T,W\right]$

• B-2W deconfounds the treatment–{mediator, outcome} relationship. $\left[T\phantom{\rule{1pt}{0ex}}╨\phantom{\rule{1pt}{0ex}}\left({Y}_{{t}^{\prime },m},{M}_{t}\right)|W\right]$

Assumption set A differs from assumption set B in two main provisions. First, A-3 and A-4 permit the identification of these causal effects by any means, while B-1 and B-2 insist that identification be accomplished by adjustment for W only. Second, whereas A-3 and A-4 permit different auxiliary covariates to be invoked in identifying each of the causal effects needed, B requires that the same set W satisfy all conditions simultaneously. Due to these two provisions, assumption set A significantly broadens the class of problems in which the natural effects are identifiable [23]. Shpitser [24] further provides complete algorithms for identifying natural direct and indirect effects and extends these results to path-specific effects with multiple treatments and multiple outcomes.

## 5 External validity and transportability

In applications requiring identification, the role of the do-calculus is to remove the do-operator from the query expression. We now discuss a totally different application, to decide if experimental findings from environment $\mathrm{\pi }$ can be transported to a new, potentially different environment, ${\mathrm{\pi }}^{\ast }$, where only passive observations can be performed. This problem, labeled “transportability” in Pearl and Bareinboim [25], is at the heart of every scientific investigation since, invariably, experiments performed in one environment (or population) are intended to be used elsewhere, where conditions may differ.

To formalize problems of this sort, a graphical representation called “selection diagrams” was devised (Figure 3) which encodes knowledge about differences and commonalities between populations. A selection diagram is a causal diagram annotated with new variables, called S-nodes, which point to the mechanisms where discrepancies between the two populations are suspected to take place.

Figure 3

Selection diagrams depicting differences in populations. In (a), the two populations differ in age distributions. In (b), the populations differ in how reading skills (Z) depend on age (an unmeasured variable, represented by the hollow circle), while the age distributions are the same. In (c), the populations differ in how Z depends on X. Dashed arcs (e.g. between X and Y) represent the presence of latent variables affecting both X and Y.

The task of deciding if transportability is feasible now reduces to a syntactic problem of separating (using the do-calculus) the do-operator from the S-variables in the query expression $P\left(y|do\left(x\right),z,s\right)$.

Theorem 1 [25] Let D be the selection diagram characterizing two populations, $\mathrm{\pi }$ and ${\mathrm{\pi }}^{\ast }$, and S a set of selection variables in D. The relation $R={P}^{\ast }\left(y|do\left(x\right),z\right)$ is transportable from $\mathrm{\pi }$ to ${\mathrm{\pi }}^{\ast }$ if and only if the expression $P\left(y|do\left(x\right),z,s\right)$ is reducible, using the rules of do-calculus, to an expression in which S appears only as a conditioning variable in do-free terms.

While Theorem 1 does not specify the sequence of rules leading to the needed reduction (if one exists), a complete and effective graphical procedure was devised by Bareinboim and Pearl [26], which also synthesizes a transport formula whenever one is feasible. Each transport formula determines what information needs to be extracted from the experimental and observational studies and how it ought to be combined to yield an unbiased estimate of the relation $R=P\left(y|do\left(x\right),s\right)$ in the target population $\pi^{\ast}$. For example, the transport formulas induced by the three models in Figure 3 are given by:

• (a)

$P\left(y|do\left(x\right),s\right)={\sum }_{z}P\left(y|do\left(x\right),z\right)P\left(z|s\right)$

• (b)

$P\left(y|do\left(x\right),s\right)=P\left(y|do\left(x\right)\right)$

• (c)

$P\left(y|do\left(x\right),s\right)={\sum }_{z}P\left(y|do\left(x\right),z\right)P\left(z|x,s\right)$

Each of these formulas satisfies Theorem 1, and each describes a different procedure of pooling information from $\mathrm{\pi }$ and ${\mathrm{\pi }}^{\ast }$.

For example, (c) states that to estimate the causal effect of X on Y in the target population $\pi^{\ast}$, $P\left(y|do\left(x\right),s\right)$, we must estimate the z-specific effect $P\left(y|do\left(x\right),z\right)$ in the source population $\pi$ and average it over z, weighted by $P\left(z|x,s\right)$, i.e. the conditional probability $P\left(z|x\right)$ estimated in the target population $\pi^{\ast}$. The derivation of this formula begins by writing $P\left(y|do\left(x\right),s\right)=\sum _{z}P\left(y|do\left(x\right),z,s\right)P\left(z|do\left(x\right),s\right)$ and noting that Rule 1 of the do-calculus authorizes the removal of s from the first term (since $Y \perp\!\!\!\perp S \mid Z$ holds in $G_{\overline{X}}$) and Rule 2 authorizes the replacement of $do\left(x\right)$ with x in the second term (since $Z \perp\!\!\!\perp X$ holds in $G_{\underline{X}}$).
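As a numeric sketch of this pooling step, the evaluation of formula (c) amounts to a single weighted sum; the probability tables below are invented for illustration and are not taken from the paper:

```python
# Transport formula (c): P(y | do(x), s) = sum_z P(y | do(x), z) * P*(z | x)
# Toy example with a binary covariate Z; all numbers are illustrative only.

# z-specific causal effects, estimated experimentally in the SOURCE population
p_y_do_x_z = {0: 0.30, 1: 0.70}          # P(y=1 | do(x=1), z)

# distribution of Z given X, observed passively in the TARGET population
p_z_given_x_target = {0: 0.60, 1: 0.40}  # P*(z | x=1)

# pool: weight each z-specific effect by the target-specific P*(z | x)
p_y_do_x_target = sum(p_y_do_x_z[z] * p_z_given_x_target[z] for z in (0, 1))

print(round(p_y_do_x_target, 3))  # prints 0.46  (= 0.30*0.60 + 0.70*0.40)
```

The same two-line pattern applies to formula (a), with $P(z|s)$ replacing $P(z|x,s)$ as the weighting distribution.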

A generalization of transportability theory to multiple environments has led to a principled solution to estimability problems in meta-analysis, a data fusion problem aimed at combining results from many experimental and observational studies, each conducted on a different population and under a different set of conditions, so as to synthesize an aggregate measure of effect size that is "better," in some sense, than any one study in isolation. This fusion problem has received enormous attention in the health and social sciences, and is typically handled by "averaging out" differences (e.g. using inverse-variance weighting).

Using multiple selection diagrams to encode commonalities among studies, Bareinboim and Pearl [27] "synthesized" an estimator that is guaranteed to provide an unbiased estimate of the desired quantity based on the information that each study shares with the target environment. Remarkably, a consistent estimator may be constructed from multiple sources even in cases where it is not constructable from any one source in isolation.

Summary Result 3 (Meta transportability) [27]

Nonparametric transportability of experimental findings from multiple environments can be determined in polynomial time, provided suspected differences are encoded in selection diagrams.

When transportability is feasible, a transport formula can be derived in polynomial time which specifies what information needs to be extracted from each environment to synthesize a consistent estimate for the target environment.

The algorithm is complete, i.e. when it fails, transportability is infeasible.

## 6 Missing data from causal inference perspectives

Most practical methods of dealing with missing data are based on the theoretical work of Rubin [28] and Little and Rubin [29] who formulated conditions under which the damage of missingness would be minimized. However, the theoretical guarantees provided by this theory are rather weak, and the taxonomy of missing data problems rather coarse.

Specifically, Rubin’s theory divides problems into three categories: Missing Completely At Random (MCAR), Missing At Random (MAR), and Missing Not At Random (MNAR). Performance guarantees and some testability results are available for MCAR and MAR, while the vast space of MNAR problems has remained relatively unexplored.

Viewing missingness from a causal perspective evokes the following questions:

• Q1.

What must the world be like for a given statistical procedure to produce satisfactory results?

• Q2.

Can we tell from the postulated world whether any method exists that produces consistent estimates of the parameters of interest?

• Q3.

Can we tell from data whether the postulated world should be rejected?

To answer these questions the user must articulate features of the problem in some formal language, capturing both the inter-relationships among the variables of interest and the missingness process. In particular, the model should identify those variables that are responsible for values being missing in others.

The graph in Figure 4(a) depicts a typical missingness process, where missingness in Z is explained by X and Y, which are fully observed. Taking such a graph, G, as a representation of reality, we define two properties relative to a partially observed dataset D.

Definition 4 (Recoverability)

A probabilistic relationship Q is said to be recoverable in G if there exists a consistent estimate $\hat{Q}$ of Q for any dataset D generated by G. In other words, in the limit of large samples, the estimator should produce an estimate of Q as if no data were missing.

Definition 5 (Testability)

A missingness model G is said to be testable if any of its implications is refutable by data with the same sets of fully and partially observed variables.

Figure 4

(a) Graph describing a MAR missingness process. X and Y are fully observed variables, Z is partially observed and ${Z}^{\ast }$ is a proxy for Z. ${R}_{z}$ is a binary variable that acts as a switch: ${Z}^{\ast }=Z$ when ${R}_{z}=0$ and ${Z}^{\ast }=m$ when ${R}_{z}=1$. (b) Graph representing a MNAR process. (The proxies ${Z}^{\ast },{X}^{\ast }$, and ${Y}^{\ast }$ are not shown.)

Figure 5

Recoverability of the joint distribution in MCAR, MAR, and MNAR. Joint distributions are recoverable in areas marked (S) and (M) and proven to be non-recoverable in area (N).

While some recoverability and testability results are known for MCAR and MAR [30, 31], the theory of structural models permits us to extend these results to the entire class of MNAR problems, namely, the class of problems in which at least one missingness mechanism (${R}_{z}$) is triggered by variables that are themselves victims of missingness (e.g. X and Y in Figure 4(b)). The results of this analysis are summarized in Figure 5 which partitions the class of MNAR problems into three major regions with respect to recoverability of the joint distribution.

• 1.

M (Markovian${}^{+}$) – Graphs with no latent variables, no variable that is a parent of its missingness mechanism and no missingness mechanism that is an ancestor of another missingness mechanism.

• 2.

S (Sequential-MAR) – Graphs for which there exists an ordering $X_1, X_2, \dots, X_n$ of the variables such that for every i we have $X_i \perp\!\!\!\perp \left(R_{X_i}, R_{Y_i}\right) \mid Y_i$, where $Y_i \subseteq \{X_{i+1},\dots,X_n\}$. Such sequences yield the estimand $P\left(X\right)={\prod}_{i}P\left(X_i \mid Y_i, R_{X_i}=0, R_{Y_i}=0\right)$, in which every term is estimable from the data.

• 3.

N (Proven to be Non-recoverable) – Graphs in which there exists a pair $\left(X,{R}_{x}\right)$ such that X and ${R}_{x}$ are connected by an edge or by a path on which every intermediate node is a collider.

The area labeled “O” consists of all other problem structures, and we conjecture this class to be empty. All problems in areas $\left(M\right)$ and $\left(S\right)$ are recoverable.
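The non-recoverability criterion for region N lends itself to a direct graph check. The sketch below is our own illustration (the function name and the edge-set encoding are assumptions, not from the paper): it tests whether a variable and its missingness indicator are adjacent, or joined by a path on which every intermediate node is a collider.

```python
def has_collider_path(edges, src, dst):
    """True if src and dst are adjacent, or joined by a path on which every
    intermediate node is a collider (both of its path edges point into it).
    `edges` is a set of directed edges (parent, child)."""
    adj = set(edges) | {(b, a) for (a, b) in edges}
    parents = {}
    for a, b in edges:
        parents.setdefault(b, set()).add(a)
    if (src, dst) in adj:          # direct edge: criterion N satisfied
        return True

    def dfs(prev, cur, seen):
        # `cur` is an intermediate node on the candidate path; it must be a
        # collider between `prev` and the next node `nxt`: prev -> cur <- nxt
        for a, nxt in adj:
            if a != cur or nxt in seen:
                continue
            if prev in parents.get(cur, set()) and nxt in parents.get(cur, set()):
                if nxt == dst or dfs(cur, nxt, seen | {nxt}):
                    return True
        return False

    return any(dfs(src, b, {src, b})
               for a, b in adj if a == src and b != dst)

# X -> C <- Rx: C is a collider, so (X, Rx) falls in region N
print(has_collider_path({('X', 'C'), ('Rx', 'C')}, 'X', 'Rx'))  # True
# X -> M -> Rx: M is a chain node, not a collider
print(has_collider_path({('X', 'M'), ('M', 'Rx')}, 'X', 'Rx'))  # False
```

This covers only the structural test for region N; classifying a graph into M or S requires the additional latent-variable and ordering checks stated above.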

To illustrate, Figure 4(a) is MAR, because Z is d-separated from $R_z$ by X and Y, which are fully observed. Consequently, $P\left(X,Y,Z\right)$ can be written $P\left(X,Y,Z\right)=P\left(Z|Y,X\right)P\left(X,Y\right)=P\left(Z|Y,X,R_z=0\right)P\left(X,Y\right)$ and the r.h.s. is estimable. Figure 4(b), however, is not MAR, because all variables that d-separate Z from $R_z$ are themselves partially observed. It nevertheless allows for the recovery of $P\left(X,Y,Z\right)$ because it complies with the conditions of the Markovian$^{+}$ class, though not with the Sequential-MAR class, since no admissible ordering exists. However, if X were fully observed, the following decomposition of $P\left(X,Y,Z\right)$ would yield an admissible ordering:

$P\left(X,Y,Z\right)=P\left(Z|X,Y\right)P\left(Y|X\right)P\left(X\right)=P\left(Z|X,Y,R_z=0,R_y=0\right)P\left(Y|X,R_y=0\right)P\left(X\right)$ (18)

in which every term is estimable from the data. The licenses to insert the R terms into the expression are provided by the corresponding d-separation conditions in the graph. The same licenses would prevail had $R_z$ and $R_y$ been connected by a common latent parent; this would disqualify the model from the Markovian$^{+}$ class but retain its membership in the Sequential-MAR category.
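The MAR recovery in Figure 4(a) can be illustrated with a small simulation; the specific parameter values below are invented for illustration. Because $R_z$ depends only on the fully observed X and Y, the complete-case estimate of $P(Z|X,Y)$ agrees with the estimate one would obtain had no data been missing:

```python
import random

random.seed(0)

# Simulate the model of Figure 4(a): X, Y fully observed, Z partially
# observed, with missingness R_z driven by X and Y only (MAR).
n = 200_000
rows = []
for _ in range(n):
    x = random.random() < 0.5
    y = random.random() < (0.7 if x else 0.3)
    z = random.random() < (0.8 if (x and y) else 0.2)
    r_z = random.random() < (0.5 if x else 0.1)  # P(R_z = 1) depends on X only
    rows.append((x, y, z, r_z))

def cond_p_z(cases, x, y):
    """Estimate P(Z=1 | X=x, Y=y) from a list of (x, y, z, r_z) rows."""
    sel = [z for (xi, yi, z, _) in cases if xi == x and yi == y]
    return sum(sel) / len(sel)

full = rows                            # oracle: pre-missingness values of Z
cc = [r for r in rows if not r[3]]     # complete cases only (R_z = 0)

for x in (False, True):
    for y in (False, True):
        # the two estimates agree up to sampling noise
        print(x, y, round(cond_p_z(full, x, y), 2), round(cond_p_z(cc, x, y), 2))
```

By contrast, had $R_z$ been made to depend on Z itself, the complete-case estimate would be biased, which is why region N is non-recoverable.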

Note that the order of estimation is critical in the two MNAR examples considered and depends on the structure of the graph; no model-blind estimator exists for the MNAR class [32, 33].

Note that the partition of the MNAR territory into recoverable vs non-recoverable models is query-dependent. For example, some problems permit unbiased estimation of queries such as $P\left(Y|X\right)$ and $P\left(Y\right)$ but not of $P\left(X,Y\right)$. Note further that MCAR and MAR are nested subsets of the Sequential-MAR class; all three permit the recovery of the joint distribution. A version of Sequential-MAR is discussed in Gill and Robins [34] and Zhou et al. [35], but finding a recovering sequence in any given model is a task that requires graphical tools.

Graphical models also permit the partitioning of the MNAR territory into testable vs nontestable models [36]. The former entail at least one conditional independence claim that can be tested under missingness. Here we note that some testable implications of fully recoverable distributions are not testable under missingness. For example, $P\left(X,Y,Z,R_z\right)$ is recoverable in Figure 4(a), since the graph is in $\left(M\right)$ (it is also in MAR), and this distribution advertises the conditional independence $Z \perp\!\!\!\perp R_z \mid X,Y$. Yet $Z \perp\!\!\!\perp R_z \mid X,Y$ is not testable by any data in which the probability of observing Z is non-zero (for all $x,y$) [33, 37]. Any such data can be construed as if generated by the model in Figure 4(a), where the independence holds. In Figure 4(b), on the other hand, the independence $Z \perp\!\!\!\perp R_x \mid Y, R_y, R_z$ is testable, and so is $R_z \perp\!\!\!\perp R_y \mid X, R_x$.

Summary Result 4 (Recoverability from missing data) [38]

The feasibility of recovering relations from missing data can be determined in polynomial time, provided the missingness process is encoded in a causal diagram that falls in areas $M,S$, or N of Figure 5.

Thus far we dealt with the recoverability of joint and conditional probabilities. Extensions to causal relationships are discussed in Mohan and Pearl [37].

## 7 Conclusions

The unification of the structural, counterfactual, and graphical approaches gave rise to mathematical tools that have helped resolve a wide variety of causal inference problems, including confounding control, policy analysis, and mediation (summarized in Sections 2–4). These tools have recently been applied to new territories of statistical inference, meta analysis, and missing data and have led to the results summarized in Sections 5 and 6. Additional applications involving selection bias [39, 40], heterogeneity [41], measurement error [42, 43], and bias amplification [44] are discussed in the corresponding citations and have not been described here.

The common threads underlying these developments are the two fundamental principles of causal inference described in Section 2.1. The first provides formal and meaningful semantics for counterfactual sentences; the second deploys a graphical model to convert the qualitative assumptions encoded in the model into statements of conditional independence in the observed data. The latter are subsequently used for testing the model and for identifying causal and counterfactual relationships from observational and experimental data.

## Acknowledgments

This research was supported in part by grants from NSF (#IIS1249822 and #IIS1302448) and ONR (#N00014-13-1-0153 and #N00014-10-1-0933).

## Note

This paper is based on a lecture entitled "The Deductive Approach to Causal Inference" delivered at the Atlantic Causal Inference Conference, Boston, MA, May 15, 2013, and on a subsequent lecture given at the JSM Meeting, August 6, 2013.

## References

• 1.

Galles D, Pearl J. An axiomatic characterization of causal counterfactuals. Found Sci 1998;3:151–82.

• 2.

Halpern J. Axiomatizing causal reasoning. In: Cooper G, Moral S, editors. Uncertainty in artificial intelligence. San Francisco, CA: Morgan Kaufmann, 1998:202–10. Also, J Artif Intell Res 2000;12:17–37.

• 3.

Pearl J. Causality: models, reasoning, and inference, 2nd ed. New York: Cambridge University Press, 2009.

• 4.

Balke A, Pearl J. Counterfactuals and policy analysis in structural models. In: Besnard P, Hanks S, editors. Uncertainty in artificial intelligence 11. San Francisco, CA: Morgan Kaufmann, 1995:11–18.

• 5.

Pearl J. Causality: models, reasoning, and inference. New York: Cambridge University Press, 2000; 2nd ed., 2009.

• 6.

Pearl J. Causal diagrams for empirical research. Biometrika 1995;82:669–710.

• 7.

Huang Y, Valtorta M. Pearl’s calculus of intervention is complete. In: Dechter R, Richardson T, editors. Proceedings of the twenty-second conference on uncertainty in artificial intelligence. Corvallis, OR: AUAI Press, 2006:217–24.

• 8.

Shpitser I, Pearl J. Identification of conditional interventional distributions. In: Dechter R, Richardson T, editors. Proceedings of the twenty-second conference on uncertainty in artificial intelligence. Corvallis, OR: AUAI Press, 2006:437–44.

• 9.

Greenland S, Pearl J, Robins J. Causal diagrams for epidemiologic research. Epidemiology 1999;10:37–48.

• 10.

Pearl J. Comment on A.P. Dawid’s, causal inference without counterfactuals. J Am Stat Assoc 2000;95:428–31.

• 11.

Pearl J. Causal inference in statistics: an overview. Stat Surv 2009;3:96–146.

• 12.

Pearl J. Comment: graphical models, causality, and intervention. Stat Sci 1993;8:266–9.

• 13.

Shpitser I, VanderWeele T, Robins J. On the validity of covariate adjustment for estimating causal effects. In: Grunwald P, Spirtes P, editors. Proceedings of the twenty-sixth conference on uncertainty in artificial intelligence. Corvallis, OR: AUAI, 2010:527–36.

• 14.

Pearl J, Paz A. Confounding equivalence in causal inference. J Causal Inference 2014;2:75–94.

• 15.

Imai K, Keele L, Yamamoto T. Identification, inference, and sensitivity analysis for causal mediation effects. Stat Sci 2010;25:51–71.

• 16.

Rosenbaum P, Rubin D. The central role of propensity score in observational studies for causal effects. Biometrika 1983;70:41–55.

• 17.

Tian J, Paz A, Pearl J. Finding minimal separating sets. Technical Report R-254, University of California, Los Angeles, CA, 1998.

• 18.

Tian J, Pearl J. A general identification condition for causal effects. In: Dechter R, Kearns M, Sutton R, editors. Proceedings of the eighteenth national conference on artificial intelligence. Menlo Park, CA: AAAI Press/The MIT Press, 2002:567–73.

• 19.

Bareinboim E, Pearl J. Causal inference by surrogate experiments: z-identifiability. In: de Freitas N, Murphy K, editors. Proceedings of the twenty-eighth conference on uncertainty in artificial intelligence, UAI ‘12. Corvalis, OR: AUAI Press, 2012:113–20.

• 20.

Robins J, Greenland S. Identifiability and exchangeability for direct and indirect effects. Epidemiology 1992;3:143–55.

• 21.

Pearl J. Direct and indirect effects. In: Breese J, Koller D, editors. Proceedings of the seventeenth conference on uncertainty in artificial intelligence. San Francisco, CA: Morgan Kaufmann, 2001:411–20.

• 22.

Wang X, Sobel M. New perspectives on causal mediation analysis. In: Morgan S, editor. Handbook of causal analysis for social research. New York, NY: Springer, 2013:215–42.

• 23.

Pearl J. Interpretation and identification in causal mediation analysis. Technical Report R-389. University of California Los Angeles, Computer Science Department, CA, forthcoming, Psychological Methods, 2013. Available at: http://ftp.cs.ucla.edu/pub/stat_ser/r389.pdf

• 24.

Shpitser I. Counterfactual graphical models for longitudinal mediation analysis with unobserved confounding. Cogn Sci 2013;37:1011–35.

• 25.

Pearl J, Bareinboim E. Transportability of causal and statistical relations: a formal approach. In: Burgard W, Roth D, editors. Proceedings of the twenty-fifth AAAI conference on artificial intelligence, Menlo Park, CA: AAAI Press, 2011:247–54.

• 26.

Bareinboim E, Pearl J. Transportability of causal effects: completeness results. In: Hoffman J, Selman B, editors. Proceedings of the twenty-sixth AAAI conference, Toronto, ON, 2012:698–704.

• 27.

Bareinboim E, Pearl J. Meta-transportability of causal effects: a formal approach. In: Carvalho C, Ravikumar P, editors. Proceedings of the sixteenth international conference on artificial intelligence and statistics (AISTATS), Scottsdale, AZ: JMLR, 2013:135–143.

• 28.

Rubin D. Inference and missing data. Biometrika 1976;63:581–92.

• 29.

Little R, Rubin D. Statistical analysis with missing data. New York: Wiley, 2002.

• 30.

Little R. A test of missing completely at random for multivariate data with missing values. J Am Stat Assoc 1988;83:1198–202.

• 31.

Potthoff RF, Tudor GE, Pieper KS, Hasselblad V. Can one assess whether missing data are missing at random in medical studies? Stat Meth Med Res 2006;15:213–34.

• 32.

Pearl J. Linear models: a useful “microscope” for causal analysis. J Causal Inference 2013;1:155–70.

• 33.

Pearl J, Mohan K. Recoverability and testability of missing data: introduction and summary of results. Technical Report R-417. Department of Computer Science, University of California, Los Angeles, CA, 2013. Available at: http://ftp.cs.ucla.edu/pub/stat_ser/r417.pdf.

• 34.

Gill R, Robins J. Sequential models for coarsening and missingness. In: Lin D, Fleming T, editors. Proceedings of the first Seattle symposium on survival analysis, 1997:295–305.

• 35.

Zhou Y, Little RJ, Kalbfleisch JD. Block-conditional missing at random models for missing data. Stat Sci 2010;25:517–32.

• 36.

Mohan K, Pearl J. On the testability of models with missing data. Technical Report R-415. Department of Computer Science, University of California, Los Angeles, CA, 2014. To appear in Proceedings of the seventeenth international conference on artificial intelligence and statistics (AISTATS), Reykjavik, Iceland: JMLR, 2014. Available at: http://ftp.cs.ucla.edu/pub/stat_ser/r415.pdf

• 37.

Mohan K, Pearl J. Graphical models for recovering causal and probabilistic queries from missing data. Technical Report R-425. Department of Computer Science, University of California, Los Angeles, CA, Working Paper, 2014.

• 38.

Mohan K, Pearl J, Tian J. Graphical models for inference with missing data. In: Burges C, Bottou L, Welling M, Ghahramani Z, Weinberger K, editors. Advances in neural information processing systems 26. 2013:1277–85. Available at: http://papers.nips.cc/paper/4899-graphical-models-for-inference-with-missing-data.pdf

• 39.

Bareinboim E, Pearl J. Controlling selection bias in causal inference. In: Lawrence N, Girolami M, editors. Proceedings of the fifteenth international conference on artificial intelligence and statistics (AISTATS). La Palma, Canary Islands: JMLR, 2012:100–8.

• 40.

Pearl J. A solution to a class of selection-bias problems. Technical Report R-405. Department of Computer Science, University of California, Los Angeles, CA, 2012. Available at: http://ftp.cs.ucla.edu/pub/stat_ser/r405.pdf

• 41.

Pearl J. Detecting latent heterogeneity. Technical Report R-406. Department of Computer Science, University of California, Los Angeles, CA, 2012. Available at: http://ftp.cs.ucla.edu/pub/stat_ser/r406.pdf

• 42.

Kuroki M, Pearl J. Measurement bias and effect restoration in causal inference. Biometrika, advance access, 2014. doi: 10.1093/biomet/ast066.

• 43.

Pearl J. On measurement bias in causal inference. In: Grunwald P, Spirtes P, editors. Proceedings of the twenty-sixth conference on uncertainty in artificial intelligence. Corvallis, OR: AUAI, 2010:425–32.

• 44.

Pearl J. Invited commentary: understanding bias amplification. Am J Epidemiol 2011;174:1223–7.

Published Online: 2014-04-05

Published in Print: 2014-09-01

The structural definition of counterfactual given in eq. (1) was first introduced in Balke and Pearl [4].

By a “path” we mean consecutive edges in the graph regardless of direction. See Pearl [3, p. 335] for a gentle introduction to d-separation and its proof. In linear models, the independencies implied by d-separation are valid for non-recursive models as well.

An alternative definition of $do\left(x\right)$, invoking population averages only, is given in Pearl [3, p. 24].

Such derivations are illustrated in graphical details in Pearl [3, p. 87].

In particular, the criterion devised by Pearl and Paz [14] simply adds to Condition 2 of Definition 3 the requirement that X and its nondescendants (in Z) separate its descendants (in Z) from Y.

Summations should be replaced by integration when applied to continuous variables, as in Imai et al. [15].

Generalizations to arbitrary reference point, say from $T=t$ to $T={t}^{\prime }$, are straightforward. These definitions apply at the population levels; the unit-level effects are given by the expressions under the expectation. All expectations are taken over the factors ${U}_{M}$ and ${U}_{Y}$.

Natural direct and indirect effects were conceptualized in Robins and Greenland [20] and were formalized using eqs. (12) and (13) in Pearl [21].

Eq. (17) is identical to the one derived by Imai et al. [15] using sequential ignorability (i.e. assumptions B-1 and B-2) and subsequently re-derived by a number of other authors [22].
