Many concepts and phenomena in causal analysis were first detected, quantified, and exemplified in linear structural equation models (SEMs) before they were understood in full generality and applied to nonparametric problems. Linear SEMs can serve as a “microscope” for causal analysis; they provide a simple and visual representation of the causal assumptions in the model and often enable us to derive closed-form expressions for quantities of interest which, in turn, can be used to assess how various aspects of the model affect the phenomenon under investigation. Likewise, linear models can be used to test general hypotheses and to generate counter-examples to over-ambitious conjectures.
Despite their ubiquity, however, techniques for using linear models in that capacity have all but disappeared from the main SEM literature, where they have been replaced by matrix algebra on the one hand, and software packages on the other. Very few analysts today are familiar with traditional methods of path tracing [1–4] which, for small problems, can provide both intuitive insight and easy derivations using elementary algebra.
This note attempts to fill this void by introducing the basic techniques of path analysis to modern researchers, and demonstrating, using simple examples, how concepts and issues in modern causal analysis can be understood and analyzed in SEM. These include: Simpson’s paradox, case–control bias, selection bias, collider bias, reverse regression, bias amplification, near instruments, measurement errors, and more.
2.1 Covariance, regression, and correlation
We start with the standard definitions of variance and covariance for a pair of variables X and Y. The variance of X is defined as
$\sigma^2_X = E[(X - E[X])^2].$
The covariance of X and Y is defined as
$\sigma_{XY} = E[(X - E[X])(Y - E[Y])].$
Associated with the covariance, we define two other measures of association: (1) the regression coefficient $\beta_{YX}$ and (2) the correlation coefficient $\rho_{XY}$. The relationships between the three are given by the following equations:
$\rho_{XY} = \sigma_{XY}/(\sigma_X \sigma_Y), \qquad \beta_{YX} = \sigma_{XY}/\sigma^2_X = \rho_{XY}\,\sigma_Y/\sigma_X.$
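These relationships are easy to confirm numerically; the sketch below uses an arbitrary slope of 0.7 purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
x = rng.normal(size=n)
y = 0.7 * x + rng.normal(size=n)      # linear dependence plus independent noise

sigma_xy = np.cov(x, y)[0, 1]                    # covariance
beta_yx = sigma_xy / np.var(x)                   # regression coefficient of Y on X
rho_xy = sigma_xy / (np.std(x) * np.std(y))      # correlation coefficient

# the three measures obey rho_XY = beta_YX * sigma_X / sigma_Y,
# and beta_yx recovers (approximately) the generating slope 0.7
```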
2.2 Partial correlations and regressions
Many questions in causal analysis concern the change in a relationship between X and Y conditioned on a given set Z of variables. The easiest way to define this change is through the partial regression coefficient $\beta_{YX\cdot Z}$ which, for a single conditioning variable Z, is given by
$\beta_{YX\cdot Z} = (\sigma^2_Z \sigma_{YX} - \sigma_{YZ}\sigma_{ZX})/(\sigma^2_X \sigma^2_Z - \sigma^2_{XZ}).$
The partial correlation coefficient $\rho_{YX\cdot Z}$ can be defined by normalizing $\beta_{YX\cdot Z}$:
$\rho_{YX\cdot Z} = (\rho_{YX} - \rho_{YZ}\rho_{XZ})/\sqrt{(1-\rho^2_{YZ})(1-\rho^2_{XZ})}.$
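The covariance-based formula for the partial regression coefficient agrees with the X-coefficient of an ordinary multiple regression, as the following sketch verifies (all structural coefficients are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000
z = rng.normal(size=n)
x = 0.8 * z + rng.normal(size=n)
y = 0.5 * x + 0.3 * z + rng.normal(size=n)        # true partial slope: 0.5

c = np.cov(np.vstack([x, y, z]))                  # sample covariance matrix
sxx, szz = c[0, 0], c[2, 2]
sxy, sxz, syz = c[0, 1], c[0, 2], c[1, 2]

# partial regression coefficient from the covariance formula
beta_yx_z = (szz * sxy - syz * sxz) / (sxx * szz - sxz ** 2)

# X-coefficient of an OLS regression of Y on (X, Z), after centering
M = np.column_stack([x - x.mean(), z - z.mean()])
coef, *_ = np.linalg.lstsq(M, y - y.mean(), rcond=None)
# beta_yx_z and coef[0] agree (both ~ 0.5)
```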
Note that none of these conditional associations depends on the level z at which we condition variable Z; this is one of the features that makes linear analysis easy to manage and, at the same time, limited in the spectrum of relationships it can capture.
2.3 Path diagrams and structural equation models
A linear structural equation model (SEM) is a system of linear equations among a set V of variables, such that each variable appears on the left hand side of at most one equation. For each equation, the variable on its left hand side is called the dependent variable, and those on the right hand side are called independent or explanatory variables. For example, the equation below,
$y = \beta x + u_Y,$
designates Y as dependent and X as explanatory, and should be read as an assignment process rather than as an algebraic equality.
The directionality of this assignment process is captured by a path diagram, in which the nodes represent variables, and the arrows represent the non-zero coefficients in the equations. The diagram in Figure 1(a) represents such a system of structural equations, together with the assumption of zero correlations between the U variables.
The coefficients in these equations (e.g., $\beta$ in $y = \beta x + u_Y$) are called path coefficients, or structural parameters, and they carry causal information. For example, $\beta$ stands for the change in Y induced by raising X one unit, while keeping all other variables constant.1
The assumption of linearity makes this change invariant to the levels at which we keep those other variables constant, including the error variables; a property called “effect homogeneity.” Since errors (e.g., $u_Y$) capture variations among individual units (i.e., subjects, samples, or situations), effect homogeneity amounts to claiming that all units react equally to any treatment, which may exclude applications with profoundly heterogeneous subpopulations.
2.4 Wright’s path-tracing rules
In 1921, the geneticist Sewall Wright developed an ingenious method by which the covariance of any two variables can be determined swiftly, by mere inspection of the diagram [1]. Wright’s method consists of equating the (standardized2) covariance $\sigma_{XY}$ between any pair of variables X and Y to the sum of products of path coefficients and error covariances along all d-connected paths between X and Y. A path is d-connected if it does not traverse any collider (i.e., head-to-head arrows, as in $X \rightarrow Z \leftarrow Y$).
For example, in Figure 1(a), the standardized covariance $\sigma_{XY}$ is obtained by summing the coefficient on the direct path $X \rightarrow Y$ with the product of the coefficients along the d-connected path traversing Z. In Figure 1(b) the same rule applies, with the error covariance represented by the double arrow entering the product. Note that a pair of variables connected only through colliding paths yields a zero covariance, since no connecting path is d-connected.
The method above is valid for standardized variables, namely variables normalized to have zero mean and unit variance. For non-standardized variables the method needs to be modified slightly: the product associated with a path p is multiplied by the variance of the variable that acts as the “root” for path p. In Figure 1(a), for example, X serves as the root for the direct path $X \rightarrow Y$, and Z serves as the root for the path traversing Z. In Figure 1(b), however, the double arrow serves as its own root.
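Wright’s rule is easy to confirm by simulation. Below, a standardized confounder model in the style of Figure 1(a) is generated with illustrative coefficients a (Z→X), b (Z→Y), and α (X→Y); the correlation of X and Y equals the direct path plus the product along the path through Z:

```python
import numpy as np

a, b, alpha = 0.6, 0.5, 0.3
rng = np.random.default_rng(2)
n = 500_000
z = rng.normal(size=n)
# error variances are chosen so that every variable has unit variance
x = a * z + np.sqrt(1 - a**2) * rng.normal(size=n)
vy = 1 - (alpha**2 + b**2 + 2 * alpha * b * a)    # residual variance of Y
y = alpha * x + b * z + np.sqrt(vy) * rng.normal(size=n)

predicted = alpha + a * b                  # path tracing: direct + path via Z
observed = np.corrcoef(x, y)[0, 1]
# observed ~ predicted = 0.6
```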
2.5 Reading partial correlations from path diagrams
The reduction from partial to pair-wise correlations summarized above, when combined with Wright’s path-tracing rules, permits us to extend the latter so as to read partial correlations directly from the diagram. For example, to read the partial regression coefficient $\beta_{YX\cdot Z}$, we start with a standardized model where all variances are unity (hence $\sigma_{XY} = \rho_{XY} = \beta_{YX}$) and apply the reduction formula to get:
$\beta_{YX\cdot Z} = (\sigma_{YX} - \sigma_{YZ}\sigma_{ZX})/(1 - \sigma^2_{XZ}).$
To witness, the pair-wise covariances for Figure 1(a) are:
Substituting in (10), we get
Indeed, we know that, for a confounding-free model like Figure 1(a), the direct effect is identifiable and given by the partial regression coefficient $\beta_{YX\cdot Z}$. Repeating the same calculation on the model of Figure 1(b) yields:
Armed with the ability to read partial regressions, we are now prepared to demonstrate some peculiarities of causal analysis.
3 The microscope at work: examples and their implications
3.1 Simpson’s paradox
Simpson’s paradox describes a phenomenon whereby an association between two variables reverses sign upon conditioning on a third variable, regardless of the value taken by the latter. The history of this paradox and the reasons it evokes surprise and disbelief are described in Chapter 6 of [7]. The conditions under which association reversal appears in linear models can be seen directly in Figure 1(a). Comparing the unconditional slope with the Z-conditioned slope derived above, we obtain
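The reversal is easy to reproduce numerically. In the sketch below (parameter values are illustrative), a negative direct effect (α = −0.2) is overwhelmed by a strong positive confounding path (a = b = 0.9), so the marginal slope is positive while the Z-adjusted slope is negative:

```python
import numpy as np

alpha, a, b = -0.2, 0.9, 0.9
rng = np.random.default_rng(3)
n = 500_000
z = rng.normal(size=n)
x = a * z + np.sqrt(1 - a**2) * rng.normal(size=n)
vy = 1 - (alpha**2 + b**2 + 2 * alpha * b * a)
y = alpha * x + b * z + np.sqrt(vy) * rng.normal(size=n)

marginal = np.cov(x, y)[0, 1] / np.var(x)         # alpha + a*b = 0.61 > 0
M = np.column_stack([x - x.mean(), z - z.mean()])
partial, *_ = np.linalg.lstsq(M, y - y.mean(), rcond=None)
# partial[0] ~ alpha = -0.2 < 0: the association reverses at every level of Z
```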
3.2 Conditioning on intermediaries and their proxies
Conventional wisdom informs us that, in estimating the effect of one variable on another, one should not adjust for any covariate that lies on the pathway between the two [8]. It took decades for epidemiologists to discover that a similar prohibition applies to proxies of intermediaries [9]. The amount of bias introduced by such adjustment can be assessed from Figure 2.
Here, the effect of X on Y is carried entirely by the intermediary Z, as is reflected by the regression slope $\beta_{YX}$. If we condition on the intermediary Z, the regression slope vanishes, since conditioning on Z renders the partial regression coefficient $\beta_{YX\cdot Z}$ zero. If we condition on a proxy W of Z instead, path tracing yields
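Both prohibitions can be demonstrated in one sketch: a chain X → Z → Y with W a noisy proxy of Z (all coefficients illustrative). Conditioning on Z annihilates the slope; conditioning on W attenuates it:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 500_000
x = rng.normal(size=n)
z = 0.8 * x + rng.normal(size=n)         # intermediary
y = 0.7 * z + rng.normal(size=n)
w = 0.9 * z + rng.normal(size=n)         # proxy of the intermediary

def partial_slope(dep, ctrl):
    """X-coefficient in the regression of dep on (X, ctrl)."""
    M = np.column_stack([x - x.mean(), ctrl - ctrl.mean()])
    coef, *_ = np.linalg.lstsq(M, dep - dep.mean(), rcond=None)
    return coef[0]

total = np.cov(x, y)[0, 1] / np.var(x)   # 0.8 * 0.7 = 0.56
given_z = partial_slope(y, z)            # ~ 0: Z blocks the whole effect
given_w = partial_slope(y, w)            # strictly between 0 and the total
```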
Speaking of suppressing variations, the model in Figure 3
may carry some surprise. Conditioning on W in this model also suppresses variations in Z, especially when the coefficient on W is high, and, yet, it introduces no bias whatsoever; the partial regression slope $\beta_{ZX\cdot W}$ is [eq. 10]:
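The contrast can be seen numerically. The sketch below (assumed coefficients) regresses Z on X while conditioning on an independent cause W of Z; the slope is unchanged even though conditioning shrinks the variance of Z:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 500_000
x = rng.normal(size=n)
w = rng.normal(size=n)                    # independent of X
z = 0.5 * x + 0.9 * w + rng.normal(size=n)

marginal = np.cov(x, z)[0, 1] / np.var(x)               # ~ 0.5
M = np.column_stack([x - x.mean(), w - w.mean()])
conditional, *_ = np.linalg.lstsq(M, z - z.mean(), rcond=None)
# conditional[0] ~ 0.5 as well: no bias from conditioning on W
```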
3.3 Case–control bias
In the last section, we explained the bias introduced by conditioning on an intermediate variable (or its proxy) as a restriction on the flow of information between X and Y. This explanation is not entirely satisfactory, as can be seen from the model of Figure 4.
Here, Z is not on the pathway between X and Y, and one might surmise that no bias would be introduced by conditioning on Z, but analysis dictates otherwise. Path tracing combined with the reduction formula gives:
and yields the bias
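A numeric sketch of this structure (X → Y → Z, coefficients illustrative) shows the adjusted slope pulled off the true value:

```python
import numpy as np

rng = np.random.default_rng(6)
n = 500_000
x = rng.normal(size=n)
y = 0.6 * x + rng.normal(size=n)          # true effect of X on Y: 0.6
z = 0.8 * y + rng.normal(size=n)          # Z is a consequence of Y

unadjusted = np.cov(x, y)[0, 1] / np.var(x)             # ~ 0.6, unbiased
M = np.column_stack([x - x.mean(), z - z.mean()])
adjusted, *_ = np.linalg.lstsq(M, y - y.mean(), rcond=None)
# adjusted[0] ~ 0.37: conditioning on a consequence of Y attenuates the slope
```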
3.4 Sample selection bias
The two examples above are special cases of a more general phenomenon called “selection bias,” which occurs when samples are preferentially selected for the data set, depending on the values of some variables in the model [12–15]. In Figure 6, for example, if
Z = 1 represents inclusion in the data set, and Z = 0 exclusion, the selection decision is shown to be a function of both X and Y. Since inclusion (Z = 1) amounts to conditioning on Z, we may ask how the regression of Y on X in the observed data compares with the regression in the entire population.
Applying our path-tracing analysis, we get:
Selection bias is symptomatic of a general phenomenon associated with conditioning on collider nodes (Z in our example). The phenomenon involves spurious associations induced between two causes upon observing their common effect, since any information refuting one cause makes the other more probable. It has been known as Berkson’s paradox, “explaining away,” or simply “collider bias.”3
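The effect is easy to reproduce: generate X → Y with slope 1, let Z collect arrows from both, and keep only units with z > 0 (an assumed selection rule for this sketch):

```python
import numpy as np

rng = np.random.default_rng(7)
n = 1_000_000
x = rng.normal(size=n)
y = x + rng.normal(size=n)                # true slope: 1.0
z = x + y + rng.normal(size=n)            # collider / selection variable

keep = z > 0                              # preferential inclusion
full = np.cov(x, y)[0, 1] / np.var(x)                         # ~ 1.0
selected = np.cov(x[keep], y[keep])[0, 1] / np.var(x[keep])
# selected ~ 0.63: conditioning on the common effect induces a spurious
# negative association between the causes, biasing the slope downward
```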
3.5 Missing data
In contrast to selection bias, where exclusion removes an entire unit from the dataset, in missing data problems a unit may have each of its variables masked independently of the others [20, p. 89]. Therefore, the diagram representing the missingness process should assign each variable V a “switch” $R_V$, called its “missingness mechanism,” which determines whether V is observed or masked. The arrows pointing to $R_V$ tell us which variables determine whether $R_V$ fires or not. In Figure 7(a), for example, the missingness mechanisms appear as explicit nodes in the diagram.
Assume we wish to estimate the covariance $\sigma_{XY}$ from partially observed data generated by the model of Figure 7(a); can we obtain an unbiased estimate of $\sigma_{XY}$? The question boils down to expressing $\sigma_{XY}$ in terms of the information available to us, namely the values of X and Y that are revealed to us whenever their missingness mechanisms permit. If we simply estimate $\sigma_{XY}$ from samples in which both X and Y are observed, that would amount to conditioning on both $R_X$ and $R_Y$, which would introduce a bias, since the pair (X, Y) is not independent of the pair ($R_X$, $R_Y$) (owing to the unblocked path from Y to $R_X$).
The graph reveals, however, that $\sigma_{XY}$ can nevertheless be estimated bias-free from the information available, using two steps. First, we note that X is independent of its missingness mechanism $R_X$, since the path from X to $R_X$ is blocked by a collider. Therefore, the variance of X is unaltered in the subpopulation where X is observed.4 This means that we can estimate $\sigma^2_X$ from the samples in which X is observed, regardless of whether Y is missing. Next, we note that the regression slope $\beta_{YX}$ can be estimated (e.g., using OLS) from samples in which both X and Y are observed. This is because conditioning on $R_X$ and $R_Y$ is similar to conditioning on Z in Figure 5, where Z is a proxy of the explanatory variable X.
Putting the two together, we can write
$\sigma_{XY} = \beta_{YX}\,\sigma^2_X,$
with each factor estimated from the appropriate subset of samples.
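The two-step recovery can be sketched as follows. The missingness model here is an assumption made for illustration (X masked completely at random, Y masked whenever X < 0), chosen so that X is independent of its missingness switch and complete-case regression of Y on X is unbiased, mirroring the two properties used above; it is not necessarily the exact mechanism of Figure 7(a):

```python
import numpy as np

rng = np.random.default_rng(8)
n = 500_000
x = rng.normal(size=n)
y = x + rng.normal(size=n)            # true sigma_XY = beta_YX * var(X) = 1.0

x_observed = rng.random(n) > 0.2      # X masked completely at random
y_observed = x > 0                    # Y masked depending on X's value
complete = x_observed & y_observed

# naive complete-case covariance is badly biased (~ 0.36 instead of 1.0)
naive = np.cov(x[complete], y[complete])[0, 1]

# step 1: var(X) from all samples where X is observed (X independent of R_X)
var_x = np.var(x[x_observed])
# step 2: slope from complete cases (unbiased when masking depends on X only)
beta = np.cov(x[complete], y[complete])[0, 1] / np.var(x[complete])

recovered = beta * var_x              # ~ 1.0, the true covariance
```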
3.6 The M-bias
The M-bias is another instance of Berkson’s paradox, where the conditioning variable, Z, is a pre-treatment covariate, as depicted in Figure 8.
The curved arcs in Figure 8 represent error covariances, which can be generated, for example, by latent variables affecting each of these pairs.
To analyze the size of this bias, we apply the reduction formula and get:
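The M-structure can be simulated directly, with two latent variables standing in for the error covariances (all coefficients illustrative):

```python
import numpy as np

rng = np.random.default_rng(9)
n = 500_000
l1 = rng.normal(size=n)                   # latent cause of X and Z
l2 = rng.normal(size=n)                   # latent cause of Z and Y
z = 0.9 * l1 + 0.9 * l2 + rng.normal(size=n)
x = 0.8 * l1 + rng.normal(size=n)
y = 0.5 * x + 0.8 * l2 + rng.normal(size=n)   # true effect: 0.5

crude = np.cov(x, y)[0, 1] / np.var(x)    # ~ 0.5: no adjustment needed
M = np.column_stack([x - x.mean(), z - z.mean()])
adjusted, *_ = np.linalg.lstsq(M, y - y.mean(), rcond=None)
# adjusted[0] ~ 0.36: conditioning on the pre-treatment collider Z
# opens the path X <- L1 ... L2 -> Y and introduces bias
```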
3.7 Reverse regression
Is it possible that men would earn a higher salary than equally qualified women and, simultaneously, that men are more qualified than women doing equally paying jobs? This counter-intuitive condition can indeed exist, and has given rise to a controversy called “reverse regression”: some sociologists argued that, in salary discrimination cases, we should not compare salaries of equally qualified men and women but, rather, compare qualifications of equally paid men and women. The phenomenon can be demonstrated in Figure 9.
Let X stand for gender (or age, or socioeconomic background), Y for job earnings, and Z for qualification. The partial regression coefficient $\beta_{YX\cdot Z}$ encodes the differential earning of males over females having the same qualifications, while $\beta_{ZX\cdot Y}$ encodes the differential qualification of males over females earning the same salary.
For the model in Figure 9, we have
Surely, by suitable choice of the path coefficients we can make $\beta_{ZX\cdot Y}$ negative while $\beta_{YX\cdot Z}$ remains positive. For example, one such combination of parameters yields
Thus, there is no contradiction in finding men earning a higher salary than equally qualified women and, simultaneously, men being more qualified than women doing equally paying jobs. A negative $\beta_{ZX\cdot Y}$ may be a natural consequence of a male-favoring hiring policy, a male-favoring training policy, and qualification-dependent earnings.
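Both regressions can be computed side by side. The parameter values below are one illustrative combination producing this sign pattern (X → Z: 0.5, X → Y: 0.5, Z → Y: 2.0):

```python
import numpy as np

rng = np.random.default_rng(10)
n = 500_000
x = rng.normal(size=n)                        # "gender" (continuous stand-in)
z = 0.5 * x + rng.normal(size=n)              # qualification
y = 0.5 * x + 2.0 * z + rng.normal(size=n)    # earnings

def xcoef(dep, ctrl):
    M = np.column_stack([x - x.mean(), ctrl - ctrl.mean()])
    coef, *_ = np.linalg.lstsq(M, dep - dep.mean(), rcond=None)
    return coef[0]

standard = xcoef(y, z)   # earnings gap at equal qualification: +0.5
reverse = xcoef(z, y)    # qualification gap at equal pay: ~ -0.1
# the two regressions legitimately disagree in sign
```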
The question of whether standard or reverse regression is more appropriate for proving discrimination is also clear. The analysis above leaves no room for hesitation, because $\beta_{YX\cdot Z}$ coincides with the counterfactual definition of the “direct effect of gender on hiring had qualification been the same,” which is the court’s definition of discrimination.
The reason reverse regression appeals to intuition is that it reflects a model in which the employer decides on the qualification needed for a job on the basis of both its salary level and the applicant’s sex. If this were a plausible model, it would indeed be appropriate to prosecute an employer who demands higher qualifications from men as opposed to women. But such a model should place Z as a post-salary variable, for example, $X \rightarrow Z \leftarrow Y$.
3.8 Bias amplification
In the model of Figure 10, Z acts as an instrumental variable, since it is independent of the confounder U. If U is unobserved, however, Z cannot be distinguished from a confounder, as in Figure 1(a), in the sense that for every set of parameters in Figure 1(a) one can find a set for the model in Figure 10 such that the observed covariance matrices of the two models are the same. This indistinguishability, together with the fact that Z may be a strong predictor of X, may lure investigators to condition on Z in an attempt to obtain an unbiased estimate of d. Recent work has shown, however, that such adjustment would amplify the bias created by U [25–27]. The magnitude of this bias and its relation to the pre-conditioning bias, ab, can be computed from the diagram of Figure 10, as follows:
We see that the post-conditioning bias is proportional to the pre-existing bias ab and increases with c; the better Z predicts X, the higher the bias. An intuitive explanation of this phenomenon is given by Pearl.
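The amplification can be checked numerically. With all variables standardized and illustrative parameters a, b (confounder paths), c (instrument strength), and d (direct effect), adjusting for Z pushes the slope further from d; the closed form d + ab/(1 − c²) used in the comment below follows from the path-tracing analysis in this parameterization:

```python
import numpy as np

a, b, c, d = 0.5, 0.5, 0.8, 1.0     # U->X, U->Y, Z->X, X->Y
rng = np.random.default_rng(11)
n = 1_000_000
z = rng.normal(size=n)              # instrument
u = rng.normal(size=n)              # unobserved confounder
x = c * z + a * u + np.sqrt(1 - c**2 - a**2) * rng.normal(size=n)  # var(X)=1
y = d * x + b * u + rng.normal(size=n)

crude = np.cov(x, y)[0, 1] / np.var(x)        # d + a*b = 1.25
M = np.column_stack([x - x.mean(), z - z.mean()])
adjusted, *_ = np.linalg.lstsq(M, y - y.mean(), rcond=None)
# adjusted[0] ~ d + a*b/(1 - c**2) = 1.69: bias grows from 0.25 to ~0.69
```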
3.9 Near instruments – amplifiers or attenuators?
The model in Figure 11 is indistinguishable from that of Figure 10 when U is unobserved. However, here Z acts both as an instrument and as a confounder. Conditioning on Z is beneficial in blocking the confounding path traversing Z and harmful in amplifying the remaining bias due to U. The trade-off between these two tendencies can be quantified by computing $\beta_{YX\cdot Z}$, yielding
3.10 The butterfly
Another model in which conditioning on Z may have both harmful and beneficial effects is seen in Figure 12.
Here, Z is both a collider and a confounder. Conditioning on Z blocks the confounding path $X \leftarrow Z \rightarrow Y$ and at the same time induces a virtual confounding path through the latent variables that create the error covariances of Z with X and with Y.
This trade-off can be evaluated from our path-tracing formula eq.  which yields
We first note that the pre-conditioning bias
may have positive or negative values even when both $\sigma_{ZX} = 0$ and $\sigma_{ZY} = 0$. This refutes folklore wisdom, according to which a variable Z can be exonerated from confounding considerations if it is uncorrelated with both treatment (X) and outcome (Y).
Second, we notice that conditioning on Z may either increase or decrease bias, depending on the structural parameters. This can be seen by comparing the pre-conditioning bias above with the post-conditioning bias:
3.11 Measurement error
Assume the confounder U in Figure 13(a) is unobserved, but we can measure a proxy Z of U. Can we assess the amount of bias introduced by adjusting for Z instead of U? The answer, again, can be extracted from our path-tracing formula, which yields
As expected, the bias vanishes as the correlation between Z and U approaches unity, indicating a faithful proxy. Moreover, if this correlation can be estimated from an external pilot study, the causal effect can be identified [see 30, 31]. Remarkably, identical behavior emerges in the model of Figure 13(b), in which Z is a driver of U, rather than a proxy.
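A quick simulation of the proxy setting (coefficients illustrative) shows the bias shrinking as Z becomes a more faithful measurement of U:

```python
import numpy as np

rng = np.random.default_rng(12)
n = 500_000
u = rng.normal(size=n)                        # unobserved confounder
x = 0.7 * u + rng.normal(size=n)
y = 0.5 * x + 0.7 * u + rng.normal(size=n)    # true effect: 0.5

def xslope(ctrl):
    M = np.column_stack([x - x.mean(), ctrl - ctrl.mean()])
    coef, *_ = np.linalg.lstsq(M, y - y.mean(), rcond=None)
    return coef[0]

crude = np.cov(x, y)[0, 1] / np.var(x)              # ~ 0.83, confounded
proxy = xslope(0.8 * u + 0.6 * rng.normal(size=n))  # noisy proxy: ~ 0.65
exact = xslope(u)                                   # adjusting for U: ~ 0.5
# the residual bias of the proxy adjustment vanishes as corr(Z, U) -> 1
```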
The same treatment can be applied to errors in the measurement of X or of Y and, in each case, the path-tracing formula reveals which model parameters affect the resulting bias.
We have demonstrated how path-analytic techniques can illuminate the emergence of several phenomena in causal analysis and how these phenomena depend on the structural features of the model. Although the techniques are limited to linear analysis, hence restricted to homogeneous populations with no interactions, they can be superior to simulation studies whenever conceptual understanding is of the essence and problem size is manageable.
This research was supported in part by grants from NSF #IIS-1249822 and ONR #N00014–13–1-0153.
In linear systems, the explanation for the equality in Figure 3 is simple. Conditioning on W does not physically constrain Z; it merely limits the variance of Z in the subpopulation satisfying W = w which was chosen for observation. Given that effect homogeneity prevails in linear models, we know that the effect of X on Z remains invariant to the level w chosen for observation, and therefore this w-specific effect reflects the effect of X on the entire population. This dictates $\beta_{ZX\cdot W} = \beta_{ZX}$ (in a confounding-free model).
But how can we explain the persistence of this phenomenon in nonparametric models, where we know (e.g., using do-calculus) that adjustment for W does not have any effect on the resulting estimand? In other words, the equality
$\sum_w P(w)\,E(Z \mid x, w) = E(Z \mid x)$
will hold in the model of Figure 3 even when the structural equations are nonlinear. Indeed, the independence of W and X implies $P(w \mid x) = P(w)$.
The answer is that adjustment for W involves averaging over W; conditioning on W does not. In other words, whereas the effect of X on Z may vary across strata of W, the average of this effect is none other than the effect over the entire population, that is, $E(Z \mid x)$, which equals the causal effect in the non-confounding case.
Symbolically, we have
$\sum_w P(w)\,E(Z \mid x, w) = \sum_w P(w \mid x)\,E(Z \mid x, w) = E(Z \mid x),$
where the first equality follows from the independence of W and X.
Wright S. Correlation and causation. J Agric Res 1921;20:557–85.
Duncan O. Introduction to structural equation models. New York: Academic Press, 1975.
Kenny D. Correlation and Causality. New York: Wiley, 1979.
Heise D. Causal analysis. New York: John Wiley and Sons, 1975.
Cramér H. Mathematical methods of statistics. Princeton, NJ: Princeton University Press, 1946.
Pearl J. Causal diagrams for empirical research. Biometrika 1995;82:669–710.
Pearl J. Causality: models, reasoning, and inference, 2nd ed. New York: Cambridge University Press, 2009.
Cox D. The planning of experiments. New York: John Wiley and Sons, 1958.
Weinberg C. Toward a clearer definition of confounding. Am J Epidemiol 1993;137:1–8.
Heckman JJ. Sample selection bias as a specification error. Econometrica 1979;47:153–61.
Bareinboim E, Pearl J. Controlling selection bias in causal inference. In Proceedings of the 15th International Conference on Artificial Intelligence and Statistics (AISTATS). La Palma, Canary Islands, 2012;100–8.
Daniel RM, Kenward MG, Cousens SN, Stavola BLD. Using causal diagrams to guide analysis in missing data problems. Stat Methods Med Res 2011;21:243–56.
Pearl J. A solution to a class of selection-bias problems. Technical Report, R-405, http://ftp.cs.ucla.edu/pub/stat_ser/r405.pdf, Department of Computer Science, University of California, Los Angeles, CA, 2012.
Berkson J. Limitations of the application of fourfold table analysis to hospital data. Biometrics Bull 1946;2:47–53.
Kim J, Pearl J. A computational model for combined causal and diagnostic reasoning in inference systems. In Proceedings of the Eighth International Joint Conference on Artificial Intelligence (IJCAI-83). Karlsruhe, Germany, 1983.
Little RJ, Rubin DB. Statistical analysis with missing data, Vol. 4. New York: Wiley, 1987.
Pearl J. Myth, confusion, and science in causal analysis. Technical Report, R-348, University of California, Los Angeles, CA. http://ftp.cs.ucla.edu/pub/stat_ser/r348.pdf, 2009.
Mohan K, Pearl J, Tian J. Missing data as a causal inference problem. Technical Report R-410, http://ftp.cs.ucla.edu/pub/stat_ser/r410.pdf, University of California Los Angeles, Computer Science Department, Los Angeles, CA, 2013.
Rosenbaum P. Observational studies, 2nd ed. New York: Springer-Verlag, 2002.
Goldberger A. Reverse regression and salary discrimination. J Hum Resour 1984;19:293–318.
Hirano K, Imbens G. Estimation of causal effects using propensity score weighting: an application to data on right heart catheterization. Health Serv Outcomes Res Methodol 2001;2:259–78.
Bhattacharya J, Vogt W. Do instrumental variables belong in propensity scores? NBER Technical Working Paper 343, National Bureau of Economic Research, MA, 2007.
Pearl J. On a class of bias-amplifying variables that endanger effect estimates. In Proceedings of the Twenty-Sixth Conference on Uncertainty in Artificial Intelligence. AUAI, Corvallis, OR, 417–24. http://ftp.cs.ucla.edu/pub/stat_ser/r356.pdf, 2010.
Wooldridge J. Should instrumental variables be used as matching variables? Technical Report, https://www.msu.edu/ec/faculty/wooldridge/current%20research/treat1r6.pdf, Michigan State University, MI, 2009.
Myers JA, Rassen JA, Gagne JJ, Huybrechts KF, Schneeweiss S, Rothman KJ, Joffe MM, Glynn RJ. Effects of adjusting for instrumental variables on bias and precision of effect estimates. Am J Epidemiol 2011;174:1213–22.
Pearl J. Invited commentary: understanding bias amplification. Am J Epidemiol 2011 [online]. DOI: 10.1093/aje/kwr352.
Pearl J. On measurement bias in causal inferences. In Proceedings of the Twenty-Sixth Conference on Uncertainty in Artificial Intelligence. AUAI, Corvallis, OR, 425–32. http://ftp.cs.ucla.edu/pub/stat_ser/r357.pdf, 2010.
Kuroki M, Pearl J. Measurement bias and effect restoration in causal inference. Technical Report, R-366, http://ftp.cs.ucla.edu/pub/stat_ser/r366.pdf, University of California Los Angeles, Computer Science Department, Los Angeles, CA, 2013.
Readers familiar with do-calculus can interpret a path coefficient such as $\beta$ as the experimental slope $\frac{\partial}{\partial x} E[Y \mid do(x)]$, while those familiar with counterfactual logic can write $\beta = E[Y_{x+1} - Y_x]$. The latter implies the former, and the two coincide in linear models, where causal effects are homogeneous (i.e., unit-independent).↩
Standardized parameters refer to systems in which (without loss of generality) all variables are normalized to have zero mean and unit variance, which significantly simplifies the algebra.↩
It has come to my attention recently, and I feel a responsibility to make it public, that seasoned reviewers for highly reputable journals reject papers because they are not convinced that such bias can be created; it defies, so they claim, everything they have learned from statistics and economics. A typical resistance to accepting Berkson’s paradox is articulated in [18, 19].↩
$\sigma^2_{X \mid R_X}$ stands for the conditional variance of X given the state of the missingness mechanism $R_X$. We take the liberty of treating $R_X$ as any other variable in the linear system, even though it is binary, hence the relationship must be nonlinear. The linear context simplifies the intuition and the results hold in nonparametric systems as well.↩