# Journal of Causal Inference

Ed. by Imai, Kosuke / Pearl, Judea / Petersen, Maya Liv / Sekhon, Jasjeet / van der Laan, Mark J.

2 Issues per year

Online ISSN: 2193-3685
Volume 3, Issue 1 (Mar 2015)

# A Boosting Algorithm for Estimating Generalized Propensity Scores with Continuous Treatments

Yeying Zhu
• Corresponding author
• Department of Statistics and Actuarial Science, University of Waterloo, 200 University Ave W, Waterloo, ON N2L 3G1, Canada
/ Donna L. Coffman
• The Methodology Center, The Pennsylvania State University, University Park, PA, USA
/ Debashis Ghosh
• Department of Statistics and Public Health Sciences, The Pennsylvania State University, University Park, PA, USA
Published Online: 2014-08-29 | DOI: https://doi.org/10.1515/jci-2014-0022

## Abstract

In this article, we study the causal inference problem with a continuous treatment variable using propensity score-based methods. For a continuous treatment, the generalized propensity score is defined as the conditional density of the treatment-level given covariates (confounders). The dose–response function is then estimated by inverse probability weighting, where the weights are calculated from the estimated propensity scores. When the dimension of the covariates is large, the traditional nonparametric density estimation suffers from the curse of dimensionality. Some researchers have suggested a two-step estimation procedure by first modeling the mean function. In this study, we suggest a boosting algorithm to estimate the mean function of the treatment given covariates. In boosting, an important tuning parameter is the number of trees to be generated, which essentially determines the trade-off between bias and variance of the causal estimator. We propose a criterion called average absolute correlation coefficient (AACC) to determine the optimal number of trees. Simulation results show that the proposed approach performs better than a simple linear approximation or L2 boosting. The proposed methodology is also illustrated through the Early Dieting in Girls study, which examines the influence of mothers’ overall weight concern on daughters’ dieting behavior.

## 1 Introduction

Much of the literature on propensity scores in causal inference has focused on binary treatments. In the past decade, a few studies (e.g., Lechner [1]; Imai and Van Dyk [2]; Tchernis et al. [3]; Karwa et al. [4]; and McCaffrey et al. [5]) have extended propensity score-based approaches to categorical treatments with more than two levels.

In this article, we consider the problem of causal inference when the treatment is quantitative. Quantitative treatments are very common in practice, such as dosage in biomedical studies [6], number of cigarettes in prevention studies [2] and duration of training in labor studies [7]. In the special case of continuous treatments, a main objective is to estimate the dose–response function. Hirano and Imbens [8] propose a two-step procedure for estimating the dose–response function and suggest a technique called “blocking” to evaluate the balance in the covariates after adjusting for the propensity scores. An alternative approach [9] is based on marginal structural models (MSMs). In MSMs, we specify a response function and employ inverse probability weighting (IPW) to consistently estimate the parameters of the function.

A key step in both approaches is to estimate the generalized propensity score, which is defined as the conditional density of the treatment level given the covariates. The conditional density is usually estimated nonparametrically, for example by kernel or local polynomial estimation (e.g., Hall et al. [10]; Fan et al. [11]). When there are a large number of covariates in the study, nonparametric estimation of the conditional density suffers from the curse of dimensionality. Given the limited literature on this topic, we propose a boosting algorithm to estimate the generalized propensity score. In boosting, the number of trees to be generated is an important tuning parameter, which essentially determines the trade-off between bias and variance of the targeted causal estimator. In the binary treatment case, it has been suggested that the optimal number of trees be determined by minimizing the average standardized absolute mean (ASAM) difference between the treatment group and the control group [12]. The standardized mean difference is also a well-established criterion for assessing balance in the potential confounders after weighting. This idea can easily be extended to the categorical treatment case. Similarly, for a continuous treatment, we could divide the treatment into several categories and draw causal inferences based on the categorical treatment. However, doing so may introduce subjective bias and information loss. Instead, we aim to develop an innovative criterion that minimizes the correlation between the continuous treatment variable and the covariates after weighting.

This article proceeds as follows. In Section 2, we review the concepts of the dose–response function, generalized propensity scores and the ignorability assumption. In Section 3, we propose a boosting method to estimate the generalized propensity scores along with an innovative criterion to determine the optimal number of trees in boosting. A detailed algorithm is described and the corresponding R code is provided in the Appendix. In Section 4, we compare the proposed methods through simulation studies, and a data analysis application to the Early Dieting in Girls study is presented in Section 5. Section 6 concludes with some discussion.

## 2.1 Definition and assumptions

Let Y denote the response of interest, T the treatment level and $\mathbf{X}$ a p-dimensional vector of baseline covariates. The observed data can be represented as $(Y_i, T_i, \mathbf{X}_i)$, $i = 1, \dots, n$, a random sample from $(Y, T, \mathbf{X})$. In addition to the observed quantities, we further define $Y_i(t)$ as the potential outcome if subject i were assigned to treatment level t. Here, T is a random variable and t is a specific level of T. The dose–response function we are interested in estimating is $\mu(t) = E[Y_i(t)]$, and we assume $Y_i(t)$ is well defined for $t \in \tau$, where $\tau$ is a continuous domain.

Similar to the binary case, the ignorability assumption is as follows: $f(t \mid Y(t), \mathbf{X}) = f(t \mid \mathbf{X})$ for $t \in \tau$, where $f(t \mid \cdot)$ refers to the conditional density. That is, the treatment assignment is conditionally independent of the potential outcomes given the covariates. In other words, we assume there are no unmeasured covariates that may jointly influence the treatment assignment and the potential outcomes.

Denote the generalized propensity score as $r(t, \mathbf{X}) \equiv f_{T \mid \mathbf{X}}(t \mid \mathbf{X})$, which is the conditional density of observing treatment level t given the covariates [6]. The ignorability assumption implies $f(t \mid Y(t), r(t, \mathbf{X})) = f(t \mid r(t, \mathbf{X}))$ for $t \in \tau$. That is, to adjust for confounding, it is sufficient to condition on the generalized propensity score instead of conditioning on the vector of covariates, which might be high dimensional.

## 2.2 Estimation based on marginal structural models

Under the ignorability assumption, we focus on the marginal structural model approach to estimating the dose–response function, proposed by Robins [9] and Robins et al. [13]. The method works by building a marginal structural model for the potential outcomes. For example, we may assume a linear model: $E[Y(t)] = \alpha_0 + \alpha_1 t.$ (1) Model (1) is marginal because it is defined for the expected value of the potential outcomes without conditioning on any covariates (which is different from regression models). Based on the observed data $(Y_i, T_i, \mathbf{X}_i)$, $i = 1, \dots, n$, the parameters in eq. (1) can be consistently estimated by IPW. For the ith subject, the weight is $w_i = \frac{f_T(T_i)}{f_{T \mid \mathbf{X}}(T_i \mid \mathbf{X}_i)} = \frac{r(T_i)}{r(T_i, \mathbf{X}_i)}, \quad \text{for } i = 1, \dots, n.$ (2) There are two important issues related to this approach: (i) the estimation of the inverse probability weights and (ii) the functional form of the outcome model in eq. (1). The first issue is the main topic of this article and will be explored in the next section. Here we briefly discuss the second issue. The consistency of MSM estimators relies on the correct specification of the outcome model. However, the true form of $E[Y(t)]$ is unknown in reality and a flexible model is always preferred.
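For intuition, once the marginal and conditional densities of the treatment are estimated, the stabilized weight in eq. (2) is a simple ratio. Below is a minimal Python sketch assuming both densities are normal (the function name and interface are ours, not the paper's):

```python
import numpy as np
from scipy.stats import norm

def stabilized_weights(T, T_hat, sigma_hat):
    """Stabilized IPW weights w_i = f_T(T_i) / f_{T|X}(T_i | X_i), as in eq. (2).

    Numerator: marginal normal density of T; denominator: conditional
    normal density evaluated at the residual T - T_hat.
    """
    num = norm.pdf(T, loc=T.mean(), scale=T.std(ddof=1))
    den = norm.pdf(T - T_hat, loc=0.0, scale=sigma_hat)
    return num / den

# Sanity check: with a constant mean model (no covariate information),
# numerator and denominator coincide and every weight equals 1.
rng = np.random.default_rng(0)
T = rng.normal(6.0, 1.0, size=500)
w = stabilized_weights(T, np.full_like(T, T.mean()), T.std(ddof=1))
```

A richer conditional mean model pushes weights away from 1 for subjects whose treatment level is unusual given their covariates.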
In the data application, we assume a regression spline function [14] for the outcome model: $E[Y(t)] = \beta_0 + \beta_1 t + \cdots + \beta_p t^p + \beta_{p+1}(t - \tau_1)_+^p + \cdots + \beta_{p+K}(t - \tau_K)_+^p,$ (3) where $u_+^p = (u_+)^p$ is a truncated power function with $u_+ = \max(0, u)$, p is the order of the polynomial function and K is the total number of inner knots. The inner knots are either distributed evenly on $\tau$ or defined as the equally spaced sample quantiles of $T_i$, $i = 1, \dots, n$; that is, $\tau_j = T_{([100j/(K+1)])}$ for $j = 1, 2, \dots, K$, where $T_{(i)}$ is the ith sample percentile of $T_1, T_2, \dots, T_n$.

To determine the regression spline function, we need to find the optimal p and K. Traditional model selection criteria, such as AIC and BIC, are based on a simple random sample. These criteria can be extended to determine the form of the marginal structural model for a non-randomized sample [15, 16]. Under the assumption that Y is normally distributed, the weighted AIC can be defined as $\text{AIC}_w = \left(\sum_{i=1}^{n} w_i\right) \ln\left(\frac{\sum_{i=1}^{n} w_i (Y_i - \hat{Y}_i)^2}{\sum_{i=1}^{n} w_i}\right) + 2l,$ (4) where l is the total number of parameters; in eq. (3), $l = K + p + 1$. Notice that at this stage we treat the $w_i$'s as fixed. Similarly, we define the weighted BIC as $\text{BIC}_w = \left(\sum_{i=1}^{n} w_i\right) \ln\left(\frac{\sum_{i=1}^{n} w_i (Y_i - \hat{Y}_i)^2}{\sum_{i=1}^{n} w_i}\right) + \ln\left(\sum_{i=1}^{n} w_i\right) l.$ (5) We will illustrate the specification of the outcome model through the data application in Section 5.3. In the next section, we will focus on the first issue and propose a boosting algorithm for estimating the generalized propensity scores.
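The criteria in eqs. (4) and (5) are straightforward to compute. A short sketch (the helper name is ours; the paper's own code, in R, is in the Appendix):

```python
import numpy as np

def weighted_aic_bic(y, y_hat, w, n_params):
    """Weighted AIC (eq. (4)) and weighted BIC (eq. (5)) for a normal
    outcome, treating the weights w as fixed."""
    sw = w.sum()
    wmse = np.sum(w * (y - y_hat) ** 2) / sw      # weighted mean squared error
    aic_w = sw * np.log(wmse) + 2 * n_params
    bic_w = sw * np.log(wmse) + np.log(sw) * n_params
    return aic_w, bic_w
```

For the spline in eq. (3), `n_params` would be $K + p + 1$; the two criteria share the weighted goodness-of-fit term and differ only in the penalty.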

## 3.1 Modeling the generalized propensity scores

In the MSM approach, the estimation of the $w_i$'s in eq. (2) is essential. For simplicity, we assume T follows a normal distribution so that $r(T_i)$ can be easily estimated by a normal density. Note that if the normality assumption does not hold for T, we can always employ a nonparametric method, such as kernel density estimation, to estimate $r(T_i)$. To estimate $r(T_i, \mathbf{X}_i)$, a traditional way is to assume $T = \mathbf{X}'\beta + \epsilon, \quad \epsilon \sim N(0, \sigma^2).$ (6) Then, the estimation of $r(T_i, \mathbf{X}_i)$ follows two steps [13]:

1. Run a multiple regression of $T_i$ on $\mathbf{X}_i$, $i = 1, \dots, n$, and obtain $\hat{T}_i$ and $\hat{\sigma}$;

2. Calculate the residuals $\hat{\epsilon}_i = T_i - \hat{T}_i$; then $r(T_i, \mathbf{X}_i)$ can be approximated by $\hat{r}(T_i, \mathbf{X}_i) \approx f(\hat{\epsilon}_i) \approx \frac{1}{\sqrt{2\pi}\hat{\sigma}} \exp\left\{-\frac{\hat{\epsilon}_i^2}{2\hat{\sigma}^2}\right\}.$ (7)
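The two steps above amount to an OLS fit followed by a normal density evaluation of the residuals. A sketch under the linear-normal model of eq. (6) (helper name ours):

```python
import numpy as np

def gps_linear_normal(T, X):
    """Two-step generalized propensity score under eq. (6):
    step 1, regress T on X by least squares; step 2, plug the
    residuals into the normal density of eq. (7)."""
    Xd = np.column_stack([np.ones(len(T)), X])       # add an intercept
    beta, *_ = np.linalg.lstsq(Xd, T, rcond=None)    # step 1: OLS fit
    resid = T - Xd @ beta                            # residuals eps_hat
    sigma = resid.std(ddof=Xd.shape[1])              # residual SD
    return np.exp(-resid**2 / (2 * sigma**2)) / (np.sqrt(2 * np.pi) * sigma)
```

Subjects whose treatment is far from its predicted value get a small density, and hence a large inverse probability weight in eq. (2).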

Because the ignorability assumption is untestable, researchers usually collect a large number of covariates, which means $\mathbf{X}$ is high dimensional. In this case, eq. (6) may not hold. A more general approach is to assume $T = m(\mathbf{X}) + \epsilon, \quad \epsilon \sim N(0, \sigma^2),$ (8) where $m(\mathbf{X})$ is the mean function of T given $\mathbf{X}$. We advocate a machine learning algorithm, boosting, to estimate $m(\mathbf{X})$. The advantage of boosting is that it is a nonparametric algorithm that can automatically pick up important covariates, nonlinear terms and interaction terms among covariates [12]. It fits an additive model in which each component (base learner) is a regression tree. Mathematically, it can be written as $m(\mathbf{X}) = \sum_{m=1}^{M} \sum_{j=1}^{K_m} c_{mj} I\{\mathbf{X} \in R_{mj}\},$ (9) where M is the total number of trees, $K_m$ is the number of terminal nodes of the mth tree, $R_{mj}$ is a rectangular region in the feature space spanned by $\mathbf{X}$ and $c_{mj}$ is the predicted constant in region $R_{mj}$. $K_m$ and $R_{mj}$ are determined by optimizing some nonparametric information criterion, such as entropy, the misclassification rate or the Gini index. $c_{mj}$ is simply the average value of the $T_i$ in the training data that fall in region $R_{mj}$. Details about how to construct a tree can be found in Breiman et al. [17].
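To make the structure of eq. (9) concrete, here is a deliberately simplified $L_2$ boosting loop with depth-1 stumps as base learners (each stump is a two-leaf tree, i.e. $K_m = 2$). This is an illustration of the boosted-tree idea, not the authors' algorithm; their implementation uses full regression trees via the R code in the Appendix:

```python
import numpy as np

def boost_stumps(X, T, M, nu=0.1):
    """Minimal L2 boosting: at each of M rounds, fit the depth-1 stump
    (split variable j, threshold s, leaf means cl/cr) that best fits the
    current residual, trying a few sample quantiles as candidate splits,
    then take a shrunken step of size nu."""
    n, p = X.shape
    pred = np.full(n, T.mean())
    stumps = []
    for _ in range(M):
        r = T - pred                                 # current residual
        best = None
        for j in range(p):
            for s in np.quantile(X[:, j], [0.25, 0.5, 0.75]):
                left = X[:, j] <= s
                if left.all() or not left.any():
                    continue                         # degenerate split
                cl, cr = r[left].mean(), r[~left].mean()
                fit = np.where(left, cl, cr)
                sse = np.sum((r - fit) ** 2)
                if best is None or sse < best[0]:
                    best = (sse, j, s, cl, cr)
        _, j, s, cl, cr = best
        pred += nu * np.where(X[:, j] <= s, cl, cr)  # additive update
        stumps.append((j, s, cl, cr))
    return pred, stumps

# Toy data: treatment with a linear mean in two of ten covariates.
rng = np.random.default_rng(1)
X = rng.normal(size=(500, 10))
T = 6 + 0.3 * X[:, 0] + 0.65 * X[:, 1] + rng.normal(size=500)
pred, stumps = boost_stumps(X, T, M=200)
```

The residuals `T - pred` would then feed the normal density of eq. (7); the choice of `M` is the subject of the next subsection.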

In boosting, M is an important tuning parameter: it determines the trade-off between bias and variance of the causal estimator. In inverse probability weighted methods, if subject i receives a weight $w_i$, the subject is effectively replicated $w_i - 1$ additional times; that is, there will be $w_i - 1$ replications in the weighted pseudo-sample. In the weighted sample, if the propensity scores are correctly estimated, the treatment assignment and the covariates should be unconfounded under the ignorability assumption [13]. Consequently, the causal effect can be estimated as in a simple randomized study without confounding. Therefore, a reasonable criterion is to stop the algorithm at the number of trees such that the treatment assignment and the covariates are independent (unconfounded) in the weighted sample. Next, we propose stopping criteria for boosting based on this idea.

## 3.2 Algorithm

We propose four different criteria (summarized in Table 1) for measuring the degree of independence/correlation between the treatment and each covariate. These criteria are named “Pearson/polyserial,” “Spearman,” “Kendall” and “distance.” The Pearson/polyserial, Spearman and Kendall correlations are commonly used correlation measures; the distance correlation [18, 19] is the most recently proposed and is gaining popularity due to a nice property: it can be defined for two variables of arbitrary dimensions and arbitrary types. Next, we briefly describe these four correlations.

Table 1

Stopping criteria based on different correlation coefficients

We denote one of the covariates as $X_j$, for $j = 1, 2, \dots, J$, where J is the total number of covariates. If both T and $X_j$ are normally distributed, the Pearson correlation coefficient will be zero whenever T and $X_j$ are independent. When $X_j$ is categorical, the Pearson correlation coefficient can be biased [20]. Instead, we should use the polyserial correlation coefficient [21], which essentially assumes that the categorical variable is obtained by classifying an underlying continuous variable $X_j'$. The unknown parameters of $X_j'$ can be estimated by maximum likelihood, and the polyserial correlation is then calculated as the Pearson correlation between T and $X_j'$. The Spearman and Kendall correlation coefficients are rank-based correlations that can be applied to both continuous and categorical variables; if T and $X_j$ are independent, we would expect them to be close to zero. A more flexible measure of correlation/independence is the distance correlation. The distance correlation takes values between zero and one, and it equals zero if and only if T and $X_j$ are independent, regardless of the type of $X_j$.
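The Pearson, Spearman and Kendall coefficients are available in scipy (`pearsonr`, `spearmanr`, `kendalltau`); the polyserial correlation needs a maximum likelihood routine and is omitted here. The distance correlation is short enough to write directly. A sketch for 1-d variables (our own helper, not from the paper):

```python
import numpy as np
from scipy.stats import pearsonr, spearmanr, kendalltau

def distance_correlation(x, y):
    """Sample distance correlation of two 1-d variables, computed from
    double-centered pairwise distance matrices; the population version
    is zero if and only if x and y are independent."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    a = np.abs(x[:, None] - x[None, :])                  # pairwise |xi - xj|
    b = np.abs(y[:, None] - y[None, :])
    A = a - a.mean(0) - a.mean(1)[:, None] + a.mean()    # double-centering
    B = b - b.mean(0) - b.mean(1)[:, None] + b.mean()
    dcov2 = (A * B).mean()                               # squared distance cov
    return np.sqrt(dcov2 / np.sqrt((A * A).mean() * (B * B).mean()))

rng = np.random.default_rng(2)
t = rng.normal(size=300)
x = t + rng.normal(size=300)          # a covariate dependent on t
rho_p = pearsonr(t, x)[0]
rho_s = spearmanr(t, x)[0]
tau_k = kendalltau(t, x)[0]
d_cor = distance_correlation(t, x)
```

For dependent but nonlinearly related variables, the distance correlation can remain large even when the Pearson coefficient is near zero, which is what makes it attractive as a balance diagnostic.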

In the binary treatment case, to check whether the propensity scores adequately balance the covariates, we calculate the standardized difference in the weighted mean between the treatment group and the control group. If balance is achieved, the difference should be small. In the continuous case, we propose a general algorithm that uses bootstrapping to calculate the weighted correlation coefficient. The procedure requires the following steps for each value of M (number of trees).

1. Calculate $\hat{r}(T_i, \mathbf{X}_i)$ using boosting with M trees. Then, calculate $w_i = \frac{\hat{r}(T_i)}{\hat{r}(T_i, \mathbf{X}_i)}$ for $i = 1, \dots, n$;

2. Sample n observations from the original dataset with replacement, each data point sampled with probability proportional to the inverse probability weight obtained in Step 1. Calculate the corresponding correlation coefficient between T and $X_j$ on the weighted sample and denote it $d_{ji}$;

3. Repeat Step 2 k times to obtain $d_{j1}, d_{j2}, \dots, d_{jk}$, and calculate the average correlation coefficient between T and $X_j$, denoted $\bar{d}_j$;

4. Perform a Fisher transformation on $\bar{d}_j$, i.e., $z_j = \frac{1}{2} \ln\left(\frac{1 + \bar{d}_j}{1 - \bar{d}_j}\right).$ (10)

5. Average the absolute values of the $z_j$ over all the covariates to obtain the average absolute correlation coefficient (AACC).

For each value of $M = 1, 2, \dots, 20{,}000$, calculate the AACC and find the number of trees that leads to the smallest AACC value. The R code for calculating the AACC is displayed in Appendix A. An alternative is to replace Step 5 by calculating the maximum absolute correlation coefficient (MACC) over all the covariates and finding the number of trees that leads to the smallest MACC value. After the value of M is determined, the generalized propensity score is estimated by eqs (9) and (7). The Fisher transformation in Step 4 mainly serves the determination of the cut-off value for the AACC (MACC). In the binary treatment case, a well-accepted cut-off value for the ASAM is 0.2 [12]. In the continuous treatment case, we set the cut-off value for the AACC (MACC) to 0.1. That is, when $\text{AACC} < 0.1$ ($\text{MACC} < 0.1$), we claim that the confounding effect between the treatment and the outcome is small. Appendix B gives a heuristic justification for this cut-off value. Figure 1 illustrates the Fisher transformation: when $|\bar{d}_j|$ is small, the transformation is almost the identity; when $|\bar{d}_j|$ is large, $|z_j|$ increases faster than $|\bar{d}_j|$. This is another advantage of the Fisher transformation: when we minimize the AACC, larger absolute correlation coefficients receive a heavier penalty than on the original scale.
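The five steps can be condensed into a short routine. Below is a sketch for continuous covariates that uses the Pearson correlation in Step 2 for brevity (the function name is ours; the paper's R implementation is in Appendix A):

```python
import numpy as np
from scipy.stats import pearsonr

def aacc(T, X, w, k=100, seed=0):
    """Average absolute correlation coefficient, Steps 2-5: k weighted
    bootstrap resamples, per-covariate average correlation with T,
    Fisher transformation (eq. (10)), then the mean absolute value."""
    rng = np.random.default_rng(seed)
    n, J = X.shape
    p = w / w.sum()                        # resampling probabilities, Step 2
    d_bar = np.zeros(J)
    for _ in range(k):
        idx = rng.choice(n, size=n, replace=True, p=p)
        for j in range(J):
            d_bar[j] += pearsonr(T[idx], X[idx, j])[0]
    d_bar /= k                                           # Step 3: average
    z = 0.5 * np.log((1 + d_bar) / (1 - d_bar))          # Step 4: Fisher
    return np.abs(z).mean()                              # Step 5: AACC
```

In Step 1 the weights `w` come from the boosted propensity model with M trees; tracing `aacc` over M and taking the minimizer gives the stopping rule.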

Figure 1

An illustration of Fisher transformation

## 4.1 Simulation setup

In this section, we conduct simulation studies to compare the performance of the proposed methods to the existing methods. The generation of the observed $(Y, T, \mathbf{X})$ is as follows. First, the vector of baseline covariates (potential confounders), denoted $\mathbf{X} = (X_1, X_2, \dots, X_{10})$, is generated from the following distributions: $X_1, X_2 \sim N(0, 1)$, $X_3 \sim \text{Bernoulli}(0.5)$, $X_4 \sim \text{Bernoulli}(0.3)$, $X_5, \dots, X_7 \sim N(0, 1)$ and $X_8, \dots, X_{10} \sim \text{Bernoulli}(0.5)$. Among the ten covariates, $X_1$–$X_4$ are real confounders, related to both the treatment and the outcome.

The continuous treatment is generated from $N\left(m\left(\mathbf{X}\right),1\right)$, where the mean function is defined for different scenarios. In Scenario (A), $m\left(\mathbf{X}\right)$ is a linear combination of the real confounders. In Scenario (B), we consider a nonparametric model that is similar to a tree structure with main effects and one quadratic term while in Scenario (C), we add two interaction terms. The true forms of $m\left(\mathbf{X}\right)$ in different scenarios are displayed as follows:

– Scenario (A): $m(\mathbf{X}) = 6 + 0.3X_1 + 0.65X_2 - 0.35X_3 - 0.4X_4$;

– Scenario (B): $m(\mathbf{X}) = 6 + 0.3I\{X_1 > 0.5\} + 0.65I\{X_2 < 0\} - 0.35I\{X_3 = 1\} - 0.4I\{X_4 = 0\} + 0.65I\{X_1 > 0\}I\{X_1 > 1\}$;

– Scenario (C): $m(\mathbf{X}) = 6 + 0.3I\{X_1 > 0.5\} + 0.65I\{X_2 < 0\} - 0.35I\{X_3 = 1\} - 0.4I\{X_4 = 0\} + 0.65I\{X_1 > 0\}I\{X_1 > 1\} + 0.3I\{X_1 > 0\}I\{X_4 = 1\} - 0.65I\{X_2 > 0.3\}I\{X_3 = 0\}$.

The potential outcome for a subject with covariates $\mathbf{X}$ is generated from $Y(t) \mid \mathbf{X} = 3.85 + 0.4t + 0.3X_1 + 0.36X_2 + 0.73X_3 - 0.2X_4 + 0.25X_1^2 + \epsilon,$ where $\epsilon \sim N(0, 1)$. Based on this data generation process, the true dose–response function is $E[Y(t)] = E\{E[Y(t) \mid \mathbf{X}]\} = 4.405 + 0.4t.$ (11)
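The Scenario (A) data-generating process can be reproduced in a few lines; a sketch that follows the distributions stated above (random seed arbitrary):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 500
# X1, X2, X5-X7 ~ N(0,1); X3, X8-X10 ~ Bernoulli(0.5); X4 ~ Bernoulli(0.3)
X = np.column_stack([
    rng.normal(size=(n, 2)),
    rng.binomial(1, 0.5, n),
    rng.binomial(1, 0.3, n),
    rng.normal(size=(n, 3)),
    rng.binomial(1, 0.5, (n, 3)),
])
# Scenario (A): linear mean function of the four real confounders
m = 6 + 0.3 * X[:, 0] + 0.65 * X[:, 1] - 0.35 * X[:, 2] - 0.4 * X[:, 3]
T = rng.normal(m, 1.0)
# Outcome model shared by all three scenarios
Y = (3.85 + 0.4 * T + 0.3 * X[:, 0] + 0.36 * X[:, 1] + 0.73 * X[:, 2]
     - 0.2 * X[:, 3] + 0.25 * X[:, 0] ** 2 + rng.normal(size=n))
```

Averaging the covariate terms ($0.73 \times 0.5 - 0.2 \times 0.3 + 0.25 \times 1$) over their distributions gives the intercept 4.405 in eq. (11).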

## 4.2 Results

To compare the different methods, we set the parameter of interest to 0.4, the coefficient of T in the dose–response function (11). We apply IPW to estimate the coefficient and employ four different methods to estimate $m(\mathbf{X})$ in the generalized propensity scores: (1) linear approximation (eq. (6)) using all the covariates; (2) linear approximation with variable selection; (3) $L_2$ boosting by minimizing the empirical quadratic loss, called mboost, by Bühlmann and Yu [22]; (4) boosting with the proposed stopping criteria: Pearson/polyserial, Spearman, Kendall and distance. In Method (2), we employ a variable selection technique similar to the idea suggested by Hirano and Imbens [8] for selecting covariates in the generalized propensity score model. First, we divide the treatment variable into three groups of equal size. Then, we test whether each covariate is distributed the same across the treatment “groups” using ANOVA at the 0.05 significance level, and we only include those covariates that differ significantly among the treatment “groups”. In Table 2, we denote Method (1) as Linear1 and Method (2) as Linear2. We generate 1,000 datasets with a sample size of 500. The simulation results are shown in Table 2.

Table 2

Simulation results for Scenarios (A), (B) and (C)

In Scenario (A), the true mean function $m(\mathbf{X})$ is linear in the covariates, and hence the linear approximation proposed by Robins et al. [13] leads to the smallest bias. In addition, Linear2 has smaller bias and MSE than Linear1, which indicates that variable selection in the propensity score model does improve performance. Compared to Linear1 and Linear2, the proposed methods yield much smaller variances and MSEs, as well as better confidence interval coverage. Compared to the proposed methods, mboost based on $L_2$ boosting yields larger bias and MSE. In Scenarios (B) and (C), where $m(\mathbf{X})$ follows a tree structure, Linear2 performs better than Linear1. In addition, the proposed methods are superior in terms of bias, MSE and 95% confidence interval coverage. Our simulation results are not very sensitive to the choice among the Pearson/polyserial, Spearman and Kendall correlations. The distance correlation leads to slightly more biased estimates compared to the other proposed criteria.

To further explore the proposed algorithm, we randomly select 100 datasets from the simulated datasets in each scenario. We then compare the number of trees selected by each criterion with the optimal number of trees, defined as the number that leads to the smallest absolute bias with respect to the true causal effect (0.4). Table 3 shows the average number of trees based on the 100 datasets. Compared to the “best” model with the optimal number of trees, “distance” tends to select smaller models, which explains why “distance” yields relatively larger bias than the other three criteria in the simulation. The differences in the number of trees between the “best” model and the models selected by “Pearson/polyserial”, “Spearman” and “Kendall” are relatively small. In Scenarios (A) and (C), they yield slightly more complex models than the “best” models, and in Scenario (B), they yield slightly smaller models.

Table 3

Average number of trees in boosting selected by each criterion

## 5.1 Early dieting in girls study

It is reported that dieting increases the likelihood of overeating, weight gain and chronic health problems [23]. We analyze the Early Dieting in Girls study, which is a longitudinal study that aims to examine parental influences on daughters’ growth and development from ages 5 to 15 [24]. The study involves 197 daughters and their mothers, who are from non-Hispanic, White families living in central Pennsylvania. The participants were assessed at five different waves. At each wave, daughters and their mothers were interviewed during a scheduled visit to the laboratory.

In this analysis, we study the influence of mothers’ weight concern on girls’ early dieting behavior. The treatment variable is mother’s overall weight concern (M2WTCON), which is measured at daughter’s age 7. It is the average score of five questions in the questionnaire. A higher value implies the mother is more concerned about gaining weight. In the dataset, its values range from 0 to 3.4. The histogram in Figure 2(A) and the QQ plot in Figure 2(B) show that the treatment is approximately normally distributed. The outcome is whether the daughter diets between ages 7 and 11. We exclude those daughters who reported dieting before age 7, which results in 158 subjects, of which 45 daughters reported dieting. There are 50 potential baseline confounders in this study regarding participants’ characteristics, such as: family history of diabetes and obesity, family income, daughter’s disinhibition, daughter’s body esteem, mother’s perception of mother’s current size and mother’s satisfaction with daughter’s current body [25].

Figure 2

Model diagnostics for the fitted boosting model

## 5.2 Estimation of the generalized propensity scores

Since the treatment is self-selected, to draw causal inferences we need to adjust for the confounders in the study. Given the large number of potential confounders, we employ a boosting algorithm to estimate the generalized propensity scores. The simulation studies in Section 4 show that the estimation results are not sensitive to the choice among the Pearson/polyserial, Kendall and Spearman correlations. Therefore, we use “Pearson/polyserial” as the main criterion to select the optimal number of trees in boosting. Figure 3 displays the AACC value versus the number of trees from 1 to 20,000. Based on the data, the optimal number of trees is $M = 4{,}846$, with $\text{AACC} = 0.11$. Figure 2(C) and (D) shows that the residuals from the boosting model ($T_i - \hat{m}(\mathbf{X}_i)$) have approximately constant variance and are normally distributed. Based on the residual plots, we conclude that the boosting model adequately captures the mean of the treatment level given the covariates.

Now we focus on assessing balance in the covariates. Notice that in the original data, $\text{AACC} = 0.177$, which is much larger than the cut-off value of 0.1. In addition, if we look at each covariate separately, there are many covariates whose absolute correlation coefficients with T are larger than 0.2. As shown in Figure 4, after applying the weights, most of the absolute correlations between the treatment and each covariate in the weighted sample are below 0.1 on both the original scale and the Fisher-transformed scale. This indicates that the confounding effect of the covariates between the treatment and the outcome is greatly reduced after weighting.

Figure 3

AACC value for $M=1,\dots ,20,000$

Figure 4

Absolute value of correlation coefficients between T and each covariate before weighting (black dots) and after weighting (red plus signs).The left panel shows the original scale and the right panel shows the Fisher transformed scale

## 5.3 Modeling the mean outcome

To draw causal inferences, the next step is to determine the functional form of the outcome model. The boxplot of the estimated inverse probability weights from our proposed method is displayed in Figure 5. As shown, most weights are distributed around the value of 1. However, there are some extreme weights with values larger than 10. Extreme weights are harmful to the analysis because they increase the variance of the causal estimates [26]. When we estimate the dose–response function, we therefore shrink the top 5% of the weights to the 95th percentile.
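Shrinking the top 5% of the weights to the 95th percentile is a one-line truncation; a sketch (helper name ours):

```python
import numpy as np

def truncate_weights(w, q=0.95):
    """Cap weights at their q-th sample quantile, trading a small bias
    for a potentially large reduction in variance."""
    return np.minimum(w, np.quantile(w, q))
```

Only the extreme upper tail is altered; weights below the cap are returned unchanged.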

Figure 5

Boxplot of the inverse probability weights using Pearson/polyserial correlation

Since all the potential confounders are well adjusted for by the propensity model, we can model the outcome as a function of the treatment alone. Otherwise, we may also include covariates related to the treatment in the outcome model. We then assume a regression spline function as in eq. (3). For binary outcomes, the regression spline function is $\text{logit}\{E[Y(t)]\} = \beta_0 + \beta_1 t + \cdots + \beta_p t^p + \beta_{p+1}(t - \tau_1)_+^p + \cdots + \beta_{p+K}(t - \tau_K)_+^p.$ The weighted AIC and BIC in eqs. (4) and (5) are employed to determine the optimal p and K. For a binary outcome, the first part of eqs. (4) and (5) should be replaced by $-2\sum_{i=1}^{n} \left[w_i Y_i \ln \hat{p}_i + w_i (1 - Y_i) \ln(1 - \hat{p}_i)\right],$ where $\hat{p}_i$ is the estimated probability of early dieting for subject i. We consider three different values for p, $p = 1, 2, 3$, corresponding to piecewise linear, quadratic and cubic models, and $K = 0, 1, 2, \dots, 9$. We select the optimal p and K based on the values of $\text{BIC}_w$, which are displayed in Table 4. As shown, the best model is the one with $p = 1$ and $K = 0$. Therefore, the model we fit is $\text{logit}\{E[Y(t)]\} = \beta_0 + \beta_1 t.$ The causal log odds ratio ($\beta_1$) is estimated as 0.1782 with a standard error of 0.2865 (p-value = 0.5349).
The standard error is obtained from the sandwich formula implemented in the survey package in R. Alternatively, the standard error can be estimated by bootstrapping: repeatedly draw a bootstrap sample with replacement from the original dataset and apply the same estimation procedure. Based on 1,000 replications, the bootstrap estimate of the standard error is 0.2374, slightly smaller than the sandwich estimate. The positive point estimate suggests that the probability of a daughter's early dieting increases with the mother's weight concern; however, the causal effect is not significant at the 0.1 level.

Table 4

$\mathrm{BIC}_w$ for different $K$ and $p$
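To make the model-selection step concrete, the weighted deviance term that replaces the first part of eqs. (4) and (5) for a binary outcome can be computed directly. A minimal Python sketch with made-up outcomes, fitted probabilities, and weights (not data from the study):

```python
import numpy as np

def weighted_binomial_deviance(y, p_hat, w):
    """First term of the weighted AIC/BIC for a binary outcome:
    -2 * sum_i [ w_i * y_i * ln(p_hat_i) + w_i * (1 - y_i) * ln(1 - p_hat_i) ]."""
    p_hat = np.clip(p_hat, 1e-12, 1 - 1e-12)  # guard against log(0)
    return -2.0 * np.sum(w * (y * np.log(p_hat) + (1 - y) * np.log(1 - p_hat)))

# Toy example: made-up outcomes, fitted probabilities, and IP weights
y = np.array([1, 0, 1, 1, 0])
p_hat = np.array([0.8, 0.3, 0.6, 0.9, 0.2])
w = np.array([1.1, 0.9, 1.0, 1.2, 0.8])
dev = weighted_binomial_deviance(y, p_hat, w)
# BIC_w would then add a log(n) penalty times the number of spline parameters
```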

## 6 Discussion

In this article, we focused on the causal inference problem with a continuous treatment variable, with primary interest in estimating the dose–response function. IPW based on marginal structural models is a useful tool for estimating the causal effect. When the treatment variable is continuous, the generalized propensity score is defined as the conditional density of the treatment given covariates. Because the dimension of the covariates is usually large, the conditional density can be estimated in two steps: first, the mean function of the treatment given covariates is estimated; second, the conditional density is approximated by a normal density using the residuals from the first step, or estimated nonparametrically by a kernel method. We suggest using a boosting algorithm to estimate the mean function and propose a novel stopping criterion based on correlation metrics. The proposed stopping criterion is similar in spirit to the generalized boosted model of McCaffrey et al. [12] for a binary treatment. Simulation results show that the proposed method performs better than the existing methods, especially when the mean function of the treatment given covariates is nonlinear.
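The two-step procedure above can be sketched on simulated data. The following Python illustration uses scikit-learn's `GradientBoostingRegressor` as a stand-in for the R `gbm` package; the data-generating model and tuning values are assumptions chosen for illustration only:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor  # stand-in for R's gbm

def npdf(z):
    """Standard normal density, mirroring dnorm(z, 0, 1) in the R code."""
    return np.exp(-0.5 * z ** 2) / np.sqrt(2 * np.pi)

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))                             # simulated covariates
T = X[:, 0] + 0.5 * X[:, 1] ** 2 + rng.normal(size=500)   # continuous treatment

# Step 1: boosted estimate of the mean function E[T | X]
mean_fn = GradientBoostingRegressor(n_estimators=300, learning_rate=0.05,
                                    max_depth=3).fit(X, T)
resid = T - mean_fn.predict(X)

# Step 2: normal approximation of the conditional density from the residuals,
# with an intercept-only model for the stabilizing numerator
ps_den = npdf(resid / resid.std())
ps_num = npdf((T - T.mean()) / T.std())
weights = ps_num / ps_den   # stabilized inverse probability weights
```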

It is well known that, in causal inference problems, propensity scores are nuisance parameters; the parameter of interest is the causal treatment effect. It has been shown that a propensity score model with better predictive performance does not necessarily lead to better causal treatment effect estimates [27–29]. Therefore, when modeling propensity scores, we should focus on the properties of the resulting causal estimates [30, 31]. However, the true causal treatment effect is unknown in practice. For example, Brookhart and van der Laan [30] use an over-fitted parametric propensity score model as the reference model, and Galdo et al. [31] proposed a weighted cross-validation technique to approximate the mean integrated squared error of the counterfactual mean function. From another perspective, some recent literature has focused on estimating propensity scores by achieving balance in the covariates (e.g., McCaffrey et al. [12]; Hainmueller [32]; Imai and Ratkovic [33]). The underlying idea is that achieving balance reduces the bias in the treatment effect estimate due to measured covariates [34]. The stopping rule proposed for boosting in this study also falls into this realm: we select the optimal number of trees by achieving balance in the covariates, where balance is measured by the correlation between the treatment variable and the covariates in the weighted pseudo-sample.

There are several potential areas for future research. For example, in the proposed algorithm described in Section 3.2, the correlations in the weighted pseudo-sample are estimated by bootstrapping with unequal selection probabilities. Bootstrapping in this setting is computationally intensive, however. A more direct approach would be to develop weighted Pearson or distance correlations for nonrandom samples using estimating equations, which may greatly reduce the computation time.
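As a sketch of that direction, a Pearson-type correlation in the weighted pseudo-sample can be computed in closed form from the weights, avoiding the bootstrap entirely. The weighted-moment formula below is an illustration of the idea, not the authors' procedure:

```python
import numpy as np

def weighted_pearson(x, t, w):
    """Pearson correlation between covariate x and treatment t in the
    weighted pseudo-sample, computed from weighted moments instead of
    resampling; w plays the role of the inverse probability weights."""
    w = np.asarray(w, dtype=float)
    w = w / w.sum()
    mx, mt = np.sum(w * x), np.sum(w * t)
    cov = np.sum(w * (x - mx) * (t - mt))
    return cov / np.sqrt(np.sum(w * (x - mx) ** 2) * np.sum(w * (t - mt) ** 2))

# With equal weights this reduces to the ordinary Pearson correlation
rng = np.random.default_rng(2)
x = rng.normal(size=200)
t = 0.3 * x + rng.normal(size=200)
r_w = weighted_pearson(x, t, np.ones(200))
```

With the inverse probability weights plugged in for `w`, each covariate's weighted correlation with the treatment could feed directly into the AACC without the resampling loop.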

In the data application, we use a cut-off value of 0.1 for AACC; that is, if $AACC < 0.1$, we claim that the confounding effect of the covariates (potential confounders) is small. Based on Cohen's effect sizes for the Pearson correlation coefficient [35], we may further claim that when $0.1 < AACC < 0.55$, the confounding effect is medium, and when $AACC > 0.55$, it is large. However, more theoretical and empirical justification is needed for the choice of the cut-off value, which can be explored in future work.

Finally, we should point out that estimating generalized propensity scores is a much more challenging task than in the binary treatment case. The reason is that we are concerned with all moments of the conditional distribution of the treatment given covariates, whereas in the binary case only the conditional mean matters. In the proposed method, we follow a two-step procedure, first modeling the mean function of the treatment given covariates. As shown in eq. (8), we assume that the random errors are normally distributed with constant variance. If model diagnostics (e.g., Figure 2) show that either assumption is invalid, we may transform the treatment variable (see the lottery example in Hirano and Imbens [8]) or estimate the density nonparametrically, for example by replacing eq. (7) with a kernel density estimator. Future research may explore other machine learning algorithms, or a mixture of normal distributions, for estimating the conditional density.
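When normality of the residuals is doubtful, the normal-density step can be swapped for a kernel density estimate of the residuals. A brief Python sketch on simulated heavy-tailed residuals, using SciPy's `gaussian_kde` (illustrative only; the heavy-tailed error model is an assumption):

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(1)
resid = rng.standard_t(df=3, size=300)  # heavy tails: normal approximation suspect

# Kernel estimate of the residual density, replacing the normal density of eq. (7)
kde = gaussian_kde(resid)
ps_den_kde = kde(resid)                 # density evaluated at each residual
```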

## Appendix A: R code

The following R function calculates the average absolute correlation coefficient (AACC) between a continuous treatment and the covariates after applying the inverse probability weights. The subsequent R code demonstrates how to estimate the dose–response function using a real dataset.

```r
# Required packages: gbm (boosting), polycor (polyserial), energy (dcor)
F.aac.iter = function(i, data, ps.model, ps.num, rep, criterion) {
  # i: number of iterations (trees)
  # data: dataset containing the treatment and the covariates
  # ps.model: the boosting model used to estimate p(T_i | X_i)
  # ps.num: the estimated numerator density p(T_i)
  # rep: number of replications in the bootstrap
  # criterion: the correlation metric used as the stopping criterion
  GBM.fitted = predict(ps.model, newdata = data, n.trees = floor(i),
                       type = "response")
  ps.den = dnorm((data$T - GBM.fitted) / sd(data$T - GBM.fitted), 0, 1)
  wt = ps.num / ps.den
  aac_iter = rep(NA, rep)
  for (r in 1:rep) {  # loop index 'r' avoids shadowing the tree count 'i'
    # Draw a pseudo-sample with selection probabilities proportional to the weights
    bo = sample(1:dim(data)[1], replace = TRUE, prob = wt)
    newsample = data[bo, ]
    j.drop = match(c("T"), names(data))
    j.drop = j.drop[!is.na(j.drop)]
    x = newsample[, -j.drop]
    if (criterion == "spearman" | criterion == "kendall") {
      ac = apply(x, MARGIN = 2, FUN = cor, y = newsample$T,
                 method = criterion)
    } else if (criterion == "distance") {
      ac = apply(x, MARGIN = 2, FUN = dcor, y = newsample$T)
    } else if (criterion == "pearson") {
      ac = matrix(NA, dim(x)[2], 1)
      for (j in 1:dim(x)[2]) {
        ac[j] = ifelse(!is.factor(x[, j]),
                       cor(newsample$T, x[, j], method = criterion),
                       polyserial(newsample$T, x[, j]))
      }
    } else stop("The criterion is not correctly specified")
    # Average absolute Fisher-transformed correlation
    aac_iter[r] = mean(abs(1/2 * log((1 + ac) / (1 - ac))), na.rm = TRUE)
  }
  aac = mean(aac_iter)
  return(aac)
}
```

```r
# Create the data frame for the covariates
x = data.frame(BMIZ, factor(DIABETZ1), G1BDESTM, G1WTCON,
  factor(INCOME1), M1AGE1, M1BMI, factor(M1CURLS), factor(M1CURMT),
  M1DEPRS, M1ESTEEM, M1GFATCN, M1GNOW, factor(M1GSATN),
  M1MFATCN, M1MNOW, factor(M1MSAT), factor(M1NOEX), M1OGIBOD,
  M1PCEAFF, M1PCEEFF, M1PCEEXT, M1PCEIMP, M1PCEPER, M1PDSTOT,
  M1RLOAD, factor(M1SMOKE), M1WGTTES, M1WTCON, M1YRED, factor(OBESE1),
  g1discal, g1obcdc, g1ovrcdc, g1pFM, g1wgttes, m1cfqcwt, m1cfqenc,
  m1cfqmon, m1cfqpwt, m1cfqrsp, m1cfqrst, m1cfqwtc, m1dis, m1hung,
  m1lim, m1picky, m1rest, m1zsav, m1zsweet)

# Find the optimal number of trees using Pearson/polyserial correlation
library(gbm)
library(polycor)
mydata = data.frame(T = M2WTCON, X = x)
model.num = lm(T ~ 1, data = mydata)
ps.num = dnorm((mydata$T - model.num$fitted) / (summary(model.num))$sigma, 0, 1)
model.den = gbm(T ~ ., data = mydata, shrinkage = 0.0005,
                interaction.depth = 4, distribution = "gaussian",
                n.trees = 20000)
opt = optimize(F.aac.iter, interval = c(1, 20000), data = mydata,
               ps.model = model.den, ps.num = ps.num, rep = 50,
               criterion = "pearson")
best.aac.iter = opt$minimum
best.aac = opt$objective

# Calculate the inverse probability weights
model.den$fitted = predict(model.den, newdata = mydata,
                           n.trees = floor(best.aac.iter), type = "response")
ps.den = dnorm((mydata$T - model.den$fitted) / sd(mydata$T - model.den$fitted), 0, 1)
weight.gbm = ps.num / ps.den

# Outcome analysis using the survey package
library(survey)
dataset = data.frame(earlydiet, M2WTCON, weight.gbm)
design.b = svydesign(ids = ~1, weights = ~weight.gbm, data = dataset)
fit = svyglm(earlydiet ~ M2WTCON, family = quasibinomial(), design = design.b)
summary(fit)
```

## Appendix B: Cut-off value for AACC

In this section, we provide a heuristic justification for the cut-off value for AACC. In the continuous treatment case, denote a covariate by $X_j$ and the treatment variable by $T$. If $(X_j, T)$ has a bivariate normal distribution and $X_j$ and $T$ are independent, the Fisher-transformed Pearson correlation coefficient $z_j$ has the asymptotic distribution

$$\sqrt{n}\, z_j \xrightarrow{d} N(0,1) \quad \text{as } n \to \infty.$$

On the other hand, if we dichotomize the continuous treatment into a binary treatment with sample sizes $n_1$ for the treatment group and $n_0$ for the control group ($n_1 + n_0 = n$), Cohen's effect size is defined as

$$\Delta_j = \frac{\bar{X}_j^{\mathrm{treated}} - \bar{X}_j^{\mathrm{control}}}{s},$$

where $s$ is the pooled standard deviation. If there is no difference between the two groups, asymptotically

$$\sqrt{\frac{n_1 n_0}{n_1 + n_0}}\, \Delta_j \xrightarrow{d} N(0,1) \quad \text{as } n_1, n_0 \to \infty.$$

Therefore, if the cut-off value for the standardized mean difference is 0.2 in the binary treatment case, the corresponding cut-off for AACC in the continuous treatment case is $0.2\sqrt{\frac{n_1 n_0}{n(n_1+n_0)}}$, which attains its maximum value of 0.1 when $n_1 = n_0 = n/2$. This cut-off is consistent with the conventional effect sizes for the Pearson correlation coefficient $r$ [35]: when $|r| < 0.1$, the effect size is small; when $0.1 < |r| < 0.3$, it is medium; and when $|r| > 0.5$, it is large.
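The derivation can be checked numerically: since $n_1 + n_0 = n$, the cut-off equals $0.2\sqrt{n_1 n_0}/n$, and the product $n_1 n_0$ is maximized at a balanced split. A small Python check:

```python
import numpy as np

n = 100
n1 = np.arange(1, n)                       # treated-group sizes; n0 = n - n1
# 0.2 * sqrt(n1 * n0 / (n * (n1 + n0))) simplifies to 0.2 * sqrt(n1 * n0) / n
cutoff = 0.2 * np.sqrt(n1 * (n - n1)) / n
```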

## Acknowledgments

The project described was supported by Award Number P50DA010075 from the National Institute on Drug Abuse and 5R21 DK082858-02 from the National Institute on Diabetes and Digestive and Kidney Disorders. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institute on Drug Abuse or the National Institutes of Health. We would also like to thank Jennifer Savage Williams and Leann Birch for permission to use the Early Dieting in Girls Study data, which was funded by National Institute of Child Health and Human Development R01 HD32973. The content is solely the responsibility of the authors and does not necessarily represent the official views of NICHD.

## References

1. Lechner M. Program heterogeneity and propensity score matching: an application to the evaluation of active labor market policies. Rev Econ Stat 2002;84:205–20.
2. Imai K, Van Dyk D. Causal inference with general treatment regimes. J Am Stat Assoc 2004;99:854–66.
3. Tchernis R, Horvitz-Lennon M, Normand SL. On the use of discrete choice models for causal inference. Stat Med 2005;24:2197–212.
4. Karwa V, Slavković AB, Donnell ET. Causal inference in transportation safety studies: comparison of potential outcomes and causal diagrams. Ann Appl Stat 2011;5:1428–55.
5. McCaffrey DF, Griffin BA, Almirall D, Slaughter ME, Ramchand R, Burgette LF. A tutorial on propensity score estimation for multiple treatments using generalized boosted models. Stat Med 2013;32:3388–414.
6. Imbens GW. The role of the propensity score in estimating dose-response functions. Biometrika 2000;87:706–10.
7. Kluve J, Schneider H, Uhlendorff A, Zhao Z. Evaluating continuous training programmes by using the generalized propensity score. J R Stat Soc Ser A (Stat Soc) 2012;175:587–617.
8. Hirano K, Imbens GW. The propensity score with continuous treatments. In: Applied Bayesian modeling and causal inference from incomplete-data perspectives, 2004:73–84.
9. Robins J. Association, causation, and marginal structural models. Synthese 1999;121:151–79.
10. Hall P, Wolff RC, Yao Q. Methods for estimating a conditional distribution function. J Am Stat Assoc 1999;94:154–63.
11. Fan J, Yao Q, Tong H. Estimation of conditional densities and sensitivity measures in nonlinear dynamical systems. Biometrika 1996;83:189–206.
12. McCaffrey DF, Ridgeway G, Morral AR. Propensity score estimation with boosted regression for evaluating causal effects in observational studies. Psychol Methods 2004;9:403–25.
13. Robins J, Hernán M, Brumback B. Marginal structural models and causal inference in epidemiology. Epidemiology 2000;11:550–60.
14. Eubank RL. Spline smoothing and nonparametric regression. New York: Marcel Dekker, 1988.
15. Hens N, Aerts M, Molenberghs G. Model selection for incomplete and design-based samples. Stat Med 2006;25:2502–20.
16. Platt RW, Brookhart AM, Cole SR, Westreich D, Schisterman EF. An information criterion for marginal structural models. Stat Med 2012;32:1383–93.
17. Breiman L, Friedman J, Stone C, Olshen R. Classification and regression trees. Belmont, CA: Chapman & Hall/CRC, 1984.
18. Székely GJ, Rizzo ML. Brownian distance covariance. Ann Appl Stat 2009;3:1236–65.
19. Székely GJ, Rizzo ML, Bakirov NK. Measuring and testing dependence by correlation of distances. Ann Stat 2007;35:2769–94.
20. Olsson U. Maximum likelihood estimation of the polychoric correlation coefficient. Psychometrika 1979;44:443–60.
21. Olsson U, Drasgow F, Dorans NJ. The polyserial correlation coefficient. Psychometrika 1982;47:337–47.
22. Bühlmann P, Yu B. Boosting with the L2 loss: regression and classification. J Am Stat Assoc 2003;98:324–39.
23. Neumark-Sztainer D, Wall M, Story M, Standish AR. Dieting and unhealthy weight control behaviors during adolescence: associations with 10-year changes in body mass index. J Adolesc Health 2012;50:80–6.
24. Fisher JO, Birch LL. Eating in the absence of hunger and overweight in girls from 5 to 7 y of age. Am J Clin Nutr 2002;76:226–31.
25. Birch LL, Fisher JO. Mothers' child-feeding practices influence daughters' eating and weight. Am J Clin Nutr 2000;71:1054–61.
26. Kang JDY, Schafer JL. Demystifying double robustness: a comparison of alternative strategies for estimating a population mean from incomplete data. Stat Sci 2007;22:523–39.
27. Drake C. Effects of misspecification of the propensity score on estimators of treatment effect. Biometrics 1993;49:1231–6.
28. Lunceford JK, Davidian M. Stratification and weighting via the propensity score in estimation of causal treatment effects: a comparative study. Stat Med 2004;23:2937–60.
29. Schafer JL, Kang J. Average causal effects from nonrandomized studies: a practical guide and simulated example. Psychol Methods 2008;13:279–313.
30. Brookhart MA, van der Laan MJ. A semiparametric model selection criterion with applications to the marginal structural model. Comput Stat Data Anal 2006;50:475–98.
31. Galdo JC, Smith J, Black D. Bandwidth selection and the estimation of treatment effects with unbalanced data. Annales d'Economie et de Statistique 2008;91–92:189–216.
32. Hainmueller J. Entropy balancing for causal effects: a multivariate reweighting method to produce balanced samples in observational studies. Political Anal 2012;20:25–46.
33. Imai K, Ratkovic M. Covariate balancing propensity score. J R Stat Soc Ser B (Stat Methodol) 2014;76:243–63.
34. Harder VS, Stuart EA, Anthony JC. Propensity score techniques and the assessment of measured covariate balance to test causal associations in psychological research. Psychol Methods 2010;15:234.
35. Cohen J. Statistical power analysis for the behavioral sciences. New York: Psychology Press, 1988.

## About the article

Published Online: 2014-08-29

Published in Print: 2015-03-01

Research funding: National Institute on Drug Abuse (Grant/Award Number: “P50DA010075”) National Institute on Diabetes and Digestive and Kidney Disorders (Grant/Award Number: “5R21 DK082858-02”).

Citation Information: Journal of Causal Inference, ISSN (Online) 2193-3685, ISSN (Print) 2193-3677.
