Randomized experiments provide the analyst with the opportunity to achieve unbiased estimation of causal effects. Unbiasedness is an important statistical property, entailing that the expected value of an estimator is equal to the true parameter of interest. Randomized experiments are often justified by the fact that they facilitate unbiased estimation of the average treatment effect (ATE). However, this unbiasedness is undermined when the analyst uses an inappropriate analytical tool.
Many statistical methods commonly used to estimate ATEs are biased and sometimes even inconsistent. Contrary to much conventional wisdom [1–3], even when all units have the same probability of entering treatment, the difference-in-means estimator is biased when clustering in treatment assignment occurs [4, 5]. In fact, unless the number of clusters grows with N, the difference-in-means estimator is not generally consistent for the ATE. Similarly, in experiments with heterogeneous probabilities of treatment assignment, the inverse probability weighted (IPW) difference-in-means estimator is not generally unbiased. It is perhaps better known that covariate adjustment with ordinary least squares is biased for the analysis of randomized experiments under complete randomization [6–9]. Ordinary least squares is, in fact, even inconsistent when fixed effects are used to control for heterogeneous probabilities of treatment assignment [10, 11]. In addition, Rosenbaum’s approach for testing and interval estimation relies on strong functional form assumptions (e.g., additive constant effects), which may lead to misleading inferences when such assumptions are violated.
In this article, we draw on classical sampling theory to develop and present an alternative approach that is always unbiased for the ATE (both asymptotically and with finite N), regardless of the clustering structure of treatment assignment, the probabilities of entering treatment, or the functional form of treatment effects. This alternative also allows for covariate adjustment, again without risk of bias. We develop a generalized difference estimator that allows analysts to utilize any model for outcomes in order to reduce sampling variability. This difference estimator, which requires either prior information or statistical independence of some units’ treatment assignment (including, e.g., blocked randomization, paired randomization, or auxiliary studies), also confers other desirable statistical properties, including location invariance. We also develop estimators of the sampling variability of our estimators that are guaranteed to have a nonnegative bias whenever the difference estimator relies on prior information. These results extend those of Middleton and Aronow, which provides unbiased estimators for experiments with complete randomization of clusters, including linear covariate adjustment.
Unbiasedness may not be the statistical property that analysts are most interested in. For example, analysts may choose an estimator with lower root mean squared error (RMSE) over one that is unbiased. However, in the realm of randomized experiments, where many small experiments may be performed over time, unbiasedness is particularly important. Results from unbiased but relatively inefficient estimators may be preferable when researchers seek to aggregate knowledge from many studies, as estimates reported from biased estimators may be systematically skewed in one direction. Furthermore, clarifying the conditions under which unbiasedness will occur is an important enterprise. The class of estimators developed here is theoretically important, as it provides sufficient conditions for estimator unbiasedness.
This article proceeds as follows. In section 2, we provide a literature review of related work. In section 3, we detail the Neyman–Rubin causal model and define the causal quantity of interest. In section 4, we provide an unbiased estimator of the ATE and contrast it with other estimators in two common situations. In section 5, we develop the generalized difference estimator of the ATE, which incorporates covariate adjustment. In section 6, we define the sampling variance of our estimators and derive conservative estimators thereof. In section 7, we provide a simple illustrative numerical example. In section 8, we discuss practical implications of our findings.
2 Related literature
Our work follows in the tradition of sampling theoretic causal inference founded by Neyman. In recent years, this framework has gained prominence, first with the popularization of a model of potential outcomes [15, 16], and then notably with Freedman’s [6, 7] work on the bias of the regression estimator for the analysis of completely randomized experiments. The methods derived here relate to the design-based paradigm associated with two often disjoint literatures: that of survey sampling and that of causal inference. We discuss these two literatures in turn.
2.1 Design-based and model-assisted survey sampling
Design-based survey sampling finds its roots in Neyman, later formalized by Godambe and contemporaries (see [18, 19] for lucid discussions of the distinction between design-based and model-based survey sampling). The design-based survey sampling literature grounds results in the first principles of classical sampling theory, without making parametric assumptions about the response variable of interest (which is instead assumed to be fixed before randomization). All inference is predicated on the known randomization scheme. In this context, Horvitz and Thompson derive the workhorse IPW estimator for design-based estimation upon which our results will be based. Refinements in the design-based tradition have largely focused on variance control; early important examples include Des Raj’s difference estimator and Hajek’s ratio estimator. Many textbooks (e.g., [23, 24]) on survey sampling relate this classical treatment.
The model-assisted mode of inference combines features of the model-based and design-based approaches. Here, modeling assumptions for the response variable are permitted, but estimator validity is judged by performance from a design-based perspective. In this tradition, estimators are considered admissible if and only if they are consistent by design [26, 27]. Model-assisted estimators include many variants of regression (see, e.g., Ref. 28, ch. 7) or weighting estimators, and, perhaps most characteristically, estimators that combine both approaches (e.g., the generalized regression estimators described in Ref. 25). Our proposed estimators fall into the model-assisted mode of inference: unbiasedness is ensured by design, but models may be used to improve efficiency.
2.2 Design-based causal inference
The design-based paradigm in causal inference may be traced to Neyman, which considers finite-population sampling theoretic inference for randomized experiments. Neyman established a model of potential outcomes (detailed in Section 3), derived the sampling variability of the difference-in-means estimator for completely randomized experiments (defined in Section 4.1.1) and proposed two conservative variance estimators. Imbens and Rubin [30, ch. 6] relate a modern treatment of Neyman’s approach and Freedman, Pisani and Purves [31, A32–A34] elegantly derive Neyman’s conservative variance estimators.
Rubin repopularized a model of potential outcomes for statisticians and social scientists, though much of the associated work using potential outcomes falls into the model-based paradigm (i.e., it hypothesizes stochasticity beyond the experimental design). Although there exists a large body of research on causal inference in the model-based paradigm (i.e., sampling from a superpopulation) – textbook treatments can be found in, e.g., Morgan and Winship, Angrist and Pischke and Hernan and Robins – we focus our discussion on research in the Neyman-style, design-based paradigm.1
Freedman [6, 7, 36] rekindled interest in the design-based analysis of randomized experiments. Freedman raises major issues posed by regression analysis as applied to completely randomized experiments, including efficiency, bias, and variance estimation. Lin and Miratrix, Sekhon and Yu address these concerns by respectively proposing alternative regression-based and post-stratification-based estimators that are both at least as asymptotically efficient as the unadjusted estimator (and, in fact, the post-stratification-based estimator may be shown to be a special case of the regression-based estimator that Lin proposes). Turning to the issue of bias, Miratrix, Sekhon and Yu are also able to demonstrate that, for many experimental designs – including the completely randomized experiment – the post-stratification-based estimator is conditionally unbiased. Our contribution is to propose a broad class of unbiased estimators that are applicable to any experimental design while still permitting covariate adjustment.
Variance identification and conservative variance estimation for completely randomized and pair-randomized experiments are considered by Robins and Imai, respectively, each showing how inferences may differ when a superpopulation is hypothesized. Samii and Aronow and Lin demonstrate that, in the case of completely randomized experiments, heteroskedasticity-robust variance estimates are conservative, and Lin demonstrates that such estimates provide a basis for asymptotic inference under a normal approximation. Our article extends this prior work by proposing a new Horvitz–Thompson-based variance estimator that is conservative for any experimental design, though additional regularity conditions would be required for use in constructing confidence intervals and hypothesis tests.
Finally, we note increased attention to the challenges of analyzing cluster-randomized experiments under the design-based paradigm, as evidenced in Refs. [4, 5, 40, 41]. Middleton notes the bias of regression estimators for the analysis of cluster-randomized designs with complete randomization of clusters. Hansen and Bowers propose innovative model-assisted estimators that, as in this article, allow for the regression fitting of outcomes, though conditions for unbiasedness are not established, nor are the results generalized to alternative experimental designs. Imai, King and Nall propose that pair matching is “essential” for cluster-randomized experiments at the design stage and derive associated design-based estimators and conservative variance estimators. Middleton and Aronow propose Horvitz–Thompson-type unbiased estimators (including linear covariate adjustment), along with multiple variance estimators, for experiments with complete randomization of clusters. Our article accommodates cluster-randomized designs, as well as any nonstandard design that might be imposed by the researcher.
3 Neyman–Rubin causal model
We begin by detailing the Neyman–Rubin nonparametric model of potential outcomes [16, 49], which serves as the basis of our estimation approach. Define a binary treatment indicator $D_i$ for units such that $D_i = 1$ when unit i receives the treatment and $D_i = 0$ otherwise.2 If the stable unit treatment value assumption holds, let $y_{1i}$ be the potential outcome if unit i is exposed to the treatment, and let $y_{0i}$ be the potential outcome if unit i is not exposed to the treatment. The observed outcome $Y_i$ may be expressed as a function of the potential outcomes and the treatment:

$$Y_i = D_i\, y_{1i} + (1 - D_i)\, y_{0i}. \tag{1}$$
The causal effect of the treatment on unit i, $\tau_i$, is defined as the difference between the two potential outcomes for unit i:

$$\tau_i = y_{1i} - y_{0i}.$$
The ATE, denoted by $\tau$, is defined as the average value of $\tau_i$ over all units i. In the Neyman–Rubin model, the only random component is the allocation of units to treatment and control groups.
Since the ATE

$$\tau = \frac{1}{N}\sum_{i=1}^{N} \tau_i = \frac{1}{N}\left(Y_1 - Y_0\right),$$

where $Y_1 = \sum_{i=1}^{N} y_{1i}$ is the sum of the potential outcomes if in the treatment condition and $Y_0 = \sum_{i=1}^{N} y_{0i}$ is the sum of the potential outcomes if in the control condition, an estimator of $\tau$ can be constructed using estimators of $Y_1$ and $Y_0$:

$$\widehat{\tau} = \frac{1}{N}\left(\widehat{Y_1} - \widehat{Y_0}\right),$$

where $\widehat{Y_1}$ is the estimated sum of potential outcomes under treatment and $\widehat{Y_0}$ is the estimated sum of potential outcomes under control.
Formally, the bias of an estimator is the difference between the expected value of the estimator and the true parameter of interest; an estimator is unbiased if this difference is equal to zero. If the estimators $\widehat{Y_1}$ and $\widehat{Y_0}$ are unbiased, the corresponding estimator of $\tau$ is also unbiased, since

$$E[\widehat{\tau}] = \frac{1}{N}\left(E\left[\widehat{Y_1}\right] - E\left[\widehat{Y_0}\right]\right) = \frac{1}{N}\left(Y_1 - Y_0\right) = \tau.$$
In the following sections, we demonstrate how to derive unbiased estimators of $Y_1$ and $Y_0$ and, in so doing, derive unbiased estimators of $\tau$.
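To fix ideas, the model can be written out in a short Python sketch; the potential-outcome values and variable names below are our own illustrative choices, not the article's:

```python
import numpy as np

# Hypothetical potential outcomes for N = 4 units (illustrative values only).
y1 = np.array([3.0, 1.0, 4.0, 2.0])   # y_1i: outcome if unit i is treated
y0 = np.array([1.0, 1.0, 2.0, 0.0])   # y_0i: outcome if unit i is untreated

tau = np.mean(y1 - y0)                # the ATE: average unit-level effect

# Only the assignment vector d is random; given d, the observed outcome
# follows the switching equation Y_i = D_i*y_1i + (1 - D_i)*y_0i.
d = np.array([1, 0, 0, 1])
y_obs = d * y1 + (1 - d) * y0
```

For any given assignment, exactly one potential outcome per unit is revealed; the other remains counterfactual.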
4 Unbiased estimation of ATEs
Define N as the number of units in the study, $\pi_{1i}$ as the probability that unit i is selected into treatment, and $\pi_{0i}$ as the probability that unit i is selected into control. We assume that, $\forall i$, $\pi_{1i} > 0$ and $\pi_{0i} > 0$, or that there is a nonzero probability that each unit will be selected into treatment and that there is a nonzero probability that each unit will be selected into control. (When all units are assigned to either treatment or control, $\pi_{0i} = 1 - \pi_{1i}$.) To derive an unbiased estimator of the ATE, we first posit estimators of $Y_1$ and $Y_0$. Define the Horvitz–Thompson estimator of $Y_1$,

$$\widehat{Y_1} = \sum_{i=1}^{N} \frac{D_i\, y_{1i}}{\pi_{1i}},$$
and, similarly, define the Horvitz–Thompson estimator of $Y_0$,

$$\widehat{Y_0} = \sum_{i=1}^{N} \frac{(1 - D_i)\, y_{0i}}{\pi_{0i}}.$$
From eq. , it follows that we may construct an unbiased estimator of $\tau$:

$$\widehat{\tau}_{HT} = \frac{1}{N}\left(\widehat{Y_1} - \widehat{Y_0}\right).$$
We refer to this estimator of the ATE as the HT estimator. The HT estimator is subject to two major limitations. First, as proved in Appendix A, the estimator is not location invariant. By location invariance, we mean that, for a linear transformation of the data,

$$y^*_{1i} = a + b\, y_{1i}, \qquad y^*_{0i} = a + b\, y_{0i},$$

where $a$ and $b$ are constants, the estimator based on the original data, $\widehat{\tau}$, relates to the estimator computed based on the transformed data, $\widehat{\tau}^{\,*}$, in the following way:

$$\widehat{\tau}^{\,*} = b\, \widehat{\tau}.$$
Failure of location invariance is an undesirable property because it implies that rescaling the data (e.g., recoding a binary outcome variable) can substantively alter the estimate that we compute based on the data. Second, $\widehat{\tau}_{HT}$ does not account for covariate information, and so may be imprecise relative to estimators that incorporate additional information. We address both of these issues in Section 5.
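As a concrete illustration, the HT estimator takes only a few lines of Python; the function name and the toy potential outcomes are our own assumptions. Enumerating every possible assignment of a small design confirms that the estimator's expectation equals the true ATE:

```python
import numpy as np
from itertools import combinations

def ht_ate(y, d, p1, p0):
    """Horvitz-Thompson estimator of the ATE: IPW estimates of the
    potential-outcome totals, differenced and scaled by 1/N."""
    y, d, p1, p0 = map(np.asarray, (y, d, p1, p0))
    y1_total_hat = np.sum(d * y / p1)        # estimated total under treatment
    y0_total_hat = np.sum((1 - d) * y / p0)  # estimated total under control
    return (y1_total_hat - y0_total_hat) / len(y)

# Exact unbiasedness check: N = 4 units, exactly 2 treated, so every
# unit has pi_1i = 1/2 and pi_0i = 1/2.
y1 = np.array([1.0, 2.0, 3.0, 4.0])  # illustrative potential outcomes
y0 = np.array([0.0, 1.0, 1.0, 2.0])
p1 = np.full(4, 0.5)
ests = []
for treated in combinations(range(4), 2):
    d = np.isin(np.arange(4), treated).astype(float)
    ests.append(ht_ate(d * y1 + (1 - d) * y0, d, p1, 1 - p1))
```

Averaging `ests` over the six equally likely assignments recovers the true ATE, `np.mean(y1 - y0)`, exactly.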
4.1 Special cases
The HT estimator is unbiased for all designs, but we will now demonstrate what the estimator reduces to under two common designs. In the first, a fixed number of units is selected for inclusion in the treatment group, each with equal probability. In the second, we consider a case where whole clusters of units are selected into treatment.
4.1.1 Complete random assignment of units into treatment
Consider a design where a fixed number n of the N units is selected for inclusion in the treatment group, each with equal probability $\pi_{1i} = n/N$. The associated estimator is

$$\widehat{\tau}_{HT} = \frac{1}{N}\left(\sum_{i=1}^{N} \frac{D_i\, y_{1i}}{n/N} - \sum_{i=1}^{N} \frac{(1 - D_i)\, y_{0i}}{(N-n)/N}\right) = \frac{1}{n}\sum_{i=1}^{N} D_i\, y_{1i} - \frac{1}{N-n}\sum_{i=1}^{N} (1 - D_i)\, y_{0i}.$$
Thus, for the special case where n of N units are selected into treatment, the HT estimator reduces to the difference-in-means estimator: the average outcome among treatment units minus the average outcome among control units. While the difference-in-means estimator is not generally unbiased for all equal-probability designs, it is unbiased for the numerous experiments that use this particular design.
4.1.2 Complete random assignment of clusters into treatment
Consider a design where a fixed number m of the M clusters is selected for inclusion in the treatment, each with equal probability $m/M$. Letting $y^*_{1k}$ and $y^*_{0k}$ denote the totals of the treatment and control potential outcomes in cluster k, and $D^*_k$ the cluster’s treatment indicator, the associated estimator is

$$\widehat{\tau}_{HT} = \frac{1}{N}\left(\frac{M}{m}\sum_{k=1}^{M} D^*_k\, y^*_{1k} - \frac{M}{M-m}\sum_{k=1}^{M} (1 - D^*_k)\, y^*_{0k}\right).$$
Contrast this estimator with the estimator of Section 4.1.1. A key insight is that the cluster-level estimator does not reduce to the difference-in-means estimator. In fact, the difference-in-means estimator may be biased for cluster randomized experiments. Moreover, since the difference-in-means estimator is algebraically equivalent to simple linear regression, regression will likewise be biased for cluster randomized designs.
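The contrast can be verified by exact enumeration. The following Python sketch, with made-up potential outcomes and three clusters of unequal size, recovers the ATE on average with the HT estimator while the difference-in-means estimator does not:

```python
import numpy as np

# Three clusters of unequal size; exactly one cluster is treated,
# so each unit's treatment probability is 1/3.
clusters = [[0], [1, 2], [3, 4, 5]]
y1 = np.array([2.0, 1.0, 1.0, 3.0, 0.0, 3.0])  # illustrative values
y0 = np.array([1.0, 1.0, 0.0, 1.0, 0.0, 1.0])
tau = np.mean(y1 - y0)                          # true ATE = 1.0

ht_ests, dim_ests = [], []
for k in range(3):                              # enumerate all 3 assignments
    d = np.zeros(6)
    d[clusters[k]] = 1
    y = d * y1 + (1 - d) * y0
    # HT estimator with pi_1i = 1/3 and pi_0i = 2/3 for every unit
    ht = (np.sum(d * y) / (1 / 3) - np.sum((1 - d) * y) / (2 / 3)) / 6
    ht_ests.append(ht)
    dim_ests.append(y[d == 1].mean() - y[d == 0].mean())
```

The mean of `ht_ests` equals `tau` exactly, while the mean of `dim_ests` does not: with unequal cluster sizes, the difference-in-means estimator implicitly reweights units and is biased.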
5 Unbiased covariate adjustment
Regression for covariate adjustment is generally biased under the Neyman–Rubin causal model [6, 7]. In contrast, a generalized estimator can be constructed to obtain unbiased covariate adjustment. We draw upon the concept of difference estimation, which sampling theorists have been using since (at least) Des Raj. The primary insight in constructing our estimators is that we need only construct unbiased estimators of the totals under the treatment and control conditions in order to construct an unbiased estimator of the ATE. Unlike Rosenbaum, we make no assumptions about the structure (e.g., additivity) of treatment effects.
Continuing with the above-defined notion of estimating totals, we can consider the following class of estimators,

$$\widehat{Y_1^G} = \sum_{i=1}^{N} \frac{D_i\,\left[y_{1i} - f(\mathbf{x}_i, \boldsymbol{\beta}_i)\right]}{\pi_{1i}} + \sum_{i=1}^{N} f(\mathbf{x}_i, \boldsymbol{\beta}_i), \qquad \widehat{Y_0^G} = \sum_{i=1}^{N} \frac{(1 - D_i)\,\left[y_{0i} - f(\mathbf{x}_i, \boldsymbol{\beta}_i)\right]}{\pi_{0i}} + \sum_{i=1}^{N} f(\mathbf{x}_i, \boldsymbol{\beta}_i),$$

where $f(\mathbf{x}_i, \boldsymbol{\beta}_i)$ is a predetermined real-valued function of pretreatment covariate vector $\mathbf{x}_i$ and of parameter vector $\boldsymbol{\beta}_i$.3 In inspecting $\widehat{Y_1^G}$, one intuition is that if $f(\mathbf{x}_i, \boldsymbol{\beta}_i)$ predicts the value of $y_{1i}$, then $y_{1i}$ and $f(\mathbf{x}_i, \boldsymbol{\beta}_i)$ will be correlated across units in the study population, and the two weighted sums will be correlated across randomizations, thus yielding $\mathrm{Var}\!\left(\widehat{Y_1^G}\right) < \mathrm{Var}\!\left(\widehat{Y_1}\right)$. The same intuition holds for $\widehat{Y_0^G}$, so that both estimators will typically have precision gains.
There are many options for choosing $f(\mathbf{x}_i, \boldsymbol{\beta}_i)$. One option is a linear relationship between $\mathbf{x}_i$ and $y_i$: $f(\mathbf{x}_i, \boldsymbol{\beta}_i) = \mathbf{x}_i' \boldsymbol{\beta}_i$. Similarly, if the relationship were thought to follow a logistic function (for binary $y_i$), $f(\mathbf{x}_i, \boldsymbol{\beta}_i) = \left(1 + e^{-\mathbf{x}_i' \boldsymbol{\beta}_i}\right)^{-1}$. While the choice of $f(\mathbf{x}_i, \boldsymbol{\beta}_i)$ may be relevant for efficiency, it has no bearing on the unbiasedness of the estimator, so long as the choice is determined prior to examining the data.
We may now define the generalized difference estimator:

$$\widehat{\tau}_{G} = \frac{1}{N}\left(\widehat{Y_1^G} - \widehat{Y_0^G}\right).$$
The generalized difference estimator both confers location invariance (as demonstrated in Appendix C) and, very often, decreased sampling variability. $\widehat{\tau}_{G}$ is equivalent to the Horvitz–Thompson estimator minus an adjustment term:

$$\widehat{\tau}_{G} = \widehat{\tau}_{HT} - \frac{1}{N}\left(\sum_{i=1}^{N} \frac{D_i\, f(\mathbf{x}_i, \boldsymbol{\beta}_i)}{\pi_{1i}} - \sum_{i=1}^{N} \frac{(1 - D_i)\, f(\mathbf{x}_i, \boldsymbol{\beta}_i)}{\pi_{0i}}\right).$$
The adjustment accounts for the fact that some samples will show imbalance on $f(\mathbf{x}_i, \boldsymbol{\beta}_i)$. As we prove in Appendix B, a sufficient condition for $\widehat{\tau}_{G}$ to be unbiased is that, for all i, $\mathrm{Cov}\!\left(D_i, f(\mathbf{x}_i, \boldsymbol{\beta}_i)\right) = 0$. The simplest way for this assumption to hold is for $\boldsymbol{\beta}_i$ to be derived from an auxiliary or prior source, but we examine this selection process further in Section 5.1.4
5.1 Deriving $\boldsymbol{\beta}_i$ while preserving unbiasedness
If $\boldsymbol{\beta}_i$ is derived from the data, unbiasedness is not guaranteed because the value of $f(\mathbf{x}_i, \boldsymbol{\beta}_i)$ can depend on the particular randomization, and thus $\mathrm{Cov}\!\left(D_i, f(\mathbf{x}_i, \boldsymbol{\beta}_i)\right)$ is not generally equal to zero. Formally, the estimator will generally be biased because

$$E\!\left[\sum_{i=1}^{N} \frac{D_i\, f(\mathbf{x}_i, \boldsymbol{\beta}_i)}{\pi_{1i}}\right] = \sum_{i=1}^{N} E\!\left[f(\mathbf{x}_i, \boldsymbol{\beta}_i)\right] + \sum_{i=1}^{N} \frac{\mathrm{Cov}\!\left(D_i, f(\mathbf{x}_i, \boldsymbol{\beta}_i)\right)}{\pi_{1i}} \neq \sum_{i=1}^{N} E\!\left[f(\mathbf{x}_i, \boldsymbol{\beta}_i)\right].$$
Consider two ways that one might derive $\boldsymbol{\beta}_i$ that satisfy $\mathrm{Cov}\!\left(D_i, f(\mathbf{x}_i, \boldsymbol{\beta}_i)\right) = 0$. The simplest is to assign a fixed value to $\boldsymbol{\beta}_i$, so that $f(\mathbf{x}_i, \boldsymbol{\beta}_i)$ has no variance, and thus no covariance with $D_i$. Assigning a fixed value to $\boldsymbol{\beta}_i$ requires using a prior insight or an auxiliary dataset. The choice of $\boldsymbol{\beta}_i$ may be suboptimal and, if chosen poorly, may increase the variance of the estimate, but, so long as the analyst does not use the data at hand in forming a judgment, there will be no consequence for bias. In fact, as we demonstrate in Section 6, this approach – where $\boldsymbol{\beta}_i$ is constant across randomizations – will provide benefits for variance estimation.
Following the basic logic of Williams, a second option, only possible in some studies, is to exploit the fact that for some units $j \neq i$, $D_j \perp D_i$. Recall from (1) that $Y_j = D_j\, y_{1j} + (1 - D_j)\, y_{0j}$, where $y_{1j}$ and $y_{0j}$ are constants. Since the only stochastic component of $Y_j$ is $D_j$, if $D_j \perp D_i$, then $Y_j \perp D_i$. If $\boldsymbol{\beta}_i$ is a function of any or all of the elements of the set $\{(Y_j, \mathbf{x}_j) : D_j \perp D_i\}$ and no other random variables, then $\boldsymbol{\beta}_i \perp D_i$. It follows that $f(\mathbf{x}_i, \boldsymbol{\beta}_i) \perp D_i$ and therefore $\mathrm{Cov}\!\left(D_i, f(\mathbf{x}_i, \boldsymbol{\beta}_i)\right) = 0$.5 There are many studies where this option is available. Consider first an experiment where units are independently assigned to treatment. Without loss of generality, let us assume that the analyst chooses $f(\cdot)$ to be linear. The analyst can then derive a parameter vector $\boldsymbol{\beta}_i$ for each unit i in the following way: for each i, the analyst could perform an ordinary least squares regression of the outcome on covariates for all units except for unit i. Another example where the second option could be used is a block-randomized experiment. In a block-randomized experiment, units are partitioned into multiple groups, with randomization only occurring within partitions. Since the treatment assignment processes in different partitions are independent, $D_j \perp D_i$ for all $(i, j)$ such that i and j are in separate blocks. To derive $\boldsymbol{\beta}_i$ for each i, the analyst could then use a regression of outcomes on covariates including all units not in unit i’s block. Unmeasured block specific effects may cause efficiency loss, but would not lead to bias.
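The block-randomized recipe can be sketched as follows; the design (two blocks of four units, two treated per block), the linear outcome model, and all names are our own illustrative assumptions:

```python
import numpy as np
from itertools import combinations

# Two blocks of 4 units; 2 of 4 treated within each block, so pi_1i = 1/2.
blocks = [np.arange(0, 4), np.arange(4, 8)]
x = np.array([0.0, 1.0, 2.0, 3.0, 0.0, 1.0, 2.0, 3.0])  # covariate
y1 = 2 * x + 1.0                                        # illustrative
y0 = 2 * x                                              # potential outcomes

def gd_estimate(d):
    """Generalized difference estimator with f linear in x, where each
    unit's coefficients are fit using only units from the *other* block,
    so that beta_i is independent of D_i."""
    y = d * y1 + (1 - d) * y0
    f = np.empty(8)
    for b, other in ((0, 1), (1, 0)):
        coef = np.polyfit(x[blocks[other]], y[blocks[other]], 1)
        f[blocks[b]] = np.polyval(coef, x[blocks[b]])
    r = y - f                                   # residuals from f
    return (np.sum(d * r) / 0.5 - np.sum((1 - d) * r) / 0.5) / 8

ests = []
for t1 in combinations(range(0, 4), 2):         # enumerate all 36
    for t2 in combinations(range(4, 8), 2):     # within-block assignments
        d = np.zeros(8)
        d[list(t1 + t2)] = 1
        ests.append(gd_estimate(d))
```

Because each unit's fitted values depend only on the other block's data, the mean of `ests` over all 36 equally likely assignments equals the true ATE (here 1.0) despite the data-derived adjustment.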
Note that there exists a special case where $\boldsymbol{\beta}_i$ may be derived from all of the observed data without any consequence for bias. If there is no treatment effect whatsoever, then, for all units i, $Y_i = y_{1i} = y_{0i}$ will be constant across randomizations, and thus $\boldsymbol{\beta}_i$ will have no variance (and thus no covariance with any random variables). This point will have greater importance in the following section, where we derive expressions for, and develop estimators of, the sampling variance of the proposed ATE estimators.
6 Sampling variance
In this section, we provide expressions for the sampling variance of the HT estimator and the generalized difference estimator. We then derive conservative estimators of these sampling variances.
6.1 Sampling variance of the HT estimator
We begin by deriving the sampling variance of the HT estimator:

$$\mathrm{Var}\!\left(\widehat{\tau}_{HT}\right) = \frac{1}{N^2}\left[\mathrm{Var}\!\left(\widehat{Y_1}\right) + \mathrm{Var}\!\left(\widehat{Y_0}\right) - 2\,\mathrm{Cov}\!\left(\widehat{Y_1}, \widehat{Y_0}\right)\right].$$
By Horvitz and Thompson, the variance of $\widehat{Y_1}$ is

$$\mathrm{Var}\!\left(\widehat{Y_1}\right) = \sum_{i=1}^{N} \frac{1-\pi_{1i}}{\pi_{1i}}\, y_{1i}^2 + \sum_{i=1}^{N} \sum_{j \neq i} \frac{\pi_{1i1j} - \pi_{1i}\pi_{1j}}{\pi_{1i}\pi_{1j}}\, y_{1i}\, y_{1j},$$

where $\pi_{1i1j}$ is the probability that units i and j are jointly included in the treatment group. Similarly,

$$\mathrm{Var}\!\left(\widehat{Y_0}\right) = \sum_{i=1}^{N} \frac{1-\pi_{0i}}{\pi_{0i}}\, y_{0i}^2 + \sum_{i=1}^{N} \sum_{j \neq i} \frac{\pi_{0i0j} - \pi_{0i}\pi_{0j}}{\pi_{0i}\pi_{0j}}\, y_{0i}\, y_{0j},$$

where $\pi_{0i0j}$ is the probability that units i and j are jointly included in the control group. An expression for the covariance may be found in Ref. ,

$$\mathrm{Cov}\!\left(\widehat{Y_1}, \widehat{Y_0}\right) = \sum_{i=1}^{N} \sum_{j \neq i} \frac{\pi_{1i0j} - \pi_{1i}\pi_{0j}}{\pi_{1i}\pi_{0j}}\, y_{1i}\, y_{0j} - \sum_{i=1}^{N} y_{1i}\, y_{0i},$$

where $\pi_{1i0j}$ is the probability that unit i is included in the treatment group and unit j is included in the control group.
6.2 Sampling variance of the generalized difference estimator
If the $\boldsymbol{\beta}_i$ are constants, this variance formula is also applicable to the generalized difference estimator. When $\boldsymbol{\beta}_i$ is constant, we may simply redefine the outcome variable, $\tilde{y}_i = Y_i - f(\mathbf{x}_i, \boldsymbol{\beta}_i)$. It follows that $\tilde{y}_{1i} = y_{1i} - f(\mathbf{x}_i, \boldsymbol{\beta}_i)$ and $\tilde{y}_{0i} = y_{0i} - f(\mathbf{x}_i, \boldsymbol{\beta}_i)$. If we rewrite eq. , we can see that the generalized difference estimator is equivalent to the HT estimator applied to $\tilde{y}_i$:

$$\widehat{\tau}_{G} = \frac{1}{N}\left(\sum_{i=1}^{N} \frac{D_i\, \tilde{y}_{1i}}{\pi_{1i}} - \sum_{i=1}^{N} \frac{(1 - D_i)\, \tilde{y}_{0i}}{\pi_{0i}}\right).$$

Therefore, when $\boldsymbol{\beta}_i$ is constant,

$$\mathrm{Var}\!\left(\widehat{\tau}_{G}\right) = \frac{1}{N^2}\left[\mathrm{Var}\!\left(\widehat{\tilde{Y}_1}\right) + \mathrm{Var}\!\left(\widehat{\tilde{Y}_0}\right) - 2\,\mathrm{Cov}\!\left(\widehat{\tilde{Y}_1}, \widehat{\tilde{Y}_0}\right)\right],$$

where $\widehat{\tilde{Y}_1}$ and $\widehat{\tilde{Y}_0}$ are the HT estimators of the totals of the transformed potential outcomes.
Conversely, the HT estimator may be considered a special case of the generalized difference estimator in which $f(\mathbf{x}_i, \boldsymbol{\beta}_i)$ is zero for all units. As we proceed, for notational clarity, we use $y_i$ as the outcome measure, noting that the variances derived will also apply to the generalized difference estimator if $\tilde{y}_i$ is substituted for $y_i$ (along with both associated potential outcomes) and $\boldsymbol{\beta}_i$ is constant.
6.3 Accounting for clustering in treatment assignment
We will rewrite the $\widehat{Y_1}$ and $\widehat{Y_0}$ estimators to account for clustering in treatment assignment. (Our reasons for doing so, while perhaps not obvious now, will become clearer when we derive variance estimators in Section 6.4 and Appendices D and E. While the treatment effect estimators are identical, such a notational switch will allow us to simplify and reduce the bias of our eventual variance estimators.) Note that if, for some units $i \neq j$, $D_i = D_j$ with probability one, then the total estimators may be equivalently rewritten. Define $c_k$ as the set of unit indices i that satisfy $D_i = D^*_k$, where $D^*_k$ is indexed over all M unique random variables in $\{D_1, \ldots, D_N\}$. Define $\pi_{1k}$ as the value of $\pi_{1i},\ \forall i \in c_k$, and $\pi_{0k}$ as the value of $\pi_{0i},\ \forall i \in c_k$. Joint probabilities $\pi_{1k1l}$, $\pi_{1k0l}$, and $\pi_{0k0l}$ are defined analogously. Given these definitions, we can rewrite the HT estimator of the total of treatment potential outcomes as

$$\widehat{Y_1} = \sum_{k=1}^{M} \frac{D^*_k\, y^*_{1k}}{\pi_{1k}},$$

where $y^*_{1k} = \sum_{i \in c_k} y_{1i}$. And, similarly,

$$\widehat{Y_0} = \sum_{k=1}^{M} \frac{(1 - D^*_k)\, y^*_{0k}}{\pi_{0k}}, \qquad \text{where } y^*_{0k} = \sum_{i \in c_k} y_{0i}.$$
In simple language, these estimators now operate over cluster totals as the units of observation. Since the units in a cluster will always be observed together, they can be summed prior to estimation. The equivalency of these totaled and untotaled HT estimators serves as the basis for the estimation approach in Middleton and Aronow. We may now derive variance expressions logically equivalent to those in eqs. , , and :

$$\mathrm{Var}\!\left(\widehat{Y_1}\right) = \sum_{k=1}^{M} \frac{1-\pi_{1k}}{\pi_{1k}}\, y^{*2}_{1k} + \sum_{k=1}^{M} \sum_{l \neq k} \frac{\pi_{1k1l} - \pi_{1k}\pi_{1l}}{\pi_{1k}\pi_{1l}}\, y^*_{1k}\, y^*_{1l},$$

$$\mathrm{Var}\!\left(\widehat{Y_0}\right) = \sum_{k=1}^{M} \frac{1-\pi_{0k}}{\pi_{0k}}\, y^{*2}_{0k} + \sum_{k=1}^{M} \sum_{l \neq k} \frac{\pi_{0k0l} - \pi_{0k}\pi_{0l}}{\pi_{0k}\pi_{0l}}\, y^*_{0k}\, y^*_{0l},$$

$$\mathrm{Cov}\!\left(\widehat{Y_1}, \widehat{Y_0}\right) = \sum_{k=1}^{M} \sum_{l \neq k} \frac{\pi_{1k0l} - \pi_{1k}\pi_{0l}}{\pi_{1k}\pi_{0l}}\, y^*_{1k}\, y^*_{0l} - \sum_{k=1}^{M} y^*_{1k}\, y^*_{0k},$$

where the last term, $\sum_{k=1}^{M} y^*_{1k}\, y^*_{0k}$, sums the product of the two potential-outcome totals for each cluster.
6.4 Conservative variance estimation
Our goal is now to derive conservative variance estimators: although not unbiased, these estimators are guaranteed to have a nonnegative bias.7 We can identify estimators of $\mathrm{Var}\!\left(\widehat{Y_1}\right)$, $\mathrm{Var}\!\left(\widehat{Y_0}\right)$ and $\mathrm{Cov}\!\left(\widehat{Y_1}, \widehat{Y_0}\right)$ that are (weakly) positively, positively and negatively biased, respectively.8 Recalling eq. , the signs of these biases will ensure a nonnegative bias for the overall variance estimator.
First, let us derive an unbiased estimator of $\mathrm{Var}\!\left(\widehat{Y_1}\right)$ under the assumption that, for all pairs $(k, l)$, $\pi_{1k1l} > 0$ and $\pi_{0k0l} > 0$. This assumption is equivalent to assuming that all pairs of units have nonzero probability of being assigned to the same treatment condition. This assumption is violated in, e.g., pair-randomized studies, wherein the joint probability of two units in the same pair being jointly assigned to treatment is zero. We propose the Horvitz and Thompson style estimator,

$$\widehat{\mathrm{Var}}\!\left(\widehat{Y_1}\right) = \sum_{k=1}^{M} D^*_k\, \frac{1-\pi_{1k}}{\pi_{1k}^2}\, y^{*2}_{1k} + \sum_{k=1}^{M} \sum_{l \neq k} D^*_k D^*_l\, \frac{\pi_{1k1l} - \pi_{1k}\pi_{1l}}{\pi_{1k1l}}\, \frac{y^*_{1k}\, y^*_{1l}}{\pi_{1k}\pi_{1l}},$$

which is unbiased by $E[D^*_k] = \pi_{1k}$ and $E[D^*_k D^*_l] = \pi_{1k1l}$.
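Under a design where all joint inclusion probabilities are positive, this estimator can be coded and verified by enumeration. A Python sketch (the complete-randomization example and all names are our own assumptions):

```python
import numpy as np
from itertools import combinations

def ht_var_treated_total(y, d, p1, p11):
    """HT-style estimator of Var(Y1_hat); requires every joint treatment
    probability p11[k, l] > 0. y holds the (cluster-total) outcomes."""
    idx = np.flatnonzero(d)
    v = np.sum((1 - p1[idx]) / p1[idx] ** 2 * y[idx] ** 2)
    for k in idx:
        for l in idx:
            if k != l:
                v += ((p11[k, l] - p1[k] * p1[l]) / p11[k, l]
                      * y[k] * y[l] / (p1[k] * p1[l]))
    return v

# Exact check: complete randomization of n = 2 of N = 4 units, so
# p1 = 1/2 and p11 = n(n-1)/(N(N-1)) = 1/6 for all pairs.
y1 = np.array([1.0, 2.0, 3.0, 4.0])
p1 = np.full(4, 0.5)
p11 = np.full((4, 4), 1 / 6)
totals, var_ests = [], []
for treated in combinations(range(4), 2):
    d = np.isin(np.arange(4), treated).astype(float)
    totals.append(np.sum(d * y1 / p1))            # the estimated total Y1_hat
    var_ests.append(ht_var_treated_total(d * y1, d, p1, p11))
```

Since all joint probabilities are positive, the mean of `var_ests` over the six assignments equals the exact design variance of the estimated total, `np.var(totals)`.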
What if, for some $(k, l)$, $\pi_{1k1l} = 0$? Aronow and Samii prove that the above estimator will be conservative, or non-negatively biased, if, for all k, $y^*_{1k} \geq 0$ (or, alternatively, for all k, $y^*_{1k} \leq 0$). Aronow and Samii provide an alternative conservative estimator of the variance for the general case, where $y^*_{1k}$ may be positive or negative:

$$\widehat{\mathrm{Var}}_{c}\!\left(\widehat{Y_1}\right) = \sum_{k=1}^{M} D^*_k\, \frac{1-\pi_{1k}}{\pi_{1k}^2}\, y^{*2}_{1k} + \mathop{\sum\sum}_{l \neq k:\ \pi_{1k1l} > 0} D^*_k D^*_l\, \frac{\pi_{1k1l} - \pi_{1k}\pi_{1l}}{\pi_{1k1l}}\, \frac{y^*_{1k}\, y^*_{1l}}{\pi_{1k}\pi_{1l}} + \mathop{\sum\sum}_{l \neq k:\ \pi_{1k1l} = 0} \left( D^*_k\, \frac{y^{*2}_{1k}}{2\pi_{1k}} + D^*_l\, \frac{y^{*2}_{1l}}{2\pi_{1l}} \right).$$

By an application of Young’s inequality, $y^*_{1k}\, y^*_{1l} \leq \frac{1}{2}\left(y^{*2}_{1k} + y^{*2}_{1l}\right)$, so that $E\!\left[\widehat{\mathrm{Var}}_{c}\!\left(\widehat{Y_1}\right)\right] \geq \mathrm{Var}\!\left(\widehat{Y_1}\right)$. (An abbreviated proof is presented in Appendix D.) Likewise, a generally conservative estimator of $\mathrm{Var}\!\left(\widehat{Y_0}\right)$ is defined analogously, with $1 - D^*_k$, $\pi_{0k}$, $\pi_{0k0l}$, and $y^*_{0k}$ in place of their treatment-group analogues.
Unfortunately, it is impossible to develop a generally unbiased estimator of the covariance between $\widehat{Y_1}$ and $\widehat{Y_0}$ because $y^*_{1k}$ and $y^*_{0k}$ can never be jointly observed. However, again using Young’s inequality, $-\,y^*_{1k}\, y^*_{0k} \geq -\frac{1}{2}\left(y^{*2}_{1k} + y^{*2}_{0k}\right)$, and we can derive a generally conservative (which is, in this case, nonpositively biased) covariance estimator. In Appendix E, we prove that $E\!\left[\widehat{\mathrm{Cov}}\!\left(\widehat{Y_1}, \widehat{Y_0}\right)\right] \leq \mathrm{Cov}\!\left(\widehat{Y_1}, \widehat{Y_0}\right)$. One important property of this estimator is that, under the sharp null hypothesis of no treatment effect whatsoever, this estimator is unbiased.
The fact that the resulting overall variance estimator is conservative has been established. But also note that when, for all $k \neq l$, $\pi_{1k1l} > 0$ and $\pi_{0k0l} > 0$, and there is no treatment effect whatsoever, the estimator is exactly unbiased. A proof of this statement trivially follows from the fact that, when $y^*_{1k} = y^*_{0k},\ \forall k$, the covariance term reduces to a quantity that is not a random variable.9
7 Illustrative numerical example
In this section, we present an illustrative numerical example to demonstrate the properties of our estimators. This example is designed to be representative of small experiments in the social sciences that may be subject to both clustering and blocking. Consider a hypothetical randomized experiment run on 16 individuals organized into 10 clusters across two blocks. A single prognostic covariate is available, and two clusters in each block are randomized into treatment, with the others randomized into control. In Table 1, we detail the structure of the randomized experiment, including the potential outcomes for each unit. Note that we have assumed no treatment effect whatsoever.
We may now assess the performance of both (average) treatment effect estimators and associated variance estimators, by computing the estimated ATE and variance over all 90 possible randomizations.
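Exact computations of this sort can be organized with a small harness like the following (our own sketch, not the article's code); given the equally likely assignments of any design, an estimator's exact bias and RMSE follow directly:

```python
import numpy as np
from itertools import combinations

def exact_bias_rmse(estimator, assignments, y1, y0):
    """Exact finite-sample bias and RMSE of `estimator(y, d)` over a
    uniform randomization distribution given by `assignments`."""
    tau = np.mean(y1 - y0)
    ests = np.array([estimator(d * y1 + (1 - d) * y0, d)
                     for d in assignments])
    return ests.mean() - tau, np.sqrt(np.mean((ests - tau) ** 2))

# Example: difference in means under complete randomization, no effect.
y1 = y0 = np.array([1.0, 0.0, 2.0, 1.0])
assignments = [np.isin(np.arange(4), t).astype(float)
               for t in combinations(range(4), 2)]
dim = lambda y, d: y[d == 1].mean() - y[d == 0].mean()
bias, rmse = exact_bias_rmse(dim, assignments, y1, y0)
```

Applied to the 90 equally likely assignments of the design in Table 1, the same harness would reproduce the kind of exact bias and RMSE comparisons reported below.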
Let us first consider four traditional, regression-based estimators. The simplest is the IPW difference-in-means estimator, logically equivalent to an IPW least squares regression of the outcome on the treatment indicator. The IPW difference-in-means estimator is consistent if the finite population grows in such a fashion that the WLLN holds, e.g., under independent assignment of units or a growing number of clusters (see the references cited above for a discussion of the consistency of the difference-in-means estimator in the equal-probability, clustered random assignment case). To estimate the variance of this estimator, we use the Huber-White “robust” clustered variance estimator from an IPW least squares regression.
We then examine an alternative regression strategy: ordinary least squares with fixed effects for randomization strata (the “FE” estimator). Under modest regularity conditions, Angrist demonstrates that the fixed effects estimator converges to a reweighted causal effect; in this case, the estimator would be consistent because the treatment effect is constant (zero) across all observations. Similarly, we also use the fixed effects estimator including the covariate in the regression (the “FE (Cov.)” estimator). For both estimators, the associated variance estimator is the Huber-White “robust” clustered variance estimator. Last among the regression estimators is the random effects estimator (the “RE (Cov.)” estimator), as implemented using the lmer() function in the lme4 package in R. Following the general recommendation of Green and Vavreck for cluster randomized experiments, we assume a Gaussian random effect associated with each cluster, fixed effects for randomization strata, and control for the covariate. Variance estimates are empirical Bayes estimates also produced by the lmer() function.
We now examine four cases of the Horvitz–Thompson-based estimators proposed in this article. Referring back to eqs.  and , we first use $\widehat{\tau}_{HT}$ and its associated conservative variance estimator to estimate the ATE and variance. We then use three different forms of the generalized difference estimator. In all cases, we use the general formulations in and , but vary the form and parameters of $f(\mathbf{x}_i, \boldsymbol{\beta}_i)$. In the “G (Prior)” specification, we set $f(\mathbf{x}_i, \boldsymbol{\beta}_i)$ to a fixed constant (a reasonable agnostic choice for a binary outcome), neither varying the fitting function according to observed data nor incorporating information on the covariate. In the “G (Linear)” specification, $f(\mathbf{x}_i, \boldsymbol{\beta}_b) = \beta_{b0} + \beta_{b1} x_i$, where b indicates the block of the unit. For units in block 1, we estimate $\beta_{10}$ and $\beta_{11}$ from an OLS regression of the outcome on the covariate (including an intercept) using only units in block 2. $\beta_{20}$ and $\beta_{21}$ are similarly estimated from an OLS regression using only units in block 1. As detailed in Section 5, this procedure preserves unbiasedness since, for all units, $\mathrm{Cov}\!\left(D_i, f(\mathbf{x}_i, \boldsymbol{\beta}_b)\right) = 0$. In the “G (Logit)” specification, $f(\mathbf{x}_i, \boldsymbol{\beta}_b) = \left(1 + e^{-(\beta_{b0} + \beta_{b1} x_i)}\right)^{-1}$. $\beta_{10}$ and $\beta_{11}$ are now derived from a logistic regression using only the units in block 2 (and vice versa for $\beta_{20}$ and $\beta_{21}$). The G (Logit) specification is also unbiased by the same argument. However, while the variance estimators for the HT-based estimators are not generally unbiased (only conservative), here the variance estimators will be unbiased since the sharp null hypothesis of no treatment effect holds and, for all clusters $k \neq l$, $\pi_{1k1l} > 0$ and $\pi_{0k0l} > 0$.
In Table 2, we demonstrate that the only unbiased estimators are the Horvitz–Thompson-based estimators: all of the regression-based estimators, including variance estimators, are negatively biased. Although the relative efficiency (e.g., RMSE) of each estimator depends on the particular characteristics of the data at hand, the example demonstrates a case wherein the Horvitz–Thompson-based estimators that exploit covariate information have lower RMSE than do the regression-based estimators.
Furthermore, the proposed variance estimators have RMSE on par with the regression-based estimators. However, since the regression-based estimators are negatively biased, researchers may run the risk of systematically underestimating the variance of the estimated ATE when using standard regression estimators. In randomized experiments, where conservative variance estimators are typically preferred, the negative bias of traditional estimators may be particularly problematic.
8 Discussion

The estimators proposed here illustrate a principled and parsimonious approach for unbiased estimation of ATEs in randomized experiments of any design. Our method allows for covariate adjustment, wherein covariates may have any imaginable relationship to the outcome. Conservative variance estimation also flows directly from the design of the study in our framework. Randomized experiments have been justified on the grounds that they create conditions for unbiased causal inference, but the design of the experiment cannot generally be ignored when choosing an estimator. Bias may be introduced by the method of estimation, and even consistency may not be guaranteed.
In this article, we return to the sampling theoretic foundations of the Neyman model to derive unbiased, covariate-adjusted estimators. Sampling theorists developed a sophisticated understanding of the relationship between unbiased estimation and design decades ago. As we demonstrate in this article, applying sampling theoretic insights to the analysis of randomized experiments yields a broad class of intuitive and clear estimators that highlight the design of the experiment.
Appendix A: non-invariance of the Horvitz–Thompson estimator
This proof follows from Ref. 5. To show that the estimator in eq.  is not invariant, let $y^*_{1i}$ be a linear transformation of the treatment outcome for the ith person such that

$$y^*_{1i} = a + b\, y_{1i},$$

and likewise, for the control outcomes,

$$y^*_{0i} = a + b\, y_{0i}.$$
We can demonstrate that the HT estimator is not location invariant because the estimate based on this transformed variable will be

$$\widehat{\tau}^{\,*} = \frac{a}{N}\left(\sum_{i=1}^{N} \frac{D_i}{\pi_{1i}} - \sum_{i=1}^{N} \frac{1 - D_i}{\pi_{0i}}\right) + b\,\widehat{\tau}.$$
Unless , the term on the left does not generally reduce to zero but instead varies across treatment assignments, so eq.  does not generally equal eq.  for a given randomization. Therefore, the HT estimator is not generally location invariant. The equation also reveals that multiplicative scale changes where and (e.g., transforming from feet to inches) need not be of concern. However, a transformation that includes an additive component, such as reverse coding a binary indicator variable ( and ), will lead to a violation of invariance. So, for any given randomization, transforming the data in this way can yield substantively different estimates.
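The violation can be checked numerically. In this sketch (the 60/40 realized assignment, the outcome distributions, and all constants are our own illustrative choices), the gap between the HT estimate of the transformed outcomes and $b$ times the original estimate matches the additive discrepancy term exactly:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100
y1 = rng.normal(5.0, 1.0, N)   # treatment potential outcomes (illustrative)
y0 = rng.normal(4.0, 1.0, N)   # control potential outcomes (illustrative)
pi = np.full(N, 0.5)           # equal probability of entering treatment

def ht_ate(y1, y0, T, pi):
    """Horvitz-Thompson estimate of the ATE for one realized assignment."""
    return (np.sum(T * y1 / pi) - np.sum((1 - T) * y0 / (1 - pi))) / len(T)

T = np.zeros(N)
T[:60] = 1.0                   # a deliberately imbalanced realization: 60 treated, 40 control
a, b = 1.0, -1.0               # reverse coding: an additive plus multiplicative transformation

est = ht_ate(y1, y0, T, pi)
est_t = ht_ate(a + b * y1, a + b * y0, T, pi)

# The discrepancy equals a/N * (sum_i T_i/pi_i - sum_i (1 - T_i)/(1 - pi_i)),
# which is 1/100 * (60*2 - 40*2) = 0.4 for this realization: invariance fails.
extra = a * np.sum(T / pi - (1 - T) / (1 - pi)) / N
assert abs((est_t - b * est) - extra) < 1e-9
```

Because `extra` depends on the realized assignment, different randomizations of the same data shift the transformed estimate by different amounts.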
Appendix B: unbiasedness of the generalized difference estimator
Appendix C: location invariance of the generalized difference estimator
Unlike the HT estimator, the generalized difference estimator is location invariant. If the outcome measure changes such that $\tilde{y}_i(t) = a + b\,y_i(t)$ for $t \in \{0, 1\}$, we assume that the predictive function will also change by the identical transformation, such that $\tilde{\hat{y}}_i(t) = a + b\,\hat{y}_i(t)$.
If we conceptualize $\hat{y}_i(t)$ as a function designed to predict the value of $y_i(t)$, then the intuition behind this transformation is clear; if we change the scaling of the outcome variable, it logically implies that the numerical prediction of the outcome will change accordingly. Under these two transformations, the residuals satisfy $\tilde{y}_i(t) - \tilde{\hat{y}}_i(t) = b\,[y_i(t) - \hat{y}_i(t)]$ and the predicted differences satisfy $\tilde{\hat{y}}_i(1) - \tilde{\hat{y}}_i(0) = b\,[\hat{y}_i(1) - \hat{y}_i(0)]$, so the additive constant $a$ cancels and the estimate based on the transformed data equals $b$ times the estimate based on the original data, as location invariance requires.
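The cancellation can be verified numerically. The sketch below assumes the standard difference-estimator form from sampling theory (the HT estimator applied to residuals plus the average predicted effect); the predictions and all numerical values are fabricated for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 100
x = rng.normal(0.0, 1.0, N)            # a pretreatment covariate (illustrative)
y1 = 5.0 + 2.0 * x + rng.normal(0.0, 1.0, N)   # treatment potential outcomes
y0 = 4.0 + 2.0 * x + rng.normal(0.0, 1.0, N)   # control potential outcomes
pi = np.full(N, 0.5)
yhat1 = 5.0 + 2.0 * x                  # hypothetical predictions from prior information
yhat0 = 4.0 + 2.0 * x

def gd_ate(y1, y0, yhat1, yhat0, T, pi):
    """Difference estimator: HT estimator on residuals plus mean predicted effect."""
    resid = np.sum(T * (y1 - yhat1) / pi) - np.sum((1 - T) * (y0 - yhat0) / (1 - pi))
    return resid / len(T) + np.mean(yhat1 - yhat0)

T = rng.binomial(1, pi).astype(float)
a, b = 1.0, -1.0                       # the same reverse-coding transformation as above

est = gd_ate(y1, y0, yhat1, yhat0, T, pi)
# Transform outcomes AND predictions identically; the additive constant cancels:
est_t = gd_ate(a + b * y1, a + b * y0, a + b * yhat1, a + b * yhat0, T, pi)
assert abs(est_t - b * est) < 1e-9     # invariance holds for this (and every) realization
```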
Appendix D: abbreviated proof of conservative variance estimator
We present an abbreviated proof from Aronow and Samii . Without loss of generality, we prove that will be positively biased for .
By Young’s inequality ($2ab \le a^2 + b^2$ for all real $a$ and $b$),
may be estimated without bias:
by , and eq. . Since , . By inspection, is also conservative.
Examining this estimator reveals why we have totaled clusters prior to estimation of variances. By combining totals, we apply Young’s inequality to all pairs of cluster totals, instead of all cluster-crosswise pairs of individual units. The bounds need only apply to a single totaled quantity, rather than to each of the constituent components. This step therefore will typically reduce the bias of the estimator.
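The benefit of totaling can be illustrated numerically. In this sketch (all values are arbitrary and made up), applying Young's inequality once to two cluster totals yields a weakly tighter bound on the unobservable cross-product than summing unit-level Young bounds over all cross-pairs:

```python
import numpy as np

rng = np.random.default_rng(2)
a = rng.normal(1.0, 1.0, 5)    # hypothetical unit-level quantities in one cluster
b = rng.normal(1.0, 1.0, 5)    # hypothetical unit-level quantities in another cluster
A, B = a.sum(), b.sum()        # cluster totals

cross = A * B                  # the cross-product term that must be bounded
# Young's inequality applied ONCE, to the totals: AB <= (A^2 + B^2) / 2
bound_total = (A**2 + B**2) / 2
# Young's inequality applied to every unit-level cross-pair, then summed
bound_units = sum((ai**2 + bj**2) / 2 for ai in a for bj in b)

assert cross <= bound_total + 1e-12        # the totaled bound is valid
assert bound_total <= bound_units + 1e-12  # and weakly tighter than per-pair bounds
```

The second inequality holds in general: by the Cauchy-Schwarz inequality, $A^2 \le n \sum_i a_i^2$, so bounding a single totaled quantity never does worse than bounding each constituent pair.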
Appendix E: proof of conservative covariance estimator
By Young’s inequality ($2ab \le a^2 + b^2$ for all real $a$ and $b$),
may be estimated without bias:
by , , and eq. . Since , . Unbiasedness under the sharp null hypothesis of no effect is ensured by eq. , where if , . Much as in Appendix D, the bias of the estimator is reduced by totaling clusters prior to estimation. In fact, unbiasedness under the sharp null hypothesis of no effect only holds because we have totaled clusters. Otherwise, the bounds would have to operate over all units, and pairs of units, within each cluster.
Angrist JD, Lavy V. The effect of high school matriculation awards: evidence from randomized trials. NBER Working Paper 9389, 2002.
Green DP, Vavreck L. Analysis of cluster-randomized experiments: a comparison of alternative estimation approaches. Pol Anal 2008;16:138–52.
Middleton JA, Aronow PM. Unbiased estimation of the average treatment effect in cluster-randomized experiments. Working paper. Yale University, 2011.
Humphreys M. Bounds on least squares estimates of causal effects in the presence of heterogeneous assignment probabilities. Working paper, 2009. Available at: http://www.columbia.edu/~mh2245/papers1/monotonicity4.pdf
Rosenbaum PR. Covariance adjustment in randomized experiments and observational studies. Stat Sci 2002;17:286–304.
Samii C, Aronow PM. On equivalencies between design-based and regression-based variance estimators for randomized experiments. Stat Probability Lett 2012;82:365–70.
Godambe VP. A unified theory of sampling from finite populations. J R Stat Soc Ser B Methodol 1955;17:269–78.
Basu D. An essay on the logical foundations of survey sampling, part I. In: Godambe V, Sprott D, editors. Foundations of statistical inference. Toronto, ON: Holt, Rinehart, and Winston, 1971:203–42.
Sarndal C-E. Design-based and model-based inference in survey sampling. Scand J Stat 1978;5:27–52.
Des Raj. On a method of using multi-auxiliary information in sample surveys. J Am Stat Assoc 1965;60:270–77.
Hajek J. Comment on “An essay on the logical foundations of survey sampling, part I.” In: Godambe V, Sprott D, editors. Foundations of statistical inference. Toronto, ON: Holt, Rinehart, and Winston, 1971:236.
Thompson ME. Theory of sample surveys. London: Chapman and Hall, 1997.
Lohr SL. Sampling: design and analysis. Pacific Grove, CA: Duxbury Press, 1999.
Sarndal C-E, Swensson B, Wretman J. Model assisted survey sampling. New York: Springer, 1992.
Cochran WG. Sampling techniques, 3rd ed. New York: Wiley, 1977.
Imbens GW, Rubin D. Causal inference in statistics. Unpublished textbook, 2009.
Freedman DA, Pisani R, Purves RA. Statistics, 3rd ed. New York: Norton, 1998.
Morgan SL, Winship C. Counterfactuals and causal inference: methods and principles for social research. New York: Cambridge University Press, 2007.
Angrist JD, Pischke J-S. Mostly harmless econometrics: an empiricist’s companion. Princeton, NJ: Princeton University Press, 2009.
Hernan M, Robins J. Causal inference. London: Chapman and Hall, in press.
Rosenbaum PR. Observational studies, 2nd ed. New York: Springer, 2002.
Imai K, King G, Nall C. The essential role of pair matching in cluster-randomized experiments, with application to the Mexican universal health insurance evaluation. Stat Sci 2009;24:29–53.
Rosenblum M, van der Laan MJ. Simple, efficient estimators of treatment effects in randomized trials using generalized linear models to leverage baseline variables. Int J Biostat 2010;6:Article 13.
Wood J. On the covariance between related Horvitz-Thompson estimators. J Off Stat 2008;24:53–78.
Aronow PM, Samii C. Conservative variance estimation for sampling designs with zero pairwise inclusion probabilities. Surv Methodol, 2013.
Bates D, Maechler M. lme4: linear mixed-effects models using S4 classes. R package, version 0.999375-37, 2010.
Fisher RA. The design of experiments. Edinburgh: Oliver & Boyd, 1935.
Neyman JS, Dabrowska DM, Speed TP. On the application of probability theory to agricultural experiments: essay on principles, section 9. Stat Sci 1990;5:465–80.
Published Online: 2013-05-29
An alternative design-based tradition, typified by Rosenbaum, permits hypothesis testing, confidence interval construction, and Hodges-Lehmann point estimation via Fisher’s exact test. Although links may be drawn between this Fisherian mode of inference and the Neyman paradigm, the present work is not directly connected to the Fisherian mode of inference.
This assumption is made without loss of generality; multiple discrete treatments (or equivalently, some units not being sampled into either treatment or control) are easily accommodated in this framework. All instances of (1 – Ti) in the text may be replaced by Ci, an indicator variable for whether or not unit i receives the control, with one exception to be noted in Section 6.
An alternative is to also allow to vary between the treatment and control groups, particularly if effect sizes are anticipated to be large. Many of our results will also hold under such a specification, although the conditions for unbiasedness (and conservative variance estimation) will be somewhat more restrictive.
Interestingly (and perhaps unsurprisingly), the difference estimator is quite similar to the double robust (DR) estimator proposed by Robins (and similar estimators, e.g., Ref. 43). The key differences between the DR estimator and the difference estimator are as follows. (a) The DR estimator utilizes estimated, rather than known, probabilities of entering treatment, and thus is subject to bias with finite N. (b) Even if known probabilities of entering treatment were used, the prediction function in the DR estimator is chosen using a regression model, which typically fails to satisfy the restrictions necessary to yield the unbiasedness established in Section 5.1. Thus, the DR estimator is subject to bias with finite N.
More formally and without loss of generality, let , where is a matrix of pretreatment covariates (that may or may not coincide with ), is an arbitrary function (e.g., the least squares fit), and is the function implied by eq. . Since only is a random variable, the random variable equals some function . implies (equivalently ) which, in turn, implies .
If there are multiple treatments, the following simplification cannot be used. Furthermore, the associated estimator in Section 6.4 must apply to eq. , for which the derivation is trivially different.
Although the variance estimator is nonnegatively biased, the associated standard errors may not be (due to Jensen’s inequality) and any particular draw may be above or below the true value of the variance due to sampling variability.
, the variance estimator as applied to , is not generally guaranteed to be conservative. Specifically, when not constant, there is no guarantee that will be conservative, though an analogy to linearized estimators suggests that it should be approximately conservative with large N. Importantly, however, when, for all , and and the sharp null hypothesis of no treatment effect holds, is unbiased for .