Network interference, where the outcome of an individual is affected by the treatment assignment of those in their social network, is pervasive in real-world settings. However, it poses a challenge to estimating causal effects. We consider the task of estimating the total treatment effect (TTE), or the difference between the average outcomes of the population when everyone is treated versus when no one is, under network interference. Under a Bernoulli randomized design, we provide an unbiased estimator for the TTE when network interference effects are constrained to low-order interactions among neighbors of an individual. We make no assumptions on the graph other than bounded degree, allowing for well-connected networks that may not be easily clustered. We derive a bound on the variance of our estimator and show in simulated experiments that it performs well compared with standard estimators for the TTE. We also derive a minimax lower bound on the mean squared error of any estimator, which suggests that the difficulty of estimation is characterized by the degree of interactions in the potential outcomes model. We also prove that our estimator is asymptotically normal under boundedness conditions on the network degree and potential outcomes model. Central to our contribution is a new framework for balancing model flexibility and statistical complexity as captured by this low-order interactions structure.
Accurately estimating causal effects is relevant in numerous applications, from pharmaceutical companies researching the efficacy of a new medication, to policy makers understanding the impact of social welfare programs, to social media companies evaluating the impact of different recommendation algorithms on user engagement across their platforms. Often, the entity interested in understanding a causal effect will design an experiment where they randomly assign subsets of the population to treatment (e.g., new medication) and to control (e.g., a placebo) and draw conclusions based on the observed outcomes of the participants (e.g., health outcomes). Other times, the entity may need to determine causal effects from observational data accrued in a previous study where they did not have full control over the treatment assignment mechanism.
Our work focuses on estimating the total treatment effect (TTE), or the difference between the average outcomes of the population when everyone is treated versus when no one is treated, given data collected from a randomized experiment. This estimand is sometimes referred to as the global average treatment effect, as in ref. . The TTE is a quantity of interest in scenarios where the decision maker must choose between adopting the proposed treatment or sticking with the status quo. For example, a social media company may develop a new recommendation algorithm for suggesting content to their users, and they want to decide whether to roll out this new algorithm across their platform. As another example, suppose that a pharmaceutical company develops a vaccine for some infectious disease. Then, public health experts and officials must decide whether the vaccine is safe and efficacious enough to warrant its recommendation to the general population. As the side effects of the new treatment are unknown, the goal is to determine the efficacy of the treatment relative to the status quo baseline by running a budgeted randomized trial, where the number of treated individuals in the trial is limited for safety reasons.
The techniques and guarantees for estimating causal effects in classical causal inference heavily rely upon the stable unit treatment value assumption (SUTVA), which posits that the outcome of each individual is independent of the treatment assignment of all other individuals . Unfortunately, SUTVA is violated in all the aforementioned applications because people influence and are impacted by their peers. In the presence of this network interference, an individual's outcome is affected by the treatment assignment of others in their social network, and SUTVA no longer holds. Distinguishing between the direct effect of treatment on an individual and the network effect of others' treatment on the individual can be challenging. This has resulted in a growing literature on causal inference in the presence of network (interference) effects, sometimes referred to as spillover or peer influence effects. In this work, we consider the task of estimating the TTE from unit randomized trials under neighborhood interference, under which an individual is affected by the treatments of their direct neighbors but not by the treatments of individuals outside their neighborhood. Furthermore, we focus on unit randomized designs (RDs), wherein individuals are independently assigned to either treatment or control in a randomized experiment. This is in contrast to cluster RDs, which have been proposed as an approach to address network interference for randomized experimental design but which may not be feasible in practice due to an incompatibility with existing experimental platforms.
Estimating a causal effect from randomized experiments involves two decisions. First, we must decide what kind of experiment to run, i.e., how we choose the individuals to participate in the study, and how we determine which individuals are assigned to receive the new treatment. In this article, we focus on randomization inference, in which we assume the entire population participates in the study, and the randomness arises from the assignment of treatment or control. Second, after the experiment is conducted, we must decide how to analyze the data and construct an estimate from the measured observations. The literature addressing network interference can largely be categorized as either constructing clever RDs to exploit clustering structure in the network, or designing new estimators that exploit structure in the potential outcomes model. Without making assumptions on either the network or the potential outcomes model, it is impossible to estimate the TTE under arbitrary interference [3–5].
Table 1 summarizes how different assumptions on the potential outcomes model or network structure lead to different types of solutions, with a focus on unbiased estimators and neighborhood interference models where the network is known. The columns correspond to assumptions on the network structure (in order of increasing generality from left to right), while the rows correspond to assumptions on the structure in the potential outcomes model (in order of increasing generality from top to bottom). The body of the table lists proposed solutions for estimating the TTE under the corresponding assumptions on the network/model structure. For example, under a fully general network structure and a linear potential outcomes model, refs [6–11] propose using an ordinary least squares (OLS) estimator with a Bernoulli RD. Since we focus on solutions proposing unbiased estimators, we also list bounds on the variance of the estimators in the table, as a point of comparison. As Table 1 indicates, the literature has focused either on analyzing the Horvitz–Thompson estimator under new RDs that exploit network structure by increasing the correlation between neighbors' treatment assignments, or on using regression-style estimators under a Bernoulli RD, exploiting strong functional assumptions on the potential outcomes model. Without assuming any structure beyond neighborhood interference, the baseline solution of using the Horvitz–Thompson estimator under the Bernoulli RD is an unbiased estimator whose variance scales exponentially in the degree of the network.
| Model structure \ Network structure | Disconnected subcommunities | $\kappa$-restricted growth | Fully general |
|---|---|---|---|
| Linear | Directions for future work | Directions for future work | OLS, Bernoulli RD [6–11] |
| Generalized linear | Directions for future work | Directions for future work | Regression/machine learning, Bernoulli RD |
| $\beta$-order interactions | Directions for future work | Directions for future work | SNIPE, Bernoulli RD |
| Arbitrary neighborhood interference | Horvitz–Thompson, cluster RD [12–15] | Horvitz–Thompson, randomized cluster RD [1,7,16,17] | Horvitz–Thompson, Bernoulli RD |
In our work, we propose a hierarchy of model classes that interpolates between simple linear models and complex general models, such that a practitioner can choose the strength of the model assumptions they are willing to impose. Naturally, assuming a more limited model simplifies the causal inference task. We characterize the complexity of a potential outcomes model by its order of interactions $\beta$, which also corresponds to the polynomial degree of the potential outcomes function when viewed as a polynomial of the treatment vector. A $\beta$-order interactions model is one in which each neighborhood set of size at most $\beta$ can have a unique additive network effect on the potential outcome. Our model allows for heterogeneity in the influence of different sets of treated neighbors, strictly generalizing beyond the typical parametric model classes used in the literature, which oftentimes assume anonymous interference. We make no assumptions on the graph beyond bounded degree, so the graph may be well connected and not easily clustered. We summarize our contributions and results below:
Assuming a $\beta$-order interactions model, under a nonuniform Bernoulli RD with treatment probabilities $p_i$ for $i \in [n]$, we present the structured neighborhood interference polynomial estimator (SNIPE), a simple unbiased estimator for the TTE.
We derive a bound on the variance of our estimator, which scales polynomially in the degree of the network and exponentially in the degree of the potential outcomes model. We also show that our estimator is asymptotically normal.
For a $d$-regular graph and uniform treatment probabilities $p$, under boundedness conditions on $d$ and $p$, we prove a minimax lower bound on the mean squared error for estimating the TTE that grows exponentially in $\beta$, implying that the exponential dependence on $\beta$ is necessary.
We provide experimental results to illustrate that using regression models that do not allow for heterogeneity among the network effects can lead to considerable bias when the anonymous interference assumption is violated. The experiments validate that our estimator is unbiased for $\beta$-order interaction models and attains a lower mean squared error than existing alternatives.
To interpret the upper bound on the variance of our proposed estimator for the TTE, note that our variance scales polynomially in the network degree $d$ and exponentially in $\beta$, whereas the variance of the Horvitz–Thompson estimator under a Bernoulli RD scales exponentially in $d$, where $\beta$ is always bounded above by $d$. For smaller values of $\beta$, the $\beta$-order interactions model imposes stronger structural assumptions on the potential outcomes model than required by the Horvitz–Thompson estimator; in turn, our estimator has significantly lower variance, scaling only polynomially in the network degree yet exponentially in $\beta$, as opposed to exponentially in $d$. In addition, the minimax lower bound shows that the exponential dependence on $\beta$ is minimax optimal amongst $\beta$-order interaction models, implying that the order of interactions, or polynomial degree, of the potential outcomes model is a meaningful property that captures the complexity of estimating the TTE.
2 Related literature
While there has been a flurry of recent activity in addressing the challenges that arise from network interference, every proposed solution concept fundamentally hinges upon making key assumptions on the form of network interference. Without any assumptions, the vector of observed outcomes under a particular treatment vector may have no relationship to the potential outcomes under any other treatment vector. We should naturally expect that if we are willing to impose stronger assumptions, then we may be able to obtain stronger results, conditioned on those assumptions being satisfied. As such, the literature ranges from results that impose fewer assumptions on the model and graph, yielding unbiased estimators with high variance, to results that impose strong assumptions on the model and graph, yielding simple, unbiased estimators with low variance.
Early works propose framing assumptions via exposure functions, or constant treatment response [3,4]. This assumes that there is some known exposure mapping $f$, which maps the treatment vector $z \in \{0,1\}^n$, along with unit-specific traits $\theta_i$, to a discrete space of exposures $\Delta$, or effective treatments. The potential outcomes function for unit $i$ is then assumed to be a function only of its exposure, or effective treatment, such that if $f(z, \theta_i) = f(z', \theta_i)$, then $Y_i(z) = Y_i(z')$. If $\Delta$ is as large as $\{0,1\}^n$, then this assumption places no effective restriction on the potential outcomes function; thus, this assumption is only meaningful when $|\Delta|$ is relatively small. One commonly used exposure mapping expresses the neighborhood interference assumption [17–20], in which each unit $i$ is associated to a neighborhood $N_i$, and unit $i$'s potential outcome is only a function of the treatments of units within $N_i$. We could use an exposure mapping to formulate this assumption, with $|\Delta| = 2^{d}$ for $d$ denoting the maximum neighborhood size, and $f(z, \theta_i) = f(z', \theta_i)$ if the treatments assigned to individuals in $N_i$ are the same between $z$ and $z'$. Another commonly used exposure mapping expresses the anonymous interference assumption [14,21–23], in which the potential outcomes are a function of the treatments only through the number of treated neighbors, i.e., $Y_i(z) = Y_i(z')$ if the number of treated individuals in $N_i$ under $z$ and $z'$ is the same. While the exposure mapping framework provides a highly generalizable and abstract framework for assumptions, it is fundamentally discrete in nature, and the complexity of estimation is characterized by the number of possible exposures $|\Delta|$, which could still be large. As a result, ref.  suggests a collection of additional assumptions that can be imposed on top of anonymous neighborhood interference, including distributional or functional form assumptions, or additivity assumptions as suggested in ref. .
The majority of works in the literature (along with our work) assume neighborhood interference with respect to a known graph. Notable exceptions include ref. , which considers a highly structured spatial interference setting with network effects decaying with distance, and ref. , which provides methods for testing hypotheses about interference effects, including higher-order spillovers. Without imposing any additional assumptions on the potential outcomes besides neighborhood interference, a natural solution is to use the Horvitz–Thompson estimator to estimate the average outcomes under full neighborhood treatment and full neighborhood control. While the estimator is unbiased, its variance scales inversely with the probability that a unit's full neighborhood is assigned entirely to treatment or entirely to control. Under a Bernoulli($p$) RD, where each individual is treated independently with probability $p$, this probability decays exponentially in the neighborhood size, so the variance grows exponentially in the degree, as indicated in the bottom right cell of Table 1. This exponential dependence on the degree renders the estimator impractical for realistic networks.
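To make this baseline concrete, the following is a minimal Python sketch of the Horvitz–Thompson estimator under a Bernoulli($p$) design; the function name and data layout are our own illustration, not code from any experimental platform:

```python
import numpy as np

def horvitz_thompson_tte(Y, z, neighborhoods, p):
    """Horvitz-Thompson estimate of the TTE under a Bernoulli(p) design.

    Y: observed outcomes, shape (n,)
    z: binary treatment assignments, shape (n,)
    neighborhoods: list where neighborhoods[i] holds the in-neighborhood N_i
    p: shared treatment probability

    Unit i contributes Y_i / p**|N_i| when its entire neighborhood is
    treated and -Y_i / (1 - p)**|N_i| when it is entirely untreated; these
    inverse-probability weights are the source of the exponential-in-degree
    variance discussed above.
    """
    n = len(Y)
    total = 0.0
    for i in range(n):
        Ni = np.asarray(neighborhoods[i])
        if np.all(z[Ni] == 1):
            total += Y[i] / p ** len(Ni)
        elif np.all(z[Ni] == 0):
            total -= Y[i] / (1 - p) ** len(Ni)
    return total / n
```

Averaging this estimate over all assignments, weighted by their Bernoulli probabilities, recovers the TTE exactly (unbiasedness); the $1/p^{|N_i|}$ weights are also what drive the exponential variance.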
One approach in response to the aforementioned challenge is to cleverly construct RDs that increase overlap, i.e., the probability that a unit's entire neighborhood is assigned to treatment or control. The earliest literature in this line of work additionally assumes partial interference, also referred to as the multiple networks setting, in which the population can be partitioned into disjoint groups, and network interference only occurs within groups and not across groups [12,14,15,20,21,26,27]. This assumption makes sense in contexts where interference is only expected within naturally clustered groups of individuals, such as households, cities, or countries. Given knowledge of the groups, we can then randomly assign groups to different treatment saturation levels, e.g., jointly assigning groups to either treatment or control, increasing the correlation of treatments within neighborhoods. Then, a difference in means estimator or a Horvitz–Thompson estimator can be used to estimate the TTE. The asymptotic consistency of these estimators relies on the number of groups going to infinity, with a variance scaling inversely with the fraction of treated clusters, as indicated in the bottom left cell of Table 1. In practice, even networks that are clearly clustered into separate groups may not have a sufficiently large number of groups to result in accurate estimates.
For general connected graphs, one can still implement a cluster-based RD on constructed clusters, where the clusters are constructed to minimize the number of edges between clusters [1,7,16,17]. References [1,17] provide guarantees for graphs exhibiting a restricted growth property, which limits the rate at which local neighborhoods of the graph can grow in size, and ref.  proves that using a randomized cluster RD along with the Horvitz–Thompson estimator achieves a variance that avoids the exponential dependence on the degree for restricted growth networks, a significant gain over the Bernoulli design. A limitation of these solutions is that cluster RDs can be difficult to implement due to incompatibility with existing experimentation platforms or due to ethical concerns. When the existing experimentation platform is already set up for a unit RD, the experimenter may wish to avoid overhauling the platform to work with a cluster RD due to time or resource constraints . In addition, refs [28–31] detail ethical issues that arise in cluster RDs including, but not limited to, problems with informed consent (e.g., it may be infeasible to obtain informed consent from every unit in a cluster) and concerns about distributive justice (e.g., with regard to how clusters are selected and assigned to treatment). Furthermore, both refs [28,30] comment that many of the existing formal research ethics guidelines were designed with unit RDs in mind and thus offer little guidance for the considerations that arise in cluster RDs. The work we present here focuses on scenarios with unit RDs, when cluster RDs may not be feasible or may be undesirable due to any of the aforementioned concerns.
The alternate approach, which has gained traction empirically, is to impose additional functional assumptions on the potential outcomes beyond anonymous neighborhood interference. The most common assumption is that the potential outcomes are linear with respect to a particular statistic of the treatment vector, where the linear function is shared across units [6–11]. Under this assumption, estimating the entire potential outcomes function reduces to linear regression, which one could solve using OLS, as indicated in the top right cell of Table 1. After recovering the linear model, one could estimate any desired causal estimand. These results rely on correctly choosing the statistic that governs the linearity; more generally, this reduces the task to heuristic feature engineering for generalized linear models . One could plausibly extend the function class beyond linear and instead apply machine learning techniques to estimate a function class that generalizes beyond linear regression. While we do not state a variance bound in Table 1, one would expect that when the treatment probability is sufficiently large, the variance would scale inversely with the population size, with some dependence on the complexity of the model class; when the treatment probability is very small, the variance may degrade, as the regression still requires sufficient variation in the covariates represented in the data, i.e., a sufficient spread in the number of treated neighbors. Reference  considers nonparametric models yet focuses on estimating a locally linearized estimand. A drawback of these approaches is that they assume anonymous interference, which imposes a symmetry in the potential outcomes such that the causal network effect of treating any of an individual's neighbors is the same regardless of which neighbor is chosen. In addition, they assume that the function being learned is shared across units, or at least across units of the same known covariate type, which can be limiting.
While we have primarily focused on summarizing the literature for unbiased estimators, there has also been some work considering how structure in the potential outcomes model can be exploited to reduce bias of standard estimators. References [32–34] use application domain-specific structure in two-sided marketplaces and network routing to compare the bias of the difference in means estimators under different experimental designs. Reference  uses structure in ridesharing platforms to propose a new estimator with reduced bias. In addition, there has been some limited empirical work studying bias under model misspecification .
Complementary to the literature on randomized experiments, there has been a growing literature on causal inference from observational studies in the presence of network interference. However, the limitations mirror the aforementioned concerns. A majority of the literature assumes partial interference [15,36–39], i.e., the multiple networks setting, which then enables causal inference of a variety of different estimands. In particular, it is commonly assumed that the different groups in the network are sampled from some underlying distribution, and the statistical guarantees are given with respect to the number of groups going to infinity. Alternatively, other works assume that the potential outcomes depend only on a simple and known statistic of the neighborhood treatment, most commonly the number or fraction of treated neighbors [11,40,41]. Either the neighborhood statistic takes only finitely many values, or assumptions are imposed on the functional form of the potential outcomes, which imply anonymous interference and reduce inference to a regression task or maximum likelihood calculation. Reference  considers a general exposure mapping model alongside an inverse propensity weighted estimator, but the estimator has high variance when the exposure mapping is complex.
In contrast to the majority of the mentioned literature, we neither rely on cluster RDs nor anonymous interference assumptions. We instead propose a potential outcomes model with low-order interactions structure, where the degree of interactions $\beta$ characterizes the difficulty of inference, also studied in ref. . For $\beta = 1$, this model is equivalent to heterogeneous additive network effects in ref. , which can be derived from the joint assumptions of additivity of main effects and interference in ref. . When $\beta$ is larger than the network degree, this assumption is equivalent to arbitrary neighborhood interference, providing a nested hierarchy of models that encompasses the simple linear model class, the fully general model class, as well as model classes of varying complexity in between. References [43,44], which consider models similar to ours, focus on the setting where the underlying network is fully unknown but richer information is available, either in the form of historical data  or multiple measurements over the course of a multi-stage experiment , neither of which we assume in this work. Our estimator also has close connections to the pseudoinverse estimator in ref. , the Riesz estimator in ref. , and the estimator in ref. , which we discuss in Section 4.
3.1 Causal network
Let $[n]$ denote the underlying population of $n$ individuals. We model the network effects in the population as a directed graph over the individuals with edge set $E$. An edge $(j, i) \in E$ signifies that the treatment assignment of individual $j$ affects the outcome of individual $i$. As an individual's own treatment is likely to affect their outcome, we expect self-loops in this graph. In much of the article, we are concerned with neighborhood interference effects, and we use $N_i = \{j : (j, i) \in E\}$ to denote the in-neighborhood of an individual $i$. Note that this definition allows $i \in N_i$. Many of our variance bounds reference the network degree. We let $d_{\text{in}}$ denote the maximum in-degree of any individual, $d_{\text{out}}$ denote the maximum out-degree, and $d = \max(d_{\text{in}}, d_{\text{out}})$.
3.2 Potential outcomes model
To each individual $i$, we associate a treatment assignment $z_i \in \{0, 1\}$, where we interpret $z_i = 1$ as an assignment to the treatment group and $z_i = 0$ as an assignment to the control group. We collect all treatment assignments into the vector $z \in \{0, 1\}^n$. We use $Y_i(z)$ to denote the outcome of individual $i$. As our setting assumes network interference, the classical SUTVA assumption is violated. That is, $Y_i$ is not a function only of $z_i$. Rather, $Y_i$ may be a function of $z$, the treatment assignments of the entire population. Since each treatment variable is binary, we can indicate an exact treatment assignment as a product of $z_j$ (for treated individuals) and $(1 - z_j)$ (for untreated individuals) factors. As such, we can express a general potential outcome function as a polynomial in $z$,

$$Y_i(z) = \sum_{S \subseteq [n]} \tilde{c}_{i,S} \prod_{j \in S} z_j \prod_{j \notin S} (1 - z_j),$$

where $\tilde{c}_{i,S}$ is individual $i$'s outcome when their set of treated neighbors is exactly $S$. Via a change of basis, we can equivalently express $Y_i$ as a polynomial in the "treated subsets":

$$Y_i(z) = \sum_{S \subseteq [n]} c_{i,S} \prod_{j \in S} z_j, \qquad (3.1)$$

where $c_{i,S}$ represents the additive effect on individual $i$'s outcome that they receive when the entirety of subset $S$ (perhaps among other individuals) is treated. Note that $c_{i,\emptyset}$ represents the baseline effect, the component of $i$'s outcome that is independent of the treatment assignments.
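As a quick illustration of this parameterization, the following sketch (with invented coefficient values; `coeffs[i]` maps each subset $S$ to $c_{i,S}$) evaluates a potential outcome in the treated-subsets basis:

```python
import numpy as np

def outcome(i, z, coeffs):
    """Evaluate Y_i(z) = sum over subsets S of c_{i,S} * prod_{j in S} z_j.

    coeffs[i] maps a frozenset S to the additive effect c_{i,S}; the empty
    set carries the baseline effect c_{i,emptyset} (np.prod([]) == 1.0).
    """
    return sum(c * np.prod([z[j] for j in S]) for S, c in coeffs[i].items())
```

For instance, with coefficients $c_{0,\emptyset} = 1.0$, $c_{0,\{0\}} = 2.0$, and $c_{0,\{0,1\}} = 0.5$ for unit 0, the outcome under $z = (1, 1)$ is $1.0 + 2.0 + 0.5 = 3.5$.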
So far, the potential outcomes model described in (3.1) is completely general. However, it is parameterized by a coefficient $c_{i,S}$ for every individual $i$ and subset $S$, which makes it untenable in most settings. To combat this, we impose some structural assumptions on these coefficients. First, we observe that the populations of interest can be quite large (e.g., the population of an entire country), and their influence networks may have high diameter. Throughout most of the article, we assume that individuals' outcomes are influenced only by their immediate in-neighborhood.
(Neighborhood interference) $Y_i$ depends only on the treatments of individuals in $N_i$. Equivalently, $Y_i(z) = Y_i(z')$ for any $z$ and $z'$ such that $z_j = z'_j$ for all $j \in N_i$. In our notation, $c_{i,S} = 0$ for any $S \not\subseteq N_i$.
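On small examples, this assumption can be verified by brute force; the following utility (our own illustration, exponential-time, so only for tiny graphs) checks the equivalent invariance condition stated above:

```python
import itertools
import numpy as np

def satisfies_neighborhood_interference(outcome_fn, i, N_i, n):
    """Check that Y_i(z) == Y_i(z') whenever z and z' agree on N_i.

    outcome_fn(i, z) returns Y_i(z); N_i is the in-neighborhood of unit i.
    Enumerates all pairs of treatment vectors, so intended only for small n.
    """
    for z in itertools.product([0, 1], repeat=n):
        for zp in itertools.product([0, 1], repeat=n):
            if all(z[j] == zp[j] for j in N_i):
                if outcome_fn(i, np.array(z)) != outcome_fn(i, np.array(zp)):
                    return False
    return True
```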
Next, we note that the degree of each polynomial $Y_i$ can (under the neighborhood interference assumption) be as large as $|N_i|$. In such a model, one's outcome may be differently influenced by a treated coalition of any size in their neighborhood. Contrast this with a simpler linear potential outcomes model, wherein an individual's outcome receives only an independent additive effect from each of their treated neighbors. This illustrates that the degree of the polynomial may serve as a proxy for its complexity. In this work, we consider the scenario where the polynomial degree may be significantly smaller than $|N_i|$.
(Low polynomial degree) Each potential outcome function $Y_i$ has degree at most $\beta$. In our notation, $c_{i,S} = 0$ whenever $|S| > \beta$. Along with Assumption 1, it follows that the potential outcomes function from (3.1) can be expressed in the following form,

$$Y_i(z) = \sum_{S \in \mathcal{S}_i^\beta} c_{i,S} \prod_{j \in S} z_j, \qquad (3.2)$$

where we define $\mathcal{S}_i^\beta = \{S \subseteq N_i : |S| \le \beta\}$.
We remark that while we use the formal mathematical term of "low polynomial degree," since this describes a function over a vector of binary variables, a low polynomial degree constraint is equivalent to a constraint on the order of interactions amongst the treatments of neighbors. In the simplest setting, when $\beta = 1$, this is equivalent to a model in which the network effects are additive across treated neighbors, strictly generalizing beyond all linear models that have been widely used in applied settings.
We use the notation in (3.2) to express the potential outcomes model for the remainder of the article. If $\beta \ge |N_i|$ for all $i$, note that the size constraint could be completely removed from the definition of $\mathcal{S}_i^\beta$ in equation (3.2), reducing to the arbitrary neighborhood interference setting. However, we turn our attention to settings where $\beta$ might be much smaller than the degree of the graph ($\beta \ll d$), and we can assume that only low-order interactions within neighborhoods have an effect on an individual. As noted earlier, taking $\beta = 1$ corresponds to the heterogeneous linear outcomes model in ref. . We include further examples to help in understanding this low polynomial degree assumption in Section 3.2.1.
Many of our variance bounds utilize an upper bound on the treatment effects for each individual. We define $Y_{\max}$ such that

$$\sum_{S \in \mathcal{S}_i^\beta} |c_{i,S}| \le Y_{\max} \quad \text{for all } i \in [n].$$

It follows that $|Y_i(z)| \le Y_{\max}$ for any treatment vector $z$.
We emphasize that the model in (3.2) captures fully general neighborhood interference when $\beta \ge |N_i|$ for all $i$. Even when $\beta$ is small, the number of parameters in the model grows with both $n$ and the network degree; already for $\beta = 1$ there are up to $n(d_{\text{in}} + 1)$ coefficients. As a result, the total number of observations is always smaller than the number of parameters in the model, so using simple regression-style estimators to identify the model is impossible.
We can consider the order of interactions, or polynomial degree, as a way to measure the complexity of the model class. Our subsequent results suggest that $\beta$ is meaningful as it also captures a notion of statistical complexity or difficulty of estimation. This is evidenced by the variance upper bound and minimax lower bound results in Section 5, where we see that the smaller $\beta$ is relative to the degree of the graph, the smaller the variance incurred, and the larger $\beta$ is with respect to the graph degree, the higher the variance incurred. In some sense, the "lower" the degree of the model, i.e., the more structure imposed, the "easier" it is to estimate the TTE. On the flip side, the "higher" the degree of the model, i.e., the closer it is to being fully general, the "harder" it is to estimate.
Assumption 2 implies that the model exhibits a particular type of sparsity in the coefficients with respect to the monomial basis, in which the coefficients corresponding to sets larger than size $\beta$ are zero. In our setting, when the treatments are budgeted, i.e., the treatment probabilities are small, these coefficients precisely correspond to effects that are observable in a Bernoulli RD: the probability of observing or measuring the coefficient corresponding to a set $S$ scales as $p^{|S|}$, so higher-order coefficients are exponentially harder to observe. As such, there is a direct connection between this hierarchy of models as parameterized by $\beta$ and the ability to measure and estimate the TTE, which is formalized by our subsequent analysis.
3.2.1 Examples of low-degree interaction models
We provide a few examples to illustrate when the polynomial degree of the potential outcomes model may naturally be smaller than the total neighborhood size (i.e., $\beta < |N_i|$).
Consider a potential outcomes model that exhibits the joint assumptions of additivity of main effects and interference effects as defined in ref. . This imposes that the potential outcomes satisfy

$$Y_i(z) = Y_i(\mathbf{0}) + \sum_{j \in [n]} z_j \big( Y_i(e_j) - Y_i(\mathbf{0}) \big),$$

where $\mathbf{0}$ is a vector of all zeros and $e_j$ is a standard basis vector. As discussed in ref. , this assumption implies a low-degree interaction model with $\beta = 1$, which they refer to as heterogeneous additive network effects.
Consider a hypothesized setting where each individual’s neighborhood can be divided into smaller sub-communities. For example, one’s close contacts may include their immediate family, their close friends, their work colleagues, etc. Within each of these sub-communities, there may be nontrivial (higher order) interactions between the treatments of its members. However, it is reasonable to assume that the cumulative effects of each sub-community have an additive effect on the individual’s outcome. That is, there are no nontrivial interactions resulting from the treatment of individuals across different sub-communities. In this case, a natural choice for is the size of the largest sub-community, which could be significantly smaller than the size of the largest neighborhood.
Suppose a social networking platform is testing a new “hangout room” feature where groups of up to five people can interact in a novel environment on the platform. One can posit that a natural choice for in this setting is 5, as the change in any individual’s usage on the platform can be attributed to how they utilize this new feature, which in turn is a function of it being introduced to various subsets of up to five users in that individual’s neighborhood.
Consider a setting where an individual's outcome is a low-degree polynomial in some auxiliary quantity, which is itself linear in the treatment assignments of their neighborhood. For example, we might have

$$Y_i(z) = a_i + b_i s_i + c_i s_i^2, \qquad s_i = \sum_{j \in N_i} w_{ij} z_j,$$

which has polynomial degree $\beta = 2$ regardless of the neighborhood sizes.
A similar setting is explored in our simulated experiments in Section 6.
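One hedged instance of this construction (the coefficient names below are ours): if $s_i$ is a linear exposure statistic and the outcome is quadratic in $s_i$, then expanding the square produces only pairwise interaction terms, so the order of interactions is 2 even for large neighborhoods:

```python
def quadratic_of_linear_outcome(i, z, nbrs, w, a, b, c):
    """Y_i = a + b*s_i + c*s_i**2 for the linear statistic
    s_i = sum over j in N_i of w[i][j] * z[j].

    Expanding s_i**2 yields terms w[i][j] * w[i][k] * z[j] * z[k], i.e.,
    interactions of order at most 2, regardless of |N_i|.
    """
    s = sum(w[i][j] * z[j] for j in nbrs[i])
    return a + b * s + c * s ** 2
```

The induced pairwise coefficient for neighbors $j, k$ is $2 c\, w_{ij} w_{ik}$, which can be read off from a discrete mixed difference, while all third-order mixed differences vanish.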
Consider an example in which network effects only arise from pairwise edge interactions and triangular interactions, i.e., individual $i$'s outcome consists of a sum of its baseline outcome, its own direct effect $c_{i,\{i\}} z_i$, pairwise edge effects $c_{i,\{j\}} z_j$ for $j \in N_i$, and triangle effects $c_{i,\{j,k\}} z_j z_k$ for $j, k \in N_i$ such that there is also an edge between $j$ and $k$, indicating that the three individuals are mutual connections. For such a model, $\beta$ would be 2 due to the triangular interactions.
We emphasize that any potential outcomes model that takes a binary treatment vector as its input can be written as a polynomial in , taking the form in equation (3.2) for general . However, the low-degree assumption, i.e., that , generally rules out high-degree models. For example, both threshold models and saturation models generally require the degree of to be . In a threshold potential outcomes model, an individual experiences network effects only once a particular threshold of their neighbors is treated. An example of this type of model is given by
where . Saturation models allow network or peer effects to increase up to a particular saturation level, such as
where is some maximum saturation threshold. In this model, an individual receives additive effects from each subset of treated neighbors until a certain threshold effect is reached, after which the network effects have “saturated” and the treatment of additional neighbors contributes no further effect.
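As a minimal sketch of these two families (the coefficient names and numeric values are hypothetical, and the count of treated neighbors is assumed precomputed from the treatment vector):

```python
def threshold_outcome(base, direct, peer, z_i, treated_nbrs, tau):
    """Threshold model: the peer effect switches on only once at least
    tau of the unit's neighbors are treated."""
    return base + direct * z_i + (peer if treated_nbrs >= tau else 0.0)

def saturation_outcome(base, direct, per_nbr, z_i, treated_nbrs, cap):
    """Saturation model: each treated neighbor adds per_nbr to the outcome
    until the count reaches cap, after which the effect has saturated."""
    return base + direct * z_i + per_nbr * min(treated_nbrs, cap)
```

Both families generally require the polynomial degree to grow with the neighborhood size, which is why they fall outside the low-degree regime.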
3.3 Causal estimand and RD
Throughout most of the article, we concern ourselves with estimating the TTE. This quantifies the difference between the average of individuals’ outcomes when the entire population is treated versus when the entire population is untreated:
where represents the all ones vector and represents the zero vector. Plugging in our parameterization from equation (3.2), we may re-express the TTE as follows:
Since exposing individuals to treatment can have a deleterious and irreversible effect on their outcomes, we wish to estimate the TTE after treating a small random subset of the population. We focus on a nonuniform Bernoulli design, wherein each individual is independently assigned treatment with probability for . That is, each . Such an RD is both straightforward to implement and to understand. Furthermore, many existing experimentation platforms are already designed for Bernoulli randomization, making it easy to collect new data or to re-analyze existing data and adjust for network interference, rather than requiring a complete overhaul of the existing experimentation platform to allow for more complex randomization schemes.
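Sampling from this design is a one-liner; a minimal sketch using only the standard library (the function name is ours):

```python
import random

def bernoulli_design(probs, seed=None):
    """Draw one treatment vector under a nonuniform Bernoulli design:
    z_i ~ Bernoulli(probs[i]), independently across units."""
    rng = random.Random(seed)
    return [1 if rng.random() < p else 0 for p in probs]

# One draw over four units with heterogeneous treatment probabilities.
z = bernoulli_design([0.1, 0.5, 0.9, 1.0], seed=7)
```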
In this section, we introduce the estimator that will serve as our central focus through the rest of the article: the SNIPE. While we restrict our attention to nonuniform Bernoulli design, the techniques to derive this estimator can be generalized to a wide variety of causal estimands and experimental designs. In fact, our estimator is connected to the Horvitz–Thompson estimator and turns out to be a special case of both the pseudoinverse estimator first introduced by Swaminathan et al.  and more recently the Riesz estimator of Harshaw et al. . We discuss these connections in Section 4.3.
To provide intuition, we first derive the estimator in the case with the linear heterogeneous potential outcomes model  via a connection to OLS. Then, we show how it can be extended to the more general polynomial setting. Our main result (Section 5) establishes both the unbiasedness of this family of estimators and a bound on its variance under Bernoulli design.
Recalling that , the vector collects the parameters in some canonical ordering. We will assume throughout that is always first in this ordering and otherwise index the entries of these vectors by their corresponding set . As an example, when and , we may have , where is the in-degree of unit . In a similar manner, the treated subsets vector collects the indicators in the same ordering. In our example, . By using this notation, we may express
where the inner product argument in the second equation is the -length vector with first entry 0 and remaining entries 1.
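As a concrete sketch of this ordering in the degree-1 case (the small graph, coefficient values, and helper names below are hypothetical):

```python
def treated_subsets_vector(i, z, in_nbrs):
    """Degree-1 treated-subsets vector for unit i: first entry 1 (for the
    baseline term), then z_j for each j in i's in-neighborhood, taken in
    sorted order as the canonical ordering."""
    return [1] + [z[j] for j in sorted(in_nbrs[i])]

def outcome_from_coeffs(c_vec, x_vec):
    """Y_i(z) expressed as the inner product of the coefficient vector
    and the treated-subsets vector, matching the canonical ordering."""
    return sum(a * b for a, b in zip(c_vec, x_vec))

# Unit 0's in-neighborhood (including itself) and a treatment draw.
in_nbrs = {0: {0, 1, 2}}
z = {0: 1, 1: 0, 2: 1}
x = treated_subsets_vector(0, z, in_nbrs)   # [1, z_0, z_1, z_2]
c = [1.0, 0.5, 0.2, 0.2]                    # [baseline, direct, two edges]
y = outcome_from_coeffs(c, x)               # baseline + direct + one edge
```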
4.1 Building intuition in the linear setting ( )
To motivate our estimator, we consider the linear heterogeneous potential outcomes model ( ) under nonuniform Bernoulli RD. By using the TTE expression from (4.1), we can recast the problem of estimating the TTE into the problem of estimating the parameter vector : by linearity of expectation, an unbiased estimator of will give rise to an unbiased estimator of .
As a thought experiment toward estimating , imagine that we can perform independent replications of our randomized experiment. In each replication , we observe the treated subsets vector , obtained from our Bernoulli RD, and the outcome . We visualize,
To minimize the sum of squared deviations from the true coefficients, we use OLS, computing
We consider the limit of this estimate as . The consistency of the least squares estimator ensures that . By the law of large numbers, we have
Similarly, the vector . Assuming the invertibility of the matrix (we show this explicitly for Bernoulli design below), we obtain in the limit,
Now, we return from our thought experiment to the actual experimental setting wherein a single instantiation is realized. We consider the estimator
Note that the matrix depends only on our experimental design and thus can be utilized by our estimator. The unbiasedness of this estimator follows from our aforementioned computation; by applying linearity,
By applying linearity once more, we may obtain the unbiased estimator for the TTE,
For the specific setting of nonuniform Bernoulli design with , one can use the fact that and for to compute,
which we can invert to obtain
By plugging into equation (4.3), we obtain the explicit form for our estimator:
where SNIPE refers to the fact that this is the SNIPE estimator with .
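A sketch of this explicit form, consistent with the derivation above, together with a quick Monte Carlo check of unbiasedness under a hypothetical linear model (the graph, coefficients, and all names are our own illustration):

```python
import random

def snipe1(Y, z, in_nbrs, p):
    """Sketch of the explicit estimator in the linear (degree-1) setting:
    (1/n) * sum_i Y_i(z) * sum_{j in N_i} (z_j/p_j - (1 - z_j)/(1 - p_j)),
    where N_i is unit i's in-neighborhood (including i itself)."""
    n = len(Y)
    total = 0.0
    for i in range(n):
        w = sum(z[j] / p[j] - (1 - z[j]) / (1 - p[j]) for j in in_nbrs[i])
        total += Y[i] * w
    return total / n

# Hypothetical 3-unit network; N_i includes i itself.
in_nbrs = {0: {0, 1}, 1: {0, 1, 2}, 2: {1, 2}}
c = {0: {0: 0.5, 1: 0.25}, 1: {0: 0.25, 1: 0.5, 2: 0.25}, 2: {1: 0.25, 2: 0.5}}
baseline = {0: 1.0, 1: 1.0, 2: 1.0}
true_tte = sum(sum(c[i].values()) for i in c) / 3  # = 2.5 / 3

def outcomes(z):
    """Linear heterogeneous model: Y_i = baseline_i + sum_j c[i][j] z_j."""
    return [baseline[i] + sum(c[i][j] * z[j] for j in c[i]) for i in range(3)]

rng = random.Random(0)
p = [0.5, 0.5, 0.5]
estimates = []
for _ in range(20000):
    z = [1 if rng.random() < pi else 0 for pi in p]
    estimates.append(snipe1(outcomes(z), z, in_nbrs, p))
avg = sum(estimates) / len(estimates)  # should concentrate near true_tte
```

The baseline term contributes zero in expectation because each per-neighbor weight has mean zero, while each linear coefficient is recovered with expectation one, which is the unbiasedness argument in miniature.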
4.2 SNIPE in the general polynomial setting
For larger , we can use the same least squares approach to obtain an unbiased estimate of the coefficient vector , and then plug this into equation (4.1) to obtain . The outcomes remain linear functions in – albeit for a significantly longer containing entries – so the same convergence results apply. Suppose we index the entries of by the sets and corresponding to the matrix row and column. By the independence and marginal treatment probabilities of nonuniform Bernoulli design, we see that
Working with this matrix is significantly more tedious, so we relegate the details to Appendix A. There, we show that is invertible by giving an explicit formula for the entries of its inverse. Then, we plug these entries into equation (4.3) to derive the following explicit formula for the estimator:
where we define the coefficient function such that
for each and . When it is clear from context that refers to the SNIPE estimator, we suppress the subscript for cleaner notation.
We pause here to again emphasize that our technique gives us an unbiased estimate of the coefficient vector for each individual . From here, we leverage the linearity of expectation to obtain an unbiased estimator for the TTE, which is a linear function of these coefficients. This same strategy can be applied to develop estimators for any causal effect that is linear in the coefficients. We highlight some other potential estimands and the explicit forms of their estimators in Appendix E. The techniques that we discuss in Section 5 can be used to establish further properties of these estimators.
This estimator can be evaluated in time and only utilizes structural information about the graph (not any influence coefficients ). Structurally, the estimator takes the form of a weighted average of the outcomes of each individual , where the weights themselves are functions of the treatment assignments of all members of the in-neighborhood . To make use of the low-order interference assumption, the estimator separately scales the effect of the treatment of each sufficiently small subset of using the scaling function . The definition of this scaling function ensures the unbiasedness of the estimator.
In the special case of a uniform treatment probability across all nodes, we can simplify this estimator to show that it is only a function of the number of treated individuals in ’s neighborhood, not their identities. Let denote the set of treated units. We can rewrite the estimator as follows:
Thus, while our estimation guarantees allow for heterogeneity in the potential outcomes model, when treatment probabilities are uniform, computing the estimator does not depend on the identities of the treated individuals, but only on the number of treated neighbors. As a result, the estimator can be evaluated in time, a significant improvement over the computational complexity when the treatment probabilities are nonuniform.
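In the degree-1 case, for instance, a weight of the inverse-propensity form sum over j in N_i of (z_j/p - (1 - z_j)/(1 - p)) — a hypothetical sketch, not the article's general formula — collapses under uniform p to a function of the treated count alone:

```python
def weight_by_count(k, d, p):
    """With uniform treatment probability p, a degree-1 weight of the form
    sum_{j in N_i} (z_j/p - (1 - z_j)/(1 - p)) for a unit with d
    in-neighbors, k of them treated, equals k/p - (d - k)/(1 - p):
    it depends only on the count k, not on which neighbors are treated."""
    return k / p - (d - k) / (1 - p)

# Any assignment of 2 treated neighbors among 5 gives the same weight.
w = weight_by_count(2, 5, 0.5)
```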
4.3 Connection to other estimator classes
Our estimator takes the form of a linear weighted estimator
under specifically constructed weight functions . From this perspective, we can draw connections between our estimator and others appearing in the literature.
4.3.1 Horvitz–Thompson estimator
First, we show that in the special case where , our estimator is identical to the classical Horvitz–Thompson estimator. In this case, the restriction is satisfied for every , so that includes all subsets of . By using this, we may simplify
so we exactly recover the Horvitz–Thompson estimator. As a result, when is sufficiently large relative to the degree of the nodes in the graph, our estimator is very similar to the Horvitz–Thompson estimator, only differing for the nodes that have graph degrees larger than . In this sense, under Bernoulli randomization, our estimator can be thought of as a generalization of Horvitz–Thompson to additionally account for low polynomial degree structure, which is most relevant for simplifying the potential outcomes associated with high-degree vertices.
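For comparison, a sketch of the classical Horvitz–Thompson estimator under full-neighborhood exposure; the graph and outcome values are illustrative:

```python
def horvitz_thompson(Y, z, in_nbrs, p):
    """Horvitz-Thompson TTE estimator: (1/n) * sum_i Y_i(z) *
    (prod_{j in N_i} z_j/p_j - prod_{j in N_i} (1 - z_j)/(1 - p_j)).
    The product weights are nonzero only when all of N_i is treated
    (first term) or all of N_i is untreated (second term)."""
    n = len(Y)
    total = 0.0
    for i in range(n):
        w_treated = w_control = 1.0
        for j in in_nbrs[i]:
            w_treated *= z[j] / p[j]
            w_control *= (1 - z[j]) / (1 - p[j])
        total += Y[i] * (w_treated - w_control)
    return total / n

# With d in-neighbors and uniform p, the inverse weight 1/p**d grows
# exponentially in d -- the source of HT's exponential variance.
in_nbrs = {0: {0, 1}, 1: {0, 1, 2}, 2: {1, 2}}
Y = [1.0, 2.0, 1.5]
est = horvitz_thompson(Y, [1, 1, 1], in_nbrs, [0.5, 0.5, 0.5])
```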
4.3.2 Pseudoinverse estimator
The key technical steps of deriving our estimator can be described as constructing unbiased estimators for each unit ’s contribution to the TTE using a connection to OLS for linear models, and subsequently averaging the unbiased estimates due to the fact that the causal estimand is a linear function of these individual contributions. This overall technique has also appeared in the previous literature in semiparametric estimation, with one clear example described by Swaminathan et al.  in a seemingly different context of off-policy evaluation for online recommendation systems. In their model, a context arrives, at which point the principal selects a tuple (or slate) of actions and observes a random reward based on the interaction between the context and the slate. The authors make a linearity assumption that posits that , where indicates the choice of a particular action in each entry of the tuple, and is a context-specific reward weight vector associated to context . In our setting, the context is an individual , the reward weights are their effect coefficients , and the slate indicator vector is our treated subsets vector .
A primary goal of ref.  is to estimate , which can be used to inform a good slate selection policy. The mean squared error of an estimate for is , where encodes the random selection of the context, slate, and reward. In this framing, the least squares estimator
is the minimum-norm minimizer of the mean squared error. As the reward distribution is unknown, it is replaced with an empirical estimate of given past data. We can perform the substitutions as described in the previous paragraph, noting that Bernoulli design ensures that is invertible (Appendix A), to recover our estimator from (4.2).
4.3.3 Riesz estimator
In their recent work, Harshaw et al.  consider a very general causal inference framework wherein a treatment is drawn from an underlying experimental design distribution over an intervention set . We observe an outcome for each unit , where we assume that belongs to a model space , which is a subspace of containing all measurable and square-integrable (with respect to the distribution over ) functions . Each individual is additionally endowed with a linear effect functional , and the goal is to estimate the average individual treatment effect .
In our setting, the design distribution is the nonuniform Bernoulli design (i.e., , with mutually independent entries). The model space consists of all linear functions over the basis , and each restricts to the linear functions over . Finally, each unit has effect functional , so that .
Under two assumptions – correct model specification (needed in our work as well) and positivity (always satisfied in our setting since Bernoulli design ensures each treatment in occurs with positive probability) – the Riesz representation theorem guarantees the existence of an unbiased estimator for , referred to as a Riesz estimator, of the form
where is a Riesz representor for with the property that
for each . As each model space has dimension , we can identify by solving the linear system that verifies that ( ) holds for each function in some basis for . A canonical choice is the standard basis, giving rise to equations
for each . The choice is a solution to this system (Appendix B) and gives rise to our estimator.
5 Properties of estimator under Bernoulli design
The following theorem summarizes the key properties of our estimator.
Under a potential outcomes model satisfying the neighborhood interference assumption with polynomial degree at most , the estimator defined in (4.4) is unbiased with variance upper bounded by
where each and .
Notably, a sequence of networks with and has variance asymptotically approaching 0. We defer the proof of this theorem to Appendix B. Rather than appealing to the convergence properties of the pseudoinverse estimator to establish unbiasedness, we present an alternate combinatorial proof. To bound the variance, we carefully consider different possible overlapping subsets of individuals to separately bound many covariance terms that make up the overall variance expression.
To understand the variance bound for our estimator, we can compare against the variance of Horvitz–Thompson under a Bernoulli design. In the simple setting of a -regular graph and uniform Bernoulli randomization, ref.  showed that the Horvitz–Thompson estimator has a variance that is lower bounded by . In contrast, the variance of our estimator scales only polynomially in the degree , though exponentially in the polynomial degree , and this is achieved by simply changing the estimator, without requiring any additional clustering structure of the graph and without utilizing complex RDs. This is a significant gain when the polynomial degree is significantly lower than the graph degree . The simplest setting of already expresses all potential outcomes models that satisfy additivity of main effects and additivity of interference, as defined in ref. ; this subsumes all linear models commonly used in the applied literature, which typically require additional homogeneity assumptions.
We also remark that when , both the Horvitz–Thompson estimator and our estimator are unbiased for the same class of functions. When , the Horvitz–Thompson estimator is unbiased for a strictly larger class of functions than our estimator, precisely those characterized by . In this way, if the practitioner can use domain knowledge to argue that the true potential outcomes model belongs to the class of functions parametrized by from equation (3.2) for , then our estimator provides an advantage over the Horvitz–Thompson estimator: both estimators are unbiased, but the variance of our estimator does not have exponential dependence on the graph degree. However, depending on the flexibility that the practitioner desires or needs to express in the potential outcomes, there is no clear winner. For example, if one desires a fully nonparametric potential outcomes model to capture the most general neighborhood interference settings, they might set . Then, both our estimator and the Horvitz–Thompson estimator are unbiased, and both have variance scaling exponentially in . On the other hand, suppose anonymous interference were satisfied for a particular application and the potential outcomes could be modeled as:
Then, using an OLS estimator would give an unbiased estimate with lower variance than our estimator. Thus, our model and estimator can be viewed as simply “adding to the toolbox” that practitioners may use depending on how expressive they need their potential outcomes models to be.
In the special setting of uniform Bernoulli design and , our estimator as stated in (4.5) is the same as an estimator presented in ref. . They consider a fully nonparametric setting under anonymous interference, in which the goal is to estimate the derivative of the total outcomes under changes of the population-wide treatment probability. As the derivative can be estimated by locally linearizing the outcomes function, they derive the special case of the estimator in (4.5) under by taking the derivative of the expected population outcomes under a Bernoulli randomization, constructed by inverse propensity weights. This suggests that under a fully nonparametric setting, our estimator may be used for estimating an appropriately defined local estimand. There may also be opportunities to perform variance reduction on our estimator given knowledge of the graph structure, as proposed in ref. ; however, it is not clear how to extend their solution concept beyond , anonymous interference, and uniform treatment probabilities.
5.1 Minimax lower bound on mean squared error
To understand the optimality of our estimator, we construct a lower bound on the minimax optimal mean squared error rate. In particular, we show that for a setting with a -regular graph and sufficiently small uniform treatment probabilities , the best achievable mean squared error is lower bounded by .
(Minimax lower bound) For any with , and any estimator , there exists a causal network on n nodes with maximum degree d and effect coefficients for which the minimax squared error under uniform treatment probabilities is bounded below by
For a -order interactions model, estimating the TTE requires being able to measure the network effect of the size subsets of a unit’s neighborhood. The probability that a set of size is jointly assigned to treatment is . As a result, the scaling of is somewhat intuitive.
The proof of Theorem 2 (given in Appendix C) uses a generalized variation of Le Cam’s method for fuzzy hypothesis testing , [48, Sec. 2.7.4]. We reduce the problem of estimation to the creation of a hypothesis test to distinguish between two priors. We consider a setting with a -regular graph, uniform treatment probabilities , and two Gaussian priors over the effect coefficients. The priors have the same variances but with shifted means such that for some carefully tuned parameter . The failure of hypothesis tests that rely on is attributable to one of two factors: (1) a significant shift in brought about by the variability of the coefficients, or (2) inaccuracy of the estimator. We bound the probability of the former with a Gaussian tail bound. We bound the latter in terms of the Kullback–Leibler (KL) divergence between the distributions over induced by and . The rest of the argument involves a calculation of this KL divergence and a selection of to ensure a nonvanishing error probability.
While our lower bound clearly indicates that the exponential dependence on as exhibited by is necessary, a notable difference between our lower and upper bounds is the dependence on the graph degree . Recall that the upper bound on the variance of our SNIPE estimator in Section 1 scales roughly as ; however, our lower bound result has no dependence on . We attribute this gap to a weakness in the analyses and believe that it can be tightened with more careful calculation. In the upper bound, the arises as a bound on the sum of binomial coefficients of the form for . While this bound is precise asymptotically for a regime, where is a small constant and is large, it is loose in the most general case where , where we know is sufficient. On the other hand, our lower bound argument considers a setting with a -regular graph, but it only uses the degree of the graph to normalize the parameters on the prior distributions, such that we see no dependence on in the final lower bound.
5.2 Central limit theorem
In this section, we show is asymptotically normal using Stein’s method, a common approach to proving central limit theorems [3,49–52]. In particular, we use Theorem 3.6 from Ross , which we restate below for convenience.
[53, Theorem 3.6]. Let be random variables such that , , , and define . Let the collection have dependency neighborhoods for , and also define . Then, for a standard normal random variable,
where is the Wasserstein distance between and .
Our estimator can be written in the form , where and depend only on the treatment assignments of individuals . We apply Theorem 3.6 from ref.  to the random variables , such that by construction and . This theorem implies that is asymptotically normal as long as the bound in (5.1) converges to 0 as approaches infinity. As the size of the dependency neighborhoods , the moments of , and the scaling of the variance all depend on the network parameters, for simplicity we additionally assume in the conditions of Theorem 3 that , , and are bounded above by a constant, and that the treatment probability is lower bounded as a function of .
For some constant , we have as .
This is a typical assumption in the literature [3,51,54] that rules out degenerate cases, such as all potential outcomes being 0, which would result in the estimator having an unnaturally low variance, scaling smaller than .
(Central limit theorem) Under Assumptions 1–3, and assuming that , , and are all with respect to , , and for all i, the normalized error converges in distribution to a standard normal random variable as , for .
Theorem 3 states that our estimator is asymptotically normal, assuming the boundedness of the magnitude of the potential outcomes, the polynomial degree, and the network degree. For the complete proof, refer to Appendix D. The proof follows from a straightforward application of Theorem 3.6 from ref.  with appropriate bounds for the moments and , the dependency neighborhood size , and the variance . The dependency neighborhood for variable is the set , i.e., the set of individuals that share in-neighbors with individual . This follows from the observation that both and are a function of the treatment variables for shared in-neighbors . The size of the largest dependency neighborhood is thus bounded by . The moments , will be bounded as a function of as defined in (3.3) along with the network degree , treatment probability , and the polynomial degree . The boundedness assumptions are used to argue that these moments do not grow too quickly in , so that the Wasserstein distance in (5.1) converges to 0. The details of these calculations are deferred to Appendix D. We remark that the strict boundedness assumption can be relaxed so that , , and are polylogarithmic with respect to , so long as we tighten the lower bound on the growth of by a corresponding logarithmic factor.
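The dependency-neighborhood structure described above is straightforward to compute directly; a small sketch (the function name and example graph are ours):

```python
def dependency_neighborhoods(in_nbrs):
    """For each unit i, the set of units j whose summand depends on at
    least one common treatment variable, i.e., in_nbrs[i] and in_nbrs[j]
    share a member. Each unit belongs to its own neighborhood."""
    units = list(in_nbrs)
    return {i: {j for j in units if in_nbrs[i] & in_nbrs[j]} for i in units}

# A path 0 - 1 - 2 - 3, with in_nbrs[i] the closed neighborhood of i.
in_nbrs = {0: {0, 1}, 1: {0, 1, 2}, 2: {1, 2, 3}, 3: {2, 3}}
deps = dependency_neighborhoods(in_nbrs)
max_size = max(len(s) for s in deps.values())  # enters the Wasserstein bound
```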
5.3 Variance estimator
The central limit theorem result in Theorem 3 implies that we could construct asymptotically valid confidence intervals via if we knew , where denotes the cdf of a standard normal. Since the practitioner typically does not know , we construct a conservative estimator for it by applying the approach of Aronow and Samii [3,55]. We begin by rewriting the SNIPE estimator as a sum over possible exposures :
Note that both and are deterministic, and the only randomness is expressed in the indicator function . Overloading notation, we let and be expressed only as a function of , where we assume the order is clearly specified. Then, the variance of our estimator is given by
If , an unbiased estimate for the corresponding term in the variance expression is given by
Otherwise, if , by the same technique as given in ref. , using the observation that , the corresponding term in the variance expression can be approximated by the inflated estimate
The expectation of this estimate is at least the true term in the variance, by the Cauchy–Schwarz inequality. Therefore, a conservative variance estimator is given by