Open Access (CC BY 4.0 license). Published by De Gruyter, August 3, 2023.

Exploiting neighborhood interference with low-order interactions under unit randomized design

  • Mayleen Cortez-Rodriguez, Matthew Eichhorn, and Christina Lee Yu


Network interference, where the outcome of an individual is affected by the treatment assignment of those in their social network, is pervasive in real-world settings. However, it poses a challenge to estimating causal effects. We consider the task of estimating the total treatment effect (TTE), or the difference between the average outcomes of the population when everyone is treated versus when no one is, under network interference. Under a Bernoulli randomized design, we provide an unbiased estimator for the TTE when network interference effects are constrained to low-order interactions among neighbors of an individual. We make no assumptions on the graph other than bounded degree, allowing for well-connected networks that may not be easily clustered. We derive a bound on the variance of our estimator and show in simulated experiments that it performs well compared with standard estimators for the TTE. We also derive a minimax lower bound on the mean squared error for estimating the TTE, which suggests that the difficulty of estimation can be characterized by the degree of interactions in the potential outcomes model. We also prove that our estimator is asymptotically normal under boundedness conditions on the network degree and potential outcomes model. Central to our contribution is a new framework for balancing model flexibility and statistical complexity as captured by this low-order interactions structure.

MSC 2010: 62K99; 91D30; 60F05

1 Introduction

Accurately estimating causal effects is relevant in numerous applications, from pharmaceutical companies researching the efficacy of a new medication, to policy makers understanding the impact of social welfare programs, to social media companies evaluating the impact of different recommendation algorithms on user engagement across their platforms. Often, the entity interested in understanding a causal effect will design an experiment where they randomly assign subsets of the population to treatment (e.g., new medication) and to control (e.g., a placebo) and draw conclusions based on the observed outcomes of the participants (e.g., health outcomes). Other times, the entity may need to determine causal effects from observational data accrued in a previous study where they did not have full control over the treatment assignment mechanism.

Our work focuses on estimating the total treatment effect (TTE), or the difference between the average outcomes of the population when everyone is treated versus when no one is treated, given data collected from a randomized experiment. This estimand is sometimes referred to as the global average treatment effect, as in ref. [1]. The TTE is a quantity of interest in scenarios where the decision maker must choose between adopting the proposed treatment or sticking with the status quo. For example, a social media company may develop a new recommendation algorithm for suggesting content to their users, and they want to decide whether to roll out this new algorithm across their platform. As another example, suppose that a pharmaceutical company develops a vaccine for some infectious disease. Then, public health experts and officials must decide whether the vaccine is safe and efficacious enough to warrant its recommendation to the general population. As the side effects of the new treatment are unknown, the goal is to determine the efficacy of the treatment relative to the status quo baseline by running a budgeted randomized trial, where the number of treated individuals in the trial is limited for safety reasons.

The techniques and guarantees for estimating causal effects in classical causal inference heavily rely upon the stable unit treatment value assumption (SUTVA), which posits that the outcome of each individual is independent of the treatment assignment of all other individuals [2]. Unfortunately, SUTVA is violated in all the aforementioned applications because people influence and are impacted by their peers. In the presence of this network interference, an individual’s outcome is affected by the treatment assignment of others in their social network, and SUTVA no longer holds. Distinguishing between the direct effect of treatment on an individual and the network effect of others’ treatment on the individual can be challenging. This has resulted in a growing literature on causal inference in the presence of network (interference) effects, sometimes referred to as spillover or peer influence effects. In this work, we consider the task of estimating the TTE from unit randomized trials under neighborhood interference, in which an individual is affected by the treatments of their direct neighbors but is otherwise unaffected by the treatments of individuals outside their neighborhood. Furthermore, we focus on unit randomized designs (RDs), wherein individuals are independently assigned to either treatment or control in a randomized experiment. This is in contrast to cluster RDs, which have been proposed as an approach to address network interference for randomized experimental design but which may not be feasible in practice due to an incompatibility with existing experimental platforms.

Estimating a causal effect from randomized experiments involves two decisions. First, we must decide what kind of experiment to run, i.e., how we choose the individuals to participate in the study, and how we determine which individuals are assigned to receive the new treatment. In this article, we focus on randomization inference, in which we assume the entire population participates in the study, and the randomness arises from the assignment of treatment or control. Second, after the experiment is conducted, we must decide how to analyze the data and construct an estimate from the measured observations. The literature addressing network interference can largely be categorized as either constructing clever RDs to exploit clustering structure in the network, or designing new estimators that exploit structure in the potential outcomes model. Without making assumptions on either the network or the potential outcomes model, it is impossible to estimate the TTE under arbitrary interference [3–5].

Table 1 summarizes how different assumptions on the potential outcomes model or network structure lead to different types of solutions, with a focus on unbiased estimators and neighborhood interference models where the network is known. The columns correspond to assumptions on the network structure (in order of increasing generality from left to right), while the rows correspond to assumptions on the structure in the potential outcomes model (in order of increasing generality from top to bottom). The body of the table lists proposed solutions for estimating the TTE under the corresponding assumptions on the network/model structure. For example, under a fully general network structure and a linear potential outcomes model, refs [6–11] propose using an ordinary least squares (OLS) estimator with a Bernoulli RD. Since we focus on solutions proposing unbiased estimators, we also list bounds on the variance of the estimators in the table, as a point of comparison. As Table 1 indicates, the literature has focused either on analyzing the Horvitz–Thompson estimator under new RDs that exploit network structure by increasing the correlation between neighbors’ treatment assignments, or on using regression-style estimators with a Bernoulli RD, exploiting strong functional assumptions on the potential outcomes model. Without assuming any structure beyond neighborhood interference, the baseline solution of using the Horvitz–Thompson estimator under the Bernoulli RD is an unbiased estimator whose variance scales exponentially in the degree of the network.

Table 1

Literature Summary. Each row corresponds to an assumption on the structure of the potential outcomes model, and each column corresponds to an assumption on the structure of the network. We list a proposed solution under the corresponding model/network assumptions in the following order: an unbiased estimator for the TTE, the RD, a bound on the variance (if available), and citations to related work. In the variance bounds, $Y_{\max}$ is a bound on the effects on any individual, $d$ is the maximum neighborhood size, $C$ is the number of subcommunities or clusters, $p$ is the treatment probability (assumed to be small), $\kappa$ is the restricted growth parameter, and $\beta$ is the polynomial degree of the potential outcomes model. Our result proposes an estimator for the TTE under a $\beta$-order interactions (equivalently, $\beta$-degree polynomial) structure, a fully general graph, and Bernoulli design. All the solutions in the table rely on full knowledge of the network under neighborhood interference, and we focus on unbiased estimators.

| Model structure | $C$ disconnected subcommunities | $\kappa$-restricted growth | Fully general |
| --- | --- | --- | --- |
| Linear | Directions for future work | Directions for future work | OLS, Bernoulli RD; [6–11] |
| Generalized linear | Directions for future work | Directions for future work | Regression/machine learning, Bernoulli RD; [11] |
| $\beta$-order interactions | Directions for future work | Directions for future work | SNIPE, Bernoulli RD; $O\big(\frac{Y_{\max}^2 d^{2\beta+2}}{n p^{\beta}}\big)$ (this work) |
| Arbitrary neighborhood interference | Horvitz–Thompson, Cluster RD; $O\big(\frac{Y_{\max}^2}{C p}\big)$; [12–15] | Horvitz–Thompson, Randomized Cluster RD; $O\big(\frac{Y_{\max}^2 \kappa^4 d^2}{n p}\big)$; [1,7,16,17] | Horvitz–Thompson, Bernoulli RD; $O\big(\frac{Y_{\max}^2 d^2}{n p^{d}}\big)$; [3] |

In our work, we propose a hierarchy of model classes that interpolates between simple linear models and complex general models, such that a practitioner can choose the strength of the model assumptions they are willing to impose. Naturally, assuming a more limited model simplifies the causal inference task. We characterize the complexity of a potential outcomes model with the order of interactions $\beta$, which also corresponds to the polynomial degree of the potential outcomes function when viewed as a polynomial of the treatment vector. A $\beta$-order interactions model is one in which each neighborhood set of size at most $\beta$ can have a unique additive network effect on the potential outcome. Our model allows for heterogeneity in the influence of different sets of treated neighbors, strictly generalizing beyond the typical parametric model classes used in the literature, which oftentimes assume anonymous interference. We make no assumptions on the graph beyond bounded degree, so the graph may be well connected and not easily clustered. We summarize our contributions and results below:

  1. Assuming a $\beta$-order interactions model, under a nonuniform Bernoulli RD with treatment probabilities $\{p_i\}_{i=1}^{n}$ satisfying $p_i \in [p, 1-p]$, we present the structured neighborhood interference polynomial estimator (SNIPE), a simple unbiased estimator for the TTE.

  2. We derive a bound on the variance of our estimator, which scales polynomially in the degree d of the network and exponentially in the degree β of the potential outcomes model. We also show that our estimator is asymptotically normal.

  3. For a $d$-regular graph and uniform treatment probabilities $p_i = p$ with $p\beta < 0.16$ and $\beta \le d$, we prove that the minimax optimal mean squared error for estimating the TTE is lower bounded by $\Omega\big(\frac{1}{n p^{\beta}}\big)$, implying that the exponential dependence on $\beta$ is necessary.

  4. We provide experimental results to illustrate that using regression models that do not allow for heterogeneity among the network effects can lead to considerable bias when the anonymous interference assumption is violated. The experiments validate that our estimator is unbiased for $\beta$-order interaction models and that it obtains a lower mean squared error than existing alternatives.

To interpret the upper bound on the variance of our proposed estimator for the TTE, we note that our variance scales as $O\big(\frac{\mathrm{poly}(d)}{n p^{\beta}}\big)$, compared to the variance of the Horvitz–Thompson estimator under the Bernoulli RD, which scales as $O\big(\frac{1}{n p^{d}}\big)$; note that $\beta$ is always bounded above by $d$. For smaller values of $\beta$, the $\beta$-order interactions model imposes stronger structural assumptions on the potential outcomes model than required by the Horvitz–Thompson estimator. In turn, our estimator has significantly lower variance, scaling only polynomially in the network degree $d$ and exponentially in $\beta$, as opposed to exponentially in $d$. In addition, the minimax lower bound shows that the exponential dependence on $\beta$ is minimax optimal amongst $\beta$-order interaction models, implying that the order of interactions, or polynomial degree, of the potential outcomes model is a meaningful property that expresses the complexity of estimating the TTE.

2 Related literature

While there has been a flurry of recent activity in addressing the challenges that arise from network interference, every proposed solution concept fundamentally hinges upon making key assumptions on the form of network interference. Without any assumptions, the vector of observed outcomes under a particular treatment vector $z \in \{0,1\}^n$ may have no relationship to the potential outcomes under any other treatment vector. We should naturally expect that if we are willing to impose stronger assumptions, then we may be able to obtain stronger results conditioned on those assumptions being satisfied. As such, the literature ranges between results that impose fewer assumptions on the model and graph, resulting in unbiased estimators with high variance, and results that impose strong assumptions on the model and graph, resulting in simple, unbiased estimators with low variance.

Early works propose framing assumptions via exposure functions, or constant treatment response [3,4]. This assumes that there is some known exposure mapping, $f(z, \theta_i) \in \Delta$, which maps the treatment vector $z$, along with unit-specific traits $\theta_i$, to a discrete space $\Delta$ of exposures, or effective treatments. The potential outcomes function for unit $i$ is then assumed to only be a function of its exposure, or effective treatment, such that if $f(z, \theta_i) = f(z', \theta_i)$, then $Y_i(z) = Y_i(z')$. If $|\Delta|$ is as large as $2^n$, then this assumption places no effective restriction on the potential outcomes function; thus, this assumption is only meaningful when $|\Delta|$ is relatively small. One commonly used exposure mapping expresses the neighborhood interference assumption [17–20], in which each unit $i$ is associated to a neighborhood $N_i \subseteq [n]$, and unit $i$'s potential outcome is only a function of the treatments of units within $N_i$. We could use an exposure mapping to formulate this assumption, where $|\Delta| = 2^d$ for $d$ denoting the maximum neighborhood size, and $f(z, N_i) = f(z', N_i)$ if the treatments assigned to individuals in $N_i$ are the same between $z$ and $z'$. Another commonly used exposure mapping expresses the anonymous interference assumption [14,21–23], in which the potential outcomes are only a function of the treatments through the number of treated neighbors, i.e., $f(z, N_i) = f(z', N_i)$ if the number of treated individuals in $N_i$ under $z$ and $z'$ is the same. While the exposure mapping framework provides a highly generalizable and abstract framework for assumptions, it is fundamentally discrete in nature, and the complexity of estimation is characterized by the number of possible exposures $|\Delta|$, which could still be large. As a result, ref. [4] suggests a collection of additional assumptions that can be imposed on top of anonymous neighborhood interference, including distributional or functional form assumptions, or additivity assumptions as suggested in ref. [18].

The majority of works in the literature (along with our work) assume neighborhood interference with respect to a known graph. Notable exceptions include ref. [24], which considers a highly structured spatial interference setting with network effects decaying with distance, and [25], which provides methods for testing hypotheses about interference effects including higher order spillovers. Without imposing any additional assumptions on the potential outcomes besides neighborhood interference, a natural solution is to use the Horvitz–Thompson estimator to estimate the average outcomes under full neighborhood treatment and full neighborhood control [3]. While the estimator is unbiased, the variance of the estimator scales inversely with the probability that a unit’s full neighborhood is in either treatment or control. Under a Bernoulli($p$) RD, where each individual is treated independently with probability $p$, the variance scales as $O\big(\frac{1}{n p^{d}}\big)$, as indicated in the bottom-right cell of Table 1. The exponential dependence on $d$ renders the estimator impractical for realistic networks.
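This baseline admits a short numerical illustration. The sketch below is hypothetical (the 4-unit cycle, the outcome coefficients, and $p$ are all illustrative, not from the paper): it estimates the TTE by reweighting units whose entire in-neighborhood lands fully in treatment or fully in control, and then checks unbiasedness exactly by enumerating all $2^n$ Bernoulli($p$) assignments.

```python
import itertools

def ht_tte(y, z, neighborhoods, p):
    """Horvitz-Thompson estimate of the TTE under a Bernoulli(p) design.

    A unit contributes only when its whole in-neighborhood is treated
    (weight 1/p^{|N_i|}) or entirely in control (weight 1/(1-p)^{|N_i|}).
    """
    n = len(z)
    total = 0.0
    for i, N in enumerate(neighborhoods):
        if all(z[j] == 1 for j in N):
            total += y[i] / p ** len(N)
        if all(z[j] == 0 for j in N):
            total -= y[i] / (1 - p) ** len(N)
    return total / n

# Toy 4-cycle with self-loops; outcomes include a pairwise interaction term.
neighborhoods = [{0, 1}, {1, 2}, {2, 3}, {3, 0}]

def outcome(i, z):
    a, b = sorted(neighborhoods[i])
    return 1.0 + 0.5 * (z[a] + z[b]) + 0.25 * z[a] * z[b]

n, p = 4, 0.5
tte = sum(outcome(i, (1,) * n) - outcome(i, (0,) * n) for i in range(n)) / n

# Exact expectation over all 2^n assignments: matches the true TTE (unbiased).
expect = 0.0
for bits in itertools.product((0, 1), repeat=n):
    prob = p ** sum(bits) * (1 - p) ** (n - sum(bits))
    y = [outcome(i, bits) for i in range(n)]
    expect += prob * ht_tte(y, bits, neighborhoods, p)
```

On realistic graphs the $1/p^{|N_i|}$ weights blow up: with neighborhoods of size $d$, the variance inherits the $O\big(\frac{1}{np^{d}}\big)$ scaling.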

One approach in response to the aforementioned challenge is to consider cleverly constructing RDs that increase overlap, i.e., the probability that a unit’s entire neighborhood is assigned to treatment or control. The earliest literature in this line of work additionally assumes partial interference, also referred to as the multiple networks setting, in which the population can be partitioned into disjoint groups, and network interference only occurs within groups and not across groups [12,14,15,20,21,26,27]. This assumption makes sense in contexts where interference is only expected between naturally clustered groups of individuals, such as households, cities, or countries. Given knowledge of the groups, we can then randomly assign groups to different treatment saturation levels, e.g., jointly assigning groups to either treatment or control, increasing the correlation of treatments within neighborhoods. Then, a difference in means estimator or a Horvitz–Thompson estimator can be used to estimate the TTE. The asymptotic consistency of these estimators relies on the number of groups going to infinity, with a variance scaling inversely with the fraction of treated clusters, i.e., $O\big(\frac{1}{Cp}\big)$ as indicated in the bottom-left cell of Table 1. In practice, even networks that are clearly clustered into separate groups may not have a sufficiently large number of groups to result in accurate estimates.

For general connected graphs, one can still implement a cluster-based RD on constructed clusters, where the clusters are constructed to minimize the number of edges between clusters [1,7,16,17]. References [1,17] provide guarantees for graphs exhibiting a restricted growth property, which limits the rate at which local neighborhoods of the graph can grow in size, and ref. [1] proves that using randomized cluster RD along with the Horvitz–Thompson estimator achieves a variance of $O\big(\frac{1}{np}\big)$ for restricted growth networks, which is a significant gain from the exponential dependence on $d$ under Bernoulli design. A limitation of these solutions is that the cluster RD can be difficult to implement due to incompatibility with existing experimentation platforms or to ethical concerns. When the existing experimentation platform is already set up for a unit RD, the experimenter may wish to avoid overhauling the platform to work with cluster RD due to time or resource constraints [11]. In addition, refs [28–31] detail some ethical issues that arise in cluster RDs including, but not limited to, problems with informed consent (e.g., it may be infeasible to gain informed consent from every unit in a cluster) and concerns about distributional justice (e.g., with regard to how clusters are selected and assigned to treatment). Furthermore, both refs [28,30] comment that many of the existing, formal research ethics guidelines were designed with unit RD in mind and thus offer little guidance for considerations that arise in cluster RDs. The work we present here focuses on scenarios with unit RDs, where cluster RDs may not be feasible or may be undesirable due to any of the aforementioned concerns.

The alternate approach that has gained traction empirically is to impose additional functional assumptions on the potential outcomes in addition to anonymous neighborhood interference. The most common assumption is that the potential outcomes are linear with respect to a particular statistic of the treatment vector, where the linear function is shared across units [6–11]. Under this assumption, estimating the entire potential outcomes function reduces to linear regression, which one could solve using OLS, as indicated in the top-right cell of Table 1. After recovering the linear model, one could estimate any desired causal estimand. The results rely on correctly choosing the statistic that governs the linearity, or more generally reduce the task to heuristic feature engineering for generalized linear models [11]. One could plausibly extend the function class beyond linear, instead applying machine learning techniques to estimate a more general outcome function. While we do not state a variance bound in Table 1, one would expect that when $p$ is sufficiently large, the variance would scale as $O\big(\frac{1}{n}\big)$, with some dependence on the complexity of the model class; when $p$ is very small, the variance may scale as $O\big(\frac{1}{pn}\big)$, as the regression still requires sufficient variance of covariates represented in the data, i.e., a sufficient spread of the number of neighbors treated. Reference [22] considers nonparametric models yet focuses on estimating a locally linearized estimand. A drawback of these approaches is that they assume anonymous interference, which imposes a symmetry in the potential outcomes such that the causal network effect of treating any of an individual’s neighbors is equal regardless of which neighbor is chosen. In addition, they assume that the function that we are learning is shared across units, or at least units of the same known covariate type, which can be limiting.

While we have primarily focused on summarizing the literature for unbiased estimators, there has also been some work considering how structure in the potential outcomes model can be exploited to reduce bias of standard estimators. References [32–34] use application domain-specific structure in two-sided marketplaces and network routing to compare the bias of the difference in means estimators under different experimental designs. Reference [35] uses structure in ridesharing platforms to propose a new estimator with reduced bias. In addition, there has been some limited empirical work studying bias under model misspecification [16].

Complementary to the literature on randomized experiments, there has been a growing literature considering causal inference over observational studies in the presence of network interference. However, the limitations similarly mirror the aforementioned concerns. A majority of the literature assumes partial interference [15,36–39], i.e., the multiple networks setting, which then enables causal inference of a variety of different estimands. In particular, it is commonly assumed that the different groups in the network are sampled from some underlying distribution, and the statistical guarantees are given with respect to the number of groups going to infinity. Alternatively, other works assume that the potential outcomes only depend on a simple and known statistic of the neighborhood treatment, most commonly the number or fraction of treated [11,40,41]. Either the neighborhood statistic only takes finite values, or assumptions are imposed on the functional form of the potential outcomes, which imply anonymous interference and reduce inference to a regression task or maximum likelihood calculation. Reference [42] considers a general exposure mapping model alongside an inverse propensity weighted estimator, but the estimator has high variance when the exposure mapping is complex.

In contrast to the majority of the mentioned literature, we rely on neither cluster RDs nor anonymous interference assumptions. We instead propose a potential outcomes model with low-order interactions structure, where the degree of interactions $\beta$ characterizes the difficulty of inference, also studied in ref. [43]. For $\beta = 1$, this model is equivalent to heterogeneous additive network effects in ref. [44], which can be derived from the joint assumptions of additivity of main effects and interference in ref. [18]. When $\beta$ is larger than the network degree, this assumption is equivalent to arbitrary neighborhood interference, providing a nested hierarchy of models that encompasses the simple linear model class, the fully general model class, as well as model classes of varying complexity in between. References [43,44], which consider similar models as we do, focus on the setting when the underlying network is fully unknown, and yet there is richer available information either in the form of historical data [44] or multiple measurements over the course of a multi-stage experiment [43], neither of which we assume in this work. Our estimator also has close connections to the pseudoinverse estimator in ref. [45], the Riesz estimator in ref. [46], and the estimator in ref. [22]; we discuss these connections in Section 4.

3 Model

3.1 Causal network

Let $[n] \coloneqq \{1, \ldots, n\}$ denote the underlying population of $n$ individuals. We model the network effects in the population as a directed graph over the individuals with edge set $E \subseteq [n] \times [n]$. An edge $(j,i) \in E$ signifies that the treatment assignment of individual $j$ affects the outcome of individual $i$. As an individual’s own treatment is likely to affect their outcome, we expect self-loops in this graph. In much of the article, we are concerned with neighborhood interference effects, and we use $N_i \coloneqq \{j \in [n] : (j,i) \in E\}$ to denote the in-neighborhood of an individual $i$. Note that this definition allows $i \in N_i$. Many of our variance bounds reference the network degree. We let $d_{\mathrm{in}}$ denote the maximum in-degree of any individual, $d_{\mathrm{out}}$ denote the maximum out-degree, and $d \coloneqq \max\{d_{\mathrm{in}}, d_{\mathrm{out}}\}$.

3.2 Potential outcomes model

To each individual $i$, we associate a treatment assignment $z_i \in \{0,1\}$, where we interpret $z_i = 1$ as an assignment to the treatment group and $z_i = 0$ as an assignment to the control group. We collect all treatment assignments into the vector $z$. We use $Y_i$ to denote the outcome of individual $i$. As our setting assumes network interference, the classical SUTVA assumption is violated. That is, $Y_i$ is not a function only of $z_i$. Rather, $Y_i : \{0,1\}^n \to \mathbb{R}$ may be a function of $z$, the treatment assignments of the entire population. Since each treatment variable $z_i$ is binary, we can indicate an exact treatment assignment as a product of $z_j$ (for treated individuals) and $(1 - z_k)$ (for untreated individuals) factors. As such, we can express a general potential outcome function $Y_i$ as a polynomial in $z$,

$$Y_i(z) = \sum_{T \subseteq [n]} a_{i,T} \prod_{j \in T} z_j \prod_{k \in [n] \setminus T} (1 - z_k),$$

where $a_{i,T}$ is individual $i$’s outcome when their set of treated neighbors is exactly $T$. Via a change of basis, we can equivalently express $Y_i(z)$ as a polynomial in the “treated subsets”:

(3.1) $$Y_i(z) = \sum_{S \subseteq [n]} c_{i,S} \prod_{j \in S} z_j,$$

where $c_{i,S}$ represents the additive effect on individual $i$’s outcome that they receive when the entirety of subset $S$ (perhaps among other individuals) is treated. Note that $c_{i,\emptyset}$ represents the baseline effect, the component of $i$’s outcome that is independent of the treatment assignments.

So far, the potential outcomes model described in (3.1) is completely general. However, it is parameterized by $2^n$ coefficients $\{c_{i,S}\}$, which makes it untenable in most settings. To combat this, we impose some structural assumptions on these coefficients. First, we observe that the populations of interest can be quite large (e.g., the population of an entire country), and their influence networks may have high diameter. Throughout most of the article, we assume that individuals’ outcomes are influenced only by their immediate in-neighborhood.
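The change of basis between the two representations above can be made concrete. In the sketch below (the universe size and coefficient values are illustrative), the monomial coefficients are recovered from the exact-treated-set coefficients by the standard inclusion–exclusion relation $c_{i,S} = \sum_{T \subseteq S} (-1)^{|S \setminus T|} a_{i,T}$, and the two polynomials are checked to agree on every assignment:

```python
from itertools import combinations, product

def subsets(s):
    """All subsets of s, as frozensets."""
    s = sorted(s)
    return [frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)]

def y_exact(a, universe, z):
    """Y_i(z) in the a_{i,T} basis: pick out the exactly-treated set T."""
    return sum(coef for T, coef in a.items()
               if all(z[j] == 1 for j in T) and all(z[k] == 0 for k in universe - T))

def to_monomial(a, universe):
    """Change of basis (inclusion-exclusion): c_S = sum_{T <= S} (-1)^{|S\\T|} a_T."""
    return {S: sum((-1) ** (len(S) - len(T)) * a[T] for T in subsets(S))
            for S in subsets(universe)}

def y_monomial(c, z):
    """Y_i(z) in the c_{i,S} basis: sum of c_S over fully treated subsets S."""
    return sum(coef for S, coef in c.items() if all(z[j] == 1 for j in S))

universe = frozenset({0, 1, 2})
a = {T: 1.0 + 0.1 * len(T) for T in subsets(universe)}  # illustrative outcomes
c = to_monomial(a, universe)

# The two representations agree on all 2^3 treatment vectors.
for z in product((0, 1), repeat=3):
    assert abs(y_exact(a, universe, z) - y_monomial(c, z)) < 1e-9
```

The baseline $c_{i,\emptyset} = a_{i,\emptyset}$ and the singleton coefficients $c_{i,\{k\}} = a_{i,\{k\}} - a_{i,\emptyset}$ drop out as special cases of the inversion.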

Assumption 1

(Neighborhood interference) $Y_i(z)$ only depends on the treatment of individuals in $N_i$. Equivalently, $Y_i(z) = Y_i(z')$ for any $z$ and $z'$ such that $z_j = z'_j$ for all $j \in N_i$. In our notation, $c_{i,S} = 0$ for any $S \not\subseteq N_i$.

Next, we note that the degree of each $Y_i(z)$ can (under the neighborhood interference assumption) be as large as $d_{\mathrm{in}}$. In such a model, one’s outcome may be differently influenced by a treated coalition of any size in their neighborhood. Contrast this with a simpler linear potential outcomes model, wherein an individual’s outcome receives only an independent additive effect from each of their treated neighbors. This illustrates that the degree of the polynomial may serve as a proxy for its complexity. In this work, we consider the scenario where the polynomial degree may be significantly smaller than $d_{\mathrm{in}}$.

Assumption 2

(Low polynomial degree) Each potential outcome function $Y_i(z)$ has degree at most $\beta$. In our notation, $c_{i,S} = 0$ whenever $|S| > \beta$. Along with Assumption 1, it follows that the potential outcomes function $Y_i(z)$ from (3.1) can be expressed in the following form,

(3.2) $$Y_i(z) = \sum_{S \in \mathcal{S}_i^{\beta}} c_{i,S} \prod_{j \in S} z_j,$$

where we define $\mathcal{S}_i^{\beta} \coloneqq \{S \subseteq N_i : |S| \le \beta\}$.

We remark that while we use the formal mathematical term of “low polynomial degree,” since this describes a function over a vector of binary variables, a low polynomial degree constraint is equivalent to a constraint on the order of interactions amongst the treatments of neighbors. In the simplest setting when $\beta = 1$, this is equivalent to a model in which the network effects are additive across treated neighbors, strictly generalizing beyond all linear models that have been widely used in applied settings.

We use the notation in (3.2) to express the potential outcomes model for the remainder of the article. If $\beta \ge d_{\mathrm{in}}$, note that $\beta$ could be completely removed from the definition of $Y_i$ in equation (3.2), reducing to the arbitrary neighborhood interference setting. However, we turn our attention to settings where $\beta$ might be much smaller than the degree of the graph ($\beta \ll d_{\mathrm{in}}$), and we can assume that only low-order interactions within neighborhoods have an effect on an individual. As noted earlier, taking $\beta = 1$ corresponds to the heterogeneous linear outcomes model in ref. [44]. We include further examples to help in understanding this low polynomial degree assumption in Section 3.2.1.
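The savings from the low-order constraint can be sketched numerically (the neighborhood, $\beta$, and coefficient values below are illustrative): with $\beta = 2$ on a size-4 neighborhood, only $\binom{4}{0} + \binom{4}{1} + \binom{4}{2} = 11$ coefficients can be nonzero, rather than $2^4 = 16$.

```python
from itertools import combinations

def low_order_subsets(neighborhood, beta):
    """S_i^beta: all subsets of N_i of size at most beta."""
    N = sorted(neighborhood)
    return [frozenset(S) for r in range(beta + 1) for S in combinations(N, r)]

def outcome(c_i, z):
    """Y_i(z) = sum over S in S_i^beta of c_{i,S} * prod_{j in S} z_j."""
    return sum(coef for S, coef in c_i.items() if all(z[j] for j in S))

# Illustrative beta = 2 model: baseline 1.0, singletons 0.2, pairs 0.5.
N_i = {0, 1, 2, 3}
coefs_by_size = {0: 1.0, 1: 0.2, 2: 0.5}
c_i = {S: coefs_by_size[len(S)] for S in low_order_subsets(N_i, beta=2)}

# Unit i's contribution to the TTE: everyone treated vs. no one treated.
tte_i = outcome(c_i, (1, 1, 1, 1)) - outcome(c_i, (0, 0, 0, 0))
```

A fully general neighborhood interference model would instead need a coefficient for every subset of $N_i$.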

Many of our variance bounds utilize an upper bound on the treatment effects for each individual. We define Y max such that

(3.3) $$Y_{\max} \ge \max_{i \in [n]} \sum_{S \in \mathcal{S}_i^{\beta}} |c_{i,S}|.$$

It follows that $|Y_i(z)| \le Y_{\max}$ for any treatment vector $z$.

Remark 1

We emphasize that the model in (3.2) captures fully general neighborhood interference when $\beta = d$. Even when $\beta < d$, the number of parameters in the model grows with $n$. As a result, the total number of observations is always smaller than the number of parameters in the model, so using simple regression-style estimators to identify the model is impossible.

Remark 2

We can consider the order of interactions or polynomial degree $\beta$ as a way to measure the complexity of the model class. Our subsequent results suggest that $\beta$ is meaningful as it also captures a notion of statistical complexity or difficulty of estimation. This is evidenced by the variance upper bound and minimax lower bound results in Section 5: the smaller $\beta$ is relative to the degree of the graph, the smaller the variance incurred; the larger $\beta$ is relative to the graph degree, the higher the variance incurred. In some sense, the “lower” the degree of the model, i.e., the more structure imposed, the “easier” it is to estimate the TTE. On the flip side, the “higher” the degree of the model, i.e., the closer it is to being fully general, the “harder” it is to estimate.

Remark 3

Assumption 2 implies that the model exhibits a particular type of sparsity in the coefficients with respect to the monomial basis, in which the coefficients corresponding to sets larger than size $\beta$ are zero. In our setting, when the treatments are budgeted, i.e., $p$ is small, these coefficients precisely correspond to effects that are observable in a Bernoulli RD, i.e., the probability of observing or measuring the coefficient corresponding to set $S$ is $p^{|S|}$. As such, there is a direct connection between this hierarchy of models as parameterized by $\beta$ and the ability to measure and estimate the TTE, which is formalized by our subsequent analysis.

3.2.1 Examples of low-degree interaction models

We provide a few examples to illustrate when the polynomial degree of the potential outcomes model may naturally be smaller than the total neighborhood size (i.e., β < d ).

Example 1

Consider a potential outcomes model that exhibits the joint assumptions of additivity of main effects and interference effects as defined in ref. [18]. This imposes that the potential outcomes satisfy

$Y_i(z) = Y_i(\mathbf{0}) + (Y_i(z_i e_i) - Y_i(\mathbf{0})) + \sum_{k \in [n] \setminus \{i\}} (Y_i(z_k e_k) - Y_i(\mathbf{0})),$

where $\mathbf{0} \in \mathbb{R}^n$ is the all-zeros vector and $e_k \in \{0,1\}^n$ is the $k$th standard basis vector. As discussed in ref. [44], this assumption implies a low-degree interaction model with $\beta = 1$, which they refer to as heterogeneous additive network effects.

Example 2

Consider a hypothesized setting where each individual’s neighborhood can be divided into smaller sub-communities. For example, one’s close contacts may include their immediate family, their close friends, their work colleagues, etc. Within each of these sub-communities, there may be nontrivial (higher order) interactions between the treatments of its members. However, it is reasonable to assume that the cumulative effects of each sub-community have an additive effect on the individual’s outcome. That is, there are no nontrivial interactions resulting from the treatment of individuals across different sub-communities. In this case, a natural choice for β is the size of the largest sub-community, which could be significantly smaller than the size of the largest neighborhood.

Example 3

Suppose a social networking platform is testing a new “hangout room” feature where groups of up to five people can interact in a novel environment on the platform. One can posit that a natural choice for β in this setting is 5, as the change in any individual’s usage on the platform can be attributed to how they utilize this new feature, which in turn is a function of it being introduced to various subsets of up to five users in that individual’s neighborhood.

Example 4

Consider a setting where an individual’s outcome is a low-degree polynomial in some auxiliary quantity, which is itself linear in the treatment assignments of their neighborhood. For example, we might have

$Y_i(z) = c_{i,\varnothing} + \sum_{j \in \mathcal{N}_i} c_{ij} z_j + \Big(\sum_{j \in \mathcal{N}_i} c_{ij} z_j\Big)^2 + \cdots + \Big(\sum_{j \in \mathcal{N}_i} c_{ij} z_j\Big)^{\beta}.$

A similar setting is explored in our simulated experiments in Section 6.

Example 5

Consider an example in which network effects arise only from pairwise edge interactions and triangular interactions, i.e., individual $i$’s outcome consists of a sum of its baseline outcome, its own direct effect $c_{i,\{i\}}$, pairwise edge effects $c_{i,\{j\}}$ for $j \in \mathcal{N}_i$, and triangle effects $c_{i,\{j,j'\}}$ for $j, j' \in \mathcal{N}_i$ such that there is also an edge between $j$ and $j'$, indicating that the three individuals are mutual connections. For such a model, $\beta$ would be 2 due to the triangular interactions.

Remark 4

We emphasize that any potential outcomes model that takes a binary treatment vector $z$ as its input can be written as a polynomial in $z$, taking the form in equation (3.2) for general $\beta$. However, the low-degree assumption, i.e., that $\beta \ll d$, will generally exclude high-degree models. For example, both threshold models and saturation models generally require the degree of $Y_i(z)$ to be $|\mathcal{N}_i|$. In a threshold potential outcomes model, an individual experiences network effects once the number of their treated neighbors crosses a particular threshold [7]. An example of this type of model is given by

$Y_i(z) = \mathbb{I}\Big(\sum_{j \in \mathcal{N}_i} z_j \ge \theta\Big) \sum_{S \subseteq \mathcal{N}_i} c_{i,S} \prod_{j \in S} z_j,$

where $0 \le \theta \le |\mathcal{N}_i|$. Saturation models allow network or peer effects to increase up to a particular saturation level, such as

$Y_i(z) = \min\Big(\theta,\ \sum_{S \subseteq \mathcal{N}_i} c_{i,S} \prod_{j \in S} z_j\Big),$

where $\theta$ is some maximum saturation threshold. In this model, an individual receives additive effects from each subset of treated neighbors until a certain threshold effect is reached, after which the network effects have “saturated” and the treatment of additional neighbors contributes no additional effect.

3.3 Causal estimand and RD

Throughout most of the article, we concern ourselves with estimating the TTE. This quantifies the difference between the average of individuals’ outcomes when the entire population is treated and the average when the entire population is untreated:

(3.4) $\mathrm{TTE} \coloneqq \frac{1}{n} \sum_{i=1}^n (Y_i(\mathbf{1}) - Y_i(\mathbf{0})),$

where $\mathbf{1}$ represents the all-ones vector and $\mathbf{0}$ represents the zero vector. Plugging in our parameterization from equation (3.2), we may re-express the TTE as follows:

(3.5) $\mathrm{TTE} = \frac{1}{n} \sum_{i=1}^n \sum_{S \in \mathcal{S}_i^{\beta},\, S \neq \varnothing} c_{i,S}.$
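To make the equivalence of (3.4) and (3.5) concrete, the following minimal sketch (with hypothetical coefficients and neighborhoods) evaluates both expressions and confirms they agree; every monomial $\prod_{j \in S} z_j$ equals 1 at $z = \mathbf{1}$ and equals $\mathbb{I}(S = \varnothing)$ at $z = \mathbf{0}$, so only the nonempty-set coefficients survive the difference.

```python
# Hypothetical 4-node example: coeffs[i] maps a subset S of i's in-neighborhood
# (a frozenset of node ids, with frozenset() the baseline) to c_{i,S}.
coeffs = [
    {frozenset(): 1.0, frozenset({1}): 0.5, frozenset({1, 2}): -0.2},
    {frozenset(): 0.3, frozenset({0}): 0.7},
    {frozenset(): -1.0, frozenset({3}): 0.4, frozenset({0, 3}): 0.1},
    {frozenset(): 2.0},
]
n = len(coeffs)

def Y(i, z):
    """Potential outcome of unit i under (3.2): sum of c_{i,S} * prod_{j in S} z_j."""
    return sum(c * all(z[j] for j in S) for S, c in coeffs[i].items())

ones, zeros = [1] * n, [0] * n
tte_def = sum(Y(i, ones) - Y(i, zeros) for i in range(n)) / n       # definition (3.4)
tte_coef = sum(c for ci in coeffs for S, c in ci.items() if S) / n  # identity (3.5)
```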

Since exposing individuals to treatment can have a deleterious and irreversible effect on their outcomes, we wish to estimate the TTE after treating only a small random subset of the population. We focus on a nonuniform Bernoulli design, wherein each individual $i$ is independently assigned treatment with probability $p_i \in [p, 1-p]$ for some $p > 0$; that is, each $z_i \sim \mathrm{Bernoulli}(p_i)$. Such a RD is straightforward both to implement and to understand. Furthermore, many existing experimentation platforms are already designed for Bernoulli randomization, making it easy to collect new data or to re-analyze existing data and adjust for network interference, rather than requiring a complete overhaul of the existing experimentation platform to allow for more complex randomization schemes.

4 Estimator

In this section, we introduce the estimator that will serve as our central focus through the rest of the article: the SNIPE estimator. While we restrict our attention to nonuniform Bernoulli design, the techniques used to derive this estimator can be generalized to a wide variety of causal estimands and experimental designs. In fact, our estimator is connected to the Horvitz–Thompson estimator and turns out to be a special case of both the pseudoinverse estimator first introduced by Swaminathan et al. [45] and the more recent Riesz estimator of Harshaw et al. [46]. We discuss these connections in Section 4.3.

To provide intuition, we first derive the estimator in the $\beta = 1$ case with the linear heterogeneous potential outcomes model [44] via a connection to OLS. Then, we show how it can be extended to the more general polynomial setting. Our main result (Section 5) establishes both the unbiasedness of this family of estimators and a bound on their variance under Bernoulli design.

Recalling that $\mathcal{S}_i^{\beta} \coloneqq \{S \subseteq \mathcal{N}_i : |S| \le \beta\}$, the vector $c_i$ collects the parameters $\{c_{i,S}\}_{S \in \mathcal{S}_i^{\beta}}$ in some canonical ordering. We will assume throughout that $\varnothing$ is always first in this ordering and otherwise index the entries of these vectors by their corresponding set $S$. As an example, when $\beta = 1$ and $\mathcal{N}_i = \{j_1, \ldots, j_{d_i}\}$, we may have $c_i = [c_{i,\varnothing}\ \ c_{i,\{j_1\}}\ \cdots\ c_{i,\{j_{d_i}\}}]^\top$, where $d_i$ is the in-degree of unit $i$. In a similar manner, the treated subsets vector $\tilde{z}_i$ collects the indicators $\{\prod_{j \in S} z_j\}_{S \in \mathcal{S}_i^{\beta}}$ in the same ordering. In our $\beta = 1$ example, $\tilde{z}_i = [1\ \ z_{j_1}\ \cdots\ z_{j_{d_i}}]^\top$. Using this notation, we may express

(4.1) $Y_i(z) = \langle c_i, \tilde{z}_i \rangle, \qquad \mathrm{TTE} = \frac{1}{n} \sum_{i=1}^n \langle \mathbf{1}_{|\mathcal{S}_i^{\beta}|} - e_1,\ c_i \rangle,$

where the first inner product argument in the second equation is the $|\mathcal{S}_i^{\beta}|$-length vector with first entry 0 and remaining entries 1.

4.1 Building intuition in the linear setting ( β = 1 )

To motivate our estimator, we consider the linear heterogeneous potential outcomes model ($\beta = 1$) under nonuniform Bernoulli RD. Using the TTE expression from (4.1), we can recast the problem of estimating the TTE as the problem of estimating the parameter vectors $c_i$: by linearity of expectation, unbiased estimators of the $c_i$ give rise to an unbiased estimator of $\mathrm{TTE}$.

As a thought experiment toward estimating $c_i$, imagine that we can perform $M$ independent replications of our randomized experiment. In each replication $m \in [M]$, we observe the treated subsets vector $(\tilde{z}_i)_m$, obtained from our Bernoulli RD, and the outcome $(Y_i)_m$. We visualize this as the linear system

$\underbrace{\begin{bmatrix} (Y_i)_1 \\ (Y_i)_2 \\ \vdots \\ (Y_i)_M \end{bmatrix}}_{Y_i \in \mathbb{R}^M} = \underbrace{\begin{bmatrix} 1 & z_{j_1}^1 & \cdots & z_{j_{d_i}}^1 \\ 1 & z_{j_1}^2 & \cdots & z_{j_{d_i}}^2 \\ \vdots & \vdots & & \vdots \\ 1 & z_{j_1}^M & \cdots & z_{j_{d_i}}^M \end{bmatrix}}_{X_i \in \mathbb{R}^{M \times (d_i + 1)}} \underbrace{\begin{bmatrix} c_{i,\varnothing} \\ c_{i,j_1} \\ \vdots \\ c_{i,j_{d_i}} \end{bmatrix}}_{c_i \in \mathbb{R}^{d_i + 1}}.$

To minimize the sum of squared residuals, we use OLS, computing

$\hat{c}_i = (X_i^\top X_i)^{-1} X_i^\top Y_i.$

We consider the limit of this estimate as $M \to \infty$. The consistency of the least squares estimator ensures that $\hat{c}_i \overset{P}{\to} c_i$. By the law of large numbers, we have

$X_i^\top X_i = \sum_{m=1}^M \begin{bmatrix} 1 & z_{j_1}^m & z_{j_2}^m & \cdots & z_{j_{d_i}}^m \\ z_{j_1}^m & (z_{j_1}^m)^2 & z_{j_1}^m z_{j_2}^m & \cdots & z_{j_1}^m z_{j_{d_i}}^m \\ z_{j_2}^m & z_{j_2}^m z_{j_1}^m & (z_{j_2}^m)^2 & \cdots & z_{j_2}^m z_{j_{d_i}}^m \\ \vdots & \vdots & \vdots & & \vdots \\ z_{j_{d_i}}^m & z_{j_{d_i}}^m z_{j_1}^m & z_{j_{d_i}}^m z_{j_2}^m & \cdots & (z_{j_{d_i}}^m)^2 \end{bmatrix} \xrightarrow[M \to \infty]{\text{a.s.}} M\, \mathbb{E}[\tilde{z}_i \tilde{z}_i^\top].$

Similarly, the vector $X_i^\top Y_i \xrightarrow[M \to \infty]{\text{a.s.}} M\, \mathbb{E}[\tilde{z}_i Y_i(z)]$. Assuming the invertibility of the matrix $\mathbb{E}[\tilde{z}_i \tilde{z}_i^\top]$ (we show this explicitly for Bernoulli design below), we obtain in the limit

$c_i = (M\, \mathbb{E}[\tilde{z}_i \tilde{z}_i^\top])^{-1} M\, \mathbb{E}[\tilde{z}_i Y_i(z)] = \mathbb{E}[\tilde{z}_i \tilde{z}_i^\top]^{-1} \mathbb{E}[\tilde{z}_i Y_i(z)].$

Now, we return from our thought experiment to the actual experimental setting, wherein a single instantiation $(\tilde{z}_i, Y_i)$ is realized. We consider the estimator

(4.2) $\hat{c}_i \coloneqq \mathbb{E}[\tilde{z}_i \tilde{z}_i^\top]^{-1} \tilde{z}_i Y_i.$

Note that the matrix $\mathbb{E}[\tilde{z}_i \tilde{z}_i^\top]^{-1}$ depends only on our experimental design and thus can be utilized by our estimator. The unbiasedness of this estimator follows from the aforementioned computation; by applying linearity,

$\mathbb{E}[\hat{c}_i] = \mathbb{E}[\tilde{z}_i \tilde{z}_i^\top]^{-1} \mathbb{E}[\tilde{z}_i Y_i] = c_i.$

By applying linearity once more, we may obtain the unbiased estimator for the TTE,

(4.3) $\widehat{\mathrm{TTE}} = \frac{1}{n} \sum_{i=1}^n \langle \mathbf{1}_{|\mathcal{S}_i^{\beta}|} - e_1,\ \hat{c}_i \rangle = \frac{1}{n} \sum_{i=1}^n Y_i \big\langle \mathbb{E}[\tilde{z}_i \tilde{z}_i^\top]^{-1} (\mathbf{1}_{|\mathcal{S}_i^{\beta}|} - e_1),\ \tilde{z}_i \big\rangle.$

For the specific setting of nonuniform Bernoulli design with $\beta = 1$, one can use the facts that $\mathbb{E}[z_{j_k}] = \mathbb{E}[z_{j_k}^2] = p_{j_k}$ and $\mathbb{E}[z_{j_k} z_{j_{k'}}] = p_{j_k} p_{j_{k'}}$ for $k \neq k'$ to compute

$\mathbb{E}[\tilde{z}_i \tilde{z}_i^\top] = \begin{bmatrix} 1 & p_{j_1} & p_{j_2} & \cdots & p_{j_{d_i}} \\ p_{j_1} & p_{j_1} & p_{j_1} p_{j_2} & \cdots & p_{j_1} p_{j_{d_i}} \\ p_{j_2} & p_{j_2} p_{j_1} & p_{j_2} & \cdots & p_{j_2} p_{j_{d_i}} \\ \vdots & \vdots & \vdots & & \vdots \\ p_{j_{d_i}} & p_{j_{d_i}} p_{j_1} & p_{j_{d_i}} p_{j_2} & \cdots & p_{j_{d_i}} \end{bmatrix},$

which we can invert to obtain

$\mathbb{E}[\tilde{z}_i \tilde{z}_i^\top]^{-1} = \begin{bmatrix} 1 + \sum_k \frac{p_{j_k}}{1 - p_{j_k}} & -\frac{1}{1 - p_{j_1}} & \cdots & -\frac{1}{1 - p_{j_{d_i}}} \\ -\frac{1}{1 - p_{j_1}} & \frac{1}{p_{j_1}(1 - p_{j_1})} & & 0 \\ \vdots & & \ddots & \\ -\frac{1}{1 - p_{j_{d_i}}} & 0 & & \frac{1}{p_{j_{d_i}}(1 - p_{j_{d_i}})} \end{bmatrix},$

where all off-diagonal entries outside the first row and column are 0.
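The closed-form inverse above can be spot-checked numerically. The sketch below (assuming NumPy, with hypothetical probabilities $p_{j_k}$) builds $\mathbb{E}[\tilde{z}_i \tilde{z}_i^\top]$ for $\beta = 1$ and constructs the stated inverse entrywise:

```python
import numpy as np

p = np.array([0.1, 0.25, 0.4])      # hypothetical treatment probabilities p_{j_k}
d = len(p)

# E[z~_i z~_i^T] for beta = 1: E[z_j] = E[z_j^2] = p_j, E[z_j z_k] = p_j p_k (j != k).
M = np.empty((d + 1, d + 1))
M[0, 0] = 1.0
M[0, 1:] = M[1:, 0] = p
M[1:, 1:] = np.outer(p, p)
np.fill_diagonal(M[1:, 1:], p)

# Closed-form inverse from the text; note the zero off-diagonal block.
Minv = np.zeros((d + 1, d + 1))
Minv[0, 0] = 1.0 + np.sum(p / (1.0 - p))
Minv[0, 1:] = Minv[1:, 0] = -1.0 / (1.0 - p)
np.fill_diagonal(Minv[1:, 1:], 1.0 / (p * (1.0 - p)))
```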

We calculate

$\mathbb{E}[\tilde{z}_i \tilde{z}_i^\top]^{-1} (\mathbf{1}_{|\mathcal{S}_i^{\beta}|} - e_1) = \begin{bmatrix} -\sum_{j \in \mathcal{N}_i} \frac{1}{1 - p_j} \\ \frac{1}{p_{j_1}(1 - p_{j_1})} \\ \vdots \\ \frac{1}{p_{j_{d_i}}(1 - p_{j_{d_i}})} \end{bmatrix}.$

By plugging into equation (4.3), we obtain the explicit form for our estimator:

$\widehat{\mathrm{TTE}}_{\mathrm{SNIPE}(1)} \coloneqq \frac{1}{n} \sum_{i=1}^n Y_i \sum_{j \in \mathcal{N}_i} \frac{z_j - p_j}{p_j(1 - p_j)},$

where SNIPE ( 1 ) refers to the fact that this is the SNIPE estimator with β = 1 .
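The sketch below (hypothetical graph, coefficients, and probabilities) implements $\widehat{\mathrm{TTE}}_{\mathrm{SNIPE}(1)}$ and checks its unbiasedness exactly by enumerating all $2^n$ treatment vectors of a toy instance, rather than by Monte Carlo:

```python
import math
from itertools import product

# Toy 3-node graph with hypothetical in-neighborhoods (each includes the unit itself),
# nonuniform Bernoulli probabilities, and beta = 1 (linear) coefficients.
nbrs = {0: [0, 1], 1: [0, 1, 2], 2: [1, 2]}
p = {0: 0.2, 1: 0.5, 2: 0.3}
c0 = {0: 1.0, 1: 0.5, 2: -0.3}
c1 = {(0, 0): 0.4, (0, 1): 0.1, (1, 0): 0.2, (1, 1): 0.6, (1, 2): -0.1,
      (2, 1): 0.3, (2, 2): 0.5}
n = len(nbrs)

def Y(i, z):
    """Linear (beta = 1) potential outcome: c_{i,0} + sum_j c_{i,j} z_j."""
    return c0[i] + sum(c1[(i, j)] * z[j] for j in nbrs[i])

def snipe1(z):
    """SNIPE(1): average of Y_i times sum_{j in N_i} (z_j - p_j)/(p_j(1 - p_j))."""
    return sum(Y(i, z) * sum((z[j] - p[j]) / (p[j] * (1 - p[j])) for j in nbrs[i])
               for i in range(n)) / n

tte = sum(Y(i, [1] * n) - Y(i, [0] * n) for i in range(n)) / n
# Exact design expectation: enumerate all 2^n treatment vectors with their probabilities.
est_mean = sum(snipe1(z) * math.prod(p[j] if z[j] else 1 - p[j] for j in range(n))
               for z in product([0, 1], repeat=n))
```

The exact agreement of `est_mean` and `tte` reflects the unbiasedness established above; it holds for any choice of the (hypothetical) linear coefficients.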

4.2 SNIPE in the general polynomial setting

For larger $\beta$, we can use the same least squares approach to obtain an unbiased estimate of the coefficient vector $c_i$, and then plug this into equation (4.1) to obtain $\widehat{\mathrm{TTE}}$. The outcomes $Y_i(z)$ remain linear functions of $\tilde{z}_i$ – albeit for a significantly longer $\tilde{z}_i$ containing $|\mathcal{S}_i^{\beta}| = \sum_{k=0}^{\min(\beta, |\mathcal{N}_i|)} \binom{|\mathcal{N}_i|}{k}$ entries – so the same convergence results apply. Suppose we index the entries of $\mathbb{E}[\tilde{z}_i \tilde{z}_i^\top]$ by the sets $S$ and $T$ corresponding to the matrix row and column. By the independence and marginal treatment probabilities of nonuniform Bernoulli design, we see that

$(\mathbb{E}[\tilde{z}_i \tilde{z}_i^\top])_{S,T} = \prod_{j \in S \cup T} p_j.$

Working with this matrix is significantly more tedious, so we relegate the details to Appendix A. There, we show that E [ z ˜ i z ˜ i ] is invertible by giving an explicit formula for the entries of its inverse. Then, we plug these entries into equation (4.3) to derive the following explicit formula for the estimator:

(4.4) $\widehat{\mathrm{TTE}}_{\mathrm{SNIPE}(\beta)} \coloneqq \frac{1}{n} \sum_{i=1}^n Y_i \sum_{S \in \mathcal{S}_i^{\beta}} g(S) \prod_{j \in S} \frac{z_j - p_j}{p_j(1 - p_j)},$

where we define the coefficient function $g : 2^{[n]} \to \mathbb{R}$ such that

$g(S) = \prod_{s \in S}(1 - p_s) - \prod_{s \in S}(-p_s)$

for each nonempty $S \subseteq [n]$, and $g(\varnothing) = 0$. When it is clear from context that $\widehat{\mathrm{TTE}}$ refers to the SNIPE estimator, we suppress the subscript for cleaner notation.
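A direct transcription of (4.4) enumerates the subsets $S \in \mathcal{S}_i^{\beta}$; the sketch below (hypothetical toy instance) does so and verifies exact unbiasedness for a $\beta = 2$ model by enumerating the design:

```python
import math
from itertools import combinations, product

def g(S, p):
    """g(S) = prod_{s in S}(1 - p_s) - prod_{s in S}(-p_s); g(empty set) = 0."""
    if not S:
        return 0.0
    return math.prod(1 - p[s] for s in S) - math.prod(-p[s] for s in S)

def snipe_weight(i, z, nbrs, p, beta):
    """w_i(z) = sum over S in S_i^beta of g(S) * prod_{j in S}(z_j - p_j)/(p_j(1-p_j))."""
    return sum(g(S, p) * math.prod((z[j] - p[j]) / (p[j] * (1 - p[j])) for j in S)
               for k in range(1, beta + 1) for S in combinations(nbrs[i], k))

# Hypothetical toy instance: in-neighborhoods, probabilities, beta = 2 coefficients.
nbrs = {0: (0, 1, 2), 1: (0, 1), 2: (1, 2)}
p = {0: 0.3, 1: 0.5, 2: 0.2}
coeffs = {0: {(): 1.0, (0,): 0.4, (1, 2): 0.3},
          1: {(): 0.0, (0, 1): -0.2},
          2: {(1,): 0.6, (2,): 0.1}}
n = 3

def Y(i, z):
    return sum(c * all(z[j] for j in S) for S, c in coeffs[i].items())

tte = sum(Y(i, [1] * n) - Y(i, [0] * n) for i in range(n)) / n

def estimate(z):
    return sum(Y(i, z) * snipe_weight(i, z, nbrs, p, 2) for i in range(n)) / n

# Exact expectation over the Bernoulli design: unbiasedness holds exactly.
mean = sum(estimate(z) * math.prod(p[j] if z[j] else 1 - p[j] for j in range(n))
           for z in product([0, 1], repeat=n))
```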

Remark 5

We pause here to again emphasize that our technique gives us an unbiased estimate of the coefficient vector c i for each individual i . From here, we leverage the linearity of expectation to obtain an unbiased estimator for the TTE , which is a linear function of these c i coefficients. This same strategy can be applied to develop estimators for any causal effect that is linear in the c i , S coefficients. We highlight some other potential estimands and the explicit form of their estimators in Appendix E. The techniques that we discuss in Section 5 can be used to establish further properties of these estimators.

This estimator can be evaluated in $O(n\, d_{\mathrm{in}}^{\beta})$ time and only utilizes structural information about the graph (not the influence coefficients $c_{i,S}$). Structurally, the estimator takes the form of a weighted average of the outcomes $Y_i$ of each individual $i$, where the weights themselves are functions of the treatment assignments of all members $j$ of the in-neighborhood $\mathcal{N}_i$. To make use of the low-order interference assumption, the estimator separately scales the effect of the treatment of each sufficiently small subset of $\mathcal{N}_i$ using the scaling function $g(S)$. The definition of $g(S)$ ensures the unbiasedness of the estimator.

In the special case of a uniform treatment probability $p_i = p$ across all nodes, we can simplify this estimator to show that it is a function only of the number of treated individuals in $i$’s neighborhood, not their identities. Let $T = \{j : z_j = 1\}$ denote the set of treated units. We can rewrite the estimator as follows:

(4.5) $\widehat{\mathrm{TTE}} = \frac{1}{n} \sum_{i=1}^n Y_i \sum_{S \in \mathcal{S}_i^{\beta}} \Bigg( \prod_{j \in S} \frac{z_j - p}{p} - \prod_{j \in S} \frac{z_j - p}{p - 1} \Bigg) = \frac{1}{n} \sum_{i=1}^n Y_i \sum_{k=0}^{\min(\beta, |\mathcal{N}_i \cap T|)} \sum_{\ell=0}^{\min(\beta - k, |\mathcal{N}_i \setminus T|)} \binom{|\mathcal{N}_i \cap T|}{k} \binom{|\mathcal{N}_i \setminus T|}{\ell} \Bigg( \Big(\frac{1 - p}{p}\Big)^k (-1)^\ell - (-1)^k \Big(\frac{p}{1 - p}\Big)^\ell \Bigg).$

Thus, while our estimation guarantees allow for heterogeneity in the potential outcomes model, when treatment probabilities are uniform, computing the estimator depends not on the identities of the treated individuals but only on the number of treated neighbors. As a result, the estimator can be evaluated in $O(n \beta^2)$ time, a significant improvement over the $O(n\, d_{\mathrm{in}}^{\beta})$ computational complexity when the treatment probabilities are nonuniform.
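The count-only simplification can be checked against the subset form of the weight. In the sketch below (hypothetical neighborhood and assignment, uniform $p$), both computations agree:

```python
import math
from itertools import combinations

p, beta = 0.3, 2
nbr = [0, 1, 2, 3, 4]                # one unit's in-neighborhood (hypothetical)
z = {0: 1, 1: 0, 2: 1, 3: 0, 4: 0}   # hypothetical assignment; treated neighbors {0, 2}

# Subset-enumeration form of the weight under uniform p (left side of (4.5)).
w_subset = sum(
    math.prod((z[j] - p) / p for j in S) - math.prod((z[j] - p) / (p - 1) for j in S)
    for k in range(1, beta + 1) for S in combinations(nbr, k))

# Count-only form: depends only on the number of treated/untreated neighbors.
t = sum(z[j] for j in nbr)           # |N_i ∩ T|
u = len(nbr) - t                     # |N_i \ T|
w_count = sum(
    math.comb(t, a) * math.comb(u, b)
    * (((1 - p) / p) ** a * (-1.0) ** b - (-1.0) ** a * (p / (1 - p)) ** b)
    for a in range(beta + 1) for b in range(beta + 1 - a) if a + b >= 1)
```

The count form touches $O(\beta^2)$ terms per unit, versus the $O(d^{\beta})$ subsets of the enumeration form.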

4.3 Connection to other estimator classes

Our estimator takes the form of a linear weighted estimator

$\widehat{\mathrm{TTE}} = \frac{1}{n} \sum_{i=1}^n Y_i\, w_i(z),$

under specifically constructed weight functions $w_i : \{0,1\}^n \to \mathbb{R}$. From this perspective, we can draw connections between our estimator and others appearing in the literature.

4.3.1 Horvitz–Thompson estimator

First, we show that in the special case where $|\mathcal{N}_i| \le \beta$, our estimator is identical to the classical Horvitz–Thompson estimator. In this case, the restriction $|S| \le \beta$ is satisfied by every $S \subseteq \mathcal{N}_i$, so that $\mathcal{S}_i^{\beta}$ includes all subsets of $\mathcal{N}_i$. Using this, we may simplify

$\begin{aligned} w_i(z) &= \sum_{S \subseteq \mathcal{N}_i} g(S) \prod_{j \in S} \frac{z_j - p_j}{p_j(1 - p_j)} \\ &= \sum_{S \subseteq \mathcal{N}_i} \prod_{j \in S} \frac{z_j - p_j}{p_j} - \sum_{S \subseteq \mathcal{N}_i} \prod_{j \in S} \frac{-(z_j - p_j)}{1 - p_j} && \text{(by definition of } g(S)\text{)} \\ &= \prod_{j \in \mathcal{N}_i} \Big(1 + \frac{z_j - p_j}{p_j}\Big) - \prod_{j \in \mathcal{N}_i} \Big(1 - \frac{z_j - p_j}{1 - p_j}\Big) && \text{(distributivity)} \\ &= \prod_{j \in \mathcal{N}_i} \frac{z_j}{p_j} - \prod_{j \in \mathcal{N}_i} \frac{1 - z_j}{1 - p_j} \\ &= \frac{\mathbb{I}(z \text{ treats all of } \mathcal{N}_i)}{\Pr(z \text{ treats all of } \mathcal{N}_i)} - \frac{\mathbb{I}(z \text{ treats none of } \mathcal{N}_i)}{\Pr(z \text{ treats none of } \mathcal{N}_i)}. \end{aligned}$


$\widehat{\mathrm{TTE}}_{\mathrm{SNIPE}} = \frac{1}{n} \sum_{i=1}^n Y_i \left( \frac{\mathbb{I}(z \text{ treats all of } \mathcal{N}_i)}{\Pr(z \text{ treats all of } \mathcal{N}_i)} - \frac{\mathbb{I}(z \text{ treats none of } \mathcal{N}_i)}{\Pr(z \text{ treats none of } \mathcal{N}_i)} \right) = \widehat{\mathrm{TTE}}_{\mathrm{HT}},$

so we exactly recover the Horvitz–Thompson estimator. As a result, when β is sufficiently large relative to the degree of the nodes in the graph, our estimator is very similar to the Horvitz–Thompson estimator, only differing for the nodes that have graph degrees larger than β . In this sense, under Bernoulli randomization, our estimator can be thought of as a generalization of Horvitz–Thompson to additionally account for low polynomial degree structure, which is most relevant for simplifying the potential outcomes associated with high-degree vertices.
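This equivalence is easy to verify exhaustively for one unit. The sketch below (hypothetical probabilities, $\beta = |\mathcal{N}_i| = 3$) compares the SNIPE weight with the Horvitz–Thompson weight for every treatment vector:

```python
import math
from itertools import combinations, product

# One unit's in-neighborhood with |N_i| = 3; take beta = |N_i| so S_i^beta is the
# full power set of N_i (hypothetical nonuniform probabilities).
nbr = (0, 1, 2)
p = (0.2, 0.5, 0.4)

def g(S):
    return math.prod(1 - p[s] for s in S) - math.prod(-p[s] for s in S) if S else 0.0

def w_snipe(z):
    return sum(g(S) * math.prod((z[j] - p[j]) / (p[j] * (1 - p[j])) for j in S)
               for k in range(1, len(nbr) + 1) for S in combinations(nbr, k))

def w_ht(z):
    all_p = math.prod(p)                   # P(z treats all of N_i)
    none_p = math.prod(1 - q for q in p)   # P(z treats none of N_i)
    return all(z) / all_p - (not any(z)) / none_p

agree = all(abs(w_snipe(z) - w_ht(z)) < 1e-9 for z in product([0, 1], repeat=3))
```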

4.3.2 Pseudoinverse estimator

The key technical steps in deriving our estimator can be described as constructing unbiased estimators for each unit $i$’s contribution to the TTE via a connection to OLS for linear models, and then averaging these unbiased estimates, since the causal estimand is a linear function of the individual contributions. This overall technique has also appeared in the previous literature on semiparametric estimation, with one clear example described by Swaminathan et al. [45] in the seemingly different context of off-policy evaluation for online recommendation systems. In their model, a context $x \in \mathcal{X}$ arrives, at which point the principal selects a tuple (or slate) of actions $s = (s_1, \ldots, s_\ell)$ and observes a random reward $r$ based on the interaction between the context and the slate. The authors make a linearity assumption positing that $V(x, s) = \mathbb{E}_r[r \mid x, s] = \mathbf{1}_s^\top \phi_x$, where $\mathbf{1}_s$ indicates the choice of a particular action in each entry of the tuple, and $\phi_x$ is a context-specific reward weight vector associated with context $x$. In our setting, the context $x$ is an individual $i$, the reward weights $\phi_x$ are their effect coefficients $c_i$, and the slate indicator vector $\mathbf{1}_s$ is our treated subsets vector $\tilde{z}_i$.

A primary goal of ref. [45] is to estimate $\phi_x$, which can be used to inform a good slate selection policy. The mean squared error of an estimate $w$ for $r$ is $\mathbb{E}_\mu[(\mathbf{1}_s^\top w - r)^2]$, where $\mu$ encodes the random selection of the context, slate, and reward. In this framing, the least squares estimator

$\bar{\phi}_x = (\mathbb{E}_\mu[\mathbf{1}_s \mathbf{1}_s^\top \mid x])^{\dagger}\, \mathbb{E}_\mu[r\, \mathbf{1}_s \mid x]$

is the minimum-norm minimizer of the mean squared error (MSE). As the reward distribution is unknown, it is replaced with an empirical estimate of $r$ from past data. Performing the substitutions described in the previous paragraph, and noting that Bernoulli design ensures that $\mathbb{E}[\tilde{z}_i \tilde{z}_i^\top]$ is invertible (Appendix A), we recover our estimator $\hat{c}_i$ from (4.2).

4.3.3 Riesz estimator

In their recent work, Harshaw et al. [46] consider a very general causal inference framework wherein a treatment $z$ is drawn from an underlying experimental design distribution over an intervention set $\mathcal{Z}$. We observe an outcome $Y_i(z)$ for each unit $i \in [n]$, where we assume that $Y_i$ belongs to a model space $\mathcal{M}_i$, a subspace of the space $\mathcal{Y}$ of all measurable and square-integrable (with respect to the distribution over $\mathcal{Z}$) functions $\mathcal{Z} \to \mathbb{R}$. Each individual is additionally endowed with a linear effect functional $\theta_i : \mathcal{Y} \to \mathbb{R}$, and the goal is to estimate the average individual treatment effect $\tau = \frac{1}{n} \sum_{i=1}^n \theta_i(Y_i)$.

In our setting, $\mathcal{Z} = \{0,1\}^n$ with the nonuniform Bernoulli design distribution (i.e., $\Pr(z_i = 1) = p_i$, with the $\{z_i\}$ mutually independent). $\mathcal{Y}$ consists of all linear functions over the basis $\{\prod_{j \in S} z_j : S \subseteq [n], |S| \le \beta\}$, and each $\mathcal{M}_i$ restricts to the linear functions over $\{\prod_{j \in S} z_j : S \in \mathcal{S}_i^{\beta}\}$. Finally, each unit $i$ has effect functional $\theta_i(Y_i) = Y_i(\mathbf{1}) - Y_i(\mathbf{0})$, so that $\tau = \mathrm{TTE}$.

Under two assumptions – correct model specification (needed in our work as well) and positivity (always satisfied in our setting, since Bernoulli design ensures each treatment in $\mathcal{Z}$ occurs with positive probability) – the Riesz representation theorem guarantees the existence of an unbiased estimator for $\tau$, referred to as a Riesz estimator, of the form

$\hat{\tau} = \frac{1}{n} \sum_{i=1}^n Y_i(z)\, \psi_i(z),$

where $\psi_i \in \mathcal{M}_i$ is a Riesz representor for $\theta_i$ with the property that

$\theta_i(Y) = \mathbb{E}_z[\psi_i(z)\, Y(z)] \qquad (\star)$

for each $Y \in \mathcal{M}_i$. As each model space $\mathcal{M}_i$ has dimension $|\mathcal{S}_i^{\beta}|$, we can identify $\psi_i$ by solving the linear system that verifies that $(\star)$ holds for each function in some basis for $\mathcal{M}_i$. A canonical choice is the standard basis, giving rise to the equations

$\theta_i\Big(\prod_{j \in S} z_j\Big) = \mathbb{I}(S \neq \varnothing) = \mathbb{E}_z\Big[\psi_i(z) \prod_{j \in S} z_j\Big],$

for each $S \in \mathcal{S}_i^{\beta}$. The choice $\psi_i(z) = \sum_{S \in \mathcal{S}_i^{\beta}} g(S) \prod_{j \in S} \frac{z_j - p_j}{p_j(1 - p_j)}$ solves this system (Appendix B) and gives rise to our estimator.
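The defining property $(\star)$ can be verified by exact enumeration for a small neighborhood. The sketch below (hypothetical probabilities, $\beta = 2$) computes $\mathbb{E}_z[\psi_i(z) \prod_{j \in S} z_j]$ for every $S \in \mathcal{S}_i^{\beta}$ and checks it equals $\mathbb{I}(S \neq \varnothing)$:

```python
import math
from itertools import combinations, product

nbr, beta, n = (0, 1, 2, 3), 2, 4
p = (0.3, 0.5, 0.2, 0.6)      # hypothetical nonuniform Bernoulli probabilities

def g(S):
    return math.prod(1 - p[s] for s in S) - math.prod(-p[s] for s in S) if S else 0.0

def psi(z):
    """Candidate Riesz representor from the text, for a unit with neighborhood nbr."""
    return sum(g(S) * math.prod((z[j] - p[j]) / (p[j] * (1 - p[j])) for j in S)
               for k in range(1, beta + 1) for S in combinations(nbr, k))

def expect(f):
    """Exact expectation over the Bernoulli(p) design by full enumeration."""
    return sum(f(z) * math.prod(p[j] if z[j] else 1 - p[j] for j in range(n))
               for z in product([0, 1], repeat=n))

# (star): E[psi(z) * prod_{j in S} z_j] should equal I(S nonempty) for S in S_i^beta.
basis = [S for k in range(beta + 1) for S in combinations(nbr, k)]
errs = [abs(expect(lambda z, S=S: psi(z) * all(z[j] for j in S)) - (1.0 if S else 0.0))
        for S in basis]
```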

5 Properties of estimator under Bernoulli design

The following theorem summarizes the key properties of our estimator.

Theorem 1

Under a potential outcomes model satisfying the neighborhood interference assumption with polynomial degree at most β , the estimator defined in (4.4) is unbiased with variance upper bounded by

$\frac{d_{\mathrm{in}} d_{\mathrm{out}} Y_{\max}^2}{n} \left(\frac{e\, d_{\mathrm{in}}}{\beta}\right)^{\beta} \max\left\{4\beta^2,\ \frac{1}{p(1-p)}\right\}^{\beta},$

where each $p_i \in [p, 1-p]$ and $p > 0$.

Notably, a sequence of networks with $n \to \infty$ and $d = o(\log n)$ has variance asymptotically approaching 0. We defer the proof of this theorem to Appendix B. Rather than appealing to the convergence properties of the pseudoinverse estimator to establish unbiasedness, we present an alternate combinatorial proof. To bound the variance, we carefully consider different possible overlapping subsets of individuals to separately bound the many covariance terms that make up the overall variance expression.

To understand the variance bound for our estimator, we can compare against the variance of the Horvitz–Thompson estimator under a Bernoulli design. In the simple setting of a $d$-regular graph and uniform Bernoulli($p$) randomization, ref. [17] showed that the Horvitz–Thompson estimator has a variance that is lower bounded by $\Omega\big(\frac{1}{n p^d}\big)$. In contrast, the variance of our estimator scales only polynomially in the graph degree $d$, though exponentially in the polynomial degree $\beta$; this is achieved by simply changing the estimator, without requiring any additional clustering structure in the graph and without utilizing complex RDs. This is a significant gain when the polynomial degree $\beta$ is significantly lower than the graph degree $d$. The simplest setting of $\beta = 1$ already expresses all potential outcomes models that satisfy additivity of main effects and additivity of interference, as defined in ref. [18]; this subsumes all linear models that are commonly used in the applied literature, yet which require additional homogeneity assumptions.

We also remark that when $\beta = d$, both the Horvitz–Thompson estimator and our estimator are unbiased for the same class of functions. When $\beta < d$, the Horvitz–Thompson estimator is unbiased for a strictly larger class of functions than our estimator, namely the class characterized by $\beta = d$. Thus, if the practitioner can use domain knowledge to argue that the true potential outcomes model belongs to the class of functions parametrized by $\beta$ in equation (3.2) for some $\beta < d$, then our estimator provides an advantage over the Horvitz–Thompson estimator: both estimators are unbiased, but the variance of our estimator does not depend exponentially on the graph degree. However, depending on the flexibility that the practitioner desires or needs in the potential outcomes, there is no clear winner. For example, if one desires a fully nonparametric potential outcomes model to capture the most general neighborhood interference settings, they might set $\beta = d$. Then, both our estimator and the Horvitz–Thompson estimator are unbiased, and both have variance scaling exponentially in $d$. On the other hand, suppose anonymous interference is satisfied for a particular application and the potential outcomes can be modeled as:

$Y_i(z) = c_0 + c_1 z_i + c_2 \sum_{j \in \mathcal{N}_i} z_j.$

Then, using an OLS estimator would give an unbiased estimate with lower variance than our estimator. Thus, our model and estimator can be viewed as simply “adding to the toolbox” that practitioners may use depending on how expressive they need their potential outcomes models to be.
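As a sketch of this alternative (assuming NumPy; the ring graph, coefficients, and seed are all hypothetical), OLS on the features $(1, z_i, \sum_{j \in \mathcal{N}_i} z_j)$ recovers the anonymous-interference coefficients exactly from noiseless outcomes, and the TTE follows as $c_1 + c_2 |\mathcal{N}_i|$:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 200, 0.3
# Hypothetical ring graph: each unit's neighborhood is its two adjacent units.
nbrs = [[(i - 1) % n, (i + 1) % n] for i in range(n)]
c0, c1, c2 = 1.0, 2.0, 0.5      # hypothetical anonymous-interference coefficients

z = rng.binomial(1, p, size=n)                    # Bernoulli(p) treatments
s = np.array([z[nb].sum() for nb in nbrs])        # number of treated neighbors
y = c0 + c1 * z + c2 * s                          # noiseless outcomes from the model

# OLS on features (1, z_i, number of treated neighbors) recovers the coefficients.
X = np.column_stack([np.ones(n), z, s])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
tte_ols = coef[1] + coef[2] * 2                   # c1 + c2 * |N_i| on the ring
```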

In the special setting of uniform Bernoulli design and β = 1 , our estimator as stated in (4.5) is the same as an estimator presented in ref. [22]. They consider a fully nonparametric setting under anonymous interference, in which the goal is to estimate the derivative of the total outcomes under changes of the population-wide treatment probability. As the derivative can be estimated by locally linearizing the outcomes function, they derive the special case of the estimator in (4.5) under β = 1 by taking the derivative of the expected population outcomes under a Bernoulli randomization, constructed by inverse propensity weights. This suggests that under a fully nonparametric setting, our estimator may be used for estimating an appropriately defined local estimand. There may also be opportunities to perform variance reduction on our estimator given knowledge of the graph structure, as proposed in ref. [22]; however, their solution requires anonymous interference, and it is not clear how to extend their solution concept beyond β = 1 , anonymous interference, and uniform treatment probabilities.

5.1 Minimax lower bound on mean squared error

To understand the optimality of our estimator, we construct a lower bound on the minimax optimal mean squared error rate. In particular, we show that for a setting with a $d$-regular graph and a sufficiently small uniform treatment probability $p$, the best achievable mean squared error is lower bounded by $\Omega\big(\frac{1}{n p^{\beta}}\big)$.

Theorem 2

(Minimax lower bound) For any $n$, $d$, $\beta$, and $p$ with $p^{\beta} < 0.16$, and any estimator $\widehat{\mathrm{TTE}}$, there exists a causal network on $n$ nodes with maximum degree $d$ and effect coefficients $\{c_{i,S}\}_{i \in [n], S \in \mathcal{S}_i}$ for which the mean squared error under uniform treatment probabilities $p_i = p$ is bounded below by

$\mathbb{E}[(\widehat{\mathrm{TTE}} - \mathrm{TTE})^2] = \Omega\Big(\frac{1}{n p^{\beta}}\Big).$

For a $\beta$-order interactions model, estimating the TTE requires being able to measure the network effect of the size-$\beta$ subsets of a unit’s neighborhood. The probability that a set of size $\beta$ is jointly assigned to treatment is $p^{\beta}$. As a result, the $\frac{1}{p^{\beta}}$ scaling is somewhat intuitive.

The proof of Theorem 2 (given in Appendix C) uses a generalized variation of Le Cam’s method for fuzzy hypothesis testing [47], [48, Sec. 2.7.4]. We reduce the problem of TTE estimation to the creation of a hypothesis test to distinguish between two priors. We consider a setting with a $d$-regular graph, uniform treatment probabilities $p_i = p$, and two Gaussian priors $\Gamma_0, \Gamma_1$ over the effect coefficients. The priors have the same variances but shifted means, such that $\mathbb{E}_{\Gamma_0}[\mathrm{TTE}] - \mathbb{E}_{\Gamma_1}[\mathrm{TTE}] = 2\delta$ for some carefully tuned parameter $\delta$. The failure of hypothesis tests that rely on $\widehat{\mathrm{TTE}}$ is attributable to one of two factors: (1) a significant shift in the TTE brought about by the variability of the coefficients, or (2) inaccuracy of the estimator. We bound the probability of the former with a Gaussian tail bound, and the latter in terms of the Kullback–Leibler (KL) divergence between the distributions over $(Y, z)$ induced by $\Gamma_0$ and $\Gamma_1$. The rest of the argument involves a calculation of this KL divergence and a selection of $\delta$ to ensure a nonvanishing error probability.

While our lower bound clearly indicates that the exponential dependence on $\beta$ exhibited by $p^{\beta}$ is necessary, a notable difference between our lower and upper bounds is the dependence on the graph degree $d$. Recall that the upper bound on the variance of our SNIPE estimator in Theorem 1 scales roughly as $d^{\beta + 2}$; however, our lower bound has no dependence on $d$. We attribute this gap to a weakness in the analyses and believe that it can be tightened with more careful calculation. In the upper bound, the $d^{\beta}$ factor arises as a bound on the sum of binomial coefficients of the form $\binom{d}{k}$ for $k \le \beta$. While this bound is asymptotically precise in a regime where $\beta$ is a small constant and $d$ is large, it is loose in the most general case of $\beta = d$, where we know $2^d$ suffices. On the other hand, our lower bound argument considers a setting with a $d$-regular graph, but it only uses the degree of the graph to normalize the parameters of the prior distributions, so we see no dependence on $d$ in the final lower bound.

5.2 Central limit theorem

In this section, we show that $\widehat{\mathrm{TTE}}_{\mathrm{SNIPE}}$ is asymptotically normal using Stein’s method, a common approach to proving central limit theorems [3,49–52]. In particular, we use Theorem 3.6 from Ross [53], which we restate below for convenience.

[53, Theorem 3.6]. Let $X_1, \ldots, X_n$ be random variables such that $\mathbb{E}[X_i^4] < \infty$, $\mathbb{E}[X_i] = 0$, $\nu^2 = \mathrm{Var}\big(\sum_i X_i\big)$, and define $W = \sum_i X_i / \nu$. Let the collection $(X_1, \ldots, X_n)$ have dependency neighborhoods $D_i$ for $i \in [n]$, and define $D \coloneqq \max_{1 \le i \le n} |D_i|$. Then, for $Z$ a standard normal random variable,

(5.1) $d_W(W, Z) \le \frac{D^2}{\nu^3} \sum_{i=1}^n \mathbb{E}|X_i|^3 + \frac{\sqrt{28}\, D^{3/2}}{\sqrt{\pi}\, \nu^2} \sqrt{\sum_{i=1}^n \mathbb{E}[X_i^4]},$

where $d_W(W, Z)$ is the Wasserstein distance between $W$ and $Z$.

Our estimator can be written in the form $\widehat{\mathrm{TTE}} = \frac{1}{n} \sum_{i \in [n]} Y_i w_i(z)$, where $Y_i$ and $w_i(z)$ depend only on the treatment assignments of individuals $j \in \mathcal{N}_i$. We apply Theorem 3.6 from ref. [53] to the random variables $X_i \coloneqq \frac{1}{n}(Y_i w_i(z) - \mathbb{E}[Y_i w_i(z)])$, so that by construction $W = \sum_i X_i / \nu = (\widehat{\mathrm{TTE}} - \mathrm{TTE})/\nu$ and $\widehat{\mathrm{TTE}} = \nu W + \mathrm{TTE}$. An application of Theorem 3.6 from ref. [53] then implies that $\widehat{\mathrm{TTE}}$ is asymptotically normal as long as the bound in (5.1) converges to 0 as $n$ approaches infinity. As the size of the dependency neighborhoods $D$, the moments of $X_i$, and the scaling of the variance $\nu$ all depend on the network parameters, for simplicity we additionally assume in the conditions of Theorem 3 that $d$, $Y_{\max}$, and $\beta$ are bounded above by a constant, and that the treatment probability $p$ is lower bounded as a function of $n$.

Assumption 3

For some constant $c > 0$, we have $n\, \mathrm{Var}[\widehat{\mathrm{TTE}}] \ge c$ as $n \to \infty$.

This is a typical assumption made in the literature [3,51,54] that rules out degenerate cases, such as all the potential outcomes being 0, which would result in the estimator having an unnaturally low variance, scaling smaller than $\frac{1}{n}$.

Theorem 3

(Central limit theorem) Under Assumptions 1–3, and assuming that $d$, $Y_{\max}$, and $\beta$ are all $O(1)$ with respect to $n$, $p = \omega(n^{-1/(4\beta)})$, and $p_i \le \frac{1}{2}$ for all $i$, the normalized error $(\widehat{\mathrm{TTE}} - \mathrm{TTE})/\nu$ converges in distribution to a standard normal random variable as $n \to \infty$, for $\nu^2 = \mathrm{Var}[\widehat{\mathrm{TTE}}]$.

Theorem 3 states that our estimator is asymptotically normal, assuming the boundedness of the magnitude of the potential outcomes, the polynomial degree, and the network degree. For the complete proof, refer to Appendix D. The proof follows from a straightforward application of Theorem 3.6 from ref. [53] with appropriate bounds for the moments $\mathbb{E}|X_i|^3$ and $\mathbb{E}[X_i^4]$, the dependency neighborhood size $D$, and the variance $\nu$. The dependency neighborhood for variable $X_i$ is the set $\{j \in [n] : \mathcal{N}_i \cap \mathcal{N}_j \neq \varnothing\}$, i.e., the set of individuals $j$ that share in-neighbors with individual $i$. This follows from the observation that both $X_i$ and $X_j$ are functions of the treatment variables $z_k$ for shared in-neighbors $k \in \mathcal{N}_i \cap \mathcal{N}_j$. The size of the largest dependency neighborhood is thus bounded by $D \le d_{\mathrm{in}} d_{\mathrm{out}}$. The moments $\mathbb{E}|X_i|^3$ and $\mathbb{E}[X_i^4]$ are bounded as functions of $Y_{\max}$ as defined in (3.3), along with the network degree $d$, treatment probability $p$, and polynomial degree $\beta$. The boundedness assumptions are used to argue that these moments do not grow too quickly in $n$, so that the Wasserstein distance in (5.1) converges to 0. The details of these calculations are deferred to Appendix D. We remark that the strict boundedness assumption can be relaxed so that $d$, $Y_{\max}$, and $\beta$ are polylogarithmic in $n$, so long as we tighten the lower bound on the growth of $p$ by a corresponding logarithmic factor.

5.3 Variance estimator

The central limit theorem result in Theorem 3 implies that we could construct asymptotically valid confidence intervals $[\widehat{\mathrm{TTE}} - \nu\, \Phi^{-1}(1-\alpha),\ \widehat{\mathrm{TTE}} + \nu\, \Phi^{-1}(1-\alpha)]$ if we knew $\nu$, where $\Phi$ denotes the CDF of a standard normal. Since the practitioner typically does not know $\nu$, we construct a conservative estimator for it by applying the approach of Aronow and Samii [3,55]. We begin by rewriting the SNIPE estimator as a sum over possible exposures $x$:

$\widehat{\mathrm{TTE}} = \frac{1}{n} \sum_{i \in [n]} Y_i(z) w_i(z) = \frac{1}{n} \sum_{i \in [n]} \sum_{x \in \{0,1\}^{|\mathcal{N}_i|}} \mathbb{I}(z_{\mathcal{N}_i} = x)\, Y_i(x)\, w_i(x).$

Note that both $Y_i(x)$ and $w_i(x)$ are deterministic; the only randomness is expressed in the indicator function $\mathbb{I}(z_{\mathcal{N}_i} = x)$. Overloading notation, we let $Y_i$ and $w_i$ be expressed as functions of $z_{\mathcal{N}_i}$ only, where we assume the order $\beta$ is clearly specified. Then, the variance of our estimator is given by

$\mathrm{Var}[\widehat{\mathrm{TTE}}] = \frac{1}{n^2} \sum_{i,j \in [n]} \sum_{x \in \{0,1\}^{|\mathcal{N}_i|}} \sum_{x' \in \{0,1\}^{|\mathcal{N}_j|}} Y_i(x) w_i(x) Y_j(x') w_j(x')\, \mathrm{Cov}\big(\mathbb{I}(z_{\mathcal{N}_i} = x),\ \mathbb{I}(z_{\mathcal{N}_j} = x')\big).$

If $P(\{z_{\mathcal{N}_i} = x\} \cap \{z_{\mathcal{N}_j} = x'\}) > 0$, an unbiased estimate for the corresponding term in the variance expression is given by

$\frac{\mathbb{I}(z_{\mathcal{N}_i} = x)\, \mathbb{I}(z_{\mathcal{N}_j} = x')}{P(\{z_{\mathcal{N}_i} = x\} \cap \{z_{\mathcal{N}_j} = x'\})}\, Y_i(x) w_i(x) Y_j(x') w_j(x')\, \mathrm{Cov}\big(\mathbb{I}(z_{\mathcal{N}_i} = x),\ \mathbb{I}(z_{\mathcal{N}_j} = x')\big).$

Otherwise, if $P(\{z_{\mathcal{N}_i} = x\} \cap \{z_{\mathcal{N}_j} = x'\}) = 0$, then by the same technique as in ref. [55], using the observation that $\mathrm{Cov}(\mathbb{I}(z_{\mathcal{N}_i} = x), \mathbb{I}(z_{\mathcal{N}_j} = x')) = -P(z_{\mathcal{N}_i} = x)\, P(z_{\mathcal{N}_j} = x')$, the corresponding term in the variance expression can be approximated by the inflated estimate

$\frac{1}{2}\left(\frac{\mathbb{I}(z_{\mathcal{N}_i} = x)}{P(z_{\mathcal{N}_i} = x)}\, Y_i^2(x) w_i^2(x) + \frac{\mathbb{I}(z_{\mathcal{N}_j} = x')}{P(z_{\mathcal{N}_j} = x')}\, Y_j^2(x') w_j^2(x')\right).$

The expectation of this estimate is at least as large as the true term in the variance, by the Cauchy–Schwarz inequality. Therefore, a conservative variance estimator $\widehat{\mathrm{Var}}(\widehat{\mathrm{TTE}}_{\mathrm{SNIPE}})$ is given by