Conditional As-If Analyses in Randomized Experiments

The injunction to "analyze the way you randomize" has been well known to statisticians since Fisher advocated randomization as the basis of inference. Yet even those convinced of the merits of randomization-based inference seldom follow this injunction to the letter. Bernoulli randomized experiments are often analyzed as completely randomized experiments, and completely randomized experiments are analyzed as if they had been stratified; more generally, it is not uncommon to analyze an experiment as if it had been randomized differently. This paper examines the theoretical foundations of this practice within a randomization-based framework. Specifically, we ask when it is legitimate to analyze an experiment randomized according to one design as if it had been randomized according to some other design. We show that a sufficient condition for this type of analysis to be valid is that the design used for analysis be derived from the original design by an appropriate form of conditioning. We use our theory to justify some existing methods, to question others, and to suggest new methodological insights, such as conditioning on approximate covariate balance.


Introduction
It is a long-standing idea in statistics that the design of an experiment should inform its analysis. Fisher placed the physical act of randomization at the center of his inferential theory, enshrining it as "the reasoned basis" for inference (Fisher, 1935). Building on these insights, Kempthorne (1955) proposed a randomization theory of inference for experiments, in which inference follows from the precise randomization mechanism used in the design. This approach has gained popularity in the causal inference literature because it relies on very few assumptions (Splawa-Neyman et al., 1990; Imbens and Rubin, 2015).
Yet the injunction to "analyze the way you randomize" is not always followed in practice, as noted by Senn (2004), who argues that in clinical trials the analysis does not always follow strictly from the randomization performed. For instance, a Bernoulli randomized experiment, in which each unit is assigned to treatment or control independently of the other units (e.g., by independent coin flips), leading to a random number of treated units, might be analyzed as if it were a completely randomized experiment, in which the total number of treated units would be fixed prior to randomization. Similarly, we might analyze a completely randomized experiment as if it had been stratified.
This paper studies such as-if analyses in detail in the context of Neymanian causal inference, and makes three contributions. First, we formalize the notion of as-if analyses, motivating their usefulness and proposing a rigorous validity criterion (Section 2). Our framework is grounded in the randomization-based approach to inference. In the two examples described above, the analysis conditions on some aspect of the observed assignment; for instance, in the first example, the complete randomization is obtained by fixing the number of treated units at its observed value. The idea that inference should be conditional on quantities that affect the precision of estimation is not new in the experimental design literature (e.g., Cox, 1958, 2009; Dawid, 1991) or the larger statistical inference literature (e.g., Särndal et al., 1989; Sundberg, 2003), and it has been reaffirmed recently in the causal inference literature (Branson and Miratrix, 2019; Hennessy et al., 2016).
Our second contribution is to verify that in our setting, conditioning leads to valid as-if analyses. We also warn against a dangerous pitfall: some as-if analyses look conditional on the surface, but are in fact neither conditional nor valid. This is the case, for instance, when a completely randomized experiment is analyzed by conditioning on the covariate balance being no worse than that of the observed assignment (Section 3). We further point out that the motivation for conditioning when analyzing experimental data should not be to increase precision, but rather to increase relevance. Consider the case of a Bernoulli randomized experiment with 100 units and a 0.5 probability of any given unit receiving treatment. Intuitively, if the assignment we observe has 1 treated unit and 99 control units, we should have a wider confidence interval than if the observed assignment had 50 treated units and 50 control units. This is exactly what occurs if we analyze the experiment conditionally, as a completely randomized experiment. The idea of relevance is discussed further in Section 2.3.
Our third contribution is to show how our ideas can be used to suggest new methods (Section 4) and also show how they can be used to evaluate existing methods (Section 5). We finish the work by discussing practical challenges (Section 6). Our goal in these discussions is primarily to highlight areas for future work, as this paper is primarily conceptual in nature.

As-if confidence procedures

Setup
Consider N units and let Z_i ∈ {0, 1} (i = 1, . . . , N) be a binary treatment indicator for unit i. We adopt the potential outcomes framework (Rubin, 1974; Splawa-Neyman et al., 1990), where, under the Stable Unit Treatment Value Assumption (Rubin, 1980), each unit has two potential outcomes, one under treatment, Y_i(1), and one under control, Y_i(0), and the observed response is Y_i^{obs} = Z_i Y_i(1) + (1 − Z_i) Y_i(0). We denote by Z, Y(1), and Y(0) the vectors of binary treatment assignments, treatment potential outcomes, and control potential outcomes for all N units. Let τ be our estimand of interest; in most of the examples we take τ to be the average treatment effect τ = N^{-1} Σ_{i=1}^N {Y_i(1) − Y_i(0)}, but our results apply more generally to any estimand that is a function of the potential outcome vectors (Y(0), Y(1)). An estimator τ̂ is a function of the assignment vector Z and the observed outcomes. For clarity, we will generally write τ̂ = τ̂(Z) to emphasize the dependence of τ̂ on Z, but keep the dependence on the potential outcomes implicit. Denote by η_0 the design describing how treatment is allocated, so for any particular assignment z, η_0(z) gives the probability of observing z under design η_0. In randomization-based inference, we consider the potential outcomes (Y(0), Y(1)) as fixed and partially unknown quantities; the randomness comes exclusively from the assignment vector Z, which follows the distribution η_0. The estimand τ is therefore fixed because it is a function of the potential outcomes only, while the observed outcomes and the estimator τ̂(Z) are random because they are functions of the random assignment vector Z.
Our focus is on the construction of confidence intervals for τ under the randomization-based perspective. We define a confidence procedure as a function C mapping any assignment z ∈ Z_{η_0} and associated vector of observed outcomes, Y^{obs}, to an interval in R, where Z_{η_0} = {z ∈ {0, 1}^N : η_0(z) > 0} is the support of the randomization distribution. Standard confidence intervals are examples of confidence procedures, and are usually based on an approximate distribution of τ̂(Z) − τ. For careful choices of τ̂ and η_0, the random variable τ̂(Z) − τ, standardized by its standard deviation var_{η_0}(τ̂)^{1/2} induced by the design η_0, is asymptotically standard normal (see Li and Ding, 2017). We can then construct an interval

C(τ̂(Z); η_0) = [τ̂(Z) − z_{1−α} V̂_{η_0}^{1/2}, τ̂(Z) + z_{1−α} V̂_{η_0}^{1/2}],   (1)

where V̂_{η_0} is an estimator of var_{η_0}(τ̂) and z_{1−α} is the 1 − α standard normal quantile. Such a confidence interval is built from the (estimated) 2.5% and 97.5% quantiles of τ̂(Z) − τ, shifted by τ̂(Z). Discussing the validity of these kinds of confidence procedures is difficult for two reasons. First, they are generally based on asymptotics, so validity in finite populations can only be approximate. Second, they use the square root of variance estimates which, in practice, tend to be biased; in finite population causal inference, var_{η_0}(τ̂) can in general only be estimated conservatively. These two issues are often accepted in practice but obscure the conceptual underpinnings of as-if analyses. We circumvent them by focusing instead on oracle confidence procedures, which are based on the true quantiles of the distribution of τ̂(Z) − τ induced by the design η_0. Specifically, we consider 1 − 2α level confidence intervals (0 < α < 1/2) of the form

C(τ̂(Z); η_0) = [τ̂(Z) − U_α(η_0), τ̂(Z) − L_α(η_0)],   (2)

where U_α(η_0) and L_α(η_0) are the α upper and lower quantiles, respectively, of the distribution of τ̂(Z) − τ under design η_0. Because they do not depend on Z, the quantiles U_α(η_0) and L_α(η_0) are fixed.
The confidence procedure in Equation (2) is an oracle procedure because unlike the interval in Equation (1), it cannot be computed from observed data. Oracle procedures allow us to set aside the practical issues of approximation and estimability to focus on the essence of the problem. We discuss some of the practical issues that occur without oracles in Section 6.
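To make the oracle construction concrete, here is a minimal numerical sketch (not from the paper; the potential outcomes are hypothetical). It enumerates a small completely randomized design, computes the true quantiles of τ̂(Z) − τ, and checks the exact coverage of the resulting oracle intervals:

```python
import itertools, math

# Hypothetical potential outcomes for N = 6 units, held fixed as in the
# randomization-based framework; the unit-level effect is constant.
Y0 = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
Y1 = [y + 2.0 for y in Y0]
N = len(Y0)
tau = sum(a - b for a, b in zip(Y1, Y0)) / N   # = 2.0

def tau_hat(z):
    """Difference-in-means estimator for assignment vector z."""
    t = [Y1[i] for i in range(N) if z[i]]
    c = [Y0[i] for i in range(N) if not z[i]]
    return sum(t) / len(t) - sum(c) / len(c)

# Completely randomized design with 3 treated units: uniform on its support.
support = [z for z in itertools.product([0, 1], repeat=N) if sum(z) == 3]

# True (oracle) alpha upper and lower quantiles of tau_hat(Z) - tau.
alpha = 0.1
diffs = sorted(tau_hat(z) - tau for z in support)
m = len(diffs)
U = diffs[math.ceil((1 - alpha) * m) - 1]
L = diffs[m - math.ceil((1 - alpha) * m)]

# Exact coverage of the oracle interval [tau_hat - U, tau_hat - L].
coverage = sum(L <= tau_hat(z) - tau <= U for z in support) / m
print(coverage)   # at least 1 - 2*alpha = 0.8 by construction
```

The coverage guarantee is exact here because the quantiles are computed from the full randomization distribution rather than estimated.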

As-if confidence procedures
Given data from an experiment, it is natural to consider the confidence procedure C(τ̂(Z); η_0) constructed with the design η_0 that was actually used to randomize the treatment assignment; this follows the "analyze as you randomize" principle. Consider, however, the oracle procedure based on the distribution of τ̂(Z) − τ induced by some other design η that assigns positive probability to the observed assignment. In this case we say that the experiment is analyzed as if it were randomized according to η. We generalize this idea further by allowing the design η used in the oracle procedure to vary depending on the observed assignment. This can be formalized with the concept of a design map.
Definition 1 (Design map). Let D be the set of designs (probability distributions) with support in {0, 1} N . A function H : Z η 0 → D which maps each assignment z ∈ Z η 0 to a design H(z) ∈ D is a design map.
As an example, the design of the experiment, η 0 , is one element of the set of designs D (i.e., η 0 ∈ D).
A confidence procedure C(τ̂(Z); H(Z)) can then be constructed using the design map H as follows:

C(τ̂(Z); H(Z)) = [τ̂(Z) − U_α(H(Z)), τ̂(Z) − L_α(H(Z))].   (3)

This is an instance of an as-if analysis, in which the design used to analyze the data depends on the observed assignment. That is, while we traditionally have one rule for how to create a confidence interval, we may now have many rules, possibly as many as |Z_{η_0}|, specified via the design map. In the special case in which the design map H is constant, i.e., H(Z) = η for all Z ∈ Z_{η_0}, we write C(τ̂(Z); η) instead of C(τ̂(Z); H(Z)), with a slight abuse of notation; Equation (3) then reduces to the oracle interval built from the quantiles under η. Note that the design map itself is fixed before observing the treatment assignment.
Example 1. Consider an experiment run according to a Bernoulli design with probability of treatment π = 0.5, where, to ensure that our estimator is defined, we remove assignments with no treated or control units. That is, Z_{η_0} is the set of all assignments such that at least one unit receives treatment and one unit receives control, and η_0 = Unif(Z_{η_0}). Let η_k (k = 1, . . . , N − 1) be the completely randomized design with k treated units. We use the design map H(Z) = η_{N_1(Z)}, where N_1(Z) = Σ_i Z_i is the observed number of treated units. That is, we analyze the Bernoulli design as if it were completely randomized, with the observed number of treated units assigned to treatment. Consider the concrete case where N = 10. Suppose that we observe assignment z with N_1(z) = 3. The design H(z) = η_3 corresponds to complete randomization with 3 units treated out of 10, and the confidence procedure C(τ̂(z); H(z)) is constructed using the distribution of τ̂(Z) − τ induced by η_3. We analyze as if a completely randomized design with 3 treated units had actually been run. If instead we observe z* with N_1(z*) = 4, then the confidence procedure C(τ̂(z*); H(z*)) would be constructed from the distribution of τ̂(Z) − τ induced by η_4.
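The conditioning in Example 1 can be made concrete with a small enumeration. In this sketch (using a hypothetical N = 6 rather than 10, so the whole support fits in memory), conditioning the Bernoulli design on the observed number of treated units recovers exactly the completely randomized design:

```python
import itertools

# Example 1 on a hypothetical smaller scale: N = 6 units instead of 10,
# so the whole support can be enumerated.
N = 6
support = [z for z in itertools.product([0, 1], repeat=N) if 0 < sum(z) < N]
# eta_0 is uniform on this support: all 2^N Bernoulli(0.5) draws are
# equiprobable, and the two degenerate assignments are simply dropped.

def conditional_design(k):
    """Design map H: eta_0 conditioned on N_1(Z) = k, i.e. complete
    randomization with exactly k treated units."""
    cell = [z for z in support if sum(z) == k]
    return {z: 1.0 / len(cell) for z in cell}

z_obs = (1, 0, 0, 1, 0, 1)   # a hypothetical observed assignment
design = conditional_design(sum(z_obs))

# The conditional design is exactly complete randomization with N_1 = 3:
# uniform over the C(6, 3) = 20 assignments with 3 treated units.
print(len(design), design[z_obs])   # 20 assignments, each with probability 0.05
```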
Example 2. Let our units have a categorical covariate X_i. Categorical covariates form blocks: they partition the sample based on covariate value. Assume that the actual experiment was run using complete randomization with a fixed number of treated units N_1, but discarding assignments for which there is not at least one treated unit and one control unit in each block. That is, Z_{η_0} is the set of all assignments with N_1 treated units such that at least one unit receives treatment and one unit receives control in each block, and η_0 = Unif(Z_{η_0}). This restriction on the assignment space accounts for the fact that the associated blocked estimator would otherwise be undefined; with moderately sized blocks we can ignore this nuisance event due to its low probability. For a vector κ whose jth entry is an integer strictly less than the size of the jth block and strictly greater than 0, let η_κ be a block randomized design with the number of treated units in each block corresponding to the entries of κ. We use the design map H(Z) = η_{N_{1,block}(Z)}, where N_{1,block}(Z) is the vector giving the number of treated units within each block and η_{N_{1,block}(Z)} is the corresponding blocked design. We use post-stratification (Holt and Smith, 1979; Miratrix et al., 2013) to analyze this completely randomized design as if it were block randomized.
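As a sketch of the analysis in Example 2, the post-stratified estimator computes a difference in means within each block and weights by block size. The data below are hypothetical, and the function is a minimal illustration rather than the full analysis:

```python
from collections import defaultdict

def post_stratified_estimate(y_obs, z, blocks):
    """Blocked difference in means: estimate the effect within each block,
    then weight by block size (post-stratification on a categorical
    covariate)."""
    groups = defaultdict(lambda: {0: [], 1: []})
    for y, zi, b in zip(y_obs, z, blocks):
        groups[b][zi].append(y)
    n = len(y_obs)
    est = 0.0
    for b, g in groups.items():
        if not g[0] or not g[1]:
            raise ValueError(f"block {b} lacks a treated or control unit")
        n_b = len(g[0]) + len(g[1])
        est += (n_b / n) * (sum(g[1]) / len(g[1]) - sum(g[0]) / len(g[0]))
    return est

# Hypothetical data: two blocks, constant unit-level effect of 1.0.
y_obs  = [2.0, 1.0, 1.0, 6.0, 5.0, 5.0]
z      = [1,   0,   0,   1,   0,   0]
blocks = ['a', 'a', 'a', 'b', 'b', 'b']
print(post_stratified_estimate(y_obs, z, blocks))   # 1.0
```

The guard clause mirrors the restriction on Z_{η_0} in the example: the estimator is undefined when a block lacks treated or control units.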
Example 3. Consider an experiment with factorial structure in the treatments, specifically two factors of interest, each with two levels, active and passive (the case with more factors or levels is immediate). Assume that the actual experiment was run using complete randomization on each factor separately, such that exactly N_1 units received the active level of each factor. This marginal randomization means that the number of units assigned to the combined levels of both factors is random. To ensure our estimator is well defined, we assume that there is a non-zero number of units assigned to each treatment combination. Define n_{z_1,z_2} as the observed number of units assigned to level z_1 of factor 1 and level z_2 of factor 2. Then Z_{η_0} is the set of all assignments such that 0 < N_1 < N units receive the active level of factor 1, 0 < N_1 < N units receive the active level of factor 2, and n_{z_1,z_2} > 0 for all z_1 and z_2. Let η_0 = Unif(Z_{η_0}). For a vector γ of length four whose entries are integers strictly between 0 and N such that Σ_j γ_j = N, let η_γ be a standard factorial design with complete randomization on the joint treatment groups, with treatment group sizes corresponding to the entries of γ. We use the design map H(Z) = η_{n_{Z_1,Z_2}}, where n_{Z_1,Z_2} is the vector of the four observed counts n_{z_1,z_2}; H(Z) is a factorial design in which n_{Z_1,Z_2} gives the number of units assigned to each treatment combination. This design map leads to an analysis as a factorial design with complete randomization, fixing the number of units in each treatment group (rather than just the marginal number for each factor separately).
Example 4. Assume that the actual experiment was run using complete randomization with exactly N_1 units treated. That is, Z_{η_0} is the set of all assignments such that 0 < N_1 < N units receive treatment and η_0 = Unif(Z_{η_0}). Let X_i be a continuous covariate for each unit i. Define a covariate balance measure ∆_X(Z), e.g., the difference in covariate means between treatment groups,

∆_X(Z) = N_1^{-1} Σ_{i=1}^N Z_i X_i − (N − N_1)^{-1} Σ_{i=1}^N (1 − Z_i) X_i.

For any assignment z, define A(z) = {Z : |∆_X(Z)| ≤ |∆_X(z)|}, the set of assignments with covariate balance at least as good as the observed covariate balance. We use the design map H(z) = pr_{η_0}{· | Z ∈ A(z)}, the distribution of Z under η_0 conditional on the event Z ∈ A(z). This design map leads to analyzing the completely randomized design as if it had been rerandomized (see Morgan and Rubin, 2012, for more on rerandomization) with acceptable covariate balance cut-off equal to the observed covariate balance.
Example 5. Assume that the actual experiment was run using block randomization with a fixed vector N_{1,block} giving the number of treated units within each block (since this vector is fixed, we can drop the Z from the notation of Example 2). That is, Z_{η_0} is the set of all assignments such that the number of treated units in each block is given by N_{1,block}, and η_0 = Unif(Z_{η_0}). Let η correspond to a completely randomized design, as laid out in Example 2, with N_1, the total number of units treated across all blocks, treated. We use the constant design map H(Z) = η. This corresponds to analyzing this block randomized design as if it were completely randomized.
Throughout the paper, we focus on settings in which the same estimator is used in the original analysis and in the as-if analysis. In practice, the two analyses might employ different estimators. For instance, in Example 2, we might analyze the completely randomized experiment with a difference-in-means estimator, but use the standard blocking estimator to analyze the as-if stratified experiment. We discuss this point further in Section 6 but in the rest of this article, we fix the estimator and focus on the impact of changing only the design.

Validity, relevance and conditioning
We have formalized the concept of an as-if analysis, but we have not yet addressed an important question: why should we even consider such analyses instead of simply analyzing the way we randomize? Before we answer this question, we first introduce a minimum validity criterion for as-if procedures.
Definition 2 (Valid confidence procedure). Fix γ ∈ [0, 1]. Let η_0 ∈ D be the design used in the original experiment and let H be a design map on Z_{η_0}. The confidence procedure C(τ̂(Z); H(Z)) is said to be valid with respect to η_0, or η_0-valid, at level γ if pr_{η_0}{τ ∈ C(τ̂(Z); H(Z))} ≥ γ. When a procedure is valid at all levels, we simply say that it is η_0-valid.
This criterion is intuitive: a confidence procedure is valid if its coverage is as advertised over the original design. The following simple observation formalizes the popular injunction to "analyze the way you randomize" (Lachin, 1988, p. 317): the oracle procedure C(τ̂(Z); η_0) built with the original design is η_0-valid, directly from the definition of the quantiles U_α(η_0) and L_α(η_0). Given that the procedure C(τ̂(Z); η_0) is η_0-valid, why should we consider alternative as-if analyses, even valid ones? That is, having observed Z, why should we use a design H(Z) to perform the analysis instead of the original η_0? A natural, but only partially correct, answer would be that the goal of an as-if analysis is to increase the precision of our estimator and obtain smaller confidence intervals while maintaining validity. After all, this is the purpose of restricted randomization approaches when considered at the design stage. For instance, if we have reason to believe that a certain factor might affect the responses of experimental units, stratifying on this factor will reduce the variance of our estimator. This analogy, however, is misleading. The primary goal of an as-if analysis is not to increase the precision of the analysis but to increase its relevance. In fact, we argue heuristically in Section 6 that (given a conditionally unbiased estimator) an as-if analysis will not increase precision on average over the original assignment distribution; rather, it is frequently an accompanying change of estimator that has created this impression.
Informally, an observable quantity is relevant if a reasonable person cannot tell, given that quantity, whether a confidence interval will be too long or too short. The concept of relevance captures the idea that our inference should be an accurate reflection of our uncertainty given the observed information from the realized randomization; our confidence intervals should be narrower if our uncertainty is lower and wider if our uncertainty is higher. Defining relevance formally is difficult; see Buehler (1959) and Robinson (1979) for more formal treatments. Appendix B gives a precise discussion in the context of betting games, following Buehler (1959). We will not attempt a formal definition here; instead, following Liu and Meng (2016), we illustrate its essence with a simple example.
Consider the Bernoulli design scenario of Example 1 with 100 units. From Equation (2), the oracle interval C(τ̂(Z); η_0) constructed from the original design has the same length regardless of the assignment vector Z actually observed. Yet, intuitively, an observed assignment vector with 1 treated unit and 99 control units should lead to less precise inference, and therefore wider confidence intervals, than a balanced assignment vector with 50 treated units and 50 control units. In a sense, the confidence interval C(τ̂(Z); η_0) is too narrow if the observed assignment is severely imbalanced, too wide if it is well balanced, but right overall. Let Z_1 be the set of all assignments with a single treated unit and Z_50 the set of all assignments with 50 treated units. If the confidence interval C(τ̂(Z); η_0) has level γ, we expect

pr_{η_0}{τ ∈ C(τ̂(Z); η_0) | Z ∈ Z_1} ≤ γ ≤ pr_{η_0}{τ ∈ C(τ̂(Z); η_0) | Z ∈ Z_50},

where the inequalities are usually strict; see appendix B for a proof in a concrete setting. More formally, we say that the procedure is valid marginally, but is not valid conditional on the number of treated units. This example is illustrated in Figure 1, which shows that the conditional coverage is below 0.95 if the proportion of treated units is not close to 0.5, and above 0.95 if the proportion is around 0.5. To remedy this, we should use wider confidence intervals in the first case, and narrower ones in the second. The right panel of Figure 1 shows that a large fraction of assignments have a proportion of treated units close to 0.5; therefore the confidence interval C(τ̂(Z); η_0) will be too wide for many realizations of the design η_0. Our confidence interval should be relevant to the assignment vector actually observed, and reflect the appropriate level of uncertainty. In the context of randomization-based inference, this takes the form of an as-if analysis.
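The coverage phenomenon described above can be checked by simulation. The following sketch uses a smaller experiment (a hypothetical N = 20 with invented potential outcomes): it approximates the oracle quantiles under the Bernoulli design, then estimates the coverage of the fixed-length marginal interval conditionally on whether the realized assignment is balanced or imbalanced:

```python
import math, random

# Monte Carlo sketch of marginal vs conditional coverage under a
# Bernoulli design; N and the potential outcomes are hypothetical.
random.seed(0)
N = 20
Y0 = [float(i) for i in range(N)]   # spread in Y(0) makes imbalance costly
Y1 = [y + 1.0 for y in Y0]          # constant unit-level effect
tau = 1.0

def tau_hat(z):
    t = [Y1[i] for i in range(N) if z[i]]
    c = [Y0[i] for i in range(N) if not z[i]]
    return sum(t) / len(t) - sum(c) / len(c)

def draw():
    """One Bernoulli(0.5) assignment, rejecting degenerate ones."""
    while True:
        z = [random.randint(0, 1) for _ in range(N)]
        if 0 < sum(z) < N:
            return z

# Approximate the oracle quantiles of tau_hat(Z) - tau under eta_0.
alpha, reps = 0.025, 100_000
diffs = sorted(tau_hat(draw()) - tau for _ in range(reps))
U = diffs[math.ceil((1 - alpha) * reps) - 1]
L = diffs[reps - math.ceil((1 - alpha) * reps)]

# The fixed-length interval covers when L <= tau_hat - tau <= U.
# Estimate coverage conditionally on the realized number treated.
bal, imb = [], []
for _ in range(reps):
    z = draw()
    hit = L <= tau_hat(z) - tau <= U
    n1 = sum(z)
    if 8 <= n1 <= 12:
        bal.append(hit)
    elif n1 <= 4 or n1 >= 16:
        imb.append(hit)

cov_bal = sum(bal) / len(bal)
cov_imb = sum(imb) / len(imb)
print(round(cov_bal, 3), round(cov_imb, 3))  # typically above / below 0.95
```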
The concept of relevance and its connection to conditioning has a long history in statistics. Cox (1958) gives a dramatic example of a scenario in which two measuring instruments have widely different precisions, and one of them is chosen at random to measure a quantity of interest. Cox argues that the relevant measure of uncertainty is that of the instrument actually used, not that obtained by marginalizing over the choice of the instrument. In other words, the analysis should be conditional on the instrument actually chosen. This is an illustration of the conditionality principle (Birnbaum, 1962). Many of the examples we gave earlier are similar in spirit to this example. In the context of randomization-based inference, this conditioning argument leads to valid as-if analyses, as we show in Section 3. An important complication, explored in Section 3, is that conditional as-if analyses are only a subset of possible as-if analyses, and while the former are guaranteed to be valid, the latter enjoy no such guarantees.

Conditional as-if analyses

Conditional design maps
We define a conditional as-if analysis as an analysis conducted with a conditional design map as defined below.
Definition 3 (Conditional design map). Consider an experiment with design η_0. Take any function w : Z_{η_0} → Ω, for some set Ω, and for υ ∈ Ω define the design η_υ ∈ D as η_υ(z) = pr_{η_0}{Z = z | w(Z) = υ}. That is, η_υ is the design which conditions on w(Z) = υ. Then the function H : Z_{η_0} → D defined by H(z) = η_{w(z)} is called a conditional design map (here we condition on the observed value w(Z)).
It is easy to verify that a conditional design map also satisfies Definition 1. For z ∈ Z_{η_0}, H(z) is a design, not the probability of z under some design. The probability of any particular assignment z under the design H(Z) is H(Z)(z) = η_{w(Z)}(z) = pr{Z = z | w(Z)}, where the probability is that induced by η_0. We introduce the shorthand H(Z)(z) = η_{w(Z)}(z) in order to ease the notation.
For an alternative perspective on conditional design maps, notice that any function w : Z_{η_0} → Ω induces a partition P_w = {Z_υ : υ ∈ Ω} of the assignment space, where Z_υ = {z ∈ Z_{η_0} : w(z) = υ}. The corresponding conditional design map then, for a given Z, restricts and renormalizes the original η_0 to the cell Z_υ containing Z. An important note is that the mapping function w, and therefore the partitioning of the assignment space, must be fixed before observing the treatment assignment.
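A generic conditional design map can be implemented directly from this description: group the support by the value of w, renormalize η_0 within each cell, and return the cell's design for the observed assignment. A minimal sketch, using a hypothetical Bernoulli-style design and w = N_1:

```python
import itertools
from collections import defaultdict

# Hypothetical Bernoulli-style design: uniform over all non-degenerate
# assignments of N = 6 units.
N = 6
support = [z for z in itertools.product([0, 1], repeat=N) if 0 < sum(z) < N]
eta0 = {z: 1.0 / len(support) for z in support}

def conditional_design_map(eta0, w):
    """H(z) = eta_0 conditioned on w(Z) = w(z): restrict eta_0 to the
    cell {z' : w(z') = w(z)} and renormalize within it."""
    cells = defaultdict(dict)
    for z, p in eta0.items():
        cells[w(z)][z] = p
    designs = {}
    for v, cell in cells.items():
        total = sum(cell.values())
        designs[v] = {z: p / total for z, p in cell.items()}
    return lambda z: designs[w(z)]

H = conditional_design_map(eta0, w=sum)   # condition on N_1(Z) = sum(Z)

# The cells {z : w(z) = v} partition the support: every assignment lies
# in its own cell, and each conditional design is a proper distribution.
for z in support:
    design = H(z)
    assert z in design
    assert abs(sum(design.values()) - 1.0) < 1e-12
print(len({frozenset(H(z)) for z in support}))   # 5 cells: N_1 in {1,...,5}
```

Note that the map is built from `eta0` and `w` alone, before any assignment is observed, matching the requirement that w be fixed in advance.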
Example 1 (cont.). The design map H(Z) in Example 1 is a conditional design map, with w(Z) = N 1 (Z). Here we partition the assignments by the number of treated units.
Example 2 (cont.). The design map H(Z) in Example 2 is a conditional design map, with w(Z) = N 1,block (Z). Here we partition the assignments by the vector of the number of treated units in each block.
While Definition 3 implies Definition 1, the converse is not true: some design maps are not conditional. For instance, the design maps we consider in Example 4 and Example 5 are not conditional, as will be discussed in Section 3.2. We can now state our main validity result.
Theorem 1. Consider a design η 0 and a function w : Z η 0 → Ω. Then an oracle procedure built with the conditional design map H(Z) = η w(Z) is η 0 -valid.
The proof of Theorem 1 is provided in appendix A. In fact, the intervals obtained are not just valid marginally; they are also conditionally valid within each Z_υ ∈ P_w, in the sense that

pr_{η_0}{τ ∈ C(τ̂(Z); H(Z)) | Z ∈ Z_υ} ≥ γ

for any υ ∈ Ω. Conditional validity implies unconditional validity: if we have valid inference within each cell of a partition of the assignment space, then averaging over cells yields validity overall. Similar arguments regarding conditional inference have been made previously (for example, Dawid, 1991, p. 84, states that unconditional coverage of conditional intervals with correct conditional coverage is trivial). To our knowledge, however, this result is novel in the randomization-based causal inference framework. Conditional validity is good: it implies increased relevance, at least with respect to the function w. We discuss this connection further in appendix B in the context of betting games.
Corollary 1, whose details are provided in appendix A, states that any partition P of the support Z_{η_0} induces a valid oracle confidence procedure: having observed assignment Z, one simply needs to identify the unique element Z ∈ P containing Z and construct an oracle interval using the design obtained by restricting η_0 to the set Z.
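The conditional validity statement can be checked exactly on a toy example by enumerating each cell of the partition and computing the conditional coverage of that cell's oracle intervals. The potential outcomes below are hypothetical:

```python
import itertools, math

# Exact check of conditional validity on a toy Bernoulli-style design:
# within every cell of the partition indexed by N_1, the cell's oracle
# interval covers tau at the nominal rate. Outcomes are hypothetical.
N = 8
Y0 = [1.0, 4.0, 2.0, 8.0, 5.0, 7.0, 3.0, 6.0]
Y1 = [y + 2.0 for y in Y0]
tau = 2.0
alpha = 0.1   # 80% oracle intervals

def tau_hat(z):
    t = [Y1[i] for i in range(N) if z[i]]
    c = [Y0[i] for i in range(N) if not z[i]]
    return sum(t) / len(t) - sum(c) / len(c)

support = [z for z in itertools.product([0, 1], repeat=N) if 0 < sum(z) < N]

worst = 1.0
for k in range(1, N):   # cell Z_v = {z : N_1(z) = k}
    cell = [z for z in support if sum(z) == k]
    diffs = sorted(tau_hat(z) - tau for z in cell)
    m = len(diffs)
    U = diffs[math.ceil((1 - alpha) * m) - 1]
    L = diffs[m - math.ceil((1 - alpha) * m)]
    cov = sum(L <= tau_hat(z) - tau <= U for z in cell) / m
    worst = min(worst, cov)

print(worst)   # conditional coverage in every cell is at least 0.8
```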
An additional benefit of using conditional design maps is replicability. Consider Example 1, and the corresponding discussion of relevance with Bernoulli designs in Section 2.3. Under the original analysis for the Bernoulli design we would expect that the estimates for the bad randomizations with an extreme proportion of treated units will be far from the truth, on average. But if we do not adjust the estimated precision of our estimators based on this information, we may not only have an estimate that is far from the truth but our confidence intervals will imply confidence in that poor estimate. Although our conditional analysis will cover the truth the same proportion of the time as the original analysis, we would expect the length of our confidence intervals to reflect less certainty when we have a poor randomization. In terms of replicability, this means that we are less likely to end up being confident in an extreme result.

Non-conditional design maps
Theorem 1 states that a sufficient condition for an as-if procedure to be valid is that it be a conditional as-if procedure. Although this condition is not necessary, we will now show that some non-conditional as-if analyses can have arbitrarily poor properties. Example 4, in particular, provides a sharp illustration of this phenomenon and, although it is an edge case, it helps build intuition for why some design maps are not valid.
Example 4 (cont.). The design map H(Z) = pr_{η_0}{· | Z ∈ A(Z)} introduced in Example 4 is not a conditional design map. This can be seen by noticing that the sets {A(Z)}_{Z ∈ Z_{η_0}}, where A(Z) = {z : |∆_X(z)| ≤ |∆_X(Z)|}, do not form a partition of Z_{η_0}.
This example is particularly deceptive because the design map H(Z) does involve a conditional distribution. And yet, it is not a conditional design map in the sense of Definition 3 because it does not partition the space of assignments; each assignment z, except for the assignments with the very worst balance, will belong to multiple A(Z). Therefore Theorem 1 does not apply.
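The failure of the partition property is easy to see numerically. In this hypothetical sketch, the sets A(z) are nested rather than disjoint; in particular, the best-balanced assignment belongs to every one of them:

```python
import itertools, random

# Hypothetical covariate and a small completely randomized design.
random.seed(1)
N, N1 = 8, 4
X = [random.gauss(0, 1) for _ in range(N)]

def imbalance(z):
    """Difference in covariate means between treated and control."""
    t = [X[i] for i in range(N) if z[i]]
    c = [X[i] for i in range(N) if not z[i]]
    return sum(t) / len(t) - sum(c) / len(c)

support = [z for z in itertools.product([0, 1], repeat=N) if sum(z) == N1]

def A(z):
    """Assignments with covariate balance no worse than z's."""
    return frozenset(zz for zz in support
                     if abs(imbalance(zz)) <= abs(imbalance(z)))

# The sets A(z) are nested, not disjoint; in particular the
# best-balanced assignment belongs to every one of them.
sets = {A(z) for z in support}
z0 = min(support, key=lambda z: abs(imbalance(z)))
memberships = sum(z0 in s for s in sets)
print(len(sets), memberships)   # the two counts coincide: z0 is in every set
```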
Consider the special case where the covariates X_i are drawn from a continuous distribution and Y_i(0) = Y_i(1) = X_i (i = 1, . . . , N), so that τ = 0. We are interested in the average treatment effect, the difference in mean potential outcomes under treatment versus control. Suppose that assignments are balanced, with half the units assigned to treatment and half to control. Then, given any z ∈ Z_{η_0}, with probability one there are only two assignments with exactly the same value of |∆_X(z)|, namely z and the assignment 1 − z; see appendix A for a proof of this statement. By construction, then, our assignment is one of the two worst-case assignments in terms of balance within the set A(Z). Under the model Y_i(0) = Y_i(1) = X_i, the observed difference τ̂(Z) − τ = τ̂(Z) will be the most extreme in A(Z), and thus τ will lie outside the oracle confidence interval whenever, as is typical, the set is large enough, with |A(Z)| ≥ 2/(1 − γ), where |A(Z)| is the size of the set A(Z). Thus this design map leads to poor coverage. In fact, we show in appendix A that if we instead make the inequality strict and take A(Z) = {z : |∆_X(z)| < |∆_X(Z)|}, the as-if procedure of Example 4 has a coverage of 0. Intuitively, this is because the observed assignment Z always has the worst covariate balance of all assignments within the support A(Z). Although extreme, this example illustrates the fact that as-if analyses are not guaranteed to be valid if they are not conditional.
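The poor coverage of this non-conditional as-if analysis can also be checked directly. The sketch below enumerates the adversarial case Y_i(0) = Y_i(1) = X_i on a small hypothetical design and computes the exact coverage of the as-if rerandomization intervals (using the non-strict set A(Z)); the result falls far short of the nominal 95%:

```python
import itertools, math, random

# Adversarial case of Example 4: Y_i(0) = Y_i(1) = X_i, so tau = 0 and
# the balance measure coincides with tau_hat. Small hypothetical design.
random.seed(2)
N, N1 = 8, 4
X = [random.gauss(0, 1) for _ in range(N)]
Y = X[:]

def tau_hat(z):
    t = [Y[i] for i in range(N) if z[i]]
    c = [Y[i] for i in range(N) if not z[i]]
    return sum(t) / len(t) - sum(c) / len(c)

support = [z for z in itertools.product([0, 1], repeat=N) if sum(z) == N1]
imb = {z: abs(tau_hat(z)) for z in support}  # |Delta_X| = |tau_hat| here

alpha = 0.025
covered = 0
for z in support:
    # As-if rerandomized design: restrict to balance <= observed balance.
    A = [zz for zz in support if imb[zz] <= imb[z]]
    diffs = sorted(tau_hat(zz) for zz in A)   # tau_hat - tau, with tau = 0
    m = len(diffs)
    U = diffs[math.ceil((1 - alpha) * m) - 1]
    L = diffs[m - math.ceil((1 - alpha) * m)]
    covered += L <= tau_hat(z) <= U

coverage = covered / len(support)
print(coverage)   # far below the nominal 0.95
```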
Example 5 (cont.). The design map introduced in Example 5, in which we analyze a blocked design as if it were completely randomized, is also not a conditional design map. This can be seen by noticing that the completely randomized design cannot be obtained by conditioning the blocked design; rather, the support of the blocked design is a subset, a single cell of a partition, of the support of the completely randomized design.
This implies that Example 5 can also lead to invalid analyses; if the blocked design originally used is a particularly bad partition of the completely randomized design, in the sense of having wider conditional intervals, we will not have guaranteed validity using a completely randomized design for analysis. See Pashley and Miratrix (2021) for further discussion on when a blocked design can result in higher variance of estimators than an unblocked design. This is a tricky case as typically analyzing a blocked experiment as completely randomized will lead to conservative estimation (due to inflated estimation of variance), but such a result is not guaranteed.

How to build a better conditional analysis
The original goal of the as-if analysis of Example 4 was to incorporate the observed covariate balance into the analysis to increase relevance. We have shown that the design map originally proposed was not a conditional design map. We now show how to construct a conditional design map, and therefore a valid procedure, for this problem. The idea is to partition the support Z_{η_0} into sets of assignments with similar covariate balance and then use the induced conditional design map, as prescribed by Corollary 1. Let {∆_X(z) : z ∈ Z_{η_0}} be the set of all covariate imbalance values achievable by the design η_0, and let G = {G^{(1)}, . . . , G^{(K)}} be a partition of that set into K ordered elements; that is, for any k, k′ with k < k′, we have δ < δ′ for all δ ∈ G^{(k)}, δ′ ∈ G^{(k′)}. This induces a partition P = {Z^{(1)}, . . . , Z^{(K)}} of Z_{η_0}, with Z^{(k)} = {z ∈ Z_{η_0} : ∆_X(z) ∈ G^{(k)}}. Now we can directly apply the results of Corollary 1. This approach is similar in spirit to the conditional randomization tests proposed by Rosenbaum (1984); see also Branson and Miratrix (2019) and Hennessy et al. (2016). The resulting as-if analysis improves on the original analysis under η_0 by increasing its relevance. Indeed, suppose that the observed assignment has covariate balance δ. Then the confidence interval constructed using η_0 will involve all of the assignments in Z_{η_0}, including some whose covariate balances differ sharply from δ. In contrast, the procedure we just introduced restricts the randomization distribution to a subset of assignments Z^{(k)} containing only assignments with balance close to δ. This does not, however, completely solve the original problem. Suppose, for instance, that by chance, ∆_X(Z) = max G^{(k)}.
By definition, the randomization distribution of the as-if analyses we introduced above will include the assignment z such that ∆_X(z) = min G^(k), but not z* such that ∆_X(z*) = min G^(k+1), even though z* might be more relevant to Z than z, in the sense that we may have |∆_X(Z) − ∆_X(z*)| ≤ |∆_X(Z) − ∆_X(z)|. This issue does not affect validity, but it raises concerns about relevance when the observed assignment is close to the boundary of a set Z^(k). Informally, we would like to choose Z^(k) in such a way that the observed assignment Z is at the center of the set, as measured by covariate balance. For instance, fixing c > 0, we would like to construct an as-if procedure that randomizes within a set of the form B(Z) = {z : ∆_X(z) ∈ [∆_X(Z) − c, ∆_X(Z) + c]}, rather than Z^(k). A naive approach would be to use the design mapping H : Z → pr_{η_0}{z | z ∈ B(Z)}, but this is not a conditional design mapping. Branson and Miratrix (2019) discussed a similar approach in the context of randomization tests and also noted that it was not guaranteed to be valid.
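As a concrete illustration of the partition construction above, the following sketch enumerates a small completely randomized design, bins the achievable imbalance values into K ordered groups, and recovers the induced cells Z^(k). The toy sizes, the quantile-based binning, and names such as `conditional_support` are our own illustrative choices, not part of the paper's formal development.

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)
N, N1, K = 8, 4, 5  # toy sizes, chosen for illustration
X = rng.normal(size=N)

# All assignments of a completely randomized design with N1 treated units.
assignments = [z for z in itertools.product([0, 1], repeat=N) if sum(z) == N1]

def imbalance(z):
    z = np.asarray(z)
    return X[z == 1].mean() - X[z == 0].mean()

# Partition the achievable imbalance values into K ordered groups G^(1..K);
# this induces a partition {Z^(1), ..., Z^(K)} of the assignment space.
deltas = np.array([imbalance(z) for z in assignments])
edges = np.quantile(deltas, np.linspace(0, 1, K + 1))
cell = np.clip(np.searchsorted(edges, deltas, side="right") - 1, 0, K - 1)

def conditional_support(z_index):
    """Cell Z^(k) of the observed assignment: the set the induced conditional
    design map randomizes over (uniformly, under complete randomization)."""
    k = cell[z_index]
    return [assignments[j] for j in range(len(assignments)) if cell[j] == k]
```

Because every assignment falls in exactly one cell, the cells form a genuine partition, which is exactly what the naive window B(Z) fails to provide.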
Let us explore further why H : Z → pr_{η_0}{z | z ∈ B(Z)} does not have guaranteed validity. In this case, each assignment vector has an interval, or window, of acceptable covariate balances centered around it. The confidence interval for a given Z ∈ Z_{η_0} is guaranteed to have γ · 100% coverage over all assignments within the window of covariate balances defined by the set B(Z). So, if we built a confidence interval for each assignment in B(Z), using the design conditioning on B(Z), then γ · 100% of those intervals would cover the truth. However, we would only ever observe these intervals for assignments with exactly the same covariate balance as Z; other assignments would get mapped to different windows B(Z′). Furthermore, there are no guarantees about which assignments will result in a confidence interval covering the truth. Over the smaller subset of assignments with exactly the same covariate balance as Z, which lead to the same design over B(Z), the coverage may be less than γ · 100%.
To build a solution with guaranteed validity, we need more flexible tools. The following section will discuss how we can be more flexible, while still guaranteeing validity, by introducing some randomness.

Stochastic conditional as-if

4.1 Stochastic design maps
The setting of Example 4 posed a problem of how to build valid procedures that allow the design mapping to vary based on the assignment. That is, we want to avoid making a strict partition of the assignment space but still guarantee validity. We can do this by introducing some randomness into our design map.
Definition 4. [Stochastic conditional design map] Consider an experiment with design η_0. For observed assignment Z ∈ Z_{η_0}, draw an additional bit of randomness w ∼ m(w | Z) from a given distribution m(· | Z), indexed by Z and with support on some set Ω, and consider the design η_w(z) = pr_{η_0}{z | w} ∝ m(w | z) pr_{η_0}{z}.
The mapping H : Z → D, with H(Z) = η w and w ∼ m(w | Z), is called a stochastic design mapping.
This w is our bit of randomness that will allow us to blend our conditional maps to regain validity. In the special case where the distribution m(w | Z) degenerates to a point mass at w(Z), that is, m(w | Z) = 1(w = w(Z)), Definition 4 is equivalent to Definition 3. When m is non-degenerate, the stochastic design map H becomes a random function.
Before stating our theoretical result for stochastic design maps, we first examine how the added flexibility that these maps afford can be put to use in the context of Example 4. Let c > 0 and Ω = R, and define m(w | Z) to be the uniform distribution on [∆_X(Z) − c, ∆_X(Z) + c], so that m(w | Z) selects a w_obs near the observed imbalance. Having observed z ∈ Z_{η_0} and drawn w_obs ∼ m(w | Z = z), we then consider the design with normalizing factor ν(w) = pr_{η_0}{w}/m(w | Z) = 2c pr_{η_0}{w} (with some adjustment due to truncation at the extremes). In other words, we analyze the experiment by restricting the randomization to a set A(w) = {z : ∆_X(z) ∈ [w − c, w + c]}. Comparing A(w) to our original randomization set B(Z), we see that while B(Z) is a window of imbalances centered on the observed ∆_X(Z), A(w) is centered on ∆_X(Z) only on average over draws of w. The following theorem guarantees that this stochastic procedure is valid. The proof is in Appendix A.
Theorem 2. Consider a design η 0 and a variable w, with conditional distribution m(w | Z).
Then an oracle procedure built at level γ with the stochastic conditional design map H(Z), which draws w_obs and maps to η_{w_obs}(z) = pr_{η_0}{z | w_obs}, is η_0-valid at level γ.
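The stochastic design map for Example 4 can be sketched as follows: draw w uniformly in a window of half-width c around the observed imbalance, then randomize within A(w). The sizes, the bandwidth c, and the function names are our own illustrative choices.

```python
import itertools
import numpy as np

rng = np.random.default_rng(1)
N, N1, c = 8, 4, 0.25  # toy sizes and bandwidth, chosen for illustration
X = rng.normal(size=N)

assignments = [np.array(z) for z in itertools.product([0, 1], repeat=N)
               if sum(z) == N1]

def imbalance(z):
    return X[z == 1].mean() - X[z == 0].mean()

def stochastic_design(z_obs):
    """Draw w ~ Unif[D - c, D + c] around the observed imbalance D, then
    restrict the randomization to A(w) = {z : imbalance(z) in [w - c, w + c]}."""
    D = imbalance(z_obs)
    w = rng.uniform(D - c, D + c)
    support = [z for z in assignments if abs(imbalance(z) - w) <= c]
    return w, support

z_obs = assignments[0]
w, support = stochastic_design(z_obs)
```

Note that since |∆_X(z_obs) − w| ≤ c by construction, the observed assignment always lies in its own reference set A(w), unlike the hard partition of the previous section, where Z could sit at a cell boundary.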
Stochastic conditional design maps mirror the conditioning mechanisms introduced by Basse et al. (2019) in the context of randomization tests. Inference is also stochastic here in the sense that a single draw of w determines the final reference distribution used to calculate the confidence interval. We next discuss a way to aggregate results from multiple draws of w.

Aggregating random confidence intervals
The randomness intrinsic to the method introduced in this section is similar to the introduction of randomness into uniformly most powerful (UMP) tests (see Lehmann and Romano, 2005). Instead of having one's inference depend on a single draw of w, one can use fuzzy confidence intervals (see Geyer and Meeden, 2005) to marginalize over the distribution m. In a fuzzy interval, similar to fuzzy sets, membership in the interval is not binary but rather allowed to take values in [0,1].
Consider an oracle confidence procedure C(τ̂(Z); H(Z)) built with a stochastic conditional design map H. The observed confidence interval for a fixed, observed z, C(z) = C(τ̂(z); H(z)), is still a random variable because H is stochastic. Specifically, C(z) = C(z, w) depends on the draw w ∼ m(w | z), so for a given observed dataset we obtain a distribution over confidence intervals C(z, w), where the randomness is that induced by w ∼ m(w | z). One useful way to summarize this information is to construct a fuzzy confidence interval (Geyer and Meeden, 2005) with membership function I(θ; z) = P(θ ∈ C(z, w)), where P is with respect to w ∼ m(w | z). For any θ, the value I(θ; z) is the fraction of the random confidence intervals C(z, w), for a fixed z, that contain θ; in particular, I(θ; z) ∈ [0, 1]. A value of one represents values of θ that are covered by all the random intervals, while a value of zero represents values of θ covered by none. This approach has the benefit of summarizing all the information we get into a single function.
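A minimal sketch of the fuzzy membership function: for a fixed observed z, we approximate I(θ; z) by the fraction of intervals C(z, w) containing θ over many draws of w. The map from w to an interval and the distribution of w below are fabricated purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

# Stand-in for the random interval C(z, w): each draw of w shifts a fixed-width
# interval. Both this map and the distribution of w are illustrative only.
def interval_given_w(w):
    return (w - 1.96, w + 1.96)

def fuzzy_membership(theta, draws):
    """I(theta; z): fraction of the random intervals C(z, w) containing theta."""
    covered = [lo <= theta <= hi
               for lo, hi in (interval_given_w(w) for w in draws)]
    return float(np.mean(covered))

draws = rng.normal(0.0, 0.5, size=2000)  # draws of w ~ m(w | z), assumed normal
```

Values of θ near the center of the distribution of intervals get membership near one, while values far outside every interval get membership zero, so a single curve θ ↦ I(θ; z) summarizes all draws.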

Discussion: Implications for Matching
The as-if framework and theory we have developed is a useful tool for evaluating a given analysis method. As an example, we consider analyzing data matched post-randomization as if it were pair randomized. Matching is a powerful tool for extracting quasi-randomized experiments from observational studies (Ho et al., 2007; Rubin, 2007; Stuart, 2010). To highlight the conceptual difficulty with post-matching analysis, we consider the idealized setting where treatment is assigned according to a known Bernoulli randomization mechanism, η_0, and matching is performed subsequently. Specifically, units are assigned to treatment independently with probability p_i = logit^{-1}(α_0 + α_1 X_i), where X_i = (X_{i,1}, X_{i,2}) ∈ R^2 is a unit-level covariate vector. For simplicity, we focus on pair matching, where treated units are paired with control units with similar covariate values. One way to analyze the pairs is as if the randomization had been a matched pairs experiment. Although this method of analysis has already received scrutiny in the literature (see, e.g., Schafer and Kang, 2008; Stuart, 2010), it is worth asking: is this a conditional design map with guaranteed validity?

If we can exactly match on X_i, then the situation is identical to that of Example 2; the as-if pair randomized design map is a conditional design map, and the procedure is therefore guaranteed to be valid. Exact matching is, however, often hard to achieve in practice. Instead, we generally rely on approximate matching, in which the covariate distance between the units within a pair is small, but not zero. Unfortunately, with approximate matching, the as-if pair randomized design map is not a conditional design map. To show this formally, let R be a matching algorithm which, given an assignment and fixed covariates, returns a set of L pairs, which we denote M = {(i_{1,1}, i_{1,2}), . . . , (i_{L,1}, i_{L,2})}.
Assuming a deterministic matching algorithm, let M_obs = R(z) be the matching obtained from an observed assignment z. Treating the matched data as a matched pairs experiment implies analyzing over all assignment vectors that permute the treatment assignment within the pairs. A necessary condition for pairwise randomization to be a conditional procedure is that R(z*) = M_obs for all z* in the set of pairwise permuted assignments; that is, for any permutation of treatment within pairs, the matching algorithm R must return the original matches.
This condition is not guaranteed. To illustrate, consider the first three steps inside the light grey box in Figure 2, in which we consider a greedy matching algorithm. If we analyze the matched data as if it were a matched pairs design, then the permutation shown is allowable by the design. However, we see in the dotted rectangle of Figure 2 that if we had observed that permutation as the treatment assignment, we would have ended up matching the units differently, and therefore would have conducted a different analysis. This is essentially the issue we encountered earlier in Section 4: we have not created a partition of our space. The upshot is that when matching is not exact, analyzing the data as if it came from a paired-randomized study cannot be justified by a conditioning argument. A proper conditional analysis would need to take the matching algorithm into account. Specifically, let P(M) be the set of all treatment assignment vectors that are permutations of treatment within a set of matches M, and let V(M) be the set of assignments that would lead to matches M using algorithm R. Then pr_{η_0}(z* | R(z*) = R(z)) = pr_{η_0}(z* | z* ∈ V(R(z))).
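The failure of the stability condition R(z*) = M_obs under approximate matching can be seen in a few lines. The greedy matcher and the numeric example below are our own illustrative stand-ins, not the algorithm of Figure 2.

```python
import numpy as np

def greedy_pair_match(z, X):
    """Toy greedy matcher R: pair each treated unit, in index order, with the
    nearest not-yet-used control. Illustrative only."""
    treated = [i for i in range(len(z)) if z[i] == 1]
    controls = {i for i in range(len(z)) if z[i] == 0}
    pairs = []
    for t in treated:
        c = min(controls, key=lambda j: abs(X[t] - X[j]))
        controls.discard(c)
        pairs.append(frozenset((t, c)))
    return frozenset(pairs)

def within_pair_permutations_stable(z, X):
    """Necessary condition for a conditional procedure: R must return the same
    matches for every within-pair swap of treatment."""
    M = greedy_pair_match(z, X)
    for pair in M:
        z_star = z.copy()
        for i in pair:
            z_star[i] = 1 - z_star[i]  # swap treatment within this pair
        if greedy_pair_match(z_star, X) != M:
            return False
    return True

# Approximate matching: swapping within one pair changes the matches.
X_approx = np.array([0.0, 1.0, 1.1, 3.0])
z = np.array([0, 1, 0, 1])

# Exact matching: the matches are invariant to within-pair swaps.
X_exact = np.array([0.0, 0.0, 1.0, 1.0])
z2 = np.array([1, 0, 1, 0])
```

In the approximate example, swapping treatment within the pair containing the outlying covariate value leads the greedy matcher to regroup the units, so the within-pair permutation distribution is not a conditional design.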
Note that this is equal to Unif(V(R(z))) if assignments in V(R(z)) have equal probability under the original design. If we condition on having observed a certain matched set, we would not use all within-pair permutations of those matches, but rather all permutations that would have resulted in the same matches given our matching algorithm. Still, this distinction appears to matter more in theory than in practice. Matching has proven to be a reliable tool for achieving covariate balance, so that an observational study resembles an experiment and thus gives us hope of obtaining good inference. Further, other inference frameworks, such as those based on superpopulations with assumptions on the relationship between covariates and outcomes, may provide stronger theoretical bases for matching; see Abadie and Imbens (2006) for large-sample properties of matching.
6 Discussion: Estimators, designs, and practical considerations

In this section we discuss some practical considerations and limitations of conditional inference and as-if analyses. We start by discussing the impact of the choice of estimators, followed by an exploration of limitations when we do not have oracles to produce confidence intervals, and end by emphasizing difficulties regarding choices of what to condition on.

Estimators and power
An important consideration in practice is that the estimators we should use may change based on the design we use. For instance, in post-stratification we would use the blocking estimator, which averages block treatment effect estimates, rather than the completely randomized simple difference estimator. These two estimators are identical when the proportion treated within each block is the same, but otherwise the completely randomized estimator will be biased for the post-stratified design. Although this is not an issue when we have an oracle, because it will still correctly identify the distribution of τ̂(Z) − τ for any estimator τ̂(Z), thus accounting for any bias, in practice we would want to use unbiased estimators. Our validity result still holds even if we change the estimator for each different blocked design. This validity holds because we have conditional validity for each blocked design and therefore validity over the entire completely randomized design; see the proof of Theorem 1, which shows that we only need conditional validity to obtain overall validity. In practice, however, a full comparison of a conditional and a non-conditional analysis may require a comparison of the relevant estimators as well.

Moreover, it is often this changing of estimators that leads to confusion about power changes for conditional analyses. In general, a conditional analysis will not result in higher precision or smaller confidence intervals than an unconditional one, on average, if we keep our estimators the same. This can be seen through a simple heuristic argument concerning the variance of our estimators. Although the mode of inference in this paper does not rely strictly on the variance of our estimators, the variance is a proxy for the length of the confidence intervals. Consider restricting randomization by conditioning on some information about the assignment given by a random variable w.
We have the following well-known variance decomposition: var_{η_0}(τ̂) = E_{η_0}[var_{η_0}(τ̂ | w)] + var_{η_0}(E_{η_0}[τ̂ | w]). It is easy to see that if E_{η_0}[τ̂ | w] = τ then var_{η_0}(τ̂) = E_{η_0}[var_{η_0}(τ̂ | w)]. This implies that, on average over the distribution of w, the conditional, or restricted, variance is equal to the unconditional, or unrestricted, variance. This argument was made by Sundberg (2003) under a predictive view; Sundberg (2003) argued for the use of conditioning to lower the mean-squared error of the predicted squared error for variance estimation.
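The decomposition above can be verified by exact enumeration on a toy design: under a Bernoulli design with a constant additive effect, E[τ̂ | N_1] = τ, so the between-group term vanishes and the total variance equals the average conditional variance. The population values below are our own illustrative choices.

```python
import itertools
import numpy as np

# Exact enumeration of a Bernoulli(p) design on N units, excluding the
# all-treated and all-control assignments, with constant additive effect tau.
# Y0 is an arbitrary toy finite population.
N, p, tau = 6, 0.5, 2.0
Y0 = np.array([1.0, 3.0, 2.0, 5.0, 4.0, 0.0])
Y1 = Y0 + tau

zs, probs = [], []
for z in itertools.product([0, 1], repeat=N):
    if 0 < sum(z) < N:
        zs.append(np.array(z))
        probs.append(p ** sum(z) * (1 - p) ** (N - sum(z)))
probs = np.array(probs) / np.sum(probs)

est = np.array([Y1[z == 1].mean() - Y0[z == 0].mean() for z in zs])
w = np.array([z.sum() for z in zs])  # conditioning variable: number treated
grand = np.sum(probs * est)          # unconditional mean of the estimator

total_var = np.sum(probs * (est - grand) ** 2)
within = between = 0.0
for k in np.unique(w):
    mask = w == k
    pk = probs[mask].sum()
    mu_k = np.sum(probs[mask] * est[mask]) / pk  # E[tau_hat | w = k]
    within += np.sum(probs[mask] * (est[mask] - mu_k) ** 2)
    between += pk * (mu_k - grand) ** 2
```

Here `within` is E[var(τ̂ | w)] and `between` is var(E[τ̂ | w]); the latter is numerically zero because each conditional mean equals τ.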
Of course, if we allow different estimators to be used for the conditional and unconditional designs, we do not have such guarantees and in fact may see a precision gain through changing the estimator. For instance, in post-stratification we may see gains by using the stratified adjusted blocking estimator rather than the simple difference-in-means estimator. In particular, consider two estimators τ̂* and τ̂† which both have similar conditional variances such that

Inference without oracles
Inference will necessarily be more complicated without oracles. An obvious challenge in the absence of an oracle is that we may not have good analytical control over the randomization distribution of τ̂(Z) − τ, in the sense of being able to derive and estimate quantities such as its variance. For instance, we may have good control over the distribution of τ̂(Z) − τ under η_0 but not under the conditional design η_{w(Z)}, so that even though Theorem 1 guarantees the theoretical validity of this conditional as-if analysis, there is no way to implement it in practice. Therefore, we will often want to choose an estimator that has estimable variance and is unbiased under the conditional distribution η_{w(Z)}.
A more subtle issue is that in practice we typically need to estimate the variance to construct confidence intervals, and we often only have conservative estimators of the variance. In fact, variance estimators may be even more conservative for conditional analyses than for unconditional analyses. There may also be degrees-of-freedom losses under a conditional analysis, as can occur in post-stratification, for example. Thus, we can find situations in which the conditional interval is smaller than the marginal interval under the oracle, but the estimated conditional interval is larger than the estimated marginal interval. Therefore, in practice the theoretical benefits of conditioning must be weighed against the drawbacks of excessively conservative variance estimation.
Three examples of conditional as-if analyses that we have identified as currently feasible without oracles are (i) analyzing a Bernoulli randomized experiment as completely randomized (Example 1), (ii) analyzing a completely randomized experiment as blocked randomized (Example 2), and (iii) analyzing a marginally randomized experiment with two treatments as a factorial (Example 3). These may be simple examples but they are still pertinent; for example, online experiments (whether A/B tests run by companies or researchers on platforms such as Qualtrics) often employ Bernoulli randomization. There are some practical limitations, even for these designs. As an example, one cannot use the standard completely randomized variance estimator for a Bernoulli assignment with 1 treated unit and 99 control units. However, even the unconditional analysis would be fraught at best in this case. Further, this would occur with low probability if assignment probability is close to 0.5. This problem becomes more salient with post-stratification, when strata can be made very fine, as discussed in the following section.
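The single-treated-unit caveat can be made concrete: the usual Neyman-style variance estimate for a completely randomized analysis needs at least two units per arm to estimate within-arm variances. The function name and numbers below are ours, as a sketch of the standard estimator.

```python
import numpy as np

def neyman_var_estimate(y_treat, y_ctrl):
    """Standard completely randomized (Neyman-style) variance estimate, as a
    sketch: s1^2/n1 + s0^2/n0. Estimating within-arm variances requires at
    least two units per arm."""
    n1, n0 = len(y_treat), len(y_ctrl)
    if n1 < 2 or n0 < 2:
        raise ValueError("need at least 2 units per arm to estimate variance")
    return np.var(y_treat, ddof=1) / n1 + np.var(y_ctrl, ddof=1) / n0

# A balanced split poses no problem...
v = neyman_var_estimate(np.array([3.0, 4.0, 5.0]), np.array([1.0, 2.0, 2.0]))

# ...but a Bernoulli draw that happens to treat a single unit cannot be
# analyzed with this estimator at all.
try:
    neyman_var_estimate(np.array([3.0]), np.array([1.0, 2.0, 2.0]))
    single_unit_ok = True
except ValueError:
    single_unit_ok = False
```

As noted in the text, such degenerate draws are rare when the assignment probability is near 0.5, but they become more common as conditioning cells (for example, fine strata) shrink.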

What to condition on
Other questions of practical importance when discussing conditioning are what and how much to condition on. The theory indicates that more is always better; the more variables we condition on, the more relevant our analysis becomes. But, as we condition on more information, the partition induced on the set of all assignments becomes increasingly fine. In the extreme case, each element of the partition may contain a single assignment. This poses a philosophical problem from the point of view of randomization-based inference because there is no randomness left! This is not a problem under an oracle, which would give us a single point at the true τ in this case, but of course this precludes any analysis in practice.
If we cannot condition on everything, we must choose what to condition on, given multiple options. This is analogous to the well known fact in the frequentist literature that many ancillary statistics may exist in a given inference problem, and that two ancillary statistics may cease to be ancillary if taken jointly (Basu, 1964;Ghosh et al., 2010). Similarly, one must choose how fine to make the conditioning. For instance, in our stochastic design map example of Section 4, one must choose the bandwidth c of covariate closeness to use. The simple but vague answer is that one should condition on the quantity, or few quantities, that affect relevance the most, while remaining mindful of the analytical issues described previously. This issue of what to condition on is especially salient in the use of post-stratification, when researchers must decide what variables to use to create strata and how fine those strata should be made. This is a fundamental statistical problem that is increasingly being highlighted in areas such as high-dimensional inference, where choices must be made in terms of which covariates to use. Even running a simple regression requires these difficult choices to be made when the number of covariates is high compared to the number of observations. See Liu and Meng (2016) for a more detailed discussion on the trade-offs of relevance and robustness.

Conclusions
In providing a justification for as-if analyses, our goal was not to contradict the injunction "analyze the way you randomize", but to complement it. Our theory suggests that one can, and in fact should, analyze an experiment in a way that is both compatible with the original randomization and relevant to the observed data. As we have argued, this is in essence how the conditionality principle manifests itself in the context of randomization-based inference. Our line of argument in this paper has been primarily theoretical and conceptual rather than practical.
Although theoretically justified, and in many cases even desirable, valid as-if analyses may be impractical, as discussed in Section 6. Practicalities currently limit the scope of conditional analyses to those we already have tools for, such as post-stratification and complete randomization, and make the benefits of conditional analyses more modest. Despite these limitations, this paper contributes to the causal literature by (i) aiding causal methodologists in critiquing and reflecting on existing, complex inference strategies (e.g., matching), (ii) providing a framework for developing new methods (e.g., helping to analyze data from more complicated randomization schemes, such as in Basse et al. (2019)), and (iii) providing a framework for understanding why some styles of inference may be preferable (e.g., conditional standard errors for post-stratification).
Hence H : Z → η_{w(Z)} leads to an η_0-valid procedure, as a consequence of Theorem 1.
Proof of Unique Covariate Balance for Rerandomized As-if Design. We have that X_i is continuous for each i; for example, X_i could be normally distributed. So, for any random finite population and any fixed vector a with ith entry a_i such that a_i ≠ 0 for some i, the difference in covariate means (balance) is a continuous random variable, and therefore so is the difference in imbalance between assignment Z and any other assignment z; it takes any fixed value, in particular zero, with probability zero. This implies that for any random finite population, the probability of any given assignment having the same covariate balance measure as another assignment is zero; in other words, with probability one each covariate balance is unique.
Proof of Special Case of 0 Coverage for Rerandomized As-if Design. We now consider the special case of Example 4, with strict inequality for the rerandomization. That is, we have an original design of complete randomization and design map H : Z → pr(z | z ∈ A(Z)), where A(Z) = {z : |∆_X(z)| < |∆_X(Z)|} is the set of assignments with covariate balance strictly better than the observed covariate balance. Consider the special case where Y_i(0) = Y_i(1) = X_i (i = 1, . . . , N), so τ = 0. In that case, we have τ̂(Z) = ∆_X(Z), and so for Z ∼ η_0 and all z ∈ A(Z), |τ̂(Z)| > |τ̂(z)|.
Hence, analyzing the experiment as if it came from a stricter rerandomized design, whose non-inclusive maximal imbalance is that of the observed assignment, leads to a coverage of 0.
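This zero-coverage argument can be checked by direct enumeration: when Y_i(0) = Y_i(1) = X_i, the observed statistic always exceeds in absolute value every statistic in the strict reference set A(Z), so the sharp-null randomization p-value is zero whenever A(Z) is non-empty. The sizes below are our own illustrative choices.

```python
import itertools
import numpy as np

rng = np.random.default_rng(3)
N, N1 = 8, 4
X = rng.normal(size=N)  # Y_i(0) = Y_i(1) = X_i, so tau = 0 and tau_hat = Delta_X

assignments = [np.array(z) for z in itertools.product([0, 1], repeat=N)
               if sum(z) == N1]

def tau_hat(z):
    return X[z == 1].mean() - X[z == 0].mean()

# For each possible observed assignment, the strict reference set
# A(Z) = {z : |Delta_X(z)| < |Delta_X(Z)|} contains only assignments with
# strictly smaller |tau_hat|, so the p-value for tau = 0 is always 0.
nonempty, p_values = 0, []
for z_obs in assignments:
    ref = [z for z in assignments if abs(tau_hat(z)) < abs(tau_hat(z_obs))]
    if ref:  # assignments attaining the minimal imbalance have empty A(Z)
        nonempty += 1
        p_values.append(np.mean([abs(tau_hat(z)) >= abs(tau_hat(z_obs))
                                 for z in ref]))
```

Since the sharp null τ = 0 is rejected for every assignment with a non-empty reference set, the induced intervals never cover the truth, matching the claimed coverage of zero.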
Proof of Theorem 2. The key is to notice that, by Proposition 1, P_{η_w}(τ ∈ C(τ̂(Z); η_w)) ≥ γ for every w. Marginalizing over the draw w ∼ m(w | Z) preserves this bound under η_0, which concludes the proof.
B A simple example of relevance

B.1 Relevance and betting
First we review how the concepts of validity and relevance, as formalized by Buehler (1959) and Robinson (1979), can be made concrete through betting. We follow broadly the setup of Buehler (1959), with some modifications to accommodate our notation. Consider a betting game between two players:

1. Player 1 chooses a distribution η* and a confidence level β for the confidence procedure. Intuitively, this corresponds to the claim P(τ ∈ C(τ̂(Z); η*)) = β.
2. Player 2 selects two disjoint subsets of Z, A+ and A−. If Z ∈ A+, Player 2 will bet that the confidence interval captured τ. If Z ∈ A−, Player 2 will bet that the confidence interval did not capture τ. If Z ∉ A+ ∪ A−, Player 2 does not make a bet. We denote this strategy S(A+, A−) and define it as follows:

• If Z ∈ A+, bet that τ ∈ C(τ̂(Z); η*), betting β to win 1.

• If Z ∈ A−, bet that τ ∉ C(τ̂(Z); η*), betting 1 − β to win 1.
The return of this game, for Player 2, is R = (1{τ ∈ C(τ̂(Z); η*)} − β) 1{Z ∈ A+} + (β − 1{τ ∈ C(τ̂(Z); η*)}) 1{Z ∈ A−}, and the expected return is obtained by integrating over the design η. Define β+ = P(τ ∈ C(τ̂(Z); η*) | Z ∈ A+) and β− = P(τ ∈ C(τ̂(Z); η*) | Z ∈ A−). We then have E(R) = (β+ − β) P(Z ∈ A+) + (β − β−) P(Z ∈ A−). We can cast the validity criteria defined in Section 2.3 in terms of this betting game (Buehler (1959) uses the term weak exactness). Consider the strategy S(Z, ∅), where Z is the set of all assignments. Clearly, P(Z ∈ A−) = P(Z ∈ ∅) = 0 and P(Z ∈ A+) = P(Z ∈ Z) = 1. Moreover, β+ = P(τ ∈ C(τ̂(Z); η*) | Z ∈ A+) = P(τ ∈ C(τ̂(Z); η*)), so E(R) = P(τ ∈ C(τ̂(Z); η*)) − β. It is then easy to see that the expected return for this strategy is null if and only if the confidence interval of Player 1 has the advertised coverage β, which corresponds to our frequentist strict validity criterion. We state the following proposition:

Proposition 2. The following assertions are equivalent:

1. The procedure C(τ̂(Z); η*) is strictly valid, in the frequentist sense.
2. The expected return of strategy S(Z, ∅) is zero.
3. The expected return of strategy S(∅, Z) is zero.
In order to better understand Proposition 2, suppose that the choice of design η* leads to an interval with coverage below the advertised level β, that is, P(τ ∈ C(τ̂(Z); η*)) < β. Then, under the strategy S(∅, Z), we have E(R) = β − P(τ ∈ C(τ̂(Z); η*)) > 0, and so Player 2 can make money by betting against Player 1. Similarly, it is easy to verify that if the interval has coverage greater than advertised, the strategy S(Z, ∅) has positive return. The general idea here is that if Player 1 truly believes his confidence assertion, he should be willing to play against the strategies S(∅, Z) and S(Z, ∅).
One insight from the betting perspective is that ensuring the strategies S(Z, ∅) and S(∅, Z) have zero expected return may not be a stringent enough criterion for procedures. Suppose that the design η* is such that the procedure C(τ̂(Z); η*) has coverage probability β, as advertised. Now suppose that there is a set of assignments A such that P(τ ∈ C(τ̂(Z); η*) | Z ∈ A) = β* > β.
The key question is the following: if the observed assignment Z ∈ A, should Player 1 report the confidence β* or β? This is equivalent to asking: if you have information, based on your observed assignment, that you should expect better or worse coverage using the standard method, should you use that information to determine your actual confidence level? The betting framework offers one perspective on the problem. Consider the strategy S(A, ∅). The expected return is E(R) = (β* − β) P(Z ∈ A) > 0.
So the strategy S(A, ∅) leads to a positive expected gain, and Player 2 can make money off of Player 1 by exploiting this strategy. Following the nomenclature in Buehler (1959), we give a definition of relevance.
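The positive expected return of S(A, ∅) can be checked with a small Monte Carlo sketch. The coverage levels and the probability of landing in A below are stylized illustrative numbers, not quantities from the paper.

```python
import numpy as np

rng = np.random.default_rng(4)

# Stylized betting game: Player 1 advertises coverage beta, but on a subset A
# of assignments the true conditional coverage is beta_star > beta.
beta, beta_star, p_A = 0.95, 0.99, 0.5

def play_round():
    """Strategy S(A, emptyset): when Z lands in A, bet beta to win 1 that the
    interval covers tau; otherwise do not bet."""
    if rng.random() >= p_A:      # Z not in A: no bet placed
        return 0.0
    covers = rng.random() < beta_star  # conditional coverage on A
    return (1.0 - beta) if covers else -beta

returns = np.array([play_round() for _ in range(200_000)])
expected = (beta_star - beta) * p_A  # theoretical E(R) = (beta* - beta) P(Z in A)
```

The simulated mean return concentrates around (β* − β) P(Z ∈ A) > 0, confirming that Player 2 profits whenever a relevant subset exists.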
Definition 5. A subset of assignments A ⊂ Z is said to be relevant for the pair {C(τ̂(Z); η*_Z), β} if either S(A, ∅) or S(∅, A) has non-zero expected return. More generally, a strategy S(A+, A−) is said to be relevant if it has non-zero expected return.
This suggests that the existence of relevant strategies against {C(τ̂(Z); η*_Z), β} is problematic for the procedure, even if the procedure is valid. In fact, the notion of relevance relates to that of conditional validity.

B.2 Simple example
Consider the usual causal inference setup with N units, assuming SUTVA. The design η_0 is Bernoulli, excluding Z = (1, . . . , 1) and Z = (0, . . . , 0). We consider the difference-in-means estimator τ̂. We assume a constant additive treatment effect, so that for all i, Y_i(1) − Y_i(0) = τ. Now, conditional on N_1 = k, this is a completely randomized experiment, and E(τ̂ | N_1 = k) = τ. The well-known result for the variance of a completely randomized design yields var(τ̂ | N_1 = k) = v(k), where v(k) involves V*, the sample variance of the potential outcomes under one treatment, which is the same as under the other treatment due to the additive treatment effect. The estimator is also unbiased unconditionally, E(τ̂) = τ, but the unconditional variance becomes V = E[v(N_1)], where the expectation is with respect to the distribution of N_1 induced by the design η_0. We assume that we have reached the asymptotic regime, and so τ̂ ∼ N(τ, V), where randomness is induced by design η_0; see Section 6 of Li and Ding (2017) for an argument for why the central limit theorem holds here. Consider the confidence interval C(τ̂(Z); η_0) = [τ̂(Z) − 1.96 √V, τ̂(Z) + 1.96 √V]. By construction, we have P(τ ∈ C(τ̂(Z); η_0)) = β = 0.95. There exist winning betting strategies against this interval. Define K = {k : v(k) < V}. Note that (τ̂ − τ)/√v(k) | N_1 = k ∼ N(0, 1), and so β_k = P(τ ∈ C(τ̂(Z); η_0) | N_1 = k) = P(−1.96 √(V/v(k)) ≤ N(0, 1) ≤ 1.96 √(V/v(k))). Now the key is to notice the following: Proposition 4. For all k ∈ K, we have β_k > β and, similarly, for all k ∉ K, we have β_k ≤ β, where the strictness of the first inequality comes from the strictness in the definition of K.
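Proposition 4 can be checked numerically under stylized values. Below we take the standard completely randomized form v(k) = V*(1/k + 1/(N − k)), which holds under a constant additive effect; the values of N, p, and V* are our own illustrative choices.

```python
import math

# Stylized check of Proposition 4: v(k) = Vstar*(1/k + 1/(N-k)) and
# V = E[v(N1)] with N1 ~ Binomial(N, p) truncated to 0 < N1 < N.
N, p, Vstar = 20, 0.5, 1.0

def v(k):
    return Vstar * (1.0 / k + 1.0 / (N - k))

pmf = {k: math.comb(N, k) * p ** k * (1 - p) ** (N - k) for k in range(1, N)}
total = sum(pmf.values())
pmf = {k: q / total for k, q in pmf.items()}  # truncated binomial pmf
V = sum(q * v(k) for k, q in pmf.items())

def Phi(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

beta = Phi(1.96) - Phi(-1.96)  # exact coverage of the +/- 1.96*sqrt(V) interval

def beta_k(k):
    """Conditional coverage given N1 = k of the fixed-width interval."""
    r = 1.96 * math.sqrt(V / v(k))
    return Phi(r) - Phi(-r)

K = {k for k in pmf if v(k) < V}
```

As the proposition states, every k with v(k) < V yields conditional coverage β_k strictly above β, and every other k yields β_k ≤ β, which is exactly what gives Player 2 a winning strategy that conditions on N_1.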