Marginal structural models (MSMs) can be used to estimate the causal effect of a potentially time-varying treatment in the presence of time-dependent confounding via weighted regression. The standard approach of inverse probability of treatment weighting (IPTW) can be sensitive to model misspecification and can yield high-variance estimates due to extreme weights. Various methods have been proposed to partially address this, including the covariate balancing propensity score (CBPS) to mitigate treatment-model misspecification, and truncation and stabilized IPTW (sIPTW) to temper extreme weights. In this article, we present kernel optimal weighting (KOW), a convex-optimization-based approach that finds weights for fitting MSMs that flexibly balance time-dependent confounders while simultaneously penalizing extreme weights, directly addressing the above limitations. We further extend KOW to control for informative censoring. We evaluate the performance of KOW in a simulation study, comparing it with IPTW, sIPTW, and CBPS. We demonstrate the use of KOW in studying the effect of treatment initiation on time-to-death among people living with human immunodeficiency virus and the effect of negative advertising on elections in the United States.
Marginal structural models (MSMs) offer a successful way to estimate the causal effect of a time-varying treatment on an outcome of interest from longitudinal data in observational studies [1,2]. For example, they have been used to estimate the optimal timing of human immunodeficiency virus (HIV) treatment initiation, to evaluate the effect of hormone therapy on cardiovascular outcomes, and to evaluate the impact of negative advertising on election outcomes. The increasing popularity of MSMs among applied researchers derives from their ability to control for time-dependent confounders, i.e., confounders that are affected by previous treatments and in turn affect subsequent treatments and the outcome. In particular, as shown by Robins et al. and Blackwell, standard methods, such as regression or matching, fail to control for time-dependent confounding, introducing post-treatment bias. In contrast, MSMs consistently estimate the causal effect of a time-varying treatment via inverse probability of treatment weighting (IPTW), which controls for time-dependent confounding by weighting each subject under study by the inverse of their probability of being treated given covariates, i.e., the propensity score, mimicking a sequential randomized trial. In other words, IPTW creates a hypothetical pseudo-population in which time-dependent confounders are balanced over time.
Despite their wide range of applications, the use of these methods in observational studies may be jeopardized by their considerable dependence on positivity. This assumption requires that, at each time period, the probability of being assigned to the treatment, conditional on the history of treatment and confounders, is not 0 or 1. Even if positivity holds theoretically in the population, it can easily be violated in practice when propensities are close to 0 or 1, i.e., when some combinations of confounders and treatment are rare in the observed data. Practical positivity violations lead to extreme and unstable weights, which in turn yield very low precision and misleading inferences [8,9,10]. This is a particular concern with longitudinal data: since IPTW requires conditioning on past treatment sequences in addition to time-dependent confounders, the risk of running into near-zero denominators is greater. In addition, MSMs using IPTW are highly sensitive to misspecification of the treatment assignment model, which can lead to biased estimates [8,11,12].
Various statistical methods have been proposed in an attempt to overcome these challenges. To deal with extreme weights, several authors [12,13] have suggested truncation, whereby outlying weights are replaced with less extreme ones. Santacatterina et al. proposed to use shrinkage instead of truncation as a more direct way to control the bias-variance trade-off. Robins et al. recommended the use of stabilized IPTW (sIPTW), in which inverse probability weights are normalized by the marginal probability of treatment. To mitigate misspecification of the treatment assignment model, Imai and Ratkovic proposed the covariate balancing propensity score (CBPS), which, instead of plugging a logistic regression estimate of the propensity into IPTW, finds the logistic model that balances covariates via the generalized method of moments. The method tries to balance the first, and possibly higher, moments of each covariate even if the logistic model is misspecified. The authors also provided balancing conditions for longitudinal studies. Yiu and Su proposed a joint calibration approach to covariate balancing weight estimation for MSMs, which aims to compute weights that jointly eliminate covariate associations with both the treatment assignment and censoring processes. Other methods have been proposed to manage the trade-off between balance and precision [18,19,20, among others] and to exploit reproducing kernel Hilbert spaces (RKHSs) [21,22,23, among others]. These methods, however, do not directly deal with time-dependent confounders and informative censoring.
In this article, we present and apply kernel optimal weighting (KOW), which provides weights for fitting an MSM that balance time-dependent confounders while controlling for precision. Specifically, by solving a quadratic optimization problem over weights, the proposed method directly minimizes imbalance, defined as the sum of discrepancies between the weighted observed data and the counterfactual of interest over all treatment regimes, while penalizing extreme weights.
This extends the kernel optimal matching method of Kallus and Kallus et al. to the longitudinal setting with time-dependent confounders, where, like regression and matching, the original method cannot be applied without introducing post-treatment bias.
The proposed method has several attractive characteristics. First, KOW can balance non-additive covariate relationships by using kernels, which generalize the structure of conditional expectation functions, and does not restrict weights to follow a fixed logistic (or other parametric) form. By doing so, KOW mitigates the effects of possible misspecification of the treatment model. In the simulation study presented in Section 5, we show that KOW is more robust to model misspecification than the other methods and compares favorably with them in all nonlinear scenarios, and in Section 7.2 we use KOW to balance non-additive covariate relationships in estimating the effect of negative advertising on election outcomes. Second, by balancing time-dependent confounders using kernels while penalizing extreme weights, KOW leads to better accuracy, precision, and total error. In Section 5, we show that the mean squared error (MSE) of the estimated effect of a time-varying treatment obtained by using KOW is lower than that obtained by using IPTW, sIPTW, and CBPS in all considered simulated scenarios. Third, differently from CBPS, where the number of covariate-balancing conditions grows exponentially in the number of time periods, KOW only needs to minimize a number of discrepancies that grows linearly in the number of time periods. This feature leads to a lower computational time of KOW compared with CBPS when the total number of time periods increases, as shown in our simulation study in Section 5.5 and in our study on the effect of negative advertising on election outcomes in Section 7.2. Fourth, KOW can be easily generalized to other settings, such as informative censoring. We do just that in Section 6, and in Section 7.1, we use this extension to study the effect of HIV treatment on time to death among people living with HIV (PLWH).
Finally, KOW can be solved by using off-the-shelf solvers for quadratic optimization.
In the next section, we briefly review MSMs (Section 2). In Section 3, we develop and define KOW. We then discuss some practical guidelines on the use of KOW (Section 4). In Section 5, we report the results of a simulation study comparing KOW with IPTW, sIPTW, and CBPS. In Section 6, we extend KOW to control for informative censoring. We then present two empirical applications of KOW in medicine and political science (Section 7). We offer some concluding remarks in Section 8.
2 MSMs for longitudinal data
In this section, we briefly review MSMs [1,2]. Suppose we have a simple random sample with replacement of size $n$ from a population. For each unit $i = 1, \dots, n$ and time period $t = 1, \dots, T$, we denote the binary time-varying treatment variable by $A_{it} \in \{0, 1\}$, with $A_{it} = 0$ meaning not being treated at time $t$ and $A_{it} = 1$ being treated at time $t$, and the time-dependent confounders by $X_{it}$. We denote by $\bar A_{it} = (A_{i1}, \dots, A_{it})$ the treatment history up to time $t$ and by $\bar X_{it} = (X_{i1}, \dots, X_{it})$ the history of confounders up to time $t$. $V_i$ represents the time-invariant confounders, i.e., confounders that do not depend on past treatments. We denote by $\bar a_t$ and $\bar x_t$ possible realizations of the treatment history $\bar A_{it}$ and the confounder history $\bar X_{it}$, respectively. We use $\mathbb{I}[\cdot]$ to denote the indicator, so that $\mathbb{I}[\bar A_{iT} = \bar a_T]$ is the variable that is 1 if $\bar A_{iT} = \bar a_T$ and 0 otherwise. To streamline notation, we will refer to $\bar A_{iT}$ as $\bar A_i$, to $\bar X_{iT}$ as $\bar X_i$, to $\bar a_T$ as $\bar a$, and to $\bar x_T$ as $\bar x$. For each unit $i$, we denote by $Y_i$ the outcome variable observed at the end of the study. Using the potential outcome framework, we denote by $Y_i(\bar a)$ the potential outcome we would see if we were to apply the treatment regime $\bar a \in \mathcal{A}$ to the $i$th unit, where $\mathcal{A} = \{0, 1\}^T$ is the space of treatment regimes. Throughout, we drop the subscripts $i$ on these variables to refer to a generic unit.
We impose the assumptions of consistency, non-interference, positivity, and sequential ignorability [26,27]. Consistency and non-interference [also known as SUTVA; 28] can be encapsulated in that the potential outcomes are well-defined and the observed outcome corresponds to the potential outcome of the treatment regime applied to that unit, i.e., $Y_i = Y_i(\bar A_i)$. As previously introduced, positivity states that, for each time $t$, the probability of being treated at time $t$, conditioned on the treatment history up to time $t-1$ and the confounder history up to time $t$, is not 0 or 1, i.e.,
$$0 < P(A_t = 1 \mid V, \bar A_{t-1} = \bar a_{t-1}, \bar X_t = \bar x_t) < 1 \quad \text{for all } \bar a_{t-1}, \bar x_t, t. \qquad (1)$$
Sequential ignorability states that the potential outcome $Y(\bar a)$ is independent of treatment assignment at time $t$, given the treatment history up to time $t-1$ and the confounder history up to time $t$. Formally, sequential ignorability is defined as
$$Y(\bar a) \perp A_t \mid V, \bar A_{t-1}, \bar X_t \quad \text{for all } \bar a \in \mathcal{A},\ t = 1, \dots, T. \qquad (2)$$
An MSM is a model for the marginal causal effect of a time-varying treatment regime $\bar a$ on the mean of the potential outcome $Y(\bar a)$, that is,
$$\mathbb{E}[Y(\bar a)] = g(\bar a; \beta), \qquad (3)$$
where $g$ is some known function class parameterized by $\beta$. For example, a commonly used MSM is based on additive effects with a common coefficient: $g(\bar a; \beta) = \beta_0 + \beta_1 \sum_{t=1}^{T} a_t$, where the parameter $\beta_1$ is the causal parameter of interest. Usually, $\beta$ is computed by a weighted regression of the outcome on the treatment regime alone using weighted least squares (WLS), i.e., $\hat\beta \in \arg\min_\beta \sum_{i=1}^{n} W_i (Y_i - g(\bar A_i; \beta))^2$, and Wald confidence intervals are constructed using robust (sandwich) standard errors [1,29,30]. In order to consistently estimate $\beta$, the weights $W_i$, $i = 1, \dots, n$, must account for the non-randomness of the treatment assignment mechanism, i.e., the confounding. Robins showed that the set of inverse probability weights and stabilized inverse probability weights achieve this objective. These weights are defined as follows:
$$W_i = \frac{s(\bar A_i)}{\prod_{t=1}^{T} P(A_{it} \mid V_i, \bar A_{i,t-1}, \bar X_{it})}, \qquad (4)$$
where $s(\bar A_i)$ is a known function of the treatment history. The set of inverse probability weights is obtained by setting $s(\bar A_i) = 1$, while the set of stabilized inverse probability weights is obtained by setting $s(\bar A_i) = \prod_{t=1}^{T} P(A_{it} \mid \bar A_{i,t-1})$. To estimate weights of the form of equation (4), one first estimates the conditional probability models using either parametric methods such as logistic regression or other machine learning methods [31,32,33], and then plugs these estimates directly into equation (4) to derive weights, which are in turn plugged into the WLS. Stabilized weights seek to attenuate the variability of inverse probability weights by normalizing them by the marginal probability of treatment. Since the additional factor $s(\bar A_i)$ is a function of the treatment regime alone, it does not affect the consistency of the WLS if the MSM is well specified. Both sets of weights, however, rely on plugging an estimated probability into the denominator, meaning that when the true probability is even modestly close to 0, any small error in estimating it can translate into very large errors in the estimated weights and into weights that are extremely variable. Furthermore, both sets of weights rely on the correct specification of the conditional probability models used to estimate the weights in equation (4).
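For concreteness, the plug-in construction in equation (4) can be sketched as follows. This is a minimal numpy illustration on hypothetical data (a binary treatment matrix `A` and one confounder per period `X`), with a hand-rolled Newton-fitted logistic model standing in for whatever treatment model one would actually use; it is not the authors' implementation.

```python
import numpy as np

def fit_logistic(F, y, iters=30, ridge=1e-6):
    """Fit P(y = 1 | F) by Newton's method; return fitted probabilities."""
    F1 = np.hstack([np.ones((len(F), 1)), F])  # add intercept
    b = np.zeros(F1.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-F1 @ b))
        H = F1.T @ (F1 * (p * (1 - p))[:, None]) + ridge * np.eye(F1.shape[1])
        b += np.linalg.solve(H, F1.T @ (y - p))
    return 1.0 / (1.0 + np.exp(-F1 @ b))

def stabilized_weights(A, X):
    """A: (n, T) binary treatments; X: (n, T) one confounder per period.
    Returns sIPTW weights: the product over t of the fitted probability of the
    observed treatment given past treatments (numerator) divided by the fitted
    probability given past treatments and confounders up to t (denominator)."""
    n, T = A.shape
    w = np.ones(n)
    for t in range(T):
        denom_feats = np.hstack([A[:, :t], X[:, :t + 1]])  # history up to t
        num_feats = A[:, :t]                               # past treatment only
        p_den = fit_logistic(denom_feats, A[:, t])
        p_num = fit_logistic(num_feats, A[:, t])
        obs_den = np.where(A[:, t] == 1, p_den, 1 - p_den)
        obs_num = np.where(A[:, t] == 1, p_num, 1 - p_num)
        w *= obs_num / obs_den
    return w

rng = np.random.default_rng(0)
n, T = 500, 3
X = rng.normal(size=(n, T))
A = (rng.random((n, T)) < 1 / (1 + np.exp(-X))).astype(float)
w = stabilized_weights(A, X)
```

Because of the stabilizing numerator, the resulting weights hover around one rather than growing with the number of periods.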
To overcome this issue, Imai and Ratkovic proposed to estimate weights of the form of equation (4) that improve the balance of confounders by generalizing the covariate balancing propensity score (CBPS) methodology. Instead of plugging in probability estimates based on logistic regression, CBPS uses the generalized method of moments to find the logistic regression model that, if plugged in, would lead to weights that approximately solve a subset of the moment conditions that the true inverse probability weights of equation (4) satisfy.
Differently from IPTW, sIPTW, and CBPS, in the next section we characterize imbalance as the discrepancies in observed average outcomes due to confounding, consider their worst case values, and use quadratic optimization to obtain weights that directly optimize the balance of time-invariant and time-dependent confounders over all possible weights while controlling precision.
3 Kernel optimal weighting
In this section, we present a convex-optimization-based approach that obtains weights that minimize the imbalance due to time-dependent confounding (i.e., maximize balance thereof) while controlling precision. Toward that end, in Section 3.1, we provide a definition of imbalance. Specifically, we define imbalance as the sum of discrepancies between the weighted observed data and the unobserved counterfactual of interest over all treatment regimes. Since this imbalance depends on unknown functions, in Section 3.2 we consider the worst case imbalance, which guards against all possible realizations of the unknown functions. We also show that the worst case imbalance has the attractive characteristic that the number of discrepancies considered grows linearly in the number of time periods, not exponentially like the number of treatment regimes. We finally show how to minimize this quantity while controlling precision using kernels, RKHSs, and off-the-shelf solvers for quadratic optimization (Sections 3.3 and 3.4).
3.1 Defining imbalance
Consider any population weights $W = W(V, \bar A, \bar X)$, i.e., a function of the treatment and confounder histories. In this section, we will show that, under consistency and Assumptions (1)–(2), we can decompose the difference between the weighted average outcome among the $\bar a$-treated units, $\mathbb{E}[\mathbb{I}[\bar A = \bar a] W Y]$, and the average potential outcome under $\bar a$, $\mathbb{E}[Y(\bar a)]$, into a sum over time points of discrepancies involving the values of treatment and confounder histories up to each time point.
To build intuition, we start by explaining this decomposition in the case of two time periods ($T = 2$). Assuming consistency and Assumptions (1)–(2), for each $\bar a \in \mathcal{A}$, we can decompose the weighted average outcome among the $\bar a$-treated units as follows:
where the first equality follows from iterated expectation, the second from sequential ignorability, the fourth from iterated expectation and sequential ignorability, and the third and fifth from the following definitions, which exactly capture the difference between the two sides of the third and fifth equalities,
Note our use of $f_t$ as a generic dummy function and of $f_t^*$ as a specific function that depends on the particular (unknown) distribution of the data.
This gives a definition of discrepancy, $\delta_{t, a_t}(W; f_t)$, where the subscript $a_t$ refers to the treatment assigned at time $t$, $W$ is a population weight, and $f_t$ is a given function of interest of the treatment and confounder history up to time $t$. The function $f_t^*$ is one such function. In particular, for every $f_1$, the quantity $\delta_{1, a_1}(W; f_1)$ is the discrepancy between the $f_1$-moments of the baseline confounder distribution in the weighted $a_1$-treated population and of the distribution in the whole population. Similarly, for every $f_2$, $\delta_{2, a_2}(W; f_2)$ is a discrepancy in the $f_2$-moment of treatment and confounder histories at the start of time step 2. What we have shown above is how these discrepancies directly relate to the difference between weighted averages of observed outcomes and true averages of unknown counterfactuals of interest. Specifically, we have shown that when we measure these discrepancies with respect to the specific functions $f_t^*$, their sum gives exactly that difference.
We can extend this decomposition to general horizons $T$. Let us define the same discrepancies for any time $t = 1, \dots, T$ as
The following result gives the general decomposition of the difference between the weighted average of observed outcomes and the true average of counterfactuals as a sum of discrepancies, one for every time step:
Based on the results of Theorem 1, it is clear that if we want the difference between average counterfactual outcomes and average weighted factual outcomes to be small for all treatment regimes $\bar a \in \mathcal{A}$, then we should seek weights that make the discrepancies $\delta_{t, a_t}(W; f_t^*)$ small for every time $t$.
The empirical counterparts to $\delta_{t, a_t}$ are the sample moment discrepancies $\hat\delta_{t, a_t}(W; f_t)$ for a given set of sample weights $W_1, \dots, W_n$:
Thus, we will seek sample weights that make $\hat\delta_{t, a_t}(W; f_t^*)$ small for all treatment regimes $\bar a \in \mathcal{A}$. Toward that end, for any set of given functions $f_1, \dots, f_T$, we define the imbalance of a set of weights as the average squared discrepancy over treatment regimes:
The particular imbalance of interest is given when we consider the functions $f_t^*$. One way to control this imbalance, and consequently the empirical discrepancies of interest, is by using inverse probability weights. If known, these weights make this quantity a sample average of mean-zero variables and thus close to zero for large $n$. However, the difficulties are that (a) even mild practical violations of positivity can lead to large variance of each of these terms and (b) we need to correctly estimate the sequential propensities.
Instead, we will seek weights that directly minimize imbalance. There are two main challenges in this task. The first is that the imbalance of interest depends on the unknown functions $f_t^*$. The second is that the number of treatment regimes grows exponentially in the number of time periods. In the next section, we show how the proposed methodology overcomes both challenges.
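Point (a) is easy to see numerically: even when the true propensities are known exactly, the variance of inverse probability weights explodes as the propensity approaches 0. A small illustration under a hypothetical constant propensity:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

def ipw_variance(p_treat):
    """Sample variance of IPW weights when each unit is treated with
    probability p_treat and weighted by 1/p (treated) or 1/(1-p) (control)."""
    A = rng.random(n) < p_treat
    w = np.where(A, 1.0 / p_treat, 1.0 / (1.0 - p_treat))
    return w.var()

var_moderate = ipw_variance(0.30)  # propensity comfortably inside (0, 1)
var_extreme = ipw_variance(0.01)   # near-violation of positivity
```

With a propensity of 0.01, the rare treated units each receive weight 100, so the weight variance is roughly two orders of magnitude larger than in the moderate case even though both populations satisfy positivity exactly.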
3.2 Worst case imbalance
To overcome the fact that we do not actually know the functions $f_t^*$ on which the imbalance depends, we will guard against all possible realizations of the unknown functions. Specifically, since $\hat\delta_{t, a_t}(W; f_t)$ scales linearly with $f_t$, we will consider its magnitude relative to that of $f_t$. We therefore need to define a magnitude. In particular, let us define
where $\|\cdot\|_t$ are some given extended seminorms on functions from the space of treatment and time-dependent confounder histories up to time $t$ to the space of outcomes. Compared to a norm, an extended seminorm may also assign the values 0 and $\infty$ to nonzero elements but must still satisfy the triangle inequality and absolute homogeneity. We will discuss specific choices of such seminorms in Section 3.4.
Given these, we can define the worst case discrepancies
$$B_{t, a_t}(W) = \sup_{\|f_t\|_t \le 1} \hat\delta_{t, a_t}(W; f_t).$$
Note that $B_{t, a_t}(W)$ depends only on the treatment at time $t$, $a_t$, and not on the whole treatment regime $\bar a$.
Then the worst case imbalance is given by
What is important to note is that this shows that the discrepancies of interest are essentially the same regardless of the particular treatment regime trajectory $\bar a$. That is, to control the discrepancies for all trajectories and all possible realizations of the unknown functions, at any time point $t$ we are only concerned with the discrepancies of histories for those units treated at time $t$, $B_{t,1}(W)$, and for those not, $B_{t,0}(W)$. So, while the number of treatment regimes grows exponentially in the number of periods $T$, we need only keep track of and minimize a number of discrepancies growing linearly in $T$, namely the $2T$ quantities $B_{t,0}(W)$ and $B_{t,1}(W)$. By eliminating each of these linearly-many imbalances, any time-dependent confounding would necessarily be removed, as shown by Theorem 1. In Section 5.5, we show how this feature also translates to favorable computational time when dealing with many time periods.
3.3 Minimizing imbalance while controlling precision
We can obtain minimal imbalance by minimizing the worst case imbalance of equation (9), which we denote $\mathcal{B}^2(W)$. However, to control for extreme weights, we propose to regularize the weight variables $W_1, \dots, W_n$. We therefore wish to find weights that minimize $\mathcal{B}^2(W)$ plus a penalty for deviations from uniform weighting. Formally, we want to solve
$$\min_{W \in \mathcal{W}} \; \mathcal{B}^2(W) + \lambda \, \|W - \mathbf{1}\|_2^2, \qquad (10)$$
where $\mathbf{1}$ is the vector of ones and $\mathcal{W} = \{W \in \mathbb{R}^n : W_i \ge 0\}$ is the space of nonnegative weights. The squared distance of the weights from uniform weights here serves as a convex surrogate for the variance of the resulting MSM estimate (assuming homoskedasticity or bounded residual variances), and $\lambda$ in equation (10) can be interpreted as a penalization parameter that controls the trade-off between imbalance and precision. When $\lambda$ is equal to zero, the obtained weights provide minimal imbalance. As $\lambda \to \infty$, the weights become uniform, leading to an ordinary least squares estimator for the MSM.
In the next section, we discuss a specific choice of the seminorms $\|\cdot\|_t$ that specify the worst case discrepancies $B_{t, a_t}$ presented in Section 3.2. Specifically, we show that by choosing an RKHS norm, we can express the optimization problem in equation (10) as a convex-quadratic function in $W$, which can be easily solved by using off-the-shelf solvers for quadratic optimization.
3.4 RKHS and quadratic optimization to balance time-dependent confounders
An RKHS is a Hilbert space of functions associated with a kernel (the reproducing kernel). Specifically, any positive semi-definite kernel $K(\cdot, \cdot)$ on a ground space $\mathcal{Z}$ defines a Hilbert space given by (the unique completion of) the span of all functions $K(z, \cdot)$ for $z \in \mathcal{Z}$, endowed with the inner product $\langle K(z, \cdot), K(z', \cdot) \rangle = K(z, z')$. Kernels are widely used in machine learning to generalize the structure of conditional expectation functions, with many applications in statistics [24,34,35,36]. Commonly used kernels are the polynomial, Gaussian, and Matérn kernels.
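As an illustration, Gram matrices for two of the kernels named above can be computed in a few lines of numpy; the hyperparameter values here are arbitrary choices for the sketch:

```python
import numpy as np

def polynomial_kernel(X, Z, c=1.0, d=2):
    """K(x, z) = (x'z + c)^d."""
    return (X @ Z.T + c) ** d

def gaussian_kernel(X, Z, sigma=1.0):
    """K(x, z) = exp(-||x - z||^2 / (2 sigma^2))."""
    sq = (X ** 2).sum(1)[:, None] + (Z ** 2).sum(1)[None, :] - 2 * X @ Z.T
    return np.exp(-np.maximum(sq, 0.0) / (2 * sigma ** 2))

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 4))
K_poly = polynomial_kernel(X, X)   # Gram matrix of the quadratic kernel
K_gauss = gaussian_kernel(X, X)    # Gram matrix of the Gaussian kernel
```

Both Gram matrices are symmetric and positive semidefinite, which is the property the optimization in the next sections relies on.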
The following theorem shows that if $\|\cdot\|_t$, the norm that specifies the worst case discrepancy $B_{t, a_t}$, is an RKHS norm given by a kernel $K_t$, then the squared worst case discrepancy can be expressed as a convex-quadratic function in $W$.
Define the Gram matrix $K_t$ as the matrix whose $(i, j)$ entry is the kernel evaluated at the observed treatment and confounder histories of units $i$ and $j$ up to time $t$,
and note that it is positive semidefinite by definition. Then, if the norm $\|\cdot\|_t$ is the RKHS norm given by the kernel $K_t$, the squared worst case discrepancies are
where $I$ is the identity matrix and $D_{t, a_t}$ is the diagonal matrix with $\mathbb{I}[A_{it} = a_t]$ in its $i$th diagonal entry.
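To make the structure of such quadratic forms concrete, the following sketch computes a kernel worst-case discrepancy of the general shape described here, using the standard RKHS dual-norm identity with a Gram matrix `K` and a diagonal treatment indicator. The exact matrix expression is our illustrative assumption, not a transcription of Theorem 2:

```python
import numpy as np

def worst_case_sq_discrepancy(K, ind, w):
    """(1/n^2) * (D w - 1)' K (D w - 1), with D = diag(ind): the squared
    RKHS worst-case discrepancy between the ind-weighted units and the
    whole sample, for the RKHS whose Gram matrix on the sample is K."""
    v = ind * w - 1.0
    return float(v @ K @ v) / len(w) ** 2

rng = np.random.default_rng(0)
n = 100
X = rng.normal(size=(n, 3))
K = (X @ X.T + 1.0) ** 2                       # quadratic-kernel Gram matrix
treated = (rng.random(n) < 0.5).astype(float)  # indicator of treatment at some t

# With everyone "treated" and uniform weights the discrepancy vanishes;
# with a proper treated subset, uniform weights leave positive imbalance.
d_zero = worst_case_sq_discrepancy(K, np.ones(n), np.ones(n))
d_unif = worst_case_sq_discrepancy(K, treated, np.ones(n))
```

Because the Gram matrix is positive semidefinite, the quadratic form is always nonnegative, which is what makes minimizing it over weights a convex problem.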
As an aside, we note that, when $\|\cdot\|_t$ is given by an RKHS norm, our worst case squared discrepancies can equivalently be written as average squared discrepancies over random functions following a Gaussian process whose covariance operator is given by the same kernel that reproduces the RKHS (see Proposition 16 of the reference cited above). This is a consequence of the norm being given by an inner product in a Hilbert space.
Based on Theorem 2, we can now express the worst case imbalance defined in equation (9) as a convex-quadratic function. Specifically, let $K_{t, a_t} = D_{t, a_t} K_t D_{t, a_t}$, which is given by setting every entry $(i, j)$ of $K_t$ to 0 whenever $A_{it} \neq a_t$ or $A_{jt} \neq a_t$. We then get that
Finally, to obtain weights that balance covariates to control for time-dependent confounding while controlling precision, we solve the quadratic optimization problem,
We call this proposed methodology, and the resulting solution of equation (12), KOW.
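Since equation (12) is a nonnegatively constrained convex quadratic program, any off-the-shelf QP solver applies. As a dependency-free sketch, projected gradient descent also works; here `Q` is a random positive semidefinite stand-in for the quadratic form assembled from the kernel matrices, and `lam` plays the role of the penalization parameter $\lambda$ in equation (10):

```python
import numpy as np

def kow_weights(Q, lam, iters=2000):
    """Minimize w' Q w + lam * ||w - 1||^2 subject to w >= 0
    by projected gradient descent (Q must be positive semidefinite)."""
    n = Q.shape[0]
    w = np.ones(n)
    # step size 1/L, with L the Lipschitz constant of the gradient
    L = 2 * np.linalg.eigvalsh(Q).max() + 2 * lam
    for _ in range(iters):
        grad = 2 * Q @ w + 2 * lam * (w - 1)
        w = np.maximum(w - grad / L, 0.0)  # project onto the nonnegative orthant
    return w

def objective(Q, lam, w):
    return float(w @ Q @ w + lam * ((w - 1) ** 2).sum())

rng = np.random.default_rng(0)
n = 80
M = rng.normal(size=(n, n))
Q = M @ M.T / n                 # random PSD stand-in for the KOW quadratic
w = kow_weights(Q, lam=1.0)
```

As a sanity check, a very large `lam` drives the solution back to uniform weights, matching the limiting behavior discussed in Section 3.3.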
4 Practical guidelines
Solutions to the quadratic optimization problem (12) depend on several factors. First, they depend on the choice of the kernels and their hyperparameters. There are some existing practical guidelines on these choices [34,37], on which we rely as explained below. Second, they depend on the penalization parameter $\lambda$. Finally, solutions to equation (12) depend on the chosen set of lagged covariates to include in each kernel. In this section, we introduce some practical guidelines on how to apply KOW in consideration of these factors.
For each $t$, the unknown function $f_t^*$ has two distinct inputs: the treatment history and the confounder history. To reflect this structure, we suggest specifying the kernel $K_t$ as a product kernel, i.e., the product of a treatment history kernel and a confounder history kernel. This simplifies the process of specifying the kernels without placing strong restrictions. We further suggest using, for the treatment history, a linear kernel involving lagged treatments and, for the confounder history, a polynomial kernel involving the time-invariant confounders and lagged time-dependent confounders, $(x^\top x' + c)^d$, where $c$ and $d$ are hyperparameters. The linear kernel in treatments includes linear functions in the treatment, and the product specification for the whole kernel covers forms that include interactions between the lagged treatments and up-to-order-$d$ terms in the covariates. We discuss the choice of the number of lags and the hyperparameters below. In our simulation study in Section 5, we show that the MSE of the MSM-estimated effect using KOW with a product of a linear kernel and a quadratic kernel ($d = 2$) outperforms estimates using weights obtained by IPTW, sIPTW, and CBPS in all considered simulated scenarios. We again use this choice of kernels in our empirical applications of KOW to real datasets in Section 7. Many other choices of kernel are possible and may be more appropriate in a particular application, but we suggest the above combination as a generic and successful recipe.
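The product-kernel recipe above can be sketched as follows; the data shapes, the added constant in the linear kernel, and the hyperparameter values are illustrative assumptions on our part:

```python
import numpy as np

def product_kernel(A_hist, X_hist, c=1.0, d=2):
    """Product of a linear kernel on lagged treatments and a polynomial
    kernel of order d on (standardized) confounder history.
    A_hist: (n, p) treatment-history features; X_hist: (n, q) confounders."""
    # standardize confounders to mean 0, variance 1, as suggested in the text
    Z = (X_hist - X_hist.mean(0)) / X_hist.std(0)
    K_treat = 1.0 + A_hist @ A_hist.T   # linear kernel (with a constant term)
    K_conf = (Z @ Z.T + c) ** d         # polynomial kernel of order d
    return K_treat * K_conf             # elementwise (Schur) product

rng = np.random.default_rng(0)
n = 60
A_hist = rng.integers(0, 2, size=(n, 2)).astype(float)  # two lagged treatments
X_hist = rng.normal(size=(n, 3))                        # three confounders
K = product_kernel(A_hist, X_hist)
```

By the Schur product theorem, the elementwise product of two positive semidefinite Gram matrices is again positive semidefinite, so the product kernel is a valid kernel.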
When using kernels, preprocessing the data is an important step. In particular, normalization is employed to avoid unit dependence and to prevent covariates with high variance from dominating those with smaller variance. Consequently, we suggest scaling the covariates in the treatment and confounder histories to have mean 0 and variance 1 beforehand.
To tune the kernels' hyperparameters and the penalization parameter $\lambda$, we use the empirical Bayes approach of maximizing the marginal likelihood. We postulate a Gaussian process prior on the outcome, with a constant mean function and a covariance kernel that depends on a set of hyperparameters $\theta$. For each $t$, we then maximize the marginal likelihood of seeing the data over $\theta$ and the noise variance, and set the kernel hyperparameters and $\lambda$ to the maximizing values. It would be more correct to consider the marginal likelihood of observing the partial means of outcomes, but we find that this much simpler approach suffices for learning the right representation of the data (the kernel hyperparameters) and the right penalization parameter ($\lambda$), and it enables the use of existing packages. We demonstrate this in the simulations presented in Section 5; in particular, in Figures 4 and 5 we see that this approach leads to a value of the penalization parameter that is near the one minimizing the resulting MSE of the MSM over possible parameters.
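The empirical Bayes idea can be sketched with a simple grid search over a Gaussian kernel bandwidth and a noise/penalty parameter, maximizing the Gaussian-process log marginal likelihood; a package would instead optimize these by gradient methods, and the kernel choice and grids here are illustrative:

```python
import numpy as np

def log_marginal_likelihood(K, lam, y):
    """GP log marginal likelihood with kernel matrix K and noise variance lam:
    -1/2 [ y' (K + lam I)^{-1} y + log det(K + lam I) + n log 2 pi ]."""
    n = len(y)
    C = K + lam * np.eye(n)
    _, logdet = np.linalg.slogdet(C)
    alpha = np.linalg.solve(C, y)
    return -0.5 * (y @ alpha + logdet + n * np.log(2 * np.pi))

def tune(X, y, sigmas, lams):
    """Pick (sigma, lam) maximizing the marginal likelihood on a grid."""
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)  # pairwise sq. dists
    best, best_val = None, -np.inf
    for s in sigmas:
        K = np.exp(-sq / (2 * s ** 2))
        for lam in lams:
            val = log_marginal_likelihood(K, lam, y)
            if val > best_val:
                best, best_val = (s, lam), val
    return best

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=100)
sigma, lam = tune(X, y, sigmas=[0.5, 1.0, 2.0], lams=[0.01, 0.1, 1.0])
```

The selected noise variance is then reused as the penalization parameter, mirroring the mapping described above.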
Another practical concern is how many lagged covariates to include in each of the kernels. When deriving inverse probability weights, it is common to model the denominator in equation (4) by fitting a pooled logistic model including only the time-invariant confounders $V$, the time-dependent confounders at time $t$, $X_t$, and the one-time-lagged treatment $A_{t-1}$, rather than the entire histories [30,40]. This can be understood as a certain Markovian assumption about the data generating process, which simplifies the modeling when $T$ is large. The same can be done in the case of KOW, where we may assume that $f_t^*$ is only a function of the one-time-lagged treatment, the time-dependent confounders at time $t$, and the time-invariant confounders, and correspondingly let the kernel $K_t$ only depend on $A_{t-1}$, $X_t$, and $V$. More generally, we can consider including any number of lagged variables. In Section 7.2, we consider an empirical setting where $T$ is small and specify the kernels using the whole treatment and confounder histories. However, in Section 7.1 we consider a setting where $T$ is large and, following previous approaches studying the same dataset using IPTW with a logistic model of only the one-time lags [30,40,41], we keep only the baseline and one-time-lagged data in each kernel specification.
Certain datasets, such as the one we study in Section 7.1, have repeated observations of outcomes at each time $t$. Thus, for each subject, we have $T$ outcome observations to be used in fitting the MSM, and each observation should be weighted appropriately. This can be seen as $T$ instances of the weighting problem. For sIPTW, this boils down to restricting the products in the numerator and denominator of equation (4) to be only up to $t$ for the outcome observed at time $t$. Similarly, in the case of KOW, we propose to solve equation (12) for each value of $t$, producing $T$ sets of weights, one for each of the outcome observations, to be used in fitting the MSM. This is demonstrated in Section 7.1.
In the case of a single, final observation of outcome, normalizing the weights, whether IPTW or KOW, does not affect the fitted MSM as it amounts to multiplying the least-squares loss by a constant factor. But in the repeated observation setting described above, normalizing each set of weights for each time period separately can help. Correspondingly, we can add a constraint to equation (12) that the mean of the weights must equal one for each time period separately, which we demonstrate in Section 7.1.
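The bookkeeping of per-period mean-one normalization can be illustrated by rescaling a hypothetical $n \times T$ weight matrix after the fact; note that rescaling post hoc is not identical to imposing the constraint inside the optimization, but it shows the intended invariant:

```python
import numpy as np

def normalize_by_period(W):
    """W: (n, T) matrix of weights, one column per outcome period.
    Rescale each column to have mean exactly one."""
    return W / W.mean(axis=0, keepdims=True)

rng = np.random.default_rng(0)
W = rng.uniform(0.2, 3.0, size=(200, 4))  # hypothetical weights for T = 4
W_norm = normalize_by_period(W)
```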
Similar to refs [1,29,30] we suggest using Wald confidence intervals constructed using robust (sandwich) standard errors.
5 Simulation study
In this section, we show the results of a simulation study aimed at comparing the bias and MSE of estimating the cumulative effect of a time-varying treatment on a continuous outcome by using an MSM with weights obtained by each of KOW, IPTW, sIPTW, and CBPS.
We considered two different simulated scenarios: (1) linear, where the treatment was modeled linearly, and (2) nonlinear, where it was modeled quadratically. In both scenarios, we modeled the outcomes nonlinearly so as not to favor our method unfairly. We tuned the kernels' hyperparameters and the penalization parameter $\lambda$ as presented in Section 4 and computed bias and MSE over 1,000 replications for each of the varying sample sizes. In addition, to study the impact of the penalization parameter on bias and MSE, in both scenarios we fixed the sample size and considered a grid of 25 values for $\lambda$.
For the linear scenario, we drew the data from a model in which treatment assignment at each period is linear in the treatment and confounder histories, with a nonlinear outcome model.
For the nonlinear scenario, we drew the data from an analogous model in which treatment assignment depends quadratically on the confounders.
In both scenarios, the intercepts are chosen so that the marginal mean of the outcome is 0.
In each scenario and for each replication, we computed two sets of KOW weights. We obtained the first by using the product of two linear kernels, one for the treatment history and one for the confounder history, and the second by using the product of a linear kernel for the treatment history and a quadratic kernel ($d = 2$) for the confounder history. As presented in Section 4, we rescaled the variables before inputting them to the kernels and, for each replication, we tuned $\lambda$ and the kernels' hyperparameters by using Gaussian-process marginal likelihood. We also computed two sets of IPTW and sIPTW weights. We obtained the first by fitting, for each $t$, a logistic regression model for the treatment conditioned on the lagged treatments and confounders and their interactions, which is well-specified for the linear scenario (we term this the linear specification), and the second by adding all quadratic confounder terms and their interactions with treatment, which is well-specified for the nonlinear scenario (we term this the nonlinear specification). The numerator of sIPTW in either case was obtained by fitting a logistic regression on the treatment history alone. We obtained the final set of IPTW and sIPTW weights by multiplying the weights over time periods as shown in equation (4). Finally, we computed two sets of weights using CBPS: one using the covariates as they are (linear CBPS) and one by augmenting the covariates with all quadratic monomials (nonlinear CBPS). We used the full (non-approximate) version of CBPS, which outperformed the approximate version in all of our experiments.
We computed the causal parameter of interest by using WLS, regressing the outcome on the cumulative treatment and using the weights computed by each of the methods. Specifically, in the linear scenario, we computed weights using (1) for KOW, the linear specification for IPTW and sIPTW, and linear CBPS, which we refer to as the correct case, and (2) for KOW, the nonlinear specification for IPTW and sIPTW, and nonlinear CBPS, which we refer to as the overspecified case. In the nonlinear scenario, we again used each of the above, but refer to the first as the misspecified case and the second as the correct case. We highlight that these terms reflect only the model specification for IPTW and sIPTW, as CBPS does not require a particular specification and the function need not necessarily lie in the RKHS that either kernel specifies.
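The WLS step can be sketched as follows, fitting an MSM of the outcome on cumulative treatment with a given weight vector. This is a minimal illustrative sketch with hypothetical data, not the simulation's actual design; in practice the weights would come from KOW, IPTW, sIPTW, or CBPS.

```python
import numpy as np

def wls(y, X, w):
    """Weighted least squares: solve (X' W X) beta = X' W y."""
    W = np.diag(w)
    return np.linalg.solve(X.T @ W @ X, X.T @ W @ y)

# Hypothetical example: regress outcome on intercept and cumulative treatment
rng = np.random.default_rng(2)
n, T = 200, 3
A = rng.integers(0, 2, size=(n, T)).astype(float)  # treatments over T periods
cumA = A.sum(axis=1)                               # cumulative treatment
y = 0.5 + 1.2 * cumA + rng.standard_normal(n)
w = np.ones(n)  # placeholder; replace with balancing weights
X = np.column_stack([np.ones(n), cumA])
beta = wls(y, X, w)  # beta[1] estimates the cumulative treatment effect
```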
We used and its interface  to solve equation (10) and optimize the KOW weights, the package  to perform the marginal likelihood estimation of hyperparameters, the package to call from within , the command to fit treatment models for IPTW and sIPTW, the package for CBPS, and the command to fit the MSM.
In this section, we discuss the results obtained in the simulation study across sample sizes and across values of the penalization parameter, . In summary, the proposed KOW outperformed IPTW, sIPTW, and CBPS with respect to MSE across all sample sizes and simulation scenarios. An important result is that, in the misspecified case, KOW showed lower bias and MSE than IPTW, sIPTW, and CBPS across all considered sample sizes.
5.3 Across sample sizes
Figure 1 shows the bias and MSE of the estimated time-varying treatment effect using KOW (solid), IPTW (dashed), sIPTW (dotted), and CBPS (dashed-dotted) when increasing the sample size from to . In the linear-correct scenario, IPTW had a lower bias than sIPTW, CBPS, and KOW in small samples (top-left panel of Figure 1). However, for larger samples, KOW had a smaller bias than IPTW, sIPTW, and CBPS. KOW outperformed IPTW, sIPTW, and CBPS in terms of MSE across all sample sizes (top-right panel of Figure 1). In the linear-overspecified scenario, KOW outperformed the other methods with regard to MSE across all sample sizes (bottom-right panel of Figure 1). KOW and sIPTW performed similarly with respect to bias in the nonlinear-misspecified scenario (top-left panel of Figure 2), while KOW outperformed IPTW, sIPTW, and CBPS with respect to MSE at all sample sizes (top-right panel of Figure 2). KOW, IPTW, and sIPTW had similar bias in the nonlinear-correct scenario (bottom-left panel of Figure 2), with KOW outperforming the other methods with respect to MSE across all sample sizes (bottom-right panel of Figure 2). In summary, the MSE obtained by using KOW was lower than that of IPTW, sIPTW, and CBPS across all considered sample sizes. As the next section shows, the larger biases in some of the cases are driven by the choice of the penalization parameter . Here we chose with an eye toward minimizing MSE. As shown next, a smaller can lead to KOW having both smaller bias and smaller MSE than the other methods, but with a smaller total benefit in MSE. Figure 3 shows the coverage of the 95% Wald confidence interval using KOW across sample sizes and simulation scenarios. Coverage was close to the nominal level under the linear-correct (solid), linear-overspecified (dot-dashed), and nonlinear-correct (dotted) scenarios. Lower coverage was obtained under the nonlinear-misspecified scenario (dashed).
5.4 Across values of the penalization parameter,
Figures 4 and 5 show the ratios of squared biases (left panels) and of MSEs (right panels) comparing KOW (denominator) with IPTW (solid), sIPTW (dashed), and CBPS (dotted) (numerators) across different values of and in the linear and nonlinear scenarios, respectively. Since affects only KOW, the squared biases and MSEs obtained using IPTW, sIPTW, and CBPS were the same across values of . Values above 1 mean that KOW had a lower bias or MSE. For zero or small , KOW significantly outperformed IPTW, sIPTW, and CBPS with respect to bias. In many cases, the MSE was also smaller for zero . However, the biggest benefit in MSE was seen for larger . The peaks of the right panels represent the points at which is optimal, i.e., where the MSE of KOW is minimized. The solid vertical lines on the right panels show the mean values, across replications, of the value obtained by the procedure described in Sections 4 and 5.1, as in the previous section. It can be seen that these are very near the points at which the MSE is minimized. The benefit in MSE both at and around this point was significant across all scenarios.
5.5 Computational time of KOW
In this section, we present the results of a simulation study aimed at comparing the mean computational time of KOW and CBPS. Compared to sIPTW based on pooled logistic regression, which is generally very fast, both KOW and CBPS have a nontrivial computational time that can grow with both the total number of time periods and the number of covariates (which, for KOW, manifests as the complexity of the kernel functions). For KOW, the most time-consuming tasks are tuning by marginal likelihood and computing the matrices that define problem (12), which are affected by these two factors, while solving problem (12) is fast and does not depend on them. The computational time of CBPS is dominated by inverting a covariance matrix whose dimensions increase exponentially in and linearly in the number of covariates. Imai and Ratkovic  also proposed using an approximate low-rank matrix that ignores certain covariance terms to make the matrix inversion faster.
Here we compare KOW, CBPS with the full covariance matrix (CBPS-full), and CBPS with its low-rank approximation (CBPS-approx) when increasing the number of time periods and the number of covariates. Specifically, following the linear-correct scenario presented in Section 5.1, we fixed the sample size equal to and randomly generated 100 samples considering , and , where is the total number of covariates for each . We fixed the number of covariates to be equal to when evaluating the mean computational times over time periods, and fixed the number of time periods to be equal to when evaluating scaling with the number of covariates. For each sample, we computed the KOW weights by solving equation (12) using kernel . We used Gaussian process marginal likelihood to tune the kernels’ hyperparameters and penalization parameter. We computed CBPS weights using the linear CBPS as in Section 5.1. We used the package to measure computational time on a PC with an i7-3770 processor, 3.4 GHz, 8 GB RAM, and a Linux Ubuntu 16.04 operating system.
The solid lines of Figure 6 represent mean computational times for KOW, dashed for CBPS-full, and dotted for CBPS-approx. When the number of time periods was relatively small, the mean computational time of KOW was higher than that of both CBPS methods (left panel of Figure 6). However, the mean computational time of KOW increased linearly with the number of time periods, while that of both CBPS methods increased exponentially. This is because, as presented in Section 3.1, the number of imbalances that we need to minimize grows linearly in the number of time periods. The mean computational time required by KOW remained constant when increasing the number of covariates, while it increased for both CBPS-full and CBPS-approx, with CBPS-full increasing more rapidly. In summary, KOW was less affected by the total number of time periods and covariates than CBPS with either the full or the low-rank approximate covariance matrix.
Computing KOW required three steps: tuning the parameters, constructing the matrices for problem (12), and solving problem (12). On average, for , the first step required 21% of the total computational time, the second 78.8%, and the last 0.2%. Thus, solving the optimization problem itself is very fast and is not the bottleneck.
6 KOW with informative censoring
In longitudinal studies, participants may drop out of the study before the end of the follow-up time, and their outcomes are then missing. When this missingness is due to reasons related to the study (i.e., related to the potential outcomes), selection bias is introduced. This phenomenon is referred to as informative censoring, and it is common in the context of survival analysis, where the interest is in analyzing time-to-event outcomes. Under the assumptions of consistency, positivity, and sequential ignorability of both treatment and censoring, Robins et al.  showed that a consistent estimate of the causal effect of a time-varying treatment can be obtained by weighting each subject at each time period by the product of inverse probability of treatment and censoring weights. Inverse probability of treatment weights are obtained as presented in Section 2, while inverse probability of censoring weights are usually obtained by inverting the probability of being uncensored at time , given the treatment and confounder history up to time .
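The product form of the combined treatment and censoring weights can be sketched as follows. This is a generic illustration of unstabilized IPTCW, with hypothetical array names; in practice the probabilities would come from fitted logistic models for treatment and for remaining uncensored.

```python
import numpy as np

def iptcw_weights(p_treat, p_uncens, A):
    """Unstabilized inverse probability of treatment and censoring weights.

    p_treat[i, t]  : fitted P(A_t = 1 | history) for unit i
    p_uncens[i, t] : fitted P(uncensored at t | history) for unit i
    A[i, t]        : observed binary treatment
    The weight is the product over t of
    1 / [ P(A_t = a_t | history) * P(uncensored at t | history) ].
    """
    p_a = np.where(A == 1, p_treat, 1.0 - p_treat)  # prob. of observed treatment
    return np.prod(1.0 / (p_a * p_uncens), axis=1)
```

For example, a unit treated with probability 0.5 at each of two periods and never at risk of censoring receives weight (1/0.5) * (1/0.5) = 4.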
In this section, we extend KOW to similarly handle informative censoring. We demonstrate that under sequentially ignorable censoring, minimizing the very same discrepancies as before at each time period, restricted to the units for which data are available, actually controls for both time-dependent confounding and informative censoring. Thus, KOW naturally extends to the setting with informative censoring.
Let for indicate whether unit is censored in time period and let . Note that implies that and that implies that . All we require is that we (at least) observe outcomes whenever , whenever , and whenever . Note we might observe more, such as the treatment at time for a unit with , or perhaps only some of the data after censoring is corrupted, but that is not required. We summarize the assumption of sequentially ignorable censoring as
Let us redefine
Similar to Theorem 1, the following theorem shows that we can write the difference between the weighted average outcome among the uncensored -treated units, , and the true average potential outcome of , , as the sum over time points of discrepancies involving the values of treatment and confounder histories up to time .
We then define the empirical counterparts to as before in equation (7) but limit ourselves to uncensored units, as in equation (14). We similarly define imbalance, , and the worst case imbalance , as before in equations (8) and (9). Finally, again using kernels to specify norms, we obtain weights that balance covariates to control for time-dependent confounding and account for informative censoring while controlling precision by solving the quadratic optimization problem,
where , , , , is a positive semidefinite matrix defined as , is the diagonal matrix with in its ith diagonal entry (recall for all ), and is the vector of all ones.
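The structure of such an equality-constrained quadratic program can be sketched as follows. Since the exact matrices of the KOW problem are not reproduced here, this is an illustrative stand-in, assuming an objective of the form w'Qw - 2q'w plus a penalty on deviations from uniform weights, with a mean-one constraint as used in Section 7.1; the function name and objective details are assumptions, not the authors' formulation.

```python
import numpy as np

def solve_kow_like_qp(Q, q, lam, n):
    """Minimize  w'Qw - 2 q'w + lam * ||w - 1||^2   s.t.  mean(w) = 1.

    Q plays the role of a kernel-based imbalance term and lam penalizes
    extreme weights. Solved in closed form via the KKT system of the
    equality-constrained QP.
    """
    H = 2.0 * (Q + lam * np.eye(n))      # Hessian of the objective
    c = 2.0 * (q + lam * np.ones(n))     # constant part of the gradient
    a = np.ones(n) / n                   # mean(w) = 1  <=>  a'w = 1
    # Stationarity + constraint:  [H  a; a' 0] [w; mu] = [c; 1]
    KKT = np.block([[H, a[:, None]], [a[None, :], np.zeros((1, 1))]])
    sol = np.linalg.solve(KKT, np.append(c, 1.0))
    return sol[:n]

# With no imbalance term (Q = 0, q = 0), the penalty alone pulls every
# weight to 1, which also satisfies the mean-one constraint.
w = solve_kow_like_qp(np.zeros((4, 4)), np.zeros(4), lam=1.0, n=4)
```

Solving such a system is fast, consistent with the observation in Section 5.5 that the optimization step is a negligible fraction of KOW's total computational time.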
In this section, we present two empirical applications of KOW. In the first, we estimate the effect of treatment initiation on time to death among PLWH. In the second, we evaluate the impact of negative advertising on election outcomes.
7.1 Effect of HIV treatment on time to death
In this section, we analyze data from the Multicenter AIDS Cohort Study (MACS) to study the effect of the initiation time of treatment on time to death among PLWH. Indeed, due to the longitudinal nature of HIV treatment and the presence of time-dependent confounding, MSMs have been widely used to study causal effects in this domain [3,30,41,44,45, among others]. As an example of time-dependent confounding, CD4 cell count, a measurement used to monitor immune defenses in PLWH and to make clinical decisions, is a predictor of both treatment initiation and survival, as well as being itself influenced by prior treatments. Recognizing the censoring in the MACS data, Hernán et al.  showed how to estimate the parameters of the MSM by inverse probability of treatment and censoring weighting (IPTCW).
Here, we apply KOW as proposed in Section 6 to handle both time-dependent confounding and informative censoring while controlling precision. We considered the following potential time-dependent confounders associated with the effect of treatment initiation and the risk of death: CD4 cell count, white blood cell count, red blood cell count, and platelets. We also identified age at baseline as a potential time-invariant confounding factor. We considered only recently developed HIV treatments, thus including in the analysis only PLWH who started treatment after 2001. The final sample was composed of a total of people and 760 visits, with a maximum of visits per person. We considered two sets of KOW weights, obtained by using a product of (1) two linear kernels, one for the treatment history and one for the confounder history ( ), or (2) a linear kernel for the treatment history and a polynomial kernel of degree 2 for the confounder history ( ). We scaled the covariates related to the treatment and confounder history, and tuned the kernels’ hyperparameters and the penalization parameter by using Gaussian process marginal likelihood as presented in Section 4. Following previous approaches studying HIV treatment using IPTCW that modeled treatment and censoring using single time lags [30,40,41], we included in each kernel the time-invariant confounders, the previous treatment, , and the time-dependent confounders at time , , instead of the entire histories. As described in Section 4, since we have repeated observations of outcomes, we computed a set of KOW weights by solving the optimization problem (16) for each horizon up to . In addition, as described in Section 4, we constrained the mean of the weights to be equal to one.
We compared the results obtained by KOW with those from IPTCW and sIPTCW. The latter sets of weights were obtained by using a logistic regression on the treatment history and the aforementioned time-invariant and time-dependent confounders, using only one time lag for each of the treatment and time-dependent confounders, as done in previous approaches studying HIV treatment using IPTCW [30,40,41]. The numerator of sIPTCW was computed by modeling in equation (4) with a logistic regression on the treatment history only, using one time lag. We modeled the inverse probability of censoring weights similarly. The final sets of IPTCW and sIPTCW weights were obtained by multiplying the inverse probability of treatment and censoring weights. We did not compare the results with those of CBPS because it does not handle informative censoring. In particular, CBPS requires a complete matrix of observed time-dependent confounders, while in the MACS dataset many entries are missing.
We estimated the hazard ratio of the risk of death by using a weighted Cox regression model  weighted by KOW, IPTCW, or sIPTCW and using robust standard errors . We used and its interface to solve equation (16) and obtain the KOW weights, the package to perform the marginal likelihood estimation of hyperparameters, the package to call from within , the package  to fit the treatment models for IPTCW and sIPTCW, and the command (with robust variance estimation) to fit the outcome model. It took 13.5 s to obtain a solution for KOW. Table 1 summarizes the results of our analysis. Both KOW ( ) and KOW ( ) showed a significant protective effect of HIV treatment on time to death among PLWH. IPTCW showed a similar effect but with lower precision, resulting in a non-significant effect. With precision similar to that of KOW, sIPTCW showed a non-significant effect of HIV treatment on time to death. Whereas analyses based on IPTCW and sIPTCW led to non-significant and inconsistent conclusions, the results we obtained by using KOW show that PLWH can benefit from HIV treatment, as shown in independent randomized placebo-controlled trials [47,48].
Note: is the estimated hazard ratio of the effect of HIV treatment initiation on time to death. SE is the estimated robust standard error. Weights were obtained by using KOW ( ): a product of two linear kernels, one for the treatment history and one for the confounder history; KOW ( ): a product between a linear kernel for the treatment history and a polynomial kernel of degree 2 for the confounder history; IPTCW: a logistic regression on the treatment history and the time-invariant and time-dependent confounders (using only one time lag for each of the treatment and time-dependent confounders); sIPTCW: stabilized IPTCW. indicates statistical significance at the 0.05 level.
7.2 Impact of negative advertising on election outcomes
In this section, we analyze a subset of the dataset from the study by Blackwell  to estimate the impact of negative advertising on election outcomes. Because of the dynamic and longitudinal nature of the problem and the presence of time-dependent confounders, MSMs have been previously used to study this question . Specifically, poll numbers are time-dependent confounders, as they may both be affected by past negative advertising and affect future negative advertising. We constructed the subset of the data from the study by Blackwell  by considering the 5 weeks leading up to each of 114 elections held between 2000 and 2006 (58 US Senate, 56 US gubernatorial). Unlike in Section 7.1, in which the outcome was observed at each time period, in this analysis the binary election outcome was observed only at the end of each 5-week trajectory. In addition, all units were uncensored.
We estimated the parameters of two MSMs, the first having separate coefficients for negative advertising in each time period and the second having one coefficient for the cumulative effect of negative advertising. Each MSM was fit using weights given by each of KOW, IPTW, sIPTW, and CBPS (both full and approximate). We used the following time-dependent confounders: Democratic share of the polls, proportion of undecided voters, and campaign length. We also used the following time-invariant confounders: baseline Democratic vote share, proportion of undecided voters, status of incumbency, and election year and type of office. We obtained two sets of KOW weights by using a product of (1) two linear kernels, one for the history of negative advertising and one for the confounder history ( ) and (2) a linear kernel for the history of negative advertising and a polynomial kernel of degree 2 for the confounder history ( ). The kernels were over the complete confounder history up to time , , and two time-lags of treatment history, . We scaled the covariates and tuned the kernels’ hyperparameters and the penalization parameter by using Gaussian processes marginal likelihood. We obtained the final set of KOW weights by solving equation (12). We compared the results obtained by KOW with those from IPTW, sIPTW, CBPS-full, and CBPS-approx. To obtain the sets of IPTW, sIPTW, and CBPS weights, we used logistic models conditioned on the confounder history and two time-lags from the treatment history. To compute the numerator of sIPTW weights, we used a logistic regression conditioned only on two time-lags from the treatment history. 
We used and its interface to solve equation (16) and obtain the KOW weights, the package to perform the marginal likelihood estimation of hyperparameters, the package to call from within , the command to fit the treatment models for IPTW and sIPTW, the package for CBPS, the command to fit the outcome model, and the package to estimate robust standard errors. The computational time to obtain a solution was equal to 12.6 s for KOW, while it was equal to 104 s for CBPS-full and 3.8 s for CBPS-approx.
Table 2 summarizes the results of our analysis, reporting robust standard errors . The first six rows of Table 2 show the effects of time-specific negative advertising. The last two rows present the cumulative effect of negative advertising. KOW ( and ) and IPTW showed similar effects, with increased precision when using KOW except at time 4, at which both methods showed a significant negative effect but with greater precision when using IPTW. sIPTW, CBPS-full, and CBPS-approx showed a significant negative effect at time 3 with similar precision. No significant results were obtained when considering the cumulative effect of negative advertising. All methods except sIPTW showed a negative cumulative effect. KOW ( ) was the most precise. We conclude that the impact of negative advertising in the majority of the time periods, as well as its cumulative effect on election outcomes, is not statistically significant.
Note: is the estimated effect of negative advertising. SE is the estimated robust standard error. Weights were obtained by using, KOW ( ): a product of two linear kernels, one for the history of negative advertising and one for the confounder history; KOW ( ): a product between a linear kernel for the history of negative advertising and a polynomial kernel of degree 2 for the confounder history; IPTW: a logistic model conditioned on the confounder history and two time-lags from the treatment history; sIPTW: stabilized IPTW; CBPS-full: CBPS with full covariance matrix; CBPS-approx: CBPS with low-rank approximation. indicates statistical significance at the 0.05 level.
In this article, we presented KOW, which finds weights for fitting an MSM with the aim of balancing time-dependent confounders while controlling precision. That KOW uses mathematical optimization to directly and fully balance covariates as well as optimize precision explains its better performance over IPTW, sIPTW, and CBPS observed in our simulation study. In addition, as shown in Sections 3.2, 5, and 6, the proposed methodology only needs to minimize a number of discrepancies that grows linearly in the number of time periods, mitigates the possible misspecification of the treatment assignment model, allows balancing non-additive covariate relationships, and can be extended to control for informative censoring, which is a common feature of longitudinal studies.
Alternative formulations of our imbalance-precision optimization problem, equation (10), may be investigated. For example, additional linear constraints can be added to the optimization problem, as shown in the empirical application of Section 7.1, and different penalties can be considered to control for extreme weights. For instance, in equation (10), at the cost of no longer being able to use convex-quadratic optimization, one may directly penalize the covariance matrix of the WLS estimator rather than use a convex-quadratic surrogate as we do.
One may also change the nature of precision control. Here, we suggested penalization in an attempt to target total error. Alternatively, similar to ref. , we may reformulate equation (10) as a constrained optimization problem where the precision of the resulting estimator is constrained by an upper bound , thus seeking to minimize imbalances subject to a bound on precision. In our convex formulation, the two are equivalent by Lagrangian duality, in that for every precision penalization there is an equivalent precision bound . However, the constrained form may make specifying the parameters easier in some applications, as a practitioner may find it more natural to conceive of a desirable bound on precision. There may also be other ways to choose the penalization parameter. Here we suggested using maximum marginal likelihood, but cross validation based on predicting outcomes and their partial means may also be possible. In this article, we based the penalty on extreme weights on the norm of (additive) deviations from uniformity. While we found this to be one reasonable choice, there are many other options, e.g., negative entropy or negative empirical likelihood.
The flexibility of our approach is that any of these changes amount to simply modifying the optimization problem that is fed to an off-the-shelf solver. Indeed, we were able to extend KOW from the standard longitudinal setting to also handle both repeated observations of outcomes and informative censoring. In addition to offering flexibility, the optimization approach we took, which directly and fully minimized our error objective phrased in terms of covariate imbalances, was able to offer improvements on the state of the art.
Funding information: This article is based upon work supported by the National Science Foundation under Grants Nos. 1656996 and 1740822.
Conflict of interest: Authors state no conflict of interest.