Many studies aim to learn about the causal effects of longitudinal exposures or interventions using data in which these exposures are not randomly assigned. Specifically, consider a study in which baseline covariates, time-varying exposures or treatments, time-varying covariates, and an outcome of interest, such as death, are observed on a sample of subjects followed over time. The exposures of interest can both depend on past covariates and affect future covariates, as well as the outcome. Censoring may also occur, possibly in response to past treatment and covariates. Such data structures are ubiquitous in observational cohort studies. For example, a sample of HIV-infected patients might be followed longitudinally in clinic and data collected on antiretroviral prescriptions, determinants of prescription decisions including CD4+ T cell counts and plasma HIV RNA levels (viral loads), and vital status. Such data structures also occur in randomized trials when the exposure of interest is (non-random) compliance with a randomized exposure or includes non-randomized mediators of an exposure’s effect.
The causal effects of longitudinal exposures (as well as the effects of single time point exposures when the outcome is subject to censoring) can be formally defined by contrasting the distribution of a counterfactual outcome under different “interventions” to set the values of the exposure and censoring variables. For example, the counterfactual survival curve of HIV-infected subjects following immunological failure of antiretroviral therapy might be contrasted under a hypothetical intervention in which all subjects were switched immediately to a new antiretroviral regimen versus an intervention in which all subjects remained on their failing therapy. In the presence of censoring due to losses to follow up, these counterfactuals of interest might be defined under a further intervention to prevent censoring. Interventions such as these, under which all subjects in a population are deterministically assigned the same vector of exposure and censoring decisions (for example, do not switch and remain under follow up), are referred to as “static regimes.”
More generally, counterfactuals can be defined under interventions that assign a treatment or exposure level to each subject at each time point based on that subject’s observed past. For example, counterfactual survival might be compared under interventions to switch all patients to second line antiretroviral therapy the first time their CD4+ T cell count crosses a certain threshold, for some specified set of thresholds. Such subject-responsive treatment strategies have been referred to as individualized treatment rules, adaptive treatment strategies, or “dynamic regimes” (see, for example, Robins; Murphy et al.; Hernan et al.). Additional examples include strategies for deciding when to start antiretroviral therapy [6, 7] and strategies for modifying dose or drug choice based on prior response and adverse effects. Investigation of the effects of such dynamic regimes makes it possible to learn effective strategies for assigning an intervention based on a subject’s past and is thus relevant to any discipline that seeks to learn how best to use past information to make decisions that will optimize future outcomes.
The static and dynamic regimes described above are longitudinal – they involve interventions to set the value of multiple treatment and censoring variables over time. For example, counterfactual survival under no switch to second line therapy corresponds to a subject’s survival under an intervention to prevent switching at each time point from immunologic failure until death or the end of the study. A time-dependent causal dose–response curve, which plots the mean of the intervention-specific counterfactual outcome at time t as a function of the interventions through time t, can be used to summarize the effects of these longitudinal interventions. For example, a plot of the counterfactual survival probability as a function of time since immunologic failure, for a range of alternative CD4+ T cell count thresholds used to initiate a switch, captures the effect of alternative switching strategies on survival.
Formal causal frameworks provide a tool to establish the conditions under which such causal dose–response curves can be identified from the observed data. Longitudinal static and dynamic regimes are often subject to time-dependent confounding – time-varying variables may confound the effect of future treatments while themselves being affected by past treatment. Traditional approaches to the identification of point treatment effects, which are based on selection of a single set of covariates for regression or stratification-based adjustment, break down when such time-dependent confounding is present. However, the mean counterfactual outcome under a longitudinal static or dynamic regime may still be identified under the appropriate sequential randomization and positivity assumptions (reviewed in Robins and Hernan).
Under these assumptions, causal dose–response curves can be estimated by generating separate estimates of the mean counterfactual outcome for each time point and intervention (or regime) of interest. For example, one could generate separate estimates of the counterfactual survival curve for each CD4-based threshold for switching to second line therapy. In this manner, one obtains fits of the time-dependent causal dose–response curve for each of a range of possible thresholds, which together summarize how the mean counterfactual outcome at time t depends on the choice of threshold.
A number of estimators can be used to estimate intervention-specific mean counterfactual outcomes. These include inverse probability weighted (IPW) estimators (for example, [3, 5, 10]), “G-computation” estimators (typically based on parametric maximum likelihood estimation of the non-intervention components of the data generating process) (for example, [7, 11, 12]), augmented-IPW estimators (for example, [13–16, 31]), and targeted maximum likelihood (or minimum loss) estimators (TMLEs) (for example, [17, 18]). In particular, van der Laan and Gruber combine the targeted maximum likelihood framework [20, 21] with important insights and the iterated conditional expectation estimators established in Robins [3, 29] and Bang and Robins.
Both the theoretical validity and the practical utility of these estimators rely, however, on reasonable support for each of the interventions of interest, both in the true data generating distribution and in the sample available for analysis. For example, in order to estimate how survival is affected by the threshold CD4 count used to initiate an antiretroviral treatment switch, a reasonable number of subjects must in fact switch at the time indicated by each threshold of interest. Without such support, estimators of the intervention-specific outcome will be ill-defined or extremely variable. Although one might respond to this challenge by creating coarsened versions of the desired regimes, so that sufficient subjects follow each coarsened version, such a method introduces bias and leaves open the question of how to choose an optimal degree of coarsening.
Since adequate support for every intervention of interest is often not available, Robins introduced marginal structural models (MSMs) that pose parametric or small semiparametric models for the counterfactual conditional mean outcome as a function of the choice of intervention and time. For example, static MSMs have been used to summarize how the counterfactual hazard of death varies as a function of when antiretroviral therapy is initiated and when an antiretroviral regimen is switched. The extrapolation assumptions implicitly defined by non-saturated MSMs make it possible to estimate the coefficients of the model, and thereby the causal dose–response curve, even when few or no subjects follow some interventions of interest.
While MSMs were originally developed for static interventions [8, 10, 23, 24], they naturally generalize to classes of dynamic (or even more generally, stochastic) interventions, as shown in van der Laan and Petersen and Robins et al. Dynamic MSMs have been used, for example, to investigate how the counterfactual hazard of death varies as a function of the CD4+ T cell count threshold used to initiate antiretroviral therapy or to switch antiretroviral therapy regimens. Because the true shape of the causal dose–response curve is typically unknown, we have suggested that MSMs be used as working models. The target causal coefficients can then be defined by projecting the true causal dose–response curve onto this working model [20, 27].
The coefficients of both static and dynamic MSMs are frequently estimated using IPW estimators [2, 8, 10, 26]. These estimators have a number of attractive qualities: they can be intuitively understood, they are easy to implement, and they provide an influence curve-based approach to standard error estimation. However, IPW estimators also have substantial shortcomings. In particular, they are biased if the treatment mechanism used to construct the weights is estimated poorly (for example, using a misspecified parametric model). Further, IPW estimators are unstable in settings of strong confounding (near or partial positivity violations), and the resulting bias in both point and standard error estimates can result in poor inference (for a review of this issue see Petersen et al.). Dynamic MSMs can exacerbate this problem, as the options for effective weight stabilization are limited [6, 26].
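To make the mechanics concrete, the following is a minimal numpy-only sketch of an IPW fit of a linear marginal structural model in a hypothetical two-period setting. The data generating process, variable names, and the use of the true treatment probabilities (rather than estimated ones) are illustrative assumptions, not the estimators developed later in this paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20000

# Hypothetical two-period data: L0 confounds both treatments and the outcome.
L0 = rng.binomial(1, 0.5, n)
pA0 = 0.3 + 0.4 * L0
A0 = rng.binomial(1, pA0)
pA1 = 0.2 + 0.3 * L0 + 0.3 * A0
A1 = rng.binomial(1, pA1)
pY = 0.1 + 0.2 * L0 + 0.15 * A0 + 0.15 * A1
Y = rng.binomial(1, pY)

# IPW weight: inverse cumulative probability of the observed treatment
# sequence given the measured past (true g used here for clarity; in
# practice g would be estimated, e.g. by logistic regression).
g = (np.where(A0 == 1, pA0, 1 - pA0) *
     np.where(A1 == 1, pA1, 1 - pA1))
w = 1.0 / g

# Weighted least-squares fit of the linear MSM E[Y_{a0,a1}] = b0 + b1*a0 + b2*a1;
# weighting by 1/g removes the confounding by L0.
X = np.column_stack([np.ones(n), A0, A1])
WX = X * w[:, None]
beta = np.linalg.solve(WX.T @ X, WX.T @ Y)
print(np.round(beta, 2))  # close to the true values (0.20, 0.15, 0.15)
```

If the weights were instead built from a misspecified model for the treatment mechanism, the same weighted regression would be biased, which is the shortcoming noted above.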
Asymptotically efficient and double robust augmented-IPW estimators of the estimand corresponding to longitudinal static MSM parameters were developed by Robins and Rotnitzky, Robins, and Robins et al. These estimators are defined as a solution of an estimating equation, and as a result may be unstable due to failure to respect the global constraints implied by the model and the parameter. Robins [13, 29] and Bang and Robins introduced an alternative double robust estimating equation-based estimator of longitudinal MSM parameters based on the key insight that both the statistical target parameter and the corresponding augmented-IPW estimating function (efficient influence curve) for MSMs on the intervention-specific mean can be represented as a series of iterated conditional expectations. In addition, they proposed a targeted sequential regression method to estimate the nuisance parameters of the augmented-IPW estimating equation. This innovative idea allowed construction of a double robust estimator that relies only on estimation of minimal nuisance parameters beyond the treatment mechanism.
In this paper, we describe a double robust substitution estimator of the parameters of a longitudinal marginal structural working model. The estimator presented incorporates the key insights and prior estimator of Robins [13, 29] and Bang and Robins into the TMLE framework. Specifically, we expand on this prior work in several ways. We propose a TMLE for marginal structural working models for longitudinal dynamic regimes, possibly conditional on pre-treatment covariates. The TMLE described is defined as a substitution estimator rather than as a solution to an estimating equation and incorporates data-adaptive/machine learning methods in generating initial fits of the sequential regressions. Finally, we further generalize the TMLE to apply to a larger class of parameters defined as arbitrary functions of intervention-specific means across a user-supplied class of interventions.
TMLEs for the parameters of MSMs for “point treatment” problems, in which adjustment for a single set of covariates known not to be affected by the intervention of interest is sufficient to control for confounding, including history-adjusted MSMs, have been previously described [30, 31]. However, the parameter of a longitudinal MSM on the intervention-specific mean under sequential interventions subject to time-dependent confounding is identified as a distinct, and substantially more complex, estimand than the estimand corresponding to a point treatment MSM, and thus requires distinct estimators. An alternative TMLE for longitudinal static MSMs, which we refer to as a stratified TMLE, was described by Schnitzer et al. The stratified TMLE uses the longitudinal TMLE of van der Laan and Gruber for the intervention-specific mean to estimate each of a set of static treatments and combines these estimates into a fit of the coefficients of a static longitudinal MSM on both survival and hazard functions. The stratified TMLE resulted in substantially lower standard error estimates than an IPW estimator in an applied data analysis and naturally generalizes to dynamic MSMs. However, it remains vulnerable when there is insufficient support for some interventions of interest. In contrast, the TMLE we describe here pools over the set of dynamic or static interventions of interest, as well as optionally over time, when updating initial fits of the likelihood. It thus substantially relaxes the degree of data support required to remain an efficient double robust substitution estimator.
In summary, a large class of causal questions can be formally defined using static and dynamic longitudinal MSMs, and the parameters of these models can be identified from non-randomized data under well-studied assumptions. This article describes a TMLE that builds on the work of Robins [13, 29] and Bang and Robins in order to directly target the coefficients of a marginal structural (working) model for a user-supplied class of longitudinal static or dynamic interventions. The theoretical properties of the pooled TMLE are presented, its implementation is reviewed, and its practical performance is compared to alternatives using both simulated and real data. R code implementing the estimator and evaluating it in simulations is provided in online supplementary materials and as an open source R library ltmle.
1.1 Organization of paper
In Section 2, we define the observed data and a statistical model for its distribution. We then specify a non-parametric structural equation model for the process assumed to generate the observed data. We define counterfactual outcomes over time based on static or dynamic interventions on multiple treatment and censoring nodes in this system of structural equations. Our target causal quantity is defined using a marginal structural working model on the mean of these intervention-specific counterfactual outcomes at time t. The general case we present includes marginal structural working models on both the counterfactual survival and the hazard. We briefly review the assumptions under which this causal quantity is identified as a parameter of the observed data distribution. The statistical estimation problem is thus defined in terms of the statistical model and statistical target parameter.
Section 3 presents the TMLE defined by (a) representation of the statistical target parameter in terms of an iteratively defined set of conditional mean outcomes, (b) an initial estimator for the intervention mechanism and for these conditional means, (c) a submodel through this initial estimator and a loss function chosen so that the generalized score of the submodel with respect to this loss spans the efficient influence curve, (d) a corresponding updating algorithm that updates the initial estimator and iterates the updating till convergence, and (e) final evaluation of the TMLE as a plug-in estimator. We also present corresponding influence curve-based confidence intervals for our target parameter.
Section 4 illustrates the results presented in Section 3 using a simple three time point example and focusing on a marginal structural working model for counterfactual survival probability over time. This example is used to clarify understanding of notation and to provide a step-by-step overview of implementation of the pooled TMLE.
Section 5 compares the pooled TMLE described in this paper with alternative estimators for the parameters of longitudinal dynamic MSMs for survival. We provide a brief overview of the stratified TMLE, discuss scenarios in which each estimator may be expected to offer superior performance, and illustrate the breakdown of the stratified TMLE in a finite sample setting in which some interventions of interest have no support. As IPW estimators are currently the most common approach used to fit longitudinal dynamic MSMs, we also discuss two IPW estimators for these parameters.
Section 6 presents a simulation study in which the pooled TMLE is implemented for a marginal structural working model for survival at time t. Its performance is compared to IPW estimators and to the stratified TMLE for a simple data generating process and in a simulation designed to be similar to the data analysis presented in the following section, which includes time-dependent confounding and right censoring.
Section 7 presents the results of a data analysis investigating the effect of switching to second line therapy following immunologic failure of first line therapy using data from HIV-infected patients in the International Epidemiological Databases to Evaluate AIDS (IeDEA), Southern Africa. Throughout the paper, we illustrate notation and concepts using a simplified data structure based on this example.
Appendices contain a derivation of the efficient influence curve, further simulation details, an alternative TMLE, and a reference table for notation. In online supplementary files, we present R code that implements the pooled TMLE, the stratified TMLE, and two IPW estimators for a marginal structural working model of survival. A corresponding publicly available R-package, ltmle, was released in May 2013 (http://cran.r-project.org/web/packages/ltmle/).
2 Definition of statistical estimation problem
Consider a longitudinal study in which the observed data structure O on a randomly sampled subject is coded as $O = (L(0), A(0), L(1), A(1), \ldots, A(K), L(K+1))$, where $L(0)$ denotes baseline covariates, $A(t)$ denotes an intervention node at time t, and $L(t)$ denotes covariates measured between intervention nodes $A(t-1)$ and $A(t)$. Assume that there is an outcome process $Y(t) \subseteq L(t)$ for $t = 1, \ldots, K+1$, where $Y(K+1)$ is the final outcome measured after the final treatment $A(K)$. The intervention node $A(t) = (A_1(t), A_2(t))$ has a treatment node $A_1(t)$ and a censoring indicator $A_2(t)$, where $A_2(t) = 1$ indicates that the subject is right censored by time t. We observe n independent and identically distributed (i.i.d.) copies of O, and we will denote the probability distribution of O with $P_{O,0}$, or more simply, as $P_0$. Throughout, we use subscript 0 to denote the true distribution.
Running example. Here and in subsequent sections, we illustrate notation using an example in which n i.i.d. HIV-infected subjects with immunological failure on first line therapy are sampled from some target population. Here $t = 0$ denotes time of immunological failure. $L(t)$ denotes time-varying covariates, and includes CD4+ T cell count at time t and $Y(t)$, an indicator of death by time t. In addition to baseline values of these time-varying covariates, $L(0)$ includes non-time-varying covariates such as sex. The intervention nodes of interest are $A(t) = A_1(t)$, where $A_1(t)$ is defined as an indicator of switch to second line therapy by time t; in our simplified example, we assume no right censoring. For notational convenience, after a subject dies all variables for that subject are defined as equal to their last observed value.
2.1 Statistical model
We use the notation $\bar{L}(t) = (L(0), \ldots, L(t))$ to denote the history of time-dependent variable L from time 0 through t. Define the “parents” of a variable $L(t)$, denoted $Pa(L(t))$, as those variables that precede $L(t)$ (i.e., $(\bar{A}(t-1), \bar{L}(t-1))$). Similarly, $\bar{A}(t)$ is used to denote the history of the intervention process and $Pa(A(t))$ to denote a specified subset of the variables that precede $A(t)$ such that the distribution of $A(t)$ given the whole past is equal to the distribution of $A(t)$ given its parents $Pa(A(t))$. Under our causal model, which we introduce below, these parent sets $Pa(L(t))$ and $Pa(A(t))$ correspond to the set of variables that may affect the values taken by $L(t)$ and $A(t)$, respectively.
We use $Q_{0,L(t)}$ to denote the conditional distribution of $L(t)$, given $Pa(L(t))$, and $g_{0,A(t)}$ to denote the conditional distribution of $A(t)$, given $Pa(A(t))$. We also use the notation $Q_0 = \prod_{t=0}^{K+1} Q_{0,L(t)}$ and $g_0 = \prod_{t=0}^{K} g_{0,A(t)}$, and define $Q$ and $g$ analogously for an arbitrary distribution P. In our example, $Q_{0,L(k)}$ denotes the joint conditional distribution of CD4 count and death at time k, given the observed past (including past CD4 count and switching history), and $g_{0,A(k)}$ denotes the conditional probability of having switched to second line by time k given the observed past (deterministically equal to one for those time points at which a subject has already switched).
The probability distribution of O can be factorized according to the time-ordering as
$$p_0(O) = \prod_{t=0}^{K+1} Q_{0,L(t)}(L(t) \mid Pa(L(t))) \prod_{t=0}^{K} g_{0,A(t)}(A(t) \mid Pa(A(t))) \equiv Q_0 g_0.$$
We consider a statistical model $\mathcal{M}$ for $P_0$ that possibly assumes knowledge on the intervention mechanism $g_0$. For example, the treatment of interest, such as switch time, may be known to be randomized, or to be assigned based on only a subset of the observed past. If $\mathcal{G}$ is the set of all values for $g_0$ and $\mathcal{Q}$ the set of possible values of $Q_0$, then this statistical model can be represented as $\mathcal{M} = \{P = Qg : Q \in \mathcal{Q}, g \in \mathcal{G}\}$. In this statistical model, $\mathcal{Q}$ puts no restrictions on the conditional distributions $Q_{L(t)}$, $t = 0, \ldots, K+1$.
2.2 Causal model and counterfactuals of interest
By specifying a structural causal model [35, 36] or equivalently, a system of non-parametric structural equations, it is assumed that each component of the observed longitudinal data structure (e.g. $L(t)$ or $A(t)$) is a function of a set of observed parent variables and an unmeasured exogenous error term. Specifically, consider the non-parametric structural equation model (NPSEM) defined by
$$L(t) = f_{L(t)}(Pa(L(t)), U_{L(t)}), \quad t = 0, \ldots, K+1,$$
and
$$A(t) = f_{A(t)}(Pa(A(t)), U_{A(t)}), \quad t = 0, \ldots, K,$$
in terms of a set of deterministic functions $(f_{L(t)}, f_{A(t)} : t)$ and a vector of unmeasured random errors or background factors $U$ [35, 36].
To continue our HIV example, we might specify a causal model in which time-varying CD4 count, death, and the decision to switch each potentially depend on a subject’s entire observed past, as well as unmeasured factors. Alternatively, if we knew that switching decisions were made only in response to a subject’s most recent CD4 count and baseline covariates, the parent set of $A(t)$ could be restricted to exclude earlier CD4 count values.
This causal model represents a model for the distribution of $(U, O)$ and provides a parameterization of the distribution of the observed data structure O in terms of the distribution of the random variables modeled by the system of structural equations. Let $P_{U,O}$ denote the latter distribution. The causal model encodes knowledge about the process, including both measured and unmeasured variables, that generated the observed data. It also implies a model for the distribution of counterfactual random variables under specific interventions on (or changes to) the observed data generating process. Specifically, a post-intervention (or counterfactual) distribution is defined as the distribution that O would have had under a specified intervention to set the value of the intervention nodes $\bar{A} = (A(0), \ldots, A(K))$.
The intervention of interest might be static, with $A(t)$ replaced by some constant $a(t)$ for all subjects. For example, an intervention to set $A(t) = 0$ for $t = 0, \ldots, K$ corresponds to a static intervention to delay switching indefinitely for all subjects. Alternatively, the intervention might be dynamic, with $A(t)$ replaced by some specified function $d(t)$ of a subject’s observed covariates. For example, an intervention could set $A(t)$ to 1 the first time a subject’s CD4 count drops below some threshold. As static regimes are a special case of dynamic regimes, in the following sections we define the statistical estimation problem and develop our estimator for the more general dynamic case. Throughout, we use “rule” to refer to a specific intervention, static or dynamic, that sets the values of $\bar{A} = (A(0), \ldots, A(K))$.
Given a rule d, the counterfactual random variable $L_d$ is defined by deterministically setting all the $A(t)$ nodes equal to $d(t)(\bar{L}_d(t))$ in the system of structural equations. The probability distribution of this counterfactual is called the post-intervention or counterfactual distribution of L and is denoted with $P_d$. Causal effects are defined as parameters of a collection of post-intervention distributions under a specified set of rules. For example, we might compare mean counterfactual survival over time under a range of possible switch times.
2.3 Marginal structural working model
Our causal quantity of interest is defined using a marginal structural working model to summarize how the mean counterfactual outcome at time t varies as a function of the intervention rule d, time point t, and possibly some baseline covariate V that is a function of the collection of all baseline covariates $L(0)$. Specifically, given a class of dynamic treatment rules $\mathcal{D}$, we can define a true time-dependent causal dose–response curve $(d, t, V) \mapsto E_0(Y_d(t) \mid V)$ for $d \in \mathcal{D}$ and some subset $\tau \subseteq \{1, \ldots, K+1\}$ of time points. Note that choice of V (as well as choice of $\mathcal{D}$ and $\tau$) depends on the scientific question of interest. In many cases V will be defined as the empty set. In other cases, it may be of interest to estimate how the causal dose–response curve varies depending on the value of some subset of baseline variables.
We specify a working model $m(d, t, V \mid \beta)$ for this true time-dependent causal dose–response curve. Our causal quantity of interest is then defined as a projection of the true causal dose–response curve onto this working model, which yields a definition $\beta_0$ representing this projection. For example, if the outcome is bounded in $[0, 1]$, we may use a logistic working model $m(d, t, V \mid \beta) = \mathrm{expit}(\beta \phi(d, t, V))$ for a set of basis functions $\phi(d, t, V)$, and define our causal quantity of interest as
$$\beta_0 = \arg\max_{\beta} E_0 \sum_{d \in \mathcal{D}} \sum_{t \in \tau} h(d, t, V) \left[ E_0(Y_d(t) \mid V) \log m(d, t, V \mid \beta) + (1 - E_0(Y_d(t) \mid V)) \log (1 - m(d, t, V \mid \beta)) \right], \quad (1)$$
where $h(d, t, V)$ is a user-specified weight function. We discuss choice of $h$ further below.
Such a $\beta_0$ solves the equation
$$0 = \sum_{d \in \mathcal{D}} \sum_{t \in \tau} E_0 \left[ h(d, t, V)\, \phi(d, t, V) \left( E_0(Y_d(t) \mid V) - m(d, t, V \mid \beta_0) \right) \right].$$
This equation can be replaced by one in which $E_0(Y_d(t) \mid V)$ is replaced by $Y_d(t)$ itself, which corresponds with the score equation of the weighted logistic log-likelihood loss. In this case we have that $\beta_0$ is a well-defined parameter of the distribution of the counterfactual outcome process.
To be completely general, we will define our causal quantity of interest as a function f of $(E_0(Y_d(t) \mid V) : d \in \mathcal{D}, t \in \tau)$ and the distribution of V. Thus we define
$$\psi_0 = f\left( \left( E_0(Y_d(t) \mid V) : d \in \mathcal{D}, t \in \tau \right), P_{0,V} \right).$$
In addition to including the above example, this general formulation allows us to include marginal structural working models on continuous outcomes and on the intervention-specific hazard.
2.3.1 Choice of a weight function
Unless one is willing to assume that the MSM is correctly specified, choice of the weight function $h(d, t, V)$ changes the target quantity being estimated. Choice of the weight function should thus be guided by the motivating scientific question. For example, the simple weight function $h(d, t, V) = 1$ gives equal weight to all time points and switch times. Alternatively, choice of a weight function equal to the marginal probability of following rule d through time t within strata of V gives greater weight to those rule, time, and baseline strata combinations with more support in the data, and zero weight to values without support. As discussed further below, choice of a weight function can thus also affect both identifiability of the target parameter and the asymptotic and finite sample properties of IPW and TML estimators.
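As a small illustration, the snippet below computes the two weight-function choices just discussed for rules indexed by switch time, using a hypothetical vector of observed switch times; the data layout and function names are illustrative assumptions, not part of the methods developed here.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 1000

# Hypothetical observed switch times; np.inf encodes "never switched".
switch_time = rng.choice([0, 1, 2, np.inf], size=n, p=[0.3, 0.3, 0.2, 0.2])

def h_uniform(s, t):
    """Equal weight for every (rule, time) pair."""
    return 1.0

def h_followers(s, t):
    """Empirical marginal probability of following rule 'switch at time s'
    through time t: switched exactly at s if s <= t, not yet switched if s > t."""
    follows = np.where(s <= t, switch_time == s, switch_time > t)
    return float(follows.mean())

print(h_uniform(0, 2), h_followers(0, 2))
```

With the second choice, rules with little support in the data (few followers) contribute little to the projection, which stabilizes estimation at the cost of redefining the target quantity.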
2.3.2 Running example
Continuing our HIV example, recall that static regimes are a special case of dynamic regimes and define the set of treatment rules of interest $\mathcal{D}$ as the set of possible switch times (switch at time 0, switch at time 1, …, never switch). We might focus on the marginal counterfactual survival curves under a range of switch times (with V defined as the empty set). Alternatively, we might investigate how survival under a specific switch time differs among subjects with a CD4+ T cell count below versus above a given threshold at time of failure (in which case V is an indicator that baseline CD4 count falls below that threshold). For simplicity, for the remainder of the paper we use a running example in which we avoid conditioning on baseline covariates (i.e. V is the empty set).
The true time-dependent causal dose–response curve corresponds to the set of counterfactual survival curves (through time $K+1$) that would have been observed for the population as a whole under each possible switch time. In this example, each rule d implies a single vector $\bar{a}_d$; we use $\bar{a}_d$ to refer to the value of $\bar{A}$ implied by rule d and $s_d$ to refer to the switch time assigned by rule d. One might then specify the following marginal structural working model to summarize how the counterfactual probability of death by time t varies as a function of t and assigned switch time:
$$m(d, t \mid \beta) = \mathrm{expit}\left( \beta_1 + \beta_2 t + \beta_3 x(d, t) \right),$$
where $x(d, t) = t - s_d$ is time since switch for subjects who have switched by time t, and otherwise 0. For simplicity, we choose $h(d, t) = 1$ and define the target causal quantity of interest as the projection of $(E_0 Y_d(t) : d, t)$ onto this working model according to (1).
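The pooled structure of such a working model can be made concrete by constructing its design matrix over all rules and time points. The basis $\phi(d, t) = (1, t, (t - s_d)_+)$ below mirrors the working model sketched above; the horizon K = 5 is chosen arbitrarily for illustration.

```python
import numpy as np

K = 5                                         # number of time points (arbitrary)
switch_times = list(range(K + 1)) + [np.inf]  # rules: switch at 0..K, never switch

# Pooled design matrix with rows phi(d, t) = (1, t, (t - s_d)+) over all
# rules d (indexed by switch time s_d) and outcome times t = 1..K+1.
rows = []
for s in switch_times:
    for t in range(1, K + 2):
        time_since_switch = max(t - s, 0.0)   # 0 before the switch (and for "never")
        rows.append([1.0, float(t), time_since_switch])

design = np.array(rows)
print(design.shape)  # (42, 3): 7 rules x 6 time points, 3 basis functions
```

Both the IPW and TML estimators discussed later fit the working-model coefficients by pooling person-time observations over exactly this kind of (rule, time) grid.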
2.4 Identifiability and definition of statistical target parameter
We assume the sequential randomization assumption
$$A(t) \perp L_d \mid Pa(A(t)), \quad t = 0, \ldots, K, \text{ for all } d \in \mathcal{D}, \quad (2)$$
(noting that weaker identifiability assumptions are also possible; see, for example, Robins and Hernan). In our HIV example, the plausibility of this assumption would be strengthened by measuring all determinants of the decision to switch to second line therapy that also affect mortality via pathways other than switch time.
We further assume positivity, informally an assumption of support for each rule of interest across covariate histories compatible with that rule of interest. Specifically, for each $d \in \mathcal{D}$ and each covariate and treatment history compatible with rule d that occurs with positive probability under $P_0$, we assume
$$g_{0,A(t)}\left( A(t) = d(t)(\bar{L}(t)) \mid Pa(A(t)) \right) > 0, \quad t = 0, \ldots, K. \quad (3)$$
In our HIV example, in which $A(t)$ is an indicator of having switched by time t, a subject who has not already switched should have some positive probability of both switching and not switching regardless of his covariate history. Under these assumptions, the counterfactual probability distribution of $L_d$ is identified from the true observed data distribution $P_0$ and given by the G-computation formula:
$$p_d(l) = \prod_{t=0}^{K+1} Q_{0,L(t)}\left( l(t) \mid \bar{l}(t-1), \bar{A}(t-1) = \bar{d}(t-1)(\bar{l}(t-1)) \right), \quad (4)$$
where $\bar{d}(t-1)(\bar{l}(t-1))$ denotes the treatment history assigned by rule d to a subject with covariate history $\bar{l}(t-1)$. Thus this G-computation formula is defined by the product over all L-nodes of the conditional distribution of the L-node, given its parents, and given that treatment was assigned according to rule d. If identifiability assumptions (2) and (3) hold for each rule $d \in \mathcal{D}$, then the time-dependent causal dose–response curve is also identified from $P_0$ through the collection of G-computation formulas $(p_d : d \in \mathcal{D})$. For the remainder of the paper, we choose $\tau = \{1, \ldots, K+1\}$ and at times suppress the index set $\tau$.
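The G-computation formula can be evaluated by Monte Carlo: simulate each L-node forward from (estimates of) its conditional distribution given the past, with every intervention node set by the rule. The sketch below does this for a hypothetical data generating process with stand-in conditional laws; it illustrates the formula itself, not the estimators developed in this paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def g_comp_survival(rule, n_sim=100_000, K=2):
    """Monte Carlo evaluation of the G-computation formula: draw each L-node
    from a (here hypothetical, stand-in) conditional law given the simulated
    past, with every intervention node A(t) set by the rule d."""
    cd4 = rng.normal(200, 50, n_sim)     # L(0): baseline CD4 count
    alive = np.ones(n_sim, dtype=bool)
    a_prev = np.zeros(n_sim)
    for t in range(K + 1):
        # Intervene on A(t); after death, nodes keep their last value.
        a = np.where(alive, rule(t, cd4), a_prev)
        # Draw L(t+1) given the intervened past: switching raises CD4 here.
        cd4 = np.where(alive, cd4 + 30 * a - 10 + rng.normal(0, 20, n_sim), cd4)
        p_death = np.clip(0.05 - 0.0002 * (cd4 - 200), 0.0, 1.0)
        alive &= rng.uniform(size=n_sim) > np.where(alive, p_death, 0.0)
        a_prev = a
    return alive.mean()   # counterfactual survival probability under rule d

switch_now = lambda t, cd4: np.ones(len(cd4))
never_switch = lambda t, cd4: np.zeros(len(cd4))
print(g_comp_survival(switch_now) > g_comp_survival(never_switch))  # True
```

In practice the conditional laws would be replaced by fitted estimates of $Q_{0,L(t)}$, which is exactly where parametric G-computation estimators acquire their vulnerability to model misspecification.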
Let X denote a random variable with probability distribution $P_{X,0}$, which includes as a component the process $(Y_d(t) : d \in \mathcal{D}, t \in \tau)$. The above-defined causal quantities can now be defined as a parameter of $P_0$. For example, if V is the empty set and the causal parameter of interest is a vector of coefficients $\beta_0$ in a logistic MSM, then we have $\beta_0 = \beta(P_0)$. The estimand $\beta(P_0)$ solves the equation
$$0 = \sum_{d \in \mathcal{D}} \sum_{t \in \tau} h(d, t)\, \phi(d, t) \left( E_0 Y_d(t) - m(d, t \mid \beta(P_0)) \right),$$
where each mean $E_0 Y_d(t)$ is evaluated under the G-computation formula (4). The causal identifiability assumptions put no restrictions on the probability distribution $P_0$ so that our statistical model $\mathcal{M}$ is unchanged, with the exception that we now also assume positivity (3). The statistical target parameter is now defined as a mapping $\Psi$ that maps a probability distribution P of O into a vector of parameter values $\Psi(P)$.
The statistical estimation problem is now defined: We observe n i.i.d. copies of $O \sim P_0$ and we want to estimate $\psi_0 = \Psi(P_0)$ for a defined target parameter mapping $\Psi$. For this estimation problem, the causal model plays no further role – even when one does not believe any of the causal assumptions, one might still argue that the statistical parameter represents an effect measure of interest controlling for all the measured confounders.
3 Pooled TMLE of working MSM for dynamic treatments and time-dependent outcome process
The TMLE algorithm starts out with defining the target parameter as a function $\Psi(\bar{Q})$ of a part $\bar{Q}$ of the likelihood that is easier to estimate than the whole likelihood Q. It requires the derivation of the efficient influence curve $D^*(P)$, which can also be represented as $D^*(\bar{Q}, g)$. Subsequently, it defines a loss function $L(\bar{Q})$ for $\bar{Q}$ and a submodel $\{\bar{Q}_\epsilon : \epsilon\}$ through $\bar{Q}$ at $\epsilon = 0$, indexed by the intervention mechanism g, chosen so that the generalized score $\frac{d}{d\epsilon} L(\bar{Q}_\epsilon) \big|_{\epsilon = 0}$ spans the efficient influence curve $D^*(\bar{Q}, g)$. Given these choices, it remains to define the updating algorithm, which simply uses the submodel through the initial estimator $\bar{Q}_n$ to determine the update by fitting $\epsilon$ with minimum loss based estimation (MLE), and this updating step is iterated till convergence, at which point the MLE of $\epsilon$ equals 0. By the fact that an MLE solves its score equation, it then follows that the final update $\bar{Q}_n^*$ also solves the efficient influence curve equation $0 = \sum_i D^*(\bar{Q}_n^*, g_n)(O_i)$, which provides the foundation for its asymptotic linearity and efficiency. The remainder of this section presents each of these steps in detail.
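The targeting step can be illustrated in the simplest possible case: a single time point, a known treatment mechanism, and a deliberately crude initial outcome regression. The submodel adds $\epsilon$ times a "clever covariate" on the logit scale and fits $\epsilon$ by MLE; the final estimate is the plug-in mean of the updated fit. Everything below (the data generating process and all names) is a hypothetical sketch, not the longitudinal TMLE of this paper.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 4000

expit = lambda x: 1 / (1 + np.exp(-x))
logit = lambda p: np.log(p / (1 - p))

# Hypothetical single-time-point data with confounding by L.
L = rng.normal(size=n)
g1 = expit(0.5 * L)                  # known treatment mechanism P(A=1|L)
A = rng.binomial(1, g1)
Y = rng.binomial(1, expit(-0.5 + A + L))

# Deliberately crude initial outcome regression (a constant): the
# targeting step must do all the work of removing confounding bias.
Qbar = np.clip(np.full(n, Y.mean()), 0.01, 0.99)
H = (A == 1) / g1                    # "clever covariate" for E[Y_1]

# Fit the fluctuation logit(Qbar_eps) = logit(Qbar) + eps * H by
# Newton-Raphson MLE (1-D logistic regression with offset logit(Qbar)).
eps = 0.0
for _ in range(25):
    Qeps = expit(logit(Qbar) + eps * H)
    score = np.sum(H * (Y - Qeps))
    info = np.sum(H ** 2 * Qeps * (1 - Qeps))
    eps += score / info

# Substitution (plug-in) step: evaluate the updated fit at A = 1 and average.
psi_tmle = expit(logit(Qbar) + eps / g1).mean()
print(round(psi_tmle, 2))
```

Because the fitted $\epsilon$ sets the empirical mean of $H (Y - \bar{Q}^*_\epsilon)$ to zero, the plug-in estimate solves the relevant component of the efficient influence curve equation, which is what delivers consistency here despite the misspecified initial fit.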
An estimator of $\psi_0$ is efficient among the class of regular estimators if and only if it is asymptotically linear with influence curve equal to the efficient influence curve $D^*(P_0)$. The efficient influence curve can thus be used as an ingredient for the construction of an efficient estimator. One approach is to represent the efficient influence curve as an estimating function $D^*(\bar{Q}, g)(O)$ and define an estimator as the solution of $0 = \sum_i D^*(\bar{Q}_n, g_n)(O_i)$, given initial estimators $(\bar{Q}_n, g_n)$. This is referred to as the estimating equation methodology for construction of locally efficient estimators. Here, we instead use the efficient influence curve to define a targeted maximum likelihood (substitution) estimator that, as a by-product of the procedure, satisfies $0 = \sum_i D^*(\bar{Q}_n^*, g_n)(O_i)$ and thus also solves the efficient influence curve estimating equation. Under regularity conditions, one can now establish that, if $(\bar{Q}_n^*, g_n)$ consistently estimates $(\bar{Q}_0, g_0)$, then $\Psi(\bar{Q}_n^*)$ is asymptotically linear with influence curve equal to the efficient influence curve, so that $\Psi(\bar{Q}_n^*)$ is asymptotically efficient. In addition, robustness properties of the efficient influence curve are naturally inherited by the TMLE.
Robins [13, 29] and Bang and Robins  reformulate the statistical target parameter and corresponding efficient influence curve for longitudinal MSMs on the intervention-specific mean as a series of iterated conditional expectations. For completeness, and to generalize to dynamic marginal structural working models possibly conditional on baseline covariates, as well as to general functions of the intervention-specific mean across a user-supplied class of interventions, we present this reformulation of the statistical target parameter below. The corresponding efficient influence curve is given in Appendix B. We will use the common notation for the expectation of a function with respect to P.
3.1 Reformulation of the statistical target parameter in terms of iteratively defined conditional means
For the case in Section 2.4 we defined as (5) where . Thus, only depends on P through and . Therefore, we will also refer to the statistical target parameter as where we redefine . For each given t, we can use the following recursive definition of : for we have where we define . This defines as an iteratively defined conditional mean .
To obtain we simply put , combined with the marginal distribution of , into the above representation . As mentioned in the previous section, we have that solves the score equations given by where we defined . The TMLE for the linear working model using the squared error loss function is obtained by simply redefining .
In general, the above shows that we can represent as a function , where and we have an explicit representation of the derivative equation corresponding with f.
3.2 Estimation of intervention mechanism
The log-likelihood loss function for is . Specifically, we can factorize the likelihood as where represents the treatment mechanism and represents the censoring mechanism. Both mechanisms can be estimated separately with a log-likelihood based logistic regression estimator, either according to parametric models, or preferably using the state of the art in machine learning. In particular, we can use the log-likelihood based super learner based on a library of candidate machine learning algorithms, which uses cross-validation to determine the best performing weighted combination of the candidate machine learning algorithms . Use of such aggressive data-adaptive algorithms is recommended in order to ensure consistency of .
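As a minimal illustration of fitting one factor of the intervention mechanism, the sketch below fits a logistic regression by Newton-Raphson in place of the recommended super learner; the data generating values (a single baseline covariate and treatment node) are purely hypothetical.

```python
import numpy as np

def fit_logistic(X, y, n_iter=25):
    """Minimal logistic regression fit by Newton-Raphson (stand-in for super learner)."""
    X1 = np.column_stack([np.ones(len(y)), X])  # add intercept column
    beta = np.zeros(X1.shape[1])
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-X1 @ beta))
        wts = p * (1.0 - p)
        # Newton step: beta += (X' W X)^{-1} X' (y - p)
        beta += np.linalg.solve((X1 * wts[:, None]).T @ X1 + 1e-8 * np.eye(X1.shape[1]),
                                X1.T @ (y - p))
    return beta

def predict_prob(beta, X):
    X1 = np.column_stack([np.ones(X.shape[0]), X])
    return 1.0 / (1.0 + np.exp(-X1 @ beta))

# hypothetical toy data: treatment A0 depends on a baseline covariate W
rng = np.random.default_rng(0)
n = 2000
W = rng.normal(size=n)
A0 = rng.binomial(1, 1.0 / (1.0 + np.exp(-(0.5 * W - 0.2))))
beta = fit_logistic(W.reshape(-1, 1), A0)
g_hat = predict_prob(beta, W.reshape(-1, 1))   # estimate of P(A0 = 1 | W)
g_obs = np.where(A0 == 1, g_hat, 1.0 - g_hat)  # conditional density of the observed A0
```

With multiple treatment and censoring nodes, each conditional distribution would be fit in this way (or with a data-adaptive library) and the fitted probabilities multiplied cumulatively over time.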
If there are certain variables among the covariates that are known to be instrumental variables (variables that affect future Y nodes only via their effects on ), then these variables should be excluded from our estimates of in the TMLE procedure. In that case, our estimate of the conditional distribution of is in fact not estimating the conditional distribution of given its parents; however, for simplicity we do not make this explicit in our notation.
3.3 Loss functions and initial estimator of
We will alternate notation and . Recall that depends on Q through , and . Note is a function of , , , . We will use the following loss function for : This is an application of the log-likelihood loss function for the conditional mean of given past covariates and given that past treatment has been assigned according to rule d. For example, fitting a parametric logistic regression model of on past covariates among subjects with would minimize the empirical mean of this loss function over the unknown parameters of the logistic regression model. Alternatively, one could use loss-based machine learning algorithms, such as loss-based super learning, with this loss function.
In this loss function, the outcome is treated as known. In implementation of our estimator, it will be replaced by an estimate; we thus refer to as a nuisance parameter in this loss function. The collection of loss functions from implies a sequential regression procedure where one starts at and sequentially fits for . We describe this procedure in greater detail in the next subsection, for a sum-loss function that sums the above loss function over a collection of rules .
By summing over , the time points t, and , we obtain the loss function for the whole .
We will use the log-likelihood loss as loss function for the distribution of , but this loss will play no role since we will estimate with the empirical distribution function . To conclude, we have presented a loss function for all components that our target parameter depends on, and the sum-loss function is a valid loss function for as a whole.
3.4 Non-targeted substitution estimator
These loss functions imply a sequential regression methodology for fitting each of the required components of . These initial fits can then be used to construct a non-targeted plug-in estimator of the target parameter . As noted, we estimate the marginal distribution of with the empirical distribution. We now describe how to obtain an estimator , , , for any given . We define for all d, and recall that is the regression of on and . This latter regression can be carried out conditional on , stratifying only on not being censored through time (i.e. ). The resulting fit for all values can then be evaluated at . In this manner, if certain rules have little support, one can still obtain an initial estimator that smoothes across all observations.
Given the regression fit , for a , we regress onto and evaluate it at , giving us . This is carried out for each , giving us for each . Again, given this regression , we regress this on , and evaluate it at , giving us . We carry this out for each , giving us , for each . This process is iterated until we obtain an estimator of for each . Since this process is carried out for each , this results in an estimator for each and . We denote this estimator of with . Note that a plug-in estimator of is now obtained by regressing onto according to the working marginal structural model using weighted logistic regression based on the pooled sample , , , with weight .
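The sequential regression just described can be sketched for two time points and the static rule "treat at both times." The data generating mechanism and the parametric regression specifications below are hypothetical stand-ins for the recommended machine learning fits; the inner regressions use a quasi-logistic fit because their outcomes are predicted probabilities in [0, 1].

```python
import numpy as np

def fit_qlogistic(X, y, n_iter=40):
    """Quasi-logistic regression; the outcome y may be probabilities in [0, 1]."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta = np.zeros(X1.shape[1])
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-X1 @ beta))
        hess = (X1 * (p * (1 - p))[:, None]).T @ X1 + 1e-8 * np.eye(X1.shape[1])
        beta += np.linalg.solve(hess, X1.T @ (y - p))
    return beta

def expit_pred(beta, X):
    X1 = np.column_stack([np.ones(X.shape[0]), X])
    return 1.0 / (1.0 + np.exp(-X1 @ beta))

# hypothetical longitudinal data: L0, A0, L1, A1, Y
rng = np.random.default_rng(1)
n = 5000
L0 = rng.normal(size=n)
A0 = rng.binomial(1, 0.5, size=n)
L1 = L0 + 0.5 * A0 + rng.normal(size=n)
A1 = rng.binomial(1, 0.5, size=n)
Y = rng.binomial(1, 1.0 / (1.0 + np.exp(-(-1 + 0.3 * L1 - 0.5 * A0 - 0.5 * A1))))

# step 1: regress Y on the full past, then evaluate at the rule (a0, a1) = (1, 1)
b2 = fit_qlogistic(np.column_stack([L0, A0, L1, A1]), Y)
Q2 = expit_pred(b2, np.column_stack([L0, np.ones(n), L1, np.ones(n)]))
# step 2: regress that prediction on the earlier past, evaluate at a0 = 1
b1 = fit_qlogistic(np.column_stack([L0, A0]), Q2)
Q1 = expit_pred(b1, np.column_stack([L0, np.ones(n)]))
# step 3: average over the empirical distribution of the baseline covariate
psi_hat = Q1.mean()
```

Note that each regression is fit on all (uncensored) observations with the treatment history included as a covariate, and only then evaluated at the rule's treatment values, so the fit smooths across observations even when few subjects followed the rule exactly.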
The pooled TMLE presented below utilizes this same sequential regression algorithm and makes use of these initial fits of . In order to provide a consistent initial estimator of and thereby improve the efficiency of the TMLE, use of an aggressive data-adaptive algorithm such as super learning  when generating the initial regression fits is recommended. These initial fits are then updated to remove bias in a series of targeting steps that rely on the fit of . The updating steps involve submodels whose score spans the efficient influence curve.
3.5 Loss function and least favorable submodel that span the efficient influence curve
Recall that we use the notation for the cumulative product of conditional intervention distributions. Consider the submodel with parameter defined by This parameter is of the same dimension as and . This defines a submodel with parameter through . Note that This shows that where is the efficient influence curve as presented in Corollary 1 in Appendix B, and we define giving In other words, the sum-loss function and submodel through generate the component of the efficient influence curve .
Consider also a submodel of with score , but this submodel and loss will play no role in the TMLE algorithm since we will estimate with its NPMLE, the empirical distribution of , , so that the MLE of will be equal to zero. This defines our submodel . The sum-loss function and this submodel satisfy the condition that the generalized score spans the efficient influence curve: (6)
3.6 Pooled TMLE
We now describe the TMLE algorithm based on the above choices of (1) the representation of as , (2) the loss function for , and (3) the least favorable submodels through at for fluctuating these parameters . We utilize the same sequential regression approach described in Section 3.4, but now incorporate sequential targeted updating of the initial regression fits. We assume an estimator of . We first specify where in the algorithm updating occurs and then describe the updating process.
Recall that we define for all d and that is the regression of on . For any given , the initial estimator is first updated to using a logistic regression fit of our least favorable submodels, as described below. For a , we then regress the updated regression fit onto , and evaluate it at , giving us . This is carried out for each , giving us for each . The regressions are then updated for each , as described below, giving us for each . For a , we then regress the updated regression fit on and evaluate it at , giving us . We again carry this out for each , giving us for each and again update the resulting regressions, giving us , for each . This process is iterated until we obtain an updated estimator for each . Since this process is carried out for each , this results in an estimator for each and . We denote this estimator of with .
The updating steps are implemented as follows: for each , and for to , we compute and compute the corresponding update , for all . Note that Thus can be obtained by fitting a logistic regression of the outcome with offset on multivariate covariate using a data set pooled across (consisting of observations).
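A one-dimensional version of this offset update can be sketched as follows (the paper's fluctuation parameter is multivariate; the clever covariate values and the residual bias in the initial fit below are hypothetical):

```python
import numpy as np

def fit_epsilon(y, offset, H, n_iter=30):
    """Fit logit Qbar* = offset + eps * H by Newton-Raphson, with no intercept."""
    eps = 0.0
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-(offset + eps * H)))
        score = np.sum(H * (y - p))
        info = np.sum(H ** 2 * p * (1 - p)) + 1e-12
        eps += score / info
    return eps

rng = np.random.default_rng(2)
n = 3000
Q_init = rng.uniform(0.1, 0.9, size=n)                 # initial (possibly biased) fit
H = rng.uniform(0.5, 3.0, size=n)                      # hypothetical clever covariate h/g
Y = rng.binomial(1, np.clip(Q_init + 0.05, 0.0, 1.0))  # outcome with residual bias
offset = np.log(Q_init / (1.0 - Q_init))               # logit of the initial fit
eps = fit_epsilon(Y, offset, H)
Q_star = 1.0 / (1.0 + np.exp(-(offset + eps * H)))
# by construction, the update solves the corresponding weighted score equation
score_at_solution = np.sum(H * (Y - Q_star))
```

The same fit can be obtained by logistic regression of the outcome on the clever covariate with the logit of the initial fit as offset and no intercept, pooled across rules as described above.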
This defines the TMLE . In particular, is the TMLE of . This defines now the TMLE of , where is the empirical distribution of .
The TMLE of is the plug-in estimator corresponding with and : This plug-in estimator of is obtained by regressing onto according to the marginal structural working model in the pooled sample , , , using weights .
An alternative pooled TMLE that only fits a single to compute the update is described in Appendix C.
3.7 Statistical inference for pooled TMLE
By construction, the TMLE solves the efficient influence curve equation , thereby making it a double robust locally efficient substitution estimator under regularity conditions, and positivity (3) (van der Laan , theorem 8.5, appendix A.18). Here, we provide standard error estimates and thereby confidence intervals for the case that is a maximum likelihood estimator for using a correctly specified semiparametric model for .
Specifically, if is a maximum likelihood estimator of according to a correctly specified semiparametric model for , and converges to some possibly misspecified , then under regularity conditions the TMLE is asymptotically linear with an influence curve given by minus its projection onto the tangent space of this semiparametric model for . As a consequence, the asymptotic variance of is greater than or equal to, in the positive semi-definite sense, the covariance matrix . A consistent estimator of this asymptotic variance is given by As a consequence, is an asymptotically conservative 95% confidence interval for , and we can also use this multivariate normal limit result, , to construct a simultaneous confidence interval for and to test null hypotheses about . This variance estimator treats weight function h as known. If h is estimated, then this variance estimator still provides valid statistical inference for the statistical target parameter defined by the estimated h.
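A minimal sketch of this variance estimator follows, given a matrix whose rows are the estimated influence curve evaluated at each observation; the influence curve values and point estimates below are simulated placeholders.

```python
import numpy as np

def ic_wald_ci(ic, psi_hat, z=1.96):
    """Wald-type confidence intervals from an estimated influence curve matrix.

    ic has one row per observation and one column per parameter component."""
    ic = np.asarray(ic)
    n = ic.shape[0]
    cov = np.cov(ic, rowvar=False) / n  # sample covariance of the IC, divided by n
    se = np.sqrt(np.diag(cov))
    return psi_hat - z * se, psi_hat + z * se

rng = np.random.default_rng(3)
ic = rng.normal(scale=2.0, size=(400, 2))  # hypothetical estimated IC, 2 parameters
psi_hat = np.array([0.1, -0.3])
lo, hi = ic_wald_ci(ic, psi_hat)
```

The full covariance matrix returned by `np.cov` can likewise be used for simultaneous confidence regions or Wald tests of joint hypotheses about the parameter vector.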
In the case that is a data-adaptive estimator converging to , we suggest (without proof), that this variance estimator will still provide an asymptotically conservative confidence interval under regularity conditions. However, ideally the data-adaptive estimator should also be targeted . An approach to valid inference in the case where is inconsistent but is consistent is also discussed in van der Laan ; however, it remains to be generalized to the parameters in this paper.
4 Implementation of the pooled TMLE
The previous section reformulated the statistical parameter in terms of iteratively defined conditional means and described a pooled TMLE for this representation. In this section, we illustrate notation and implementation of this TMLE to estimate the parameters of a marginal structural working model on counterfactual survival over time.
4.1 The statistical estimation problem
We continue our motivating example, in which the goal is to learn the effect of switch time on survival. For illustration, focus on the two time point case where . Let the observed data consist of n i.i.d. copies of . Let , where is an indicator of death by time t, and is CD4 count at time t. Assume all subjects are alive at baseline (). As above, is an indicator of switch to second line by time t. We assume no right censoring so that all subjects are followed until death or the end of the study (for convenience define variable values after death as equal to their last observed value). We specify a NPSEM such that each variable may be a function of all variables that precede it and an independent error, and assume the corresponding non-parametric statistical model for .
Define the set of treatment rules of interest as the set of all possible switch times (where 2 corresponds to no switch). Each rule d implies a single vector ; we use to refer to the value implied by rule d, and to refer to the switching time implied by rule d. We specify the following marginal structural working model for counterfactual probability of death by time t under rule d: (7)The target causal parameter is defined as the projection of onto according to eq. (1).
Under the sequential randomization (2) and positivity (3) assumptions, . The target statistical parameter is defined as the projection of , onto the marginal structural working model , according to eq. (5) with .
4.1.1 Reformulation of the statistical target parameter
Note that for rule d (denoted ) can be expressed in terms of iteratively defined conditional means: while (denoted ) equals . The statistical target parameter is defined by plugging , and the marginal distribution of into eq. (5).
4.2 Estimator implementation
We begin by describing implementation of a simple plug-in estimator of .
4.2.1 Non-targeted substitution estimator
For each rule of interest , corresponding to each possible switch time, generate a vector of length n for :
Fit a logistic regression of on and generate a predicted value for each subject by evaluating this regression fit at . Note , so the regression need only be fit and evaluated among subjects who remain alive at time 1. This gives a vector of length n.
Fit a logistic regression of the predicted values generated in the previous step on . Generate a new predicted value for each subject by evaluating this regression fit at . This gives a vector of length n.
For each rule of interest generate a vector of length n for : Fit a logistic regression of on and generate a predicted value for each subject by evaluating this regression fit at .
The previous steps generated . Stack these vectors to give a single vector with length equal to the number of subjects n times the number of rules times the number of time points . Fit a pooled logistic regression of on according to model (eq. 7), with weights given by (here equal to 1). This gives an estimator of the target parameter .
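This final projection step, a pooled weighted quasi-logistic regression of the stacked predictions on functions of rule and time, can be sketched as below. The working-model form (intercept, time, and switch time) and the stacked prediction values are hypothetical stand-ins for eq. (7) and the output of the preceding steps.

```python
import numpy as np

def fit_wqlogistic(X, y, w, n_iter=40):
    """Weighted quasi-logistic regression (y in [0, 1], weights w)."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta = np.zeros(X1.shape[1])
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-X1 @ beta))
        hess = (X1 * (w * p * (1 - p))[:, None]).T @ X1 + 1e-8 * np.eye(X1.shape[1])
        beta += np.linalg.solve(hess, X1.T @ (w * (y - p)))
    return beta

# one row per (subject, rule, time): covariates are time t and the rule's switch time s
rng = np.random.default_rng(4)
n, switch_times, times = 500, (0.0, 1.0, 2.0), (1.0, 2.0)
rows = [(t, s) for _ in range(n) for s in switch_times for t in times]
t_col = np.array([r[0] for r in rows])
s_col = np.array([r[1] for r in rows])
# hypothetical stacked predictions from the sequential regressions
truth = 1.0 / (1.0 + np.exp(-(-1.0 + 0.4 * t_col - 0.3 * s_col)))
Qbar = np.clip(truth + rng.normal(scale=0.02, size=truth.size), 0.001, 0.999)
w = np.ones_like(Qbar)  # weight function h(d, t), here constant
beta = fit_wqlogistic(np.column_stack([t_col, s_col]), Qbar, w)
```

The same projection is reused by the pooled TMLE, applied to the targeted (updated) predictions instead of the initial ones.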
We now describe how the pooled TMLE modifies this algorithm to update the initial estimator . In the following section, we compare the pooled TMLE to this non-targeted substitution estimator and with other available estimators.
4.2.2 Pooled TMLE
Estimate and . Denote these estimators and , respectively, and let denote their product. In our example, this step involves estimating the conditional probability of switching at time 0 given baseline CD4 count, and estimating the conditional probability of switching at time 1, given that a subject did not switch at time 0, had not died by time 1, and given CD4 counts at times 0 and 1.
Generate a vector of length for , :
Fit a logistic regression of on . Generate a predicted value for each subject and each by evaluating this regression fit at . Note that , so the regression need only be fit and evaluated among subjects who remain alive at time 1. This gives a vector of initial values of length .
For each subject, , create a vector consisting of one copy of for each . Stack these copies to create a single vector of length , denoted .
For each subject and each , create a new multidimensional weighted covariate:
In our example, , and equals 1, t, and for the derivative taken with respect to and , respectively. The following matrix would thus be generated for each subject i, with rows corresponding to switch at time 0, time 1, or do not switch:
Stack these matrices to create a matrix with rows and one column for each component of (here, ).
Among those subjects still alive at the previous time point (), fit a pooled logistic regression of (the vector) on the weighted covariates created in the previous step, suppressing the intercept and using as offset , the logit of the initial predicted values for and . This gives a fit for multivariate . Denote this fit .
Generate by evaluating the logistic regression fit in the previous step at each among those subjects for whom . For subject i and rule d, evaluate
For subjects with , . This gives an updated vector of length .
Generate a vector of length for , :
Fit a logistic regression of (generated in the previous step) on . Generate a predicted value for each subject and each by evaluating this regression fit at . This gives a vector of initial values of length .
For each subject and each , create the multidimensional weighted covariate as above, now for :
The following matrix would thus be generated for each subject i, with rows corresponding to switch at time 0, time 1, or do not switch:
Stack these matrices to create a matrix with rows and one column for each dimension of .
Fit a pooled logistic regression of (the updated fit generated in step 2) on these weighted covariates, suppressing the intercept and using as offset , the logit of the initial predicted values for and . This gives a fit for multivariate . Denote this fit .
Generate by evaluating the logistic regression fit in the previous step at each . For subject i and rule d, evaluate
This gives an updated vector of length .
Generate a vector of length for , :
Fit a logistic regression of on . Generate a predicted value for each subject and each by evaluating this regression fit at . This gives a vector of initial values of length .
For each subject, , create a vector consisting of one copy of for each . Stack these copies to create a single vector of length , denoted .
For each subject and each , create a new multidimensional weighted covariate, for :
The following matrix would thus be generated for each subject i, with rows corresponding to switch at time 0, time 1, or do not switch:
Stack these matrices to create a matrix with rows and one column for each component of .
Fit a pooled logistic regression of (the vector) on these weighted covariates, suppressing the intercept and using as offset , the logit of the initial predicted values for and . This gives a fit for multivariate . Denote this fit .
Generate by evaluating the logistic regression fit in the previous step at each . For subject i and rule d, evaluate
This gives an updated vector of length .
The previous steps generated . Stack these vectors to give a single vector with length equal to the number of subjects n times the number of rules times the number of time points . Fit a pooled logistic regression of on according to model (eq. 7), with weights given by (here equal to 1). This gives the pooled TMLE of the target parameter .
5 Comparison with alternative estimators
In this section, we compare the TMLE described above with several alternative estimators available for dynamic MSMs for survival: non-targeted substitution estimators, IPW estimators, and the stratified TMLE of Schnitzer et al. .
5.1 Non-targeted substitution estimator
The consistency of non-targeted substitution estimators of relies entirely on consistent estimation of the Q portions of the observed data likelihood. For estimators based on the parametric G-formula this requires correctly specifying parametric estimators for the conditional distributions of all non-intervention nodes given their parents [7, 11, 12, 41]. For the non-targeted estimator described in Section 3.4, this requires consistently estimating the iteratively defined conditional means .
Correct a priori specification of parametric models for in either case is rarely possible, rendering such non-targeted plug-in estimators susceptible to bias. Further, while machine learning methods, such as Super Learning, can be used to estimate Q non-parametrically, the resulting plug-in estimator has no theory supporting its asymptotic linearity, and will generally be overly biased for the target parameter .
5.2 Inverse probability weighted estimators
The IPW estimator described in van der Laan and Petersen , Robins et al.  is commonly used to estimate the parameters of a dynamic MSM. In brief, this estimator is implemented by creating one data line for each subject i, for each t, and for each d for which . Each data line consists of , any functions of included as covariates in the MSM, and a weight . A weighted logistic regression is then fit, pooling over time and rules d.
The parameter mapping presented here for the TMLE suggests an alternative IPW estimator for dynamic MSM – namely, implement an IPW estimator for (possibly within strata of V if V is discrete) for and project these estimates onto . The IPW estimator employed could be either the standard Horvitz–Thompson estimator: or its bounded counterpart: Robins and Rotnitzky .
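Both the Horvitz-Thompson estimator and its bounded (Hajek-type) counterpart can be sketched in a few lines for a point-treatment rule; the data generating values below are hypothetical.

```python
import numpy as np

def ipw_mean(Y, follows_d, g_cum, bounded=False):
    """IPW estimate of E[Y_d]; `bounded=True` normalizes by the sum of the weights."""
    w = follows_d / g_cum
    if bounded:
        return np.sum(w * Y) / np.sum(w)  # bounded (Hajek-type) version
    return np.mean(w * Y)                 # Horvitz-Thompson version

# hypothetical point-treatment data with baseline confounding by W
rng = np.random.default_rng(5)
n = 20000
W = rng.binomial(1, 0.5, size=n)
gA = 0.3 + 0.4 * W                      # true P(A = 1 | W)
A = rng.binomial(1, gA)
Y = rng.binomial(1, 0.2 + 0.3 * W + 0.2 * A)
g_cum = np.where(A == 1, gA, 1.0 - gA)  # density of the observed treatment
psi_ht = ipw_mean(Y, (A == 1).astype(float), g_cum)
psi_hajek = ipw_mean(Y, (A == 1).astype(float), g_cum, bounded=True)
```

In this example the true counterfactual mean under "always treat" is 0.55, while the unadjusted mean among the treated is higher, reflecting confounding by W; both weighted estimators remove this bias when g is consistently estimated (here, known).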
The consistency of both IPW estimators relies on having a consistent estimator of ; further, even if is estimated consistently, neither will be asymptotically efficient. Both also suffer from the general sensitivity of IPW estimators to strong confounding (data sparsity or near positivity violations). Standard IPW estimators for dynamic MSMs are typically more susceptible to instability in such settings than their counterparts for static MSMs, due to the limited ability to stabilize weights (with stabilizing function restricted to versus ).
5.3 Stratified targeted maximum likelihood estimator
Similar to the pooled TMLE, the stratified TMLE  also relies on reformulating the statistical target parameter in terms of iteratively defined conditional means and updating initial fits of these conditional means using covariates that are functions of an estimator of the intervention mechanism. The stratified and pooled TMLEs differ, however, in several respects. In particular, in the pooled TMLE the update step is accomplished by fitting a single multivariate for each time point t and non-intervention node k, pooling across all rules of interest . In contrast, the stratified TMLE fits a separate for each time point t, non-intervention node k, and rule of interest . Specifically, the stratified TMLE consists of implementing the longitudinal TMLE of van der Laan and Gruber  for separately for each time point t and each rule of interest , and then combining these estimates into a fit of .
Let denote the initial estimator of the iteratively defined conditional means that forms the basis of the pooled and the stratified TMLEs. Let denote the targeted update of for the two estimators (noting that the update is accomplished differently for the pooled and stratified estimators). As long as their corresponding update solves the efficient influence curve equation , both the stratified and the pooled estimators will share the desirable asymptotic properties of a TMLE. Both estimators will be consistent if either or is a consistent estimator of or , respectively. Further, both the pooled and the stratified estimators will be asymptotically efficient if both the initial estimator and the estimator are consistent.
The pooled and stratified estimators may nonetheless differ in both their asymptotic and finite sample performance. The stratified TMLE uses a more saturated model when updating than does the pooled TMLE. Thus if the initial estimator is misspecified, the update of this initial estimator will be more extensive (the update will be further from the initial misspecified estimator) for the stratified, as compared to the pooled, TMLE, resulting asymptotically in a that is closer to the true and thus improving efficiency (recall that the efficiency bound is achieved at ). The extent to which this asymptotic property translates into meaningful finite sample gains in settings where is misspecified remains to be investigated.
On the other hand, in some cases, it is no longer clear how to implement the stratified TMLE. For example, the target parameter may be defined using a MSM , conditional on a subset of baseline covariates V. If V is discrete with adequate finite sample support at each value, the stratified TMLE can be applied by estimating within each stratum v. When V is continuous, however, or has levels which are not represented in a given finite sample, such an approach will break down for many choices of weight function . Similarly, whenever there are some rules with no support in a given finite sample, no data will be available to fit for some rules of interest, and the key update step will no longer be possible using the stratified TMLE. Note that such lack of support in finite samples can occur even when the assumption of positivity in the observed data distribution (3) is satisfied.
In cases where V is discrete and there is support for some but not all rules within each stratum of V, one option is to define a stratified quasi-TMLE using the initial fit for those rules, time points, and strata of V where no data are available to fit . This quasi-TMLE, implemented in the simulations below, remains defined even when not all rules of interest are supported within each stratum of V in a given sample. However, in such cases, the initial estimator is only partially updated, and thus may no longer solve the efficient influence curve equation , even if is a consistent estimator of . If the initial estimator is poor (for example, if it is a misspecified parametric model), the resulting estimator of will be biased. In contrast, the pooled TMLE retains the ability to fit and thus update by pooling over rules d. In other words, both estimators rely on the theoretical positivity assumption on (3) for identifiability; however, they may respond differently to practical positivity violations in finite samples.
5.3.1 Numerical illustration
We use a simple simulation to illustrate a setting in which the positivity assumption holds, but many rules of interest have no support in a given finite sample. In this setting, the stratified estimator will be biased if the initial estimator is misspecified. In contrast, the pooled TMLE remains asymptotically linear if is a consistent estimator of . Simulation studies comparing the performance of the pooled and stratified TMLEs as well as IPW estimators under more realistic scenarios are provided in the following section.
We implemented a simulation with observed data consisting of n i.i.d. copies of , where is a baseline covariate, is a binary treatment assigned randomly at seven time points (), and Y is a binary outcome. The data for a given individual i were generated by drawing sequentially from the following distributions: The target parameter was defined as the projection of onto marginal structural working model according to eq. (5), with weight function , equal to the possible values of A, and denoting the treatment level assigned by a given rule d at time t.
Stratified and pooled TMLEs for were implemented using estimators and based on intercept only logistic regressions; thus was an inconsistent estimator of . Table 1 shows estimated bias, bias to standard error ratio, variance, mean squared error, and 95% confidence interval coverage (using the variance estimator described in Section 3.7) based on 500 samples of size . Note that at this sample size many of the 128 rules of interest have no support in the data in a given sample, while the remainder have few observations available to fit in the stratified TMLE. As predicted by its double robust property, the pooled TMLE remains without meaningful bias despite use of a poor initial estimator . In contrast, the stratified TMLE has bias of approximately double the magnitude of its standard error, posing a substantial threat to valid inference. The stratified TMLE also exhibits markedly lower variance than its pooled counterpart, explained by the fact that for those rules d without support in the data, the stratified estimator uses an intercept only model to estimate . Although unbiased, the pooled TMLE provides anti-conservative confidence interval coverage; we return to this point below.
This breakdown of the stratified TMLE in settings with no support for some rules of interest will not occur if the function h is chosen to give a weight of 0 to any rule (or more generally, any combination) without support in the sample. For example, in the illustration above we could have defined and estimated it using the empirical distribution. Unless one is willing to assume that the MSM is correctly specified, however, choice of h changes the target parameter being estimated . Further, even with this choice of weight function, the estimators may still exhibit different finite sample performance in settings with marginal data support. We investigate this possibility further in the following section.
6 Simulation study
In this section, we investigate the relative performance of the pooled TMLE, stratified TMLE, and IPW estimators for the parameters of a marginal structural working model. For each candidate estimator, we report bias, variance, MSE, and 95% confidence interval coverage estimates based on influence curve variance estimators. We note that our influence curve-based estimators assume the weight function is known; if the weight function is estimated, the influence curve should be corrected for this additional estimated component. We investigate two basic data generating processes. Simulation 1 investigates a simple process, in which the effect of the longitudinal treatment (time to switch) is confounded by baseline variables only, the outcome is observed at a single time point, and there is no censoring. Simulation 2 introduces more realistic complexity, designed to resemble the data analysis presented in the following section.
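The reported performance metrics can be computed from replicate estimates and their influence-curve-based standard errors as below; the replicates here are simulated placeholders rather than output from any of the estimators.

```python
import numpy as np

def summarize_estimates(est, se, truth):
    """Bias, variance, MSE, and 95% CI coverage across simulation replicates."""
    est = np.asarray(est)
    se = np.asarray(se)
    bias = est.mean() - truth
    var = est.var(ddof=1)
    mse = np.mean((est - truth) ** 2)
    # proportion of replicates whose Wald interval covers the truth
    cover = np.mean((est - 1.96 * se <= truth) & (truth <= est + 1.96 * se))
    return {"bias": bias, "var": var, "mse": mse, "coverage": cover}

rng = np.random.default_rng(6)
truth = 0.5
est = truth + rng.normal(scale=0.05, size=500)  # hypothetical replicate estimates
se = np.full(500, 0.05)                         # IC-based SEs, here the true sd
out = summarize_estimates(est, se, truth)
```

When the IC-based standard errors are conservative, the reported coverage would exceed the nominal 95%; anti-conservative standard errors show up as coverage below it.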
6.2 Simulation 1: baseline confounding only
6.2.1 Data generating process
We implemented a simulation with observed data consisting of n i.i.d. copies of , where is a baseline covariate, is a binary treatment assigned at time point , and Y is a binary outcome. The data for a given individual were generated by drawing sequentially from the following distributions: In order to investigate the impact of decreasing support in the data (practical violations or near violations of the positivity assumption), we considered two versions of this data generating process, with (Simulation 1a, lower bound on of 0.05) and (Simulation 1b, lower bound on of 0.001).
6.2.2 Target parameter
The target parameter was defined as the projection of onto marginal structural working model according to eq. (5), with equal to the possible values of A, equal to the treatment level assigned by a given rule d at time t, and weight function . In the case that some rules of interest had no support in a given sample, this choice of weight function (when estimated as the empirical proportion of subjects that followed rule d) ensured that the IPW estimator remained defined and that the updated fit used by the stratified TMLE solved the efficient influence curve equation when was a consistent estimator of .
This is a static point treatment problem, and thus a number of additional estimators are available. However, we use this as a special case of longitudinal dynamic MSMs and investigate the relative performance of three estimators described in Section 5: the pooled TMLE, the stratified TMLE, and the standard IPW estimator. All estimators were implemented using two estimators of : an estimator based on a correctly specified model for the conditional distribution of A given and an estimator using an intercept only model. The estimators were bounded from below at 0.001. TMLEs were implemented using two estimators of : an estimator based on main terms logistic regression models using the correct set of parents for a given node as independent variables (in a slight abuse, we refer to these as “correctly specified”) and an estimator using intercept only models. Performance was evaluated across 500 samples of size ; 95% confidence interval coverage was based on the variance estimator described in Section 3.7.
Results for Simulation 1a are shown in Table 2. When both g and Q were based on correctly specified models, all three estimators were unbiased, had similar variance, and achieved close to nominal coverage. Table 2 also demonstrates double robustness: when Q was based on a misspecified model, both TMLEs remained without meaningful bias. In this simulation, both TMLEs continued to achieve close to nominal coverage even when Q was estimated inconsistently. In contrast and as expected, the IPW estimator was substantially biased with poor coverage when g was based on an intercept only model.
Results for Simulation 1b, with both g and Q based on correctly specified models, are shown in Table 3. In this simulation, in which the lower bound on the treatment probabilities is 0.001, the IPW estimator was minimally biased and retained good 95% confidence interval coverage. Interestingly, the performance of the stratified TMLE suffered in this setting, with bias approximately equal to the standard error and 95% confidence interval coverage of 83% and 80% for the two coefficients, respectively. The pooled TMLE remained unbiased and retained good confidence interval coverage.
6.3 Simulation 2: resembling the data analysis
In this simulation, we used a data generating process designed to resemble the data analysis presented in the following section, in which the goal was to investigate the effect of delayed switch to a new antiretroviral regimen on mortality among HIV-infected patients who have failed first line therapy. The data generating process thus contains both baseline and time-dependent confounders of a longitudinal binary treatment (time to switch), a repeated measures binary outcome (survival over time), and informative right censoring due to two causes (database closure and loss to follow up).
6.3.1 Data generating process
We implemented a simulation with observed data consisting of n i.i.d. copies of a longitudinal data structure observed over discrete time points t. Here, W was a non-time-varying baseline covariate (representing baseline age, sex, and disease stage), the time-varying covariate was the most recent measured CD4 count at time t (square root transformed), and an indicator of death by time t was also recorded. The intervention nodes for a given time point t consisted of an indicator of database closure by time t, an indicator of loss to follow up by time t, and an indicator of having switched to second line therapy by time t. In brief, the data for a given individual were generated by first drawing baseline characteristics W, then for each time point t, for as long as the subject remained alive and uncensored:
1. Drawing a time-updated CD4 count given W, prior CD4 counts, and regimen status at the prior time point.
2. Determining censoring due to database closure using a Bernoulli trial with probability dependent on W.
3. If still uncensored, determining censoring due to loss to follow up using a Bernoulli trial with probability dependent on W, prior CD4 count, and regimen status at the prior time point.
4. If still uncensored and not yet switched, determining switching using a Bernoulli trial with probability dependent on W and prior CD4 count.
5. Determining death using a Bernoulli trial with probability dependent on W, prior CD4 counts, and regimen status.
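The sequential steps above can be sketched as follows. Every coefficient, time horizon, and functional form here is a hypothetical placeholder standing in for the true data generating process, which is detailed in the supplementary material.

```python
import math
import random

def expit(x):
    return 1.0 / (1.0 + math.exp(-x))

def draw_subject(rng, K=10):
    """Follow one subject through the steps listed above. All coefficients
    are hypothetical placeholders, not the paper's actual values."""
    W = rng.gauss(0.0, 1.0)                 # baseline covariate (summary)
    cd4, switched = rng.gauss(0.0, 1.0), 0
    history = []
    for t in range(K):
        # step 1: time-updated CD4 given past CD4 and prior regimen status
        cd4 = 0.8 * cd4 + 0.3 * switched + rng.gauss(0.0, 0.5)
        # step 2: censoring due to database closure, dependent on W
        if rng.random() < expit(-4.0 + 0.1 * W):
            return history
        # step 3: loss to follow up, dependent on W, CD4, regimen status
        if rng.random() < expit(-4.0 + 0.1 * W - 0.2 * cd4 - 0.3 * switched):
            return history
        # step 4: switching (only if not yet switched)
        if not switched and rng.random() < expit(-2.0 + 0.1 * W - 0.3 * cd4):
            switched = 1
        # step 5: death, dependent on W, CD4, regimen status
        died = rng.random() < expit(-5.0 + 0.2 * W - 0.4 * cd4 - 0.5 * switched)
        history.append((t, cd4, switched, died))
        if died:
            return history
    return history

history = draw_subject(random.Random(1))
```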
6.3.2 Target parameter
The target parameter was defined as the projection of the counterfactual survival curve for each switch time onto a marginal structural working model according to eq. (5), with the rules of interest consisting of each possible switch time, combined with an intervention to prevent censoring. The weight function depended on the number of unique observed histories compatible with a given rule and on the first time point at which all subjects had either died or been censored, with both quantities estimated by their empirical counterparts. This weight function gave 0 weight to rules without support in a given sample. It further avoided up-weighting specific values proportional to the number of rules (assigned switch times) that they were compatible with.
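The role of an empirical weight function in discarding unsupported rules can be illustrated with a minimal sketch. The counts and the simple proportion used here are hypothetical, and the paper's weight function additionally adjusts for the number of rules each observed history is compatible with.

```python
from collections import Counter

def rule_weights(followed_counts, n):
    """Empirical weight for each switch-time rule d: the proportion of the
    sample observed to follow d, assigning weight 0 to rules with no
    support. Sketch of the idea only."""
    return {d: c / n for d, c in followed_counts.items() if c > 0}

# Hypothetical support per rule (switch time -> number of followers):
counts = Counter({0: 137, 1: 88, 2: 0})
w = rule_weights(counts, 2627)
```

Because rule 2 has no followers in this hypothetical sample, it receives no weight, so neither the IPW estimator nor the stratified TMLE update needs to evaluate an undefined stratum.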
We implemented the pooled TMLE, the stratified TMLE, an IPW estimator based on estimating the counterfactual survival under each rule with the bounded Horvitz–Thompson estimator and projecting the resulting estimates onto the working model (referred to as "Stratified IPW"), and the standard IPW estimator for dynamic MSMs (referred to as the "Standard IPW" estimator).
The intervention rules could be considered static, in that they assign a fixed vector of treatment decisions irrespective of a subject's covariate values. However, when the target parameter is defined using an MSM on survival, a static IPW estimator cannot be implemented in the standard way (fitting a weighted pooled regression of the outcome at time t on observed treatment history) because the full treatment history is not observed for subjects who die before time t. As noted by Picciotto et al., one option, adopted by us here, is to instead define the interventions of interest as dynamic (switch at the assigned time if still alive).
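A compatibility check for the dynamic formulation "switch at time s if still alive" might look as follows; the representation of trajectories as per-interval switch indicators is an assumption made for illustration.

```python
def followed_rule(switch_status, death_time, s):
    """Indicator that an observed trajectory is compatible with the dynamic
    rule 'switch at time s if still alive'. switch_status: observed switch
    indicator per interval; death_time: interval of death (None if alive
    throughout). Illustrative sketch."""
    for t, switched in enumerate(switch_status):
        if death_time is not None and t >= death_time:
            return True            # the rule imposes nothing after death
        if switched != (1 if t >= s else 0):
            return False
    return True
```

Note how a subject who dies before the assigned switch time remains compatible with the rule, which is precisely why the dynamic formulation stays well defined where the static one does not.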
All estimators were implemented using an estimator of g based on a correctly specified parametric model, but bounding the resulting estimates from below at 0.01 in order to ensure that the denominator in the covariate used in the updating step remained bounded away from 0. The TMLEs were implemented using estimators of Q based on main terms logistic regression models using the correct set of parents for a given node as independent variables (not equivalent to use of a correctly specified parametric model to estimate Q). The performance of each estimator was evaluated across 500 samples of size 2,627, corresponding to the sample size in the data analysis. 95% confidence interval coverage was based on the variance estimator described in Section 3.7; calculation of non-parametric bootstrap-based coverage was computationally prohibitive.
Results for Simulation 2 are shown in Table 4. In this simulation, both the pooled and the stratified TMLEs were essentially unbiased for all coefficients, and the two TMLEs had comparable MSEs. Both TMLEs exhibited less than nominal 95% confidence interval coverage when using influence curve-based variance estimators. The anti-conservative performance of the influence curve-based variance estimator is likely due to the presence of practical positivity violations and relatively rare outcomes; the fact that the weight function was treated as known may also make a small contribution. Further work is needed to develop improved diagnostics and variance estimators in these settings.
In contrast, both IPW estimators were substantially biased for the coefficient reflecting the treatment effect, despite use of an estimator of g based on a correctly specified parametric model. Both IPW estimators also showed higher MSE for this coefficient and achieved 95% confidence interval coverage substantially below that of the TMLEs. This finding is consistent with the known susceptibility of IPW estimators to positivity violations and data sparsity, exacerbated by the limited ability, when using a dynamic regime formulation, to choose an effectively stabilizing weight function. Across simulations, the median minimum value of the cumulative treatment/censoring probability used by IPW prior to bounding at 0.01 was 0.000297; 1.55% of these values were less than 0.01. Tables 8 and 10 provide further details on data support and number of events.
7 Data analysis
We analyzed data from the International Epidemiological Databases – Southern Africa in order to investigate the effect of switching to second line therapy on mortality among HIV-infected patients with immunological failure on first line antiretroviral therapy. The data set and clinical question are described in detail in Gsponer et al. In brief, data were drawn from clinical care facilities in Zambia and Malawi, in which HIV-infected patients were followed longitudinally in clinic and data were collected on baseline demographic and clinical variables (sex, age, and baseline disease stage), time-varying CD4 count, and time-varying treatment, summarized here as switch to second line therapy. Death was independently reported. The 2,627 subjects meeting WHO immunological failure criteria were included in the current analysis beginning at time of immunologic failure. Following common practice and prior analysis, time was discretized into 3-month intervals; time-updated CD4 count was coded such that CD4 count for an interval preceded switching decisions in that interval. Data on a subject were censored at time of database closure or after four consecutive intervals without clinical contact.
The data structure and target parameter were identical to those described in Simulation 2, with the baseline covariates consisting of sex, indicators for two levels of a three-level categorical age variable, and disease stage. The analysis was implemented under the causal model assumed for Simulation 2, in particular assuming that monitoring times did not affect the outcome other than via effects on switching. We acknowledge that this assumption may not hold; however, relaxing it introduces a number of additional complications, as described in the Appendix. We implemented the estimators described in Simulation 2: pooled and stratified TMLEs and standard and stratified IPW estimators. Estimators of g and Q were based on main term logistic regression models, analogous to Simulation 2. Given the results in Simulation 2 suggesting the poor performance of the influence curve-based variance estimator, we also estimated the variance using a non-parametric bootstrap.
Results are given in Table 5. The IPW estimates for the effect of switching on mortality are close to zero and non-significant. Both TMLE point estimates suggest a 0.88 relative odds of death per 3-month earlier switch, and all except the stratified TMLE combined with bootstrap-based variance estimation were statistically significant. Such a protective effect of switching is consistent with clinical knowledge. Interestingly, these results appear consistent with those of Simulation 2, which suggested that the IPW estimator was substantially positively biased, underestimating the harm of delayed switch, while both TMLEs performed well in terms of bias. In summary, our results in both the simulation and the data analysis are consistent with the TMLEs controlling for measured confounders more completely than the corresponding IPW estimators.
The poor coverage observed in Simulation 2, despite absence of bias for the TMLEs, suggests that the influence curve-based variance estimators may be systematically underestimating the true variance in this analysis. While the non-parametric bootstrap offers an alternative approach, it is not expected to resolve the challenge of anti-conservative variance estimation in the setting of practical positivity violations. Intuitively, rare treatment/covariate combinations, despite being theoretically possible, may simply not occur in a given finite sample and as a result, the corresponding extreme weights implied by these combinations will not occur. Because the non-parametric bootstrap resamples from the same finite sample, it fails to address the underlying problem. Indeed, the bootstrap-based confidence intervals in the data analysis were slightly smaller than confidence intervals based on the influence curve. Thus in this realistic setting of rare outcomes and moderately strong confounding, our results caution against reliance on either approach to variance estimation, for either IPW or TML estimators, and suggest that additional work developing robust variance estimators in this setting is urgently needed.
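A minimal non-parametric bootstrap standard error, of the kind discussed above, can be sketched as follows; as the text notes, resampling a finite sample cannot recreate rare treatment/covariate combinations absent from it, so this approach can remain anti-conservative under practical positivity violations.

```python
import random
import statistics

def bootstrap_se(data, estimator, B=200, rng=None):
    """Non-parametric bootstrap standard error of an arbitrary estimator.
    Illustrative sketch: each replicate resamples n observations with
    replacement from the same finite sample."""
    rng = rng or random.Random(0)
    n = len(data)
    reps = [
        estimator([data[rng.randrange(n)] for _ in range(n)])
        for _ in range(B)
    ]
    return statistics.stdev(reps)

# Toy check: the SE of a sample mean of 100 Bernoulli(0.5)-like values
# should be near sqrt(0.25 / 100) = 0.05.
toy = [0.0] * 50 + [1.0] * 50
se = bootstrap_se(toy, statistics.fmean)
```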
In addition to the issues raised above, limitations of the analysis include the potential for unmeasured confounding by factors such as unmeasured health status and adherence, as well as bias due to incomplete health reporting, resulting in censoring due to loss to follow up that directly depends on death [43, 44]. These results also do not contradict the previously published results of Gsponer et al., which found a protective effect of switching using a static IPW estimator for a hazard MSM; such an IPW estimator might perform substantially better than the dynamic IPW estimator for a survival MSM implemented here.
In summary, we have presented a pooled TMLE for the parameters of static and dynamic longitudinal MSMs that builds on prior work by Robins [13, 29] and Bang and Robins. We evaluated the performance of this estimator using simulated data and applied it, together with alternatives, in a data analysis. Both theory and simulations suggest settings in which the pooled TMLE offers advantages over alternative estimators. Software implementing this estimator, together with competitors, is included in supplementary files and is publicly available as part of the R library ltmle (http://cran.r-project.org/web/packages/ltmle/).
The pooled TMLE presented in this paper, together with corresponding open source software, provides a new tool for estimation of the parameters of static or dynamic MSMs. It has clear theoretical advantages over available alternatives. Unlike IPW and augmented-IPW estimators, it is a substitution estimator. Unlike IPW estimators, it is double robust and asymptotically efficient depending on the initial estimators of g and Q. Unlike the previously proposed stratified TMLE, it does not require support in the data for every rule of interest and remains defined in the case that the target parameter is defined using a marginal structural working model conditional on a continuous baseline covariate.
In settings where some subset of discrete intervention rules has adequate support in a given sample, an alternative approach is to compare only this subset of rules. However, in many settings smoothing over a large number of poorly supported rules is appropriate. For example, for a set of rules indexed by a continuous or multiple-level ordered variable, smoothing over this variable provides a way to define a causal effect of interest despite inadequate support to estimate the counterfactual outcome under any of these rules individually.
The TMLE presented in this paper was developed for a causal model in which the non-intervention variables may be a function of the entire observed past and the intervention variables may be a function of some subset of the observed past – in other words, for a model in which exclusion restrictions were assumed, if at all, only for the intervention variables. In some cases, a causal model that also restricts the parent set of the non-intervention variables to a subset of the observed past may be appropriate. This smaller model is included in the larger model assumed in the current paper. As a result, while the TMLE developed for the larger model will still be valid, it will no longer be efficient.
Our simulations suggest that both stratified and pooled TMLEs may outperform both the IPW estimator typically used for dynamic MSMs, as well as an alternative "stratified" IPW estimator, in some settings with sparse data/near positivity violations. However, further work is needed to confirm this preliminary observation. Although the theory in this paper was developed for the general case of dynamic MSMs, including models of the time-specific hazard or survival functions, we focused our software implementation, examples, and simulations on MSMs for survival. The practical performance of the pooled TMLE relative to alternative estimators also remains to be investigated for the case of static and dynamic MSMs on the hazard. It also remains to implement and evaluate the relative performance of the alternative pooled TMLE described in Appendix C, in which the updating step pools not only over all rules of interest but also over all time points. Finally, the relative performance of the TMLE compared to double robust efficient estimating equation-based estimators for longitudinal MSM parameters, including those of Robins [13, 29] and Bang and Robins, remains to be evaluated.
Importantly, our simulations and data analysis illustrate the need for improved variance estimators for both TMLE and IPW in settings with moderately strong confounding, multiple time points, and relatively rare outcomes. Improved approaches to variance estimation and valid inference, as well as appropriate diagnostics to warn applied practitioners of settings in which coverage is likely to be poor, are crucial research priorities.
This work was supported by NIH Grant #U01AI069924 (NIAID, NICHD, and NCI) (PIs: Egger and Davies), the Doris Duke Charitable Foundation Grant #2011042 (PI: Petersen), and NIH Grant #R01AI074345-06 (NIAID) (PI: van der Laan).
The efficient influence curve of the statistical target parameter
We have the following theorem for a general parameter .
Theorem 1 Consider a parameter that can be represented as , where . Let . Assume that is such that is pathwise differentiable. Let be the partial derivative of f with respect to at Q. Let be the influence curve of as an estimator of , where is the empirical distribution of , .
Then, the efficient influence curve of the target parameter at P can be represented as follows: Proof: The efficient influence curve of the parameter (assuming a discrete random variable) is given by (appendix A3, van der Laan and Rose). By the delta-method, the efficient influence curve of the target parameter is thus given by (appendix A3, van der Laan and Rose). This completes the proof. □
In order to determine the partial derivative of the function f, the following is useful. Suppose that, as in our examples, for some function M, and suppose that . Then solves the equation . By the implicit function theorem we have that In particular, and We can now apply our general Theorem 1 to the example with and the logistic regression working MSM in which solves the equation with The same equation applies for the linear working MSM but with the other definition of as mentioned above. Define Note that . Thus, and This proves the following corollary [13, 22, 29].
Corollary 1 Consider the target parameter defined by eq. (5). This target parameter is pathwise differentiable at P with efficient influence curve given by The efficient influence curve is double robust. In other words we have that , so that, in particular, if , then . As a consequence, our TMLE will be a consistent estimator of if either is consistent for or is consistent for .
An alternative pooled TMLE that only fits a single to compute the update
The TMLE described in the main text relies on a separate fluctuation for each rule and each time point, resulting in a collection of updated estimators that define the TMLE. A nice feature of this TMLE is that it exists in closed form. The following alternative TMLE relies on fitting only a single fluctuation parameter, but in this case the updating needs to be iterated until convergence. First construct an initial estimator of Q as described above. Now, consider the above-presented submodel through the current fit. Compute the fluctuation parameter estimate, with the nuisance parameters of the loss function estimated using the initial estimator; note that it can be fit with a pooled logistic regression as stated above. This yields an update. In general, at the mth step, given the current estimator, we compute a new fluctuation parameter estimate and the resulting update. This updating process is iterated until the fitted fluctuation parameter is (numerically) zero. The resulting final update is the TMLE of Q. By construction, this TMLE also solves the efficient influence curve equation with arbitrary precision. The TMLE of the target parameter is then computed with the corresponding plug-in estimator, as above. The potential advantage of this alternative TMLE is that it is able to smooth across all time points t and k when computing the update, while the closed form TMLE presented above only smooths over the rules.
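The iterate-until-convergence structure of this alternative TMLE can be sketched abstractly. Here `fit_epsilon` and `update` stand in for the problem-specific pooled logistic fluctuation fit and the corresponding update of Q, which are not spelled out in code here.

```python
def iterate_tmle(Q0, fit_epsilon, update, tol=1e-8, max_iter=100):
    """Iterative targeting: repeatedly fit a single fluctuation parameter
    epsilon and update Q until the fitted epsilon is (numerically) zero.
    fit_epsilon and update are hypothetical problem-specific callables."""
    Q = Q0
    for _ in range(max_iter):
        eps = fit_epsilon(Q)
        if abs(eps) < tol:
            return Q               # converged: Q is the targeted fit
        Q = update(Q, eps)
    return Q

# Toy check on a scalar "Q" whose fitted epsilon is its distance from 1:
Q_star = iterate_tmle(0.0, lambda Q: 1.0 - Q, lambda Q, e: Q + 0.5 * e)
```

At convergence, a fitted epsilon of (numerically) zero is what guarantees that the updated fit solves the efficient influence curve equation with arbitrary precision.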
Supplementary material, Simulation 2
Below we describe the true data generating process used in Simulation 2 in greater detail. We also provide a summary table comparing the support and number of events over time in the simulated data and in the real data set it was designed to resemble.
In our presentation of the simulation below, we have altered our notation slightly from that in the paper to match the notation in the accompanying R code; specifically, the time-varying CD4 count is renamed. Further, the data generating process included a non-monotone monitoring process designed to mimic when subjects come into clinic, have their CD4 counts measured, and have an opportunity to switch regimens. This adds several additional complexities. First, the observed CD4 count is only updated to reflect the true underlying CD4 count process when a patient is seen. Subsequent CD4 values and death are functions of the true underlying value, while the intervention nodes are functions of the observed values only. Because both switching and monitoring are generated only in response to the observed past, however, this time-dependent non-monotone monitoring process is a multivariate instrumental variable, warranting its exclusion from the adjustment set. Further, its inclusion would both be expected to harm efficiency and to introduce positivity violations (for example, subjects not seen in clinic at a given time point have zero probability of switch at that time point). The non-monotone monitoring process, while retained to mimic the data analysis, is thus omitted from the observed data and from the presentation in the main text in order to simplify discussion. Second, in accordance with common practice in clinical cohort data, censoring due to loss to follow up is defined deterministically based on not being seen in clinic for a certain number of consecutive time points. Finally, a subject can only switch treatment in an interval in which they are seen in clinic.
Data generating process
Data were generated for a given individual according to the following process, where the continuous errors were draws from a standard normal distribution and all binary variables were drawn from Bernoulli distributions with the conditional probabilities given below. Data for a given subject were drawn sequentially until censoring due to database closure or loss to follow up occurred, death occurred, or the final time point was generated. Tables 7 and 8 compare the number of deaths in the simulated data (median of 501 samples) and the actual data among patients following a given regime.
Tables 9 and 10 compare the number of patients in the simulated data and actual data who are uncensored and following a given regime. In the data analysis, for example, in the first 3-month interval following immunologic failure (time = 1) there were no patient deaths among the 137 uncensored patients who switched immediately (switch time = 0) and 13 deaths among the 2,285 uncensored patients who did not switch immediately (switch time = 1). In the second 3-month interval following immunologic failure (time = 2), there was one patient death among the 120 uncensored patients who switched immediately (switch time = 0), no patient deaths among the 88 uncensored patients who switched during the first 3-month interval (switch time = 1) and 8 deaths among the 1,962 uncensored patients who did not switch immediately or during the first 3-month interval (switch time = 2).
Gsponer T, Petersen M, Egger M, Phiri S, Maathuis M, Boulle A, et al., and O. Keiser for IeDEA Southern Africa. The causal effect of switching to second line ART in programmes without access to routine viral load monitoring. AIDS 2012;26:57–65.
Robins JM. Analytic methods for estimating HIV treatment and cofactor effects. In: Ostrow DG, Kessler R, editors. Methodological issues of AIDS Mental Health Research. New York: Plenum Publishing, 1993;213–90.
Murphy SA, van der Laan MJ, Robins JM. Marginal mean models for dynamic regimes. J Am Stat Assoc 2001;96:1410–23.
Hernan MA, Lanoy E, Costagliola D, Robins JM. Comparison of dynamic treatment regimes via inverse probability weighting. Basic Clin Pharmacol 2006;98:237–42. [Crossref]
Cain LE, Robins JM, Lanoy E, Logan R, Costagliola D, Hernán MA. When to start treatment? A systematic approach to the comparison of dynamic regimes using observational data. Int J Biostat 2010;6(2):Article 18.
Schomaker M, Egger M, Ndirangu J, Phiri S, Moultrie H, Technau K, et al. When to start antiretroviral therapy in children aged 2–5 years: a collaborative causal modelling analysis of cohort studies from Southern Africa. PLoS Med 2013;10:e1001555.
Robins JM, Hernan MA, Brumback B. Marginal structural models and causal inference in epidemiology. Epidemiology 2000;11:550–60.
Robins JM, Hernan MA. Estimation of the causal effects of time-varying exposures. In: Fitzmaurice G, Davidian M, Verbeke G, Molenberghs G, editors. Advances in longitudinal data analysis. New York: Chapman and Hall/CRC Press, 2009:553–99.
Robins JM. Marginal structural models versus structural nested models as tools for causal inference. In: Halloran ME, Berry D, editors. Statistical models in epidemiology, the environment, and clinical trials. New York: Springer, 2000:95–133.
Robins JM. A new approach to causal inference in mortality studies with sustained exposure periods – application to control of the healthy worker survivor effect. Math Model 1986;7:1393–512. [Crossref]
Taubman SL, Robins JM, Mittleman MA, Hernan MA. Intervening on risk factors for coronary heart disease: an application of the parametric g-formula. Int J Epidemiol 2009;38:1599–611.
Robins JM. Robust estimation in sequentially ignorable missing data and causal inference models. In: Proceedings of the American Statistical Association on Bayesian Statistical Science, 1999, 2000:6–10.
Robins JM, Rotnitzky A. Recovery of information and adjustment for dependent censoring using surrogate markers. In: Nicholas P. Jewell, Klaus Dietz, Vernon T. Farewell, editors, AIDS epidemiology. Boston: Birkhäuser, 1992:297–331.
Robins JM, Rotnitzky A. Comment on the Bickel and Kwon article, "Inference for semiparametric models: some questions and an answer". Stat Sin 2001;11:920–36.
Robins JM, Rotnitzky A, van der Laan MJ. Comment on "On profile likelihood". J Am Stat Assoc 2000;95:431–5.
Rosenblum M, van der Laan MJ. Simple examples of estimating causal effects using targeted maximum likelihood estimation. UC Berkeley Division of Biostatistics Working Paper Series, Working Paper 209. Available at: http://biostats.bepress.com/jhubiostat/paper209, 2011.
Stitelman OM, De Gruttola V, van der Laan MJ. A general implementation of TMLE for longitudinal data applied to causal inference in survival analysis. UC Berkeley Division of Biostatistics Working Paper Series, Working Paper 281. Available at: http://biostats.bepress.com/ucbbiostat/paper281, 2011.
van der Laan MJ, Rose S. Targeted learning: causal inference for observational and experimental data. Berlin/Heidelberg/New York: Springer, 2011. [Web of Science]
van der Laan MJ, Rubin D. Targeted maximum likelihood learning. Int J Biostat 2006;2(1):Article 11.
Robins JM. Marginal structural models. In: 1997 Proceedings of the American Statistical Association, Section on Bayesian Statistical Science, 1998:1–10.
Hernan MA, Brumback B, Robins JM. Marginal structural models to estimate the causal effect of zidovudine on the survival of HIV-positive men. Epidemiology 2000;11:561–70.
Petersen ML, van der Laan MJ, Napravnik S, Eron J, Moore R, Deeks S. Long term consequences of the delay between virologic failure of highly active antiretroviral therapy and regimen modification: a prospective cohort study. AIDS 2008;22:2097–106. [Crossref] [Web of Science]
Neugebauer R, van der Laan M. Nonparametric causal effects based on marginal structural models. J Stat Plann Inference 2007;137:419–34. [Crossref]
Petersen ML, Porter KE, Gruber S, Wang Y, van der Laan MJ. Diagnosing and responding to violations in the positivity assumption. Stat Methods Med Res 2012;21:31–54. [Crossref] [Web of Science] [PubMed]
Robins JM. Commentary on "Using inverse weighting and predictive inference to estimate the effects of time-varying treatments on the discrete-time hazard". Stat Med 2002;21:1663–80.
Scharfstein DO, Rotnitzky A, Robins JM. Adjusting for nonignorable drop-out using semiparametric nonresponse models (with discussion and rejoinder). J Am Stat Assoc 1999;94:1096–1120 (1121–1146).
Schnitzer ME, Moodie EM, van der Laan MJ, Platt RW, Klein MB. Modeling the impact of hepatitis C viral clearance on end-stage liver disease in an HIV co-infected cohort with targeted maximum likelihood estimation. Biometrics 2014;70(1):144–52. [Web of Science] [Crossref]
R Core Team. R: a language and environment for statistical computing. R Foundation for Statistical Computing, Vienna, Austria, 2013. Available at: http://www.R-project.org/. ISBN 3-900051-07-0.
Schwab J, Lendle S, Petersen M, van der Laan M. LTMLE: longitudinal targeted maximum likelihood estimation, 2013. Available at: http://cran.r-project.org/web/packages/ltmle/. R package version 0.9.3.
Pearl J. Causal diagrams for empirical research. Biometrika 1995;82:669–710. [Crossref]
Pearl J. Causality: models, reasoning, and inference. Cambridge: Cambridge University Press, 2000.
Bickel PJ, Klaassen CA, Ritov Y, Wellner JA. Efficient and adaptive estimation for semiparametric models. Baltimore, MD: Johns Hopkins University Press, 1993.
van der Laan MJ, Robins JM. Unified methods for censored longitudinal data and causality. Berlin/Heidelberg/New York: Springer, 2003.
van der Laan MJ, Polley E, Hubbard A. Super learner. Stat Appl Genet Mol Biol 2007;6:Article 25. [Web of Science]
van der Laan MJ. Statistical inference when using data adaptive estimators of nuisance parameters. UC Berkeley Division of Biostatistics Working Paper Series, Working Paper 302, 2012. Available at: http://www.bepress.com/ucbbiostat/paper302.
Young JG, Cain LE, Robins JM, O'Reilly EJ, Hernan MA. Comparative effectiveness of dynamic treatment regimes: an application of the parametric g-formula. Stat Biosci 2011;3:119–43.
Picciotto S, Hernan M, Page J, Young J, Robins J. Structural nested cumulative failure time models to estimate the effects of interventions. J Am Stat Assoc 2012;107:866–900.
Geng EH, Glidden D, Bangsberg DR, Bwana MB, Musinguzi N, Metcalfe J, et al. Causal framework for understanding the effect of losses to follow-up on epidemiologic analyses in clinic based cohorts: the case of HIV-infected patients on antiretroviral therapy in Africa. Am J Epidemiol 2012;175:1080–7. [PubMed] [Web of Science] [Crossref]
Schomaker M, Gsponer T, Estill J, Fox M, Boulle A. Non-ignorable loss to follow-up: correcting mortality estimates based on additional outcome ascertainment. Stat Med 2014;33:129–42.