
Journal of Causal Inference


Online ISSN: 2193-3685

Targeted Maximum Likelihood Estimation for Dynamic and Static Longitudinal Marginal Structural Working Models

Maya Petersen / Joshua Schwab / Susan Gruber / Nello Blaser / Michael Schomaker / Mark van der Laan
Published Online: 2014-06-18 | DOI: https://doi.org/10.1515/jci-2013-0007

Abstract

This paper describes a targeted maximum likelihood estimator (TMLE) for the parameters of longitudinal static and dynamic marginal structural models. We consider a longitudinal data structure consisting of baseline covariates, time-dependent intervention nodes, intermediate time-dependent covariates, and a possibly time-dependent outcome. The intervention nodes at each time point can include a binary treatment as well as a right-censoring indicator. Given a class of dynamic or static interventions, a marginal structural model is used to model the mean of the intervention-specific counterfactual outcome as a function of the intervention, time point, and possibly a subset of baseline covariates. Because the true shape of this function is rarely known, the marginal structural model is used as a working model. The causal quantity of interest is defined as the projection of the true function onto this working model. Iterated conditional expectation double robust estimators for marginal structural model parameters were previously proposed by Robins (2000, 2002) and Bang and Robins (2005). Here we build on this work and present a pooled TMLE for the parameters of marginal structural working models. We compare this pooled estimator to a stratified TMLE (Schnitzer et al. 2014) that is based on estimating the intervention-specific mean separately for each intervention of interest. The performance of the pooled TMLE is compared to the performance of the stratified TMLE and the performance of inverse probability weighted (IPW) estimators using simulations. Concepts are illustrated using an example in which the aim is to estimate the causal effect of delayed switch following immunological failure of first line antiretroviral therapy among HIV-infected patients. Data from the International Epidemiological Databases to Evaluate AIDS, Southern Africa are analyzed to investigate this question using both TML and IPW estimators. 
Our results demonstrate practical advantages of the pooled TMLE over an IPW estimator for working marginal structural models for survival, as well as cases in which the pooled TMLE is superior to its stratified counterpart.

This article offers supplementary material which is provided at the end of the article.

Keywords: dynamic regime; semiparametric statistical model; targeted minimum loss based estimation; confounding; right censoring

1 Introduction

Many studies aim to learn about the causal effects of longitudinal exposures or interventions using data in which these exposures are not randomly assigned. Specifically, consider a study in which baseline covariates, time-varying exposures or treatments, time-varying covariates, and an outcome of interest, such as death, are observed on a sample of subjects followed over time. The exposures of interest can both depend on past covariates and affect future covariates, as well as the outcome. Censoring may also occur, possibly in response to past treatment and covariates. Such data structures are ubiquitous in observational cohort studies. For example, a sample of HIV-infected patients might be followed longitudinally in clinic and data collected on antiretroviral prescriptions, determinants of prescription decisions including CD4+ T cell counts and plasma HIV RNA levels (viral loads), and vital status. Such data structures also occur in randomized trials when the exposure of interest is (non-random) compliance with a randomized exposure or includes non-randomized mediators of an exposure’s effect.

The causal effects of longitudinal exposures (as well as the effects of single time point exposures when the outcome is subject to censoring) can be formally defined by contrasting the distribution of a counterfactual outcome under different “interventions” to set the values of the exposure and censoring variables. For example, the counterfactual survival curve of HIV-infected subjects following immunological failure of antiretroviral therapy might be contrasted under a hypothetical intervention in which all subjects were switched immediately to a new antiretroviral regimen versus an intervention in which all subjects remained on their failing therapy [1]. In the presence of censoring due to losses to follow up, these counterfactuals of interest might be defined under a further intervention to prevent censoring. Interventions such as these, under which all subjects in a population are deterministically assigned the same vector of exposure and censoring decisions (for example, do not switch and remain under follow up) are referred to as “static regimes.”

More generally, counterfactuals can be defined under interventions that assign a treatment or exposure level to each subject at each time point based on that subject’s observed past. For example, counterfactual survival might be compared under interventions to switch all patients to second line antiretroviral therapy the first time their CD4+ T cell count crosses a certain threshold, for some specified set of thresholds [2]. Such subject-responsive treatment strategies have been referred to as individualized treatment rules, adaptive treatment strategies, or “dynamic regimes” (see, for example, Robins [3]; Murphy et al. [4]; Hernan et al. [5]). Additional examples include strategies for deciding when to start antiretroviral therapy [6, 7] and strategies for modifying dose or drug choice based on prior response and adverse effects. Investigation of the effects of such dynamic regimes makes it possible to learn effective strategies for assigning an intervention based on a subject’s past and is thus relevant to any discipline that seeks to learn how best to use past information to make decisions that will optimize future outcomes.

The static and dynamic regimes described above are longitudinal – they involve interventions to set the value of multiple treatment and censoring variables over time. For example, counterfactual survival under no switch to second line therapy corresponds to a subject’s survival under an intervention to prevent a patient from switching at each time point from immunologic failure until death or the end of the study. A time-dependent causal dose–response curve, which plots the mean of the intervention-specific counterfactual outcome at time t as a function of the interventions through time t, can be used to summarize the effects of these longitudinal interventions. For example, a plot of the counterfactual survival probability as a function of time since immunologic failure, for a range of alternative CD4+ T cell count thresholds used to initiate a switch, captures the effect of alternative switching strategies on survival.

Formal causal frameworks provide a tool to establish the conditions under which such causal dose–response curves can be identified from the observed data. Longitudinal static and dynamic regimes are often subject to time-dependent confounding – time-varying variables may confound the effect of future treatments while being affected by past treatment [8]. Traditional approaches to the identification of point treatment effects, which are based on selection of a single set of covariates for regression or stratification-based adjustment, break down when such time-dependent confounding is present. However, the mean counterfactual outcome under a longitudinal static or dynamic regime may still be identified under the appropriate sequential randomization and positivity assumptions (reviewed in Robins and Hernan [9]).

Under these assumptions, causal dose–response curves can be estimated by generating separate estimates of the mean counterfactual outcome for each time point and intervention (or regime) of interest. For example, one could generate separate estimates of the counterfactual survival curve for each CD4-based threshold for switching to second line therapy. In this manner, one obtains fits of the time-dependent causal dose–response curve for each of a range of possible thresholds, which together summarize how the mean counterfactual outcome at time t depends on the choice of threshold.

A number of estimators can be used to estimate intervention-specific mean counterfactual outcomes. These include inverse probability weighted (IPW) estimators (for example, [3, 5, 10]), “G-computation” estimators (typically based on parametric maximum likelihood estimation of the non-intervention components of the data generating process) (for example, [7, 11, 12]), augmented-IPW estimators (for example, [13–16, 31]), and targeted maximum likelihood (or minimum loss) estimators (TMLEs) (for example, [17, 18]). In particular, van der Laan and Gruber [19] combine the targeted maximum likelihood framework [20, 21] with important insights and the iterated conditional expectation estimators established in Robins [3, 29] and Bang and Robins [22].

Both the theoretical validity and the practical utility of these estimators rely, however, on reasonable support for each of the interventions of interest, both in the true data generating distribution and in the sample available for analysis. For example, in order to estimate how survival is affected by the threshold CD4 count used to initiate an antiretroviral treatment switch, a reasonable number of subjects must in fact switch at the time indicated by each threshold of interest. Without such support, estimators of the intervention-specific outcome will be ill-defined or extremely variable. Although one might respond to this challenge by creating coarsened versions of the desired regimes, so that sufficient subjects follow each coarsened version, such a method introduces bias and leaves open the question of how to choose an optimal degree of coarsening.

Since adequate support for every intervention of interest is often not available, Robins [23] introduced marginal structural models (MSMs) that pose parametric or small semiparametric models for the counterfactual conditional mean outcome as a function of the choice of intervention and time. For example, static MSMs have been used to summarize how the counterfactual hazard of death varies as a function of when antiretroviral therapy is initiated [24] and when an antiretroviral regimen is switched [25]. The extrapolation assumptions implicitly defined by non-saturated MSMs make it possible to estimate the coefficients of the model, and thereby the causal dose–response curve, even when few or no subjects follow some interventions of interest.

While MSMs were originally developed for static interventions [8, 10, 23, 24] they naturally generalize to classes of dynamic (or even more generally, stochastic) interventions as shown in van der Laan and Petersen [2] and Robins et al. [26]. Dynamic MSMs have been used, for example, to investigate how counterfactual hazard of death varies as a function of CD4+ T cell count threshold used to initiate antiretroviral therapy [6] or to switch antiretroviral therapy regimens [2]. Because the true shape of the causal dose–response curve is typically unknown, we have suggested that MSMs be used as working models. The target causal coefficients can then be defined by projecting the true causal dose–response curve onto this working model [20, 27].

The coefficients of both static and dynamic MSMs are frequently estimated using IPW estimators [2, 8, 10, 26]. These estimators have a number of attractive qualities: they can be intuitively understood, they are easy to implement, and they provide an influence curve-based approach to standard error estimation. However, IPW estimators also have substantial shortcomings. In particular, they are biased if the treatment mechanism used to construct the weights is estimated poorly (for example, using a misspecified parametric model). Further, IPW estimators are unstable in settings of strong confounding (near or partial positivity violations) and the resulting bias in both point and standard error estimates can result in poor inference (for a review of this issue see Petersen et al. [28]). Dynamic MSMs can exacerbate this problem, as the options for effective weight stabilization are limited [6, 26].

Asymptotically efficient and double robust augmented-IPW estimators of the estimand corresponding to longitudinal static MSM parameters were developed by Robins and Rotnitzky [14], Robins [13], and Robins et al. [16]. These estimators are defined as a solution of an estimating equation, and as a result may be unstable due to failure to respect the global constraints implied by the model and the parameter. Robins [13, 29] and Bang and Robins [22] introduced an alternative double robust estimating equation-based estimator of longitudinal MSM parameters based on the key insight that both the statistical target parameter and the corresponding augmented-IPW estimating function (efficient influence curve) for MSMs on the intervention-specific mean can be represented as a series of iterated conditional expectations. In addition, they proposed a targeted sequential regression method to estimate the nuisance parameters of the augmented-IPW estimating equation. This innovative idea allowed construction of a double robust estimator that relies only on estimation of minimal nuisance parameters beyond the treatment mechanism.

In this paper, we describe a double robust substitution estimator of the parameters of a longitudinal marginal structural working model. The estimator presented incorporates the key insights and prior estimator of Robins [13, 29] and Bang and Robins [22] into the TMLE framework. Specifically, we expand on this prior work in several ways. We propose a TMLE for marginal structural working models for longitudinal dynamic regimes, possibly conditional on pre-treatment covariates. The TMLE described is defined as a substitution estimator rather than as solution to an estimating equation and incorporates data-adaptive/machine learning methods in generating initial fits of the sequential regressions. Finally, we further generalize the TMLE to apply to a larger class of parameters defined as arbitrary functions of intervention-specific means across a user-supplied class of interventions.

TMLEs for the parameters of MSMs for “point treatment” problems, in which adjustment for a single set of covariates known not to be affected by the intervention of interest is sufficient to control for confounding, including history-adjusted MSMs, have been previously described [30, 31]. However, the parameter of a longitudinal MSM on the intervention-specific mean under sequential interventions subject to time-dependent confounding is identified as a distinct, and substantially more complex, estimand than the estimand corresponding to a point treatment MSM, and thus requires distinct estimators. An alternative TMLE for longitudinal static MSMs, which we refer to as a stratified TMLE, was described by Schnitzer et al. [32]. The stratified TMLE uses the longitudinal TMLE of van der Laan and Gruber [19] for the intervention-specific mean to estimate each of a set of static treatments and combines these estimates into a fit of the coefficients of a static longitudinal MSM on both survival and hazard functions. The stratified TMLE [32] resulted in substantially lower standard error estimates than an IPW estimator in an applied data analysis and naturally generalizes to dynamic MSMs. However, it remains vulnerable when there is insufficient support for some interventions of interest. In contrast, the TMLE we describe here pools over the set of dynamic or static interventions of interest as well as optionally over time when updating initial fits of the likelihood. It thus substantially relaxes the degree of data support required to remain an efficient double robust substitution estimator.

In summary, a large class of causal questions can be formally defined using static and dynamic longitudinal MSMs, and the parameters of these models can be identified from non-randomized data under well-studied assumptions. This article describes a TMLE that builds on the work of Robins [13, 29] and Bang and Robins [22] in order to directly target the coefficients of a marginal structural (working) model for a user-supplied class of longitudinal static or dynamic interventions. The theoretical properties of the pooled TMLE are presented, its implementation is reviewed, and its practical performance is compared to alternatives using both simulated and real data. R code [33] implementing the estimator and evaluating it in simulations is provided in online supplementary materials and as an open source R library ltmle [34].

1.1 Organization of paper

In Section 2, we define the observed data and a statistical model for its distribution. We then specify a non-parametric structural equation model for the process assumed to generate the observed data. We define counterfactual outcomes over time based on static or dynamic interventions on multiple treatment and censoring nodes in this system of structural equations. Our target causal quantity is defined using a marginal structural working model on the mean of these intervention-specific counterfactual outcomes at time t. The general case we present includes marginal structural working models on both the counterfactual survival and the hazard. We briefly review the assumptions under which this causal quantity is identified as a parameter of the observed data distribution. The statistical estimation problem is thus defined in terms of the statistical model and statistical target parameter.

Section 3 presents the TMLE defined by (a) representation of the statistical target parameter in terms of an iteratively defined set of conditional mean outcomes, (b) an initial estimator for the intervention mechanism and for these conditional means, (c) a submodel through this initial estimator and a loss function chosen so that the generalized score of the submodel with respect to this loss spans the efficient influence curve, (d) a corresponding updating algorithm that updates the initial estimator and iterates the update until convergence, and (e) final evaluation of the TMLE as a plug-in estimator. We also present corresponding influence curve-based confidence intervals for our target parameter.

Section 4 illustrates the results presented in Section 3 using a simple three time point example and focusing on a marginal structural working model for counterfactual survival probability over time. This example is used to clarify understanding of notation and to provide a step-by-step overview of implementation of the pooled TMLE.

Section 5 compares the pooled TMLE described in this paper with alternative estimators for the parameters of longitudinal dynamic MSMs for survival. We provide a brief overview of the stratified TMLE [32], discuss scenarios in which each estimator may be expected to offer superior performance, and illustrate the breakdown of the stratified TMLE in a finite sample setting in which some interventions of interest have no support. As IPW estimators are currently the most common approach used to fit longitudinal dynamic MSMs, we also discuss two IPW estimators for these parameters.

Section 6 presents a simulation study in which the pooled TMLE is implemented for a marginal structural working model for survival at time t. Its performance is compared to IPW estimators and to the stratified TMLE for a simple data generating process and in a simulation designed to be similar to the data analysis presented in the following section, which includes time-dependent confounding and right censoring.

Section 7 presents the results of a data analysis investigating the effect of switching to second line therapy following immunologic failure of first line therapy using data from HIV-infected patients in the International Epidemiological Databases to Evaluate AIDS (IeDEA), Southern Africa. Throughout the paper, we illustrate notation and concepts using a simplified data structure based on this example.

Appendices contain a derivation of the efficient influence curve, further simulation details, an alternative TMLE, and reference table for notation. In online supplementary files, we present R code that implements the pooled TMLE, the stratified TMLE, and two IPW estimators for a marginal structural working model of survival. A corresponding publicly available R-package, ltmle, was released in May 2013 (http://cran.r-project.org/web/packages/ltmle/).

2 Definition of statistical estimation problem

Consider a longitudinal study in which the observed data structure $O$ on a randomly sampled subject is coded as $O=(L(0),A(0),\ldots,L(K),A(K),L(K+1))$, where $L(0)$ are baseline covariates, $A(t)$ denotes an intervention node at time $t$, and $L(t)$ denotes covariates measured between intervention nodes $A(t-1)$ and $A(t)$. Assume that there is an outcome process $Y(t)\subset L(t)$ for $t=1,\ldots,K+1$, where $L(K+1)=Y(K+1)$ is the final outcome measured after the final treatment $A(K)$. The intervention node $A(t)=(A_1(t),A_2(t))$ has a treatment node $A_1(t)$ and a censoring indicator $A_2(t)$, where $A_2(t)=1$ indicates that the subject is right censored by time $t$. We observe $n$ independent and identically distributed (i.i.d.) copies $O_1,\ldots,O_n$ of $O$, and we will denote the probability distribution of $O$ with $P_{O,0}$, or more simply, as $P_0$. Throughout, we use subscript 0 to denote the true distribution.

Running example. Here and in subsequent sections, we illustrate notation using an example in which $n$ i.i.d. HIV-infected subjects with immunological failure on first line therapy are sampled from some target population. Here $t=0$ denotes time of immunological failure. $L(t)$ denotes time-varying covariates, and includes CD4+ T cell count at time $t$ and $Y(t)$, an indicator of death by time $t$. In addition to baseline values of these time-varying covariates, $L(0)$ includes non-time-varying covariates such as sex. The intervention nodes of interest are $A(t)$, $t=0,\ldots,K$, where $A(t)$ is defined as an indicator of switch to second line therapy by time $t$; in our simplified example, we assume no right censoring. For notational convenience, after a subject dies all variables for that subject are defined as equal to their last observed value.
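As an illustration (not from the paper; all transition probabilities below are hypothetical and chosen only so the code runs), the simplified running-example data structure, with monotone switching, absorbing death, and last-value carry-forward after death, can be simulated as follows:

```python
import numpy as np

def simulate_subject(K=2, rng=None):
    """Simulate one subject's O = (L(0), A(0), ..., L(K), A(K), L(K+1)).

    L(t) holds (CD4 count, death indicator Y(t)); A(t) is an indicator of
    having switched to second line therapy by time t.  All transition
    probabilities are made up, for illustration only.
    """
    rng = rng if rng is not None else np.random.default_rng()
    cd4 = [rng.integers(20, 200)]   # CD4 at immunologic failure (t = 0)
    death = [0]                     # everyone alive at t = 0
    switched = []
    for t in range(K + 1):
        if death[-1] == 1:          # after death, carry values forward
            switched.append(switched[-1] if switched else 0)
            cd4.append(cd4[-1])
            death.append(1)
            continue
        # Switching may depend on the most recent CD4 count; irreversible.
        prev = switched[-1] if switched else 0
        a = 1 if prev == 1 else int(rng.random() < 0.3 + 0.4 * (cd4[-1] < 50))
        switched.append(a)
        # Next CD4 count and survival depend on past CD4 and switching.
        cd4.append(max(0, cd4[-1] + rng.integers(-30, 30) + 40 * a))
        death.append(int(rng.random() < 0.15 - 0.05 * a))
    return {"CD4": cd4, "Y": death, "A": switched}

subject = simulate_subject(K=2, rng=np.random.default_rng(0))
```

With $K=2$ this yields four $L$ nodes ($L(0),\ldots,L(3)$) interleaved with three intervention nodes ($A(0),A(1),A(2)$), matching the coding of $O$ above.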

2.1 Statistical model

We use the notation $\bar{L}(k)=(L(0),\ldots,L(k))$ to denote the history of the time-dependent variable $L$ from $t=0,\ldots,k$. Define the “parents” of a variable $L(k)$, denoted $Pa(L(k))$, as those variables that precede $L(k)$ (i.e., $Pa(L(k))=(\bar{L}(k-1),\bar{A}(k-1))$). Similarly, $\bar{A}(k)=(A(0),\ldots,A(k))$ is used to denote the history of the intervention process and $Pa(A(k))$ to denote a specified subset of the variables that precede $A(k)$ such that the distribution of $A(k)$ given the whole past is equal to the distribution of $A(k)$ given its parents ($Pa(A(k))\subseteq(\bar{L}(k),\bar{A}(k-1))$). Under our causal model, which we introduce below, these parent sets $Pa(L(k))$ and $Pa(A(k))$ correspond to the set of variables that may affect the values taken by $L(k)$ and $A(k)$, respectively.

We use $Q_{L(k),0}$ to denote the conditional distribution of $L(k)$, given $Pa(L(k))$, and $g_{A(k),0}$ to denote the conditional distribution of $A(k)$, given $Pa(A(k))$. We also use the notation $g_{0:k}\equiv\prod_{j=0}^{k}g_{A(j),0}$ and $g_0\equiv g_{0:K}$, and define $Q_{0:k}\equiv\prod_{j=0}^{k}Q_{L(j),0}$ and $Q_0\equiv Q_{0:K+1}$. In our example, $Q_{L(k),0}$ denotes the joint conditional distribution of CD4 count and death at time $k$, given the observed past (including past CD4 count and switching history), and $g_{A(k),0}$ denotes the conditional probability of having switched to second line by time $k$ given the observed past (deterministically equal to one for those time points at which a subject has already switched).

The probability distribution $P_0$ of $O$ can be factorized according to the time-ordering as
$$P_0(O)=\prod_{k=0}^{K+1}P_0\big(L(k)\mid Pa(L(k))\big)\prod_{k=0}^{K}P_0\big(A(k)\mid Pa(A(k))\big)=\prod_{k=0}^{K+1}Q_{L(k),0}(O)\prod_{k=0}^{K}g_{A(k),0}(O)=Q_0(O)\,g_0(O).$$
We consider a statistical model $\mathcal{M}$ for $P_0$ that possibly assumes knowledge of the intervention mechanism $g_0$. For example, the treatment of interest, such as switch time, may be known to be randomized, or to be assigned based on only a subset of the observed past. If $\mathcal{Q}$ is the set of all values for $Q_0$ and $\mathcal{G}$ the set of possible values of $g_0$, then this statistical model can be represented as $\mathcal{M}=\{P=Qg: Q\in\mathcal{Q}, g\in\mathcal{G}\}$. In this statistical model, $\mathcal{Q}$ puts no restrictions on the conditional distributions $Q_{L(k),0}$, $k=0,\ldots,K+1$.

2.2 Causal model and counterfactuals of interest

By specifying a structural causal model [35, 36] or, equivalently, a system of non-parametric structural equations, it is assumed that each component of the observed longitudinal data structure (e.g. $A(k)$ or $L(k)$) is a function of a set of observed parent variables and an unmeasured exogenous error term. Specifically, consider the non-parametric structural equation model (NPSEM) defined by
$$L(k)=f_{L(k)}\big(Pa(L(k)),U_{L(k)}\big),\quad k=0,\ldots,K+1,$$
and
$$A(k)=f_{A(k)}\big(Pa(A(k)),U_{A(k)}\big),\quad k=0,\ldots,K,$$
in terms of a set of deterministic functions $(f_{L(k)}:k=0,\ldots,K+1)$, $(f_{A(k)}:k=0,\ldots,K)$, and a vector of unmeasured random errors or background factors $U=(U_{L(0)},\ldots,U_{L(K+1)},U_{A(0)},\ldots,U_{A(K)})$ [35, 36].

To continue our HIV example, we might specify a causal model in which time-varying CD4 count, death, and the decision to switch all potentially depend on a subject’s entire observed past, as well as on unmeasured factors. Alternatively, if we knew that switching decisions were made only in response to a subject’s most recent CD4 count and baseline covariates, the parent set of $A(k)$ could be restricted to exclude earlier CD4 count values.

This causal model represents a model $\mathcal{M}^F$ for the distribution of $(O,U)$ and provides a parameterization of the distribution of the observed data structure $O$ in terms of the distribution of the random variables $(O,U)$ modeled by the system of structural equations. Let $P_{O,U,0}$ denote the latter distribution. The causal model $\mathcal{M}^F$ encodes knowledge about the process, including both measured and unmeasured variables, that generated the observed data. It also implies a model for the distribution of counterfactual random variables under specific interventions on (or changes to) the observed data generating process. Specifically, a post-intervention (or counterfactual) distribution is defined as the distribution that $O$ would have had under a specified intervention to set the value of the intervention nodes $\bar{A}=(A(0),\ldots,A(K))$.

The intervention of interest might be static, with $f_{A(k)}$, $k=0,\ldots,K$, replaced by some constant for all subjects. For example, an intervention to set $A(k)=0$ for $k=0,\ldots,K$ corresponds to a static intervention to delay switching indefinitely for all subjects. Alternatively, the intervention might be dynamic, with $f_{A(k)}$, $k=0,\ldots,K$, replaced by some specified function $d_k(\bar{L}(k))$ of a subject’s observed covariates. For example, an intervention could set $A(k)$ to 1 the first time a subject’s CD4 count drops below some threshold. As static regimes are a special case of dynamic regimes, in the following sections we define the statistical estimation problem and develop our estimator for the more general dynamic case. Throughout, we use “rule” to refer to a specific intervention, static or dynamic, that sets the values of $\bar{A}$.
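As a sketch (ours, with a hypothetical CD4 threshold), a dynamic rule $d_k$ of this kind is simply a function of the observed history, with a static regime as the special case that ignores the history:

```python
def make_switch_rule(threshold):
    """Return a dynamic rule d_k: switch (and stay switched) the first time
    the subject's CD4 count drops below `threshold`.  The threshold is a
    hypothetical example value; the rule maps the observed covariate and
    switching history to this time point's intervention value A(k)."""
    def d_k(cd4_history, switch_history):
        if switch_history and switch_history[-1] == 1:
            return 1                           # switching is irreversible
        return int(cd4_history[-1] < threshold)
    return d_k

# A static regime ignores the covariate history entirely:
never_switch = lambda cd4_history, switch_history: 0

rule_50 = make_switch_rule(50)
assert rule_50([120], []) == 0               # CD4 above threshold: no switch
assert rule_50([120, 40], [0]) == 1          # CD4 crossed below 50: switch
assert rule_50([120, 40, 90], [0, 1]) == 1   # remain switched thereafter
```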

Given a rule $d$, the counterfactual random variable $L_d=(L(0),L_d(1),\ldots,L_d(K+1))$ is defined by deterministically setting all the $A(k)$ nodes equal to $d_k(\bar{L}(k))$ in the system of structural equations. The probability distribution of this counterfactual $L_d$ is called the post-intervention or counterfactual distribution of $L$ and is denoted with $P_{d,0}$. Causal effects are defined as parameters of a collection of post-intervention distributions under a specified set of rules. For example, we might compare mean counterfactual survival over time under a range of possible switch times.

2.3 Marginal structural working model

Our causal quantity of interest is defined using a marginal structural working model to summarize how the mean counterfactual outcome at time $t$ varies as a function of the intervention rule $d$, time point $t$, and possibly some baseline covariate $V$ that is a function of the collection of all baseline covariates $L(0)$. Specifically, given a class of dynamic treatment rules $\mathcal{D}$, we can define a true time-dependent causal dose–response curve $(E_{P_{d,0}}(Y_d(t)\mid V): d\in\mathcal{D}, t\in\tau)$ for some subset $\tau\subseteq\{1,\ldots,K+1\}$. Note that the choice of $V$ (as well as the choice of $\tau$ and $\mathcal{D}$) depends on the scientific question of interest. In many cases $V$ will be defined as the empty set. In other cases, it may be of interest to estimate how the causal dose–response curve varies depending on the value of some subset of baseline variables.

We specify a working model $\Theta\equiv\{m_\beta:\beta\}$ for this true time-dependent causal dose–response curve. Our causal quantity of interest is then defined as a projection of the true causal dose–response curve onto this working model, which yields a definition $m_{\beta_0}$ representing this projection. For example, if $Y(t)\in[0,1]$, we may use a logistic working model $\operatorname{Logit} m_\beta(d,t,V)=\sum_{j=1}^{J}\beta_j\phi_j(d,t,V)$ for a set of basis functions, and define our causal quantity of interest as
$$\Psi^F(P_{O,U,0})=\arg\min_\beta\,-E_0\sum_{t\in\tau}\sum_{d\in\mathcal{D}}h(d,t,V)\big\{Y_d(t)\log m_\beta(d,t,V)+(1-Y_d(t))\log\big(1-m_\beta(d,t,V)\big)\big\},$$
where $h(d,t,V)$ is a user-specified weight function. We discuss the choice of $h(d,t,V)$ further below.

Such a $\Psi^F_0=\beta_0$ solves the equation
$$0=E_0\sum_{t\in\tau}\sum_{d\in\mathcal{D}}h(d,t,V)\,\frac{\frac{d}{d\beta_0}m_{\beta_0}(d,t,V)}{m_{\beta_0}(1-m_{\beta_0})}\big(E_0(Y_d(t)\mid V)-m_{\beta_0}(d,t,V)\big).$$
This equation can be replaced by
$$0=E_0\sum_{t\in\tau}\sum_{d\in\mathcal{D}}h(d,t,V)\,\frac{\frac{d}{d\beta_0}m_{\beta_0}(d,t,V)}{m_{\beta_0}(1-m_{\beta_0})}\big(E_0(Y_d(t)\mid L(0))-m_{\beta_0}(d,t,V)\big),$$
which corresponds with
$$\Psi^F(P_{O,U,0})=\arg\min_\beta\,-E_0\sum_{t\in\tau}\sum_{d\in\mathcal{D}}h(d,t,V)\big\{E_0(Y_d(t)\mid L(0))\log m_\beta(d,t,V)+\big(1-E_0(Y_d(t)\mid L(0))\big)\log\big(1-m_\beta(d,t,V)\big)\big\}.$$
In this case we have that
$$\frac{\frac{d}{d\beta_0}m_{\beta_0}(d,t,V)}{m_{\beta_0}(1-m_{\beta_0})}=\big(\phi_j(d,t,V):j=1,\ldots,J\big).$$
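To make the projection concrete, the following sketch (ours; every number is made up for illustration) computes the projection coefficients by Newton iteration on the weighted cross-entropy, for a toy true dose–response curve on a grid of rules and time points, a logistic working model with basis $\phi(d,t)=(1,t,d)$, and weight $h\equiv 1$:

```python
import numpy as np

# Hypothetical true dose-response values E0(Y_d(t)) on a grid: rules d coded
# numerically (e.g. switch times) and time points t.  Toy numbers only.
rules = np.array([0.0, 1.0, 2.0])
times = np.array([1.0, 2.0, 3.0])
d, t = [a.ravel() for a in np.meshgrid(rules, times, indexing="ij")]
true_mean = 1 / (1 + np.exp(-(-2.0 + 0.5 * t - 0.3 * d)))  # "truth" E0(Y_d(t))

X = np.column_stack([np.ones_like(d), t, d])   # basis phi(d, t) = (1, t, d)
h = np.ones_like(true_mean)                    # weight function h(d, t) = 1

# Newton iterations for beta0 = argmin of the weighted cross-entropy between
# the true curve and the logistic working model m_beta (the projection).
beta = np.zeros(3)
for _ in range(25):
    m = 1 / (1 + np.exp(-X @ beta))
    grad = X.T @ (h * (true_mean - m))                    # weighted score
    hess = X.T @ (X * (h * m * (1 - m))[:, None])
    beta = beta + np.linalg.solve(hess, grad)

# Because this toy truth is itself logistic in (1, t, d), the projection
# recovers its coefficients: beta ~ (-2.0, 0.5, -0.3).
```

For a misspecified working model the same iteration still converges, but to the best weighted projection of the true curve rather than to the curve itself.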

To be completely general, we will define our causal quantity of interest as a function $f$ of $E(Y_d(t)\mid L(0))$ across $d\in\mathcal{D}$, $t\in\tau$, and the distribution $Q_{L(0)}$ of $L(0)$. Thus we define
$$\Psi^F(P_{O,U,0})=f\Big(\big(E_0(Y_d(t)\mid L(0)):d\in\mathcal{D},t\in\tau\big),Q_{L(0),0}\Big).$$
In addition to including the above example, this general formulation allows us to include marginal structural working models on continuous outcomes and on the intervention-specific hazard.

2.3.1 Choice of a weight function h(d,t,V)

Unless one is willing to assume that the MSM mβ is correctly specified, choice of the weight function changes the target quantity being estimated. Choice of the weight function should thus be guided by the motivating scientific question. For example, the simple weight function h(d,t,V)=1 gives equal weight to all time points and switch times. Alternatively, choice of a weight function equal to the marginal probability of following rule d through time t within strata of V gives greater weight to those rule, time, and baseline strata combinations with more support in the data, and zero weight to values (d,t,v) without support. As discussed further below, choice of a weight function can thus also affect both identifiability of the target parameter and the asymptotic and finite sample properties of IPW and TML estimators.
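As a sketch (ours, with a made-up observed treatment matrix), the second weight choice can be estimated empirically as the proportion of subjects whose observed intervention nodes match a given static rule through time $t$:

```python
import numpy as np

def empirical_h(A, rule_vec, t):
    """Empirical h(d, t): proportion of subjects whose observed intervention
    nodes A(0), ..., A(t) match the vector a(0), ..., a(t) implied by a
    static rule d.  A is an (n, K+1) array of observed treatment indicators."""
    a = np.asarray(rule_vec)[: t + 1]
    follows = np.all(A[:, : t + 1] == a, axis=1)
    return follows.mean()

# Made-up switching histories for four subjects (monotone: once switched,
# a subject stays switched).
A = np.array([[0, 0, 1],
              [0, 1, 1],
              [0, 0, 0],
              [1, 1, 1]])

h_never = empirical_h(A, [0, 0, 0], t=2)     # "never switch" rule through t=2
h_switch0 = empirical_h(A, [1, 1, 1], t=2)   # "switch at time 0" rule
```

Rules with no followers through time $t$ receive weight zero, which is exactly how this choice of $h$ removes unsupported $(d,t)$ combinations from the projection.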

2.3.2 Running example

Continuing our HIV example, recall that static regimes are a special case of dynamic regimes and define the set of treatment rules of interest $\mathcal{D}$ as the set of possible switch times (switch at time 0, switch at time 1, $\ldots$, never switch). We might focus on the marginal counterfactual survival curves under a range of switch times (with $V$ defined as the empty set). Alternatively, we might investigate how survival under a specific switch time differs among subjects that have a CD4+ T cell count $<50$ versus $\geq 50$ cells/$\mu$l at time of failure ($V=I(CD4(0)<50)$, where $CD4(0)\in L(0)$). For simplicity, for the remainder of the paper we use a running example in which we avoid conditioning on baseline covariates (i.e. $V=\emptyset$).

The true time-dependent causal dose–response curve (E0(Yd(t)):d∈D,t∈{1,…,K+1}) corresponds to the set of counterfactual survival curves (through time K+1) that would have been observed for the population as a whole under each possible switch time. In this example, each rule d implies a single vector aˉ; we use d(t) to refer to the value a(t) implied by rule d and sd to refer to the switch time assigned by rule d. One might then specify the following marginal structural working model to summarize how the counterfactual probability of death by time t varies as a function of t and assigned switch time: Logit mβ(d,t)=β0+β1t+β2d(t−1)(t−sd), where d(t−1)(t−sd) is time since switch for subjects who have switched by time t−1, and otherwise 0. For simplicity, we choose h(d,t)=1 and define the target causal quantity of interest as the projection of (E0(Yd(t)):d∈D,t∈{1,…,K+1}) onto mβ(d,t) according to ΨF(PO,U,0)=argminβ −E0 Σt∈τ Σd∈D {Yd(t)log mβ(d,t)+(1−Yd(t))log(1−mβ(d,t))}.(1)
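The working model above is simply a logistic function of time and time-since-switch. A minimal sketch of how it can be evaluated, with hypothetical coefficient values (not estimates from the paper):

```python
import math

def m_beta(beta, t, s_d):
    # Working MSM: Logit m_beta(d,t) = b0 + b1*t + b2 * d(t-1) * (t - s_d),
    # where d(t-1) = 1 if rule d has switched by time t-1 (i.e. s_d <= t-1).
    d_prev = 1 if s_d <= t - 1 else 0
    lin = beta[0] + beta[1] * t + beta[2] * d_prev * (t - s_d)
    return 1.0 / (1.0 + math.exp(-lin))  # expit of the linear predictor
```

For a rule that has not yet switched by time t−1, the time-since-switch term drops out and the model reduces to Logit mβ = β0 + β1t.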

2.4 Identifiability and definition of statistical target parameter

We assume the sequential randomization assumption [11]: A(k)⟂Ld|Pa(A(k)), k=0,…,K,(2) (noting that weaker identifiability assumptions are also possible; see, for example, Robins and Hernán [9]). In our HIV example, the plausibility of this assumption would be strengthened by measuring all determinants of the decision to switch to second line therapy that also affect mortality via pathways other than switch time.

We further assume positivity, informally an assumption of support for each rule of interest across covariate histories compatible with that rule of interest. Specifically, for each d,t,V for which h(d,t,V)≠0, we assume P0(A(k)=dk(Lˉ(k))|Lˉ(k),Aˉ(k−1)=dˉk−1(Lˉ(k−1)))>0, k=0,…,K, almost everywhere.(3) In our HIV example, in which h(d,t)=1, a subject who has not already switched should have some positive probability of both switching and not switching regardless of his covariate history. Under these assumptions, the counterfactual probability distribution of Ld is identified from the true observed data distribution P0 and given by the G-computation formula P0d [11]: P0d(l)=∏k=0K+1 QL(k),0d(lˉ(k)),(4) where QL(k),0d(lˉ(k))=QL(k),0(l(k)|lˉ(k−1),Aˉ(k−1)=dˉk−1(lˉ(k−1))). Thus this G-computation formula P0d is defined by the product over all L(k)-nodes of the conditional distribution of the L(k)-node, given its parents, and given Aˉ(k−1)=dˉk−1(Lˉ(k−1)). If identifiability assumptions (2) and (3) hold for each rule d∈D, then the time-dependent causal dose–response curve (E0(Yd(t)|V):d∈D,t∈τ) is also identified from P0 through the collection of G-computation formulas (P0d:d∈D). For the remainder of the paper, we choose τ={1,…,K+1} and at times suppress the index set τ.

Let Ld=(L(0),Ld(1),…,Ld(K+1)) denote a random variable with probability distribution P0d, which includes as a component the process Yd=(Yd(0),Yd(1),…,Yd(K+1)). The above-defined causal quantities can now be defined as a parameter of P0. For example, if Y(t)∈[0,1] and the causal parameter of interest is a vector of coefficients in a logistic MSM, then we have ΨF(PO,U,0)=argminβ −E0 Σt Σd∈D h(d,t,V){E0(Yd(t)|L(0))log mβ(d,t,V)+(1−E0(Yd(t)|L(0)))log(1−mβ(d,t,V))} ≡ Ψ(P0). The estimand ψ0=β0 solves the equation 0=E0 Σt Σd∈D h(d,t,V)((d/dβ0)mβ0(d,t,V))/(mβ0(1−mβ0))(E0(Yd(t)|L(0))−mβ0(d,t,V)). The causal identifiability assumptions put no restrictions on the probability distribution P0, so that our statistical model is unchanged, with the exception that we now also assume positivity (3). The statistical target parameter is now defined as a mapping Ψ:M→ℝJ that maps a probability distribution P∈M of O into a vector of parameter values Ψ(P).

The statistical estimation problem is now defined: We observe n i.i.d. copies O1,…,On of O∼P0∈M and we want to estimate Ψ(P0) for a defined target parameter mapping Ψ:M→ℝJ. For this estimation problem, the causal model plays no further role – even when one does not believe any of the causal assumptions, one might still argue that the statistical parameter Ψ(P0)=ψ0 represents an effect measure of interest controlling for all the measured confounders.

3 Pooled TMLE of working MSM for dynamic treatments and time-dependent outcome process

The TMLE algorithm starts out with defining the target parameter as a Ψ(Qˉ,QL(0)) for a particular choice Qˉ that is easier to estimate than the whole likelihood Q. It requires the derivation of the efficient influence curve D(P), which can also be represented as D(Qˉ,QL(0),g). Subsequently, it defines a loss function L(Qˉ,QL(0)) for (Qˉ0,QL(0),0) and a submodel ((Qˉ(ϵ,g),QL(0)(ϵ0)):ϵ,ϵ0) through (Qˉ,QL(0)) at (ϵ,ϵ0)=0, indexed by the intervention mechanism g, chosen so that (d/d(ϵ,ϵ0))L(Qˉ(ϵ,g),QL(0)(ϵ0))|(ϵ,ϵ0)=0 spans the efficient influence curve D(Qˉ,QL(0),g). Given these choices, it remains to define the updating algorithm, which simply uses the submodel through the initial estimator to determine the update by fitting (ϵ,ϵ0) with minimum loss based estimation (MLE), and this updating step is iterated till convergence, at which point the MLE of (ϵ,ϵ0) equals 0. By the fact that an MLE solves its score equation, it then follows that the final update (Qˉn*,QL(0),n) also solves the efficient influence curve equation Σi D(Qˉn*,QL(0),n,gn)(Oi)=0, which provides the foundation for its asymptotic linearity and efficiency. The remainder of this section presents each of these steps in detail.

An estimator of ψ0 is efficient among the class of regular estimators if and only if it is asymptotically linear with influence curve D(Q0,g0) [37]. The efficient influence curve can thus be used as an ingredient for the construction of an efficient estimator. One approach is to represent the efficient influence curve as an estimating function D(Q,g,ψ) and define an estimator ψn as the solution of PnD(Qn,gn,ψ)=0, given initial estimators Qn,gn. This is referred to as the estimating equation methodology for construction of locally efficient estimators [38]. Here, we instead use the efficient influence curve to define a targeted maximum likelihood (substitution) estimator Ψ(Qn) that, as a by-product of the procedure, satisfies PnD(Qn,gn)=0 and thus also solves the efficient influence curve estimating equation. Under regularity conditions, one can now establish that, if D(Qn,gn) consistently estimates D(Q0,g0), then Ψ(Qn) is asymptotically linear with influence curve equal to the efficient influence curve, so that Ψ(Qn) is asymptotically efficient. In addition, robustness properties of the efficient influence curve are naturally inherited by the TMLE.

Robins [13, 29] and Bang and Robins [22] reformulate the statistical target parameter and corresponding efficient influence curve for longitudinal MSMs on the intervention-specific mean as a series of iterated conditional expectations. For completeness, and to generalize to dynamic marginal structural working models possibly conditional on baseline covariates, as well as to general functions of the intervention-specific mean across a user-supplied class of interventions, we present this reformulation of the statistical target parameter below. The corresponding efficient influence curve is given in Appendix B. We will use the common notation Ph=∫h(O)dP(O) for the expectation of a function h(O) with respect to P.

3.1 Reformulation of the statistical target parameter in terms of iteratively defined conditional means

For the case Yd(t)∈[0,1], in Section 2.4 we defined Ψ(P) as Ψ(Q)=argminβ −E Σt Σd∈D h(d,t,V){QˉL(1)d,t log mβ(d,t,V)+(1−QˉL(1)d,t)log(1−mβ(d,t,V))},(5) where QˉL(1)d,t=EP(Yd(t)|L(0)). Thus, Ψ(P) only depends on P through QˉL(1)=(QˉL(1)d,t:d∈D,t) and QL(0). Therefore, we will also refer to the statistical target parameter Ψ(P) as Ψ(Q), where we redefine Q≡(QˉL(1),QL(0)). For each given t, we can use the following recursive definition of EP(Yd(t)|L(0)): for k=t,t−1,…,1 we have QˉL(k)d,t=E(Yd(t)|Lˉd(k−1))=EL(k)(QˉL(k+1)d,t|Lˉ(k−1),Aˉ(k−1)=dˉk−1(Lˉ(k−1))), where we define QˉL(t+1)d,t=Y(t). This defines QˉL(1)d,t as an iteratively defined conditional mean [22].

To obtain Ψ(Q) we simply plug QˉL(1)=(QˉL(1)d,t:d∈D,t), combined with the marginal distribution of L(0), into the above representation Ψ(Q)=Ψ(QˉL(1),QL(0)). As mentioned in the previous section, we have that Ψ(Q) solves the score equations given by 0=E Σt Σd∈D h(d,t,V)((d/dβ)mβ(d,t,V))/(mβ(1−mβ))(E(Yd(t)|L(0))−mβ(d,t,V)) ≡ E Σt Σd∈D h1(d,t,V)(E(Yd(t)|L(0))−mβ(d,t,V)), where we defined h1(d,t,V)≡h(d,t,V)((d/dβ)mβ(d,t,V))/(mβ(1−mβ)). The TMLE for the linear working model using the squared error loss function is obtained by simply redefining h1(d,t,V)≡h(d,t,V)(d/dβ)mβ(d,t,V).

In general, the above shows that we can represent Ψ(P)=f((E(Yd(t)|L(0)):t,d),QL(0)) as a function f(Q), where Q=(QˉL(1)=(E(Yd(t)|L(0)):d,t),QL(0)), and we have an explicit representation of the derivative equation corresponding with f.

3.2 Estimation of intervention mechanism g0

The log-likelihood loss function for g0 is −log g. Specifically, we can factorize g0 as g0=∏k=0K g1,k(A1(k)|Pa(A1(k)))g2,k(A2(k)|Pa(A2(k))), where (g1,k:k) represents the treatment mechanism and (g2,k:k) represents the censoring mechanism. Both mechanisms can be estimated separately with a log-likelihood based logistic regression estimator, either according to parametric models, or preferably using the state of the art in machine learning. In particular, we can use the log-likelihood based super learner based on a library of candidate machine learning algorithms, which uses cross-validation to determine the best performing weighted combination of the candidate machine learning algorithms [39]. Use of such aggressive data-adaptive algorithms is recommended in order to ensure consistency of gn.
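The targeting steps in the sections that follow require the cumulative product g0:k of the fitted conditional intervention probabilities. A minimal sketch, assuming hypothetical fitted-probability functions (one per intervention node; treatment and censoring factors can each be handled this way and multiplied together):

```python
def cumulative_g(g_fits, a_hist, l_hist):
    # g_fits[k]: a fitted function returning the estimated probability that
    # A(k) takes its observed value a_hist[k], given the observed past.
    # Returns the product g_{0:K} = prod_k g_k(A(k) | parents).
    prod = 1.0
    for k, g_k in enumerate(g_fits):
        prod *= g_k(a_hist[k], l_hist[: k + 1])
    return prod
```

In practice each g_fits[k] would be the prediction function of a logistic regression or super learner fit, evaluated on the subject's observed history.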

If there are certain variables in the Pa(A(k)) that are known to be instrumental variables (variables that affect future Y nodes only via their effects on A(k)), then these variables should be excluded from our estimates of g0 in the TMLE procedure. In that case our estimate of the conditional distribution of A(k) is in fact not estimating the conditional distribution of A(k) given its parents; however, for simplicity we do not make this explicit in our notation.

3.3 Loss functions and initial estimator of Q0

We will alternate between the notations Qˉkd,t and QˉL(k)d,t. Recall that Ψ(Q) depends on Q through QL(0) and Qˉ≡(Qˉkd,t:d∈D,t∈τ,k=1,…,t). Note Qˉkd,t is a function of lˉ(k−1), t=1,…,K+1, k=1,…,t, d∈D. We will use the following loss function for Qˉkd,t: Ld,t,k,Qˉk+1d,t(Qˉkd,t)=−I(Aˉ(k−1)=dˉk−1(Lˉ(k−1))){Qˉk+1d,t log Qˉkd,t+(1−Qˉk+1d,t)log(1−Qˉkd,t)}. This is an application of the log-likelihood loss function for the conditional mean of Qˉk+1d,t given past covariates and given that past treatment has been assigned according to rule d. For example, fitting a parametric logistic regression model of Qˉk+1d,t on past covariates among subjects with Aˉ(k−1)=dˉk−1(Lˉ(k−1)) would minimize the empirical mean of this loss function over the unknown parameters of the logistic regression model. Alternatively, one could use loss-based machine learning algorithms, such as loss-based super learning, with this loss function.

In this loss function, the outcome Qˉk+1d,t is treated as known. In implementation of our estimator, it will be replaced by an estimate; we thus refer to Qˉk+1d,t as a nuisance parameter in this loss function. The collection of loss functions from k=1,,t implies a sequential regression procedure where one starts at k=t and sequentially fits Qˉkd,t for k=t,,1. We describe this procedure in greater detail in the next subsection, for a sum-loss function that sums the above loss function over a collection of rules dD.

By summing over d∈D, the time points t, and k=1,…,t, we obtain the loss function LQˉ(Qˉ)≡Σt Σk=1t Σd∈D Ld,t,k,Qˉk+1d,t(Qˉkd,t) for the whole Qˉ=(Qˉkd,t:d∈D,k=1,…,t,t=1,…,K+1).

We will use the log-likelihood loss L(QL(0))=−log QL(0) as loss function for the distribution QL(0),0 of L(0), but this loss will play no role since we will estimate QL(0),0 with the empirical distribution function QL(0),n. To conclude, we have presented a loss function for all components of (Qˉ,QL(0)) our target parameter depends on, and the sum-loss function LQ(Qˉ,QL(0))≡LQˉ(Qˉ)−log QL(0) is a valid loss function for (Qˉ,QL(0)) as a whole.

3.4 Non-targeted substitution estimator

These loss functions imply a sequential regression methodology for fitting each of the required components of (Qˉ,QL(0)). These initial fits can then be used to construct a non-targeted plug-in estimator of the target parameter ψ0. As noted, we estimate the marginal distribution of L(0) with the empirical distribution. We now describe how to obtain an estimator Qˉk,nd,t, d∈D, k=1,…,t, for any given t=1,…,K+1. We define Qˉt+1d,t=Y(t) for all d, and recall that Qˉt,nd,t is the regression of Y(t) on Aˉ(t−1)=dˉt−1(Lˉ(t−1)) and Lˉ(t−1). This latter regression can be carried out conditional on Aˉ1(t−1),Lˉ(t−1), stratifying only on not being censored through time t−1 (i.e. Aˉ2(t−1)=0). The resulting fit for all Aˉ1(t−1) values can then be evaluated at Aˉ(t−1)=dˉt−1(Lˉ(t−1)). In this manner, if certain rules have little support, one can still obtain an initial estimator that smooths across all observations.

Given the regression fit Qˉt,nd,t for a d∈D, we regress Qˉt,nd,t onto Aˉ(t−2),Lˉ(t−2) and evaluate it at Aˉ(t−2)=dˉt−2(Lˉ(t−2)), giving us Qˉt−1,nd,t. This is carried out for each d∈D, giving us Qˉt−1,nd,t for each d∈D. Again, given this regression Qˉt−1,nd,t, we regress it on Aˉ(t−3),Lˉ(t−3), and evaluate it at Aˉ(t−3)=dˉt−3(Lˉ(t−3)), giving us Qˉt−2,nd,t. We carry this out for each d∈D, giving us Qˉt−2,nd,t for each d∈D. This process is iterated until we obtain an estimator Qˉ1,nd,t(L(0)) for each d∈D. Since this process is carried out for each t=1,…,K+1, this results in an estimator Qˉ1,nd,t for each d∈D and t=1,…,K+1. We denote this estimator of Qˉ1,0=(Qˉ1,0d,t:d,t) with Qˉ1,n=(Qˉ1,nd,t:d,t). Note that a plug-in estimator Ψ(Qˉ1,n,QL(0),n) of ψ0=Ψ(Qˉ1,0,QL(0),0) is now obtained by regressing Qˉ1,nd,t onto d,t,V according to the working marginal structural model using weighted logistic regression based on the pooled sample (Qˉ1,nd,t(Li(0)),Vi,d,t), d∈D, i=1,…,n, t=1,…,K+1, with weight h(d,t,Vi).

The pooled TMLE presented below utilizes this same sequential regression algorithm and makes use of these initial fits of Q0. In order to provide a consistent initial estimator of Q0 and thereby improve the efficiency of the TMLE, use of an aggressive data-adaptive algorithm such as super learning [39] when generating the initial regression fits is recommended. These initial fits are then updated to remove bias in a series of targeting steps that rely on the fit gn of g0. The updating steps involve submodels whose score spans the efficient influence curve.

3.5 Loss function and least favorable submodel that span the efficient influence curve

Recall that we use the notation g0:k=∏j=0k gA(j) for the cumulative product of conditional intervention distributions. Consider the submodel Qˉkt(ϵ,g)=(Qˉkd,t(ϵ,g):d∈D) with parameter ϵ defined by Logit QˉL(k)d,t(ϵ,g)=Logit QˉL(k)d,t+ϵ h1(d,t,V)/g0:k−1, k=t,…,1. This parameter ϵ is of the same dimension as β and h1. This defines a submodel Qˉt(ϵ,g) with parameter ϵ through Qˉt=(Qˉkd,t:d∈D,k=1,…,t). Note that (d/dϵ)Ld,t,k,Qˉk+1d,t(Qˉkd,t(ϵ,g))|ϵ=0=−h1(d,t,V)(I(Aˉ(k−1)=dˉk−1(Lˉ(k−1)))/g0:k−1)(Qˉk+1d,t−Qˉkd,t). This shows that (d/dϵ)Σt=1K+1 Σd∈D Σk=1t Ld,t,k,Qˉk+1d,t(Qˉkd,t(ϵ,g))|ϵ=0=−Σt=1K+1 Σd∈D Σk=1t h1(d,t,V)(I(Aˉ(k−1)=dˉk−1(Lˉ(k−1)))/g0:k−1)(Qˉk+1d,t−Qˉkd,t)=−c(Q)[D*(P)−DL(0)*(Q)], where D*(P) is the efficient influence curve as presented in Corollary 1, Appendix B, and we define c(Q)≡EQL(0) Σt,d h1(d,t,V)(d/dβ)mβ(d,t,V), giving DL(0)*(Q)=c(Q)−1 Σt,d h1(d,t,V)(QˉL(1)d,t−mβ(d,t,V)). In other words, the sum-loss function LQˉ(Qˉ)=Σt=1K+1 Σd∈D Σk=1t Ld,t,k,Qˉk+1d,t(Qˉkd,t) and submodel Qˉ(ϵ,g)=(Qˉkd,t(ϵ,g):k,d,t) through Qˉ=(Qˉkd,t:k,d,t) generates the component D*(P)−DL(0)*(Q) of the efficient influence curve D*(P).

Consider also a submodel QL(0)(ϵ0) of QL(0) with score DL(0)*(Q), but this submodel and loss will play no role in the TMLE algorithm, since we will estimate QL(0) with its NPMLE, the empirical distribution of Li(0), i=1,…,n, so that the MLE of ϵ0 will be equal to zero. This defines our submodel (QL(0)(ϵ0),Qˉ(ϵ,g):ϵ0,ϵ). The sum-loss function LQ(QL(0),Qˉ)=LQˉ(Qˉ)−log QL(0) and this submodel satisfy the condition that the generalized score spans the efficient influence curve: D*(Q,g)∈⟨(d/d(ϵ,ϵ0))LQ(QL(0)(ϵ0),Qˉ(ϵ,g))|(ϵ,ϵ0)=0⟩.(6)

3.6 Pooled TMLE

We now describe the TMLE algorithm based on the above choices of (1) the representation of Ψ(P) as Ψ(Qˉ,QL(0)), (2) the loss function for (Qˉ,QL(0)), and (3) the least favorable submodels ((Qˉ(ϵ,g):ϵ),(QL(0)(ϵ0):ϵ0)) through (Qˉ,QL(0)) at (ϵ,ϵ0)=0 for fluctuating these parameters (Qˉ,QL(0)). We utilize the same sequential regression approach described in Section 3.4, but now incorporate sequential targeted updating of the initial regression fits. We assume an estimator gn of g0. We first specify where in the algorithm updating occurs and then describe the updating process.

Recall that we define Qˉt+1d,t=Y(t) for all d and that Qˉt,nd,t is the regression of Y(t) on Aˉ(t−1)=dˉt−1(Lˉ(t−1)) and Lˉ(t−1). For any given t=1,…,K+1, the initial estimator Qˉt,nd,t is first updated to Qˉt,nd,t,*, using a logistic regression fit of our least favorable submodels, as described below. For a d∈D, we then regress the updated regression fit Qˉt,nd,t,* onto Aˉ(t−2),Lˉ(t−2), and evaluate it at Aˉ(t−2)=dˉt−2(Lˉ(t−2)), giving us Qˉt−1,nd,t. This is carried out for each d∈D, giving us Qˉt−1,nd,t for each d∈D. The regressions Qˉt−1,nd,t are then updated for each d∈D, as described below, giving us Qˉt−1,nd,t,* for each d∈D. For a d∈D, we then regress the updated regression fit Qˉt−1,nd,t,* on Aˉ(t−3),Lˉ(t−3) and evaluate it at Aˉ(t−3)=dˉt−3(Lˉ(t−3)), giving us Qˉt−2,nd,t. We again carry this out for each d∈D, giving us Qˉt−2,nd,t for each d∈D, and again update the resulting regressions, giving us Qˉt−2,nd,t,* for each d∈D. This process is iterated until we obtain an updated estimator Qˉ1,nd,t,*(L(0)) for each d∈D. Since this process is carried out for each t=1,…,K+1, this results in an estimator Qˉ1,nd,t,* for each d∈D and t=1,…,K+1. We denote this estimator of Qˉ1,0=(Qˉ1,0d,t:d,t) with Qˉ1,n*=(Qˉ1,nd,t,*:d,t).

The updating steps are implemented as follows: for each t∈{1,…,K+1}, and for k=t down to k=1, we compute ϵk,n≡argminϵ Pn Σd∈D Ld,t,k,Qˉk+1,nd,t,*(Qˉk,nd,t(ϵ,gn)), and compute the corresponding update Qˉk,nd,t,*=Qˉk,nd,t(ϵk,n,gn) for all d∈D. Note that ϵk,n=argminϵ −Σi=1n Σd∈D I(Aˉi(k−1)=dˉk−1(Lˉi(k−1))){Qˉk+1,nd,t,*(Lˉi(k))log Qˉk,nd,t(ϵ,gn)(Lˉi(k−1))+(1−Qˉk+1,nd,t,*(Lˉi(k)))log(1−Qˉk,nd,t(ϵ,gn)(Lˉi(k−1)))}. Thus ϵk,n can be obtained by fitting a logistic regression of the outcome Qˉk+1,nd,t,*(Lˉi(k)) with offset Logit Qˉk,nd,t on the multivariate covariate h1(d,t,Vi)I(Aˉi(k−1)=dˉk−1(Lˉi(k−1)))/g0:k−1,n(Oi), using a data set pooled across i=1,…,n and d∈D (consisting of n×|D| observations).
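Once ϵ has been fit in the pooled offset logistic regression, the targeting update of an individual prediction is a simple map on the logit scale. A minimal sketch, where the fitted ϵ and the clever-covariate values H are assumed to come from the pooled fit described above:

```python
import math

def logit(p):
    return math.log(p / (1.0 - p))

def expit(x):
    return 1.0 / (1.0 + math.exp(-x))

def tmle_update(qbar, eps, H):
    # One targeting update: expit(logit(Qbar) + eps . H), where H is the
    # weighted covariate h1(d,t,V) * I(Abar = dbar) / g_{0:k-1} and eps is
    # the vector fitted by the pooled offset logistic regression.
    return expit(logit(qbar) + sum(e * h for e, h in zip(eps, H)))
```

When the fitted ϵ is zero (no residual bias detected along the submodel), the update leaves the initial prediction unchanged.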

This defines the TMLE Qˉn*=(Qˉk,nd,t,*:d∈D,t,k=1,…,t). In particular, Qˉ1,n*=(Qˉ1,nd,t,*:d∈D,t) is the TMLE of Qˉ1,0=(E0(Yd(t)|L(0)):d∈D,t). This now defines the TMLE (QL(0),n,Qˉn*) of (QL(0),0,Qˉ0), where QL(0),n is the empirical distribution of L(0).

The TMLE of ψ0 is the plug-in estimator corresponding with Qˉ1,n* and QL(0),n: ψn=Ψ(Qˉ1,n*,QL(0),n). This plug-in estimator Ψ(Qˉ1,n*,QL(0),n) of ψ0=Ψ(Qˉ1,0,QL(0),0) is obtained by regressing Qˉ1,nd,t,* onto d,t,V according to the marginal structural working model in the pooled sample (Qˉ1,nd,t,*(Li(0)),Vi,d,t), d∈D, i=1,…,n, t=1,…,K+1, using weights h(d,t,Vi).

An alternative pooled TMLE that fits only a single ϵ to compute the update is described in Appendix C.

3.7 Statistical inference for pooled TMLE

By construction, the TMLE solves the efficient influence curve equation 0=PnD*(Qˉn*,gn,Ψ(Qˉn*,QL(0),n)), thereby making it a double robust, locally efficient substitution estimator under regularity conditions and positivity (3) (van der Laan [20], theorem 8.5, appendix A.18). Here, we provide standard error estimates and thereby confidence intervals for the case that gn is a maximum likelihood estimator of g0 using a correctly specified semiparametric model for g0.

Specifically, if gn is a maximum likelihood estimator of g0 according to a correctly specified semiparametric model for g0, and Qˉn* converges to some possibly misspecified Qˉ, then under regularity conditions the TMLE ψn is asymptotically linear with an influence curve given by D*(Qˉ,g0,ψ0) minus its projection onto the tangent space of this semiparametric model for g0. As a consequence, the covariance matrix Σ0=P0D*(Qˉ,g0,ψ0)2 is more spread out than, or equal to, the asymptotic variance of √n(ψn−ψ0). A consistent estimator of this upper bound is given by Σn=PnD*(Qˉn*,gn,ψn)2. As a consequence, ψn(j)±1.96√(Σn(j,j)/n) is an asymptotically conservative 95% confidence interval for ψ0(j), and we can also use this multivariate normal limit result, ψn≈N(ψ0,Σ0/n), to construct a simultaneous confidence interval for ψ0 and to test null hypotheses about ψ0. This variance estimator treats the weight function h as known. If h is estimated, then this variance estimator still provides valid statistical inference for the statistical target parameter defined by the estimated h.
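The Wald-type interval above is straightforward to compute once the estimated influence curve has been evaluated at each observation. A minimal sketch for one coordinate ψn(j), with hypothetical inputs:

```python
import math

def ic_confidence_interval(psi_hat, ic_values, z=1.96):
    # ic_values: the estimated efficient influence curve D*(.) evaluated at
    # each of the n observations (approximately mean zero by construction).
    n = len(ic_values)
    var = sum(v * v for v in ic_values) / n   # Sigma_n = P_n (D*)^2
    se = math.sqrt(var / n)                   # standard error of psi_n
    return psi_hat - z * se, psi_hat + z * se
```

For a simultaneous interval over all coordinates of ψ, z would instead be chosen from the multivariate normal limit distribution.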

In the case that gn is a data-adaptive estimator converging to g0, we suggest (without proof), that this variance estimator will still provide an asymptotically conservative confidence interval under regularity conditions. However, ideally the data-adaptive estimator gn should also be targeted [40]. An approach to valid inference in the case where gn is inconsistent but Qn is consistent is also discussed in van der Laan [40]; however, it remains to be generalized to the parameters in this paper.

4 Implementation of the pooled TMLE

The previous section reformulated the statistical parameter in terms of iteratively defined conditional means and described a pooled TMLE for this representation. In this section, we illustrate notation and implementation of this TMLE to estimate the parameters of a marginal structural working model on counterfactual survival over time.

4.1 The statistical estimation problem

We continue our motivating example, in which the goal is to learn the effect of switch time on survival. For illustration, focus on the two time point case where K=1. Let the observed data consist of n i.i.d. copies O1,…,On of Oi=(Li(0),Ai(0),Li(1),Ai(1),Yi(2))∼P0. Let L(t)=(Y(t),CD4(t)), where Y(t) is an indicator of death by time t, and CD4(t) is CD4 count at time t. Assume all subjects are alive at baseline (Y(0)=0). As above, A(t) is an indicator of switch to second line by time t. We assume no right censoring, so that all subjects are followed until death or the end of the study (for convenience, define variable values after death as equal to their last observed value). We specify an NPSEM such that each variable may be a function of all variables that precede it and an independent error, and assume the corresponding non-parametric statistical model for P0.

Define the set of treatment rules of interest D as the set of all possible switch times {0,1,2} (where 2 corresponds to no switch). Each rule d implies a single vector aˉ=(a(0),a(1)); we use d(t) to refer to the value a(t) implied by rule d, and sd to refer to the switching time implied by rule d. We specify the following marginal structural working model for the counterfactual probability of death by time t under rule d: Logit mβ(d,t)=β0+β1t+β2(d(t−1)(t−sd)).(7) The target causal parameter is defined as the projection of (E0(Yd(t)):d∈D,t∈{1,2}) onto mβ(d,t) according to eq. (1).

Under the sequential randomization (2) and positivity (3) assumptions, E(Yd(t)|L(0)) is identified by the G-computation formula (4). The target statistical parameter is defined as the projection of (E(Yd(t)|L(0)):d∈D,t∈{1,2}) onto the marginal structural working model mβ, according to eq. (5) with h(d,t)=1.

4.1.1 Reformulation of the statistical target parameter

Note that E(Yd(t=2)|L(0)) for rule d (denoted Qˉ1d,2) can be expressed in terms of iteratively defined conditional means: EL(1)(E(Y(2)|L(1),L(0),A(1)=d(1),A(0)=d(0))|L(0),A(0)=d(0)), while E(Yd(t=1)|L(0)) (denoted Qˉ1d,1) equals E(Y(1)|L(0),A(0)=d(0)). The statistical target parameter Ψ(Q) is defined by plugging ((Qˉ1d,1,Qˉ1d,2):d∈D) and the marginal distribution of L(0) into eq. (5).
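For discrete covariates, the iterated conditional expectations above can be computed exactly with stratified empirical means. Below is a minimal two time point sketch on toy data with saturated strata; in practice one would replace the stratified means with parametric or machine-learning regression fits, and the data values here are hypothetical:

```python
def strat_mean(rows, value, cond):
    # Empirical mean of value(r) over the rows satisfying cond(r).
    sel = [value(r) for r in rows if cond(r)]
    return sum(sel) / len(sel)

def ice_mean(data, a0, a1):
    # Iterated conditional expectation for the static rule d = (a0, a1):
    # data is a list of dicts with keys L0, A0, L1, A1, Y.
    def qbar2(l0, l1):
        # k=2: Qbar2(l0,l1) = E[Y | L0=l0, L1=l1, A0=a0, A1=a1]
        return strat_mean(data, lambda r: r["Y"],
                          lambda r: (r["L0"], r["L1"], r["A0"], r["A1"]) == (l0, l1, a0, a1))
    def qbar1(l0):
        # k=1: Qbar1(l0) = E[Qbar2(l0, L1) | L0=l0, A0=a0]
        return strat_mean(data, lambda r: qbar2(l0, r["L1"]),
                          lambda r: (r["L0"], r["A0"]) == (l0, a0))
    # Final step: average Qbar1(L0) over the empirical distribution of L0.
    return sum(qbar1(r["L0"]) for r in data) / len(data)
```

The recursion mirrors the display above: the inner regression conditions on the full history, the outer one on L(0) only, and the last step averages over the empirical distribution of L(0).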

4.2 Estimator implementation

We begin by describing implementation of a simple plug-in estimator of Ψ(Q).

4.2.1 Non-targeted substitution estimator

  • 1.

    For each rule of interest dD, corresponding to each possible switch time, generate a vector Qˉ1,nd,2 of length n for t=2:

    • (a)

      Fit a logistic regression of Y(2) on L(1),L(0),A(1),A(0) and generate a predicted value for each subject by evaluating this regression fit at A(1)=d(1),A(0)=d(0). Note E0(Y(2)|Y(1)=1)=1, so the regression need only be fit and evaluated among subjects who remain alive at time 1. This gives a vector Qˉ2,nd,2 of length n.

    • (b)

      Fit a logistic regression of the predicted values generated in the previous step on L(0),A(0). Generate a new predicted value for each subject by evaluating this regression fit at A(0)=d(0). This gives a vector Qˉ1,nd,2 of length n.

  • 2.

    For each rule of interest dD generate a vector Qˉ1,nd,1 of length n for t=1: Fit a logistic regression of Y(1) on L(0),A(0) and generate a predicted value for each subject by evaluating this regression fit at A(0)=d(0).

  • 3.

The previous steps generated Qˉ1,n=(Qˉ1,nd,t:d∈D,t∈{1,2}). Stack these vectors to give a single vector with length equal to the number of subjects n times the number of rules |D| times the number of time points (n×3×2). Fit a pooled logistic regression of Qˉ1,n on (d,t) according to model mβ (eq. 7), with weights given by h(d,t) (here equal to 1). This gives an estimator of the target parameter Ψ(Q).
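The stacking in the final step above can be sketched as follows; the function and argument names are hypothetical, and the returned rows would be passed to a weighted logistic regression routine to fit mβ:

```python
def stack_for_msm(qbar1, rules, times, n, h=lambda d, t: 1.0):
    # qbar1[(d, t)] is a length-n vector of predicted values Qbar1^{d,t}(L_i(0)).
    # Returns pooled long-format rows (q, d, t, weight), one per
    # (rule, time, subject) combination: n * |rules| * |times| rows in total.
    rows = []
    for d in rules:
        for t in times:
            for i in range(n):
                rows.append((qbar1[(d, t)][i], d, t, h(d, t)))
    return rows
```

For the running example with three rules and two time points, this produces the n×3×2 rows described in the text.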

We now describe how the pooled TMLE modifies this algorithm to update the initial estimator Qˉ1,n. In the following section, we compare the pooled TMLE to this non-targeted substitution estimator and with other available estimators.

4.2.2 Pooled TMLE

  • 1.

Estimate P0(A(1)|A(0),L(1),L(0)) and P0(A(0)|L(0)). Denote these estimators g1,n and g0,n, respectively, and let g0:1,n=g0,n g1,n denote their product. In our example, this step involves estimating the conditional probability of switching at time 0 given baseline CD4 count, and estimating the conditional probability of switching at time 1, given that a subject did not switch at time 0, did not die at time 1, and given CD4 count at times 0 and 1.

  • 2.

Generate a vector Qˉ2,nd,2,* of length n×|D| for t=2, k=2:

    • (a)

Fit a logistic regression of Y(2) on L(1),L(0),A(1),A(0). Generate a predicted value for each subject and each d∈D by evaluating this regression fit at A(1)=d(1),A(0)=d(0). Note that E0(Y(2)|Y(1)=1)=1, so the regression need only be fit and evaluated among subjects who remain alive at time 1. This gives a vector of initial values Qˉ2,nd,2 of length n×|D|.

    • (b)

For each subject, i=1,…,n, create a vector consisting of one copy of Yi(2) for each d∈D. Stack these copies to create a single vector of length n×|D|, denoted Qˉ3,nd,2,*.

    • (c)

For each subject i=1,…,n and each d∈D, create a new multidimensional weighted covariate: h(d,t=2)((d/dβ)mβ(d,t=2))/(mβ(1−mβ))×I(Aˉi=dˉ)/g0:1,n(Oi).

In our example, h(d,t)=1, and ((d/dβ)mβ(d,t))/(mβ(1−mβ)) equals 1, t, and d(t−1)(t−sd) for the derivative taken with respect to β0, β1, and β2, respectively. The following 3×3 matrix would thus be generated for each subject i, with rows corresponding to switch at time 0, switch at time 1, or never switch:

( 1×I(Ai(0)=1,Ai(1)=1)/g0:1,n(Oi)  2×I(Ai(0)=1,Ai(1)=1)/g0:1,n(Oi)  2×I(Ai(0)=1,Ai(1)=1)/g0:1,n(Oi)
  1×I(Ai(0)=0,Ai(1)=1)/g0:1,n(Oi)  2×I(Ai(0)=0,Ai(1)=1)/g0:1,n(Oi)  1×I(Ai(0)=0,Ai(1)=1)/g0:1,n(Oi)
  1×I(Ai(0)=0,Ai(1)=0)/g0:1,n(Oi)  2×I(Ai(0)=0,Ai(1)=0)/g0:1,n(Oi)  0×I(Ai(0)=0,Ai(1)=0)/g0:1,n(Oi) )

Stack these matrices to create a matrix with n×|D| rows and one column for each component of β (here, 3n×3).

    • (d)

Among those subjects still alive at the previous time point (Y(1)=0), fit a pooled logistic regression of Qˉ3,nd,2,* (the Y(2) vector) on the weighted covariates created in the previous step, suppressing the intercept and using as offset Logit Qˉ2,nd,2, the logit of the initial predicted values for t=2 and k=2. This gives a fit for the multivariate ϵ2. Denote this fit ϵ2,n=(ϵ2,nβ0,ϵ2,nβ1,ϵ2,nβ2).

    • (e)

Generate Qˉ2,nd,2,* by evaluating the logistic regression fit in the previous step at each d∈D among those subjects for whom Y(1)=0. For subject i and rule d, evaluate Expit( Logit(Qˉ2,nd,2(Lˉi,d)) + ϵ2,nβ0/(g0,n(d(0)|Li(0))g1,n(d(1)|Li(0),d(0),Li(1))) + ϵ2,nβ1×2/(g0,n(d(0)|Li(0))g1,n(d(1)|Li(0),d(0),Li(1))) + ϵ2,nβ2×d(1)(2−sd)/(g0,n(d(0)|Li(0))g1,n(d(1)|Li(0),d(0),Li(1))) ). For subjects with Y(1)=1, Qˉ2,nd,2,*=Qˉ2,nd,2=1. This gives an updated vector Qˉ2,nd,2,* of length n×|D|.

  • 3.

Generate a vector Qˉ1,nd,2,* of length n×|D| for t=2, k=1:

    • (a)

Fit a logistic regression of Qˉ2,nd,2,* (generated in the previous step) on L(0),A(0). Generate a predicted value for each subject and each d∈D by evaluating this regression fit at A(0)=d(0). This gives a vector of initial values Qˉ1,nd,2 of length n×|D|.

    • (b)

For each subject i=1,…,n and each d∈D, create the multidimensional weighted covariate as above, now for k=1: h(d,t=2)((d/dβ)mβ(d,t=2))/(mβ(1−mβ))×I(Ai(0)=d(0))/g0,n(Oi).

The following 3×3 matrix would thus be generated for each subject i, with rows corresponding to switch at time 0, switch at time 1, or never switch:

( 1×I(Ai(0)=1)/g0,n(Oi)  2×I(Ai(0)=1)/g0,n(Oi)  2×I(Ai(0)=1)/g0,n(Oi)
  1×I(Ai(0)=0)/g0,n(Oi)  2×I(Ai(0)=0)/g0,n(Oi)  1×I(Ai(0)=0)/g0,n(Oi)
  1×I(Ai(0)=0)/g0,n(Oi)  2×I(Ai(0)=0)/g0,n(Oi)  0×I(Ai(0)=0)/g0,n(Oi) )

Stack these matrices to create a matrix with n×|D| rows and one column for each dimension of β.

    • (c)

Fit a pooled logistic regression of Qˉ2,nd,2,* (the updated fit generated in step 2) on these weighted covariates, suppressing the intercept and using as offset Logit Qˉ1,nd,2, the logit of the initial predicted values for t=2 and k=1. This gives a fit for the multivariate ϵ1. Denote this fit ϵ1,n=(ϵ1,nβ0,ϵ1,nβ1,ϵ1,nβ2).

    • (d)

Generate Qˉ1,nd,2,* by evaluating the logistic regression fit in the previous step at each d∈D. For subject i and rule d, evaluate Expit( Logit(Qˉ1,nd,2(Li(0),d(0))) + ϵ1,nβ0/g0,n(d(0)|Li(0)) + ϵ1,nβ1×2/g0,n(d(0)|Li(0)) + ϵ1,nβ2×d(1)(2−sd)/g0,n(d(0)|Li(0)) ). This gives an updated vector Qˉ1,nd,2,* of length n×|D|.

  • 4.

Generate a vector Qˉ1,nd,1,* of length n×|D| for t=1, k=1:

    • (a)

Fit a logistic regression of Y(1) on L(0),A(0). Generate a predicted value for each subject and each d∈D by evaluating this regression fit at A(0)=d(0). This gives a vector of initial values Qˉ1,nd,1 of length n×|D|.

    • (b)

For each subject, i=1,…,n, create a vector consisting of one copy of Yi(1) for each d∈D. Stack these copies to create a single vector of length n×|D|, denoted Qˉ2,nd,1,*.

    • (c)

For each subject i=1,…,n and each d∈D, create a new multidimensional weighted covariate, for t=1, k=1: h(d,t=1)((d/dβ)mβ(d,t=1))/(mβ(1−mβ))×I(Ai(0)=d(0))/g0,n(Oi).

The following 3×3 matrix would thus be generated for each subject i, with rows corresponding to switch at time 0, switch at time 1, or never switch:

( 1×I(Ai(0)=1)/g0,n(Oi)  1×I(Ai(0)=1)/g0,n(Oi)  1×I(Ai(0)=1)/g0,n(Oi)
  1×I(Ai(0)=0)/g0,n(Oi)  1×I(Ai(0)=0)/g0,n(Oi)  0×I(Ai(0)=0)/g0,n(Oi)
  1×I(Ai(0)=0)/g0,n(Oi)  1×I(Ai(0)=0)/g0,n(Oi)  0×I(Ai(0)=0)/g0,n(Oi) )

Stack these matrices to create a matrix with n×|D| rows and one column for each component of β.

    • (d)

Fit a pooled logistic regression of Qˉ2,nd,1,* (the Y(1) vector) on these weighted covariates, suppressing the intercept and using as offset Logit Qˉ1,nd,1, the logit of the initial predicted values for t=1 and k=1. This gives a fit for the multivariate ϵ1. Denote this fit ϵ1,n=(ϵ1,nβ0,ϵ1,nβ1,ϵ1,nβ2).

    • (e)

Generate Qˉ1,nd,1,* by evaluating the logistic regression fit in the previous step at each d∈D. For subject i and rule d, evaluate Expit( Logit(Qˉ1,nd,1(Li(0),d(0))) + ϵ1,nβ0/g0,n(d(0)|Li(0)) + ϵ1,nβ1/g0,n(d(0)|Li(0)) + ϵ1,nβ2×d(0)(1−sd)/g0,n(d(0)|Li(0)) ). This gives an updated vector Qˉ1,nd,1,* of length n×|D|.

  • 5.

The previous steps generated Qˉ1,n*=(Qˉ1,nd,t,*:d∈D,t∈{1,2}). Stack these vectors to give a single vector with length equal to the number of subjects n times the number of rules |D| times the number of time points (n×3×2). Fit a pooled logistic regression of Qˉ1,n* on (d,t) according to model mβ (eq. 7), with weights given by h(d,t) (here equal to 1). This gives the pooled TMLE of the target parameter Ψ(Q).

5 Comparison with alternative estimators

In this section we compare the TMLE described above with several alternative estimators available for dynamic MSMs for survival: non-targeted substitution estimators, IPW estimators, and the stratified TMLE of Schnitzer et al. [32].

5.1 Non-targeted substitution estimator

The consistency of non-targeted substitution estimators of $\Psi(Q)$ relies entirely on consistent estimation of the $Q$ portions of the observed data likelihood. For estimators based on the parametric G-formula, this requires correctly specifying parametric estimators for the conditional distributions of all non-intervention nodes given their parents [7, 11, 12, 41]. For the non-targeted estimator described in Section 3.4, this requires consistently estimating the iteratively defined conditional means $\bar{Q}\equiv(\bar{Q}^{d,t}_{k}:d\in\mathcal{D},\,t\le\tau,\,k=1,\dots,t)$.

Correct a priori specification of parametric models for Qˉ in either case is rarely possible, rendering such non-targeted plug-in estimators susceptible to bias. Further, while machine learning methods, such as Super Learning, can be used to estimate Q non-parametrically, the resulting plug-in estimator has no theory supporting its asymptotic linearity, and will generally be overly biased for the target parameter β [21].

5.2 Inverse probability weighted estimators

The IPW estimator described in van der Laan and Petersen [2] and Robins et al. [26] is commonly used to estimate the parameters of a dynamic MSM. In brief, this estimator is implemented by creating one data line for each subject $i$, for each $t$, and for each $d$ for which $\bar{A}(t-1)=d(\bar{L}(t-1))$. Each data line consists of $Y_i(t)$, any functions of $(d,t,V_i)$ included as covariates in the MSM, and a weight
$$h(d,t,V_i)\,\frac{I(\bar{A}_i(t-1)=d(\bar{L}_i(t-1)))}{\prod_{j=0}^{t-1}g_n(A_i(j)\,|\,Pa(A_i(j)))}.$$
A weighted logistic regression is then fit, pooling over time and rules $d$.

The parameter mapping presented here for the TMLE suggests an alternative IPW estimator for dynamic MSMs – namely, implement an IPW estimator for $E(Y_d)$ (possibly within strata of $V$ if $V$ is discrete) for each $d\in\mathcal{D}$ and $t$, and project these estimates onto $m_\beta$. The IPW estimator employed could be either the standard Horvitz–Thompson estimator:
$$\frac{1}{n}\sum_{i=1}^{n}\frac{I(\bar{A}_i(t-1)=d(\bar{L}_i(t-1)))}{\prod_{j=0}^{t-1}g_n(A_i(j)\,|\,Pa(A_i(j)))}\,Y_i,$$
or its bounded counterpart (Robins and Rotnitzky [14]):
$$\sum_{i=1}^{n}\frac{I(\bar{A}_i(t-1)=d(\bar{L}_i(t-1)))}{\prod_{j=0}^{t-1}g_n(A_i(j)\,|\,Pa(A_i(j)))}\,Y_i\Bigg/\sum_{i=1}^{n}\frac{I(\bar{A}_i(t-1)=d(\bar{L}_i(t-1)))}{\prod_{j=0}^{t-1}g_n(A_i(j)\,|\,Pa(A_i(j)))}.$$
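The two estimators above differ only in their normalization. A minimal sketch, with a simplified single-time-point example and hypothetical inputs (not the paper's data):

```python
import numpy as np

def ipw_estimates(follows_d, cum_g, y):
    """Horvitz-Thompson and bounded (Hajek) IPW estimates of E[Y_d].
    follows_d: indicator that the observed treatment history matches rule d;
    cum_g: cumulative product of estimated treatment probabilities;
    y: observed outcomes."""
    w = follows_d / cum_g
    ht = float(np.mean(w * y))                 # Horvitz-Thompson: divide by n
    bounded = float(np.sum(w * y) / np.sum(w)) # bounded: divide by sum of weights
    return ht, bounded

# Single time point, rule "always treat", true g known and constant:
rng = np.random.default_rng(1)
n = 1000
a = rng.binomial(1, 0.6, n)
y = rng.binomial(1, 0.3 + 0.2 * a)
follows = (a == 1).astype(float)
cum_g = np.full(n, 0.6)
ht, bounded = ipw_estimates(follows, cum_g, y)
```

The bounded version is a weighted mean of outcomes and therefore always respects the $[0,1]$ range of $Y$, which is what makes it preferable under data sparsity.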

The consistency of both IPW estimators relies on having a consistent estimator $g_n$ of $g_0$; further, even if $g_n$ is estimated consistently, neither will be asymptotically efficient. Both also suffer from the general sensitivity of IPW estimators to strong confounding (data sparsity or near positivity violations). Standard IPW estimators for dynamic MSMs are typically more susceptible to instability in such settings than their counterparts for static MSMs, due to the limited ability to stabilize weights (with stabilizing function restricted to $h(d,t,V)$ versus $h(\bar{A}_i(t-1),t,V)$).

5.3 Stratified targeted maximum likelihood estimator

Similar to the pooled TMLE, the stratified TMLE [32] also relies on reformulating the statistical target parameter in terms of iteratively defined conditional means and updating initial fits of these conditional means using covariates that are functions of an estimator $g_n$ of the intervention mechanism. The stratified and pooled TMLEs differ, however, in several respects. In particular, in the pooled TMLE the update step is accomplished by fitting a single multivariate $\epsilon$ for each time point $t$ and non-intervention node $k$, pooling across all rules of interest $d\in\mathcal{D}$. In contrast, the stratified TMLE fits a separate $\epsilon$ for each time point $t$, non-intervention node $k$, and rule of interest $d\in\mathcal{D}$. Specifically, the stratified TMLE consists of implementing the longitudinal TMLE of van der Laan and Gruber [19] for $E(Y_d(t))$ separately for each time point $t$ and each rule of interest $d\in\mathcal{D}$, and then combining these estimates into a fit of $m_\beta$.

Let $\bar{Q}_n$ denote the initial estimator of the iteratively defined conditional means that forms the basis of the pooled and the stratified TMLEs. Let $\bar{Q}^{*}_n$ denote the targeted update of $\bar{Q}_n$ for the two estimators (noting that the update is accomplished differently for the pooled and stratified estimators). As long as the corresponding update $Q^{*}_n$ solves the efficient influence curve equation $P_nD^{*}(Q^{*}_n,g_n)=0$, both the stratified and the pooled estimators will share the desirable asymptotic properties of a TMLE. Both estimators will be consistent if either $\bar{Q}_n$ or $g_n$ is a consistent estimator of $\bar{Q}_0$ or $g_0$, respectively. Further, both the pooled and the stratified estimators will be asymptotically efficient if both the initial estimator $\bar{Q}_n$ and the estimator $g_n$ are consistent.

The pooled and stratified estimators may nonetheless differ in both their asymptotic and finite sample performance. The stratified TMLE uses a more saturated model when updating $\bar{Q}_n$ than does the pooled TMLE. Thus if the initial estimator $\bar{Q}_n$ is misspecified, the update of this initial estimator will be more extensive (the update will be further from the initial misspecified estimator) for the stratified, as compared to the pooled, TMLE, resulting asymptotically in a $Q^{*}_n$ that is closer to the true $Q_0$ and thus improving efficiency (recall that the efficiency bound is achieved at $(Q_0,g_0)$). The extent to which this asymptotic property translates into meaningful finite sample gains in settings where $\bar{Q}_n$ is misspecified remains to be investigated.

On the other hand, in some cases it is no longer clear how to implement the stratified TMLE. For example, the target parameter may be defined using a MSM $m_\beta(d,t,V)$, conditional on a subset of baseline covariates $V$. If $V$ is discrete with adequate finite sample support at each value, the stratified TMLE can be applied by estimating $E(Y_d(t)\,|\,V=v)$ within each stratum $v$. When $V$ is continuous, however, or has levels which are not represented in a given finite sample, such an approach will break down for many choices of weight function $h(d,t,V)$. Similarly, whenever there are some rules $d\in\mathcal{D}$ with no support in a given finite sample, no data will be available to fit $\epsilon$ for some rules of interest, and the key update step will no longer be possible using the stratified TMLE. Note that such lack of support in finite samples can occur even when the assumption of positivity in the observed data distribution (3) is satisfied.

In cases where $V$ is discrete and there is support for some but not all rules $d\in\mathcal{D}$ within each stratum of $V$, one option is to define a stratified quasi-TMLE using the initial fit $\bar{Q}^{d,t}_n$ for those rules, time points, and strata of $V$ where no data are available to fit $\epsilon$. The estimator, implemented in the simulations below, remains defined even when not all rules of interest are supported within each stratum of $V$ in a given sample. However, in such cases, the initial estimator $\bar{Q}_n$ is only partially updated, and thus $Q^{*}_n$ may no longer solve the efficient influence curve equation $P_nD^{*}(Q^{*}_n,g_n)=0$, even if $g_n$ is a consistent estimator of $g_0$. If the initial estimator $\bar{Q}_n$ is poor (for example, if it is based on a misspecified parametric model), the resulting estimator of $\beta$ will be biased. In contrast, the pooled TMLE retains the ability to fit $\epsilon$, and thus update $\bar{Q}_n$, by pooling over rules $d$. In other words, both estimators rely on the theoretical positivity assumption on $P_0$ (3) for identifiability; however, they may respond differently to practical positivity violations in finite samples.

5.3.1 Numerical illustration

We use a simple simulation to illustrate a setting in which the positivity assumption holds, but many rules of interest have no support in a given finite sample. In this setting, the stratified estimator will be biased if the initial estimator Qˉn is misspecified. In contrast, the pooled TMLE remains asymptotically linear if gn is a consistent estimator of g0. Simulation studies comparing the performance of the pooled and stratified TMLEs as well as IPW estimators under more realistic scenarios are provided in the following section.

We implemented a simulation with observed data consisting of $n$ i.i.d. copies of $O=(L(0),A=(A(0),\dots,A(6)),Y)$, where $L(0)$ is a baseline covariate, $A(t)$ is a binary treatment assigned randomly at seven time points ($t=0,\dots,6$), and $Y$ is a binary outcome. The data for a given individual $i$ were generated by drawing sequentially from the following distributions: $L(0)\sim N(0,1)$; $A(t)\sim\mathrm{Bern}(p=0.6)$ for $t=0,\dots,6$; $Y\,|\,L(0),A\sim\mathrm{Bern}\big(p=\mathrm{expit}\big(L(0)-4-\tfrac{1}{7}\sum_{t=0}^{6}A(t)/2\big)\big)$. The target parameter was defined as the projection of $(E_0(Y_d):d\in\mathcal{D})$ onto marginal structural working model $m_\beta(d)=\beta_0+\beta_1\sum_{t=0}^{6}d(t)$ according to eq. (5), with weight function $h(d)=1$, $\mathcal{D}$ equal to the $2^7=128$ possible values of $A$, and $d(t)$ denoting the treatment level $a(t)$ assigned by a given rule $d$ at time $t$.
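A minimal sketch of this data-generating process, assuming the reading of the (garbled) outcome model as $\mathrm{expit}(L(0)-4-\frac{1}{7}\sum_t A(t)/2)$; the point of interest is that with $n=128$ draws and $2^7=128$ possible treatment histories, many rules receive no support:

```python
import numpy as np

def expit(x):
    return 1.0 / (1.0 + np.exp(-x))

def simulate(n, rng):
    """Draw n i.i.d. copies of O = (L(0), A(0..6), Y)."""
    L0 = rng.normal(0.0, 1.0, n)
    A = rng.binomial(1, 0.6, size=(n, 7))            # A(t) ~ Bern(0.6)
    p = expit(L0 - 4.0 - (1.0 / 7.0) * A.sum(axis=1) / 2.0)
    Y = rng.binomial(1, p)                           # rare outcome (intercept -4)
    return L0, A, Y

rng = np.random.default_rng(2)
L0, A, Y = simulate(128, rng)
# Count the distinct observed treatment histories: far fewer than the
# 128 rules of interest, so most rules have no support in this sample.
n_observed_rules = len({tuple(row) for row in A})
```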

Stratified and pooled TMLEs for $\beta$ were implemented using estimators $g_n$ and $\bar{Q}_n$ based on intercept only logistic regressions; thus $\bar{Q}_n$ was an inconsistent estimator of $\bar{Q}_0$. Table 1 shows estimated bias, bias to standard error ratio, variance, mean squared error, and 95% confidence interval coverage (using the variance estimator described in Section 3.7) based on 500 samples of size $n=128$. Note that at this sample size many of the 128 rules of interest have no support in the data in a given sample, while the remainder have few observations available to fit $\epsilon$ in the stratified TMLE. As predicted by its double robust property, the pooled TMLE remains without meaningful bias despite use of a poor initial estimator $\bar{Q}_n$. In contrast, the stratified TMLE has bias of approximately double the magnitude of its standard error, posing a substantial threat to valid inference. The stratified TMLE also exhibits markedly lower variance than its pooled counterpart, explained by the fact that for those rules $d$ without support in the data, the stratified estimator uses an intercept only model to estimate $\bar{Q}^{d}$. Although unbiased, the pooled TMLE provides anti-conservative confidence interval coverage; we return to this point below.

Table 1

Breakdown of stratified TMLE when some rules dD have no support

This breakdown of the stratified TMLE in settings with no support for some rules of interest will not occur if the function $h$ is chosen to give a weight of 0 to any rule (or more generally, any $(d,V,t)$ combination) without support in the sample. For example, in the illustration above we could have defined $h(d)=P_0(A=d)$ and estimated it using the empirical distribution. Unless one is willing to assume that the MSM $m_\beta$ is correctly specified, however, the choice of $h$ changes the target parameter being estimated [27]. Further, even with this choice of weight function, the estimators may still exhibit different finite sample performance in settings with marginal data support. We investigate this possibility further in the following section.

6 Simulation study

6.1 Overview

In this section, we investigate the relative performance of the pooled TMLE, stratified TMLE, and IPW estimators for the parameters of a marginal structural working model. For each candidate estimator, we report bias, variance, MSE, and 95% confidence interval coverage estimates based on influence curve variance estimators. We note that our influence curve-based estimators assume the weight function h(d,t) is known; if the weight function is estimated, the influence curve should be corrected for this additional estimated component. We investigate two basic data generating processes. Simulation 1 investigates a simple process, in which the effect of the longitudinal treatment (time to switch) is confounded by baseline variables only, the outcome is observed at a single time point, and there is no censoring. Simulation 2 introduces more realistic complexity, designed to resemble the data analysis presented in the following section.

6.2 Simulation 1: baseline confounding only

6.2.1 Data generating process

We implemented a simulation with observed data consisting of $n$ i.i.d. copies of $O=(L(0),A=(A(0),\dots,A(K)),Y)$, where $L(0)$ is a baseline covariate, $A(t)$ is a binary treatment assigned at time point $t=0,\dots,K$, and $Y$ is a binary outcome. The data for a given individual were generated by drawing sequentially from the following distributions: $L(0)\sim N(0,1)$; $A(t)\,|\,L(0)\sim\mathrm{Bern}\big(p=\max(\min(L(0)+0.5,\,0.62),\,0.38)\big)$ for $t=0,\dots,K$; $Y\,|\,L(0),A\sim\mathrm{Bern}\big(p=\mathrm{expit}\big(L(0)-\tfrac{1}{K+1}\sum_{t=0}^{K}A(t)\big)\big)$. In order to investigate the impact of decreasing support in the data (practical violations or near violations of the positivity assumption), we considered two versions of this data generating process, with $K=2$ (Simulation 1a, lower bound on $g_0$ of 0.05) and $K=6$ (Simulation 1b, lower bound on $g_0$ of 0.001).
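The quoted lower bounds on $g_0$ follow directly from the truncation of the treatment probability to $[0.38, 0.62]$: the cumulative probability of any fixed treatment history is bounded below by $0.38^{K+1}$, which is approximately 0.05 for $K=2$ and 0.001 for $K=6$. A quick arithmetic check:

```python
import numpy as np

def g_treat(L0):
    """Per-time-point treatment probability: L(0)+0.5 truncated to [0.38, 0.62],
    equivalent to max(min(L(0)+0.5, 0.62), 0.38)."""
    return np.clip(L0 + 0.5, 0.38, 0.62)

# Lower bound on the cumulative intervention probability g_0 over K+1 points:
lower_1a = 0.38 ** 3   # K = 2, approximately 0.05
lower_1b = 0.38 ** 7   # K = 6, approximately 0.001
```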

6.2.2 Target parameter

The target parameter was defined as the projection of $(E_0(Y_d):d\in\mathcal{D})$ onto marginal structural working model $m_\beta(d)=\beta_0+\beta_1\sum_{t=0}^{K}d(t)$ according to eq. (5), with $\mathcal{D}$ equal to the $2^{K+1}$ possible values of $A$, $d(t)$ equal to the treatment level assigned by a given rule $d$ at time $t$, and weight function $h(d)=P_0(A=d)$. In the case that some rules of interest had no support in a given sample, this choice of weight function (when estimated as the empirical proportion of subjects that followed rule $d$) ensured that the IPW estimator remained defined and that the updated fit $Q^{*}_n$ used by the stratified TMLE solved the efficient influence curve equation when $g_n$ was a consistent estimator of $g_0$.
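Estimating $h(d)=P_0(A=d)$ empirically simply means taking the proportion of subjects whose observed treatment history equals the fixed sequence assigned by rule $d$, so unsupported rules automatically receive weight 0. A small sketch with hypothetical data:

```python
import numpy as np

def empirical_h(A_obs, d):
    """Empirical weight h(d) = P0(A = d): the proportion of subjects whose
    observed treatment history equals the sequence assigned by rule d."""
    return float(np.mean(np.all(A_obs == np.asarray(d), axis=1)))

# Four hypothetical subjects, K = 2 (three time points):
A_obs = np.array([[1, 1, 0],
                  [1, 1, 0],
                  [0, 1, 0],
                  [1, 0, 1]])
h_110 = empirical_h(A_obs, [1, 1, 0])   # followed by 2 of 4 subjects
h_000 = empirical_h(A_obs, [0, 0, 0])   # unsupported rule: weight 0
```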

6.2.4 Estimators

This is a static point treatment problem, and thus a number of additional estimators are available. However, we use this as a special case of longitudinal dynamic MSMs and investigate the relative performance of three estimators described in Section 5: the pooled TMLE, the stratified TMLE, and the standard IPW estimator. All estimators were implemented using two estimators of g0: an estimator based on a correctly specified model for the conditional distribution of A given L(0) and an estimator using an intercept only model. The estimators gn were bounded from below at 0.001. TMLEs were implemented using two estimators of Qˉ0: an estimator based on main terms logistic regression models using the correct set of parents for a given node as independent variables (in a slight abuse, we refer to these as “correctly specified”) and an estimator using intercept only models. Performance was evaluated across 500 samples of size n=500; 95% confidence interval coverage was based on the variance estimator described in Section 3.7.

6.2.5 Results

Results for Simulation 1a are shown in Table 2. When both $\bar{Q}_n$ and $g_n$ were based on correctly specified models, all three estimators were unbiased, had similar variance, and achieved close to nominal coverage. Table 2 also demonstrates double robustness: when either $\bar{Q}_n$ or $g_n$ was based on a misspecified model, both TMLEs remained without meaningful bias. In this simulation, both TMLEs continued to achieve close to nominal coverage even when $g_n$ was an inconsistent estimator of $g_0$. In contrast and as expected, the IPW estimator was substantially biased with poor coverage when $g_n$ was based on an intercept only model.

Results for Simulation 1b, with both Qˉn and gn based on correctly specified models, are shown in Table 3. In this simulation, in which the lower bound for g0 is 0.001, the IPW estimator was minimally biased and retained good 95% confidence interval coverage. Interestingly, the performance of the stratified TMLE suffered in this setting, with bias approximately equal to the standard error, and 95% confidence interval coverage of 83% and 80% for β0 and β1, respectively. The pooled TMLE remained unbiased and retained good confidence interval coverage.

Table 2

Simulation 1a

Table 3

Simulation 1b

6.3 Simulation 2: resembling data analysis

In this simulation, we used a data generating process designed to resemble the data analysis presented in the following section, in which the goal was to investigate the effect of delayed switch to a new antiretroviral regimen on mortality among HIV-infected patients who have failed first line therapy. The data generating process thus contains both baseline and time-dependent confounders of a longitudinal binary treatment (time to switch), a repeated measures binary outcome (survival over time), and informative right censoring due to two causes (database closure and loss to follow up).

6.3.1 Data generating process

We implemented a simulation with observed data consisting of $n$ i.i.d. copies of $O=(L(0),A(0),L(1),A(1),\dots,L(K),A(K),Y(K+1))$, for $K=9$. Here, $L(0)=(W,CD4(0))$ and $L(t)=(Y(t),CD4(t))$, where $W$ was a vector of non-time-varying baseline covariates ($W=(W_1,\dots,W_4)$, representing baseline age, sex, and disease stage), $CD4(t)$ was a time-varying covariate representing the most recent measured CD4 count at time $t$ (square root transformed), and $Y(t)$ was an indicator of death by time $t$. The intervention nodes for a given time point $t$ were $A(t)=(C_1(t),C_2(t),A_1(t))$, where $C_1(t)$ was an indicator of database closure by time $t$, $C_2(t)$ was an indicator of loss to follow up by time $t$, and $A_1(t)$ was an indicator of having switched to second line therapy by time $t$. In brief, the data for a given individual were generated by first drawing baseline characteristics $W$, then for each time point $t$, for as long as the subject remained alive and uncensored,

  • 1.

Drawing a time-updated CD4 count $CD4(t)$ given $W$, prior CD4 counts, and regimen status at the prior time point ($A_1(t-1)$).

  • 2.

    Determining censoring due to database closure C1(t) using a Bernoulli trial with probability dependent on W.

  • 3.

If still uncensored, determining censoring due to loss to follow up $C_2(t)$ using a Bernoulli trial with probability dependent on $W$, prior CD4 count, and regimen status at the prior time point ($A_1(t-1)$).

  • 4.

    If still uncensored and not yet switched, determining switching using a Bernoulli trial with probability dependent on W and prior CD4 count.

  • 5.

Determining death using a Bernoulli trial with probability dependent on $W$, prior CD4 counts, and regimen status $A_1(t)$.

The full data generating process is provided in Appendix D. Coefficients in the data generating process were chosen to approximate the degree of censoring, treatment, and death in the analysis data set, as detailed in Appendix D, Tables 7 and 9. The data generating process results in a true intervention mechanism g0 not bounded away from 0.
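The sequential draws above can be sketched for a single subject as follows. All coefficients here are hypothetical placeholders chosen only to make the sketch run; the actual values are those given in Appendix D:

```python
import numpy as np

def expit(x):
    return 1.0 / (1.0 + np.exp(-x))

def simulate_subject(rng, K=9):
    """One subject's trajectory following steps 1-5. Coefficients are
    hypothetical, not the values from Appendix D."""
    W = rng.normal(size=4)                    # baseline covariates
    cd4 = 10.0 + rng.normal()                 # sqrt-transformed CD4
    switched, history = 0, []
    for t in range(K + 1):
        cd4 = cd4 + 0.5 * switched + 0.2 * rng.normal()        # step 1: update CD4
        if rng.random() < expit(-4.0 + 0.1 * W[0]):            # step 2: C1(t)
            history.append((t, "closed")); break
        if rng.random() < expit(-4.0 - 0.05 * cd4):            # step 3: C2(t)
            history.append((t, "ltfu")); break
        if not switched and rng.random() < expit(-2.0 - 0.1 * cd4):  # step 4: switch
            switched = 1
        if rng.random() < expit(-5.0 - 0.1 * cd4 - 0.5 * switched):  # step 5: death
            history.append((t, "death")); break
        history.append((t, "alive"))
    return history

rng = np.random.default_rng(3)
hist = simulate_subject(rng)
```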

6.3.2 Target parameter

The target parameter was defined as the projection of the counterfactual survival curve for each switch time, $(E(Y_d(t)):d\in\mathcal{D},\,t=1,\dots,10)$, onto marginal structural working model
$$\mathrm{logit}\,m_\beta(d,t)=\beta_0+\beta_1 t+\beta_2\big(d(t-1)(t-s_d)\big)$$
according to eq. (5), where we use $d(t)$ to denote the value $a(t)$ assigned by rule $d$, $s_d$ to denote the switch time assigned by rule $d$, and with $\mathcal{D}$ consisting of each possible switch time $\{0,\dots,10\}$ combined with an intervention to prevent censoring. We used the following weight function:
$$h(d,t)=\frac{P_0(\bar{A}(t-1)=\bar{a}(t-1))}{n_t}\,I(t<t^{*}),$$
where $n_t$ was the number of unique values $\bar{a}(t-1)$ compatible with $d\in\mathcal{D}$, $t^{*}$ is the first time point at which all subjects have either died or been censored, and both $P_0(\bar{A}(t-1)=\bar{a}(t-1))$ and $t^{*}$ were estimated with their empirical counterparts. This weight function gave 0 weight to rules without support in a given sample. It further avoided up-weighting specific values $\bar{a}(t-1)$ proportional to the number of rules (assigned switch times) with which they were compatible.
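The weight function can be sketched directly from its definition. The inputs below are hypothetical illustrative numbers, not values from the analysis data:

```python
import numpy as np

def weight_h(n_follow, n_total, n_t, t, t_star):
    """Empirical h(d,t): the empirical probability of the treatment history
    a(0..t-1) compatible with rule d (n_follow of n_total subjects), divided
    by the number n_t of distinct histories compatible with rules in D at
    time t, and zeroed for t >= t_star (first time point at which all
    subjects have died or been censored)."""
    if t >= t_star:
        return 0.0
    return (n_follow / n_total) / n_t

# e.g. 40 of 200 subjects share the compatible history up to t-1,
# with n_t = 2 compatible histories and t_star = 8:
w = weight_h(40, 200, n_t=2, t=3, t_star=8)
```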

6.3.3 Estimators

We implemented the pooled TMLE, the stratified TMLE, the IPW estimator based on estimating E(Yd(t):dD) using the bounded Horvitz–Thompson estimator and projecting the resulting estimates onto the model mβ (referred to as “Stratified IPW”) and the standard IPW estimator for dynamic MSM (referred to as a “Standard IPW” estimator).

The intervention rules dD could be considered static, in that they assign a fixed vector of treatment decisions aˉ irrespective of a subject’s covariate values. However, when the target parameter is defined using a MSM on survival, a static IPW estimator cannot be implemented in the standard way (fitting a weighted pooled regression of Y(t) on observed treatment history Aˉ(t1)) because the full Aˉ(t1) is not observed for subjects who die before time t. As noted by Picciotto et al. [42], one option, adopted by us here, is to instead define the interventions of interest as dynamic (switch at time sd if still alive).

All estimators were implemented using an estimator gn based on a correctly specified parametric model for g0, but bounding the resulting estimates from below at 0.01 in order to ensure that the denominator in the covariate used in the updating step remained bounded away from 0. The TMLEs were implemented using estimators of Qˉ0 based on main terms logistic regression models using the correct set of parents for a given node as independent variables (not equivalent to use of a correctly specified parametric model to estimate Qˉ0). The performance of each estimator was evaluated across 500 samples of size n=2,627, corresponding to the sample size in the data analysis. 95% confidence interval coverage was based on the variance estimator described in Section 3.7; calculation of non-parametric bootstrap-based coverage was computationally prohibitive.

6.3.4 Results

Results for Simulation 2 are shown in Table 4. In this simulation, both the pooled and the stratified TMLEs were essentially unbiased for all coefficients, and the two TMLEs had comparable MSEs. Both TMLEs exhibited less than nominal 95% confidence interval coverage when using influence curve-based variance estimators. The anti-conservative performance of the influence curve-based variance estimator is likely due to the presence of practical positivity violations and relatively rare outcomes; the fact that the weight function was treated as known may also make a small contribution. Further work is needed to develop improved diagnostics and variance estimators in these settings.

Table 4

Simulation 2: resembling data analysis

In contrast, both IPW estimators were substantially biased for $\beta_2$, which reflected the treatment effect, despite use of an estimator $g_n$ based on a correctly specified parametric model. Both IPW estimators also showed higher MSE for $\beta_2$ and achieved 95% confidence interval coverage for $\beta_2$ substantially below that of the TMLEs. This finding is consistent with the known susceptibility of IPW estimators to positivity violations and data sparsity, exacerbated by the limited ability, when using a dynamic regime formulation, to choose an effectively stabilizing weight function. Across simulations, the median minimum value of $g_n$ used by IPW prior to bounding at 0.01 was 0.000297; 1.55% of the values of $g_n$ used by IPW were less than 0.01. Tables 8 and 10 provide further details on data support and number of events.

7 Data analysis

We analyzed data from the International Epidemiological Databases – Southern Africa in order to investigate the effect of switching to second line therapy on mortality among HIV-infected patients with immunological failure on first line antiretroviral therapy. The data set and clinical question are described in detail in Gsponer et al. [1]. In brief, data were drawn from clinical care facilities in Zambia and Malawi, in which HIV-infected patients were followed longitudinally in clinic and data were collected on baseline demographic and clinical variables (sex, age, and baseline disease stage), time-varying CD4 count, and time-varying treatment, summarized here as switch to second line therapy. Death was independently reported. The 2,627 subjects meeting WHO immunological failure criteria were included in the current analysis beginning at time of immunologic failure. Following common practice and prior analysis, time was discretized into 3-month intervals; time updated CD4 count was coded such that CD4 count for an interval preceded switching decisions in that interval. Data on a subject were censored at time of database closure or after four consecutive intervals without clinical contact.

The data structure and target parameter were identical to those described in Simulation 2, with $W=(W_1,W_2,W_3,W_4)$, $W_1$ equal to sex, $W_2$ and $W_3$ representing two levels of a three-level categorical age variable (<30, 30–39, and >39), and $W_4$ equal to disease stage. The analysis was implemented under the causal model assumed for Simulation 2, in particular assuming that monitoring times did not affect the outcome other than via effects on switching. We acknowledge that this assumption may not hold; however, relaxing it introduces a number of additional complications, as described in the Appendix. We implemented the estimators described in Simulation 2: pooled and stratified TMLEs and standard and stratified IPW estimators. Estimators of $g_0$ and $\bar{Q}_0$ were based on main term logistic regression models, analogous to Simulation 2. Given the results in Simulation 2 suggesting the poor performance of the influence curve-based variance estimator, we also estimated the variance using a non-parametric bootstrap.

Results are given in Table 5. The IPW estimates for the effect of switching on mortality (β2) are close to zero and non-significant. Both TMLE point estimates suggest a 0.88 relative odds of death per 3-month earlier switch, and all except for the stratified TMLE combined with bootstrap-based variance estimation were significant at the α=0.05 level. Such a protective effect of switching is consistent with clinical knowledge. Interestingly, these results appear consistent with those of Simulation 2, which suggested that the IPW estimator was substantially positively biased, underestimating the harm of delayed switch, while both TMLEs performed well in terms of bias. In summary, our results in both the simulation and data analysis are consistent with the TMLEs controlling for measured confounders more completely than the corresponding IPW estimators.

The poor coverage observed in Simulation 2, despite absence of bias for the TMLEs, suggests that the influence curve-based variance estimators may be systematically underestimating the true variance in this analysis. While the non-parametric bootstrap offers an alternative approach, it is not expected to resolve the challenge of anti-conservative variance estimation in the setting of practical positivity violations. Intuitively, rare treatment/covariate combinations, despite being theoretically possible, may simply not occur in a given finite sample and as a result, the corresponding extreme weights implied by these combinations will not occur. Because the non-parametric bootstrap resamples from the same finite sample, it fails to address the underlying problem. Indeed, the bootstrap-based confidence intervals in the data analysis were slightly smaller than confidence intervals based on the influence curve. Thus in this realistic setting of rare outcomes and moderately strong confounding, our results caution against reliance on either approach to variance estimation, for either IPW or TML estimators, and suggest that additional work developing robust variance estimators in this setting is urgently needed.

Table 5

Data analysis

In addition to the issues raised above, limitations of the analysis include the potential for unmeasured confounding by factors such as unmeasured health status and adherence, as well as bias due to incomplete health reporting, resulting in censoring due to loss to follow up that directly depends on death [43, 44]. These results also do not contradict the previous published results of Gsponer et al. [1], which found a protective effect of switching using a static IPW estimator for a hazard MSM; such an IPW estimator might perform substantially better than the dynamic IPW estimator for a survival MSM implemented here.

8 Discussion

In summary, we have presented a pooled TMLE for the parameters of static and dynamic longitudinal MSMs that builds on prior work by Robins [13, 29] and Bang and Robins [22]. We evaluated the performance of this estimator using simulated data and applied it, together with alternatives, in a data analysis. Both theory and simulations suggest settings in which the pooled TMLE offers advantages over alternative estimators. Software implementing this estimator, together with competitors, is included in supplementary files and is publicly available as part of R library ltmle (http://cran.r-project.org/web/packages/ltmle/).

The pooled TMLE presented in this paper, together with corresponding open source software, provides a new tool for estimation of the parameters of static or dynamic MSMs. It has clear theoretical advantages over available alternatives. Unlike IPW and augmented-IPW estimators, it is a substitution estimator. Unlike IPW estimators, it is double robust and, given consistent initial estimators of $g$ and $Q$, asymptotically efficient. Unlike the previously proposed stratified TMLE, it does not require support in the data for every rule of interest, and it remains defined in the case that the target parameter is defined using a marginal structural working model conditional on a continuous baseline covariate.

In settings where some subset of discrete intervention rules has adequate support in a given sample, an alternative approach is to compare only this subset of rules [2]. However, in many settings smoothing over a large number of poorly supported rules is appropriate. For example, for a set of rules indexed by a continuous or multiple level ordered variable, smoothing over this variable provides a way to define a causal effect of interest despite inadequate support to estimate the counterfactual outcome under any of these rules individually.

The TMLE presented in this paper was developed for a causal model in which the non-intervention variables may be a function of the entire observed past ($Pa(L(k))=(\bar{A}(k-1),\bar{L}(k-1))$) and the intervention variables may be a function of some subset of the observed past ($Pa(A(k))\subseteq(\bar{A}(k-1),\bar{L}(k))$) – in other words, for a model in which exclusion restrictions were assumed, if at all, only for the intervention variables. In some cases, a causal model that also restricts the parent set of the non-intervention variables to a subset of the observed past may be appropriate. This smaller model is included in the larger model assumed in the current paper. As a result, while the TMLE developed for the larger model will still be valid, it will no longer be efficient.

Our simulations suggest that both stratified and pooled TMLEs may outperform both the IPW estimator typically used for dynamic MSMs, as well as an alternative “stratified” IPW estimator, in some settings with sparse data/near positivity violations. However, further work is needed to confirm this preliminary observation. Although the theory in this paper was developed for the general case of dynamic MSMs, including models of the time-specific hazard or survival functions, we focused our software implementation, examples, and simulations on MSMs for survival. The practical performance of the pooled TMLE relative to alternative estimators also remains to be investigated for the case of static and dynamic MSMs on the hazard. It also remains to implement and evaluate the relative performance of the alternative pooled TMLE described in Appendix C, in which the updating step pools not only over all rules of interest but also over all time points. Finally, the relative performance of the TMLE compared to double robust efficient estimating equation-based estimators for longitudinal MSM parameters, including those of Robins [13, 29] and Bang and Robins [22], remains to be evaluated.

Importantly, our simulations and data analysis illustrate the need for improved variance estimators for both TMLE and IPW in settings with moderately strong confounding, multiple time points, and relatively rare outcomes. Improved approaches to variance estimation and valid inference, as well as appropriate diagnostics to warn applied practitioners of settings in which coverage is likely to be poor, are crucial research priorities.

Acknowledgments

This work was supported by NIH Grant #U01AI069924 (NIAID, NICHD, and NCI) (PIs: Egger and Davies), the Doris Duke Charitable Foundation Grant #2011042 (PI: Petersen), and NIH Grant #R01AI074345-06 (NIAID) (PI: van der Laan).

Appendix A

Notation

Table 6

Partial list of notation

Appendix B

The efficient influence curve of the statistical target parameter

We have the following theorem for a general parameter $\Psi(P) = f(\bar{Q}_{L(1)} = (\bar{Q}_{L(1)}^{d,t} : d, t), Q_{L(0)})$.

Theorem 1. Consider a parameter $\Psi : \mathcal{M} \to \mathbb{R}^J$ that can be represented as $\Psi(P) = f(\bar{Q}_{L(1)}, Q_{L(0)})$, where $\bar{Q}_{L(1)} = (\bar{Q}_{L(1)}^{d,t} = E(Y_d(t) \mid L(0)) : d, t)$. Let $Q = (\bar{Q}_{L(1)}, Q_{L(0)})$. Assume that $f(\cdot)$ is such that $\Psi$ is pathwise differentiable. Let
$$f'_{d,t}(Q)(w) = \frac{d}{d\bar{Q}_{L(1)}^{d,t}(w)} f(\bar{Q}_{L(1)}, Q_{L(0)})$$
be the partial derivative of $f$ with respect to $\bar{Q}_{L(1)}^{d,t}(w) = E(Y_d(t) \mid L(0) = w)$ at $Q$. Let $D_{L(0)}(Q)$ be the influence curve of $f(\bar{Q}_{L(1)}, Q_{L(0),n})$ as an estimator of $f(\bar{Q}_{L(1)}, Q_{L(0)})$, where $Q_{L(0),n}$ is the empirical distribution of $L_i(0)$, $i = 1, \dots, n$.

Then the efficient influence curve of $\Psi$ at $P$ can be represented as
$$D^*(P) = D_{L(0)}(Q) + \sum_{d,t} f'_{d,t}(L(0)) \sum_{k=1}^{t} \frac{I(\bar{A}(k) = \bar{d}_k(\bar{L}(k)))}{g_{0:k}} \left( \bar{Q}_{L(k+1)}^{d,t} - \bar{Q}_{L(k)}^{d,t} \right).$$
Proof: The efficient influence curve of the parameter $E(Y_d(t) \mid L(0) = w)$ (assuming a discrete random variable $L(0)$) is given by
$$D_{d,t,w} = \frac{I(L(0) = w)}{Q_{L(0)}(w)} \sum_{k=1}^{t} \frac{I(\bar{A}(k) = \bar{d}_k(\bar{L}(k)))}{g_{0:k}} \left( \bar{Q}_{L(k+1)}^{d,t} - \bar{Q}_{L(k)}^{d,t} \right)$$
(appendix A3, van der Laan and Rose [20]). By the delta method, the efficient influence curve of $f(\bar{Q}_{L(1)}, Q_{L(0)})$ is thus given by
$$D^* = D_{L(0)} + \sum_{w,d,t} f'_{d,t}(w) D_{d,t,w} = D_{L(0)} + \sum_{d,t} f'_{d,t}(L(0)) \sum_{k=1}^{t} \frac{I(\bar{A}(k) = \bar{d}_k(\bar{L}(k)))}{g_{0:k}} \left( \bar{Q}_{L(k+1)}^{d,t} - \bar{Q}_{L(k)}^{d,t} \right).$$
This completes the proof. □
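For intuition (this special case is not written out in the paper), consider a single time point, $t = 1$, with $f$ the map taking $\bar{Q}_{L(1)}^{d,1}$ to its mean over $L(0)$, so that $f'_{d,1} = 1$ and $D_{L(0)} = \bar{Q}_{L(1)}^{d,1}(L(0)) - \psi$. Using the convention that the innermost iterated expectation equals the outcome, $\bar{Q}_{L(2)}^{d,1} = Y(1)$, the representation above reduces to the familiar augmented IPW influence curve:

```latex
D^*(P) = \left(\bar{Q}^{d,1}_{L(1)}(L(0)) - \psi\right)
       + \frac{I(\bar{A}(1) = \bar{d}_1(\bar{L}(1)))}{g_{0:1}}
         \left(Y(1) - \bar{Q}^{d,1}_{L(1)}(L(0))\right).
```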

In order to determine the partial derivative of the function $f$ and $D_{L(0)}$, the following is useful. Suppose that, as in our examples, $f(Q) = \arg\max_\beta M(\beta, Q)$ for some function $M$, and suppose that $Q = (\bar{Q}, Q_{L(0)})$. Then $\beta(Q) = f(Q)$ solves the equation $0 = \frac{d}{d\beta} M(\beta, Q) \equiv U(\beta, Q)$. By the implicit function theorem we have that
$$\frac{d}{dQ} \beta(Q) = -\left( \frac{d}{d\beta} U(\beta, Q) \right)^{-1} \frac{d}{dQ} U(\beta, Q).$$
In particular,
$$\frac{d}{d\bar{Q}} \beta(\bar{Q}, Q_{L(0)}) = -\left( \frac{d}{d\beta} U(\beta, Q) \right)^{-1} \frac{d}{d\bar{Q}} U(\beta, Q),$$
and
$$\frac{d}{dQ_{L(0)}} \beta(Q) = -\left( \frac{d}{d\beta} U(\beta, Q) \right)^{-1} \frac{d}{dQ_{L(0)}} U(\beta, \bar{Q}, Q_{L(0)}).$$
We can now apply our general Theorem 1 to the example with $Y(t) \in [0,1]$ and the logistic regression working MSM, in which $\beta(Q)$ solves the equation $U(\beta, \bar{Q}_{L(1)}, Q_{L(0)}) = 0$ with
$$U(\beta, \bar{Q}_{L(1)}, Q_{L(0)}) \equiv E_{Q_{L(0)}} \sum_t \sum_{d \in \mathcal{D}} h_1(d, t, V) \left( \bar{Q}_{L(1)}^{d,t}(L(0)) - m_\beta(d, t, V) \right).$$
The same equation applies for the linear working MSM but with the other definition of $h_1$ mentioned above. Define
$$c(Q) \equiv E_{Q_{L(0)}} \sum_{t,d} h_1(d, t, V) \frac{d}{d\beta} m_\beta(d, t, V).$$
Note that $\frac{d}{d\beta} U(\beta, Q) = -c(Q)$. Thus,
$$D_{L(0)}(Q) = c(Q)^{-1} \sum_{t,d} h_1(d, t, V) \left( \bar{Q}_{L(1)}^{d,t} - m_\beta(d, t, V) \right),$$
and
$$f'_{d,t}(L(0)) = c(Q)^{-1} h_1(d, t, V).$$
This proves the following corollary [13, 22, 29].

Corollary 1. Consider the target parameter $\Psi(Q)$ defined by eq. (5). This target parameter is pathwise differentiable at $P$ with efficient influence curve given by
$$D^*(P) = c(Q)^{-1} \sum_{t,d} h_1(d, t, V) \left( \bar{Q}_{L(1)}^{d,t} - m_\beta(d, t, V) \right) + c(Q)^{-1} \sum_t \sum_{k=1}^{t} \sum_d h_1(d, t, V) \frac{I(\bar{A}(k) = \bar{d}_k(\bar{L}(k)))}{g_{0:k}} \left( \bar{Q}_{L(k+1)}^{d,t} - \bar{Q}_{L(k)}^{d,t} \right).$$
The efficient influence curve is double robust. In other words, we have that $P_0 D^*(Q, g_0) = \Psi(Q) - \psi_0$, so that, in particular, if $\Psi(Q) = \psi_0$, then $P_0 D^*(Q, g_0) = 0$. As a consequence, our TMLE $\Psi(Q_n^*)$ will be a consistent estimator of $\psi_0$ if either $Q_n^*$ is consistent for $Q_0$ or $g_n$ is consistent for $g_0$ [38].
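The double robustness property can be illustrated numerically in the simplest special case of a single time point, where the estimating function reduces to the familiar augmented IPW form. The following sketch is illustrative only (in Python rather than the paper's R, and the toy data generating process and all names are hypothetical): it combines the true treatment mechanism $g$ with a deliberately misspecified outcome regression, and the resulting estimate still centers on the true counterfactual mean.

```python
import math
import random

def expit(x):
    return 1.0 / (1.0 + math.exp(-x))

random.seed(0)
n = 50_000

psi_hat_terms = []
truth_terms = []
for _ in range(n):
    w = random.gauss(0.0, 1.0)
    g = expit(w)                          # true treatment mechanism P(A=1|W)
    a = 1 if random.random() < g else 0
    y = 1 if random.random() < expit(w + a) else 0
    q_mis = 0.5                           # deliberately misspecified outcome regression
    # Augmented IPW term: Qbar(W) + I(A=1)/g(W) * (Y - Qbar(W))
    psi_hat_terms.append(q_mis + (a / g) * (y - q_mis))
    truth_terms.append(expit(w + 1.0))    # E[Y_1 | W] under this known DGP

psi_hat = sum(psi_hat_terms) / n
psi_0 = sum(truth_terms) / n
print(abs(psi_hat - psi_0))               # small despite the misspecified Qbar
```

Because the expectation of each augmented term given $W$ equals $E[Y \mid A=1, W]$ whenever $g$ is correct, the bias introduced by the constant outcome regression cancels, which is exactly the content of $P_0 D^*(Q, g_0) = \Psi(Q) - \psi_0$.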

Appendix C

An alternative pooled TMLE that fits only a single $\epsilon$ to compute the update

The TMLE described in the main text relies on a separate $\epsilon$ for each $k = 1, \dots, t$ and for each $t = 1, \dots, K+1$, resulting in a collection of $\sum_{t=1}^{K+1} t$ estimators of $\epsilon$ that define the TMLE. A nice feature of this TMLE is that it exists in closed form. The following alternative TMLE relies on fitting only a single $\epsilon$, but in this case the updating step needs to be iterated until convergence. First, construct an initial estimator $\bar{Q}_n^0$ of $\bar{Q}_0 = (\bar{Q}_{k,0}^{d,t} : k = 1, \dots, t,\ d \in \mathcal{D},\ t = 1, \dots, K+1)$ as described above. Now, consider the above-presented submodel $\bar{Q}_n^0(\epsilon) = (\bar{Q}_{k,n}^{d,t}(\epsilon, g) : k, d, t)$ through $\bar{Q}_n^0$ at $\epsilon = 0$. Compute
$$\epsilon_n^0 = \arg\min_\epsilon \sum_{t=1}^{K+1} \sum_{d \in \mathcal{D}} \sum_{k=1}^{t} L_{d,t,k,\bar{Q}_{k+1,n}^{d,t,0}}\left( \bar{Q}_{k,n}^{d,t,0}(\epsilon, g_n) \right),$$
where the nuisance parameters of the loss function are estimated with the initial estimator $\bar{Q}_n^0$. Note that $\epsilon_n^0$ can be fit with a pooled logistic regression as stated above. This yields an update $\bar{Q}_n^1 = \bar{Q}_n^0(\epsilon_n^0)$. In general, at the $m$th step, given the estimator $\bar{Q}_n^m$, we compute
$$\epsilon_n^m = \arg\min_\epsilon \sum_{t=1}^{K+1} \sum_{d \in \mathcal{D}} \sum_{k=1}^{t} L_{d,t,k,\bar{Q}_{k+1,n}^{d,t,m}}\left( \bar{Q}_{k,n}^{d,t,m}(\epsilon, g_n) \right),$$
and the resulting update $\bar{Q}_n^{m+1} = \bar{Q}_n^m(\epsilon_n^m, g_n)$. This updating process is iterated until $\epsilon_n^m \approx 0$. The resulting final update is denoted $\bar{Q}_n^*$ and is the TMLE of $\bar{Q}_0$. By construction, this TMLE also solves the efficient influence curve equation $P_n D^*(Q_n^*, g_n) = 0$ with arbitrary precision. The TMLE of $\psi_0$ is then computed with the corresponding plug-in estimator $\Psi(\bar{Q}_n^*, Q_{L(0),n})$, as above. The potential advantage of this alternative TMLE is that it is able to smooth across all time points $t$ and $k$ when computing the update, while the closed-form TMLE presented above only smooths over the rules $d \in \mathcal{D}$.
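To make the "iterate until $\epsilon \approx 0$" scheme concrete, the sketch below applies it in a drastically simplified setting: a point-treatment mean with a single logistic fluctuation parameter. This is illustrative only (the paper's implementation is the R package ltmle; this toy data generating process, the constant initial fit, and the Newton-Raphson fit of $\epsilon$ are all assumptions of the sketch). At convergence the updated fit solves the relevant score equation, mirroring $P_n D^*(Q_n^*, g_n) = 0$.

```python
import math
import random

def expit(x):
    return 1.0 / (1.0 + math.exp(-x))

def logit(p):
    return math.log(p / (1.0 - p))

# Toy point-treatment data; the target is psi_0 = E[Y_1] = E[expit(W + 1)].
random.seed(1)
n = 20_000
W, G, A, Y = [], [], [], []
for _ in range(n):
    w = random.gauss(0.0, 1.0)
    g = expit(0.5 * w)                      # true (known) treatment mechanism
    a = 1 if random.random() < g else 0
    y = 1 if random.random() < expit(w + a) else 0
    W.append(w); G.append(g); A.append(a); Y.append(y)

q = [sum(Y) / n] * n                        # deliberately crude initial Qbar, constant in W

eps = 1.0
while abs(eps) > 1e-8:
    # Fit the single fluctuation parameter eps by Newton-Raphson on the
    # logistic log-likelihood, clever covariate H(W) = 1/g(W), among A = 1.
    eps = 0.0
    for _ in range(50):
        score = info = 0.0
        for i in range(n):
            if A[i] == 1:
                h = 1.0 / G[i]
                p = expit(logit(q[i]) + eps * h)
                score += h * (Y[i] - p)
                info += h * h * p * (1.0 - p)
        if abs(score) < 1e-10:
            break
        eps += score / info
    q = [expit(logit(q[i]) + eps / G[i]) for i in range(n)]   # update everyone

psi_tmle = sum(q) / n                       # plug-in estimator after targeting
psi_0 = sum(expit(w + 1.0) for w in W) / n  # truth under this known DGP
print(psi_tmle, psi_0)
```

With a one-dimensional fluctuation the outer loop converges after a single substantive update, but the same loop structure carries over to the pooled submodel above, where the update must genuinely be iterated.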

Appendix D

Supplementary material, Simulation 2

Overview

Below we describe the true data generating process used in Simulation 2 in greater detail. We also provide a summary table comparing the support and number of events over time in the simulated data and in the real data set it was designed to resemble.

In our presentation of the simulation below, we have altered our notation slightly from that in the paper to match the notation in the accompanying R code. Specifically, $L(t)$ is used to refer to the time-varying CD4 count $CD4(t)$. The observed data generated on a given subject consisted of
$$O = (W, Y(0), L(0), C_1(0), C_2(0), A_1(0), \dots, Y(9), L(9), C_1(9), C_2(9), A_1(9), Y(10)).$$
Further, the data generating process included a non-monotone monitoring process (denoted $M(t)$) designed to mimic when subjects come into clinic, have their CD4 counts measured, and have an opportunity to switch regimens. This adds several additional complexities. First, the observed CD4 count, denoted here as $L(t)$ (and in the main text as $CD4(t)$), is only updated to reflect the true underlying CD4 count process when a patient is seen. Below, $\hat{L}(t)$ is used to denote the true underlying CD4 value. Subsequent CD4 values and death are functions of this true underlying value, while the intervention nodes are functions of the observed values only. Because both switching and monitoring are generated only in response to the observed past, this time-dependent non-monotone monitoring process is a multivariate instrumental variable, warranting its exclusion from the adjustment set. Further, its inclusion would be expected both to harm efficiency and to introduce positivity violations (for example, subjects not seen in clinic at a given time point have zero probability of switching at that time point). The non-monotone monitoring process, while retained to mimic the data analysis, is thus omitted from the observed data and from the presentation in the main text in order to simplify discussion. Second, in accordance with common practice in clinical cohort data, censoring due to loss to follow-up $C_2$ is defined deterministically based on not being seen in clinic for a certain number of consecutive time points. Finally, a subject can only switch treatment when seen ($A_1(t)$ is only at risk of jumping when $M(t) = 1$).
As above, $W = (W_1, W_2, W_3, W_4)$ and $Y(t)$ denotes an indicator of death by time $t$.
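The two deterministic rules just described (last-observation-carried-forward CD4 and loss to follow-up after three consecutive unseen time points) can be written compactly. The helpers below are an illustrative Python sketch with hypothetical names; the study's actual code is in R.

```python
def observed_cd4(prev_L, true_L, seen):
    """Observed CD4: L(t) = L_hat(t) if M(t) = 1, else carried-forward L(t-1)."""
    return true_L if seen else prev_L

def lost_to_followup(M, t):
    """C2(t) = 1 iff the subject was not seen at times t-2, t-1, and t."""
    return t >= 2 and M[t - 2] == 0 and M[t - 1] == 0 and M[t] == 0

print(observed_cd4(1.2, 2.0, False))         # 1.2: CD4 not updated when unseen
print(lost_to_followup([1, 0, 0, 0, 1], 3))  # True: unseen at times 1, 2, 3
```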

Data generating process

Data were generated for a given individual according to the following process, where $\varepsilon_1$ and $\varepsilon_2$ are draws from a standard normal distribution, and all binary variables were drawn from a Bernoulli distribution with the conditional probabilities given below. Data for a given subject were drawn sequentially until either $Y(t)$ jumped to one, $C_1(t)$ jumped to one, $C_2(t)$ jumped to one, or $Y(10)$ was generated. Tables 7 and 8 compare the number of deaths in the simulated data (median across 501 samples) and the actual data among patients following a given regime.
$$P(W_1 = 1) = 0.3$$
$$P(W_2 = 1 \mid W_1 = 0) = 0.5$$
$$P(W_3 = 1) = 0.5$$
$$P(W_4 = 1) = 0.3$$
$$P(Y(t) = 1 \mid W, \hat{L}(t-1), A_1(t-1)) = \begin{cases} 0, & \text{if } t = 0 \\ \operatorname{expit}(-5.8 - 0.1 W_1 - 0.1 W_2 + 0.1 W_3 - 0.2 W_4 - 0.7 \hat{L}(t-1) - 0.9 A_1(t-1)), & \text{if } t > 0 \end{cases}$$
$$\hat{L}(t) = \begin{cases} \max(\min(\varepsilon_1(t) - W_4,\ 4),\ -4), & \text{if } t = 0 \\ \max(\min(\varepsilon_1(t) + 0.1 W_1 - 0.1 W_2 - 0.1 W_3 - 0.5 W_4 + 0.9 \hat{L}(t-1) + A_1(t-1),\ 4),\ -4), & \text{if } t > 0 \end{cases}$$
$$P(M(t) = 1 \mid W, L(t-1), A_1(t-1)) = \begin{cases} 1, & \text{if } t = 0 \\ \operatorname{expit}(0.4 + 0.1 W_1 - 0.2 W_2 + 0.3 W_3 + 0.1 W_4 - 0.1 L(t-1) + 0.2 A_1(t-1)), & \text{if } t > 0 \end{cases}$$
$$L(t) = \begin{cases} \hat{L}(t), & \text{if } M(t) = 1 \\ L(t-1), & \text{if } M(t) = 0 \end{cases}$$
$$P(C_1(t) = 1 \mid W, L(0)) = 1 - \operatorname{expit}(2 + 0.1 W_1 + 0.2 W_2 + 0.1 W_3 + 0.1 W_4 + 0.1 L(0))$$
$$C_2(t) = I(M(t-2) = 0 \text{ and } M(t-1) = 0 \text{ and } M(t) = 0)$$
$$P(A_1(t) = 1 \mid M(t), A_1(t-1), W, L(t)) = \begin{cases} 1, & \text{if } t > 0 \text{ and } A_1(t-1) = 1 \\ 0, & \text{if } t > 0 \text{ and } A_1(t-1) = 0 \text{ and } M(t) = 0 \\ \operatorname{expit}(-5 + 0.1 W_1 + 0.1 W_2 + 0.2 W_3 + 0.2 W_4 - 1.5 L(t) + \varepsilon_2(t)), & \text{otherwise} \end{cases}$$
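As a concrete illustration, the process above can be simulated directly. The sketch below is an illustrative Python translation (the accompanying code for the paper is in R); the within-interval event ordering, the unconditional draw of $W_2$, and the handling of the final interval are assumptions of this sketch.

```python
import math
import random

def expit(x):
    return 1.0 / (1.0 + math.exp(-x))

def draw_subject(rng):
    """Draw one subject; returns (last time generated, died?)."""
    W1 = int(rng.random() < 0.3)
    W2 = int(rng.random() < 0.5)          # sketch assumption: drawn unconditionally
    W3 = int(rng.random() < 0.5)
    W4 = int(rng.random() < 0.3)
    Lhat = max(min(rng.gauss(0, 1) - W4, 4), -4)   # truncated underlying CD4
    L, L0, A1 = Lhat, Lhat, 0
    M_hist = [1]                                   # everyone is seen at t = 0
    for t in range(1, 11):
        p_death = expit(-5.8 - 0.1*W1 - 0.1*W2 + 0.1*W3 - 0.2*W4
                        - 0.7*Lhat - 0.9*A1)
        if rng.random() < p_death:
            return t, True
        if t == 10:
            return t, False                        # Y(10) generated, still alive
        Lhat = max(min(rng.gauss(0, 1) + 0.1*W1 - 0.1*W2 - 0.1*W3
                       - 0.5*W4 + 0.9*Lhat + A1, 4), -4)
        M = int(rng.random() < expit(0.4 + 0.1*W1 - 0.2*W2 + 0.3*W3
                                     + 0.1*W4 - 0.1*L + 0.2*A1))
        M_hist.append(M)
        L = Lhat if M == 1 else L                  # LOCF for observed CD4
        if rng.random() < 1 - expit(2 + 0.1*W1 + 0.2*W2 + 0.1*W3
                                    + 0.1*W4 + 0.1*L0):
            return t, False                        # right-censored (C1)
        if t >= 2 and M_hist[t-2] == 0 and M_hist[t-1] == 0 and M_hist[t] == 0:
            return t, False                        # lost to follow-up (C2)
        if A1 == 0 and M == 1:                     # can only switch when seen
            A1 = int(rng.random() < expit(-5 + 0.1*W1 + 0.1*W2 + 0.2*W3
                                          + 0.2*W4 - 1.5*L + rng.gauss(0, 1)))
    return 10, False

rng = random.Random(7)
results = [draw_subject(rng) for _ in range(5000)]
deaths = sum(d for _, d in results)
print(deaths)   # deaths are relatively rare, as in the data analysis
```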

Tables 9 and 10 compare the number of patients in the simulated data and actual data who are uncensored and following a given regime. In the data analysis, for example, in the first 3-month interval following immunologic failure (time = 1) there were no patient deaths among the 137 uncensored patients who switched immediately (switch time = 0) and 13 deaths among the 2,285 uncensored patients who did not switch immediately (switch time = 1). In the second 3-month interval following immunologic failure (time = 2), there was one patient death among the 120 uncensored patients who switched immediately (switch time = 0), no patient deaths among the 88 uncensored patients who switched during the first 3-month interval (switch time = 1) and 8 deaths among the 1,962 uncensored patients who did not switch immediately or during the first 3-month interval (switch time = 2).

Table 7

Events in data analysis

Table 8

Events in simulated data

Table 9

Support (# uncensored and following rule) in data analysis

Table 10

Support (# uncensored and following rule) in simulated data

References

1. Gsponer T, Petersen M, Egger M, Phiri S, Maathuis M, Boulle A, et al., and Keiser O for IeDEA Southern Africa. The causal effect of switching to second line ART in programmes without access to routine viral load monitoring. AIDS 2012;26:57–65.

2. van der Laan MJ, Petersen ML. Causal effect models for realistic individualized treatment and intention to treat rules. Int J Biostat 2007;3(1):Article 3.

3. Robins JM. Analytic methods for estimating HIV treatment and cofactor effects. In: Ostrow DG, Kessler R, editors. Methodological issues of AIDS Mental Health Research. New York: Plenum Publishing, 1993:213–90.

4. Murphy SA, van der Laan MJ, Robins JM. Marginal mean models for dynamic regimes. J Am Stat Assoc 2001;96:1410–23.

5. Hernan MA, Lanoy E, Costagliola D, Robins JM. Comparison of dynamic treatment regimes via inverse probability weighting. Basic Clin Pharmacol 2006;98:237–42.

6. Cain LE, Robins JM, Lanoy E, Logan R, Costagliola D, Hernan MA. When to start treatment? A systematic approach to the comparison of dynamic regimes using observational data. Int J Biostat 2010;6(2):Article 18.

7. Schomaker M, Egger M, Ndirangu J, Phiri S, Moultrie H, Technau K, et al. When to start antiretroviral therapy in children aged 2–5 years: a collaborative causal modelling analysis of cohort studies from Southern Africa. PLoS Med 2013;10:e1001555.

8. Robins JM, Hernan MA, Brumback B. Marginal structural models and causal inference in epidemiology. Epidemiology 2000;11:550–60.

9. Robins JM, Hernan MA. Estimation of the causal effects of time-varying exposures. In: Fitzmaurice G, Davidian M, Verbeke G, Molenberghs G, editors. Advances in longitudinal data analysis. New York: Chapman and Hall/CRC Press, 2009:553–99.

10. Robins JM. Marginal structural models versus structural nested models as tools for causal inference. In: Halloran ME, Berry D, editors. Statistical models in epidemiology, the environment, and clinical trials. New York: Springer, 2000:95–133.

11. Robins JM. A new approach to causal inference in mortality studies with sustained exposure periods – application to control of the healthy worker survivor effect. Math Model 1986;7:1393–1512.

12. Taubman SL, Robins JM, Mittleman MA, Hernan MA. Intervening on risk factors for coronary heart disease: an application of the parametric g-formula. Int J Epidemiol 2009;38:1599–611.

13. Robins JM. Robust estimation in sequentially ignorable missing data and causal inference models. In: Proceedings of the American Statistical Association Section on Bayesian Statistical Science, 1999, 2000:6–10.

14. Robins JM, Rotnitzky A. Recovery of information and adjustment for dependent censoring using surrogate markers. In: Jewell NP, Dietz K, Farewell VT, editors. AIDS epidemiology. Boston: Birkhäuser, 1992:297–331.

15. Robins JM, Rotnitzky A. Comment on the Bickel and Kwon article, "Inference for semiparametric models: some questions and an answer". Stat Sin 2001;11:920–36.

16. Robins JM, Rotnitzky A, van der Laan MJ. Comment on "On profile likelihood". J Am Stat Assoc 2000;95:431–5.

17. Rosenblum M, van der Laan MJ. Simple examples of estimating causal effects using targeted maximum likelihood estimation. UC Berkeley Division of Biostatistics Working Paper Series, Working Paper 209, 2011. Available at: http://biostats.bepress.com/jhubiostat/paper209.

18. Stitelman OM, De Gruttola V, van der Laan MJ. A general implementation of TMLE for longitudinal data applied to causal inference in survival analysis. UC Berkeley Division of Biostatistics Working Paper Series, Working Paper 281, 2011. Available at: http://biostats.bepress.com/ucbbiostat/paper281.

19. van der Laan MJ, Gruber S. Targeted minimum loss based estimation of causal effects of multiple time point interventions. Int J Biostat 2012;8(1):Article 8.

20. van der Laan MJ, Rose S. Targeted learning: causal inference for observational and experimental data. Berlin/Heidelberg/New York: Springer, 2011.

21. van der Laan MJ, Rubin D. Targeted maximum likelihood learning. Int J Biostat 2006;2(1):Article 11.

22. Bang H, Robins JM. Doubly robust estimation in missing data and causal inference models. Biometrics 2005;61:962–72.

23. Robins JM. Marginal structural models. In: 1997 Proceedings of the American Statistical Association, Section on Bayesian Statistical Science, 1998:1–10.

24. Hernan MA, Brumback B, Robins JM. Marginal structural models to estimate the causal effect of zidovudine on the survival of HIV-positive men. Epidemiology 2000;11:561–70.

25. Petersen ML, van der Laan MJ, Napravnik S, Eron J, Moore R, Deeks S. Long term consequences of the delay between virologic failure of highly active antiretroviral therapy and regimen modification: a prospective cohort study. AIDS 2008;22:2097–106.

26. Robins JM, Orellana L, Rotnitzky A. Estimation and extrapolation of optimal treatment and testing strategies. Stat Med 2008;27:4678–721.

27. Neugebauer R, van der Laan M. Nonparametric causal effects based on marginal structural models. J Stat Plann Inference 2007;137:419–34.

28. Petersen ML, Porter KE, Gruber S, Wang Y, van der Laan MJ. Diagnosing and responding to violations in the positivity assumption. Stat Methods Med Res 2012;21:31–54.

29. Robins JM. Commentary on "Using inverse weighting and predictive inference to estimate the effects of time-varying treatments on the discrete-time hazard". Stat Med 2002;21:1663–80.

30. Rosenblum M, van der Laan MJ. Targeted maximum likelihood estimation of the parameter of a marginal structural model. Int J Biostat 2010;6(2):Article 19.

31. Scharfstein DO, Rotnitzky A, Robins JM. Adjusting for nonignorable drop-out using semiparametric nonresponse models (with discussion and rejoinder). J Am Stat Assoc 1999;94:1096–1120 (1121–1146).

32. Schnitzer ME, Moodie EM, van der Laan MJ, Platt RW, Klein MB. Modeling the impact of hepatitis C viral clearance on end-stage liver disease in an HIV co-infected cohort with targeted maximum likelihood estimation. Biometrics 2014;70(1):144–52.

33. R Core Team. R: a language and environment for statistical computing. R Foundation for Statistical Computing, Vienna, Austria, 2013. Available at: http://www.R-project.org/.

34. Schwab J, Lendle S, Petersen M, van der Laan M. ltmle: longitudinal targeted maximum likelihood estimation, 2013. Available at: http://cran.r-project.org/web/packages/ltmle/. R package version 0.9.3.

35. Pearl J. Causal diagrams for empirical research. Biometrika 1995;82:669–710.

36. Pearl J. Causality: models, reasoning, and inference. Cambridge: Cambridge University Press, 2000.

37. Bickel PJ, Klaassen CA, Ritov Y, Wellner JA. Efficient and adaptive estimation for semiparametric models. Baltimore, MD: Johns Hopkins University Press, 1993.

38. van der Laan MJ, Robins JM. Unified methods for censored longitudinal data and causality. Berlin/Heidelberg/New York: Springer, 2003.

39. van der Laan MJ, Polley E, Hubbard A. Super learner. Stat Appl Genet Mol Biol 2007;6:Article 25.

40. van der Laan MJ. Statistical inference when using data adaptive estimators of nuisance parameters. UC Berkeley Division of Biostatistics Working Paper Series, Working Paper 302, 2012. Available at: http://www.bepress.com/ucbbiostat/paper302.

41. Young JG, Cain LE, Robins JM, O'Reilly EJ, Hernan MA. Comparative effectiveness of dynamic treatment regimes: an application of the parametric g-formula. Stat Biosci 2011;3:119–43.

42. Picciotto S, Hernan M, Page J, Young J, Robins J. Structural nested cumulative failure time models to estimate the effects of interventions. J Am Stat Assoc 2012;107:866–900.

43. Geng EH, Glidden D, Bangsberg DR, Bwana MB, Musinguzi N, Metcalfe J, et al. Causal framework for understanding the effect of losses to follow-up on epidemiologic analyses in clinic based cohorts: the case of HIV-infected patients on antiretroviral therapy in Africa. Am J Epidemiol 2012;175:1080–7.

44. Schomaker M, Gsponer T, Estill J, Fox M, Boulle A. Non-ignorable loss to follow-up: correcting mortality estimates based on additional outcome ascertainment. Stat Med 2014;33:129–42.

About the article

Published Online: 2014-06-18

Published in Print: 2014-09-01

Citation Information: Journal of Causal Inference, ISSN (Online) 2193-3685, ISSN (Print) 2193-3677, DOI: https://doi.org/10.1515/jci-2013-0007.

©2014 by De Gruyter.