Publicly Available Published by De Gruyter May 25, 2016

Revisiting g-estimation of the Effect of a Time-varying Exposure Subject to Time-varying Confounding

  • Stijn Vansteelandt and Arvid Sjolander
From the journal Epidemiologic Methods


Marginal Structural Models (MSMs), with the associated method of inverse probability weighting (IPW), have become increasingly popular in epidemiology to model and estimate the joint effects of a sequence of exposures. This popularity is largely related to the relative simplicity of the method, as compared to other techniques to adjust for time-varying confounding, such as g-estimation and g-computation. However, the price to pay for this simplicity can be substantial. The IPW estimators that are routinely used in applications make inefficient use of the information in the data, and are susceptible to large finite-sample bias when some confounders are strongly predictive of exposure. Moreover, the handling of continuous exposures easily becomes impractical, and the study of effect modification by time-varying covariates even impossible. In view of this, we revisit Structural Nested Mean Models (SNMMs) with the associated method of g-estimation as a useful remedy, and show how this can be implemented through standard software.

1 Introduction

Observational studies of the effect of a time-varying exposure are nearly always plagued by time-varying confounding. This phenomenon is caused by time-varying prognostic factors of the outcome that influence the exposure at each measurement time, and thereby distort the association between exposure and outcome over time. For instance, the association between physical activity and functional performance (e. g. walking speed) in subjects with radiographic knee osteoarthritis may be confounded by the extent of knee pain and symptoms (Mansournia et al. 2012). Adjustment for such prognostic factors is usually difficult because they may themselves be influenced by previous exposures. For instance, the extent of knee pain may be influenced by the history of physical activity. When that happens, standard regression methods to adjust for confounding are fallible because they employ one and the same model to infer the effect of early versus late exposures; they thereby cannot avoid undue control for time-varying prognostic factors that are intermediate on the causal pathway from early exposure to later outcome (Robins 1986, 2000).

G-computation (Robins 1986), g-estimation for Structural Nested Models (Robins et al. 1992a; Robins 1994, 1997; Lok and DeGruttola 2012; Vansteelandt and Joffe 2014) and inverse probability weighting (IPW) for Marginal Structural Models (MSMs) (Robins 2000; Hernán et al. 2000) have been proposed by James Robins as alternatives to standard regression adjustment for confounding, to overcome the aforementioned problems. Of these, IPW estimation is by far the most popular method because, though complex, it appears much simpler than the other two alternative estimation methods (Joffe 2012; Vansteelandt and Joffe 2014). In this article, we argue that this perceived simplicity may come at a large price: the default IPW estimators make inefficient use of the information in the data, are susceptible to bias when some confounders are strongly predictive of exposure, are not well suited for the analysis of continuous exposures and cannot accommodate effect modification by time-varying covariates. Their inefficiency has prompted the use of ad hoc remedies, such as truncation of extreme weights (Cole and Hernán 2008). Such remedies are valid when the weights are indeed bounded by the truncation value, but may otherwise induce bias and always raise concerns over the choice of truncation value; indeed, this choice is sometimes data-driven, thereby leaving room for ‘cherry-picking’. In some cases, these considerations can make one wonder whether the final results are necessarily better than the biased results obtained by standard regression methods.

G-estimation and Structural Nested Models are, in principle, better tailored for dealing with failure of the standard assumption of no unmeasured confounders used to justify the application of all of the above methods, as well as with (near) positivity violations, whereby certain strata contain (nearly) no treated or untreated subjects (Robins 2000; Joffe et al. 2010). These models can handle effect modification by time-varying covariates, but lack some flexibility when extending to link functions other than the identity or log link. Furthermore, these estimators do not employ inverse probability weights and are therefore particularly well suited for the analysis of continuous exposures. In spite of this, they are largely unpopular due to the lack of implementation in off-the-shelf software, along with difficulties in the handling of censoring in the analysis of survival endpoints (Joffe et al. 2012; Vansteelandt and Joffe 2014). In this article, we revisit g-estimation for linear Structural Nested Mean Models (SNMMs). We show that g-estimators are also obtainable through standard software and argue that, in cases where they target the same effect parameter, they tend to have better performance than IPW estimators. Our results provide the insight needed to enable similar developments for loglinear SNMMs for counts (or, more generally, positive constrained outcomes) (Robins 1994), as well as for structural nested cumulative failure time models for survival endpoints (Martinussen et al. 2011; Picciotto et al. 2012), which we will publish elsewhere.

2 Estimating the effect of a point exposure

For pedagogic purposes, we first consider an observational study on the effect of an exposure A (e. g. physical exercise) assessed at one time point, on an end-of-study outcome Y (e. g. walking speed). Suppose that we have available sufficient covariate information L so that the association between exposure and outcome is unconfounded within levels of L. Formally, with Y(a) denoting the potentially unobserved, counterfactual outcome at exposure level a, we thus assume that Y(a) ⊥ A | L for all feasible exposure levels a (Hernán and Robins 2006). Throughout, we furthermore make the consistency assumption that Y(a) equals Y (in distribution, conditional on L) amongst subjects with A = a. We will use E(Z|l) and Var(Z|l) for a given variable Z to denote its expectation and variance, respectively, in the stratum of subjects with covariate values L = l.

2.1 G-estimation in linear structural mean models

A linear structural mean model (SMM) (Robins 1994; Vansteelandt and Joffe 2014) is a model for the conditional causal effect E{Y(a) − Y(0) | l}, which is linear in an unknown (finite-dimensional) causal parameter ψ. It encodes the effect of setting the exposure to level a versus 0 on the additive scale, amongst individuals with covariate values l; here, 0 is some reference exposure value which could, but need not, encode ‘no exposure’. A linear SMM may express, for instance, that the causal effect is linear in the exposure but the same at all covariate levels, i. e.

E{Y(a) − Y(0) | l} = ψa,   [1]
or that it depends linearly on a scalar covariate V, which is contained in L, i. e.

E{Y(a) − Y(0) | l} = (ψ0 + ψ1 v)a.   [2]
More generally, we will consider SMMs of the form

E{Y(a) − Y(0) | l} = ψ′za,
where Z is a covariate vector that may depend on L (e. g. 1 in the first example, and (1,V) in the second example).

It follows from the results in Robins et al. (1992b), Brumback et al. (2003) and Vansteelandt and Daniel (2014), and from the technical derivations in the Appendix, that a linear SMM can be straightforwardly fitted using the following procedure:

  1. Regress the exposure on the covariates L and let the propensity score P be the fitted value from this regression model (for given subject). This could be based on a logistic regression model for a dichotomous exposure, but may also involve linear regression for a continuous exposure, for instance. Note that we thus use the term ‘propensity score’ for the expected value of the exposure, given the covariates L, even though this term is usually reserved for dichotomous exposures. Likewise, we will use the term ‘propensity score model’ to refer to the underlying regression model for the exposure.

  2. Regress the outcome on the covariates L, the terms ZA in the SMM and the corresponding terms ZP obtained by substituting the exposure in the SMM by the propensity score, i. e. fit the model

E(Y | l, a) = β0 + β1′l + ψ′za + β2′zp,

    using ordinary least squares. We will refer to this model as the ‘outcome model’.
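The two-step procedure can be sketched in a short simulation (a minimal sketch, not the authors' code; all data-generating values are hypothetical and chosen only for illustration). A continuous exposure is used, so the 'propensity score' is simply the fitted value from a linear exposure model:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
L = rng.normal(size=n)                        # confounder
A = 0.5 * L + rng.normal(size=n)              # continuous exposure
psi_true = 1.5                                # hypothetical causal effect
Y = 1.0 + 2.0 * L + psi_true * A + rng.normal(size=n)

# Step 1: propensity score = fitted E(A | L) from a linear exposure model
X1 = np.column_stack([np.ones(n), L])
P = X1 @ np.linalg.lstsq(X1, A, rcond=None)[0]

# Step 2: regress Y on the exposure and the propensity score; here the
# covariate effect is deliberately misspecified (L itself is omitted),
# yet the coefficient on A remains a consistent g-estimate of psi
# because the exposure model is correctly specified
X2 = np.column_stack([np.ones(n), A, P])
psi_hat = np.linalg.lstsq(X2, Y, rcond=None)[0][1]
```

With n this large, psi_hat falls very close to the generating value 1.5, illustrating that a misspecified covariate effect in the outcome model does not bias the g-estimator when the exposure model is correct.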

The above procedure is essentially tantamount to regression adjustment for the propensity score (Vansteelandt and Daniel 2014). The estimator it delivers for the causal parameter ψ is equivalent (in large samples) to a g-estimator (see the Appendix). It is unbiased (in large samples) when the SMM and the propensity score model are both correctly specified, even when the association between outcome and covariates in the second step of the procedure was misspecified (see the Appendix). This is because the model in step 2 of the procedure is equivalent to the model

E(Y | l, a) = β0 + β1′l + β2*′zp + ψ′z(a − p)

for β2* = β2 + ψ, where Z(A − P) is uncorrelated with the covariates L and with ZP when the exposure model is correctly specified; it follows from the properties of ordinary least squares estimators that the g-estimator is therefore not susceptible to misspecification of the covariate effect and the effect of the propensity score in the outcome model. This will be key to the later extensions to time-varying exposures. Even though adjustment for covariates is thus not required in the outcome model, it is recommended: it may reduce residual variation and thereby increase precision (Robins et al. 1992b; Vansteelandt and Joffe 2014). Moreover, when the association between outcome and covariates is correctly modelled in the second step of the above procedure, in the sense that E(Y | l, a) = β0 + β1′l + β2*′zp + ψ′z(a − p), then the g-estimator remains unbiased in large samples, even when the propensity score model is misspecified (Robins et al. 1992b); see the Appendix for a technical proof. The proposed g-estimator is therefore called double-robust: it is unbiased (in large samples) when the SMM and either the model in the first or the second step of the procedure are correctly specified, but not necessarily both. When both these models are correctly specified and, moreover, the variability in the outcome is constant across exposure and covariate levels, then the obtained g-estimator is sometimes also efficient (i. e., not more variable in large samples) relative to all other estimators that are valid under correct specification of the SMM and propensity score model. In particular, it is then efficient when the propensity score model is a canonical generalised linear model (i. e., with an identity, log or logit link) and minimally includes the covariate terms (including the intercept) in the outcome model multiplied by Z (i. e. Z and ZL); see the Appendix.
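The complementary half of the double-robustness property can be checked numerically (again a hypothetical sketch, not the authors' code): here the propensity model is deliberately misspecified, while the outcome model adjusts for the covariate correctly, and the g-estimate of ψ remains consistent:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000
L = rng.normal(size=n)
A = 0.5 * L + rng.normal(size=n)              # truth: E(A|L) linear in L
psi_true = 1.5
Y = 1.0 + 2.0 * L + psi_true * A + rng.normal(size=n)

# Step 1 (wrong model): 'propensity score' fitted from A ~ exp(L),
# a deliberately misspecified functional form
X1 = np.column_stack([np.ones(n), np.exp(L)])
P_wrong = X1 @ np.linalg.lstsq(X1, A, rcond=None)[0]

# Step 2 (correct covariate adjustment): regress Y on 1, L, A and the
# wrong propensity score; the coefficient on A is still consistent
X2 = np.column_stack([np.ones(n), L, A, P_wrong])
psi_dr = np.linalg.lstsq(X2, Y, rcond=None)[0][2]
```

Because the extra (wrong) propensity-score column is merely another function of L, the partial coefficient on A still converges to ψ when the covariate association is correctly modelled.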

Standard errors for the g-estimator can be directly obtained from off-the-shelf software for linear models, provided that a sandwich estimator is used; such an estimator is available through software for generalised estimating equations (GEE), or by requesting the software to output a ‘robust’ standard error. The resulting standard errors are unbiased (in large samples) when the association between outcome and covariates in the second step is correctly modelled (see the Appendix); they are conservative otherwise, because they ignore the uncertainty in the estimated parameters from the propensity score model (Robins et al. 1992b). The sandwich standard error can be modified to take this uncertainty into account, but this requires some additional programming. An alternative is to use a bootstrap procedure.
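The sandwich (robust) variance for the outcome model can also be computed directly with a few matrix operations (a minimal sketch under hypothetical data-generating values; the quadratic propensity model only serves to keep the design matrix full rank):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 50_000
L = rng.normal(size=n)
A = 0.5 * L + 0.3 * L**2 + rng.normal(size=n)
Y = 1.0 + 2.0 * L + 1.5 * A + rng.normal(size=n)

X1 = np.column_stack([np.ones(n), L, L**2])          # propensity model
P = X1 @ np.linalg.lstsq(X1, A, rcond=None)[0]
X = np.column_stack([np.ones(n), L, A, P])           # outcome model
XtX_inv = np.linalg.inv(X.T @ X)
beta = XtX_inv @ (X.T @ Y)
e = Y - X @ beta                                     # residuals
meat = X.T @ (X * e[:, None] ** 2)                   # sum of x_i x_i' e_i^2
V = XtX_inv @ meat @ XtX_inv                         # sandwich covariance
se_psi = np.sqrt(V[2, 2])                            # robust SE for psi
```

This is the HC0-type sandwich; as noted above, it ignores the uncertainty in the estimated propensity score and is therefore conservative unless the outcome-covariate association is correctly modelled.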

2.2 G-estimation versus IPW estimation

Vansteelandt and Joffe (2014) (Section 4) make a detailed comparison of g-estimators under the linear SMM [1], versus IPW estimators under the corresponding MSM

E{Y(a)} = α + ψa,

which is implied by the SMM [1]; here, α = E{Y(0)}. Assuming that the outcome variance is constant at all levels of exposure and confounders and that correctly specified models are used, they find that g-estimators are always more efficient, and sometimes drastically more efficient, than IPW estimators, for various reasons.

First, the IPW estimators that are routinely used in practice are relatively inefficient compared to other, more advanced IPW estimators (see e. g. Bryan et al. 2004). Although more precise IPW estimators thus exist, they are seldom used in practice because of their greater complexity. In contrast, reasonably efficient g-estimators are not so hard to obtain; see for instance the previous section, where we show that the estimation procedure of Section 2.1 delivers an efficient g-estimator whenever the propensity score model includes specific covariate terms (regardless of whether these terms are associated with the exposure).

Second, when reporting standard errors that ignore the uncertainty in the estimated exposure model, as is routinely done in the fitting of MSMs, a further ‘apparent’ efficiency benefit may be observed for the g-estimator. This is because the standard error of the g-estimator is unbiased (in large samples) when the association between outcome and covariates is correctly modelled (see the Appendix). In contrast, routine IPW estimators come with conservative standard errors (Hernán et al. 2000).

Third, the assumption that the effect is the same at all levels of l, which is made by the SMM but not by the corresponding MSM, brings additional gains in precision. This is especially so in settings where the exposed and unexposed subjects have very different covariate distributions. The performance of the IPW estimator tends to be very poor in those settings, both in terms of bias and precision, because it aims to infer how different the expected outcome would be if all were exposed versus unexposed; the lack of sufficient overlap between these populations precludes such comparisons. This can sometimes be overcome by restricting evaluation to the subpopulation of subjects for which, on the basis of their covariates, there is sufficient variation in the exposure (Crump et al. 2009); however, this may have the drawback of leaving room for ‘fishing expeditions’, whereby one chooses the subpopulation that delivers the largest estimated exposure effect, and moreover has no immediate extensions to time-varying exposures.

In contrast, by relying on the constant effect assumption, the g-estimator allows one to extract information on the exposure effect at those covariate levels l at which there are both exposed and unexposed subjects, and then to transport the results across those covariate levels. When the assumption of constant effect is violated, the g-estimator for ψ generally continues to be meaningful. For instance, when the propensity score is correctly modelled and the exposure effect is linear, but possibly depending on covariate levels, then the g-estimator for ψ in the possibly misspecified model [1] converges to the weighted average

E[Var(A | L) E{Y(1) − Y(0) | L}] / E{Var(A | L)}

of the conditional exposure effects E{Y(1) − Y(0) | l}, with weights proportional to the variability in exposure levels at that covariate level l (Vansteelandt and Daniel 2014; Vansteelandt and Joffe 2014). Most weight is thus given to the exposure effects in strata that carry most information about the exposure effect, and for which comparison is most acute. For a continuous, homoscedastic exposure, the g-estimate therefore averages the conditional exposure effects across the observed covariate levels, and thus returns the effect parameter targeted by the MSM; violation of the assumption of constant effect is then not a real concern. For a dichotomous exposure, the exposure variance reduces to P(A=1 | l)P(A=0 | l), thereby ensuring that most weight is given to the exposure effects at those covariate levels with sufficient proportions of exposed as well as unexposed subjects (Vansteelandt and Daniel 2014).
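This weighted-average limit can be checked numerically (a hypothetical sketch with a dichotomous exposure; stratum effects and prevalences are invented for illustration):

```python
import numpy as np

# Hypothetical truth: psi(L=0) = 1, psi(L=1) = 3, with exposure
# prevalences P(A=1|L=0) = 0.1 and P(A=1|L=1) = 0.5
rng = np.random.default_rng(3)
n = 200_000
L = rng.binomial(1, 0.5, size=n)
p = 0.1 + 0.4 * L
A = rng.binomial(1, p)
psi_l = 1.0 + 2.0 * L                   # heterogeneous effect
Y = psi_l * A + 2.0 * L + rng.normal(size=n)

# Two-step g-estimation; with a binary, saturated L the fitted propensity
# score is an exact linear function of L, so adjusting for L alone fits
# the same outcome model as adjusting for (L, P)
X = np.column_stack([np.ones(n), L, A])
psi_hat = np.linalg.lstsq(X, Y, rcond=None)[0][2]

# Theoretical limit: E{Var(A|L) psi(L)} / E{Var(A|L)}
v0, v1 = 0.1 * 0.9, 0.5 * 0.5
target = (0.5 * v0 * 1.0 + 0.5 * v1 * 3.0) / (0.5 * v0 + 0.5 * v1)
```

The g-estimate converges to `target` (about 2.47 here), not to the unweighted average of 2: the stratum with more exposure variability (L = 1) receives more weight.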

When violation of the assumption of constant effect is a concern, the assumption can also be relaxed by explicitly allowing for effect modification in the SMM, as in model [2]. A corresponding Wald test under that model then allows for testing the assumption of no effect modification, although power may be low, especially when there is lack of overlap between differently exposed subpopulations.

The benefits of g-estimation versus IPW estimation are most pronounced in the case of continuous exposures. Here, g-estimation merely demands modelling the expected value of the exposure, given covariates, whereas IPW estimation requires inverse weighting by the density of the exposure, given covariates. The resulting inverse probability weights can be highly variable (Vansteelandt 2009). Moreover, models for a density are generally difficult to specify, and minor misspecifications in the tails of the density could have a major impact on the inverse probability weights and resulting inference.

3 Estimating the effect of a time-varying exposure

Consider now an observational study on the effect of a time-varying exposure At, repeatedly assessed at time points t = 0, ..., T−1, on an end-of-study outcome Y assessed at time T. Suppose that we have available sufficient covariate information Lt at each time t = 0, ..., T−1 so that the association between exposure and outcome at time t is unconfounded within levels of the covariate history Lˉt up to time t and of the exposure history Aˉt−1 up to time t−1. Formally, with Y(aˉt, 0) denoting the counterfactual outcome under a regime in which exposure is set to aˉt until time t and zero thereafter, we thus assume that Y(aˉt, 0) ⊥ At | Lˉt, Aˉt−1 = aˉt−1 for all feasible exposure regimes and at all times t (Robins 2000). Throughout, we continue to make the consistency assumption that Y(aˉT−1) equals Y (in distribution, conditional on the covariate history) amongst subjects with AˉT−1 = aˉT−1. We make the notational convention that aˉ−1 = ∅ (the empty history).

3.1 Linear structural nested mean models

A SNMM (Robins 1994) describes, at each time t, the effect of exposure versus no exposure at that time, in the hypothetical case where the exposure is set to zero after time t; this is done within strata defined by the exposure history and covariate history up to that time. For instance, the linear SNMM defined by

E{Y(aˉt, 0) − Y(aˉt−1, 0) | aˉt, lˉt} = ψat,   [3]
for t = 0, ..., T−1, postulates that exposure at each time has the same effect on the end-of-study outcome, regardless of time and regardless of the history of covariates and exposures. Under the no unmeasured confounders assumption, this model implies (see the Appendix) the commonly used MSM defined by

E{Y(aˉ)} = α + ψ Σ_{t=0}^{T−1} at,

which may thus be viewed as its natural analog.

When exposure effects decay over time, one may accommodate this via the SNMM

E{Y(aˉt, 0) − Y(aˉt−1, 0) | aˉt, lˉt} = (ψ0 + ψ1 t)at,   [4]
for t = 0, ..., T−1, which allows for the effect of early exposures to be different from the effect of late exposures, or via the model

E{Y(aˉt, 0) − Y(aˉt−1, 0) | aˉt, lˉt} = ψt at,   [5]
for t = 0, ..., T−1, which codes the exposure effect at each time t via a separate parameter ψt. These models imply the MSMs defined by

E{Y(aˉ)} = α + Σ_{t=0}^{T−1} (ψ0 + ψ1 t)at

and

E{Y(aˉ)} = α + Σ_{t=0}^{T−1} ψt at,
respectively. As a final example, consider the SNMM

E{Y(aˉt, 0) − Y(aˉt−1, 0) | aˉt, lˉt} = (ψ0 + ψ1′lt)at,   [6]
for t = 0, ..., T−1, which incorporates effect modification by time-varying covariates. This model has no natural analog in the class of MSMs as the latter models cannot incorporate such effect modification. The possibility to incorporate effect modification by time-varying covariates constitutes a major advantage of SNMMs as compared to MSMs (Robins 2000).

All of the above models are instances of the more general SNMM (Robins 1994) defined by

E{Y(aˉt, 0) − Y(aˉt−1, 0) | aˉt, lˉt} = ψ′zt at,   [7]
for t = 0, ..., T−1. Here, zt is a covariate vector that may depend on time t, the exposure history aˉt−1 and the confounder history lˉt (e. g. zt equals 1 in model [3], (1, t) in model [4], a vector of zeroes of length T except for the value 1 at the (t+1)th position in model [5], and (1, lt) in model [6]).

Although g-estimators of the parameters indexing SNMMs are in principle obtainable in closed-form (Lok and DeGruttola 2012), the lack of implementation in off-the-shelf software has been a key limiting factor in the uptake of these methods in practice. In the next section, we will therefore show how g-estimators of ψ in model [7] are obtainable through standard software.

3.2 G-estimation in linear structural nested mean models

Consider first model [7] for the single time point t = T−1. This is a SMM for the effect of exposure at the last measurement time T−1 on the observed outcome Y (because Y(aˉT−1) = Y for subjects with exposure history AˉT−1 = aˉT−1). A preliminary g-estimator ψˆ(0) of ψ (or of some of its components) can thus be obtained by adopting the estimation procedure of Section 2.1; that is:

  1. Regress the exposure at time T−1 on the history of exposures (AˉT−2) and covariates (LˉT−1) and let the propensity score PT−1 be the fitted value from this regression model (for a given subject). For instance, with a dichotomous exposure, one may consider fitting the model

logit P(AT−1 = 1 | AˉT−2, LˉT−1) = γ0 + γ1′AˉT−2 + γ2′LˉT−1.
  2. Regress the outcome on the history AˉT−2 and LˉT−1, the terms ZT−1AT−1 in the SNMM, as well as the corresponding terms ZT−1PT−1 with exposure AT−1 substituted by PT−1, e. g. fit the model

E(Y | AˉT−1, LˉT−1) = β0 + β1′LˉT−1 + β2′AˉT−2 + β3′ZT−1PT−1 + ψ′ZT−1AT−1.

    Let ψˆ(0) be the resulting estimate of ψ.

The obtained g-estimator ψˆ(0) will generally be inefficient because it only draws information from the last exposure, and not from earlier exposure measurements. In some SNMMs, it will moreover only deliver a subvector of the effect ψ, because the above procedure does not enable one to distinguish short-term from long-term exposure effects. For instance, in model [4] the above procedure only delivers an estimate of ψ0 + ψ1(T−1), but not of ψ0 and ψ1 separately; in model [5], it only delivers an estimate of ψT−1. Below, we will therefore explain how the preliminary estimate ψˆ(0) can be improved.

Reconsider model [7] for all measurement times t = 0, ..., T−1. This can be viewed as a sequence of SMMs – one for each (exposure) measurement time – for the effect of exposure at that time on the counterfactual outcome Y(Aˉt, 0), considering Aˉt−1 and Lˉt as baseline covariates. Here, Y(Aˉt, 0) refers to the outcome that would be observed if the exposure history agreed with the observed exposure history Aˉt up to time t, and were zero thereafter. If we could obtain an unbiased prediction Ht of this counterfactual outcome Y(Aˉt, 0), then we could apply the estimation principles advocated in Section 2.1 with Ht, At and (Aˉt−1, Lˉt) in lieu of Y, A and L. We first explain how this works for the case where ψˆ(0) has already delivered a preliminary estimate of the entire exposure effect ψ, as would be the case in models [3] and [6]:

  1. Regress the exposure at each time t on the history of exposures (Aˉt−1) and covariates (Lˉt) and let the propensity score Pt be the fitted value from this regression model (for a given subject). This could be based on separate regression models, one for each time, or on a pooled regression model. For instance, with a dichotomous exposure, one may consider fitting the model

logit P(At = 1 | Aˉt−1, Lˉt) = γ0 + γ1′Aˉt−1 + γ2′Lˉt

    for t = 0, ..., T−1. Note that this first step is also needed for estimating the inverse probability weights in the fitting of MSMs.

  2. Use the preliminary choice ψˆ(0) of ψ to predict the counterfactual outcomes Y(Aˉt, 0) for each measurement time t by subtracting the cumulative effect of the observed exposures past time t from the observed outcome, as

Ht = Y − Σ_{s=t+1}^{T−1} ψˆ(0)′Zs As

    for t = 0, ..., T−1, where we define Σ_{s=T}^{T−1} ψˆ(0)′Zs As = 0. Under model [3], we would thus calculate HT−1 = Y, HT−2 = Y − ψˆ(0)AT−1, …, H0 = Y − ψˆ(0)Σ_{s=1}^{T−1} As.

  3. Regress the predicted counterfactuals Ht for t = 0, ..., T−1 on the exposure history Aˉt−1 and the covariate history Lˉt, the terms ZtAt in the SMM, as well as the corresponding terms ZtPt with exposure At substituted by the propensity score Pt, i. e. fit the model

E(Ht | Aˉt, Lˉt) = β0 + β1′Lˉt + β2′Aˉt−1 + β3′ZtPt + ψ′ZtAt   [8]

    for t = 0, ..., T−1, using independence GEE (i. e. Generalised Estimating Equations with independence working correlation). Let ψˆ(1) be the resulting estimate of ψ.
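Steps 1–3 can be sketched end-to-end in a simulated two-period study with time-varying confounding (T = 2, common effect as in model [3]; this is a hypothetical illustration, not the authors' code, and every numeric coefficient below is an invented data-generating value). The direct effect of A0 is 0.4 and a further 0.6 arrives through L1, so the total 'blip' of each exposure equals ψ = 1:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 100_000
psi_true = 1.0
L0 = rng.normal(size=n)
A0 = 0.5 * L0 + rng.normal(size=n)                 # continuous exposure
L1 = 0.6 * A0 + 0.5 * L0 + rng.normal(size=n)      # confounder affected by A0
A1 = 0.4 * L1 + 0.3 * A0 + rng.normal(size=n)
Y = 0.4 * A0 + psi_true * A1 + L1 + 0.5 * L0 + rng.normal(size=n)

def ols(X, y):
    return np.linalg.lstsq(X, y, rcond=None)[0]

one = np.ones(n)
# Step 1: propensity scores (linear exposure models, assumed correct)
P0 = np.column_stack([one, L0]) @ ols(np.column_stack([one, L0]), A0)
X1 = np.column_stack([one, L0, A0, L1])
P1 = X1 @ ols(X1, A1)

# Preliminary estimate from the last time point (Section 2.1 applied to
# t = T-1); L1 is omitted from the design because P1 is an exact linear
# function of (A0, L1) in this linear example, so L1 enters through P1
D1 = np.column_stack([one, L0, A0, P1, A1])
psi0_hat = ols(D1, Y)[4]

# Step 2: predicted counterfactuals H_t = Y minus effects of later exposures
H1, H0 = Y, Y - psi0_hat * A1

# Step 3: pooled (independence) regression with time-specific nuisance
# terms and a single shared exposure-effect column
rows_t0 = np.column_stack([one, P0, np.zeros((n, 4))])
rows_t1 = np.column_stack([np.zeros((n, 2)), one, L0, A0, P1])
D = np.column_stack([np.vstack([rows_t0, rows_t1]),
                     np.concatenate([A0, A1])])
psi1_hat = ols(D, np.concatenate([H0, H1]))[-1]
```

Both the preliminary estimate psi0_hat and the pooled update psi1_hat recover ψ = 1 despite L1 being both a confounder of A1 and an intermediate on the path from A0 to Y; standard regression adjustment would fail here.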

The suggestion in step 3 to ignore the dependence across measurement times is valid according to the general theory on GEE and simplifies the proposal, but is also important from a more substantive point of view. As is the case for IPW estimators in MSMs, not adhering to this (e. g. by fitting the model in step 3 using exchangeable GEE instead of independence GEE) may introduce bias, as it conflicts with the sequential nature of the model (Vansteelandt 2007). Under heteroscedasticity, some efficiency may potentially be gained via weighted least squares regression, by allowing the variability of Ht to depend on Aˉt, Lˉt and time t. A study of how much can be gained via more efficient estimation approaches is beyond the scope of this work.

It is tempting to consider iterating steps 2 and 3 a few times to reduce dependence on the preliminary estimate ψˆ(0) when it is of poor quality. However, this is not required for the procedure to deliver unbiased estimates (in large samples) and limited simulation studies (not reported) hardly resulted in a noticeable efficiency gain (and sometimes even a very minor loss).

Consider finally the case where ψˆ(0) has delivered only a subvector of ψ, as would be the case in models [4] and [5]. In that case, ψˆ(0) enables one to obtain unbiased predictions Ht of the counterfactual outcome Y(Aˉt, 0) for only certain time points t = k, ..., T−1 for some 0 < k < T−1. For instance, in model [5], we would calculate HT−1 = Y, HT−2 = Y − ψˆT−1(0)AT−1, and thus k = T−2. Upon applying step 3 of the above procedure to HT−1, ..., Hk, we obtain a g-estimate ψˆ(1), which updates the previously obtained estimate, but also includes effects of earlier exposures. For instance, ψ = (ψT−2, ψT−1) in model [5] can be estimated by fitting a model of the form [8] for t = T−2, T−1 with zT−2 = (1, 0) and zT−1 = (0, 1); the resulting estimates ψˆT−2(1) and ψˆT−1(1) can be used to predict Y(Aˉt, 0) at times t = T−1, T−2, T−3 as HT−1 = Y, HT−2 = Y − ψˆT−1(1)AT−1 and HT−3 = HT−2 − ψˆT−2(1)AT−2. Repeated application of steps 2 and 3 to this growing vector of predictions eventually delivers a g-estimate of the entire effect ψ.
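The backward, sequential scheme for time-specific effects can be sketched with T = 2 (hypothetical data-generating values as before, not the authors' code): the true blips are ψ1 = 1 for the last exposure and ψ0 = 1.4 + 0.6 = 2 for the first, with 0.6 of the latter arriving through the intermediate confounder L1:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 100_000
L0 = rng.normal(size=n)
A0 = 0.5 * L0 + rng.normal(size=n)
L1 = 0.6 * A0 + 0.5 * L0 + rng.normal(size=n)
A1 = 0.4 * L1 + 0.3 * A0 + rng.normal(size=n)
Y = 1.4 * A0 + 1.0 * A1 + L1 + 0.5 * L0 + rng.normal(size=n)

def ols(X, y):
    return np.linalg.lstsq(X, y, rcond=None)[0]

one = np.ones(n)
X1 = np.column_stack([one, L0, A0, L1])
P1 = X1 @ ols(X1, A1)
P0 = np.column_stack([one, L0]) @ ols(np.column_stack([one, L0]), A0)

# psi_1 from the last time point (L1 enters through P1, as before)
psi1_hat = ols(np.column_stack([one, L0, A0, P1, A1]), Y)[4]

# H_0 removes the estimated effect of A1; psi_0 then follows from a
# second application of the point-exposure procedure
H0 = Y - psi1_hat * A1
psi0_hat = ols(np.column_stack([one, P0, A0]), H0)[2]
```

The last-period effect is estimated first; only then can the prediction H0 be formed and the earlier effect recovered, which is the 'growing vector of predictions' logic described above.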

The above procedure works because it involves repeated application of the procedure for point exposures in Section 2.1 to predictions Ht for the counterfactuals Y(Aˉt, 0); these predictions are unbiased (in large samples) because they make use of a preliminary consistent estimator of ψ, e. g. ψˆ(0). Because it results from repeated application of the procedure for point exposures, the obtained g-estimator preserves the double-robustness property: it is unbiased (in large samples) when either the exposure model in step 1, or the association between each of the predictions Ht and the history (aˉt−1, lˉt) in step 3 (and step 2 for the initial estimator) is correctly modelled. Arguably, the latter models can be more difficult to correctly specify because all of the predicted counterfactuals derive from the same outcome Y; different models for these may therefore fail to yield a coherent model specification.

Unfortunately, standard errors obtained through sandwich estimators in standard regression software are no longer guaranteed to be conservative because they ignore the uncertainty in the initial estimate ψˆ(0) (although in the application of Section 4, we have observed mild conservatism). We therefore recommend the bootstrap.

3.3 Censoring by loss-to-follow-up

Loss-to-follow-up is common in longitudinal studies. It induces missingness in the end-of-study outcome, but also in the intermediate exposures and confounders. When, at each time, the decision to leave the study is independent of future exposures, covariates and outcome, given the exposures and covariates measured up to that time (in the sense of ‘missing at random’) (Robins et al. 1995; Lok and DeGruttola 2012), then we may use inverse probability of censoring weighting to adjust for censoring by loss-to-follow-up (Robins et al. 1995; Robins 2000; Lok and DeGruttola 2012). That is, let Ct equal 1 if a subject was lost to follow-up by time t and 0 otherwise. Then the outcome model must be fitted via a weighted regression, using weights

wTt = Π_{s=t+1}^{T} 1 / P(Cs = 0 | Cs−1 = 0, Aˉs−1, Lˉs−1)

for the outcome Ht. The propensity score models at each time t can then be fitted within subjects who are uncensored at that time, without the need for weighting (Lok and DeGruttola 2012), provided that all predictors of censoring (that are also associated with the exposure) at the considered time are included in the propensity score model.

The weights wTt are similar, though not identical, to those used in the fitting procedure for MSMs. Indeed, because the SNMM already conditions on the exposure and confounder history up to time t, there is no further need to adjust for loss-to-follow-up up to that time. The weights wTt are therefore generally less variable than those used in the fitting of MSMs. They can be made even less variable as follows:

w*Tt = Π_{s=t+1}^{T} P(Cs = 0 | Cs−1 = 0, Aˉt−1, Lˉt) / P(Cs = 0 | Cs−1 = 0, Aˉs−1, Lˉs−1).

Here, the probabilities in the numerator may include the exposure and covariate history until time t−1 and t, respectively. We will refer to the weights w*Tt as ‘stabilised weights’ because they would reduce to 1 if censoring at each time s > t had no residual dependence on exposure and covariate values measured past time t, conditional on Aˉt−1, Lˉt. Also these weights differ from the stabilised weights that are commonly used in the fitting of MSMs (where the probabilities in the numerator exclude time-varying covariates and where the product starts at s = 1), and are generally expected to be more stable (i. e., less variable). For instance, the stabilised weights w*T,T−1 equal P(CT = 0 | CT−1 = 0, AˉT−2, LˉT−1) / P(CT = 0 | CT−1 = 0, AˉT−1, LˉT−1), which will often be close to 1.
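Given fitted per-visit probabilities of remaining uncensored, the unstabilised weights wTt only multiply inverse probabilities over the visits after time t. A toy computation (the probability values are hypothetical):

```python
import numpy as np

# Column s holds the fitted P(C_s = 0 | C_{s-1} = 0, history) for
# visits s = 1, ..., T, one row per subject; here T = 3
p_stay = np.array([[0.95, 0.90, 0.85],
                   [0.99, 0.97, 0.96]])

# w_{Tt} multiplies the inverse probabilities over visits s = t+1, ..., T:
# reverse, take the running product, reverse back, and invert
w = 1.0 / np.cumprod(p_stay[:, ::-1], axis=1)[:, ::-1]
# columns of w correspond to t = 0, 1, 2: w[:, 0] = w_{T0}, etc.
```

For the first subject, w_{T2} = 1/0.85 uses only the final visit, whereas w_{T0} = 1/(0.95 · 0.90 · 0.85) accumulates all three; the MSM weights would instead accumulate over all visits for every term, which is why the SNMM weights are generally less variable.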

3.4 Time-varying outcomes

The previous results extend immediately to time-varying outcomes Ys, s = 1, ..., T. A SNMM must then be postulated for each such outcome by considering it as being the end-of-study outcome. For instance, the SNMM [3] can be extended to

E{Ys(aˉt, 0) − Ys(aˉt−1, 0) | aˉt, lˉt} = ψat,   [10]
for all s = 1, ..., T and all t < s; this model postulates that exposure at each time has the same effect on all subsequent outcomes. This assumption can be relaxed via the model

E{Ys(aˉt, 0) − Ys(aˉt−1, 0) | aˉt, lˉt} = {ψ0 + ψ1(t − s + 1)}at,   [11]
for all s = 1, ..., T and all t < s, which allows for a decaying exposure effect over time. In particular, model [11] assumes that the effect of a unit increase in the exposure on the mean of the next outcome is ψ0, and that this changes by ψ1 per time unit. The model

E{Ys(aˉt, 0) − Ys(aˉt−1, 0) | aˉt, lˉt} = ψ′zst at,   [12]

for s = 1, ..., T and t < s, encompasses these examples; this can be seen upon setting zst = 1 for model [10] and zst = (1, t − s + 1) for model [11].

Model [12] can be fitted with a relatively minor modification of the procedure proposed in Section 3.2. In particular, a preliminary consistent estimate ψˆ(0) of ψ can now be obtained by estimating the effect of each exposure At on the subsequent outcome Yt+1 for t = 0, ..., T−1, following the procedure of Section 2.1. This can be done by regressing the outcomes Yt for t = 1, ..., T on the covariate history Lˉt−1 (which may include previous outcomes) and the exposure history Aˉt−2, as well as the terms Zt,t−1At−1 and Zt,t−1Pt−1 in the SNMM, i. e. by fitting the regression model

E(Yt | Aˉt−1, Lˉt−1) = β0 + β1′Lˉt−1 + β2′Aˉt−2 + β3′Zt,t−1Pt−1 + ψ′Zt,t−1At−1
for t = 1, ..., T. As before, this must be done in a way that ignores the dependence between measurements (e. g. using independence GEE, possibly allowing for heteroscedasticity). The resulting estimate ψˆ(0) of ψ can be used to calculate unbiased predictions

Hst = Ys − Σ_{r=t+1}^{s−1} ψˆ(0)′Zsr Ar
for the counterfactuals Ys(Aˉt, 0) for s = 1, ..., T and t < s. These can next be regressed on the covariate and exposure history, as well as the terms ZstAt and ZstPt in the SNMM. This can be done by fitting the regression model

E(Hst | Aˉt, Lˉt) = β0 + β1′Aˉt−1 + β2′Lˉt + β3′ZstPt + ψ′ZstAt

for all s = 1, ..., T and all t < s using independence GEE, regarding the measurements Hst as repeatedly measured outcomes, but ignoring their dependence as before; ignoring their dependence will not imply a loss of efficiency when the covariate history includes previous outcomes, for then the dependence across outcome measurements is already modelled via the term β2′Lˉt. When the resulting g-estimate ψˆ(1) delivers only a subvector of ψ, then the above process may need to be repeated a few times, as explained in Section 3.2.
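The bookkeeping behind the predictions Hst is easily mechanised. A toy computation under model [10] (zst = 1; the exposure values, outcomes and preliminary estimate below are all hypothetical):

```python
import numpy as np

psi_hat = 0.5                        # hypothetical preliminary estimate
A = np.array([1.0, 0.0, 2.0])        # exposures at t = 0, 1, 2 (T = 3)
Y = {1: 3.0, 2: 4.0, 3: 5.0}         # outcomes at s = 1, 2, 3

# H_st subtracts the estimated effect of the exposures at times
# t+1, ..., s-1 from the outcome Y_s
H = {(s, t): Y[s] - psi_hat * A[t + 1:s].sum()
     for s in Y for t in range(s)}
# e.g. H[(3, 0)] = 5.0 - 0.5 * (A[1] + A[2]) = 4.0, while H[(s, s-1)] = Y_s
```

Each outcome thus generates a growing set of predictions, one per earlier exposure time, which are then stacked as the repeatedly measured outcomes of the independence-GEE regression above.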

4 Analysis of the effect of physical activity on functional performance

We consider a re-analysis of the Osteoarthritis Initiative study (Dunlop et al. 2011), a multicenter follow-up study, based at four clinical sites in the U.S., of knee osteoarthritis in men and women aged 45–79 years who either have or are at risk for developing knee osteoarthritis. As in Mansournia et al. (2012), we will estimate the effect of physical activity on functional performance based on 2,545 subjects with radiographic knee osteoarthritis who had data on all baseline covariates. Participants were assessed at baseline and on three annual follow-up visits. Physical activity levels were based on the Physical Activity Scale for the Elderly questionnaire, which resulted in a score between 0 and 795. Functional performance was assessed using the timed 20-m walk test (average speed in meters per minute (m/min) during two 20-m tests). Cumulative dropout rates equalled 12.2 % by visit 2, 21.8 % by visit 3 and 28.3 % by visit 4. For all analyses below, 95 % confidence intervals will be based on the nonparametric bootstrap with 1,000 resamples.

4.1 Marginal structural models

Mansournia et al. (2012) categorised the exposure based on the quartiles of the physical activity score, and subsequently considered the MSM defined by


for t=0,1,2, where at1,at2 and at3 are (0/1) indicators of moderate, high and very high physical activity on visit t (relative to low physical activity), and V is a set of baseline confounders. We used the baseline confounders (e. g. age, gender, marital status, ankle pain, hip pain, …) and time-varying confounders (concurrent to the exposure, e. g. knee symptoms, depressive symptoms based on the Center for Epidemiologic Studies Depression Scale, body mass index, functional performance (history) and knee pain) reported in Mansournia et al. (2012). We used a slightly more flexible exposure model specification by including the square root and square of age, and the square of previous functional performance, besides all main effects. We moreover used the stabilised weights (w.r.t. both the treatment and censoring model) that are commonly used in the analysis of MSMs. This results in effect estimates of 0.37 (95 % confidence interval −0.27 to 1.01), 0.38 (−0.29 to 1.05) and 0.44 (−0.29 to 1.17) for moderate, high and very high physical activity on walking speed (relative to low physical activity). While our estimates differ somewhat owing to our choice of exposure model, they are nevertheless broadly similar to those found by Mansournia et al. (2012), who reported corresponding effect estimates of 0.48 (−0.12 to 1.08), 0.45 (−0.23 to 1.13) and 0.46 (−0.29 to 1.22) meters/min, respectively.
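The mechanics of stabilised inverse probability weighting can be sketched on simulated data (our own toy example with made-up variable names, not the study data; for point estimation, a weighted least squares fit agrees with weighted independence GEE):

```r
set.seed(2)
n <- 100000
L0 <- rnorm(n)
A0 <- rbinom(n, 1, plogis(L0))
L1 <- L0 + A0 + rnorm(n)                 # confounder affected by earlier exposure
A1 <- rbinom(n, 1, plogis(L1 - A0))
Y  <- 0.5 * A0 + 0.5 * A1 + L0 + L1 + rnorm(n)
# True MSM: E{Y(a0,a1)} has coefficient 1.5 on a0 (0.5 direct + 1 via L1) and 0.5 on a1.

# Stabilised weights: numerator conditions on exposure history only,
# denominator additionally on the confounder history.
p <- function(m, a) ifelse(a == 1, fitted(m), 1 - fitted(m))
sw <- p(glm(A0 ~ 1, binomial), A0) * p(glm(A1 ~ A0, binomial), A1) /
  (p(glm(A0 ~ L0, binomial), A0) * p(glm(A1 ~ L1 + A0, binomial), A1))

msm <- lm(Y ~ A0 + A1, weights = sw)     # weighted fit of the MSM
coef(msm)[c("A0", "A1")]                 # approx 1.5 and 0.5
```

An unweighted regression of Y on A0 and A1 with L1 as a covariate would, by contrast, be biased for the effect of A0, since L1 is an intermediate on that pathway.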

4.2 Structural nested models for short-term effects

We next considered the related linear SNMM


for t=0,1,2. Since this model only parameterises an effect of the previous exposure on the outcome at each time t, parameter estimates can be obtained as explained for the calculation of ψˆ(0) in Section 3.4; that is, by calculating the probabilities ptj of the exposure taking the value j at time t for each individual (given the exposure and confounder history) under a multinomial propensity score model, and next fitting the corresponding model


using independence GEE; see the Appendix for R-code. Because the linear SNMM [14] evaluates the effect of each exposure At on the subsequent outcome Yt+1, we used the stabilised weights wt+1,t defined in eq. [9] to adjust for censoring of the outcome Yt+1. We obtained effects of 0.33 (−0.20 to 0.86), 0.41 (−0.16 to 0.96) and 0.35 (−0.32 to 1.00), which are similar to the previously obtained IPW estimates, but more precise. In particular, estimated relative efficiencies are around 0.70 when based on the bootstrap and around 0.55 when based on naive standard errors. This lower efficiency of the IPW estimates is partly related to the use of inverse probability of treatment weights, which were not required for the g-estimator. The difference is even more pronounced when using naive standard errors, as commonly done for IPW estimators in practice, because those standard errors are conservative for IPW estimators, but not for g-estimators when the outcome model is correctly specified. Naive sandwich standard errors obtained from standard software for independence GEE equalled 0.26, 0.28 and 0.33 for the g-estimator, and were indeed very close to the bootstrap standard errors of 0.27, 0.28 and 0.34, respectively.
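The structure of this g-estimator for a categorical exposure can be illustrated with a self-contained toy simulation (ours, not the study data); the appendix code uses nnet::multinom for the multinomial propensity score model in the same way:

```r
library(nnet)                      # multinom(); ships with standard R installations
set.seed(3)
n <- 50000
L <- rnorm(n)                      # confounder
# three exposure levels (0/1/2) with L-dependent multinomial probabilities
pA <- cbind(1, exp(0.8 * L), exp(1.6 * L))
pA <- pA / rowSums(pA)
u <- runif(n)
A <- (u > pA[, 1]) + (u > pA[, 1] + pA[, 2])
Y <- 0.4 * (A == 1) + 0.7 * (A == 2) + L + rnorm(n)    # true effects 0.4 and 0.7

# Step 1: multinomial propensity score model; fitted() returns an n x 3 matrix
ps <- fitted(multinom(factor(A) ~ L, trace = FALSE))

# Step 2: regress the outcome on the exposure indicators, the corresponding
# fitted propensities and the confounder
fit <- lm(Y ~ I(A == 1) + I(A == 2) + ps[, 2] + ps[, 3] + L)
coef(fit)[2:3]                     # approx 0.4 and 0.7
```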

In all of the analyses, the covariate Lt at each time t includes the outcome history at that time to control for confounding by previous outcome. Model [15] can thus be viewed as a so-called transition model (Diggle et al. 2013). Because it directly models the dependence on previous outcome, the use of independence GEE then does not imply an efficiency loss (provided a correct outcome model is used).

We subsequently avoided categorisation of the exposure via the linear SNMM


for t=0,1,2, where, to avoid skewness, we redefined At ≡ log(At+1), which varies between 0 and 6.3; here, At inside the logarithm refers to the original (continuous) physical activity score at time t. Parameter estimates were obtained by fitting the corresponding model


for t=0,1,2 using independence GEE, where pt is the fitted value from a linear propensity score model for At; see the Appendix. Again, stabilised weights wt+1,t were used to adjust for censoring of the outcome Yt+1. This resulted in an effect estimate of 0.36 (−0.03 to 0.74), which suggests stronger, but still marginal, evidence of an effect of physical activity on functional performance. IPW estimation for MSMs could alternatively be used, but with a continuous exposure this would demand specification of the entire exposure distribution rather than just its mean. The performance of the resulting IPW estimates may then be questionable, as the weights would likely be highly variable and sensitive to misspecification of the tails of that distribution.
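For a continuous exposure the same two-step recipe applies with a linear propensity (i.e. conditional mean) model; a minimal sketch on simulated data (our own construction, with hypothetical variables):

```r
set.seed(4)
n <- 50000
L <- rnorm(n)
A <- 0.5 * L + 0.3 * L^2 + rnorm(n)   # continuous exposure; only its mean is modelled
Y <- 0.5 * A + L + rnorm(n)           # true exposure effect 0.5

p <- fitted(lm(A ~ L + I(L^2)))       # fitted mean exposure ("linear propensity score")
fit <- lm(Y ~ A + p + L)              # regress outcome on exposure, propensity, history
coef(fit)["A"]                        # approx 0.5
```

Note that, unlike IPW, only the conditional mean E(A|L) is modelled here, not the full conditional exposure distribution.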

4.3 Structural nested models for short- and long-term effects

All previous SNMMs parameterise short-term exposure effects only; that is, they merely model the effect of each exposure At on the subsequent outcome Yt+1, but not on later outcomes Ys, s&gt;t+1. The results obtained under these models remain valid even when longer-term effects exist, because the considered models make no assumption about such effects. This is not the case for the MSM [13], which explicitly assumes that there are no long-term effects; IPW estimates of its parameters may therefore be biased when this assumption is wrong.

To develop insight into the presence or absence of long-term effects, we fitted the linear SNMM [11], which allows for a changing exposure effect with increasing time lags between exposure and outcome. This can be viewed as a model for the effect of At on the counterfactual Ys(Aˉt,0). These counterfactuals are observed when s=t+1, in which case they equal the observed outcome Yt+1, but are unobserved otherwise. As previously explained, we therefore predict the counterfactuals Ys(Aˉt,0) and next build a regression model for the resulting predictions Hst. In particular, we regress those predictions onto At (to infer ψ0) and the corresponding propensity score Pt, onto At(s−t−1) (to infer ψ1) and the corresponding propensity score Pt(s−t−1), and finally onto the exposure history Aˉt−1 and the covariate history Lˉt (in view of the unknown expectation E{Ys(aˉt−1,0)|aˉt−1,lˉt} being a function of the exposure and covariate history). Using the previously obtained estimate ψˆ0(0)=0.36 as a preliminary estimate of ψ0, we thus calculated the predictions


and built a regression model for those. With the first 3 components, Y1,Y2,Y3, already available in the database (which has 3 lines per subject), we thus extended the database with 2 lines per subject, containing the remaining components Y2−ψˆ0(0)A1 and Y3−ψˆ0(0)A2. In accordance with model [11], the resulting variable Z was regressed onto the exposure vector B1≡(A0,A1,A2,A0,A1) (to infer ψ0) and the corresponding propensity scores P1≡(P0,P1,P2,P0,P1), onto B2≡(0,0,0,A0,A1) (to infer ψ1) and the corresponding propensity scores P2≡(0,0,0,P0,P1), and additionally on the exposure history A≡(0,A0,A1,0,A0) and covariate history L≡(L0,L1,L2,L0,L1). The database was thus extended with a column representing the variable B1, which stores the previous exposures A0,A1,A2 on the first three lines for each subject, and the exposures A0,A1 on the next two lines. Columns representing the variables B2,P1,P2,A and L were constructed analogously.


was fitted using independence GEE (viewing the predictions in Z as a repeatedly measured outcome), weighting each outcome Hst by the previously defined stabilised weights wst (see the Appendix). This resulted in a preliminary estimate ψˆ1(1)=−0.69 for ψ1 and an updated estimate ψˆ0(1)=0.36 of ψ0.

In principle, the estimation process could stop here. However, because the estimates ψˆ0(1) and ψˆ1(1) enable us to additionally predict Y3(0), they allow us to extract more information about the long-term effect by also evaluating the association between A0 and Y3(0). We therefore extended the database with one final line for each subject, containing the prediction


and also updated the predictions H20 and H31. We thus redefined


for each subject, and refitted the above marginal regression model with the redefinitions B1≡(A0,A1,A2,A0,A1,A0), B2≡(0,0,0,A0,A1,2A0), A≡(0,A0,A1,0,A0,0), L≡(L0,L1,L2,L0,L1,L0), P1≡(P0,P1,P2,P0,P1,P0) and P2≡(0,0,0,P0,P1,2P0). This resulted in estimates 0.22 for ψ0 and −0.13 for ψ1. One additional iteration gave 0.25 (−0.088 to 0.59) for ψ0 and −0.21 (−0.48 to 0.056) for ψ1, indicating a decaying effect over time (decaying by about 0.2 per visit). Further iterations gave no further change. We thus observe a weakening effect of physical activity with longer time lags, although the evidence for this remains weak.

4.4 Modelling effect modification by time-varying covariates

Finally, given that the evidence for long-term effects is weak, we returned to a model for short-term effects only, and investigated the possibility of effect modification by time-fixed and time-varying covariates. In particular, we considered the model


for t=0,1,2, with V1t denoting the logarithm of 1 plus the WOMAC (Western Ontario and McMaster Universities) pain score at visit t (which varies between 0 and 3) and V2 being 1 for individuals who are less than 65 years of age (and 0 otherwise). This model was fitted by regressing the outcome at each time on each of the terms in the right-hand side of the above model, as well as the corresponding terms with the exposure substituted by the propensity score, and on the covariate and exposure history, using weighted independence GEE. From the estimated coefficients ψˆ0=1.66 (0.63 to 2.68), ψˆ1=−0.43 (−0.89 to 0.035), ψˆ2=−1.21 (−2.36 to −0.056), ψˆ3=−0.59 (−1.24 to 0.055) and ψˆ4=0.72 (−0.16 to 1.61), we find strong evidence of an effect of physical activity in elderly individuals without pain at the first visit time, marginal evidence that this effect weakens with higher pain scores and marginal evidence of a larger effect of physical activity in the elderly. Furthermore, there is marginal evidence of a decreasing effect of physical activity with time in the elderly, although not in people less than 65 years of age.

5 Discussion

SNMMs, along with the associated g-estimation method, deserve to play a greater role in studies of the causal effect of time-varying exposures. These models enable a deeper understanding of exposure effects by describing how exposure effects differ between strata defined by time-varying covariates. They moreover allow for borrowing of information between these strata, so that the associated g-estimators can make more efficient use of the information in the data, while downplaying strata that contain only exposed or unexposed subjects and thereby carry no real information about the exposure effect. This can result in more stable inferences relative to the more popular IPW estimation approaches, which sometimes perform poorly in ways that may subsequently be concealed by heuristic truncation of extreme weights.

For pedagogic purposes, we have limited our development to linear SNMMs. Similar developments can be made for loglinear SNMMs for counts (or more generally, positive constrained outcomes) (Robins 1994), as well as for structural nested cumulative failure time models for survival endpoints (Martinussen et al. 2011; Picciotto et al. 2012). These extensions will be published elsewhere. In future work, we will moreover examine if standard software can be tweaked to produce modified sandwich standard errors that take the uncertainty in the estimated exposure model into account, in order to avoid time-consuming bootstrap procedures. In addition, we have argued that the use of independence GEE should deliver reasonably efficient estimators. It remains to be studied if meaningful efficiency gains can still be realised under realistic data-generating mechanisms. In the considered data analysis of Section 4, the inverse probability weights used in the fitting of the MSM were not very variable (the smallest weight was 0.23, the largest was 3.39). In studies with strong confounding and/or long follow-up, more variable inverse probability weights are typically seen; the same would be true when analysing a continuous exposure (as in some of the SNMMs that we fitted). We conjecture that in that case even greater benefits of g-estimation can be expected.


Estimation approach for point exposures

Define L˜i≡(1,Li,ZiPi). It is easily verified that the ordinary least squares estimator of ψ, obtained in step 2 of the proposed procedure, is obtained as the solution to


where E˜{ZiAi|L˜i} is the fitted value from an ordinary least squares regression of ZiAi onto L˜i. This estimator is asymptotically equivalent to the solution to


where E˜{ZiAi|L˜i} is the population fitted value from an ordinary least squares regression of ZiAi onto L˜i; that is, E˜{ZiAi|L˜i} is the linear function of L˜i that solves


Suppose now that the propensity score model is correctly specified. Then it follows from the fact that E(Ai|Li)=Pi and the above identity that E˜{ZiAi|L˜i}=ZiPi. The proposed estimator is then asymptotically equivalent to the solution to


which is the g-estimator considered in Robins et al. (1992b), for instance. The efficient g-estimator according to Robins (1994) (see also Vansteelandt and Joffe 2014) is the solution to


for E{Yi−ψZiAi|Li}=β0+β1Li, when the outcome variance is constant at all levels of exposure and confounders. When the propensity score model is a canonical generalised linear model which includes the terms Zi and ZiLi, then it follows from the score equations of that model that


so that eqs [18] and [19] give identical solutions. It follows in that case that the efficient g-estimator of ψ is asymptotically equivalent (and under some conditions also mathematically identical – not shown) to the proposed estimator, i. e. the solution to eq. [18].

It is relatively easy to see that the proposed g-estimator is unbiased (in large samples) for ψ in the model E{Y(a)−Y(0)|l}=ψza when the propensity score model is correctly specified; the estimator is therefore, by definition, a g-estimator. This is because eq. [18] then has mean


because Zi is a function of Li and E(Ai|Li)=Pi. That the obtained estimator is also unbiased (in large samples) when the propensity score model is misspecified but the outcome model in step 2 of the procedure is correctly specified, can be seen as follows. In that case, the equation has mean


where the last equality can be understood upon noting that


are the score functions from an ordinary least squares regression of ZiAi onto L˜i, which have mean zero by construction (see eq. [17]), and the fact that L˜i includes 1 and Li. Moreover, in that case, the uncertainty in the fitted propensity score model can be ignored when calculating a standard error for the g-estimator. The reason is that the gradient of the estimating functions w.r.t. the propensity score parameters then has mean 0, which is a consequence of the double robustness of the estimator. Indeed, the double robustness implies that the estimating function has mean zero when evaluated at the true outcome model, for each choice of the propensity score parameters. The gradient w.r.t. those propensity score parameters must thus equal zero. This implies that the estimating functions are locally insensitive to perturbations in the propensity score parameters of order n−1/2. It thus follows in particular that the usual sandwich standard errors are not conservative when the outcome model is correctly specified.
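This double robustness can be checked numerically. In the sketch below (our own construction on simulated data), the g-estimate of ψ remains close to the truth when either the propensity score model or the outcome model, but not both, is deliberately misspecified:

```r
set.seed(5)
n <- 100000
psi <- 0.5
L <- rnorm(n)
A <- rbinom(n, 1, plogis(0.5 * L + 0.5 * L^2))   # true propensity: nonlinear in L
Y <- psi * A + L + L^2 + rnorm(n)                # true outcome model: has an L^2 term

p_wrong <- fitted(glm(A ~ L,          family = binomial))   # misspecified propensity
p_right <- fitted(glm(A ~ L + I(L^2), family = binomial))   # correct propensity

psi_a <- coef(lm(Y ~ A + p_wrong + L + I(L^2)))["A"]  # correct outcome model
psi_b <- coef(lm(Y ~ A + p_right + L))["A"]           # misspecified outcome model
c(psi_a, psi_b)                                       # both approx 0.5
```

In the first fit the outcome regression correctly includes the L^2 term, so the wrong propensity score is harmless; in the second, the exposure residual A − p_right is uncorrelated with every function of L, which protects the coefficient of A despite the omitted L^2 term.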

Relation between SNMMs and MSMs

For pedagogic purposes, we demonstrate the relation between eq. [3] and the MSM


for T=3. Note that




under eq. [3]. Further,


where in the second equality we make use of the fact that, by assumption, A0 ⊥ Y(aˉ1,0)|L0 and A0 ⊥ Y(a0,0)|L0. Finally, by the same reasoning we have


It follows that



The dataset fin contains 4 lines for each subject, corresponding to the 4 measurement times (the first of which is the baseline visit). In the analyses below, the only time-varying variables are the outcome FUNCTION_MIN, the exposure PASECAT (if categorical) or PASE (if continuous), and the remaining confounders BMIGROUP, KNEESYMPTOM, WOMACPAIN, CESDSBIN. Those variables starting with ‘B’ refer to the realisations of these variables at the baseline visit. Those variables starting with ‘L’ and ‘LL’ refer to the corresponding lagged variables assessed one or two visits, respectively, prior to the considered time. The following display, which involves data for 2 subjects, shows the data structure:


To fit model [14], we first fitted a propensity score model using the following R-code (with d the name of the database):

install.packages("nnet")
library(nnet)
mod <- multinom(PASECAT ~ factor(LPASECAT) + SMOKE + factor(ALCOHOL)
  + factor(RACE) + factor(EDUCATION) + COMORBIDITY
  + MARRIED + SEX + AGE + I(sqrt(AGE)) + I(AGE^2) + factor(OASEVERITY)
  + HIPPAIN + ANKLEPAIN + FOOTPAIN + factor(TIME) + KNEEINJURY
  + BFUNCTION_MIN + factor(BBMIGROUP) + BKNEESYMPTOM
  + BWOMACPAIN + BCESDSBIN + FUNCTION_MIN
  + I(FUNCTION_MIN^2) + factor(BMIGROUP) + KNEESYMPTOM
  + WOMACPAIN + CESDSBIN, data = d, maxit = 250)
ps <- predict(mod, newdata = d, type = "probs")
a <- cbind(ifelse(d$PASECAT == 0, 1, 0), ifelse(d$PASECAT == 1, 1, 0),
           ifelse(d$PASECAT == 2, 1, 0), ifelse(d$PASECAT == 3, 1, 0))

We then created lagged copies of exposure and propensity score to allow that functional performance at each time is influenced by previous rather than concurrent physical activity levels:

install.packages("DataCombine")
library(DataCombine)
finr <- data.frame(cbind(d, a, ps))
finr <- slide(finr, Var = "X2", GroupVar = "ID", slideBy = -1, NewVar = "LX1")
finr$LX1[fin$TIME == 1] <- 0
finr <- slide(finr, Var = "X3", GroupVar = "ID", slideBy = -1, NewVar = "LX2")
finr$LX2[fin$TIME == 1] <- 0
finr <- slide(finr, Var = "X4", GroupVar = "ID", slideBy = -1, NewVar = "LX3")
finr$LX3[fin$TIME == 1] <- 0
finr <- slide(finr, Var = "X1.1", GroupVar = "ID", slideBy = -1, NewVar = "PX1")
finr$PX1[fin$TIME == 1] <- 0
finr <- slide(finr, Var = "X2.1", GroupVar = "ID", slideBy = -1, NewVar = "PX2")
finr$PX2[fin$TIME == 1] <- 0
finr <- slide(finr, Var = "X3.1", GroupVar = "ID", slideBy = -1, NewVar = "PX3")
finr$PX3[fin$TIME == 1] <- 0

Here, LX1 is an indicator of having been in physical activity class 1 at the previous visit, and PX1 is the corresponding probability (or propensity) of this happening; likewise for LX2, LX3, PX2 and PX3. Next, with C being a column vector which takes the value 0 if the considered subject was present at the current and all previous visits, and 1 otherwise, we constructed inverse probability of censoring weights as follows:

modc <- glm(C ~ LPASE + I(sqrt(LPASE)) + SMOKE + factor(ALCOHOL)
  + factor(RACE) + factor(EDUCATION) + COMORBIDITY + MARRIED
  + SEX + AGE + I(sqrt(AGE)) + I(AGE^2) + factor(OASEVERITY) + HIPPAIN
  + ANKLEPAIN + FOOTPAIN + factor(TIME) + KNEEINJURY
  + BFUNCTION_MIN + factor(BBMIGROUP) + BKNEESYMPTOM
  + BWOMACPAIN + BCESDSBIN + LFUNCTION_MIN + factor(BMIGROUP)
  + LKNEESYMPTOM + LWOMACPAIN + LCESDSBIN, family = binomial,
  data = fin[(fin$LC == 0) & (fin$TIME > 1), ])
modcb <- glm(C ~ BPASE + I(sqrt(BPASE)) + SMOKE + factor(ALCOHOL)
  + factor(RACE) + factor(EDUCATION) + COMORBIDITY + MARRIED
  + SEX + AGE + I(sqrt(AGE)) + I(AGE^2) + factor(OASEVERITY) + HIPPAIN
  + ANKLEPAIN + FOOTPAIN + factor(TIME) + KNEEINJURY
  + BFUNCTION_MIN + factor(BBMIGROUP) + BKNEESYMPTOM
  + BWOMACPAIN + BCESDSBIN, family = binomial,
  data = fin[(fin$LC == 0) & (fin$TIME > 1), ])
w <- (1 - predict(modcb, newdata = finr[(finr$TIME > 1) & (finr$C == 0), ],
       type = "response")) /
     (1 - predict(modc, newdata = finr[(finr$TIME > 1) & (finr$C == 0), ],
       type = "response"))

Finally, we fitted the SNMM [14] as follows:


To fit model [16], we first fitted a propensity score model using the following R-code:

mod <- lm(I(log(PASE + 1)) ~ I(log(LPASE + 1)) + I(log(LPASE + 1)^2) + SMOKE
  + factor(ALCOHOL) + factor(RACE) + factor(EDUCATION) + COMORBIDITY
  + MARRIED + SEX + AGE + I(sqrt(AGE)) + I(AGE^2) + factor(OASEVERITY)
  + HIPPAIN + ANKLEPAIN + FOOTPAIN + factor(TIME) + KNEEINJURY
  + BFUNCTION_MIN + factor(BMIGROUP) + factor(BBMIGROUP)
  + BKNEESYMPTOM + BWOMACPAIN + BCESDSBIN + FUNCTION_MIN
  + I(FUNCTION_MIN^2) + KNEESYMPTOM + WOMACPAIN + CESDSBIN, data = d)
f <- predict(mod, newdata = d)
a <- log(d$PASE + 1)
finr <- data.frame(cbind(d, a, f))
finr <- slide(finr, Var = "a", GroupVar = "ID", slideBy = -1, NewVar = "La")
finr$La[fin$TIME == 1] <- 0
finr <- slide(finr, Var = "f", GroupVar = "ID", slideBy = -1, NewVar = "Lf")
finr$Lf[fin$TIME == 1] <- 0

We then fitted the model as


Finally, to fit model [11], we first augmented the dataset so that the column of lagged exposures contains the exposure measurements B1≡(A0,A1,A2,A0,A1) defined in the main text, and likewise for A and L:

gfin <- data.frame(rbind(finr[finr$TIME == 2, ], finr[finr$TIME == 3, ],
  finr[finr$TIME == 4, ], finr[finr$TIME == 2, ], finr[finr$TIME == 3, ],
  finr[finr$TIME == 2, ]))

We then constructed a corresponding vector of outcomes, visit times and censoring indicators:

y <- finr$FUNCTION_MIN
ti <- finr$TIME
ce <- finr$C
gfin$FUNCTION_MIN <- c(y[finr$TIME == 2], y[finr$TIME == 3], y[finr$TIME == 4],
  y[finr$TIME == 3], y[finr$TIME == 4], y[finr$TIME == 4])
gfin$TIME <- c(ti[ti == 2], ti[ti == 3], ti[ti == 4], ti[ti == 3], ti[ti == 4],
  ti[ti == 4])
gfin$C <- c(finr$C[ti == 2], finr$C[ti == 3], finr$C[ti == 4], finr$C[ti == 3],
  finr$C[ti == 4], finr$C[ti == 4])

We then constructed a variable that contains the lag time between exposure and outcome:

l <- length(finr$La[ti == 2])
gfin$lag <- c(rep(0, length = l), rep(0, length = l), rep(0, length = l),
  rep(1, length = l), rep(1, length = l), rep(2, length = l))

Censoring weights were obtained as:

wh <- (1 - predict(modcb, newdata = finr[ti > 1, ], type = "response")) /
      (1 - predict(modc, newdata = finr[ti > 1, ], type = "response"))
s <- ti[ti > 1]
w <- c(wh[s == 2], wh[s == 3], wh[s == 4], wh[s == 3] * wh[s == 2],
  wh[s == 4] * wh[s == 3], wh[s == 3] * wh[s == 4] * wh[s == 2])

This allowed us to update our initial estimate from the earlier analysis of model [16] as follows:

psi0 <- coef(mody)[2]
gfin$H <- gfin$FUNCTION_MIN - psi0 * gfin$a * (gfin$lag == 1)
mody <- lm(H ~ La + La:lag + Lf + Lf:lag + SMOKE + factor(ALCOHOL) + factor(RACE)
  + factor(EDUCATION) + COMORBIDITY + MARRIED + SEX + AGE + I(sqrt(AGE))
  + I(AGE^2) + factor(OASEVERITY) + HIPPAIN + ANKLEPAIN + FOOTPAIN + TIME
  + KNEEINJURY + BFUNCTION_MIN + factor(BMIGROUP) + BKNEESYMPTOM
  + BWOMACPAIN + BCESDSBIN + LFUNCTION_MIN + LKNEESYMPTOM + LWOMACPAIN
  + LCESDSBIN, data = gfin[(gfin$C == 0) & (gfin$lag < 2), ],
  weights = w[(gfin$C == 0) & (gfin$lag < 2)])
psi0 <- coef(mody)[c(2, 34)]

A subsequent update was obtained as:

gfin$H <- gfin$FUNCTION_MIN - psi0[1] * gfin$a * (gfin$lag == 1)
gfin$H[gfin$lag == 2] <- gfin$H[gfin$lag == 2] - (psi0[1] * finr$a[finr$TIME == 3]
  + psi0[2] * finr$a[finr$TIME == 2])
mody <- lm(H ~ La + La:lag + Lf + Lf:lag + SMOKE + factor(ALCOHOL) + factor(RACE)
  + factor(EDUCATION) + COMORBIDITY + MARRIED + SEX + AGE + I(sqrt(AGE))
  + I(AGE^2) + factor(OASEVERITY) + HIPPAIN + ANKLEPAIN + FOOTPAIN + TIME
  + KNEEINJURY + BFUNCTION_MIN + factor(BMIGROUP) + BKNEESYMPTOM
  + BWOMACPAIN + BCESDSBIN + LFUNCTION_MIN + LKNEESYMPTOM + LWOMACPAIN
  + LCESDSBIN, data = gfin[gfin$C == 0, ], weights = w[gfin$C == 0])
psi0 <- coef(mody)[c(2, 34)]

Upon iterating the code in the final display above one more time, a final estimate was obtained.


Brumback, B., Greenland, S., Redman, M., Kiviat, N., and Diehr, P. (2003). The intensity-score approach to adjusting for confounding. Biometrics, 59:274–285.

Bryan, J., Yu, Z., and van der Laan, M. J. (2004). Analysis of longitudinal marginal structural models. Biostatistics, 5:361–380.

Cole, S. R., and Hernán, M. A. (2008). Constructing inverse probability weights for marginal structural models. American Journal of Epidemiology, 168:656–664.

Crump, R. K., Hotz, V. J., Imbens, G. W., and Mitnik, O. A. (2009). Dealing with limited overlap in estimation of average treatment effects. Biometrika, 96:187–199.

Diggle, P., Heagerty, P., Liang, K.-Y., and Zeger, S. (2013). Analysis of Longitudinal Data. Oxford: Oxford University Press.

Dunlop, D. D., Song, J., Semanik, P. A., Sharma, L., and Chang, R. W. (2011). Physical activity levels and functional performance in the osteoarthritis initiative: A graded relationship. Arthritis & Rheumatism, 63:127–136.

Hernán, M. A., Brumback, B., and Robins, J. M. (2000). Marginal structural models to estimate the causal effect of zidovudine on the survival of HIV-positive men. Epidemiology, 11:561–570.

Hernán, M. A., and Robins, J. M. (2006). Estimating causal effects from epidemiological data. Journal of Epidemiology and Community Health, 60:578–586.

Joffe, M. M. (2012). Structural nested models, g-estimation, and the healthy worker effect: The promise (mostly unrealized) and the pitfalls. Epidemiology, 23:220–222.

Joffe, M. M., Yang, W., and Feldman, H. (2010). Selective ignorability assumptions in causal inference. International Journal of Biostatistics, 6:Article 25.

Joffe, M. M., Yang, W. P., and Feldman, H. (2012). G-estimation and artificial censoring: Problems, challenges, and applications. Biometrics, 68:275–286.

Lok, J. J., and DeGruttola, V. (2012). Impact of time to start treatment following infection with application to initiating HAART in HIV-positive patients. Biometrics, 68:745–754.

Mansournia, M. A., Danaei, G., Forouzanfar, M. H., Mahmoodi, M., Jamali, M., Mansournia, N., and Mohammad, K. (2012). Effect of physical activity on functional performance and knee pain in patients with osteoarthritis: Analysis with marginal structural models. Epidemiology, 23:631–640.

Martinussen, T., Vansteelandt, S., Gerster, M., and Hjelmborg, J. v. B. (2011). Estimation of direct effects for survival data by using the Aalen additive hazards model. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 73:773–788.

Picciotto, S., Hernán, M. A., Page, J. H., Young, J. G., and Robins, J. M. (2012). Structural nested cumulative failure time models to estimate the effects of interventions. Journal of the American Statistical Association, 107:886–900.

Robins, J. (1986). A new approach to causal inference in mortality studies with sustained exposure periods – application to control of the healthy worker survivor effect. Mathematical Modelling, 7:1393–1512.

Robins, J. M. (1994). Correcting for non-compliance in randomized trials using structural nested mean models. Communications in Statistics – Theory and Methods, 23:2379–2412.

Robins, J. M. (1997). Causal inference from complex longitudinal data. In: Latent Variable Modeling and Applications to Causality, M. Berkane (Ed.), 69–117. New York: Springer-Verlag.

Robins, J. M. (2000). Marginal structural models versus structural nested models as tools for causal inference. In: Statistical Models in Epidemiology, the Environment, and Clinical Trials, M. E. Halloran and D. Berry (Eds.), 95–133. New York: Springer-Verlag.

Robins, J. M., Blevins, D., Ritter, G., and Wulfsohn, M. (1992a). G-estimation of the effect of prophylaxis therapy for Pneumocystis carinii pneumonia on the survival of AIDS patients. Epidemiology, 3:319–336.

Robins, J. M., Mark, S. D., and Newey, W. K. (1992b). Estimating exposure effects by modelling the expectation of exposure conditional on confounders. Biometrics, 48:479–495.

Robins, J. M., Rotnitzky, A., and Zhao, L. P. (1995). Analysis of semiparametric regression models for repeated outcomes in the presence of missing data. Journal of the American Statistical Association, 90:106–121.

Vansteelandt, S. (2007). On confounding, prediction and efficiency in the analysis of longitudinal and cross-sectional clustered data. Scandinavian Journal of Statistics, 34:478–498.

Vansteelandt, S. (2009). Estimating direct effects in cohort and case-control studies. Epidemiology, 20:851–860.

Vansteelandt, S., and Daniel, R. M. (2014). On regression adjustment for the propensity score. Statistics in Medicine, 33:4053–4072.

Vansteelandt, S., and Joffe, M. (2014). Structural nested models and g-estimation: The partially realized promise. Statistical Science, 29:707–731.

Published Online: 2016-5-25
Published in Print: 2016-12-1

© 2016 by De Gruyter
