While standard meta-analysis pools the results from randomized trials that compare two treatments, network meta-analysis aggregates the results of randomized trials comparing a wider variety of treatment options. However, it is unclear whether the aggregation of effect estimates across heterogeneous populations will be consistent for a meaningful parameter when not all treatments are evaluated on each population. Drawing from counterfactual theory and the causal inference framework, we define the population of interest in a network meta-analysis and define the target parameter under a series of nonparametric structural assumptions. This allows us to determine the requirements for identifiability of this parameter, enabling a description of the conditions under which network meta-analysis is appropriate and when it might mislead decision making. We then adapt several modeling strategies from the causal inference literature to obtain consistent estimation of the intervention-specific mean outcome and model-independent contrasts between treatments. Finally, we perform a reanalysis of a systematic review to compare the efficacy of antibiotics on suspected or confirmed methicillin-resistant Staphylococcus aureus in hospitalized patients.
While individual studies are rarely used to inform scientific or medical decision making, multiple sources of evidence may be aggregated in order to offer more generalizable and precise comparisons between treatments [2, 3, 4, 5]. Meta-analysis, the statistical synthesis of multiple study results, is often considered the highest form of quantitative evidence due to its ability to combine all relevant information in the scientific literature. However, because of such issues as effect heterogeneity across study populations and methodology that does not necessarily account for all sources of bias, the status of meta-analysis as the “gold standard” of medical knowledge has been questioned.
Standard meta-analysis compares two treatments of interest (or, for instance, an active treatment and placebo). When many treatments for a common condition are tested and made available over time, the medical literature may then contain multiple randomized controlled trials (RCTs) with various treatment comparisons on potentially different populations. Without additional guidance, clinicians and patients are left to informally synthesize information in the available studies in order to determine an optimal treatment decision. A network meta-analysis statistically aggregates the results from the relevant RCTs in order to obtain an estimate of the contrast between each pair of treatments. In particular, this type of analysis can produce estimates of contrasts even when no RCT compared the two treatments of interest directly.
Each RCT in the network may be performed on populations that differ in terms of their baseline characteristics. These population-specific variables may affect the average response to treatment, so that in order to combine inference involving the means, it might be beneficial to control for such variables. Furthermore, it has been noted that if these characteristics not only differentially affect response to treatment, but also the initial study design choice of which treatments to compare, then these variables may confound the overall effect estimate [6, 8]. As an example, Jansen et al. suggest that the baseline severity of patients recruited into a study can be related to the type of treatments investigated in the study and also affect the average outcome at the end of the study. As we demonstrate in this paper, such “study-level confounding” must be adjusted for in order to obtain consistent estimation of average treatment effects.
In this paper, we consider the setting where individual patient data are not available, so that the observed data are limited to average covariate and outcome values in addition to study-level information (which we refer to as “aggregate” or study-level data). We begin by describing past parametric approaches to network meta-analysis where the parameter of interest is dependent on the model specification and where the absence of effect heterogeneity is often required a priori. Using the counterfactual framework, we propose a novel definition of a marginal and model-independent causal parameter of interest in network meta-analysis and delineate the assumptions required to estimate this parameter in the presence of measured study-level confounders. We are then able to clarify conditions under which a network meta-analysis is appropriate and when it might mislead decision making regardless of the estimation method used. We describe several marginal estimation methods adapted from the single-study causal inference setting, including a doubly robust and semiparametric locally efficient Targeted Maximum Likelihood Estimator, and then compare these methods in a simulation study. Finally, we perform a reanalysis of the systematic review by Bally et al. to compare the efficacy of antibiotics on suspected or confirmed methicillin-resistant Staphylococcus aureus (MRSA) in hospitalized patients.
Each RCT is assumed to randomly sample subjects from a wider population, called a superpopulation. Within the RCT, randomization assigns subjects to two or more groups, each one receiving a treatment. These groups are often referred to as treatment arms. Due to randomization and random sampling, each group is a representative sample from the superpopulation. Therefore, each arm can be thought of as a distinct study on the same superpopulation. The superpopulations targeted by the RCTs may differ in terms of their characteristics due to, for example, each trial’s physical and temporal location, the individual inclusion and exclusion criteria, and the recruitment sample size targets. Therefore, if effect heterogeneity exists (i.e., if the relative treatment effects at the subject level depend on baseline covariate values), one would not expect the average relative treatment effects to necessarily be equal across superpopulations.
More formally, the superpopulation is the conceptual group of essentially infinite size from which the study sample is selected. A measure of some outcome ( ) is taken on each subject in the RCT arm. In this article, we will generally consider the example where the sample mean and standard deviation of are the summary statistics computed in each RCT.
Let be the intervention received by subjects in arm of a particular RCT indexed by . For this arm, we observe an estimated mean outcome and standard deviation . Let where is study baseline information and is the number of arms in the study. For the th arm of RCT , let be the number of subjects and be the total number of RCTs in the sample.
Because we are interested in summarizing effects across multiple superpopulations, we are arguably attempting to estimate effects in a metapopulation that contains the individual superpopulations from each study. For the purpose of this paper, we define the metapopulation as the union of possible study superpopulations and define our parameters of interest with respect to this metapopulation. In particular, we assume that the individual vectors are independently drawn from the metapopulation and identically distributed.
Standard approaches in network meta-analysis where only aggregate data are observed place a hierarchical model on either the study-specific contrasts (e.g., the difference in means, ) or the arm-specific outcomes ( ) and specify a within-study correlation structure [3, 5, 11, 12]. As the absence of effect heterogeneity is often required, a priori and post-hoc investigation of this assumption is routinely recommended. The reader is referred to published guidance [15, 16] and to an example of how heterogeneity was accounted for in an economic analysis. There has been recent heated debate about the appropriateness of arm-based estimation methods [11, 18].
The effect targeted in a hierarchical model depends on the contrast type chosen and the parametrization of the model, and may or may not correspond to a marginal effect as we define further on. For binary outcomes, due to the non-collapsibility of the logistic regression model in particular, adjustment for covariates in such a model changes the true value of the “effect” parameter being estimated. This type of modeling strategy may therefore be biased for the estimation of a marginal effect. Even in linear models, the inclusion of treatment interactions with covariates can also bias the value of the coefficient of treatment relative to the marginal effect. Zhang et al. and Zhang et al. take a missing data perspective and model the arm-specific outcomes using a Bayesian hierarchical model to estimate marginal parameters. While neither approach has yet been extended to incorporate covariates, the former paper assumes that treatments are applied to studies at random while the latter allows for estimation in a not-at-random context by explicitly specifying the unobservable selection mechanism.
While adjustment for covariates is rare in practice, Jansen et al. introduced the notion of adapting Pearl’s causal directed acyclic graphs (DAGs) to this setting in order to assist in covariate selection. As a general rule, Jansen et al. advocate for the adjustment of all modifiers of the relative treatment effects across comparisons. They also discourage adjustment for covariates that are not effect modifiers, because such adjustment may induce bias in the meta-analysis.
Let be the potential (or counterfactual) outcome of a random subject drawn from the metapopulation had that subject received treatment . In an RCT, each study arm produces an estimate of the superpopulation-specific mean of the outcome under the treatment assigned. Let the true mean of the potential outcome under treatment for the superpopulation targeted in study be denoted where represents the superpopulation targeted in study . Let be the standard deviation of the potential outcomes in under treatment . Now suppose that each superpopulation is independently drawn from a metapopulation, , the union of all possible study superpopulations indexed by the set . A marginal target parameter in a meta-analysis is , which represents the mean outcome under treatment on the metapopulation. The standard deviation of the overall outcome distribution is , representing the within and between study heterogeneity in the outcome under treatment. Due to treatment arm randomization and random sampling, is an unbiased estimate of , the mean potential outcome under the observed treatment, and is a consistent estimate of , the potential standard deviation under the observed treatment.
For two treatments, and , with corresponding means and , we can define a causal effect as the contrast between the mean outcome when the entire metapopulation is treated according to one treatment versus another. For instance, for binary outcomes we may define the causal risk difference as and the causal risk ratio as .
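As a toy numerical check of these definitions (all values hypothetical), the metapopulation mean under each treatment can be taken as an equal-weight average of superpopulation-specific means, after which the contrasts follow directly:

```python
# Hypothetical superpopulation-specific mean potential outcomes (proportions)
# under treatments a and b; values are illustrative only.
mu_a_by_study = [0.32, 0.28, 0.30]
mu_b_by_study = [0.22, 0.18, 0.20]

# Metapopulation means: averages over the (equally weighted) superpopulations.
mu_a = sum(mu_a_by_study) / len(mu_a_by_study)
mu_b = sum(mu_b_by_study) / len(mu_b_by_study)

risk_difference = mu_a - mu_b  # causal risk difference
risk_ratio = mu_a / mu_b       # causal risk ratio
```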
The patient sample in any given study arm may not be representative of the metapopulation, for which the effect of interest is defined. In addition, because treatment was not randomly allocated across different RCTs, the collection of mean outcomes observed under a given treatment may not be representative of the metapopulation under treatment . At the design stage, the decision of which treatments to include as arms within an RCT may be influenced by the characteristics of the superpopulation on which the study is taking place. For instance, consider the example of planning a study for a superpopulation with higher disease severity from Jansen et al. Studies including patients with severe disease are more likely to include an arm with an aggressive treatment. If this occurs, the mean outcome under the aggressive treatment may differ from that in a less severe superpopulation. In this situation, we would say that the treatment-mean outcome relationship is confounded at the study level by severity.
Similar to Alonso et al., we assume that heterogeneity in the different superpopulations targeted in the individual RCTs implies that each RCT estimates a different causal effect. Like Zhang et al., we take an “arm-based” approach to the problem. Like Jansen et al., we draw a causal DAG in order to conceptualize the relationship between treatment, study results, and population-specific characteristics. We arbitrarily choose to intervene on the arm labeled in each study. We write , the vector of sample sizes across arms. We will also define , the vector of treatment assignments evaluated in study , and to mean the treatment vector excluding some arm .
Many of the assumptions presented in detail in Section 4.3 are drawn explicitly using the study-level DAGs in Figure 1(a). The nodes of the DAG represent variables measured at the level of the RCT and the arrows between them represent the effect of the parent on the child node. For example, the absence of an arrow from to represents a component of the “no interference” assumption that the treatment in one arm will not affect the outcome in another. The arrow from to is present because the sample size within a study arm will affect the distribution of the outcome summary statistics.
The sample size node is determined by the sample size calculation made in the study design phase and also by the success of recruitment. This calculation is inherently conditional on the superpopulation being evaluated, as superpopulation characteristics are taken into account when hypothesizing an effect size and standard error. This calculation is also conditional on the treatments being compared.
Causal DAGs can be used as a tool to identify which variables must be controlled for in the meta-analysis in order to estimate the treatment-specific metapopulation mean outcome. Depending on some underlying statistical assumptions that we will investigate in detail in the following sections, these DAGs may simplify to Figure 1(b). This happens because we can ignore the mediation path through in order to estimate the total effect of the treatment on outcome. Under these conditions, assuming independence between the variables in , the analysis must adjust for all common causes of treatment selection and study outcome distribution.
Note that the recommendations based on this DAG differ from those of Jansen et al., who say that the analysis must adjust exclusively for effect modifiers. The assumptions that we list in Section 4.3 are explicitly required in the steps we take in Section 4.2 in order to obtain identifiability of the meta-analysis parameter of interest.
Suppose we observe the aggregate data , independently drawn and identically distributed. Using the nonparametric structural equation modeling (NPSEM) of Pearl, the metapopulation mean outcome, , can be shown to be identifiable (that is, known with infinite data) under the several conditions outlined and discussed in Section 4.3.
At the study design stage for RCT , the superpopulation is randomly drawn from the metapopulation . The selection of determines the population-level covariates . The number of study arms and the treatments compared in the study, the multivariate , are drawn conditional on . The sample size calculation is carried out based on the choice of treatment comparison and on the superpopulation characteristics (i.e., based on the expected effect and precision in that superpopulation). This calculation is approximate, and the resulting sample size also depends on the success of recruitment. Therefore, the sample sizes for the treatment arms, , are not deterministic, but are drawn conditional on , and .
The second stage operates at the individual level once subjects are recruited and randomly assigned treatment. Suppose each subject in arm of study has continuous outcome (under treatment ). Each is independently drawn from a distribution with mean and standard deviation . The empirical mean outcome in arm of study is therefore . The standard deviation is estimated as . In addition, subject recruitment yields summary characteristics of the superpopulation, which we assume to include complete information about the covariates that were known at study conception and contributed to the treatment choice. We assume in the following that we do not observe the subject-level data.
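The within-arm summaries described above amount to the usual sample mean and sample standard deviation; a minimal sketch, using made-up subject-level outcomes that would not actually be observed in the aggregate-data setting:

```python
import statistics

# Hypothetical subject-level outcomes in one arm of one study; in the
# aggregate-data setting only the summaries below would be reported.
outcomes = [4.1, 5.3, 3.8, 4.9, 5.0]

n = len(outcomes)
arm_mean = sum(outcomes) / n         # reported empirical mean outcome
arm_sd = statistics.stdev(outcomes)  # reported sample SD (n - 1 denominator)
```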
Let represent the set of estimated summary statistics of the outcome variable from study arm . For instance, we might have that . Correspondingly, let be the set of estimates of the counterfactual summary statistics that would arise had arm been assigned treatment .
Assuming no interference between arms and that the distribution of in one arm of a study is conditionally independent of the outcomes in the others and also independent of the total number of arms, the NPSEM that we assume can then be written as
The probability density function arising from the NPSEM without intervention can be decomposed as
where corresponds to the density function for , corresponds to the density function for conditional on , and corresponds to the conditional density function for . Within each RCT, corresponds to the conditional density function for and is the conditional (joint) density for the measured summary statistic(s) in arm .
Define an intervention as the assignment of treatment strategy to an arbitrary arm in each study. In other words, for all we set for a single arbitrary arm . The remaining non- arms receive potential treatments . The joint density for the counterfactual data can be obtained through the G-formula. This joint density function can be written as
where is defined as the conditional (joint) density of the treatments assigned to non- arms.
Suppose we have that , meaning that each study reported the sample means and sample standard deviations of a continuous outcome. We then make the key structural assumption that where is the counterfactual sample size in study arm . Let represent an individual recruited into study arm in the counterfactual scenario. The independence assumption arises naturally from the distributional assumption that because and are the sample mean and standard deviation in superpopulation when is the treatment assigned. Asymptotically, we have that and are independent normal variables when the subject-level outcomes are drawn from a distribution with zero skew, such that [, p. 46]. We show in Appendix A.1 that under this assumption can be decomposed in such a way that the mean outcome, , can be written independently of the non- arms, resulting in the simple equality . Under the unconfoundedness assumption , and under the consistency assumption (see next section), we may write the G-formula . Therefore, this quantity is identifiable from the data.
Identifiability without assuming this structural independence is possible, and we describe the additional causal assumptions required for this setting in Appendix A.2.
If the original study outcomes are binary (such that ), the study means are the proportions of subjects with the indicated outcome. Therefore, has a binomial distribution with true probability of outcome . Then, . Similarly, the study arm estimate of the standard deviation is . In this case, the likelihood will not include a component for so no independence assumption is necessary. The resulting G-formula is still and will rely on the same unconfoundedness assumption that .
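For the binary case, the arm mean is the observed proportion and the implied standard deviation estimate follows from the binomial variance; a small sketch with hypothetical counts:

```python
import math

# Hypothetical arm-level counts for a binary outcome.
events, n = 12, 40

p_hat = events / n                       # arm mean = observed proportion
sd_hat = math.sqrt(p_hat * (1 - p_hat))  # implied SD estimate for a binary outcome
```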
For convenience, here we list the assumptions needed for the identification of , corresponding with the NPSEM in Section 4.2.1 and the DAGs in Figure 1. We also comment on the meaning and plausibility of these assumptions in the hypothetical situation where each individual RCT has full compliance. Under full compliance, each RCT arm produces a consistent estimate of the mean outcome in the superpopulation under full adherence to the assigned treatment.
No interference. The use of the above counterfactual notation presupposes that the treatment assigned to one study does not affect the counterfactual outcome of another study. A secondary level of interference within an individual study involves the treatment in one study arm affecting the outcomes in another study arm. This means that the estimates and do not depend on the treatment received by another arm of the same RCT. The assumption of no interference may be violated in certain studies of infectious disease. For example, an effective vaccine in one arm may impact the outcome of an unvaccinated subject in the control arm, because the unvaccinated subject will be less likely to be exposed to the disease through herd immunity.
Unconfoundedness. (Weak) unconfoundedness is required for the identification of . In this context, unconfoundedness is the assumption that the counterfactual sample means under a treatment are independent of the true treatment received conditional on measured covariates. Specifically, this means that . In the example DAG of Figure 1(a), this corresponds to measuring all the components of node . The validity of this assumption is entirely dependent on the subject matter, how RCTs in the field are designed, and on the information reported in the RCTs.
Consistency. The consistency assumption in this context states that the counterfactual mean of a study arm under a given treatment is the same as the observed result. In notation, this is equivalent to stating that when . Having different definitions of treatment across studies may violate this assumption if all are categorized under the same treatment type and this variation has an impact on the outcome. For example, there may be different drug dosages and lengths of follow-up across studies. Disregarding these differences will violate consistency if the various treatment types have differential effects on the patient outcomes. With some additional unconfoundedness requirements, one might surmount this obstacle using the approach described in VanderWeele and Hernán. [Note that this definition of consistency corresponds with the causal assumption and is distinct from the network meta-analysis meaning of the term in, e.g., .]
Positivity. Finally, we need to evaluate both theoretical and practical positivity. Theoretical positivity is the assumption that, conditional only on variables required for unconfoundedness, all studies had a positive probability of being assigned each treatment under investigation. Practical positivity is the condition that for every level of the characteristics , there is an estimated positive probability of receiving treatment.
It is important to note that treatment comparisons are based on the same and that the target parameter relies on the definition of this metapopulation. If positivity does not hold on some subpopulations, it would be necessary to exclude all studies (and corresponding superpopulations) that contain such subpopulations.
It is furthermore important to note that the positivity assumption is not the same as requiring that all studies could have realistically been assigned each treatment. In particular, certain treatments may not have been available when some older trials were carried out. If year of study is not required to unconfound the analysis, then the unconditional probability may still be non-zero.
G-Computation procedures based on the G-formula in Section 4.2 can be used to estimate the target parameter. Here we define a simple procedure resulting from the data requirement that the sample mean and standard deviation are independent within a study arm. This procedure allows for simple frequentist estimation of the mean effect of treatment.
This procedure requires estimates for the conditional expectation for a given value of treatment. First we must note that while the conditional mean of is independent of , its distribution is not. In particular, we have that
Because is a consistent estimate of the superpopulation-level variance under treatment , we are able to estimate this variance.
A model for the regression on may be fit by pooling over all arms regardless of treatment assignment. In order to obtain the Best Linear Unbiased Estimator, we can weight by . Using this model fit, we predict , i.e., the predicted mean under treatment for each study. The G-Computation estimate is then .
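The procedure can be sketched as follows, using hypothetical arm-level data and an assumed linear outcome model (an intercept, a treatment indicator, and one study-level covariate); none of these values or model choices come from the paper:

```python
import numpy as np

# Hypothetical arm-level aggregate data: two-arm studies comparing
# treatment 1 against treatment 0; all values are illustrative.
trt  = np.array([1, 0, 1, 0, 1, 0])              # arm treatment indicator
W    = np.array([1.0, 1.0, 2.0, 2.0, 3.0, 3.0])  # study-level covariate
ybar = np.array([5.1, 4.0, 6.2, 4.9, 7.0, 6.1])  # arm sample means
sd   = np.array([1.0, 1.1, 0.9, 1.2, 1.0, 1.0])  # arm sample SDs
n    = np.array([50, 50, 40, 40, 60, 60])        # arm sample sizes

# Weighted least squares with weights n / sd^2 (inverse variance of each
# arm mean), pooling all arms regardless of treatment assignment.
X = np.column_stack([np.ones_like(W), trt, W])
w = n / sd**2
beta, *_ = np.linalg.lstsq(X * np.sqrt(w)[:, None], ybar * np.sqrt(w), rcond=None)

# Predict each study's mean outcome under treatment 1, then average across
# studies: the G-Computation estimate of the metapopulation mean.
W_study = np.array([1.0, 2.0, 3.0])
X1 = np.column_stack([np.ones_like(W_study), np.ones_like(W_study), W_study])
psi_hat = (X1 @ beta).mean()
```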
The standard error for the G-Computation estimate is usually computed through nonparametric bootstrap methods. Bootstrap resampling must be done by resampling studies, rather than arms, similar to what is done in a study with clustering.
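The resampling unit matters: a minimal sketch of the cluster resampling step, where whole studies (with all their arms) are drawn with replacement:

```python
import random

# Hypothetical study-level data: each study is kept as one cluster of arm
# records (treatment label, arm mean), so arms are never split up.
studies = [
    [("A", 5.1), ("B", 4.0)],
    [("A", 6.2), ("C", 4.9)],
    [("B", 7.0), ("C", 6.1)],
]

def cluster_resample(studies, rng):
    """Resample whole studies with replacement (cluster bootstrap)."""
    return [rng.choice(studies) for _ in studies]

rng = random.Random(2024)
replicate = cluster_resample(studies, rng)
```

In a full bootstrap, the chosen estimator would be recomputed on each such replicate and the standard error taken from the spread of the replicate estimates.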
Likelihood methods, such as G-Computation, require correct parametric specification of the outcome model, which may be difficult in practice. An alternative approach is to utilize propensity score methods, which require the estimation of a model for the treatment received by the arm. For a given treatment type , let be an estimate of the probability , called the generalized propensity score.
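With a discrete study-level covariate, a frequency-based estimate of the generalized propensity score can stand in for the parametric treatment models used in practice; a sketch with hypothetical arm records:

```python
from collections import Counter, defaultdict

# Hypothetical arm-level records: (study covariate level, treatment assigned).
arms = [(0, "A"), (0, "B"), (0, "A"),
        (1, "B"), (1, "C"), (1, "B"),
        (2, "C"), (2, "C")]

# Count treatment assignments within each covariate level.
counts = defaultdict(Counter)
for w, a in arms:
    counts[w][a] += 1

def g_hat(a, w):
    """Estimated probability that an arm at covariate level w receives a."""
    return counts[w][a] / sum(counts[w].values())
```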
Despite the small sample size in standard network meta-analysis, one might attempt inverse probability of treatment weighting (IPTW) for the estimation of the marginal parameter. Let represent the observed outcome of the arm of study that received treatment (or N/A if no arm of study received treatment ). An IPTW estimator for multiple treatments can be represented as
Intuitively, this estimator takes a mean of with only the arms treated according to . It then adjusts this estimate to remove the confounding bias caused by the baseline variables.
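A Horvitz-Thompson-type version of this estimator can be sketched as follows, with hypothetical arm-level data and assumed propensity scores (in practice these would be estimated):

```python
# Hypothetical records: (study id, covariate level, treatment, arm mean),
# with assumed generalized propensity scores g(a | w); values illustrative.
records = [
    (1, 0, "A", 5.0), (1, 0, "B", 4.1),
    (2, 1, "A", 6.0), (2, 1, "B", 5.2),
    (3, 1, "B", 5.0), (3, 1, "C", 4.8),
]
g = {(0, "A"): 0.5, (0, "B"): 0.4,
     (1, "A"): 0.25, (1, "B"): 0.6, (1, "C"): 0.3}

def iptw(records, g, a):
    """Horvitz-Thompson-type IPTW estimate of the mean outcome under a."""
    n_studies = len({s for s, _, _, _ in records})
    # Only arms actually treated with a contribute, upweighted by 1/g.
    total = sum(y / g[(w, t)] for s, w, t, y in records if t == a)
    return total / n_studies
```

A stabilized (Hájek-type) variant would instead divide by the sum of the inverse weights, which is often more stable under near positivity violations.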
The consistency of this estimator can be shown as follows.
Targeted Minimum Loss-based Estimation (TMLE) [31, 32] is a framework for the construction of semi-parametric estimators generally applied to the estimation of causal quantities. The TMLE procedure is carried out by first fitting a model for the expected value of the arm-based means, which, under the causal assumptions, can equivalently be written as the expectation of the potential outcome had the study evaluated treatment , . As in the G-Computation procedure, this model can be estimated by weighting each observation by . For each arm in the study, we use this model to obtain , predictions of the sample mean of each trial under treatment . These predictions are then updated by fitting a no-intercept logistic regression using study arms that evaluated treatment . This logistic regression is fit with outcome , offset , and single covariate , corresponding with the inverse probability weights. Denote the estimate of the coefficient from this regression as . The updated predictions are then , which is calculated for each study. The final targeted estimate for is . Note that in order to perform the update step, the means and outcome must be transformed to (0,1) and then transformed back to the original scale. This can be done using real or empirical bounds.
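The fluctuation step can be sketched as a one-parameter, no-intercept logistic regression with an offset, fitted here by Newton's method; all inputs are hypothetical and assumed already scaled to (0, 1), and for simplicity every study is assumed to have an arm that received the target treatment:

```python
import math

def expit(x): return 1.0 / (1.0 + math.exp(-x))
def logit(p): return math.log(p / (1.0 - p))

# Hypothetical inputs for the target treatment a, all on the (0, 1) scale:
# q: initial predictions of each study's mean under a,
# y: observed scaled arm means for the a-arms,
# g: estimated propensity scores g(a | W_j).
q = [0.40, 0.55, 0.35, 0.60]
y = [0.45, 0.50, 0.30, 0.65]
g = [0.50, 0.40, 0.30, 0.60]
h = [1.0 / gi for gi in g]  # "clever covariate": inverse probability weights

# Fit the no-intercept logistic fluctuation (offset = logit(q)) by Newton.
eps = 0.0
for _ in range(50):
    p = [expit(logit(qi) + eps * hi) for qi, hi in zip(q, h)]
    score = sum(hi * (yi - pi) for hi, yi, pi in zip(h, y, p))
    info = sum(hi**2 * pi * (1 - pi) for hi, pi in zip(h, p))
    eps += score / info

# Updated predictions and the targeted estimate of the metapopulation mean.
q_star = [expit(logit(qi) + eps * hi) for qi, hi in zip(q, h)]
psi_tmle = sum(q_star) / len(q_star)
```

By construction, the updated predictions solve the score equation of the fluctuation model, which is the targeting property behind the estimator's double robustness.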
This TMLE is consistent under correct specification of the propensity score model or the model for the expected value of the mean outcome (the property of double robustness). If both of these models are correct, then TMLE is asymptotically efficient in the class of regular, asymptotically linear estimators in the semiparametric model space. More details and a proof of consistency are included in Appendix A.3.
In this section we demonstrate that we can obtain consistent estimation of the target parameter under the NPSEM using the proposed estimators. We also compare the efficiency of each approach.
While the proposed estimators do not restrict the number of study arms, we fix all simulated studies to have exactly two treatment arms for simplicity. We are interested in estimating the mean outcome of the metapopulation under treatment for each of four treatments of interest. For each study , we generate the population average characteristic, from a Poisson distribution with mean 2. The probabilities of receiving a given treatment are calculated conditional on the value of . Two treatment options are then sampled without replacement using the calculated probabilities. Treatments 2 and 4 are generated to be less likely to be chosen with larger . The sample size (which we allow to be common to both arms in the study) is drawn from a Poisson distribution with mean linear in and . For each subject within each arm, we draw a baseline covariate from a Gaussian distribution with mean and constant variance. We set to be the treatment-specific coefficients. Outcome values are drawn from a Gaussian: . A summary of the data-generation is presented in Table 1.
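The data-generating process can be sketched roughly as follows; the selection probabilities, sample-size model, and outcome coefficients below are illustrative stand-ins, not the constants used in the paper:

```python
import numpy as np

rng = np.random.default_rng(7)
treatments = np.arange(1, 5)  # four treatment options

def simulate_study(rng):
    """Generate one two-arm study; all coefficients are illustrative."""
    W = rng.poisson(2)  # study-level characteristic
    # Treatments 2 and 4 are made less likely as W grows (assumed form).
    logits = np.array([0.0, -0.5 * W, 0.0, -0.5 * W])
    probs = np.exp(logits) / np.exp(logits).sum()
    arms = rng.choice(treatments, size=2, replace=False, p=probs)
    n = rng.poisson(20 + 5 * W) + 2  # common per-arm sample size
    summaries = []
    for a in arms:
        x = rng.normal(loc=W, scale=1.0, size=n)          # subject covariates
        y = rng.normal(loc=1.0 * a + 0.5 * x, scale=1.0)  # subject outcomes
        summaries.append((int(a), y.mean(), y.std(ddof=1), n))
    return W, summaries

study = simulate_study(rng)
```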
[Table 1: Summary of the data-generating mechanism — the study-design stage (number of arms, treatments sampled without replacement with covariate-dependent probabilities, recruited sample size), the within-study stage (subject covariates and outcomes), and the observed aggregate data for each arm.]
The sample statistics from each study arm are calculated by taking the mean and standard deviation of within each arm. The true treatment-specific superpopulation means are . We are interested in estimating a subset of the contrasts between the treatments, specifically marginal mean differences , , and . Note that random effects were not generated in this simple simulation study.
We tested the three methods described in the text (G-Computation, IPTW and TMLE) for and simulated studies. We used logistic regression models conditional on the covariate for the generalized propensity score for IPTW and TMLE. We ran two scenarios: incorrect and correct outcome model specification. For the correct scenario, linear regression models for the outcome adjusting for treatment type and covariate were used in G-Computation and TMLE. For the incorrect scenario, the outcome was scaled to and logistic regression models were used. We also display results for an unadjusted estimator that merely takes the mean difference in treatment-specific outcomes when available. Variance and confidence intervals were estimated using the nonparametric cluster bootstrap, where the study is considered the cluster (and arms are the individual observations). In Table 2, we present statistics describing the quality of the estimation of all contrasts with treatment 1. These statistics are the percent finite-sample bias (“Bias”), the standard deviation of the estimates over the simulated data (“SE-MC”), the bootstrap-estimated standard error (“SE-BS”), and the percentage of the 95% confidence intervals that capture the true effect size (“Cov”). Bootstrap resamples that did not allow for an estimate of the contrast (i.e., if either of the treatments did not appear in the resampled data set) were discarded, potentially biasing this standard error estimate.
The unadjusted estimator was greatly biased for the first and third contrasts, indicating that those two contrasts were highly confounded by the simulated study-level covariate. The correctly specified G-Computation estimator had the lowest bias throughout, the smallest standard errors, and near-optimal confidence interval coverage. This is to be expected, as G-Computation is a function of maximum likelihood parameter estimates with correct parametric specification of the necessary component of the likelihood (namely, the conditional mean of the outcome). However, with an incorrectly specified outcome model, the estimator was biased, which caused the coverage to suffer for the third contrast.
IPTW was the most biased estimator and also had the largest variance. The bias largely dissipated when the sample size was increased to studies. IPTW had good coverage except for the third treatment contrast, where treatment 4 was rare. The slower convergence of IPTW in the contrast involving treatment 4 can be explained by a higher variance of the estimated weights for that treatment compared to the others. The performance of IPTW has previously been seen to suffer when data support for certain exposure levels is sparse (i.e., under near practical positivity violations). Truncation of the propensity score at 5% and 10%, respectively (that is, replacing the bottom of the propensity score with the th percentile), increases the bias for the first and third contrasts while reducing the variance, with no effect on the coverage (results not shown).
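The percentile truncation mentioned here can be sketched as follows (one simple percentile convention; the thresholds and scores are illustrative):

```python
def truncate_low(scores, pct):
    """Replace propensity scores below the pct-th percentile by that
    percentile (a simple nearest-rank sketch of the truncation described)."""
    s = sorted(scores)
    k = min(len(s) - 1, int(pct / 100.0 * len(s)))
    floor = s[k]
    return [max(x, floor) for x in scores]

scores = [0.02, 0.05, 0.10, 0.30, 0.60, 0.75, 0.80, 0.85, 0.90, 0.95]
truncated = truncate_low(scores, 10)
```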
TMLE with correct outcome model specification had bias comparable to G-Computation, though slightly higher at the smaller number of studies. The standard error of TMLE was comparable to that of G-Computation at the larger number of studies, but at the smaller it was up to 80 times larger. Regardless, correctly specified TMLE had good coverage throughout. Notably, the bootstrap standard error estimates were comparable to the Monte Carlo standard errors in most settings but diverged for IPTW and TMLE at the smaller number of studies. Certain implementations of TMLE are more sensitive to near practical positivity violations [33, 35, 36], hence the need for the robust version that involves a logistic regression for the update of the predictions (as described, for our specific setting, in Section 5.3). When the outcome model was misspecified, TMLE also accrued bias for the first and third contrasts, of magnitude comparable to the misspecified G-Computation. This bias decreased with more studies due to the double robustness of TMLE (which makes the estimator consistent even when the outcome model is misspecified). Coverage only suffered for the third contrast, which was the most biased.
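The robust update can be sketched as a one-dimensional logistic fluctuation: solve the score equation in ε and move the predictions on the logit scale. The following Python fragment is a simplified stand-in for the actual implementation (it assumes outcomes already bounded in (0, 1) and solves for ε by bisection rather than by fitting a logistic regression with an offset):

```python
import math

def expit(x):
    return 1.0 / (1.0 + math.exp(-x))

def logit(p):
    return math.log(p / (1.0 - p))

def tmle_update(ys, qs, hs):
    """One-step TMLE fluctuation. ys: bounded arm outcomes in (0, 1);
    qs: initial outcome-model predictions; hs: nonnegative clever
    covariate values (e.g. the indicator of the target arm divided by
    the propensity score). Solves
    sum_i h_i * (y_i - expit(logit(q_i) + eps * h_i)) = 0
    for eps and returns the updated predictions."""
    def score(eps):
        return sum(h * (y - expit(logit(q) + eps * h))
                   for y, q, h in zip(ys, qs, hs))
    lo, hi = -10.0, 10.0  # the score is decreasing in eps
    for _ in range(200):
        mid = (lo + hi) / 2.0
        if score(mid) > 0:
            lo = mid
        else:
            hi = mid
    eps = (lo + hi) / 2.0
    return [expit(logit(q) + eps * h) for q, h in zip(qs, hs)]
```

Arms with h = 0 (treatments other than the target) are left untouched by the update, while heavily weighted arms move the most; because the fluctuation acts on the logit scale, the updated predictions stay inside (0, 1), which is what makes this version more stable near positivity violations.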
(Table 2 reports these statistics under the two scenarios: correctly specified models and a misspecified outcome model.)
We illustrate this causal inference approach and the adapted estimation methods for network meta-analysis with an example from infectious disease research. An increase in methicillin-resistant Staphylococcus aureus (MRSA) infections has spurred investigation of the comparative efficacy of different antibiotic treatment options. While the antibiotic vancomycin has been the standard treatment for decades, treatment failures have been noted in patients with serious infections. Interest therefore lies in whether alternative antibiotics are as effective as the standard. Bally et al. [9] performed a systematic review and Bayesian network meta-analyses of RCTs of parenteral antibiotics used for treating hospitalized adults with complicated skin and soft-tissue infections (cSSTIs) and hospital-acquired or ventilator-associated pneumonia.
We consider the target population of interest to be the population of clinical trial participants with suspected or confirmed MRSA cSSTIs or pneumonia, in studies published up to May 2012. The site of infection and confirmation of MRSA represent important differences in the entry criteria of the various studies. Twenty-four studies were found. Patients were randomized based on suspicion of MRSA in all but three studies, for which the protocol specified confirmation of the presence of MRSA at baseline. Fourteen studies enrolled subjects with cSSTIs, seven enrolled subjects with hospital-acquired or ventilator-associated pneumonia, and three allowed for either indication. The original network meta-analysis of Bally et al. [9] analyzed each infection site separately and therefore obtained stratified estimates. Based on the theory we developed, we can account for the potentially different treatment effects in each subpopulation by controlling for subpopulation type as a covariate in the analysis. By doing so, we ask a higher-level yet still clinically interesting question: “Are the alternative therapies as effective as the standard antibiotic for the treatment of suspected or confirmed MRSA?” Because infection site, MRSA confirmation, and study year can potentially affect both the choice of investigated therapies and the outcomes, these three covariates should be adjusted for in order to minimize confounding bias.
The outcome of interest is the clinical test of cure for all subjects who received at least one dose of treatment (a standard measure in infectious disease research). Four papers evaluated the outcome only on a subset of patients selected post-randomization; as this does not conform to our definition of the RCT-specific parameter of interest, we considered these outcomes missing. For our analysis, we chose to compare vancomycin with the two most prevalent alternatives: telavancin and linezolid. In total, 47 study arms evaluated one of these three treatments, and 36 had an observed outcome. Of the remaining treatments, tigecycline, daptomycin, and ceftaroline were each evaluated in three study arms, and a regimen of quinupristin/dalfopristin was evaluated in one arm. All of this information is available in the data extraction table (Table 3).
|Study||Cured||N||Treatment||Study no.||Year||Indication||Confirmed MRSA|
|Katz et al., 2008||42||48||vancomycin||1||2007||cSSTI||0|
|Arbeit et al., 2004||162||266||vancomycin||2||2001||cSSTI||0|
|Breedt et al., 2005||216||250||vancomycin||4||2003||cSSTI||0|
|Sacchidanand et al., 2005||196||255||vancomycin||5||2003||cSSTI||0|
|Stryjewski et al., 2008||307||429||vancomycin||6||2006||cSSTI||0|
|Stryjewski et al., 2006||81||95||vancomycin||8||2004||cSSTI||0|
|Corey et al., 2010||297||347||vancomycin||9||2007||cSSTI||0|
|Wilcox et al., 2010||289||338||vancomycin||10||2007||cSSTI||0|
|Talbot et al., 2007||26||32||vancomycin||11||2005||cSSTI||0|
|Weigelt et al., 2005||402||573||vancomycin||12||2003||cSSTI||0|
|Stevens et al., 2002||54||87||vancomycin||13||1999||cSSTI||0|
|Wunderink et al., 2003||128||302||vancomycin||15||2000||pneumonia||0|
|Rubinstein et al., 2001||73||192||vancomycin||16||1999||pneumonia||0|
|Rubinstein et al., 2011||221||374||vancomycin||17||2007||pneumonia||0|
|Fagon et al., 2000||67||148||vancomycin||19||1996||pneumonia||0|
|Lin et al., 2008||NA||33||linezolid||20||2005||cSSTI||0|
|Kohno et al., 2007||NA||51||linezolid||22||2004||cSSTI||0|
|Florescu et al., 2008||NA||70||tigecycline||24||2005||cSSTI||0|
|Itani et al., 2010||223||276||linezolid||25||2007||cSSTI||1|
|Wunderink et al., 2008||NA||30||linezolid||26||2005||pneumonia||1|
|Wunderink et al., 2012||102||186||linezolid||27||2010||pneumonia||1|
We ran four methods to obtain estimates of the counterfactual relative risk of both contrasts with the comparator vancomycin. The methods were 1) a ratio of the unadjusted mean outcomes using all available arms (“No Adjust”), 2) a random effects regression for the arm-specific study outcomes using a log link and a study-specific intercept (“RE Arm”), 3) G-Computation, where a random effects logistic regression weighted by the inverse standard errors is used to predict the conditional mean outcomes, and 4) TMLE with a weighted logistic random effects model for the outcome and LASSO-penalized logistic regressions (to handle the sparse data) for the propensity score and a missing data model, using the R library glmnet [38]. The missing outcomes required that the TMLE algorithm include fitting a model to estimate the probability of a missing outcome in each study; the TMLE update step was therefore modified to use the product of the propensity score and the probability of observing the outcome in place of the propensity score alone. To estimate the standard errors and confidence intervals, the built-in functions in the library lme4 were used for the random effects model, and the clustered nonparametric bootstrap (1,000 resamples, drawn with replacement, of the 27 studies and their 54 arms) was used for the other methods.
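The modification described above replaces the denominator of the TMLE weight with the product of the generalized propensity score and the probability of observing the outcome. A hypothetical sketch (function and argument names are ours, not from the analysis code; in practice g and pi would come from the fitted LASSO models):

```python
def clever_covariate(arm_treatment, outcome_observed, target, g, pi):
    """Weight used in the modified TMLE update step: the indicator of
    being a target-treatment arm with an observed outcome, divided by
    the product of g = P(A = target | W) and pi = P(outcome observed |
    A = target, W). Arms with missing outcomes contribute 0."""
    if arm_treatment != target or not outcome_observed:
        return 0.0
    return 1.0 / (g * pi)
```

For example, a telavancin arm with g = 0.5 and pi = 0.8 receives weight 2.5, while arms of other treatments, or arms with censored outcomes, receive weight 0; this is the standard inverse-probability-of-censoring adjustment under the missing-at-random assumption.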
The results of the network meta-analysis are presented graphically in Figure 2 (and numerically in Appendix Table 4). We also include the results of the studies that contrasted the two treatments directly. For the comparison of telavancin versus vancomycin, all estimators include the null in the confidence interval. The random effects regression and G-Computation produce estimates of the relative risk close to one, indicating near equivalence of the treatments, while the point estimate of TMLE was further from the null (in the direction of the superiority of vancomycin). Notably, the confidence interval for TMLE in the first contrast is much wider than the others. The unadjusted method produced a point estimate in the direction of the superiority of telavancin, demonstrating that the correction for study-level confounding impacted the analysis. For the comparison of linezolid versus vancomycin, the random effects regression, G-Computation, and TMLE agree on the superiority of linezolid. The original study by Bally et al. [9] also found some suggestion of a superior effect of linezolid compared to vancomycin, but for both subpopulations the confidence intervals were large and spanned the null.
We can also easily obtain estimates of the contrast between telavancin and linezolid. G-Computation and TMLE produce risk ratios for clinical success of 0.94 (95% confidence interval: 0.92, 0.94) and 0.85 (0.71, 1.12), respectively, with G-Computation concluding the superiority of linezolid. As no RCT directly contrasted these two antibiotics, this demonstrates another general advantage of network meta-analysis: the ability to formally compare treatments using only indirect evidence of their relative performance.
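Because the arm-based approach yields a counterfactual mean per treatment, any pairwise risk ratio follows directly, and the indirect contrast automatically equals the ratio of the two direct contrasts with the common comparator. A toy illustration with made-up treatment-specific means (not our estimates):

```python
# Hypothetical treatment-specific mean cure probabilities.
psi = {"vancomycin": 0.70, "telavancin": 0.66, "linezolid": 0.78}

def risk_ratio(psi, t1, t2):
    """Model-independent contrast: ratio of counterfactual means."""
    return psi[t1] / psi[t2]

rr_indirect = risk_ratio(psi, "telavancin", "linezolid")
# Equals the ratio of the two contrasts with the common comparator:
rr_via_van = (risk_ratio(psi, "telavancin", "vancomycin")
              / risk_ratio(psi, "linezolid", "vancomycin"))
```

This internal consistency is a by-product of estimating intervention-specific means rather than study-specific contrasts.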
If we are to interpret the summary statistics as estimates of the relative causal effects of antibiotic choice on successful treatment, the causal assumptions in Section 4.3 need to be satisfied. Each of the studies evaluated the clinical efficacy of the treatments, which is defined on patients who had received at least one dose of the study drug. Because the randomized treatment was first-line therapy (administered intravenously in-hospital) and the success of treatment was determined clinically, each trial estimated the relative effect under full adherence.

No interference: No interference is credible in this case because all subjects were already suspected or confirmed to have MRSA upon entry to the study. Therefore, the choice of treatment in the other arm would not have an effect on existing infections nor on the success of treatment.

Unconfoundedness: The unconfoundedness assumption relies on whether year, infection type, and whether MRSA was confirmed were sufficient to control for confounding at the study level. This assumption could be violated if prognostic demographic variables were involved in the study design stage. However, prognostic markers such as diabetes and peripheral vascular disease (for cSSTI) and mechanical ventilation, APACHE II score, clinical markers of severity, and presence of organ dysfunction (for pneumonia) are unlikely to determine the choice of initial therapy [39, 40].

Consistency: The dosage regimens varied somewhat across studies but were all considered to be at therapeutic levels. However, the length of time to the evaluation time point for each treatment type varied within and between studies (e.g., 7–14 days for telavancin versus 12–28 days for linezolid). If this corresponds to meaningfully different treatment durations (and/or periods of time elapsed before evaluation), it would indicate different definitions of the interventions across studies, and thus a violation of the consistency assumption.
Positivity: All subjects in the studies were indicated to receive any of the treatments evaluated, so each treatment had a positive probability of being assigned.
In this paper, we nonparametrically define the parameter of interest in a network meta-analysis with direct and indirect comparisons using the counterfactual framework often employed in causal inference. This definition of the parameter of interest is model-independent and is interpretable on what we define as a metapopulation, the union of all superpopulations. Such an approach allows for a straightforward description of what is being estimated, which is accessible even without an understanding of the estimation methods being used. In particular, we can interpret the marginal effects defined in this paper as the relative mean outcomes had all subjects in the metapopulation been assigned to each treatment versus another. If a specific population is of interest and not represented by the metapopulation, under some conditions it may also be possible to transport the effect estimates, as described by Bareinboim and Pearl [41].
We have presented a set of conditions under which identifiability of the parameter of interest is possible. Identifiability allows for a clear description of when the parameter of interest can and cannot be estimated. For instance, the non-interference requirement casts doubt on the synthesis of studies that allow for treatment switching, crossover, or group contamination. The assumptions that we made allowed for the simplification of the relevant components of the observed data likelihood so that arm-based inference is possible.
One might alternatively specify the RCT-estimated contrast as the “outcome of interest” (rather than use the arm-specific outcome as we did). However, under this alternative, the propensity score would then be defined as the probability of a trial directly contrasting a given treatment pair. For standard network meta-analysis sample sizes, this would most often produce practical positivity problems, indicating the need for extrapolation using the outcome model (and thereby creating estimators that are very sensitive to model misspecification). In particular, two treatments that had never been directly compared would have no data support in this model.
If all treatments are selected completely at random into studies (or if only two treatments have ever been available to compare) then a standard unadjusted analysis using those arms assigned the desired treatments would be consistent. If we weaken this assumption and replace it with conditional exchangeability, then the estimators introduced in this paper are appropriate in that they allow for the adjustment of study-level covariates.
Our methods also allow for wider inclusion criteria for studies in a systematic review. It is often the case that systematic reviews exclude studies because they do not evaluate the exact desired clinical endpoint. Using our proposed methods, we can avoid selection bias due to studies excluded only for this reason. To do so, we would artificially censor the outcomes of studies that do not estimate the desired outcome type of interest. The censored outcomes of these studies might then be considered “missing at random” conditional on the study baseline information, which should still be included in the analysis (both in the propensity score model and in the missing data model).
For the analysis of continuous individual-level outcomes, we assumed independence between the sample mean and standard deviation within each study arm. While we chose to present our identifiability argument under this assumption, it is not ultimately necessary. However, it is not straightforward to propose a valid Monte Carlo or Bayesian estimation approach for the setting with dependent sample means and standard deviations. In some cases, it may be possible to transform the individual-level data to remove the skew, but this relies on access to each study’s raw data, in which case an individual patient data analysis would be preferable.
In the simulation study, we showed that certain estimators adapted from the causal inference literature can produce valid estimates of effect contrasts under the identifiability conditions described. In particular, G-Computation and TMLE might lend themselves well to network meta-analysis, which is characterized by small sample sizes and low prevalence of certain treatments. IPTW was seen to be sensitive to rare treatment assignment, and G-Computation and TMLE were seen to be somewhat sensitive to model misspecification. Some general benefits of using TMLE are that it is double robust and can incorporate nonparametric (or machine learning) estimation of the propensity score and outcome model, which can help avoid bias from model misspecification [32]. More methods development and investigation are needed to address extremely rare treatments and how (or whether) TMLE can be adapted to be robust in this setting.
The application we presented compared the results of random effects regression, G-Computation, and TMLE in a network meta-analysis of the relative efficacy of treatment options for MRSA infection. The random effects regression and G-Computation produced small confidence intervals relative to the direct contrasts of the individual RCTs, though TMLE did so for only one of the comparisons investigated. In contrast to the analysis in the original article, which used unadjusted contrast-based hierarchical Bayesian modeling on the separate subpopulations of infection types, our analyses concluded that there is evidence to support the superiority of linezolid over vancomycin. We also noted the poor stability of IPTW in this example and generally do not recommend this estimator when the data support for certain treatment levels is sparse. Finally, using this data example, we demonstrated how the causal assumptions should be listed and critiqued in order to stimulate discussion about the appropriateness of causal interpretations in specific contexts.
The framework we present formally assumes that we are restricting our analyses to studies evaluating a common parameter type. If there was only partial adherence in the RCTs, our framework does not allow for the mixing of intent-to-treat parameter estimates with adherence-adjusted parameter estimates. (Estimation of adherence-adjusted parameters in RCTs is described in [42].) The same restriction applies to the results of observational studies if the parameter type estimated in the observational study is not the same as in the clinical trials. Specifically, treatment adherence and outcome need to be defined identically across studies, and all studies whose endpoints are included must estimate the same mean treatment-specific counterfactual outcome. Although it is common practice to include different parameter types in a meta-analysis, our formalization of the target parameter reveals that a causal interpretation of the resulting effect estimate may be quite challenging.
In addition to the issues we describe, there are many other concerns about aggregating study results in various settings. For instance, one might question the independence between RCTs happening close in time, or the systematic review inclusion criteria. We believe our framework provides additional structure to the ongoing discussion about the validity of network meta-analysis and will help stimulate solutions to the remaining challenges.
1. Slavin RE. Best evidence synthesis: An intelligent alternative to meta-analysis. J Clin Epidemiol 1995;48:9–18.
2. Lumley T. Network meta-analysis for indirect treatment comparisons. Stat Med 2002;21:2313–2324.
3. Lu G, Ades AE. Combination of direct and indirect evidence in mixed treatment comparisons. Stat Med 2004;23:3105–3124.
4. Caldwell DM, Ades AE, Higgins JPT. Simultaneous comparison of multiple treatments: Combining direct and indirect evidence. BMJ 2005;331:897–900.
5. Salanti G, Higgins JPT, Ades AE, Ioannidis JPA. Evaluation of networks of randomized trials. Stat Methods Med Res 2008;17:279–301.
6. Berlin JA, Golub RM. Meta-analysis as evidence: Building a better pyramid. J Am Med Assoc 2014;312:603–606.
7. Salanti G, Marinho V, Higgins JPT. A case study of multiple-treatments meta-analysis demonstrates that covariates should be considered. J Clin Epidemiol 2009;62:857–864.
8. Jansen JP, Schmid CH, Salanti G. Directed acyclic graphs can help understand bias in indirect and mixed treatment comparisons. J Clin Epidemiol 2012;65:798–807.
9. Bally M, Dendukuri N, Sinclair A, Ahern SP, Poisson M, Brophy J. A network meta-analysis of antibiotics for treatment of hospitalised patients with suspected or proven meticillin-resistant Staphylococcus aureus infection. Int J Antimicrob Agents 2012;40:479–495.
10. Robins JM. Confidence intervals for causal parameters. Stat Med 1988;7:773–785.
11. Dias S, Sutton AJ, Ades AE, Welton NJ. A generalized linear modeling framework for pairwise and network meta-analysis of randomized controlled trials. Med Decis Making 2013a;33:607–617.
12. Zhang J, Carlin BP, Neaton JD, Soon GG, Nie L, Kane R, et al. Network meta-analysis of randomized clinical trials: Reporting the proper summaries. Clin Trials 2014;11:246–262.
13. Cope S, Zhang J, Saletan S, Smiechowski B, Jansen JP, Schmid P. A process for assessing the feasibility of a network meta-analysis: A case study of everolimus in combination with hormonal therapy versus chemotherapy for advanced breast cancer. BMC Med 2014;12:93.
14. Lu G, Ades AE. Assessing evidence inconsistency in mixed treatment comparisons. J Am Stat Assoc 2006;101:447–459.
15. Dias S, Sutton AJ, Welton NJ, Ades AE. Evidence synthesis for decision making 3: Heterogeneity–subgroups, meta-regression, bias, and bias-adjustment. Med Decis Making 2013b;33:618–640.
16. Jansen JP, Trikalinos T, Cappelleri JC, Daw J, Andes S, Eldessouki R, et al. Indirect treatment comparison/network meta-analysis study questionnaire to assess relevance and credibility to inform health care decision making: An ISPOR-AMCP-NPC Good Practice Task Force report. Value in Health 2014;17:157–173.
17. Welton NJ, Soares MO, Palmer S, Ades AE, Harrison D, Shankar-Hari M, et al. Accounting for heterogeneity in relative treatment effects for use in cost-effectiveness models and value-of-information analyses. Med Decis Making 2015;35:608–621.
18. Hong H, Chu H, Zhang J, Carlin BP. Rejoinder to the discussion of “A Bayesian missing data framework for generalized multiple outcome mixed treatment comparisons,” by S. Dias and A.E. Ades. Res Synth Methods 2016;7:29–33.
19. Gail MH, Wieand S, Piantadosi S. Biased estimates of treatment effect in randomized experiments with nonlinear regressions and omitted covariates. Biometrika 1984;71:431–444.
20. Zhang J, Chu H, Hong H, Virnig BA, Carlin BP. Bayesian hierarchical models for network meta-analysis incorporating nonignorable missingness. Stat Methods Med Res 2015; doi:10.1177/0962280215596185.
21. Pearl J. Causality: Models, Reasoning, and Inference, 2nd ed. New York, NY: Cambridge University Press, 2009.
22. Alonso A, Van der Elst W, Molenberghs G, Buyse M, Burzykowski T. On the relationship between the causal-inference and meta-analytic paradigms for the validation of surrogate endpoints. Biometrics 2015;71:15–24.
23. Robins JM. A new approach to causal inference in mortality studies with a sustained exposure period – application to control of the healthy worker survivor effect. Math Model 1986;7:1393–1512.
24. Ferguson TS. A Course in Large Sample Theory. Texts in Statistical Science. London, UK: Chapman & Hall/CRC, 1996.
25. Rubin DB. Randomization analysis of experimental data: The Fisher randomization test comment. J Am Stat Assoc 1980;75:591–593.
26. Imbens GW. The role of the propensity score in estimating dose-response functions. Biometrika 2000;87:706–710.
27. Cole SR, Frangakis CE. The consistency statement in causal inference: A definition or an assumption? Epidemiology 2009;20:3–5.
28. VanderWeele TJ, Hernán MA. Causal inference under multiple versions of treatment. Journal of Causal Inference 2013;1:1–20.
29. Snowden JM, Rose S, Mortimer KM. Implementation of G-computation on a simulated data set: Demonstration of a causal inference technique. Am J Epidemiol 2011;173:731–738.
30. Efron B, Tibshirani RJ. An Introduction to the Bootstrap. Monographs on Statistics and Applied Probability. Boca Raton, FL: Chapman & Hall/CRC, 1994.
31. van der Laan MJ, Rubin D. Targeted maximum likelihood learning. Int J Biostat 2006;2(1):Article 11.
32. van der Laan MJ, Rose S. Targeted Learning: Causal Inference for Observational and Experimental Data. Springer Series in Statistics. New York, NY: Springer, 2011.
33. Gruber S, van der Laan MJ. A targeted maximum likelihood estimator of a causal effect on a bounded continuous outcome. Int J Biostat 2010;6(1):Article 26.
34. Cole SR, Hernán MA. Constructing inverse probability weights for marginal structural models. Am J Epidemiol 2008;168:656–664.
35. Schnitzer ME, Moodie EEM, Platt RW. Targeted maximum likelihood estimation for marginal time-dependent treatment effects under density misspecification. Biostatistics 2013;14:1–14.
36. Porter KE, Gruber S, van der Laan MJ, Sekhon JS. The relative performance of targeted maximum likelihood estimators. Int J Biostat 2011;7:1–34.
37. Liu C, Bayer A, Cosgrove SE, Daum RS, Fridkin SK, Gorwitz RJ, et al. Clinical practice guidelines by the Infectious Diseases Society of America for the treatment of methicillin-resistant Staphylococcus aureus infections in adults and children. Clin Infect Dis 2011;52:e18–e55.
38. Friedman J, Hastie T, Tibshirani R. Regularization paths for generalized linear models via coordinate descent. J Stat Softw 2010;33:1–22. http://www.jstatsoft.org/v33/i01/.
39. Lipsky BA, Itani KM, Weigelt JA, Joseph W, Paap CM, Reisman A, et al. The role of diabetes mellitus in the treatment of skin and skin structure infections caused by methicillin-resistant Staphylococcus aureus: Results from three randomized controlled trials. Int J Infect Dis 2011;15:e140–e146.
40. Niederman MS. Hospital-acquired pneumonia, health care-associated pneumonia, ventilator-associated pneumonia, and ventilator-associated tracheobronchitis: Definitions and challenges in trial design. Clin Infect Dis 2010;51(Suppl 1):S12–S17.
41. Bareinboim E, Pearl J. Meta-transportability of causal effects: A formal approach. In: Proceedings of the 16th International Conference on Artificial Intelligence and Statistics, 2013.
42. Hernán MA, Hernández-Díaz S. Beyond the intention to treat in comparative effectiveness research. Clin Trials 2012;9:48–55.
43. Tsiatis AA. Semiparametric Theory and Missing Data. Springer Series in Statistics. New York, NY: Springer, 2006.
44. van der Laan MJ, Robins JM. Unified Methods for Censored Longitudinal Data and Causality. Springer Series in Statistics. New York, NY: Springer, 2003.
The joint counterfactual distribution can be decomposed as
$$p(\bar{y}_a, n, w) = p(\bar{y}_a \mid n, w)\, p(n \mid w)\, p(w),$$
where $\bar{Y}_a$ denotes the arm-level counterfactual mean outcome under treatment $a$, $n$ the arm size, and $W$ the study-level covariates.
Let $\mathcal{A}$ be the set of possible treatments. The target of our analysis is the mean study-arm counterfactual outcome under treatment $a \in \mathcal{A}$, or $\psi_a = E(\bar{Y}_a)$. This mean can be written as
$$\psi_a = E\{E(\bar{Y}_a \mid W)\} = \int E(\bar{Y}_a \mid W = w)\, p(w)\, dw,$$
where the integral can be a multiple integral, taken over the domain of the potentially multivariate $W$. Now we note that, for identically distributed and conditionally independent draws,
$$E(\bar{Y}_a \mid W, n) = E(\bar{Y}_a \mid W),$$
because we assume that the study size has no effect on the individual-level outcome. It follows that $\bar{Y}_a$ is conditionally independent of $n$ given $W$. The expression for $\psi_a$ then simplifies to $E\{E(\bar{Y}_a \mid W)\}$. In order for the conditional expectation to be estimable from the observed data, we require the unconfoundedness assumption $\bar{Y}_a \perp A \mid W$. With respect to the example DAG in Figure 1(a), this corresponds to having measured all components of $W$. If this assumption holds in addition to the consistency of treatment ($\bar{Y} = \bar{Y}_a$ when $A = a$), we may write
$$\psi_a = E\{E(\bar{Y} \mid A = a, W)\}$$
to establish identifiability.
It may not be plausible to assume conditional independence between $\bar{Y}_a$ and $n$. In this case, the relevant part of the distribution of the observed data counterfactuals is
$$p(\bar{y}_a, n \mid w) = p(\bar{y}_a \mid n, w)\, p(n \mid w).$$
The target parameter can then be estimated as a multiple integral over each outcome value and each density component in this decomposition. Identifiability in this case requires an unconfoundedness assumption and a consistency assumption for each component of the decomposition; assuming the DAG in Figure 1(a) in the main manuscript, this requires having measured all components of the corresponding adjustment sets. Under these assumptions, we can rewrite the relevant density component as
$$p(\bar{y}_a, n \mid w) = p(\bar{y} \mid A = a, n, w)\, p(n \mid A = a, w).$$
Since each component of this density is estimable from the data, we have identifiability of the target parameter in this case as well.
The local semiparametric efficiency and estimation consistency of the TMLE we describe can be derived very similarly to the standard observational data setting (with a single categorical exposure variable) for the estimation of the average treatment effect. To give more insight into how this extends to the network meta-analysis case, we present some additional details and a proof of double robustness.
The efficient influence function for the parameter of interest $\psi_a$ with only aggregate observed data is
$$D^*(O) = \frac{I(A = a)}{g(a \mid W)}\{\bar{Y} - \bar{Q}(A, W)\} + \bar{Q}(a, W) - \psi_a,$$
where $\bar{Q}(a, W) = E(\bar{Y} \mid A = a, W)$ and $g(a \mid W) = P(A = a \mid W)$ is the generalized propensity score.
Note that the TMLE update step produces values $\bar{Q}^*$ that solve the empirical efficient influence function equation:
$$\sum_i \left[\frac{I(A_i = a)}{g(a \mid W_i)}\{\bar{Y}_i - \bar{Q}^*(A_i, W_i)\} + \bar{Q}^*(a, W_i) - \psi^*_a\right] = 0,$$
so that it follows that the TMLE is a locally efficient estimator [43, 44]. Specifically, the logistic regression update step with single covariate $H(A, W) = I(A = a)/g(a \mid W)$ solves the score equation $\sum_i H(A_i, W_i)\{\bar{Y}_i - \bar{Q}^*(A_i, W_i)\} = 0$, and in the last TMLE step we set $\psi^*_a = \frac{1}{m}\sum_{i=1}^m \bar{Q}^*(a, W_i)$, where $m$ is the number of study arms.
First suppose that, as the number of studies increases, the estimated generalized propensity score converges to some $g' \neq g_0$ but that $\bar{Q}$ converges to the true $\bar{Q}_0$. We then have that
$$E\left[\frac{I(A = a)}{g'(a \mid W)}\{\bar{Y} - \bar{Q}_0(A, W)\} + \bar{Q}_0(a, W) - \psi_a\right] = 0 + E\{\bar{Q}_0(a, W)\} - \psi_a = 0,$$
because $E\{\bar{Y} - \bar{Q}_0(A, W) \mid A, W\} = 0$.
Now suppose that $g$ converges to the true $g_0$ but that $\bar{Q}$ converges to some function $\bar{Q}'$. We then have that
$$E\left[\frac{I(A = a)}{g_0(a \mid W)}\{\bar{Y} - \bar{Q}'(A, W)\} + \bar{Q}'(a, W) - \psi_a\right] = E\{\bar{Q}_0(a, W) - \bar{Q}'(a, W)\} + E\{\bar{Q}'(a, W)\} - \psi_a = 0.$$
Therefore, if either of the models for $g$ or $\bar{Q}$ is consistent, then the TMLE for $\psi_a$ is also consistent, as the efficient influence function equation remains unbiased for $\psi_a$.
Table 3 presents the full study list from the systematic review of Bally et al. [9] and the data that we used in the analysis in Section 7. Table 4 presents the numerical results that we obtained from our analyses, corresponding to Figure 2. The full reference list is below.
|Method||TEL vs VAN||LIN vs VAN||TEL vs LIN|
||Est||SE||95% CI||Est||SE||95% CI||Est||SE||95% CI|
Arbeit R. D., Maki D., Tally F. P., Campanaro E., Eisenstein B. I. Daptomycin 98-01 and 99-01 Investigators. Clinical Infectious Diseases Vol. 38, 2004:1673–1681. The safety and efficacy of daptomycin for the treatment of complicated skin and skin-structure infections. Search in Google Scholar
Breedt J., Teras J., Gardovskis J., Maritz F. J., Vaasna T., Ross D. P., Gioud-Paquet M., Dartois N., Ellis-Grosse E. J., Loh E. and Tigecycline 305 cSSSI Study Group. Safety and efficacy of tigecycline in treatment of skin and skin structure infections: Results of a double-blind phase 3 comparison study with vancomycin-aztreonam. Antimicrobial Agents and Chemotherapy 2005;49:4658–4666. Search in Google Scholar
Corey G. R., Wilcox M. H., Talbot G. H., Thye D., Friedland D., Baculik T. and CANVAS 1 investigators, 2010;Canvas 1: The first phase iii, randomized, double-blind study evaluating ceftaroline fosamil for the treatment of patients with complicated skin and skin structure infections,” Journal of Antimicrobial Chemotherapy, 65 Suppl 4, iv41–51. Search in Google Scholar
Fagon J., Patrick H., Haas D. W., Torres A., Gibert C., Cheadle W. G., Falcone R. E., Anholm J. D., Paganin F., Fabian T. C., Lilienthal F. “Treatment of gram-positive nosocomial pneumonia. prospective randomized comparison of quinupristin/dalfopristin versus vancomycin. nosocomial pneumonia group,” American Journal of Respiratory and Critical Care Medicine 2000;161:753–762. Search in Google Scholar
Florescu I., Beuran M., Dimov R., Razbadauskas A., Bochan M., Fichev G., Dukart G., Babinchak T., Cooper C. A., Ellis-Grosse E. J., Dartois N., Gandjini H. 307 Study Group, 2008;Efficacy and safety of tigecycline compared with vancomycin or linezolid for treatment of serious infections with methicillin-resistant staphylococcus aureus or vancomycin-resistant enterococci: a phase 3, multicentre, double-blind, randomized study,” Journal of Antimicrobial Chemotherapy, 62 Suppl 1, i17–28. Search in Google Scholar
Itani K. M., Dryden H. M. S., Bhattacharyya M.J., Kunkel A., Baruch M., Weigelt J. A. Efficacy and safety of linezolid versus vancomycin for the treatment of complicated skin and soft-tissue infections proven to be caused by methicillin-resistant staphylococcus aureus. American Journal of Surgery 2010;199:804–816. Search in Google Scholar
Katz D. E., Lindfield K. C., Steenbergen J. N., Benziger D. P., Blackerby K. J., Knapp A. G., Martone W. J. A pilot study of high-dose short duration daptomycin for the treatment of patients with complicated skin and skin structure infections caused by gram-positive bacteria. International Journal of Clinical Practice 2008;62:1455–1464.
Rubinstein E., Cammarata S., Oliphant T., Wunderink R. and Linezolid Nosocomial Pneumonia Study Group. Linezolid (PNU-100766) versus vancomycin in the treatment of hospitalized patients with nosocomial pneumonia: A randomized, double-blind, multicenter study. Clinical Infectious Diseases 2001;32:402–412.
Rubinstein E., Lalani T., Corey G. R., Kanafani Z. A., Nannini E. C., Rocha M. G., Rahav G., Niederman M. S., Kollef M. H., Shorr A. F., Lee P. C., Lentnek A. L., Luna C. M., Fagon J. Y., Torres A., Kitt M. M., Genter F. C., Barriere S. L., Friedland H. D., Stryjewski M. E. and ATTAIN Study Group. Telavancin versus vancomycin for hospital-acquired pneumonia due to gram-positive pathogens. Clinical Infectious Diseases 2011;52:31–40.
Sacchidanand S., Penn R. L., Embil J. M., Campos M. E., Curcio D., Ellis-Grosse E., Loh E., Rose G. Efficacy and safety of tigecycline monotherapy compared with vancomycin plus aztreonam in patients with complicated skin and skin structure infections: Results from a phase 3, randomized, double-blind trial. International Journal of Infectious Diseases 2005;9:251–261.
Stryjewski M. E., Chu V. H., O’Riordan W. D., Warren B. L., Dunbar L. M., Young D. M., Vallée M., Fowler V. G. Jr., Morganroth J., Barriere S., Kitt M. M., Corey G. R. and FAST 2 Investigator Group. Telavancin versus standard therapy for treatment of complicated skin and skin structure infections caused by gram-positive bacteria: FAST 2 study. Antimicrobial Agents and Chemotherapy 2006;50:862–867.
Stryjewski M. E., Graham D. R., Wilson S. E., O’Riordan W., Young D., Lentnek A., Ross D. P., Fowler V. G., Hopkins A., Friedland H. D., Barriere S. L., Kitt M. M., Corey G. R. and Assessment of Telavancin in Complicated Skin and Skin-Structure Infections Study. Telavancin versus vancomycin for the treatment of complicated skin and skin-structure infections caused by gram-positive organisms. Clinical Infectious Diseases 2008;46:1683–1693.
Talbot G. H., Thye D., Das A., Ge Y. Phase 2 study of ceftaroline versus standard therapy in treatment of complicated skin and skin structure infections. Antimicrobial Agents and Chemotherapy 2007;51:3612–3616.
Wilcox M. H., Corey G. R., Talbot G. H., Thye D., Friedland D., Baculik T. and CANVAS 2 investigators. CANVAS 2: The second phase III, randomized, double-blind study evaluating ceftaroline fosamil for the treatment of patients with complicated skin and skin structure infections. Journal of Antimicrobial Chemotherapy 2010;65 Suppl 4:iv53–65.
Wunderink R. G., Cammarata S. K., Oliphant T. H., Kollef M. H. and Linezolid Nosocomial Pneumonia Study Group. Continuation of a randomized, double-blind, multicenter study of linezolid versus vancomycin in the treatment of patients with nosocomial pneumonia. Clinical Therapeutics 2003;25:980–992.
Wunderink R. G., Mendelson M. H., Somero M. S., Fabian T. C., May A. K., Bhattacharyya H., Leeper K. V. Jr., Solomkin J. S. Early microbiological response to linezolid vs vancomycin in ventilator-associated pneumonia due to methicillin-resistant Staphylococcus aureus. Chest 2008;134:1200–1207.
Wunderink R. G., Niederman M. S., Kollef M. H., Shorr A. F., Kunkel M. J., Baruch A., McGee W. T., Reisman A., Chastre J. Linezolid in methicillin-resistant Staphylococcus aureus nosocomial pneumonia: A randomized, controlled study. Clinical Infectious Diseases 2012;54:621–629.
©2016 De Gruyter
This article is distributed under the terms of the Creative Commons Attribution Non-Commercial License, which permits unrestricted non-commercial use, distribution, and reproduction in any medium, provided the original work is properly cited.