
# The International Journal of Biostatistics

Ed. by Chambaz, Antoine / Hubbard, Alan E. / van der Laan, Mark J.


Online ISSN: 1557-4679
Volume 10, Issue 1

# Targeted Estimation of Nuisance Parameters to Obtain Valid Statistical Inference

Mark J. van der Laan
Published Online: 2014-02-11 | DOI: https://doi.org/10.1515/ijb-2012-0038

## Abstract

In order to obtain concrete results, we focus on estimation of the treatment specific mean, controlling for all measured baseline covariates, based on observing independent and identically distributed copies of a random variable consisting of baseline covariates, a subsequently assigned binary treatment, and a final outcome. The statistical model only assumes possible restrictions on the conditional distribution of treatment, given the covariates, the so-called propensity score. Estimators of the treatment specific mean involve estimation of the propensity score and/or estimation of the conditional mean of the outcome, given the treatment and covariates. In order to make these estimators asymptotically unbiased at any data distribution in the statistical model, it is essential to use data-adaptive estimators of these nuisance parameters such as ensemble learning, and specifically super-learning. Because such estimators involve optimal trade-off of bias and variance w.r.t. the infinite dimensional nuisance parameter itself, they result in a sub-optimal bias/variance trade-off for the resulting real-valued estimator of the estimand. We demonstrate that additional targeting of the estimators of these nuisance parameters guarantees that this bias for the estimand is second order and thereby allows us to prove theorems that establish asymptotic linearity of the estimator of the treatment specific mean under regularity conditions. These insights result in novel targeted minimum loss-based estimators (TMLEs) that use ensemble learning with additional targeted bias reduction to construct estimators of the nuisance parameters. In particular, we construct collaborative TMLEs (C-TMLEs) with known influence curve allowing for statistical inference, even though these C-TMLEs involve variable selection for the propensity score based on a criterion that measures how effective the resulting fit of the propensity score is in removing bias for the estimand. 
As a particular special case, we also demonstrate the required targeting of the propensity score for the inverse probability of treatment weighted estimator using super-learning to fit the propensity score.

## 1 Introduction and overview

This introduction provides an atlas for the contents of this article. It starts with formulating the role of estimation of nuisance parameters to obtain asymptotically linear estimators of a target parameter of interest. This demonstrates the need to target this estimator of the nuisance parameter in order to make the estimator of the target parameter asymptotically linear when the model for the nuisance parameter is large. The general approach to obtain such a targeted estimator of the nuisance parameter is described. Subsequently, we present our concrete example to which we will apply this general method for targeted estimation of the nuisance parameter, and for which we establish a number of formal theorems. Finally, we discuss the link to previous articles that concerned some kind of targeting of the estimator of the nuisance parameter, and we provide an organization of the remainder of the article.

## 1.1 The role of nuisance parameter estimation

Suppose we observe $n$ independent and identically distributed copies of a random variable $O$ with probability distribution $P_0$. In addition, assume that it is known that $P_0$ is an element of a statistical model $\mathcal{M}$ and that we want to estimate $\psi_0=\Psi(P_0)$ for a given target parameter mapping $\Psi:\mathcal{M}\to\mathbb{R}$. In order to guarantee that $P_0\in\mathcal{M}$ one is forced to incorporate only real knowledge, and, as a consequence, such models $\mathcal{M}$ are always very large and, in particular, infinite dimensional. We assume that the target parameter mapping is path-wise differentiable and let $D^*(P)$ denote the canonical gradient of the path-wise derivative of $\Psi$ at $P\in\mathcal{M}$ [1]. An estimator $\psi_n=\hat{\Psi}(P_n)$ is a functional $\hat{\Psi}$ applied to the empirical distribution $P_n$ of $O_1,\dots,O_n$ and can thus be represented as a mapping $\hat{\Psi}:\mathcal{M}_{NP}\to\mathbb{R}$ from the non-parametric statistical model $\mathcal{M}_{NP}$ into the real line. An estimator $\hat{\Psi}$ is efficient if and only if it is asymptotically linear with influence curve $D^*(P_0)$: $\psi_n-\psi_0=\frac{1}{n}\sum_{i=1}^{n}D^*(P_0)(O_i)+o_P(1/\sqrt{n}).$
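Asymptotic linearity is what licenses Wald-type inference. The following standard consequence, made explicit here for concreteness (it is used implicitly whenever the text speaks of confidence intervals based on a known influence curve), follows from the central limit theorem applied to the linear term:

```latex
% CLT implied by asymptotic linearity:
\sqrt{n}\,(\psi_n - \psi_0) \;\Rightarrow\; N\!\big(0,\sigma_0^2\big),
\qquad \sigma_0^2 = P_0\{D^*(P_0)\}^2 ,
% so that, with the empirical variance estimate
% \sigma_n^2 = \frac{1}{n}\sum_{i=1}^n \{D^*(\hat P_n)(O_i)\}^2
% for an estimate \hat P_n of the relevant parts of P_0,
\psi_n \pm z_{1-\alpha/2}\,\frac{\sigma_n}{\sqrt{n}}
% is an asymptotic (1-\alpha) confidence interval for \psi_0.
```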

The empirical mean of the influence curve $D^*(P_0)$ represents the first-order linear approximation of the estimator as a functional of the empirical distribution, and the derivation of the influence curve is a by-product of the application of the so-called functional delta-method for statistical inference based on functionals (i.e. $\hat{\Psi}$) of the empirical distribution [2–4].

Suppose that $\Psi(P)$ only depends on $P$ through a parameter $Q(P)$ and that the canonical gradient depends on $P$ only through $Q(P)$ and a nuisance parameter $g(P)$. The construction of an efficient estimator requires the construction of estimators $Q_n$ and $g_n$ of these nuisance parameters $Q_0$ and $g_0$, respectively. Targeted minimum loss-based estimation (TMLE) represents a method for construction of (e.g. efficient) asymptotically linear substitution estimators $\Psi(Q_n^*)$, where $Q_n^*$ is a targeted update of $Q_n$ that relies on the estimator $g_n$ [5–7]. The targeting of $Q_n$ is achieved by specifying a parametric submodel $\{Q_n(\epsilon):\epsilon\}\subset\{Q(P):P\in\mathcal{M}\}$ through the initial estimator $Q_n$ and a loss function $O\mapsto L(Q)(O)$ for $Q_0=\arg\min_Q P_0 L(Q)\equiv\int L(Q)(o)\,dP_0(o)$, so that the generalized score $\frac{d}{d\epsilon}L(Q_n(\epsilon))\big|_{\epsilon=0}$ spans a desired user-supplied estimating function $D(Q_n,g_n)$.
In addition, one may decide to target $g_n$ by specifying a parametric submodel $\{g_n(\epsilon_1):\epsilon_1\}\subset\{g(P):P\in\mathcal{M}\}$ and loss function $O\mapsto L(g)(O)$ for $g_0=\arg\min_g P_0 L(g)$, so that the generalized score $\frac{d}{d\epsilon_1}L(g_n(\epsilon_1))\big|_{\epsilon_1=0}$ spans another desired estimating function $D_1(g_n,\eta_n)$ for some estimator $\eta_n$ of a nuisance parameter $\eta$. The parameter $\epsilon$ is fitted with the MLE $\epsilon_n=\arg\min_{\epsilon} P_n L(Q_n(\epsilon))$, providing the first-step update $Q_n^1=Q_n(\epsilon_n)$, and similarly $\epsilon_{1,n}=\arg\min_{\epsilon_1} P_n L(g_n(\epsilon_1))$ provides $g_n^1=g_n(\epsilon_{1,n})$. This updating process, which maps a current fit $(Q_n,g_n)$ into an update $(Q_n^1,g_n^1)$, is iterated until convergence, at which point the TMLE $(Q_n^*,g_n^*)$ solves $P_n D(Q_n^*,g_n^*)=0$, i.e. the empirical mean of the estimating function equals zero at the final TMLE $(Q_n^*,g_n^*)$. If one also targeted $g_n$, then it also solves $P_n D_1(g_n^*,\eta_n)=0$. The submodel through $Q_n$ will depend on $g_n$, while the submodel through $g_n$ will depend on the additional nuisance parameter estimator $\eta_n$.
By setting $D(Q,g)$ equal to the efficient influence curve $D^*(Q,g)$, the resulting TMLE solves the efficient influence curve estimating equation $P_n D^*(Q_n^*,g_n)=0$ and thereby will be asymptotically efficient when $(Q_n,g_n)$ is consistent for $(Q_0,g_0)$, under appropriate regularity conditions; in this case the targeting of $g_n$ is not needed.

The latter is shown as follows. By the property of the canonical gradient (in fact, any gradient) we have $\mathrm{\Psi }\left({Q}_{n}^{\ast }\right)-\mathrm{\Psi }\left({Q}_{0}\right)=-{P}_{0}{D}^{\ast }\left({Q}_{n}^{\ast },{g}_{n}\right)+{R}_{n}\left({Q}_{n}^{\ast },{Q}_{0},{g}_{n},{g}_{0}\right)$, where ${R}_{n}$ involves integrals of second-order products of the differences $\left({Q}_{n}^{\ast }-{Q}_{0}\right)$ and $\left({g}_{n}-{g}_{0}\right)$. Combined with ${P}_{n}{D}^{\ast }\left({Q}_{n}^{\ast },{g}_{n}\right)=0$, this implies the following identity: $\mathrm{\Psi }\left({Q}_{n}^{\ast }\right)-\mathrm{\Psi }\left({Q}_{0}\right)=\left({P}_{n}-{P}_{0}\right){D}^{\ast }\left({Q}_{n}^{\ast },{g}_{n}\right)+{R}_{n}\left({Q}_{n}^{\ast },{Q}_{0},{g}_{n},{g}_{0}\right).$

The first term is an empirical process term that, under empirical process conditions (mentioned below), equals $\left({P}_{n}-{P}_{0}\right){D}^{\ast }\left(Q,g\right)$, where $\left(Q,g\right)$ denotes the limit of $\left({Q}_{n}^{\ast },{g}_{n}\right)$, plus an ${o}_{P}\left(1/\sqrt{n}\right)$-term. This then yields $\mathrm{\Psi }\left({Q}_{n}^{\ast }\right)-\mathrm{\Psi }\left({Q}_{0}\right)=\left({P}_{n}-{P}_{0}\right){D}^{\ast }\left(Q,g\right)+{R}_{n}\left({Q}_{n}^{\ast },{Q}_{0},{g}_{n},{g}_{0}\right)+{o}_{P}\left(1/\sqrt{n}\right).$

To obtain the desired asymptotic linearity of $\Psi(Q_n^*)$ one needs $R_n=o_P(1/\sqrt{n})$, which in general requires at a minimum that both nuisance parameters are consistently estimated: $Q=Q_0$ and $g=g_0$. However, in many problems of interest, $R_n$ only involves a cross-product of the differences $Q_n^*-Q_0$ and $g_n-g_0$, so that $R_n$ converges to zero if either $Q_n^*$ or $g_n$ is consistent: i.e. $Q=Q_0$ or $g=g_0$. In this latter case, the TMLE is so-called double robust. Either way, the consistency of the TMLE now relies on at least one of the nuisance parameter estimators being consistent, thereby requiring the use of non-parametric adaptive estimation such as super-learning [8–10] for at least one of the nuisance parameters. If only one of the nuisance parameter estimators is consistent, and we are in the double robust scenario, then the bias of the TMLE is of the same order as the bias of the consistent nuisance parameter estimator. However, if that nuisance parameter estimator is not based on a correctly specified parametric model, but instead is a data-adaptive estimator, then this bias will converge to zero at a rate slower than $1/\sqrt{n}$: i.e. $\sqrt{n}R_n$ converges to infinity as $n\to\infty$. Thus, in that case, the estimator of the target parameter may be overly biased and thereby fail to be asymptotically linear.
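The double robustness just described is easy to see numerically. The following toy simulation (our own illustration, anticipating the treatment specific mean example formulated later in this introduction; the data-generating process and function names are ours) evaluates the estimating equation based estimator $P_n\{A/\bar{g}\,(Y-\bar{Q})+\bar{Q}\}$ under the three scenarios:

```python
import numpy as np

# Toy demonstration (our own, not from the article) that the augmented IPTW
# estimator of E[E(Y | A=1, W)] is consistent if either the outcome
# regression or the propensity score is consistent, but not if both fail.

def expit(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(42)
n = 500_000
W = rng.normal(size=n)
gbar0 = expit(W)                              # true propensity score P(A=1|W)
A = rng.binomial(1, gbar0)

def Qbar0(a, w):                              # true outcome regression
    return expit(0.2 + 0.5 * w + 0.3 * a)

Y = rng.binomial(1, Qbar0(A, W))
psi0 = Qbar0(1, W).mean()                     # Monte Carlo value of the estimand

def aiptw(Qbar1, gbar):
    # P_n [ A/gbar (Y - Qbar1) + Qbar1 ]: plug-in plus the empirical mean of
    # the first component of the efficient influence curve
    return np.mean(A / gbar * (Y - Qbar1) + Qbar1)

Qbar_bad = np.full(n, Y.mean())               # misspecified outcome regression
gbar_bad = np.full(n, A.mean())               # misspecified propensity score

est_g_ok = aiptw(Qbar_bad, gbar0)             # g consistent: still consistent
est_Q_ok = aiptw(Qbar0(1, W), gbar_bad)       # Q consistent: still consistent
est_both_bad = aiptw(Qbar_bad, gbar_bad)      # both wrong: biased (confounding)
```

With only one consistent nuisance estimator the estimate stays near the truth; with both misspecified it collapses to the confounded mean $E(Y\mid A=1)$.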

## 1.2 Targeting the fit of the nuisance parameter: general approach

In this article, we demonstrate that if $Q\ne Q_0$, then it is essential that the consistent nuisance parameter estimator $g_n$ be targeted toward the estimand so that the bias for the estimand becomes second order: that is, in our new TMLEs relying on consistent estimation of $g_0$ presented in this article, one simultaneously updates $g_n$ into a $g_n^*$ so that certain smooth functionals of $g_n^*$, derived from the study of $R_n$, are asymptotically linear under appropriate conditions. Even if both estimators $Q_n^*$ and $g_n^*$ are consistent, but $Q_n^*$ converges at a slower rate than $g_n^*$, this targeting of the nuisance parameter estimator may still remove finite sample bias for the estimand. In addition, we also present such a TMLE when relying only on one of the nuisance parameters being consistently estimated, without knowing which one: i.e. either $Q=Q_0$ or $g=g_0$. The same argument applies to other double robust estimators, such as estimating equation based estimators and inverse probability of treatment weighted (IPTW) estimators [11–16]. In fact, we demonstrate such a targeted IPTW-estimator in the next section.

The current article concerns the construction of such targeted IPTW-estimators and TMLEs that are asymptotically linear under regularity conditions, even when only one of the nuisance parameter estimators is consistent and the estimators of the nuisance parameters are highly data adaptive. In order to be concrete, we focus on a particular example, for which we can present the second-order term $R_n$ mentioned above in explicit form and thereby develop the concrete form of the TMLE.

The same approach for construction of such TMLE can be carried out in much greater generality, but that is beyond the scope of this article. Nonetheless, it is helpful for the reader to know that the general approach is the following (considering the case that $g={g}_{0}$, but Q can be misspecified): (1) approximate ${R}_{n}\left({Q}_{n}^{\ast },{Q}_{0},{g}_{n}^{\ast },{g}_{0}\right)={\mathrm{\Phi }}_{0,n}\left({g}_{n}^{\ast }\right)-{\mathrm{\Phi }}_{0,n}\left({g}_{0}\right)+{R}_{1,n}$ for some mapping ${\mathrm{\Phi }}_{0,n}$ that depends on ${P}_{0}$ (e.g. through ${Q}_{0}$) and the data (e.g. ${Q}_{n}^{\ast },{g}_{n}^{\ast }$), and where ${R}_{1,n}$ is a second-order term so that it is reasonable to assume ${R}_{1,n}={o}_{P}\left(1/\sqrt{n}\right)$; (2) approximate ${\mathrm{\Phi }}_{0,n}\left({g}_{n}^{\ast }\right)-{\mathrm{\Phi }}_{0,n}\left({g}_{0}\right)={\mathrm{\Phi }}_{n}\left({g}_{n}^{\ast }\right)-{\mathrm{\Phi }}_{n}\left({g}_{0}\right)+{R}_{2,n}$, where ${R}_{2,n}$ is a second-order term and ${\mathrm{\Phi }}_{n}$ is now a known (only based on data) mapping approximating ${\mathrm{\Phi }}_{0}$; (3) construct ${g}_{n}^{\ast }$ so that it is a TMLE of the target parameter ${\mathrm{\Phi }}_{n}\left({g}_{0}\right)$ thereby allowing an expansion ${\mathrm{\Phi }}_{n}\left({g}_{n}^{\ast }\right)-{\mathrm{\Phi }}_{n}\left({g}_{0}\right)=\left({P}_{n}-{P}_{0}\right){D}_{1,n}\left({P}_{0}\right)+{R}_{3,n}$ with ${D}_{1,n}\left({P}_{0}\right)$ being the efficient influence curve of ${\mathrm{\Phi }}_{n}\left({g}_{0}\right)$. That is, in step 3, ${g}_{n}^{\ast }$ is iteratively updated to solve ${P}_{n}{D}_{1,n}\left({g}_{n}^{\ast },{\mathrm{\eta }}_{n}\right)=0$ with ${D}_{1,n}\left({P}_{0}\right)$ depending on ${P}_{0}$ through ${g}_{0}$ and a nuisance parameter ${\mathrm{\eta }}_{0}$, so that ${\mathrm{\Phi }}_{n}\left({g}_{n}^{\ast }\right)$ is an asymptotically linear estimator of ${\mathrm{\Phi }}_{n}\left({g}_{0}\right)$ under regularity conditions. 
After these three steps, we have that ${R}_{n}\left({Q}_{n}^{\ast },{Q}_{0},{g}_{n}^{\ast },{g}_{0}\right)=\left({P}_{n}-{P}_{0}\right){D}_{1,n}\left({P}_{0}\right)+{R}_{1,n}+{R}_{2,n}+{R}_{3,n}$, where ${R}_{1,n}+{R}_{2,n}+{R}_{3,n}={o}_{P}\left(1/\sqrt{n}\right)$, and these steps provide us with the parameter ${\mathrm{\Phi }}_{n}\left({g}_{0}\right)$ that needs to be targeted by ${g}_{n}^{\ast }$, thereby telling us how to target ${g}_{n}^{\ast }$ in the TMLE of ${\mathrm{\psi }}_{0}$. In addition, we can then conclude that this TMLE is asymptotically linear with known influence curve ${D}^{\ast }\left(Q,{g}_{0}\right)+{D}_{1}\left({P}_{0}\right)$, where ${D}_{1}\left({P}_{0}\right)$ represents the limit of the efficient influence curve ${D}_{1,n}\left({P}_{0}\right)$ of ${\mathrm{\Phi }}_{n}\left({g}_{0}\right)$: $\mathrm{\Psi }\left({Q}_{n}^{\ast }\right)-\mathrm{\Psi }\left({Q}_{0}\right)=\left({P}_{n}-{P}_{0}\right)\left\{{D}^{\ast }\left(Q,{g}_{0}\right)+{D}_{1}\left({P}_{0}\right)\right\}+{o}_{P}\left(1/\sqrt{n}\right)$.

## 1.3 The concrete example

Let us now formulate the concrete example we will cover in this article. Let $O=(W,A,Y)\sim P_0$, with $W$ baseline covariates, $A$ a binary treatment, and $Y$ a final outcome. Let $\mathcal{M}$ be a model that makes at most some assumptions about the conditional distribution of $A$, given $W$, but leaves the marginal distribution of $W$ and the conditional distribution of $Y$, given $A,W$, unspecified. Let $\Psi:\mathcal{M}\to\mathbb{R}$ be defined as $\Psi(P)=E_P E_P(Y|A=1,W)$, the so-called treatment specific mean controlling for the baseline covariates. The canonical gradient, also called the efficient influence curve, of $\Psi$ at $P$ is given by $D^*(P)(O)=A/g(1|W)(Y-\bar{Q}(1,W))+\bar{Q}(1,W)-\Psi(P)$, where $g(1|W)=P(A=1|W)$ is the propensity score and $\bar{Q}(a,W)=E_P(Y|A=a,W)$ is the outcome regression [13]. Let $Q=(Q_W,\bar{Q})$, where $Q_W$ is the marginal distribution of $W$, and note that $\Psi(P)$ only depends on $P$ through $Q=Q(P)$. For convenience, we will denote the target parameter with $\Psi(Q)$ in order not to have to introduce additional notation. A targeted minimum loss-based estimator (TMLE) is a plug-in estimator $\Psi(Q_n^*)$, where $Q_n^*$ is an update of an initial estimator $Q_n$ that relies on an estimator $g_n$ of $g_0$, and it has the property that it solves $P_n D^*(Q_n^*,g_n)=0$, where we use the notation $Pf=\int f(o)\,dP(o)$.
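For this example the targeting step of $\bar{Q}_n$ admits a compact implementation. The sketch below is our own illustration, assuming $Y\in[0,1]$ and the standard logistic fluctuation submodel with clever covariate $A/\bar{g}(W)$ (function names are ours): it updates an initial fit of $\bar{Q}(1,W)$ so that the final plug-in satisfies $P_n D^*(Q_n^*,g_n)=0$.

```python
import numpy as np

def expit(x):
    # numerically safe inverse logit
    return 1.0 / (1.0 + np.exp(-np.clip(x, -500, 500)))

def logit(p):
    return np.log(p / (1.0 - p))

def tmle_treatment_specific_mean(A, Y, Qbar1, gbar):
    """TMLE of psi_0 = E[E(Y | A=1, W)], assuming Y in [0,1].

    Qbar1: initial fit of Qbar(1, W); gbar: fit of the propensity score.
    Uses the logistic fluctuation submodel
        logit Qbar_eps = logit Qbar + eps * H,  H(A, W) = A / gbar(W),
    whose generalized score is the first component of the efficient
    influence curve D*(Q, g)."""
    Q1 = np.clip(Qbar1, 1e-3, 1 - 1e-3)
    H1 = 1.0 / gbar                      # clever covariate evaluated at A = 1

    def score(eps):                      # strictly decreasing in eps
        return np.sum(A * H1 * (Y - expit(logit(Q1) + eps * H1)))

    lo, hi = -10.0, 10.0                 # bisection for the MLE of eps
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if score(mid) > 0 else (lo, mid)
    Qstar = expit(logit(Q1) + 0.5 * (lo + hi) * H1)

    psi = Qstar.mean()                   # substitution estimator Psi(Q_n^*)
    eic = A * H1 * (Y - Qstar) + Qstar - psi
    return psi, eic                      # P_n eic = 0 up to numerical error
```

The empirical variance of the returned influence-curve values, divided by $n$, then estimates the variance of the estimator, yielding Wald-type confidence intervals when the conditions discussed in this article hold.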

For this particular example, such TMLEs are presented in Scharfstein et al. [17]; van der Laan and Rubin [7]; Bembom et al. [18–21]; Rosenblum and van der Laan [22]; Sekhon et al. [23]; van der Laan and Rose [6, 24]. Since $P_0 D^*(Q,g)=\psi_0-\Psi(Q)+P_0(\bar{Q}_0-\bar{Q})(\bar{g}_0-\bar{g})/\bar{g}$ [25, 26], where we use the notation $\bar{g}(W)=g(1|W)$ and $\bar{Q}(W)=\bar{Q}(1,W)$, and $P_n D^*(Q_n^*,g_n)=0$, we obtain the identity: $\Psi(Q_n^*)-\psi_0=(P_n-P_0)D^*(Q_n^*,g_n)+P_0(\bar{Q}_0-\bar{Q}_n^*)(\bar{g}_0-\bar{g}_n)/\bar{g}_n.$ (1)

The first term equals $(P_n-P_0)D^*(Q,g)+o_P(1/\sqrt{n})$ if $D^*(Q_n^*,g_n)$ falls in a $P_0$-Donsker class with probability tending to 1, and $P_0\{D^*(Q_n^*,g_n)-D^*(Q,g)\}^2\to 0$ in probability as $n\to\infty$ [4, 27]. If $\bar{Q}_n^*$ and $\bar{g}_n$ are consistent for the true $\bar{Q}_0$ and $\bar{g}_0$, respectively, then the second term is a second-order term. If one now assumes that this second-order term is $o_P(1/\sqrt{n})$, then the TMLE is asymptotically efficient. This provides the general basis for proving asymptotic efficiency of the TMLE when both $Q_0$ and $g_0$ are consistently estimated.

However, if only one of these nuisance parameter estimators is consistent, then the second term is still a first-order term, and it remains to establish that it is also asymptotically linear with a second-order remainder. For the sake of discussion, suppose that $\bar{Q}_n^*$ converges to a wrong $\bar{Q}$ while $\bar{g}_n$ is consistent. In that case, this remainder behaves in first order as $P_0(\bar{Q}_0-\bar{Q})(\bar{g}_n-\bar{g}_0)/\bar{g}_0$. To establish that such a term is asymptotically linear requires that $\bar{g}_n$ solve a particular estimating equation: that is, $\bar{g}_n$ needs to be a TMLE itself, targeting the required smooth functional of $g_0$. This is naturally achieved within the TMLE framework by specifying a submodel through $g_n$ and a loss function with the appropriate generalized score, so that a TMLE update step involves updating both $Q_n$ and $g_n$, and the iterative TMLE algorithm now results in a final TMLE $(Q_n^*,g_n^*)$, not only solving $P_n D^*(Q_n^*,g_n^*)=0$ but also the additional equations that allow us to establish asymptotic linearity of the desired smooth functional of $g_n^*$: see the general description of TMLE above.
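In the same way that $\bar{Q}_n$ is fluctuated, the targeting of $\bar{g}_n$ can be sketched as a one-dimensional logistic fluctuation fitted by MLE. The sketch below is illustrative only: the function names and the stand-in clever covariate `H` are ours, whereas the article derives the precise covariate (an estimate of $(\bar{Q}_0-\bar{Q})/\bar{g}_0$) from the first-order term above. Given any such covariate, the targeted fit solves $P_n H(A-\bar{g}_n^*)=0$.

```python
import numpy as np

def expit(x):
    # numerically safe inverse logit
    return 1.0 / (1.0 + np.exp(-np.clip(x, -500, 500)))

def logit(p):
    return np.log(p / (1.0 - p))

def target_gbar(A, gbar, H):
    """Targeted update of a propensity fit gbar(W) = P(A=1 | W).

    Fluctuates along the logistic submodel logit gbar_eps = logit gbar + eps*H
    and fits eps by MLE for the log-likelihood loss of A, so that the targeted
    fit solves the score equation sum_i H_i (A_i - gbar_eps(W_i)) = 0."""
    g = np.clip(gbar, 1e-3, 1 - 1e-3)   # keep the fit bounded away from 0 and 1

    def score(eps):                      # strictly decreasing in eps
        return np.sum(H * (A - expit(logit(g) + eps * H)))

    lo, hi = -20.0, 20.0                 # bracket for the MLE of eps
    for _ in range(100):                 # bisection: solve score(eps) = 0
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if score(mid) > 0 else (lo, mid)
    return expit(logit(g) + 0.5 * (lo + hi) * H)
```

Because the submodel contains the initial fit at $\epsilon=0$, the update can only improve the empirical log-likelihood, while additionally solving the targeted score equation.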

In this article, we present TMLE that targets ${g}_{n}$ in a manner that allows us to prove the desired asymptotic linearity of the second term in the right-hand side of eq. (1) when either ${\stackrel{ˉ}{g}}_{n}$ or ${\stackrel{ˉ}{Q}}_{n}$ is consistent, under conditions that require specified second-order terms to be ${o}_{P}\left(1/\sqrt{n}\right)$. The latter type of regularity conditions are typical for the construction of asymptotically linear estimators and are therefore considered appropriate for the sake of this article. Though it is of interest to study cases in which these second-order terms cannot be assumed to be ${o}_{P}\left(1/\sqrt{n}\right)$, this is beyond the scope of this article.

## 1.4 Relation to current literature on targeted nuisance parameter estimators

The construction of a TMLE that utilizes targeting of the nuisance parameter estimator $g_n$ has been carried out in earlier papers. For example, in van der Laan and Rubin [7], we target $g_n$ to obtain a TMLE that, beyond being double robust and locally efficient, also equals the IPTW-estimator. In Gruber and van der Laan [29], we target $g_n$ to guarantee that the TMLE, beyond being double robust and locally efficient, also outperforms a user-supplied estimator, based on the original idea of Rotnitzky et al. [28]. The distinction of the current article from these previous articles is that $g_n^*$ is now targeted to guarantee that the TMLE remains asymptotically linear when $Q_n^*$ is misspecified. This task of targeting $g_n^*$ appears to be one step more complicated than in these previous articles, since the smooth functionals of $g_n^*$ that need to be targeted are themselves indexed by parameters of the true data distribution $P_0$, and thus unknown. As mentioned above, our strategy is to approximate these unknown smooth functionals by estimated smooth functionals and to develop a targeted estimator $g_n^*$ that targets this estimated parameter of $g_0$.

The TMLEs presented in this article are always iterative and thereby rely on convergence of the iterative updating algorithm. Since the empirical risk decreases at each updating step, such convergence is typically guaranteed by the existence of the MLE at each updating step (e.g. an MLE of the coefficient in a logistic regression). Either way, in this article, we assume this convergence holds. Since the assumptions of our theorems require $g_n^*(1|W)$ to be bounded away from zero, we demonstrate how this property can be achieved by using submodels for updating $g_n$ that guarantee it. Detailed simulations will appear in a future article.

## 1.5 Organization

The organization of this paper is as follows. In Section 2, we introduce a targeted IPTW-estimator that relies on an adaptive consistent estimator of $g_0$, and we establish its asymptotic linearity with known influence curve, allowing for the construction of asymptotically valid confidence intervals based on this adaptive IPTW-estimator. In the remainder of the article, we focus on the construction of TMLEs involving the targeting of $g_n$ needed to establish the asymptotic linearity of the resulting TMLE under appropriate conditions. In Section 3, we introduce a novel TMLE that assumes that the targeted adaptive estimator $g_n^*$ is consistent for $g_0$, and we establish its asymptotic linearity. In Section 4, we introduce a novel TMLE that only assumes that either the targeted $\bar{Q}_n^*$ or the targeted $\bar{g}_n^*$ is consistent, and we establish its asymptotic linearity with known influence curve. This TMLE needs to protect the asymptotic linearity under misspecification of either $g_n^*$ or $\bar{Q}_n^*$ and, as a consequence, relies on targeting of $g_n$ (in order to preserve asymptotic linearity when $\bar{Q}_n^*$ is inconsistent), but also on extra targeting of $\bar{Q}_n$ (in order to preserve asymptotic linearity when $\bar{Q}_n^*$ is consistent, but $g_n$ is inconsistent). The explicit form of the influence curve of this TMLE allows us to construct asymptotic confidence intervals. Since this result allows statistical inference in the statistical model that only assumes that one of the estimators is consistent, we refer to it as “double robust statistical inference”.
Even though double robust estimators have been extensively presented in the literature, double robust statistical inference in these large semi-parametric models has been a difficult topic: typically, the non-parametric bootstrap is suggested, but there is no theory supporting the non-parametric bootstrap as a valid method when the estimators rely on data-adaptive estimation.

In Section 5, we extend the TMLE of Section 3 (which relies on $g_n^*$ being consistent for $g_0$) to the case that $g_n^*$ converges to a possibly misspecified $g$, but one that suffices for consistent estimation of $\psi_0$ in the sense that $\Psi(\bar{Q}_n^*)$ will be consistent. We present a corresponding asymptotic linearity theorem for this TMLE, which is able to utilize the so-called collaborative double robustness of the efficient influence curve: $\Psi(Q)=\psi_0$ if $P_0 D^*(Q,g)=0$ and $g\in\mathcal{G}(Q,P_0)$ for a set $\mathcal{G}(Q,P_0)$ (including $g_0$). In order to construct a collaborative estimator $g_n^*$ that aims to converge to an element in $\mathcal{G}(Q_n^*,P_0)$ in collaboration with $Q_n^*$, we use the framework of collaborative targeted minimum loss-based estimation (C-TMLE) [20, 29–35]. Our asymptotic linearity theorem can then be applied to this C-TMLE. Again, even though C-TMLEs have been presented in the literature, statistical inference based on C-TMLEs has been another challenging topic, and Section 5 provides a C-TMLE with known influence curve. We conclude this article with a discussion. The proofs of the theorems are presented in the Appendix.

## 1.6 Notation

In the following sections, we use the following notation. We have $O=(W,A,Y)\sim P_0\in\mathcal{M}$, where $\mathcal{M}$ is a statistical model that makes assumptions only on the conditional distribution of $A$, given $W$. Let $g_0(a|W)=P_0(A=a|W)$ and $\bar{g}_0(W)=P_0(A=1|W)$. The target parameter is $\Psi:\mathcal{M}\to\mathbb{R}$ defined by $\Psi(P_0)=E_{Q_{W,0}}\bar{Q}_0(1,W)$, where $\bar{Q}_0(1,W)=E_{P_0}(Y|A=1,W)$, which will also be denoted by $\bar{Q}_0(W)$, and $Q_{W,0}$ is the distribution of $W$ under $P_0$. We also use the notation $\Psi(Q)$, where $Q=(Q_W,\bar{Q})$. In addition, $D^*(Q,g)$ denotes the efficient influence curve of $\Psi$ at $(Q,g)$.
We also use the following notation:

- $H_A(\bar{Q}^r,\bar{g})=\bar{Q}^r/\bar{g}$
- $H_0^r=\bar{Q}_0^r/\bar{g}_0$
- $D_A(\bar{Q}^r,\bar{g})(A,W)=H_A(\bar{Q}^r,\bar{g})(W)(A-\bar{g}(W))$
- $H_Y(\bar{g})(A,W)=A/\bar{g}(W)$
- $\bar{Q}_0^r(\bar{Q},\bar{g})=E_{P_0}(Y-\bar{Q}\mid A=1,\bar{g})$
- $\bar{Q}_0^r=\bar{Q}_0^r(\bar{Q},\bar{g}_0)$
- $\bar{g}_0^r(\bar{g},\bar{Q})=E_0(A\mid\bar{g},\bar{Q})$
- $\bar{Q}_0^r(\bar{g})=E_0(Y\mid A=1,\bar{g})$ (only in the IPTW Section 2)
- $\bar{Q}_0^r=\bar{Q}_0^r(\bar{g}_0)$ (only in the IPTW Section 2)
- $\|f\|_0=(P_0 f^2)^{1/2}$

## 2 Statistical inference for IPTW-estimator when using super-learning to fit treatment mechanism

We first describe an IPTW-estimator that uses super-learning to fit the treatment mechanism ${g}_{0}$. Subsequently, we present the same IPTW-estimator, now using a targeted update of the super-learner fit of ${g}_{0}$, and we present a theorem establishing the asymptotic linearity of this targeted IPTW-estimator under appropriate conditions. Finally, we discuss how this targeted IPTW-estimator compares with an IPTW-estimator that relies on a parametric model to fit the treatment mechanism.

## 2.1 An IPTW-estimator using super-learning to fit the treatment mechanism

We consider a simple IPTW-estimator $\stackrel{ˆ}{\mathrm{\Psi }}\left({P}_{n}\right)={P}_{n}D\left(\stackrel{ˆ}{g}\left({P}_{n}\right)\right)$, where $D\left(g\right)\left(O\right)=YA/\stackrel{ˉ}{g}\left(W\right)$, and $\stackrel{ˆ}{g}:{\mathcal{M}}_{NP}\phantom{\rule{thinmathspace}{0ex}}↦\phantom{\rule{thinmathspace}{0ex}}\mathcal{G}$ is an adaptive estimator of ${g}_{0}$ based on the log-likelihood loss function $L\left(g\right)\left(O\right)\equiv -\log g\left(A|W\right)$. For a general presentation of IPTW-estimators, we refer to Robins and Rotnitzky [11], van der Laan and Robins [13], and Hernan et al. [36]. We wish to establish conditions under which reliable statistical inference based on this estimator of ${\mathrm{\psi }}_{0}$ can be obtained. One might wish to estimate ${g}_{0}$ with ensemble learning and, in particular, super-learning, in which cross-validation [37] is used to determine the best weighted combination of a library of candidate estimators: van der Laan and Dudoit [8]; van der Laan et al. [9, 38, 39]; van der Vaart et al. [10]; Dudoit and van der Laan [40]; Polley et al. [41]; Polley and van der Laan [42]; van der Laan and Petersen [43]. The super-learner is a general template for the construction of an adaptive estimator based on three ingredients: a library of candidate estimators; a loss function whose expectation is minimized over the parameter space by the true parameter value; and a parametric family that defines "weighted" combinations of the estimators in the library. We start with a succinct description of a particular super-learner.
Consider a library of estimators ${\stackrel{ˆ}{g}}_{j}:{\mathcal{M}}_{NP}↦\mathcal{G}$, $j=1,\dots ,J$ and a family of weighted (on logistic scale) combinations of these estimators $\mathrm{L}\mathrm{o}\mathrm{g}\mathrm{i}\mathrm{t}\text{\hspace{0.17em}}{\stackrel{ˆ}{g}}_{\mathrm{\alpha }}\left(1|W\right)={\sum }_{j=1}^{J}{\mathrm{\alpha }}_{j}\phantom{\rule{thinmathspace}{0ex}}\mathrm{L}\mathrm{o}\mathrm{g}\mathrm{i}\mathrm{t}\text{\hspace{0.17em}}{\stackrel{ˆ}{g}}_{j}\left(1|W\right)$, indexed by vectors $\mathrm{\alpha }$ for which ${\mathrm{\alpha }}_{j}\in \left[0,1\right]$ and ${\sum }_{j}{\mathrm{\alpha }}_{j}=1$. Consider a random sample split ${B}_{n}\in \left\{0,1{\right\}}^{n}$ into a training sample $\left\{i:{B}_{n}\left(i\right)=0\right\}$ of size $n\left(1-p\right)$ and validation sample $\left\{i:{B}_{n}\left(i\right)=1\right\}$ of size np, and let ${P}_{n,{B}_{n}}^{1}$ and ${P}_{n,{B}_{n}}^{0}$ denote the empirical distribution of the validation sample and training sample, respectively. Define ${\mathrm{\alpha }}_{n}=arg\underset{\mathrm{\alpha }}{\phantom{\rule{thinmathspace}{0ex}}min}\phantom{\rule{thinmathspace}{0ex}}{E}_{{B}_{n}}{P}_{n,{B}_{n}}^{1}L\left({\stackrel{ˆ}{g}}_{\mathrm{\alpha }}\left({P}_{n,{B}_{n}}^{0}\right)\right)$ $=arg\underset{\mathrm{\alpha }}{\phantom{\rule{thinmathspace}{0ex}}min}\phantom{\rule{thinmathspace}{0ex}}{E}_{{B}_{n}}\frac{1}{np}\sum _{i:{B}_{n}\left(i\right)=1}L\left({\stackrel{ˆ}{g}}_{\mathrm{\alpha }}\left({P}_{n,{B}_{n}}^{0}\right)\right)\left({O}_{i}\right),$

as the weight vector that minimizes the cross-validated risk. The super-learner of ${g}_{0}$ is then defined as the estimator $\stackrel{ˆ}{g}\left({P}_{n}\right)={\stackrel{ˆ}{g}}_{{\mathrm{\alpha }}_{n}}\left({P}_{n}\right)$.
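The weight-selection step just described can be prototyped in a few lines. The following Python sketch is purely illustrative: `fit_mean` and `fit_logistic` are hypothetical stand-ins for a real library of candidate estimators, the data are simulated, and a single sample split is used, as in the description above. It selects the convex combination on the logit scale that minimizes the cross-validated log-likelihood loss.

```python
import numpy as np

rng = np.random.default_rng(0)
def expit(x): return 1 / (1 + np.exp(-x))
def logit(p): return np.log(p / (1 - p))

# simulate (W, A) with true propensity score P(A=1 | W) = expit(-0.5 + W)
n = 2000
W = rng.normal(size=n)
A = rng.binomial(1, expit(-0.5 + W))

# single sample split B_n: validation fraction p, training fraction 1 - p
p = 0.2
idx = rng.permutation(n)
val, tr = idx[: int(n * p)], idx[int(n * p):]

def fit_mean(Wt, At):                         # candidate 1: intercept-only estimator
    m = At.mean()
    return lambda w: np.full_like(w, m, dtype=float)

def fit_logistic(Wt, At, steps=500, lr=0.1):  # candidate 2: crude logistic fit (gradient ascent)
    b0 = b1 = 0.0
    for _ in range(steps):
        pr = expit(b0 + b1 * Wt)
        b0 += lr * np.mean(At - pr)
        b1 += lr * np.mean((At - pr) * Wt)
    return lambda w: expit(b0 + b1 * w)

# fit the library on the training sample, evaluate on the validation sample
fits = [f(W[tr], A[tr]) for f in (fit_mean, fit_logistic)]
G = np.column_stack([np.clip(f(W[val]), 1e-6, 1 - 1e-6) for f in fits])

def cv_loss(alpha):                           # cross-validated -log-likelihood of the
    g = expit(logit(G) @ alpha)               # logit-scale mixture Logit g_alpha = sum_j alpha_j Logit g_j
    return -np.mean(A[val] * np.log(g) + (1 - A[val]) * np.log(1 - g))

# grid search over the simplex {alpha_j in [0,1], sum_j alpha_j = 1} for J = 2 candidates
grid = np.linspace(0, 1, 101)
losses = [cv_loss(np.array([a, 1 - a])) for a in grid]
a_star = grid[int(np.argmin(losses))]
alpha_n = (a_star, 1 - a_star)
print("alpha_n =", alpha_n)
```

With more candidates, the grid search would be replaced by a proper constrained optimization over the simplex; the sketch only illustrates the cross-validated selection of $\alpha_n$.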

## 2.2 Asymptotic linearity of a targeted data-adaptive IPTW-estimator

The next theorem presents an IPTW-estimator that uses a targeted fit ${g}_{n}^{\ast }$ of ${g}_{0}$, obtained by updating an initial estimator ${g}_{n}$, and gives conditions under which this IPTW-estimator of ${\mathrm{\psi }}_{0}$ is asymptotically linear. For example, ${g}_{n}$ could be a super-learner of the type presented above. Even though such an IPTW-estimator uses a highly data-adaptive estimator ${g}_{n}$ whose behavior is hard to characterize, the theorem shows that its influence curve is known and can be well estimated.

Theorem 1 We consider a targeted IPTW-estimator $\stackrel{ˆ}{\mathrm{\Psi }}\left({P}_{n}\right)={P}_{n}D\left({g}_{n}^{\ast }\right)$, where $D\left(g\right)\left(O\right)=YA/g\left(A|W\right)$, and ${g}_{n}^{\ast }$ is an update of an initial estimator ${g}_{n}$ of ${g}_{0}\in \mathcal{G}$ defined below.

Definition of targeted estimator ${g}_{n}^{\ast }$: Let $\bar{Q}_n^{r\ast}$ be obtained by non-parametric estimation of the regression function $E_{P_0}(Y\mid A=1,\bar{g}_n(W))$, treating $\bar{g}_n$ as a fixed covariate (i.e. a function of W). This yields an estimator $H_n^{r\ast}\equiv \bar{Q}_n^{r\ast}/\bar{g}_n$ of $H_0^r=\bar{Q}_0^r/\bar{g}_0$, where $\bar{Q}_0^r=E_{P_0}(Y\mid A=1,\bar{g}_0)$. Consider the submodel $\operatorname{Logit}\bar{g}_n(\epsilon)=\operatorname{Logit}\bar{g}_n+\epsilon H_n^{r\ast}$, and fit $\epsilon$ with the MLE $\epsilon_n=\arg\max_{\epsilon} P_n\log g_n(\epsilon).$

We define $g_n^{\ast}=g_n(\epsilon_n)$ as the corresponding targeted update of $g_n$. This TMLE $g_n^{\ast}$ satisfies $P_n D_A(\bar{Q}_n^{r\ast},\bar{g}_n^{\ast})=0.$
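This targeted update is a one-dimensional logistic fluctuation, and its MLE solves a monotone score equation. The following Python sketch is illustrative only: the data are simulated and the covariate `H` is a hypothetical stand-in for $H_n^{r\ast}=\bar{Q}_n^{r\ast}/\bar{g}_n$. It verifies numerically that the update solves $P_n D_A(\bar{Q}_n^{r\ast},\bar{g}_n^{\ast})=0$.

```python
import numpy as np

rng = np.random.default_rng(1)
def expit(x): return 1 / (1 + np.exp(-x))
def logit(p): return np.log(p / (1 - p))

# simulated data; the initial fit gbar_n is deliberately misspecified
n = 1000
W = rng.normal(size=n)
A = rng.binomial(1, expit(0.3 + 0.8 * W))
gbar_n = np.clip(expit(0.5 * W), 0.05, 0.95)

# H stands in for H_n^{r*} = Qbar_n^{r*} / gbar_n (illustrative choice of Qbar^{r*})
H = np.exp(-np.abs(W)) / gbar_n

def gbar_eps(eps):                      # submodel: Logit gbar(eps) = Logit gbar + eps * H
    return expit(logit(gbar_n) + eps * H)

def score(eps):                         # derivative of P_n log g_n(eps); strictly decreasing in eps
    return np.mean(H * (A - gbar_eps(eps)))

lo, hi = -20.0, 20.0                    # bisection for the MLE eps_n (root of the score)
for _ in range(80):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if score(mid) > 0 else (lo, mid)
eps_n = 0.5 * (lo + hi)

gbar_star = gbar_eps(eps_n)             # targeted update gbar_n^*
print("P_n D_A =", np.mean(H * (A - gbar_star)))
```

Bisection is used rather than Newton's method because the score is globally decreasing in $\epsilon$ (its derivative is $-P_n H^2 \bar{g}(\epsilon)(1-\bar{g}(\epsilon))$), so the root is bracketed and the solve is numerically robust.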

Empirical process condition: Assume that $D\left({g}_{n}^{\ast }\right),{D}_{A}\left({\stackrel{ˉ}{Q}}_{n}^{r\ast },{\stackrel{ˉ}{g}}_{n}^{\ast }\right)$ fall in a ${P}_{0}$-Donsker class with probability tending to 1.

Negligibility of second-order terms: Define ${\stackrel{ˉ}{Q}}_{0,n}^{r}\equiv {E}_{{P}_{0}}\left(Y|A=1,{\stackrel{ˉ}{g}}_{0}\left(W\right),{\stackrel{ˉ}{g}}_{n}^{\ast }\left(W\right)\right)$. Assume ${\stackrel{ˉ}{g}}_{n}^{\ast }>\mathrm{\delta }>0$ with probability tending to 1 and assume $\parallel {\stackrel{ˉ}{Q}}_{n}^{r\ast }-{\stackrel{ˉ}{Q}}_{0}^{r}{\parallel }_{0}={o}_{P}\left(1\right)$ $\parallel {\stackrel{ˉ}{g}}_{n}^{\ast }-{\stackrel{ˉ}{g}}_{0}{\parallel }_{0}^{2}={o}_{P}\left(1/\sqrt{n}\right)$ $\parallel {\stackrel{ˉ}{Q}}_{n}^{r\ast }-{\stackrel{ˉ}{Q}}_{0}^{r}{\parallel }_{0}\parallel {\stackrel{ˉ}{g}}_{n}^{\ast }-{\stackrel{ˉ}{g}}_{0}{\parallel }_{0}={o}_{P}\left(1/\sqrt{n}\right)$ $\parallel {\stackrel{ˉ}{Q}}_{0,n}^{r}-{\stackrel{ˉ}{Q}}_{0}^{r}{\parallel }_{0}\parallel {\stackrel{ˉ}{g}}_{n}^{\ast }-{\stackrel{ˉ}{g}}_{0}{\parallel }_{0}={o}_{P}\left(1/\sqrt{n}\right).$

Then, $\stackrel{ˆ}{\mathrm{\Psi }}\left({P}_{n}\right)-{\mathrm{\psi }}_{0}=\left({P}_{n}-{P}_{0}\right)IC\left({P}_{0}\right)+{o}_{P}\left(1/\sqrt{n}\right),$

where $IC\left({P}_{0}\right)\left(O\right)=YA/{g}_{0}\left(A|W\right)-{\mathrm{\psi }}_{0}-{H}_{0}^{r}\left(W\right)\left(A-{\stackrel{ˉ}{g}}_{0}\left(W\right)\right).$

So under the conditions of this theorem, we can construct an asymptotic 0.95-confidence interval ${\mathrm{\psi }}_{n}±1.96{\mathrm{\sigma }}_{n}/\sqrt{n}$ based on this targeted IPTW-estimator ${\mathrm{\psi }}_{n}=\stackrel{ˆ}{\mathrm{\Psi }}\left({P}_{n}\right)$, where ${\mathrm{\sigma }}_{n}^{2}={P}_{n}I{C}_{n}^{2}=\frac{1}{n}\sum _{i=1}^{n}I{C}_{n}\left({O}_{i}{\right)}^{2},$

and $I{C}_{n}\left(O\right)=YA/{\stackrel{ˉ}{g}}_{n}^{\ast }\left(W\right)-{\mathrm{\psi }}_{n}-{H}_{n}^{r\ast }\left(W\right)\left(A-{\stackrel{ˉ}{g}}_{n}^{\ast }\left(W\right)\right)$ is the plug-in estimator of the influence curve $IC\left({P}_{0}\right)$ obtained by plugging in ${g}_{n}$ or ${g}_{n}^{\ast }$ for ${g}_{0}$ and ${\stackrel{ˉ}{Q}}_{n}^{r\ast }$ for ${\stackrel{ˉ}{Q}}_{0}^{r}$.
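The point estimate, influence-curve-based standard error, and confidence interval follow directly from the displayed formulas. In the self-contained Python sketch below, the data are simulated and, for illustration, the true $\bar{g}_0$ and $\bar{Q}_0$ are plugged in where the targeted fits $\bar{g}_n^{\ast}$ and $H_n^{r\ast}$ would go in practice.

```python
import numpy as np

rng = np.random.default_rng(2)
def expit(x): return 1 / (1 + np.exp(-x))

# simulate O = (W, A, Y); since gbar_0(W) is strictly monotone in W here,
# E(Y | A=1, gbar_0(W)) coincides with E(Y | A=1, W)
n = 5000
W = rng.normal(size=n)
gbar0 = expit(0.4 * W)
A = rng.binomial(1, gbar0)
Qbar0 = expit(-0.2 + W)                              # E(Y | A=1, W)
Y = rng.binomial(1, np.where(A == 1, Qbar0, expit(-0.5 + 0.5 * W)))
psi0 = Qbar0.mean()                                  # sample-level target E_W Qbar_0(1, W)

# for illustration, plug the truth in where the targeted fits would go
gbar_star = gbar0                                    # stands in for gbar_n^*
H_r = Qbar0 / gbar_star                              # stands in for H_n^{r*} = Qbar_n^{r*} / gbar_n

psi_n = np.mean(Y * A / gbar_star)                   # targeted IPTW estimate
IC = Y * A / gbar_star - psi_n - H_r * (A - gbar_star)
se = np.sqrt(np.mean(IC**2) / n)                     # sigma_n / sqrt(n)
ci = (psi_n - 1.96 * se, psi_n + 1.96 * se)
print("psi_n =", psi_n, " 95% CI =", ci)
```

Note that the augmentation term $-H^r(W)(A-\bar{g}(W))$ enters the variance estimate even though it does not change the point estimate; omitting it would give an incorrect standard error for the targeted estimator.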

Regarding the displayed second-order term conditions, we note that these are satisfied if ${\stackrel{ˉ}{g}}_{n}^{\ast }-{\stackrel{ˉ}{g}}_{0}$ converges to zero w.r.t. the ${L}^{2}\left({P}_{0}\right)$-norm at rate ${o}_{P}\left({n}^{-1/4}\right)$, ${\stackrel{ˉ}{g}}_{n}^{\ast }>\mathrm{\delta }>0$ for some $\mathrm{\delta }>0$ with probability tending to 1 as $n\to \mathrm{\infty }$, and the product of the rate at which ${\stackrel{ˉ}{g}}_{n}^{\ast }$ converges to ${\stackrel{ˉ}{g}}_{0}$ and the rate at which $\left({\stackrel{ˉ}{Q}}_{n}^{r\ast },{\stackrel{ˉ}{Q}}_{0,n}^{r}\right)$ converges to ${\stackrel{ˉ}{Q}}_{0}^{r}$ is ${o}_{P}\left(1/\sqrt{n}\right)$.

Regarding the empirical process condition, we note that an example of a Donsker class is the class of multivariate real-valued functions with uniform sectional variation norm bounded by a universal constant [44]. It is important to note that if each estimator in the library falls in such a class, then also the convex combinations fall in that same class [4]. So this Donsker condition will hold if it holds for each of the candidate estimators in the library of the super-learner.

## 2.3 Comparison of targeted data-adaptive IPTW and an IPTW using parametric model

Consider an IPTW-estimator using an MLE ${g}_{n,1}$ according to a parametric model for ${g}_{0}$, and let us contrast this IPTW-estimator with the IPTW-estimator defined in the above theorem based on an initial super-learner ${g}_{n}$ that includes ${g}_{n,1}$ in its library of estimators. Let us first consider the case that the parametric model is correctly specified. In that case ${g}_{n,1}$ converges to ${g}_{0}$ at the parametric rate $1/\sqrt{n}$. From the oracle inequality for cross-validation [8, 10, 38], it follows that ${g}_{n}$ also converges to ${g}_{0}$ at the rate $1/\sqrt{n}$, possibly up to a $\sqrt{\log n}$-factor in case the number of algorithms in the library is of the order ${n}^{q}$ for some fixed q. As a consequence, all the consistency and second-order term conditions for the IPTW-estimator using a targeted ${g}_{n}^{\ast }$ based on ${g}_{n}$ hold. If one uses estimators in the library of algorithms whose uniform sectional variation norm is smaller than a constant $M<\mathrm{\infty }$ with probability tending to 1, then a weighted average of these estimators also has uniform sectional variation norm smaller than $M<\mathrm{\infty }$ with probability tending to 1. Thus, in that case we will also have that $D\left({g}_{n}^{\ast }\right),{D}_{A}\left({\stackrel{ˉ}{Q}}_{n}^{r\ast },{\stackrel{ˉ}{g}}_{n}^{\ast }\right)$ fall in a ${P}_{0}$-Donsker class. Examples of estimators that control the uniform sectional variation norm are parametric models with fewer than K main terms that themselves have bounded uniform sectional variation norm, but also penalized least-squares estimators (e.g. Lasso) using basis functions with bounded uniform sectional variation norm; moreover, one could map any estimator into this space of functions with universally bounded uniform sectional variation norm through a smoothing operation.
Thus, under this restriction on the library, the IPTW-estimator using the super-learner is asymptotically linear with influence curve $IC\left({P}_{0}\right)\left(O\right)$ as stated in the theorem. We note that $IC\left({P}_{0}\right)$ is the efficient influence curve for the target parameter ${E}_{{P}_{0}}{E}_{{P}_{0}}\left(Y|A=1,{\stackrel{ˉ}{g}}_{0}\left(W\right)\right)$ if the observed data were $\left({\stackrel{ˉ}{g}}_{0}\left(W\right),A,Y\right)$ instead of $O=\left(W,A,Y\right)$.

The parametric IPTW-estimator is asymptotically linear with influence curve $O\phantom{\rule{thinmathspace}{0ex}}↦YA/{g}_{0}\left(A|W\right)-{\mathrm{\psi }}_{0}-\mathrm{\Pi }\left(YA/{\stackrel{ˉ}{g}}_{0}\left(W\right)|{T}_{g}\right)$, where ${T}_{g}$ is the tangent space of the parametric model for ${g}_{0}$, and $\mathrm{\Pi }\left(f|{T}_{g}\right)$ denotes the projection of f onto ${T}_{g}$ in the Hilbert space ${L}_{0}^{2}\left({P}_{0}\right)$ [13]. This IPTW-estimator could be less or more efficient than the IPTW-estimator using the targeted super-learner depending on the actual tangent space of the parametric model.

For example, if the parametric model happens to have a score equal to $O\text{\hspace{0.17em}}↦\text{\hspace{0.17em}}{\stackrel{ˉ}{Q}}_{0}\left(W\right)\left(A/{\stackrel{ˉ}{g}}_{0}\left(W\right)-1\right)$, then the parametric IPTW-estimator would be asymptotically efficient. Of course, a standard parametric model is not tailored to correspond with such optimal scores, but this shows that we cannot claim superiority of one versus the other in the case that the parametric model for ${g}_{0}$ is correctly specified.

If, on the other hand, the parametric model is misspecified, then the IPTW-estimator using ${g}_{n,1}$ is inconsistent. However, the super-learner ${g}_{n}$ will be consistent if the library contains a non-parametric adaptive estimator, and will perform asymptotically as well as the oracle selector among all the weighted combinations of the algorithms in the library. To conclude, the IPTW-estimator using super-learning to estimate ${g}_{0}$ will be as good as the IPTW-estimator using a correctly specified parametric model (included in the library of the super-learner), but will remain consistent and asymptotically linear in a much larger model than the parametric IPTW-estimator relying on the true ${g}_{0}$ being an element of the parametric model.

## 3 Statistical inference for TMLE when using super-learning to consistently fit treatment mechanism

In the next subsection, we present a TMLE that targets the fit of the treatment mechanism, analogous to the targeted IPTW-estimator presented above. This subsection also presents a formal asymptotic linearity theorem demonstrating that, under reasonable conditions, this TMLE will be asymptotically linear even when ${\stackrel{ˉ}{Q}}_{n}^{\ast }$ is inconsistent. We conclude this section with a subsection showing how the iterative updating of the treatment mechanism can be carried out in such a way that the final fit of the treatment mechanism remains bounded away from zero, as required to obtain a stable estimator.

## 3.1 Asymptotic linearity of a TMLE using a targeted estimator of the treatment mechanism

The following theorem presents a novel TMLE and a corresponding asymptotic linearity result with specified influence curve, where we rely on consistent estimation of ${g}_{0}$. The TMLE still uses the same updating step for the estimator of ${\stackrel{ˉ}{Q}}_{0}$ as the regular TMLE [7], but uses a novel updating step for the estimator of ${g}_{0}$, analogous to the updating step of the IPTW-estimator in the previous section. We remind the reader of the importance of using logistic fluctuations as working submodels for ${\stackrel{ˉ}{Q}}_{0}$ in the definition of the TMLE, guaranteeing that the TMLE update stays within the bounded parameter space (see, e.g. Gruber and van der Laan [19]).

Theorem 2

Iterative targeted MLE of ${\mathrm{\psi }}_{0}$:

Definitions: Given $\stackrel{ˉ}{Q},\stackrel{ˉ}{g}$, let ${\stackrel{ˉ}{Q}}_{n}^{r}\left(\stackrel{ˉ}{Q},\stackrel{ˉ}{g}\right)$ be a consistent estimator of the regression ${\stackrel{ˉ}{Q}}_{0}^{r}\left(\stackrel{ˉ}{Q},\stackrel{ˉ}{g}\right)={E}_{{P}_{0}}\left(Y-\stackrel{ˉ}{Q}|A=1,\stackrel{ˉ}{g}\right)$ of $\left(Y-\stackrel{ˉ}{Q}\right)$ on $\stackrel{ˉ}{g}\left(W\right)$ and $A=1$. Let $\left({g}_{n},{\stackrel{ˉ}{Q}}_{n}\right)$ be an initial estimator of $\left({g}_{0},{\stackrel{ˉ}{Q}}_{0}\right)$.

Initialization: Let ${g}_{n}^{0}={g}_{n},{\stackrel{ˉ}{Q}}_{n}^{0}={\stackrel{ˉ}{Q}}_{n}$, and ${\stackrel{ˉ}{Q}}_{n}^{r0}={\stackrel{ˉ}{Q}}_{n}^{r}\left({\stackrel{ˉ}{Q}}_{n}^{0},{\stackrel{ˉ}{g}}_{n}^{0}\right)$. Let $k=0$.

Updating step for ${g}_{n}^{k}$: Consider the submodel $\operatorname{Logit}\bar{g}_n^k(\epsilon)=\operatorname{Logit}\bar{g}_n^k+\epsilon H_A(\bar{Q}_n^{rk},\bar{g}_n^k)$, and fit $\epsilon$ with the MLE $\epsilon_n=\arg\max_{\epsilon} P_n\log g_n^k(\epsilon).$

We define $g_n^{k+1}=g_n^k(\epsilon_n)$ as the corresponding update of $g_n^k$. This $g_n^{k+1}$ satisfies $\frac{1}{n}\sum_{i=1}^n H_A(\bar{Q}_n^{rk},\bar{g}_n^k)(W_i)\,(A_i-\bar{g}_n^{k+1}(W_i))=0.$

Updating step for $\bar{Q}_n^k$: Let $-L(\bar{Q})(O)\equiv Y\log\bar{Q}(A,W)+(1-Y)\log(1-\bar{Q}(A,W))$ be the quasi-log-likelihood loss function for $\bar{Q}_0=E_0(Y\mid A,W)$ (allowing Y to be continuous in $[0,1]$). Consider the submodel $\operatorname{Logit}\bar{Q}_n^k(\epsilon)=\operatorname{Logit}\bar{Q}_n^k+\epsilon H_Y(g_n^k)$, and let $\epsilon_n=\arg\min_{\epsilon} P_n L(\bar{Q}_n^k(\epsilon))$. Define $\bar{Q}_n^{k+1}=\bar{Q}_n^k(\epsilon_n)$ as the resulting update. Define $\bar{Q}_n^{rk+1}=\bar{Q}_n^r(\bar{Q}_n^{k+1},\bar{g}_n^{k+1})$.

Iterating till convergence: Now, set $k←k+1$, and iterate this updating process mapping a $\left({g}_{n}^{k},{\stackrel{ˉ}{Q}}_{n}^{k},{\stackrel{ˉ}{Q}}_{n}^{rk}\right)$ into $\left({g}_{n}^{k+1},{\stackrel{ˉ}{Q}}_{n}^{k+1},{\stackrel{ˉ}{Q}}_{n}^{rk+1}\right)$ till convergence or till large enough K so that the estimating equations (2) below are solved up till an ${o}_{P}\left(1/\sqrt{n}\right)$-term. Denote the limit of this iterative procedure with $\left({g}_{n}^{\ast },{\stackrel{ˉ}{Q}}_{n}^{\ast },{\stackrel{ˉ}{Q}}_{n}^{r\ast }\right)$.

Plug-in estimator: Let ${Q}_{n}^{\ast }=\left({Q}_{W,n},{\stackrel{ˉ}{Q}}_{n}^{\ast }\right)$, where ${Q}_{W,n}$ is the empirical distribution estimator of ${Q}_{W,0}$. The TMLE of ${\mathrm{\psi }}_{0}$ is defined as $\mathrm{\Psi }\left({Q}_{n}^{\ast }\right)$.

Estimating equations solved by TMLE: This TMLE $\left({Q}_{n}^{\ast },{g}_{n}^{\ast },{\stackrel{ˉ}{Q}}_{n}^{r\ast }\right)$ solves ${P}_{n}{D}^{\ast }\left({Q}_{n}^{\ast },{g}_{n}^{\ast }\right)=0$ ${P}_{n}{D}_{A}\left({\stackrel{ˉ}{Q}}_{n}^{r\ast },{\stackrel{ˉ}{g}}_{n}^{\ast }\right)=0.$(2)

Empirical process condition: Assume that ${D}^{\ast }\left({Q}_{n}^{\ast },{g}_{n}^{\ast }\right)$, ${D}_{A}\left({\stackrel{ˉ}{Q}}_{n}^{r\ast },{\stackrel{ˉ}{g}}_{n}^{\ast }\right)$ fall in a ${P}_{0}$-Donsker class with probability tending to 1 as $n\to \mathrm{\infty }$.

Negligibility of second-order terms: Define $\begin{array}{c}{\stackrel{ˉ}{Q}}_{0,n}^{r}\left(W\right)\equiv {E}_{{P}_{0}}\left(Y-\stackrel{ˉ}{Q}\left(1,W\right)|A=1,{\stackrel{ˉ}{g}}_{n}^{\ast }\left(W\right),{\stackrel{ˉ}{g}}_{0}\left(W\right)\right)\\ {\stackrel{ˉ}{Q}}_{0}^{r}\left(W\right)\equiv {E}_{{P}_{0}}\left(Y-\stackrel{ˉ}{Q}\left(1,W\right)|A=1,{\stackrel{ˉ}{g}}_{0}\left(W\right)\right)\\ {H}_{0,n}^{r}={\stackrel{ˉ}{Q}}_{0,n}^{r}/{\stackrel{ˉ}{g}}_{n}^{\ast }\\ {H}_{0}^{r}={\stackrel{ˉ}{Q}}_{0}^{r}/{\stackrel{ˉ}{g}}_{0},\end{array}$

where ${\stackrel{ˉ}{g}}_{n}^{\ast }\left(W\right)$ is treated as a fixed covariate (i.e. function of W) in the conditional expectation ${\stackrel{ˉ}{Q}}_{0,n}^{r}$. Assume that there exists a $\mathrm{\delta }>0$, so that ${\stackrel{ˉ}{g}}_{n}^{\ast }>\mathrm{\delta }>0$ with probability tending to 1, and $\parallel {\stackrel{ˉ}{Q}}_{n}^{\ast }-\stackrel{ˉ}{Q}{\parallel }_{0}={o}_{P}\left(1\right)$ $\parallel {\stackrel{ˉ}{Q}}_{n}^{r\ast }-{\stackrel{ˉ}{Q}}_{0}^{r}{\parallel }_{0}={o}_{P}\left(1\right)$ $\parallel {\stackrel{ˉ}{g}}_{n}^{\ast }-{\stackrel{ˉ}{g}}_{0}{\parallel }_{0}\parallel {\stackrel{ˉ}{Q}}_{n}^{\ast }-\stackrel{ˉ}{Q}{\parallel }_{0}={o}_{P}\left(1/\sqrt{n}\right)$ $\parallel {\stackrel{ˉ}{Q}}_{0,n}^{r}-{\stackrel{ˉ}{Q}}_{0}^{r}{\parallel }_{0}\parallel {\stackrel{ˉ}{g}}_{n}^{\ast }-{\stackrel{ˉ}{g}}_{0}{\parallel }_{0}={o}_{P}\left(1/\sqrt{n}\right)$ $\parallel {\stackrel{ˉ}{g}}_{n}^{\ast }-{\stackrel{ˉ}{g}}_{0}{\parallel }_{0}^{2}={o}_{P}\left(1/\sqrt{n}\right)$ $\parallel {\stackrel{ˉ}{Q}}_{n}^{r\ast }-{\stackrel{ˉ}{Q}}_{0}^{r}{\parallel }_{0}\parallel {\stackrel{ˉ}{g}}_{n}^{\ast }-{\stackrel{ˉ}{g}}_{0}{\parallel }_{0}={o}_{P}\left(1/\sqrt{n}\right).$

Then, $\mathrm{\Psi }\left({Q}_{n}^{\ast }\right)-{\mathrm{\psi }}_{0}=\left({P}_{n}-{P}_{0}\right)IC\left({P}_{0}\right)+{o}_{P}\left(1/\sqrt{n}\right),$

where $IC\left({P}_{0}\right)={D}^{\ast }\left(Q,{g}_{0}\right)-{D}_{A}\left({\stackrel{ˉ}{Q}}_{0}^{r},{\stackrel{ˉ}{g}}_{0}\right)$.

Thus, under the assumptions of this theorem, an asymptotic 0.95-confidence interval is given by ${\mathrm{\psi }}_{n}^{\ast }±1.96{\mathrm{\sigma }}_{n}/\sqrt{n}$, where ${\mathrm{\sigma }}_{n}^{2}={P}_{n}I{C}_{n}^{2}$, and $I{C}_{n}={D}^{\ast }\left({Q}_{n}^{\ast },{g}_{n}^{\ast }\right)-{D}_{A}\left({\stackrel{ˉ}{Q}}_{n}^{r\ast },{\stackrel{ˉ}{g}}_{n}^{\ast }\right)$.
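The iterative procedure of Theorem 2 is easy to prototype. The following Python sketch is illustrative only: the data are simulated, `bin_regress` is a crude binned regression standing in for a genuine non-parametric estimator of $\bar{Q}_0^r$, and the initial fits are deliberately misspecified. At the end it checks the two score equations solved by the final updating steps (the empirical analogues of eqs. (2)).

```python
import numpy as np

rng = np.random.default_rng(3)
def expit(x): return 1 / (1 + np.exp(-x))
def logit(p): return np.log(np.clip(p, 1e-9, 1 - 1e-9) / np.clip(1 - p, 1e-9, 1))

# simulated data
n = 4000
W = rng.normal(size=n)
A = rng.binomial(1, expit(0.5 * W))
Y = rng.binomial(1, expit(-0.3 + W))

# deliberately simple / misspecified initial fits
gbar = np.clip(expit(0.2 * W), 0.05, 0.95)      # initial gbar_n^0
Qbar1 = np.full(n, Y[A == 1].mean())            # initial Qbar_n^0(1, .)

def solve_eps(score):                           # bisection for the 1-dim MLE (score is decreasing)
    lo, hi = -50.0, 50.0
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if score(mid) > 0 else (lo, mid)
    return 0.5 * (lo + hi)

def bin_regress(x, y, grid, bins=10):           # crude stand-in for a non-parametric regression
    edges = np.quantile(x, np.linspace(0, 1, bins + 1))
    bx = np.clip(np.searchsorted(edges, x, side="right") - 1, 0, bins - 1)
    bg = np.clip(np.searchsorted(edges, grid, side="right") - 1, 0, bins - 1)
    means = np.array([y[bx == b].mean() if (bx == b).any() else y.mean() for b in range(bins)])
    return means[bg]

for k in range(20):                             # iterate the two updating steps
    # Qbar_n^{rk}: regress Y - Qbar on gbar among A == 1, evaluate at all W
    Qr = bin_regress(gbar[A == 1], (Y - Qbar1)[A == 1], gbar)
    H_A = Qr / gbar
    # update gbar along Logit gbar + eps * H_A (MLE for the treatment log-likelihood)
    eps = solve_eps(lambda e: np.mean(H_A * (A - expit(logit(gbar) + e * H_A))))
    gbar = expit(logit(gbar) + eps * H_A)
    # update Qbar(1, .) along Logit Qbar + eps * H_Y with H_Y = A / gbar
    H1 = 1.0 / gbar
    eps = solve_eps(lambda e: np.mean(A * H1 * (Y - expit(logit(Qbar1) + e * H1))))
    Qbar1 = expit(logit(Qbar1) + eps * H1)

psi = Qbar1.mean()                              # plug-in TMLE Psi(Q_n^*)
Dstar = A / gbar * (Y - Qbar1) + Qbar1 - psi    # efficient influence curve at the final fit
print("psi =", psi, " P_n D* =", np.mean(Dstar))
```

A real implementation would use a super-learner for both initial fits and for $\bar{Q}_n^r$; the point of the sketch is only the structure of the iteration and the fact that each logistic fluctuation exactly zeroes its score equation.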

## 3.2 Using a $\delta$-specific submodel for targeting g that guarantees the positivity condition

The following is an application of the constrained logistic regression approach of the type presented in Gruber and van der Laan [19] for the purpose of estimating ${\stackrel{ˉ}{g}}_{0}$ while respecting the constraint that ${\stackrel{ˉ}{g}}_{0}>\mathrm{\delta }>0$ for a known $\mathrm{\delta }>0$. Recall that $A\in \left\{0,1\right\}$. Suppose it is known that ${\stackrel{ˉ}{g}}_{0}\left(W\right)\in \left(\mathrm{\delta },1\right]$ for some $\mathrm{\delta }>0$, a condition the asymptotic linearity of our proposed estimators relies upon. Define ${A}_{\mathrm{\delta }}\equiv \frac{A-\mathrm{\delta }}{1-\mathrm{\delta }}$. We have ${\stackrel{ˉ}{g}}_{0}\left(W\right)=\mathrm{\delta }+\left(1-\mathrm{\delta }\right){\stackrel{ˉ}{g}}_{0,\mathrm{\delta }}$, where ${\stackrel{ˉ}{g}}_{0,\mathrm{\delta }}={E}_{0}\left({A}_{\mathrm{\delta }}|W\right)$ is a regression that is known to lie in $\left[0,1\right]$. Let ${g}_{\mathrm{\delta },n}^{0}$ be an initial estimator of the true conditional distribution ${g}_{\mathrm{\delta },0}$ of ${A}_{\mathrm{\delta }}$, given W, which implies the estimator ${\stackrel{ˉ}{g}}_{n}^{0}=\mathrm{\delta }+\left(1-\mathrm{\delta }\right){\stackrel{ˉ}{g}}_{\mathrm{\delta },n}^{0}$ of ${\stackrel{ˉ}{g}}_{0}$. Let $k=0$. Consider the following submodel for the conditional distribution of ${A}_{\mathrm{\delta }}$, given W, through a given estimator ${g}_{\mathrm{\delta },n}^{k}$: $\operatorname{Logit}\bar{g}_{\delta,n}^k(\epsilon)=\operatorname{Logit}\bar{g}_{\delta,n}^k+\epsilon H_A(\bar{Q}_n^{rk},\bar{g}_{\delta,n}^k).$

The MLE is simply obtained with a logistic regression of ${A}_{\mathrm{\delta }}$ on W (see, e.g. Gruber and van der Laan [19]) based on the quasi-log-likelihood loss function: $\epsilon_n=\arg\min_{\epsilon} P_n L(\bar{g}_{\delta,n}^k(\epsilon)),$

where $-L(\bar{g}_{\delta})(O)=A_{\delta}\log\bar{g}_{\delta}(W)+(1-A_{\delta})\log(1-\bar{g}_{\delta}(W))$

is the quasi-log-likelihood loss. The update $\bar{g}_{\delta,n}^{k+1}=\bar{g}_{\delta,n}^k(\epsilon_n)$ implies the update $\bar{g}_n^{k+1}=\delta+(1-\delta)\bar{g}_{\delta,n}^{k+1}$ of $\bar{g}_n^k=\delta+(1-\delta)\bar{g}_{\delta,n}^k$, and, by construction, $\bar{g}_n^{k+1}>\delta>0$. The above submodel $\bar{g}_n^k(\epsilon)=\delta+(1-\delta)\bar{g}_{\delta,n}^k(\epsilon)$ and corresponding loss function $L(\bar{g})=L(\bar{g}_{\delta})$ generate the same score equation as the submodel and loss function used in Theorem 2. Therefore, the TMLE algorithm presented in Theorem 2, now using this $\delta$-specific logistic regression model, solves the same estimating equations, so that Theorem 2 applies immediately. However, with this submodel we have guaranteed that $\bar{g}_n^k>\delta>0$ for all k in the iterative TMLE algorithm, and thereby that $\bar{g}_n^{\ast}>\delta>0$.
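A minimal numerical sketch of this $\delta$-specific targeting step follows; the data are simulated and the covariate `H` is a hypothetical stand-in for the clever covariate in the submodel above. It shows that the implied update of $\bar{g}_n$ stays above $\delta$ by construction while solving the quasi-log-likelihood score equation.

```python
import numpy as np

rng = np.random.default_rng(4)
def expit(x): return 1 / (1 + np.exp(-x))
def logit(p): return np.log(p / (1 - p))

n = 2000
delta = 0.1
W = rng.normal(size=n)
A = rng.binomial(1, delta + (1 - delta) * expit(W))   # truth satisfies gbar_0 > delta
A_d = (A - delta) / (1 - delta)                       # A_delta takes values -delta/(1-delta) and 1

gbar_d = np.clip(expit(0.3 * W), 0.01, 0.99)          # initial estimator of gbar_{delta,0}
# stand-in clever covariate, scaled by gbar = delta + (1-delta) gbar_d
H = np.exp(-W**2) / (delta + (1 - delta) * gbar_d)

def score(eps):                                       # quasi-log-likelihood score; decreasing in eps
    g = expit(logit(gbar_d) + eps * H)
    return np.mean(H * (A_d - g))

lo, hi = -20.0, 20.0                                  # bisection for the MLE eps_n
for _ in range(80):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if score(mid) > 0 else (lo, mid)
eps_n = 0.5 * (lo + hi)

gbar_d_new = expit(logit(gbar_d) + eps_n * H)         # update of gbar_{delta,n}
gbar_new = delta + (1 - delta) * gbar_d_new           # implied update of gbar_n
print("min gbar_n^{k+1} =", gbar_new.min())           # > delta by construction
```

The logistic fluctuation keeps $\bar{g}_{\delta,n}$ in $(0,1)$, so the affine map $\delta+(1-\delta)\bar{g}_{\delta,n}$ enforces the positivity bound at every iteration without any ad hoc truncation.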

## 4 Double robust statistical inference for TMLE when using super-learning to fit outcome regression and treatment mechanism

In this section, our aim is to present a TMLE that is asymptotically linear with known influence curve if either ${g}_{0}$ or ${Q}_{0}$ is consistently estimated, without requiring knowledge of which one. Again, this requires a novel way of targeting the estimators ${g}_{n}^{\ast },{\stackrel{ˉ}{Q}}_{n}^{\ast }$ in order to arrange that the relevant smooth functionals of these nuisance parameter estimators are indeed asymptotically linear under appropriate second-order term conditions. In this case, we also need to augment the submodel for the estimator of ${\stackrel{ˉ}{Q}}_{0}$ with another clever covariate: that is, our estimator of ${\stackrel{ˉ}{Q}}_{0}$ needs to be doubly targeted, once for solving the efficient influence curve equation, and once for achieving asymptotic linearity in the case that the estimator of ${g}_{0}$ is inconsistent.

Theorem 3

Definitions: For any given $\stackrel{ˉ}{g},\stackrel{ˉ}{Q}$, let ${\stackrel{ˉ}{g}}_{n}^{r}\left(\stackrel{ˉ}{g},\stackrel{ˉ}{Q}\right)$ and ${\stackrel{ˉ}{Q}}_{n}^{r}\left(\stackrel{ˉ}{g},\stackrel{ˉ}{Q}\right)$ be consistent estimators of ${\stackrel{ˉ}{g}}_{0}^{r}\left(\stackrel{ˉ}{g},\stackrel{ˉ}{Q}\right)={E}_{{P}_{0}}\left(A|\stackrel{ˉ}{Q},\stackrel{ˉ}{g}\right)$ and ${\stackrel{ˉ}{Q}}_{0}^{r}\left(\stackrel{ˉ}{g},\stackrel{ˉ}{Q}\right)={E}_{{P}_{0}}\left(Y-\stackrel{ˉ}{Q}|A=1,\stackrel{ˉ}{g}\right)$, respectively (e.g. using a super-learner or another non-parametric adaptive regression algorithm). Let ${\stackrel{ˉ}{Q}}_{n}^{r\ast }={\stackrel{ˉ}{Q}}_{n}^{r}\left({\stackrel{ˉ}{g}}_{n}^{\ast },{\stackrel{ˉ}{Q}}_{n}^{\ast }\right)$ and ${\stackrel{ˉ}{g}}_{n}^{r\ast }={\stackrel{ˉ}{g}}_{n}^{r}\left({\stackrel{ˉ}{g}}_{n}^{\ast },{\stackrel{ˉ}{Q}}_{n}^{\ast }\right)$ denote these estimators applied to the TMLEs $\left({\stackrel{ˉ}{g}}_{n}^{\ast },{\stackrel{ˉ}{Q}}_{n}^{\ast }\right)$ defined below.

Iterative targeted MLE of ${\mathrm{\psi }}_{0}$:

Initialization: Let $\left({g}_{n},{\stackrel{ˉ}{Q}}_{n}\right)$ be an initial estimator of $\left({g}_{0},{\stackrel{ˉ}{Q}}_{0}\right)$. Let ${g}_{n}^{0}={g}_{n}$, ${\stackrel{ˉ}{Q}}_{n}^{0}={\stackrel{ˉ}{Q}}_{n}$, and let $k=0$. Let ${\stackrel{ˉ}{g}}_{n}^{r,k}={\stackrel{ˉ}{g}}_{n}^{r}\left({\stackrel{ˉ}{g}}_{n}^{k},{\stackrel{ˉ}{Q}}_{n}^{k}\right)$ be obtained by non-parametrically regressing A on ${\stackrel{ˉ}{Q}}_{n}^{k},{\stackrel{ˉ}{g}}_{n}^{k}$. Let ${\stackrel{ˉ}{Q}}_{n}^{r,k}={\stackrel{ˉ}{Q}}_{n}^{r}\left({\stackrel{ˉ}{g}}_{n}^{k},{\stackrel{ˉ}{Q}}_{n}^{k}\right)$ be obtained by non-parametrically regressing $Y-{\stackrel{ˉ}{Q}}_{n}^{k}$ on ${\stackrel{ˉ}{g}}_{n}^{k}$ among observations with $A=1$.

Updating step: Consider the submodel $\operatorname{Logit}\bar{g}_n^k(\epsilon)=\operatorname{Logit}\bar{g}_n^k+\epsilon H_A(\bar{Q}_n^{r,k},\bar{g}_n^k)$, and fit $\epsilon$ with the MLE $\epsilon_{A,n}=\arg\max_{\epsilon} P_n\log g_n^k(\epsilon).$

Define the submodel $\operatorname{Logit}\bar{Q}_n^k(\epsilon)=\operatorname{Logit}\bar{Q}_n^k+\epsilon_1 H_Y(\bar{g}_n^k)+\epsilon_2 H_Y^1(\bar{g}_n^{r,k},\bar{g}_n^k)$, where $H_Y^1(\bar{g}^r,\bar{g})\equiv\frac{A}{\bar{g}^r}\,\frac{\bar{g}^r-\bar{g}}{\bar{g}}.$

Let $\epsilon_{Y,n}=\arg\min_{\epsilon} P_n L(\bar{Q}_n^k(\epsilon))$ be the MLE over $\epsilon=(\epsilon_1,\epsilon_2)$, where $L(\bar{Q})$ is the quasi-log-likelihood loss.

We define $g_n^{k+1}=g_n^k(\epsilon_{A,n})$ as the corresponding targeted update of $g_n^k$, and $\bar{Q}_n^{k+1}=\bar{Q}_n^k(\epsilon_{Y,n})$ as the corresponding update of $\bar{Q}_n^k$. Let $\bar{g}_n^{r,k+1}=\bar{g}_n^r(\bar{g}_n^{k+1},\bar{Q}_n^{k+1})$ and $\bar{Q}_n^{r,k+1}=\bar{Q}_n^r(\bar{g}_n^{k+1},\bar{Q}_n^{k+1})$.

Iterate till convergence: Now, set $k←k+1$, and iterate this updating process mapping a $\left({g}_{n}^{k},{\stackrel{ˉ}{Q}}_{n}^{k},{\stackrel{ˉ}{g}}_{n}^{rk},{\stackrel{ˉ}{Q}}_{n}^{rk}\right)$ into $\left({g}_{n}^{k+1},{\stackrel{ˉ}{Q}}_{n}^{k+1},{\stackrel{ˉ}{g}}_{n}^{rk+1},{\stackrel{ˉ}{Q}}_{n}^{rk+1}\right)$ till convergence or till large enough K so that the following three estimating equations are solved up till an ${o}_{P}\left(1/\sqrt{n}\right)$-term: $\begin{array}{c}{P}_{n}{D}^{\ast }\left({Q}_{n}^{K},{g}_{n}^{K}\right)={o}_{P}\left(1/\sqrt{n}\right)\\ {P}_{n}{D}_{A}\left({\stackrel{ˉ}{Q}}_{n}^{r,K},{\stackrel{ˉ}{g}}_{n}^{K}\right)={o}_{P}\left(1/\sqrt{n}\right)\\ {P}_{n}{D}_{Y}\left({\stackrel{ˉ}{Q}}_{n}^{K},{\stackrel{ˉ}{g}}_{n}^{r,K},{\stackrel{ˉ}{g}}_{n}^{K}\right)={o}_{P}\left(1/\sqrt{n}\right),\end{array}$

where $\begin{array}{c}{D}_{Y}\left(\stackrel{ˉ}{Q},{\stackrel{ˉ}{g}}_{0}^{r},\stackrel{ˉ}{g}\right)={H}_{Y}^{1}\left({\stackrel{ˉ}{g}}_{0}^{r},\stackrel{ˉ}{g}\right)\left(Y-\stackrel{ˉ}{Q}\right).\end{array}$

Final substitution estimator: Denote the limits of this iterative procedure with ${\stackrel{ˉ}{Q}}_{n}^{r\ast },{\stackrel{ˉ}{g}}_{n}^{r\ast },{g}_{n}^{\ast },{\stackrel{ˉ}{Q}}_{n}^{\ast }$. Let ${Q}_{n}^{\ast }=\left({Q}_{W,n},{\stackrel{ˉ}{Q}}_{n}^{\ast }\right)$, where ${Q}_{W,n}$ is the empirical distribution estimator of ${Q}_{W,0}$. The TMLE of ${\mathrm{\psi }}_{0}$ is defined as $\mathrm{\Psi }\left({Q}_{n}^{\ast }\right)$.

Equations solved by TMLE: $\begin{array}{c}{o}_{P}\left(1/\sqrt{n}\right)={P}_{n}{D}^{\ast }\left({Q}_{n}^{\ast },{g}_{n}^{\ast }\right)\\ {o}_{P}\left(1/\sqrt{n}\right)={P}_{n}{D}_{A}\left({\stackrel{ˉ}{Q}}_{n}^{r\ast },{\stackrel{ˉ}{g}}_{n}^{\ast }\right)\\ {o}_{P}\left(1/\sqrt{n}\right)={P}_{n}{D}_{Y}\left({\stackrel{ˉ}{Q}}_{n}^{\ast },{\stackrel{ˉ}{g}}_{n}^{r\ast },{\stackrel{ˉ}{g}}_{n}^{\ast }\right).\end{array}$

Empirical process condition: Assume that ${D}^{\ast }\left({Q}_{n}^{\ast },{g}_{n}^{\ast }\right)$, ${D}_{A}\left({\stackrel{ˉ}{Q}}_{n}^{r\ast },{\stackrel{ˉ}{g}}_{n}^{\ast }\right)$, ${D}_{Y}\left({\stackrel{ˉ}{Q}}_{n}^{\ast },{\stackrel{ˉ}{g}}_{n}^{r\ast },{\stackrel{ˉ}{g}}_{n}^{\ast }\right)$ fall in a ${P}_{0}$-Donsker class with probability tending to 1 as $n\to \mathrm{\infty }$.

Negligibility of second-order terms: Define ${\stackrel{ˉ}{Q}}_{0,n}^{r}={E}_{{P}_{0}}\left(Y-\stackrel{ˉ}{Q}|A=1,\stackrel{ˉ}{g},{\stackrel{ˉ}{g}}_{n}\right)$ and ${\stackrel{ˉ}{g}}_{0,n}^{r}={E}_{{P}_{0}}\left(A|\stackrel{ˉ}{g},\stackrel{ˉ}{Q},{\stackrel{ˉ}{Q}}_{n}^{\ast }\right)$. Assume that there exists a $\mathrm{\delta }>0$ so that ${\stackrel{ˉ}{g}}_{n}>\mathrm{\delta }>0$ with probability tending to 1, that ${\stackrel{ˉ}{g}}_{n}^{\ast },{\stackrel{ˉ}{Q}}_{n}^{\ast }$ are consistent for $\stackrel{ˉ}{g},\stackrel{ˉ}{Q}$ w.r.t. $\parallel \cdot {\parallel }_{0}$-norm, where either $\stackrel{ˉ}{g}={\stackrel{ˉ}{g}}_{0}$ or $\stackrel{ˉ}{Q}={\stackrel{ˉ}{Q}}_{0}$, and assume the following consistency and second-order conditions: $\begin{array}{c}\parallel {\stackrel{ˉ}{Q}}_{n}^{\ast }-\stackrel{ˉ}{Q}{\parallel }_{0}={o}_{P}\left(1\right)\\ \parallel {\stackrel{ˉ}{Q}}_{n}^{r\ast }-{\stackrel{ˉ}{Q}}_{0}^{r}{\parallel }_{0}={o}_{P}\left(1\right)\\ \parallel {\stackrel{ˉ}{g}}_{n}^{r\ast }-{\stackrel{ˉ}{g}}_{0}^{r}{\parallel }_{0}={o}_{P}\left(1\right)\\ \parallel {\stackrel{ˉ}{g}}_{n}^{\ast }-\stackrel{ˉ}{g}{\parallel }_{0}^{2}={o}_{P}\left(1/\sqrt{n}\right)\\ \parallel {\stackrel{ˉ}{g}}_{n}^{\ast }-\stackrel{ˉ}{g}{\parallel }_{0}\parallel {\stackrel{ˉ}{Q}}_{n}^{\ast }-\stackrel{ˉ}{Q}{\parallel }_{0}={o}_{P}\left(1/\sqrt{n}\right)\\ \parallel {\stackrel{ˉ}{Q}}_{0,n}^{r}-{\stackrel{ˉ}{Q}}_{0}^{r}{\parallel }_{0}\parallel {\stackrel{ˉ}{g}}_{n}^{\ast }-\stackrel{ˉ}{g}{\parallel }_{0}={o}_{P}\left(1/\sqrt{n}\right)\\ \parallel {\stackrel{ˉ}{Q}}_{n}^{r\ast }-{\stackrel{ˉ}{Q}}_{0}^{r}{\parallel }_{0}\parallel {\stackrel{ˉ}{g}}_{n}^{\ast }-\stackrel{ˉ}{g}{\parallel }_{0}={o}_{P}\left(1/\sqrt{n}\right)\\ \parallel {\stackrel{ˉ}{g}}_{n}^{r\ast }-{\stackrel{ˉ}{g}}_{0}^{r}{\parallel }_{0}\parallel {\stackrel{ˉ}{Q}}_{n}^{\ast }-\stackrel{ˉ}{Q}{\parallel }_{0}={o}_{P}\left(1/\sqrt{n}\right)\\ \parallel {\stackrel{ˉ}{g}}_{0,n}^{r}-{\stackrel{ˉ}{g}}_{0}^{r}{\parallel }_{0}\parallel {\stackrel{ˉ}{Q}}_{n}^{\ast }-\stackrel{ˉ}{Q}{\parallel }_{0}={o}_{P}\left(1/\sqrt{n}\right).\end{array}$

Then, $\mathrm{\Psi }\left({Q}_{n}^{\ast }\right)-{\mathrm{\psi }}_{0}=\left({P}_{n}-{P}_{0}\right)IC\left({P}_{0}\right)+{o}_{P}\left(1/\sqrt{n}\right),$

where $IC\left({P}_{0}\right)={D}^{\ast }\left(Q,g\right)-{D}_{A}\left({\stackrel{ˉ}{Q}}_{0}^{r},\stackrel{ˉ}{g}\right)-{D}_{Y}\left(\stackrel{ˉ}{Q},{\stackrel{ˉ}{g}}_{0}^{r},\stackrel{ˉ}{g}\right).$

Note that consistent estimation of the influence curve $IC\left({P}_{0}\right)$ relies on consistency of ${\stackrel{ˉ}{g}}_{n}^{r\ast },{\stackrel{ˉ}{Q}}_{n}^{r\ast }$ as estimators of ${\stackrel{ˉ}{g}}_{0}^{r},{\stackrel{ˉ}{Q}}_{0}^{r}$, and estimators ${\stackrel{ˉ}{Q}}_{n}^{\ast },{\stackrel{ˉ}{g}}_{n}^{\ast }$ converging to a $\stackrel{ˉ}{Q},\stackrel{ˉ}{g}$ for which either $\stackrel{ˉ}{Q}={\stackrel{ˉ}{Q}}_{0}$ or $\stackrel{ˉ}{g}={\stackrel{ˉ}{g}}_{0}$. These estimators imply an estimated influence curve $I{C}_{n}$. An asymptotic 0.95-confidence interval is given by ${\mathrm{\psi }}_{n}^{\ast }±1.96{\mathrm{\sigma }}_{n}/\sqrt{n}$, where ${\mathrm{\sigma }}_{n}^{2}={P}_{n}I{C}_{n}^{2}$.
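Given the estimated influence curve values $I{C}_{n}\left({O}_{i}\right)$, $i=1,\dots ,n$, the confidence interval above is one line of arithmetic. A minimal sketch (the point estimate and influence-curve values below are hypothetical placeholders):

```python
import math

def wald_ci(psi_n, ic_values, z=1.96):
    """Asymptotic 0.95-confidence interval psi_n +/- 1.96 * sigma_n / sqrt(n),
    where sigma_n^2 = P_n IC_n^2 is the empirical mean of the squared
    estimated influence curve over the n observations."""
    n = len(ic_values)
    sigma2_n = sum(v * v for v in ic_values) / n  # P_n IC_n^2
    half_width = z * math.sqrt(sigma2_n / n)
    return psi_n - half_width, psi_n + half_width

# Hypothetical example: point estimate 0.3 with (roughly mean-zero) IC values.
lo, hi = wald_ci(0.3, [0.5, -0.5, 1.0, -1.0])
```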

If $\stackrel{ˉ}{g}={\stackrel{ˉ}{g}}_{0}$, then ${E}_{{P}_{0}}\left(A|\stackrel{ˉ}{g},\stackrel{ˉ}{Q}\right)=\stackrel{ˉ}{g}$, and therefore ${D}_{Y}\left(\stackrel{ˉ}{Q},{\stackrel{ˉ}{g}}_{0}^{r},\stackrel{ˉ}{g}\right)=0$ for all $\stackrel{ˉ}{Q}$. If $\stackrel{ˉ}{Q}={\stackrel{ˉ}{Q}}_{0}$, then it follows that ${\stackrel{ˉ}{Q}}_{0}^{r}=0$, and thus that ${D}_{A}\left({\stackrel{ˉ}{Q}}_{0}^{r},\stackrel{ˉ}{g}\right)=0$ for all $\stackrel{ˉ}{g}$. In particular, if both $\stackrel{ˉ}{g}={\stackrel{ˉ}{g}}_{0}$ and $\stackrel{ˉ}{Q}={\stackrel{ˉ}{Q}}_{0}$, then $IC\left({P}_{0}\right)={D}^{\ast }\left({Q}_{0},{g}_{0}\right)$. We also note that if $\stackrel{ˉ}{g}\ne {\stackrel{ˉ}{g}}_{0}$, but $\stackrel{ˉ}{g}$ is a true conditional distribution of A, given some function ${W}^{r}$ of W for which $\stackrel{ˉ}{Q}\left(W\right)$ is only a function of ${W}^{r}$, then it follows that ${E}_{{P}_{0}}\left(A|\stackrel{ˉ}{g},\stackrel{ˉ}{Q}\right)=\stackrel{ˉ}{g}$ and thus ${D}_{Y}=0$.

As shown in the final remark of the Appendix, the condition of Theorem 3 that either $g={g}_{0}$ or $\stackrel{ˉ}{Q}={\stackrel{ˉ}{Q}}_{0}$ can be weakened to $\left(\stackrel{ˉ}{g},\stackrel{ˉ}{Q}\right)$ having to satisfy ${P}_{0}\left(\stackrel{ˉ}{Q}-{\stackrel{ˉ}{Q}}_{0}\right)\left(\stackrel{ˉ}{g}-{\stackrel{ˉ}{g}}_{0}\right)/\stackrel{ˉ}{g}=0$, allowing for the analysis of collaborative double robust TMLE, as discussed in the next section. However, as shown in the next section, if one arranges in the TMLE algorithm that ${\stackrel{ˉ}{g}}_{n}^{\ast }={\stackrel{ˉ}{g}}_{n}^{r\ast }$ (i.e. ${\stackrel{ˉ}{g}}_{n}^{\ast }$ already non-parametrically adjusts for ${\stackrel{ˉ}{Q}}_{n}^{\ast }$), then there is no need for the extra targeting in ${\stackrel{ˉ}{Q}}_{n}^{k}$, and the influence curve will be ${D}^{\ast }\left(Q,g\right)-{D}_{A}\left({\stackrel{ˉ}{Q}}_{0}^{r},\stackrel{ˉ}{g}\right)$.

## 5 Collaborative double robust inference for C-TMLE when using super-learning to fit outcome regression and reduced treatment mechanism

We first review the theoretical underpinning for collaborative estimation of nuisance parameters, in this case, the outcome regression and treatment mechanism. Subsequently, we explain that the desired collaborative estimation can be achieved by applying the previously established template for construction of a C-TMLE to a TMLE that solves certain estimating equations when given an initial estimator of $\left({Q}_{0},{g}_{0}\right)$. This C-TMLE template involves (1) creating a sequence of TMLEs $\left(\left({g}_{n,k}^{\ast },{Q}_{n,k}^{\ast }\right):k=1,\dots ,K\right)$ constructed in such a manner that the empirical risk of both ${g}_{n,k}^{\ast }$ and ${Q}_{n,k}^{\ast }$ is decreasing in k, and (2) using cross-validation to select the k for which ${Q}_{n,k}^{\ast }$ is the best fit of ${Q}_{0}$. Next, we present this TMLE that maps an initial estimator of $\left({Q}_{0},{g}_{0}\right)$ into targeted estimators solving the desired estimating equations and establish its asymptotic linearity under appropriate conditions, including that the initial estimator of $\left({Q}_{0},{g}_{0}\right)$ is collaboratively consistent. Finally, we present a concrete C-TMLE algorithm that uses this TMLE algorithm as its basis: a C-TMLE is still a TMLE, just one based on a data-adaptively selected initial estimator that is collaboratively consistent, so the same theorem applies to the C-TMLE.

## 5.1 Motivation and theoretical underpinning of collaborative double robust estimation of nuisance parameters

We note that ${P}_{0}{D}^{\ast }\left(Q,g\right)={P}_{0}\left\{\frac{A}{\stackrel{ˉ}{g}}\left({\stackrel{ˉ}{Q}}_{0}-\stackrel{ˉ}{Q}\right)+\stackrel{ˉ}{Q}-\mathrm{\Psi }\left(Q\right)\right\}$. If ${Q}_{W}={Q}_{W,0}$, this reduces to ${P}_{0}{D}^{\ast }\left(Q,g\right)={P}_{0}\left\{\frac{A}{\stackrel{ˉ}{g}}\left({\stackrel{ˉ}{Q}}_{0}-\stackrel{ˉ}{Q}\right)\right\}=\mathrm{\Psi }\left({Q}_{0}\right)-\mathrm{\Psi }\left(Q\right)+{P}_{0}\left\{\frac{A-\stackrel{ˉ}{g}}{\stackrel{ˉ}{g}}\left({\stackrel{ˉ}{Q}}_{0}-\stackrel{ˉ}{Q}\right)\right\}.$

Let $\mathcal{G}$ be the class of all possible distributions of A, given W, and let ${g}_{0}\in \mathcal{G}$ be the true conditional distribution of A, given W. We define the set $\mathcal{G}\left({P}_{0},\stackrel{ˉ}{Q}\right)\equiv \left\{g\in \mathcal{G}:0={P}_{0}\left(A-\stackrel{ˉ}{g}\right)\frac{{\stackrel{ˉ}{Q}}_{0}-\stackrel{ˉ}{Q}}{\stackrel{ˉ}{g}}\right\}$. For any $g\in \mathcal{G}\left({P}_{0},\stackrel{ˉ}{Q}\right)$, we have ${P}_{0}{D}^{\ast }\left(Q,g\right)=\mathrm{\Psi }\left({Q}_{0}\right)-\mathrm{\Psi }\left(Q\right)$. Suppose we have an estimator $\left({Q}_{n}^{\ast },{g}_{n}^{\ast }\right)$ satisfying ${P}_{n}{D}^{\ast }\left({Q}_{n}^{\ast },{g}_{n}^{\ast }\right)=0$ and converging to a $\left(Q,g\right)$ with $g\in \mathcal{G}\left({P}_{0},\stackrel{ˉ}{Q}\right)$. Then it follows that ${P}_{0}{D}^{\ast }\left(Q,g\right)=0$, while also ${P}_{0}{D}^{\ast }\left(Q,g\right)=\mathrm{\Psi }\left({Q}_{0}\right)-\mathrm{\Psi }\left(Q\right)$, thereby establishing that $\mathrm{\Psi }\left({Q}_{n}^{\ast }\right)$ is a consistent estimator of $\mathrm{\Psi }\left({Q}_{0}\right)$. Let us state this crucial result as a lemma.

Lemma 1 (van der Laan and Gruber [33]) If ${P}_{0}\left(A-\stackrel{ˉ}{g}\right)\left({\stackrel{ˉ}{Q}}_{0}-\stackrel{ˉ}{Q}\right)/\stackrel{ˉ}{g}=0$, and ${P}_{0}{D}^{\ast }\left(Q,g\right)=0$, then $\mathrm{\Psi }\left(Q\right)={\mathrm{\psi }}_{0}$. More generally, ${P}_{0}{D}^{\ast }\left(Q,g\right)=\mathrm{\Psi }\left({Q}_{0}\right)-\mathrm{\Psi }\left(Q\right)+{P}_{0}\left(A-\stackrel{ˉ}{g}\right)\left({\stackrel{ˉ}{Q}}_{0}-\stackrel{ˉ}{Q}\right)/\stackrel{ˉ}{g}$.

We note that $\mathcal{G}\left({P}_{0},\stackrel{ˉ}{Q}\right)$ contains the true conditional distributions ${g}_{0}^{r}$ of A, given ${W}^{r}$, for which $\left(\stackrel{ˉ}{Q}-{\stackrel{ˉ}{Q}}_{0}\right)/{\stackrel{ˉ}{g}}_{0}^{r}$ is a function of ${W}^{r}$, i.e. for which $\stackrel{ˉ}{Q}-{\stackrel{ˉ}{Q}}_{0}$ only depends on W through ${W}^{r}$. We refer to such distributions as reduced treatment mechanisms. However, it contains many more conditional distributions since any conditional distribution g for which $\left(A-\stackrel{ˉ}{g}\left(W\right)\right)$ is orthogonal to $\left({\stackrel{ˉ}{Q}}_{0}-\stackrel{ˉ}{Q}\right)/\stackrel{ˉ}{g}$ in ${L}_{0}^{2}\left({P}_{0}\right)$ is an element of $\mathcal{G}\left({P}_{0},\stackrel{ˉ}{Q}\right)$. We refer to van der Laan and Gruber [33] and Gruber and van der Laan [29] for the introduction and general notion of collaborative double robustness.
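Lemma 1 is an algebraic identity, so it can be verified exactly on a small discrete distribution. The sketch below (all numbers are hypothetical, chosen only for illustration) also checks that the reduced treatment mechanism ${E}_{{P}_{0}}\left(A|{W}^{r}\right)$ lies in $\mathcal{G}\left({P}_{0},\stackrel{ˉ}{Q}\right)$ when $\stackrel{ˉ}{Q}-{\stackrel{ˉ}{Q}}_{0}$ depends on W only through ${W}^{r}$:

```python
# Exact check of Lemma 1 on a discrete toy distribution: W in {0,1,2},
# gbar0(w) = P(A=1|W=w) the true propensity, qbar0(w) = E(Y|A=1,W=w)
# the true outcome regression, and qbar a misspecified fit whose error
# qbar0 - qbar depends on W only through Wr = 1{W >= 1}.
p_w = [0.2, 0.3, 0.5]
gbar0 = [0.4, 0.6, 0.8]
qbar0 = [0.1, 0.5, 0.7]
qbar = [0.3, 0.4, 0.6]  # qbar0 - qbar is constant on {W=1, W=2}

# Reduced propensity gbar_r(w) = E(A | Wr), a candidate member of G(P0, qbar).
p_wr1 = p_w[1] + p_w[2]
g_wr1 = (p_w[1] * gbar0[1] + p_w[2] * gbar0[2]) / p_wr1
gbar_r = [gbar0[0], g_wr1, g_wr1]

psi0 = sum(p * q for p, q in zip(p_w, qbar0))  # Psi(Q0)
psi = sum(p * q for p, q in zip(p_w, qbar))    # Psi(Q), with Q_W = Q_{W,0}

# P0 D*(Q,g) reduces to sum_w p(w) (gbar0/g)(qbar0 - qbar) when Q_W = Q_{W,0}
lhs = sum(p * g0 / g * (q0 - q)
          for p, g0, g, q0, q in zip(p_w, gbar0, gbar_r, qbar0, qbar))
# remainder term P0 (A - g)(qbar0 - qbar)/g
rem = sum(p * (g0 - g) * (q0 - q) / g
          for p, g0, g, q0, q in zip(p_w, gbar0, gbar_r, qbar0, qbar))
```

Here the remainder vanishes, so $P_0 D^{\ast}(Q, g) = \Psi(Q_0) - \Psi(Q)$ exactly, even though both $\stackrel{ˉ}{Q}$ and $g$ are misspecified.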

## 5.2 C-TMLE

The general C-TMLE introduced in van der Laan and Gruber [33] provides a template for construction of a TMLE $\left({g}_{n}^{\ast },{\stackrel{ˉ}{Q}}_{n}^{\ast }\right)$ satisfying ${P}_{n}{D}^{\ast }\left({Q}_{n}^{\ast },{g}_{n}^{\ast }\right)=0$ and converging to a $\left(g,\stackrel{ˉ}{Q}\right)$ with $g\in \mathcal{G}\left({P}_{0},\stackrel{ˉ}{Q}\right)$ so that ${P}_{0}{D}^{\ast }\left(Q,g\right)=0$ and thereby $\mathrm{\Psi }\left(Q\right)-\mathrm{\Psi }\left({Q}_{0}\right)=0$. Thus C-TMLE provides a template for construction of targeted MLEs that exploit the collaborative double robustness of TMLEs in the sense that a TMLE will be consistent as long as $\left({Q}_{n}^{\ast },{g}_{n}^{\ast }\right)$ converges to a $\left(Q,g\right)$ for which $g\in \mathcal{G}\left({P}_{0},\stackrel{ˉ}{Q}\right)$. The goal is not to estimate the true treatment mechanism, but instead to construct a ${g}_{n}^{\ast }$ that converges to a conditional distribution given a reduction ${W}^{r}$ of W that is an element of $\mathcal{G}\left({P}_{0},\stackrel{ˉ}{Q}\right)$. We could state that, just as the propensity score provides a sufficient dimension reduction for the outcome regression, so, given $\stackrel{ˉ}{Q}$, does $\stackrel{ˉ}{Q}-{\stackrel{ˉ}{Q}}_{0}$ provide a sufficient dimension reduction for the propensity score regression in the TMLE. The current literature appears to agree that propensity score estimators are best evaluated with respect to their effect on estimation of the causal effect of interest, not by metrics such as likelihoods or classification rates [45–48], and the above-stated general collaborative double robustness provides a formal foundation for such claims.

The general C-TMLE has been implemented and applied to point treatment and longitudinal data [20, 29–33, 35]. A C-TMLE algorithm relies on a TMLE algorithm that maps an initial $\left({\stackrel{ˉ}{Q}}_{n},{g}_{n}\right)$ into a TMLE $\left({\stackrel{ˉ}{Q}}_{n}^{\ast },{g}_{n}^{\ast }\right)$ and uses this algorithm in combination with a targeted variable selection algorithm for generating candidate models for the propensity score to generate a sequence of candidate TMLEs $\left({g}_{n}^{\ast k},{\stackrel{ˉ}{Q}}_{n}^{\ast k}\right)$, increasingly non-parametric in k, and finally uses cross-validation to select the best TMLE among these candidate estimators of ${\stackrel{ˉ}{Q}}_{0}$.

## 5.3 A TMLE that allows for collaborative double robust inference

Our next theorem presents a TMLE algorithm and a corresponding influence curve under the assumption that the propensity score correctly adjusts for the possibly misspecified $\stackrel{ˉ}{Q}$ and ${\stackrel{ˉ}{Q}}_{0}-\stackrel{ˉ}{Q}={E}_{0}\left(Y-\stackrel{ˉ}{Q}\left(W\right)|A=1,W\right)$. The presented TMLE algorithm already arranges that this TMLE indeed non-parametrically adjusts for $\stackrel{ˉ}{Q}$. In the next subsection, we will present an actual C-TMLE algorithm that generates a TMLE for which the propensity score is targeted to adjust for $\stackrel{ˉ}{Q}-{\stackrel{ˉ}{Q}}_{0}$, so that this theorem can be applied.

Theorem 4

Definitions: For any given $\stackrel{ˉ}{g},\stackrel{ˉ}{Q}$, let ${\stackrel{ˉ}{g}}_{n}^{r}\left(\stackrel{ˉ}{g},\stackrel{ˉ}{Q}\right)$ and ${\stackrel{ˉ}{Q}}_{n}^{r}\left(\stackrel{ˉ}{g},\stackrel{ˉ}{Q}\right)$ be consistent estimators of ${\stackrel{ˉ}{g}}_{0}^{r}\left(\stackrel{ˉ}{g},\stackrel{ˉ}{Q}\right)={E}_{{P}_{0}}\left(A|\stackrel{ˉ}{g},\stackrel{ˉ}{Q}\right)$ and ${\stackrel{ˉ}{Q}}_{0}^{r}\left(\stackrel{ˉ}{g},\stackrel{ˉ}{Q}\right)={E}_{{P}_{0}}\left(Y-\stackrel{ˉ}{Q}|A=1,\stackrel{ˉ}{g}\right)$, respectively (e.g. using a super-learner or other non-parametric adaptive regression algorithm). Let ${\stackrel{ˉ}{Q}}_{n}^{r\ast }={\stackrel{ˉ}{Q}}_{n}^{r}\left({\stackrel{ˉ}{g}}_{n}^{\ast },{\stackrel{ˉ}{Q}}_{n}^{\ast }\right)$ and ${\stackrel{ˉ}{g}}_{n}^{r\ast }={\stackrel{ˉ}{g}}_{n}^{r}\left({\stackrel{ˉ}{g}}_{n}^{\ast },{\stackrel{ˉ}{Q}}_{n}^{\ast }\right)$ denote these estimators applied to the TMLE $\left({\stackrel{ˉ}{g}}_{n}^{\ast },{\stackrel{ˉ}{Q}}_{n}^{\ast }\right)$ defined below.

“Score” equations the TMLE should solve: Below, we describe an iterative TMLE algorithm that results in estimators ${\stackrel{ˉ}{g}}_{n}^{r\ast },{\stackrel{ˉ}{Q}}_{n}^{r\ast }$, ${g}_{n}^{\ast }$, ${\stackrel{ˉ}{Q}}_{n}^{\ast }$ that solve the following equations: $0={P}_{n}{D}^{\ast }\left({Q}_{n}^{\ast },{g}_{n}^{\ast }\right)$ $0={P}_{n}{D}_{A}\left({\stackrel{ˉ}{Q}}_{n}^{r\ast },{\stackrel{ˉ}{g}}_{n}^{\ast }\right).$(3)

Iterative targeted MLE of ${\mathrm{\psi }}_{0}$:

Initialization: Let ${\stackrel{ˉ}{Q}}_{n}$ and ${g}_{n}$ be initial estimators (with ${g}_{n}$, e.g., aiming to adjust for ${\stackrel{ˉ}{Q}}_{n}-{\stackrel{ˉ}{Q}}_{0}$).

Let ${\stackrel{ˉ}{Q}}_{n}^{0}={\stackrel{ˉ}{Q}}_{n}$, ${\stackrel{ˉ}{g}}_{n}^{0}={\stackrel{ˉ}{g}}_{n}^{r}\left({\stackrel{ˉ}{g}}_{n},{\stackrel{ˉ}{Q}}_{n}^{0}\right)$, and ${\stackrel{ˉ}{Q}}_{n}^{r0}={\stackrel{ˉ}{Q}}_{n}^{r}\left({\stackrel{ˉ}{g}}_{n}^{0},{\stackrel{ˉ}{Q}}_{n}^{0}\right)$.

Updating step: Consider the submodel $\mathrm{L}\mathrm{o}\mathrm{g}\mathrm{i}\mathrm{t}\text{\hspace{0.17em}}{\stackrel{ˉ}{g}}_{n}^{k}\left(\epsilon \right)=\mathrm{L}\mathrm{o}\mathrm{g}\mathrm{i}\mathrm{t}\text{\hspace{0.17em}}{\stackrel{ˉ}{g}}_{n}^{k}+\epsilon {H}_{A}\left({\stackrel{ˉ}{Q}}_{n}^{r,k},{\stackrel{ˉ}{g}}_{n}^{k}\right)$, and fit $\epsilon$ with the MLE ${\epsilon }_{A,n}=\arg {\max }_{\epsilon }{P}_{n}\log {g}_{n}^{k}\left(\epsilon \right).$

Define the submodel $\mathrm{L}\mathrm{o}\mathrm{g}\mathrm{i}\mathrm{t}\text{\hspace{0.17em}}{\stackrel{ˉ}{Q}}_{n}^{k}\left(\epsilon \right)=\mathrm{L}\mathrm{o}\mathrm{g}\mathrm{i}\mathrm{t}\text{\hspace{0.17em}}{\stackrel{ˉ}{Q}}_{n}^{k}+\epsilon {H}_{Y}\left({g}_{n}^{k}\right)$ and let $L\left(\stackrel{ˉ}{Q}\right)$ be the quasi-log-likelihood loss function for ${\stackrel{ˉ}{Q}}_{0}$. Let ${\epsilon }_{Y,n}=\arg {\min }_{\epsilon }{P}_{n}L\left({\stackrel{ˉ}{Q}}_{n}^{k}\left(\epsilon \right)\right)$ be the MLE. Let ${\stackrel{ˉ}{Q}}_{n}^{k+1}={\stackrel{ˉ}{Q}}_{n}^{k}\left({\epsilon }_{Y,n}\right)$, ${\stackrel{ˉ}{g}}_{n}^{k+1}={\stackrel{ˉ}{g}}_{n}^{r}\left({\stackrel{ˉ}{g}}_{n}^{k}\left({\epsilon }_{A,n}\right),{\stackrel{ˉ}{Q}}_{n}^{k+1}\right)$, and ${\stackrel{ˉ}{Q}}_{n}^{rk+1}={\stackrel{ˉ}{Q}}_{n}^{r}\left({\stackrel{ˉ}{g}}_{n}^{k+1},{\stackrel{ˉ}{Q}}_{n}^{k+1}\right)$.

Iterating till convergence: Now, set $k←k+1$ and iterate this updating process mapping a $\left({g}_{n}^{k},{\stackrel{ˉ}{Q}}_{n}^{k},{\stackrel{ˉ}{Q}}_{n}^{rk}\right)$ into $\left({g}_{n}^{k+1},{\stackrel{ˉ}{Q}}_{n}^{k+1},{\stackrel{ˉ}{Q}}_{n}^{rk+1}\right)$ till convergence or till large enough K so that the following estimating equations are solved up till an ${o}_{P}\left(1/\sqrt{n}\right)$-term: $\begin{array}{c}{o}_{P}\left(1/\sqrt{n}\right)={P}_{n}{D}^{\ast }\left({Q}_{n}^{K},{g}_{n}^{K}\right)\\ {o}_{P}\left(1/\sqrt{n}\right)={P}_{n}{D}_{A}\left({\stackrel{ˉ}{Q}}_{n}^{rK},{\stackrel{ˉ}{g}}_{n}^{K}\right).\end{array}$

Final substitution estimator: Denote these limits (in k) of this iterative procedure with ${g}_{n}^{\ast }$, ${\stackrel{ˉ}{Q}}_{n}^{\ast }$, ${\stackrel{ˉ}{Q}}_{n}^{r\ast }$. Let ${Q}_{n}^{\ast }=\left({Q}_{W,n},{\stackrel{ˉ}{Q}}_{n}^{\ast }\right)$, where ${Q}_{W,n}$ is the empirical distribution estimator of ${Q}_{W,0}$. The TMLE of ${\mathrm{\psi }}_{0}$ is defined as $\mathrm{\Psi }\left({Q}_{n}^{\ast }\right)$.

Assumption on limits $\stackrel{ˉ}{g},\stackrel{ˉ}{Q}$ of ${\stackrel{ˉ}{g}}_{n}^{\ast },{\stackrel{ˉ}{Q}}_{n}^{\ast }$: Assume that $\left({\stackrel{ˉ}{g}}_{n}^{\ast },{\stackrel{ˉ}{Q}}_{n}^{\ast }\right)$ is consistent for $\left(\stackrel{ˉ}{g},\stackrel{ˉ}{Q}\right)$ w.r.t. $\parallel \cdot {\parallel }_{0}$-norm, where $\stackrel{ˉ}{g}\left(W\right)={E}_{{P}_{0}}\left(A|{W}^{r}\right)$ for some function ${W}^{r}\left(W\right)$ of W for which $\stackrel{ˉ}{Q}$ only depends on W through ${W}^{r}$, and assume that ${P}_{0}\frac{\stackrel{ˉ}{Q}-{\stackrel{ˉ}{Q}}_{0}}{\stackrel{ˉ}{g}}\left(A-\stackrel{ˉ}{g}\right)=0$, where the latter holds, in particular, if $\stackrel{ˉ}{Q}-{\stackrel{ˉ}{Q}}_{0}$ only depends on W through ${W}^{r}$ (e.g. ${\stackrel{ˉ}{g}}_{n}^{\ast }$ involves non-parametric adjustment by $\stackrel{ˉ}{Q},{\stackrel{ˉ}{Q}}_{0}$). As a consequence, we have $\stackrel{ˉ}{g}={\stackrel{ˉ}{g}}_{0}^{r}$.

Empirical process condition: Assume that ${D}^{\ast }\left({Q}_{n}^{\ast },{g}_{n}^{\ast }\right)$, ${D}_{A}\left({\stackrel{ˉ}{Q}}_{n}^{r\ast },{\stackrel{ˉ}{g}}_{n}^{\ast }\right)$ fall in a ${P}_{0}$-Donsker class with probability tending to 1 as $n\to \mathrm{\infty }$.

Negligibility of second-order terms: Define ${\stackrel{ˉ}{Q}}_{0,n}^{r}\equiv {E}_{{P}_{0}}\left(Y-\stackrel{ˉ}{Q}|A=1,\stackrel{ˉ}{g},{\stackrel{ˉ}{g}}_{n}^{\ast }\right).$

Assume that the following conditions hold for each of the following possible definitions of ${\stackrel{ˉ}{g}}_{0,n}^{r}$: ${E}_{{P}_{0}}\left(A|\stackrel{ˉ}{g},\stackrel{ˉ}{Q},{\stackrel{ˉ}{Q}}_{n}^{\ast }\right)$, ${E}_{{P}_{0}}\left(A|\stackrel{ˉ}{g},{\stackrel{ˉ}{g}}_{n}^{\ast }\right)$, ${E}_{{P}_{0}}\left(A|\stackrel{ˉ}{g},{\stackrel{ˉ}{Q}}_{n}^{r\ast },{\stackrel{ˉ}{g}}_{n}^{\ast }\right)$. Note that ${\stackrel{ˉ}{g}}_{0}^{r}={E}_{0}\left(A|\stackrel{ˉ}{g},\stackrel{ˉ}{Q}\right)={E}_{0}\left(A|\stackrel{ˉ}{g}\right)=\stackrel{ˉ}{g}$ is the limit of each of these choices for ${\stackrel{ˉ}{g}}_{0,n}^{r}$.

We assume $\stackrel{ˉ}{g},{\stackrel{ˉ}{g}}_{n}^{\ast }$ are bounded below by some $\mathrm{\delta }>0$ with probability tending to one, and $\begin{array}{c}\parallel {\stackrel{ˉ}{Q}}_{n}^{r\ast }-{\stackrel{ˉ}{Q}}_{0}^{r}{\parallel }_{0}={o}_{P}\left(1\right)\\ \parallel {\stackrel{ˉ}{g}}_{n}^{\ast }-\stackrel{ˉ}{g}{\parallel }_{0}^{2}={o}_{P}\left(1/\sqrt{n}\right)\\ \parallel {\stackrel{ˉ}{Q}}_{0,n}^{r}-{\stackrel{ˉ}{Q}}_{0}^{r}{\parallel }_{0}\parallel {\stackrel{ˉ}{g}}_{n}^{\ast }-\stackrel{ˉ}{g}{\parallel }_{0}={o}_{P}\left(1/\sqrt{n}\right)\\ \parallel {\stackrel{ˉ}{Q}}_{n}^{\ast }-\stackrel{ˉ}{Q}{\parallel }_{0}\parallel {\stackrel{ˉ}{g}}_{n}^{\ast }-\stackrel{ˉ}{g}{\parallel }_{0}={o}_{P}\left(1/\sqrt{n}\right)\\ \parallel {\stackrel{ˉ}{Q}}_{n}^{r\ast }-{\stackrel{ˉ}{Q}}_{0}^{r}{\parallel }_{0}\parallel {\stackrel{ˉ}{g}}_{n}^{\ast }-\stackrel{ˉ}{g}{\parallel }_{0}={o}_{P}\left(1/\sqrt{n}\right)\\ \parallel {\stackrel{ˉ}{g}}_{0,n}^{r}-{\stackrel{ˉ}{g}}_{0}^{r}{\parallel }_{0}\parallel {\stackrel{ˉ}{Q}}_{n}^{\ast }-\stackrel{ˉ}{Q}{\parallel }_{0}={o}_{P}\left(1/\sqrt{n}\right)\\ \parallel {\stackrel{ˉ}{g}}_{0,n}^{r}-{\stackrel{ˉ}{g}}_{0}^{r}{\parallel }_{0}\parallel {\stackrel{ˉ}{g}}_{n}^{\ast }-\stackrel{ˉ}{g}{\parallel }_{0}={o}_{P}\left(1/\sqrt{n}\right)\\ \parallel {\stackrel{ˉ}{g}}_{0,n}^{r}-{\stackrel{ˉ}{g}}_{0}^{r}{\parallel }_{0}\parallel {\stackrel{ˉ}{Q}}_{n}^{r\ast }-{\stackrel{ˉ}{Q}}_{0}^{r}{\parallel }_{0}={o}_{P}\left(1/\sqrt{n}\right).\end{array}$

Then, $\mathrm{\Psi }\left({Q}_{n}^{\ast }\right)-{\mathrm{\psi }}_{0}=\left({P}_{n}-{P}_{0}\right)IC\left({P}_{0}\right)+{o}_{P}\left(1/\sqrt{n}\right),$

where $IC\left({P}_{0}\right)={D}^{\ast }\left(Q,g\right)-{D}_{A}\left({\stackrel{ˉ}{Q}}_{0}^{r},\stackrel{ˉ}{g}\right).$

Thus, consistency of this TMLE relies upon the consistency of ${\stackrel{ˉ}{Q}}_{n}^{r\ast }$ as an estimator of ${\stackrel{ˉ}{Q}}_{0}^{r}$, and upon the estimator $\left({\stackrel{ˉ}{Q}}_{n}^{\ast },{\stackrel{ˉ}{g}}_{n}^{\ast }\right)$ converging to a $\left(\stackrel{ˉ}{Q},\stackrel{ˉ}{g}\right)$ for which $\stackrel{ˉ}{g}$ equals a true conditional mean of A, given ${W}^{r}$, and ${\stackrel{ˉ}{Q}}_{0}-\stackrel{ˉ}{Q},\stackrel{ˉ}{Q}$ only depend on W through ${W}^{r}$. Note that ${\stackrel{ˉ}{Q}}_{0,n}^{r}-{\stackrel{ˉ}{Q}}_{0}^{r}$ depends on how well ${\stackrel{ˉ}{g}}_{n}^{\ast }$ approximates $\stackrel{ˉ}{g}$, while ${\stackrel{ˉ}{Q}}_{n}^{r\ast }-{\stackrel{ˉ}{Q}}_{0}^{r}$ depends on how well $\left({\stackrel{ˉ}{Q}}_{n}^{\ast },{\stackrel{ˉ}{g}}_{n}^{\ast }\right)$ approximates $\left(\stackrel{ˉ}{Q},\stackrel{ˉ}{g}\right)$, beyond the behavior of the non-parametric regression defining ${\stackrel{ˉ}{Q}}_{n}^{r}$. In addition, ${\stackrel{ˉ}{g}}_{0,n}^{r}-{\stackrel{ˉ}{g}}_{0}^{r}$ depends on either how well ${\stackrel{ˉ}{g}}_{n}^{\ast }$ approximates $\stackrel{ˉ}{g}$ or how well ${\stackrel{ˉ}{Q}}_{n}^{\ast }$ approximates $\stackrel{ˉ}{Q}$. As a consequence, each of the second-order terms displayed in the theorem is driven by products of the approximation errors ${\stackrel{ˉ}{g}}_{n}^{\ast }-\stackrel{ˉ}{g}$ and ${\stackrel{ˉ}{Q}}_{n}^{\ast }-\stackrel{ˉ}{Q}$.
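The rate requirement behind such product conditions is simple arithmetic: a product of two nonparametric rates ${n}^{-r}$ and ${n}^{-s}$ is $o\left(1/\sqrt{n}\right)$ exactly when $r+s>1/2$. A tiny illustrative check, not part of the theorem:

```python
# Illustrative rate arithmetic: n^(-r) * n^(-s) = o(n^(-1/2)) iff r + s > 1/2.
# E.g. two cube-root rates suffice, while two n^(-1/4) rates sit exactly
# on the boundary and do not.
def product_is_negligible(r, s):
    """True iff n^(-r) * n^(-s) = o(1/sqrt(n))."""
    return r + s > 0.5

checks = [product_is_negligible(1 / 3, 1 / 3),  # 2/3 > 1/2
          product_is_negligible(1 / 4, 1 / 4),  # exactly 1/2: boundary
          product_is_negligible(1 / 2, 0.0)]    # one factor not shrinking
```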

It is also interesting to note that the algebraic form of the influence curve of this TMLE is identical to the influence curve of the TMLE of Theorem 2 that relied on ${\stackrel{ˉ}{g}}_{n}^{\ast }$ being consistent for ${\stackrel{ˉ}{g}}_{0}$.

## 5.4 A C-TMLE algorithm

The TMLE algorithm presented in Theorem 4 maps an initial estimator $\left({Q}_{n}^{0},{g}_{n}^{0}\right)$ into an updated estimator $\left({Q}_{n}^{\ast },{g}_{n}^{\ast }\right)$ that solves the two estimating equations (3), allowing for statistical inference with known influence curve if the initial estimator $\left({Q}_{n}^{0},{g}_{n}^{0}\right)$ is collaboratively consistent (i.e. the limits of $\left({Q}_{n}^{\ast },{g}_{n}^{\ast }\right)$ satisfy the condition in the theorem). The updating algorithm results in a ${g}_{n}^{\ast }$ that non-parametrically adjusts for ${\stackrel{ˉ}{Q}}_{n}^{\ast }$ itself, and thus, in the limit, for $\stackrel{ˉ}{Q}$. The condition on the limit g was that it should non-parametrically adjust not only for $\stackrel{ˉ}{Q}$ but also for $\stackrel{ˉ}{Q}-{\stackrel{ˉ}{Q}}_{0}$. If the initial estimator ${g}_{n}^{0}$ already adjusted for an approximation of ${\stackrel{ˉ}{Q}}_{n}^{0}-{\stackrel{ˉ}{Q}}_{0}$, for example, if $\left({g}_{n}^{0},{Q}_{n}^{0}\right)$ is already a C-TMLE, then this condition might hold approximately. Nonetheless, we want to present a C-TMLE algorithm that simultaneously fits g in response to $\stackrel{ˉ}{Q}-{\stackrel{ˉ}{Q}}_{0}$, but also carries out the non-parametric adjustment by $\stackrel{ˉ}{Q}$. The latter is normally not part of the C-TMLE algorithm, but we want to enforce this in order to be able to apply Theorem 3 and thereby obtain a known influence curve. We achieve this goal in this subsection by applying the C-TMLE algorithm as presented by van der Laan and Gruber [49] to the particular TMLE algorithm presented in Theorem 4.

First, we compute a set of K univariate covariates ${W}_{1},\dots ,{W}_{K}$, i.e. functions of W, which we will refer to as main terms, even though a term could be an interaction term or a super-learning fit of the regression of A on a subset of the components of W. Let $\mathrm{\Omega }=\left\{{W}_{1},\dots ,{W}_{K}\right\}$ be the full collection of main terms. In the previous subsection, we defined an algorithm that maps an initial $\left(Q,g\right)$ into a TMLE $\left({Q}^{\ast },{g}^{\ast }\right)$. Let $O\text{\hspace{0.17em}}↦L\left(Q\right)\left(O\right)$ be the loss function for ${Q}_{0}$.

The general template of a C-TMLE algorithm is the following: given a TMLE algorithm that maps any initial $\left(Q,g\right)$ into a TMLE $\left({Q}^{\ast },{g}^{\ast }\right)$, the C-TMLE algorithm generates a sequence of increasing sets ${\mathcal{S}}^{k}\subset \mathrm{\Omega }$ of k main terms, where each set ${\mathcal{S}}^{k}$ has an associated estimator ${g}^{k}$ of ${g}_{0}$, and simultaneously it generates a corresponding sequence of ${Q}^{k}$, $k=1,\dots ,K$, where both ${g}^{k}$ and ${Q}^{k}$ are increasingly non-parametric in k. Here increasingly non-parametric means that the empirical mean of the loss function of the fit is decreasing in k. This sequence $\left({g}^{k},{Q}^{k}\right)$ maps into a corresponding sequence of TMLEs $\left({g}^{k\ast },{Q}^{k\ast }\right)$ using the TMLE algorithm presented in Theorem 4. In this variable selection algorithm, the choice of the next main term to add, mapping ${\mathcal{S}}^{k}$ into ${\mathcal{S}}^{k+1}$, is based on how much the TMLE using the g-fit implied by ${\mathcal{S}}^{k+1}$, using ${Q}^{k}$ as initial estimator, improves the fit of the corresponding TMLE ${Q}^{k\ast }$ for ${Q}_{0}$. Cross-validation is used to select k among these candidate TMLEs ${Q}^{k\ast }$, $k=1,\dots ,K$, where the last TMLE ${Q}^{K\ast }$ uses the most aggressive bias reduction by being based on the most non-parametric estimator ${g}^{K}$ implied by $\mathrm{\Omega }$.

In order to present a precise C-TMLE algorithm we will first introduce some notation. For a given subset of main terms $\mathcal{S}\subset \mathrm{\Omega }$, let ${\mathcal{S}}^{c}$ be its complement within $\mathrm{\Omega }$. In the C-TMLE algorithm, we use a forward selection algorithm that augments a given set ${\mathcal{S}}^{k}$ into a next set ${\mathcal{S}}^{k+1}$ obtained by adding the best main term among all main terms in the complement ${\mathcal{S}}^{k,c}$ of ${\mathcal{S}}^{k}$. Each choice $\mathcal{S}$ corresponds with an estimator of ${g}_{0}$. In other words, the algorithm iteratively updates a current estimate ${g}^{k}$ into a new estimate ${g}^{k+1}$, but the criterion for g does not measure how well g fits ${g}_{0}$; it measures how well the TMLE of ${Q}_{0}$ that uses this g (and as initial estimator ${Q}^{k}$) fits ${Q}_{0}$.

Given a set ${\mathcal{S}}^{k}$ and an initial ${g}^{k-1},{Q}^{k-1}$, we define a corresponding ${g}^{k}$ obtained by MLE-fitting of $\mathrm{\beta }$ in the logistic regression working model $\mathrm{L}\mathrm{o}\mathrm{g}\mathrm{i}\mathrm{t}\text{\hspace{0.17em}}{\stackrel{ˉ}{g}}^{k}=\mathrm{L}\mathrm{o}\mathrm{g}\mathrm{i}\mathrm{t}\text{\hspace{0.17em}}{\stackrel{ˉ}{g}}_{0}^{r}\left({\stackrel{ˉ}{g}}^{k-1},{\stackrel{ˉ}{Q}}^{k-1}\right)+\sum _{j\in {\mathcal{S}}^{k}}{\mathrm{\beta }}_{j}{W}_{j},$

where we remind the reader of the definition ${\stackrel{ˉ}{g}}_{0}^{r}\left(\stackrel{ˉ}{g},\stackrel{ˉ}{Q}\right)={E}_{0}\left(A|\stackrel{ˉ}{Q}\left(W\right),\stackrel{ˉ}{g}\left(W\right)\right)$. Thus, this estimator ${g}^{k}$ involves non-parametric adjustment by ${\stackrel{ˉ}{g}}^{k-1},{\stackrel{ˉ}{Q}}^{k-1}$, augmented with a linear regression component implied by ${\mathcal{S}}^{k}$. This function mapping ${\mathcal{S}}^{k},{g}^{k-1},{Q}^{k-1}$ into a fit ${g}^{k}$ will be denoted with $g\left({\mathcal{S}}^{k},{g}^{k-1},{Q}^{k-1}\right)$. This also allows us to define a mapping from $\left({Q}^{k},{\mathcal{S}}^{k},{Q}^{k-1},{g}^{k-1}\right)$ into a TMLE $\left({Q}^{k\ast },{g}^{k\ast }\right)$ defined by the TMLE algorithm of Theorem 4 applied to initial ${Q}^{k}$ and ${g}^{k}=g\left({\mathcal{S}}^{k},{g}^{k-1},{Q}^{k-1}\right)$. We will denote this mapping into ${Q}^{k\ast }$ with $\mathrm{T}\mathrm{M}\mathrm{L}\mathrm{E}\left({Q}^{k},{\mathcal{S}}^{k},{Q}^{k-1},{g}^{k-1}\right)$.
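Fitting $\mathrm{\beta }$ in this working model is an ordinary logistic regression in which the intercept is replaced by a fixed, observation-specific offset $\mathrm{L}\mathrm{o}\mathrm{g}\mathrm{i}\mathrm{t}\text{\hspace{0.17em}}{\stackrel{ˉ}{g}}_{0}^{r}\left({\stackrel{ˉ}{g}}^{k-1},{\stackrel{ˉ}{Q}}^{k-1}\right)$. A minimal Newton-Raphson sketch; the offset vector and main-term matrix are hypothetical inputs that would come from the fits described above:

```python
import numpy as np

def fit_logistic_with_offset(a, x, offset, n_iter=25):
    """MLE of beta in Logit g(W) = offset(W) + x @ beta via Newton-Raphson,
    i.e. a logistic regression of the binary treatment a on the main-term
    columns of x, with a fixed per-observation offset in place of an
    intercept. a: (n,), x: (n, p), offset: (n,). Returns beta: (p,)."""
    beta = np.zeros(x.shape[1])
    for _ in range(n_iter):
        p_hat = 1.0 / (1.0 + np.exp(-(offset + x @ beta)))
        grad = x.T @ (a - p_hat)            # score vector
        w = p_hat * (1.0 - p_hat)
        hess = x.T @ (x * w[:, None])       # Fisher information
        beta += np.linalg.solve(hess, grad)
    return beta
```

Since the log-likelihood is concave in $\mathrm{\beta }$, the Newton iterates converge to the unique MLE whenever it exists.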

The C-TMLE algorithm defined below generates a sequence $\left({Q}^{k},{\mathcal{S}}^{k}\right)$ and thereby corresponding TMLEs $\left({Q}^{k\ast },{g}^{k\ast }\right)$, $k=0,\dots ,K$, where ${Q}^{k}$ represents an initial estimate, ${\mathcal{S}}^{k}$ a subset of main terms that defines ${g}^{k}$, and ${Q}^{k\ast },{g}^{k\ast }$ the corresponding TMLE that starts with $\left({Q}^{k},{g}^{k}\right)$. These TMLEs ${Q}^{k\ast }$ represent subsequent updates of the initial estimator ${Q}^{0}$. The corresponding main term set ${\mathcal{S}}^{k}$ that defines ${g}^{k}$ in this k-specific TMLE increases in k, one unit at a time: ${\mathcal{S}}^{0}$ is empty, $|{\mathcal{S}}^{k+1}|=|{\mathcal{S}}^{k}|+1$, ${\mathcal{S}}^{K}=\mathrm{\Omega }$. The C-TMLE uses cross-validation to select k, and thereby to select the TMLE ${Q}^{k\ast }$ that yields the best fit of ${Q}_{0}$ among the $K+1$ k-specific TMLEs $\left({Q}^{k\ast }:k=0,\dots ,K\right)$ that are increasingly aggressive in their bias-reduction effort. This C-TMLE algorithm is defined as follows and uses the same format as presented in Wang et al. [35]:

Initiate algorithm: Set initial TMLE. Let $k=0$, let ${Q}^{0}$, ${g}^{\mathrm{s}\mathrm{t}\mathrm{a}\mathrm{r}\mathrm{t}}$ be initial estimates of ${Q}_{0}$, ${g}_{0}$, and let ${\mathcal{S}}^{0}$ be the empty set. Let ${g}^{0}=g\left({\mathcal{S}}^{0},{g}^{\mathrm{s}\mathrm{t}\mathrm{a}\mathrm{r}\mathrm{t}},{Q}^{0}\right)$. This defines an initial TMLE ${Q}^{0\ast }=\mathrm{T}\mathrm{M}\mathrm{L}\mathrm{E}\left({Q}^{0},{\mathcal{S}}^{0},{Q}^{0},{g}^{\mathrm{s}\mathrm{t}\mathrm{a}\mathrm{r}\mathrm{t}}\right).$

Determine next TMLE. Determine the next best main term to add: ${\mathcal{S}}^{k+1,\mathrm{c}\mathrm{a}\mathrm{n}\mathrm{d}}=arg\underset{\left\{{\mathcal{S}}^{k}\cup \left\{{W}_{j}\right\}:{W}_{j}\in {\mathcal{S}}^{k,c}\right\}}{\phantom{\rule{thinmathspace}{0ex}}min\phantom{\rule{thinmathspace}{0ex}}}{P}_{n}L\left(\mathrm{T}\mathrm{M}\mathrm{L}\mathrm{E}\left({Q}^{k},{\mathcal{S}}^{k}\cup \left\{{W}_{j}\right\},{Q}^{k-1},{g}^{k-1}\right)\right).$

If ${P}_{n}L\left(\mathrm{T}\mathrm{M}\mathrm{L}\mathrm{E}\left({Q}^{k},{\mathcal{S}}^{k+1,\mathrm{c}\mathrm{a}\mathrm{n}\mathrm{d}},{Q}^{k-1},{g}^{k-1}\right)\right)\le {P}_{n}L\left({Q}^{k\ast }\right),$

then $\left({\mathcal{S}}^{k+1}={\mathcal{S}}^{k+1,\mathrm{c}\mathrm{a}\mathrm{n}\mathrm{d}},{Q}^{k+1}={Q}^{k}\right)$, else ${Q}^{k+1}={Q}^{k\ast }$, and ${\mathcal{S}}^{k+1}=arg\underset{\left\{{\mathcal{S}}^{k}\cup \left\{{W}_{j}\right\}:{W}_{j}\in {\mathcal{S}}^{k,c}\right\}}{\phantom{\rule{thinmathspace}{0ex}}min\phantom{\rule{thinmathspace}{0ex}}}{P}_{n}L\left(\mathrm{T}\mathrm{M}\mathrm{L}\mathrm{E}\left({Q}^{k\ast },{\mathcal{S}}^{k}\cup \left\{{W}_{j}\right\},{Q}^{k-1},{g}^{k-1}\right)\right).$

[In words: If the next best main term added to the fit of ${E}_{{P}_{0}}\left(A|W\right)$ yields a TMLE of ${E}_{{P}_{0}}\left(Y|A,W\right)$ that improves upon the previous TMLE ${Q}^{k\ast }$, then we accept this best main term, and we have our next $\left({Q}^{k+1},{\mathcal{S}}^{k+1}\right)$ and corresponding TMLE ${Q}^{k+1\ast },{g}^{k+1\ast }$ (which still uses the same initial estimate of ${Q}_{0}$ as ${Q}^{k\ast }$ does). Otherwise, we reject this best main term, update the initial estimate in the candidate TMLEs to the previous TMLE ${Q}^{k\ast }$ of ${E}_{{P}_{0}}\left(Y|A,W\right)$, and determine the best main term to add again. This best main term will now always result in an improved fit of the corresponding TMLE of ${Q}_{0}$, so that we have our next TMLE ${Q}^{k+1\ast },{g}^{k+1\ast }$ (which now uses a different initial estimate than ${Q}^{k\ast }$ did).]

Iterate. Run this from $k=1$ to K, at which point ${\mathcal{S}}^{K}=\mathrm{\Omega }$. This yields a sequence $\left({Q}^{k},{g}^{k}\right)$ and corresponding TMLEs $\left({Q}^{k\ast },{g}^{k\ast }\right)$, $k=0,\dots ,K$.
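The accept/reject logic of this sequence-generating step can be sketched as follows. This is a minimal illustration in Python, not the paper's implementation: `tmle` and `loss` are hypothetical callables standing in for the mapping from an initial estimate and a main term set to the fitted TMLE, and for the empirical risk ${P}_{n}L$.

```python
def ctmle_sequence(terms, tmle, loss):
    """Generate the C-TMLE candidate sequence of (main term set, TMLE fit) pairs.

    terms : list of candidate main terms W_j for the propensity score.
    tmle(initial, subset) -> fitted TMLE Q^{k*} (hypothetical callable).
    loss(fit) -> empirical risk P_n L(fit) of a fit (hypothetical callable).
    """
    active = []                                  # S^k, the accepted main terms
    initial = None                               # current initial estimator Q^k
    current = tmle(initial, tuple(active))       # Q^{0*}
    sequence = [(tuple(active), current)]
    remaining = list(terms)
    while remaining:
        # next best main term given the current initial estimator
        best = min(remaining,
                   key=lambda t: loss(tmle(initial, tuple(active + [t]))))
        candidate = tmle(initial, tuple(active + [best]))
        if loss(candidate) > loss(current):
            # rejected: update the initial estimator to the previous TMLE,
            # then re-select; the best term now always improves the fit
            initial = current
            best = min(remaining,
                       key=lambda t: loss(tmle(initial, tuple(active + [t]))))
            candidate = tmle(initial, tuple(active + [best]))
        active.append(best)
        remaining.remove(best)
        current = candidate
        sequence.append((tuple(active), current))
    return sequence
```

By construction the empirical risks of the returned fits are non-increasing along the sequence, mirroring the property of $\left({Q}^{k\ast }:k\right)$ stated below.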

This sequence of candidate TMLEs ${Q}^{k\ast }$ of ${Q}_{0}$ has the following property: the estimates ${g}^{k}$ are increasingly non-parametric in k and ${P}_{n}L\left({Q}^{k\ast }\right)$ is decreasing in k, $k=0,\dots ,K$. It remains to select k. For that purpose we use V-fold cross-validation. That is, for each of the V splits of the sample into a training and a validation sample, we apply the above algorithm for generating a sequence of candidate estimates $\left({Q}^{k\ast }:k\right)$ to the training sample, and we evaluate the empirical mean of the loss function at the resulting ${Q}^{k\ast }$ over the validation sample, for each $k=0,\dots ,K$. For each k we average this k-specific validation-sample performance measure over the V splits; this average is called the cross-validated risk of the k-specific TMLE. We select the k with the smallest cross-validated risk, which we denote by ${k}_{n}$. Our final C-TMLE of ${Q}_{0}$ is now defined as ${Q}_{n}^{\ast }={Q}^{{k}_{n}\ast }$, and the TMLE of ${\mathrm{\psi }}_{0}$ is defined as ${\mathrm{\psi }}_{n}^{\ast }=\mathrm{\Psi }\left({Q}_{n}^{\ast }\right)$.
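The cross-validated selection of k can be sketched as follows, again as a minimal Python illustration under stated assumptions: `build_sequence` is a hypothetical callable fitting the candidate sequence $\left({Q}^{k\ast }:k\right)$ on a training sample (assumed to return the same number of candidates on every fold), and `risk` evaluates the validation-sample empirical mean of the loss.

```python
import numpy as np

def select_k_by_cv(data, n_folds, build_sequence, risk):
    """V-fold cross-validated selection of the C-TMLE index k.

    build_sequence(train) -> list of candidate fits [Q^{0*}, ..., Q^{K*}].
    risk(fit, valid) -> empirical mean of the loss of `fit` over `valid`.
    Returns the selected index k_n and the vector of cross-validated risks.
    """
    n = len(data)
    folds = np.array_split(np.arange(n), n_folds)
    cv_risk = None
    for held_out in folds:
        mask = np.ones(n, dtype=bool)
        mask[held_out] = False
        fits = build_sequence(data[mask])               # fit on training part
        risks = np.array([risk(f, data[~mask]) for f in fits])
        cv_risk = risks if cv_risk is None else cv_risk + risks
    cv_risk = cv_risk / n_folds                         # average over the V splits
    return int(np.argmin(cv_risk)), cv_risk
```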

Fast version of above C-TMLE: We could carry out the above C-TMLE algorithm with the full TMLE that maps an initial $\left(Q,g\right)$ into $\left({Q}^{\ast },{g}^{\ast }\right)$ replaced by the first step of the TMLE, which maps $\left(Q,g\right)$ into $\left({Q}^{1},{g}^{1}\right)$. In that manner, the selection of the sets ${\mathcal{S}}^{k}$ is based on the bias reduction achieved in the first step of the TMLE algorithm, and most of the bias reduction occurs in that first step. After having selected the final one-step TMLE ${Q}^{{k}_{n}1}$ and corresponding ${g}^{{k}_{n}}$, one should still carry out the full TMLE algorithm so that the final $\left({Q}_{n}^{\ast }={Q}^{{k}_{n}\ast },{g}^{{k}_{n}\ast }\right)$ is a real TMLE solving the estimating equations of Theorem 4.

Statistical inference for C-TMLE: Let ${\stackrel{ˉ}{Q}}_{n}^{r\ast }={\stackrel{ˉ}{Q}}_{n}^{r\ast }\left({\stackrel{ˉ}{g}}_{n}^{\ast },{\stackrel{ˉ}{Q}}_{n}^{\ast }\right)$ be the final estimator of ${\stackrel{ˉ}{Q}}_{0}^{r}={\stackrel{ˉ}{Q}}_{0}^{r}\left(\stackrel{ˉ}{g},\stackrel{ˉ}{Q}\right)={E}_{{P}_{0}}\left(Y-\stackrel{ˉ}{Q}|A=1,\stackrel{ˉ}{g}\right)$, a by-product of the TMLE algorithm. An estimate of the influence curve of ${\mathrm{\psi }}_{n}^{\ast }$ is given by $I{C}_{n}={D}^{\ast }\left({Q}_{n}^{\ast },{\stackrel{ˉ}{g}}_{n}^{\ast }\right)-{D}_{A}\left({\stackrel{ˉ}{Q}}_{n}^{r\ast },{\stackrel{ˉ}{g}}_{n}^{\ast }\right).$

The asymptotic variance of $\sqrt{n}\left({\mathrm{\psi }}_{n}^{\ast }-{\mathrm{\psi }}_{0}\right)$ can thus be estimated with ${\mathrm{\sigma }}_{n}^{2}=1/n{\sum }_{i=1}^{n}I{C}_{n}\left({O}_{i}{\right)}^{2}$. An asymptotically valid 0.95-confidence interval for ${\mathrm{\psi }}_{0}$ is given by ${\mathrm{\psi }}_{n}^{\ast }±1.96{\mathrm{\sigma }}_{n}/\sqrt{n}$.
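The variance estimate and Wald-type interval above amount to the following short computation (a generic sketch; `ic_values` stands for the evaluations $I{C}_{n}\left({O}_{i}\right)$, $i=1,\dots ,n$, obtained from the estimated influence curve):

```python
import numpy as np

def ic_confidence_interval(psi_hat, ic_values, z=1.96):
    """Wald-type confidence interval from estimated influence-curve values.

    psi_hat   : the point estimate psi_n^*.
    ic_values : IC_n(O_i) evaluated at the n observations.
    """
    ic = np.asarray(ic_values, dtype=float)
    n = ic.shape[0]
    sigma2 = np.mean(ic ** 2)            # sigma_n^2 = (1/n) sum_i IC_n(O_i)^2
    half_width = z * np.sqrt(sigma2 / n)
    return psi_hat - half_width, psi_hat + half_width
```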

## 6 Discussion

Targeted minimum loss-based estimation allows us to construct plug-in estimators $\mathrm{\Psi }\left({Q}_{n}^{\ast }\right)$ of a path-wise differentiable parameter $\mathrm{\Psi }\left({Q}_{0}\right)$ that utilize the state of the art in ensemble learning, such as super-learning, while guaranteeing that the estimator ${Q}_{n}^{\ast }$ and the estimator ${g}_{n}^{\ast }$ of the nuisance parameter used in the targeting step solve a set of user-supplied estimating equations (empirical means of estimating functions). These estimating functions can be selected so that the resulting TMLE of ${\mathrm{\psi }}_{0}$ has certain statistical properties, such as being efficient, or being guaranteed to be more efficient than a given user-supplied estimator [28, 29]. Most importantly, these estimating equations are necessary to make the TMLE asymptotically linear, i.e. unbiased enough that the first-order linear expansion can be used for statistical inference. For example, by selecting the estimating functions to be equal to the canonical gradient of $\mathrm{\Psi }:\mathcal{M}↦\mathbb{R}$, one arranges that $\mathrm{\Psi }\left({Q}_{n}^{\ast }\right)$ is asymptotically efficient under conditions that assume consistency of ${Q}_{n}^{\ast }$ and ${g}_{n}^{\ast }$.
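As a concrete illustration of solving such an estimating equation, the following minimal sketch performs a one-dimensional logistic fluctuation of an initial estimate of ${E}_{{P}_{0}}\left(Y|A,W\right)$ with clever covariate $H=A/\stackrel{ˉ}{g}$, and verifies that the score equation ${P}_{n}H\left(Y-{\stackrel{ˉ}{Q}}^{\ast }\right)=0$ holds at the fitted fluctuation parameter. All names, the toy data, and the constant initial estimates are hypothetical illustration choices, not the paper's implementation.

```python
import numpy as np

def expit(x):
    return 1.0 / (1.0 + np.exp(-x))

def fluctuate(Qbar, H, Y, n_steps=50):
    """Solve sum_i H_i * (Y_i - expit(logit(Qbar_i) + eps * H_i)) = 0
    in the scalar fluctuation parameter eps by Newton's method."""
    offset = np.log(Qbar / (1.0 - Qbar))      # logit of the initial estimate
    eps = 0.0
    for _ in range(n_steps):
        p = expit(offset + eps * H)
        score = np.sum(H * (Y - p))
        hessian = -np.sum(H ** 2 * p * (1.0 - p))
        eps -= score / hessian
    return eps

rng = np.random.default_rng(0)
n = 500
A = rng.integers(0, 2, size=n)            # binary treatment (toy data)
Y = rng.integers(0, 2, size=n)            # binary outcome (toy data)
gbar = np.full(n, 0.5)                    # hypothetical propensity estimate
Qbar = np.full(n, 0.5)                    # hypothetical initial outcome estimate
H = A / gbar                              # clever covariate H(A, W) = A / gbar(W)
eps = fluctuate(Qbar, H, Y)
Qbar_star = expit(np.log(Qbar / (1 - Qbar)) + eps * H)   # targeted update
# the targeted estimate solves the estimating equation P_n H (Y - Qbar*) = 0
print(abs(np.mean(H * (Y - Qbar_star))))
```

The printed residual is numerically zero, and with these constant initial estimates the targeted fit at $A=1$ recovers the empirical mean of Y among the treated.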

However, we noted that this level of targeting is insufficient if one only relies on consistency of ${g}_{n}^{\ast }$, even when that suffices for consistency of $\mathrm{\Psi }\left({Q}_{n}^{\ast }\right)$. Under such weaker assumptions, additional targeting is necessary so that a specific smooth functional of ${g}_{n}^{\ast }$ is asymptotically linear, which requires that an unknown smooth function of ${g}_{n}^{\ast }$ is itself a TMLE. The joint targeting of ${Q}_{n}^{\ast }$ and ${g}_{n}^{\ast }$ is achieved by a TMLE that also solves the extra equations making this smooth function of ${g}_{n}^{\ast }$ asymptotically linear, allowing one to establish asymptotic linearity of $\mathrm{\Psi }\left({Q}_{n}^{\ast }\right)$ under milder conditions that assume that the second-order terms are negligible relative to the first-order linear approximation.

In this article we pushed this additional level of targeting further by demonstrating that it allows for double robust statistical inference. Even if we estimate the nuisance parameter in a complicated manner, based on a criterion that measures how well it helps the estimator fit ${\mathrm{\psi }}_{0}$, as in the C-TMLE, we can still determine a set of additional estimating equations that need to be targeted by the TMLE in order to establish asymptotic linearity, and thereby valid statistical inference based on the central limit theorem. This allows us to use the sophisticated but often necessary C-TMLE while still preserving valid statistical inference under regularity conditions.

It remains to evaluate the practical benefit of the modifications of IPTW, TMLE, and C-TMLE as presented in this article for both estimation and assessment of uncertainty. We plan to address this in future research.

Even though we focused in this article on a particular concrete estimation problem, TMLE is a general tool, and our TMLE and theorems can be generalized to general statistical models and path-wise differentiable statistical target parameters.

We note that this targeting of nuisance parameter estimators in the TMLE is not only necessary to obtain a known influence curve but also to make the TMLE asymptotically linear. It therefore does not suffice to simply run a bootstrap as an alternative to influence-curve-based inference, since the bootstrap can only work if the estimator is asymptotically linear, so that it has a limit distribution. In addition, the established asymptotic linearity with known influence curve has the important by-product that one obtains statistical inference at no extra computational cost. This is particularly important in these large semi-parametric models, which require aggressive machine learning methods to cover the model space, making the estimators by necessity computationally intensive, so that a (disputable) bootstrap method might simply be too computationally expensive.

## Acknowledgments

This research was supported by an NIH grant R01 AI074345-06. The author is grateful for the excellent, helpful, and insightful comments of the reviewers.

## Proof of Theorem 1

To start with we note: $\begin{array}{c}{P}_{n}D\left({g}_{n}^{\ast }\right)-{P}_{0}D\left({g}_{0}\right)=\left({P}_{n}-{P}_{0}\right)D\left({g}_{0}\right)+{P}_{n}\left(D\left({g}_{n}^{\ast }\right)-D\left({g}_{0}\right)\right)\\ =\left({P}_{n}-{P}_{0}\right)\left(D\left({g}_{0}\right)-{\mathrm{\psi }}_{0}\right)+{P}_{0}\left(D\left({g}_{n}^{\ast }\right)-D\left({g}_{0}\right)\right)+\left({P}_{n}-{P}_{0}\right)\left(D\left({g}_{n}^{\ast }\right)-D\left({g}_{0}\right)\right).\end{array}$

The first term of this decomposition yields the first component $D\left({g}_{0}\right)-{\mathrm{\psi }}_{0}$ of the influence curve. Since $D\left({g}_{n}^{\ast }\right)$ falls in a ${P}_{0}$-Donsker class with probability tending to 1, the rightmost term is ${o}_{P}\left(1/\sqrt{n}\right)$ if ${P}_{0}\left(D\left({g}_{n}^{\ast }\right)-D\left({g}_{0}\right){\right)}^{2}\to 0$ in probability. So it remains to analyze the term ${P}_{0}\left(D\left({g}_{n}^{\ast }\right)-D\left({g}_{0}\right)\right)$. We now note $\begin{array}{c}{P}_{0}\left(D\left({g}_{n}^{\ast }\right)-D\left({g}_{0}\right)\right)={P}_{0}Y\phantom{\rule{thinmathspace}{0ex}}A\left\{1/{g}_{n}^{\ast }-1/{g}_{0}\right\}={P}_{0}Y\phantom{\rule{thinmathspace}{0ex}}A\left\{\left({g}_{0}-{g}_{n}^{\ast }\right)/\left({g}_{n}^{\ast }{g}_{0}\right)\right\}\\ ={P}_{0}Y\phantom{\rule{thinmathspace}{0ex}}A\left\{{g}_{0}-{g}_{n}^{\ast }\right\}/{g}_{0}^{2}+{P}_{0}Y\phantom{\rule{thinmathspace}{0ex}}A{\left({g}_{0}-{g}_{n}^{\ast }\right)}^{2}/\left({g}_{0}^{2}{g}_{n}^{\ast }\right).\end{array}$

By our assumptions, the last term ${P}_{0}Y\phantom{\rule{thinmathspace}{0ex}}A\left({g}_{0}-{g}_{n}^{\ast }{\right)}^{2}/{g}_{0}^{2}{g}_{n}^{\ast }={P}_{0}{\stackrel{ˉ}{Q}}_{0}\left({\stackrel{ˉ}{g}}_{n}^{\ast }-{\stackrel{ˉ}{g}}_{0}{\right)}^{2}/\left({\stackrel{ˉ}{g}}_{0}{\stackrel{ˉ}{g}}_{n}^{\ast }\right)={o}_{P}\left(1/\sqrt{n}\right).$

So it remains to study: ${P}_{0}Y\phantom{\rule{thinmathspace}{0ex}}A\left\{{g}_{0}-{g}_{n}^{\ast }\right\}/{g}_{0}^{2}={P}_{0}{\stackrel{ˉ}{Q}}_{0}\left({\stackrel{ˉ}{g}}_{0}-{\stackrel{ˉ}{g}}_{n}^{\ast }\right)/{\stackrel{ˉ}{g}}_{0}.$

Note that this equals $-\left\{{\mathrm{\Psi }}_{1}\left({g}_{n}^{\ast }\right)-{\mathrm{\Psi }}_{1}\left({g}_{0}\right)\right\}$, where ${\mathrm{\Psi }}_{1}\left(g\right)={P}_{0}\frac{{\stackrel{ˉ}{Q}}_{0}}{{\stackrel{ˉ}{g}}_{0}}\stackrel{ˉ}{g}$ is an unknown smooth parameter of g. Our strategy is to first approximate this parameter by an easier (still unknown) parameter ${\mathrm{\Psi }}_{1}^{r}\left(g\right)={P}_{0}\frac{{\stackrel{ˉ}{Q}}_{0}^{r}}{{\stackrel{ˉ}{g}}_{0}}\stackrel{ˉ}{g}$, resulting in a second-order term: ${\mathrm{\Psi }}_{1}\left({g}_{n}^{\ast }\right)-{\mathrm{\Psi }}_{1}\left({g}_{0}\right)={\mathrm{\Psi }}_{1}^{r}\left({g}_{n}^{\ast }\right)-{\mathrm{\Psi }}_{1}^{r}\left({g}_{0}\right)+{o}_{P}\left(1/\sqrt{n}\right)$. This is carried out in the next lemma. The efficient influence curve of a target parameter $\mathrm{\Phi }:\stackrel{ˉ}{g}↦{P}_{0}H\stackrel{ˉ}{g}$ (which treats ${P}_{0}$ as known) at ${g}_{0}$ is given by $H\left(A-{\stackrel{ˉ}{g}}_{0}\right)$. Thus, one would like to construct ${\stackrel{ˉ}{g}}_{n}^{\ast }$ so that it sets the empirical mean of ${H}_{0}^{r}\left(A-{\stackrel{ˉ}{g}}_{n}^{\ast }\right)$ equal to zero, where ${H}_{0}^{r}={\stackrel{ˉ}{Q}}_{0}^{r}/{\stackrel{ˉ}{g}}_{0}$, so that ${\stackrel{ˉ}{g}}_{n}^{\ast }$ targets the parameter ${\mathrm{\Psi }}_{1}^{r}\left({g}_{0}\right)$. However, ${H}_{0}^{r}$ is unknown. Therefore, ${\stackrel{ˉ}{g}}_{n}^{\ast }$ is instead constructed to set the empirical mean of an estimate ${H}_{n}^{r\ast }\left(A-{\stackrel{ˉ}{g}}_{n}^{\ast }\right)$ of the efficient influence curve ${H}_{0}^{r}\left(A-{\stackrel{ˉ}{g}}_{n}^{\ast }\right)$ equal to zero, and we will show that this indeed suffices to establish the asymptotic linearity of ${\mathrm{\Psi }}_{1}^{r}\left({\stackrel{ˉ}{g}}_{n}^{\ast }\right)$.

Lemma 2 Define ${\mathrm{\Psi }}_{1}\left(g\right)={P}_{0}\frac{{\stackrel{ˉ}{Q}}_{0}}{{\stackrel{ˉ}{g}}_{0}}\stackrel{ˉ}{g}$, ${\mathrm{\Psi }}_{1}^{r}\left(g\right)={P}_{0}\frac{{\stackrel{ˉ}{Q}}_{0}^{r}}{{\stackrel{ˉ}{g}}_{0}}\stackrel{ˉ}{g}$, ${\stackrel{ˉ}{Q}}_{0,n}^{r}\equiv {E}_{{P}_{0}}\left(Y|A=1,{\stackrel{ˉ}{g}}_{0}\left(W\right),{\stackrel{ˉ}{g}}_{n}^{\ast }\left(W\right)\right)$, and ${\stackrel{ˉ}{Q}}_{0}^{r}={E}_{{P}_{0}}\left(Y|A=1,{\stackrel{ˉ}{g}}_{0}\left(W\right)\right)$, where ${\stackrel{ˉ}{g}}_{n}^{\ast }\left(W\right)$ is treated as a fixed function of W when calculating the conditional expectation. Assume ${R}_{1,n}\equiv {P}_{0}\left({\stackrel{ˉ}{Q}}_{0,n}^{r}-{\stackrel{ˉ}{Q}}_{0}^{r}\right)\left({\stackrel{ˉ}{g}}_{n}^{\ast }-{\stackrel{ˉ}{g}}_{0}\right)/{\stackrel{ˉ}{g}}_{0}={o}_{P}\left(1/\sqrt{n}\right).$

Then, ${\mathrm{\Psi }}_{1}\left({g}_{n}^{\ast }\right)-{\mathrm{\Psi }}_{1}\left({g}_{0}\right)={\mathrm{\Psi }}_{1}^{r}\left({\stackrel{ˉ}{g}}_{n}^{\ast }\right)-{\mathrm{\Psi }}_{1}^{r}\left({\stackrel{ˉ}{g}}_{0}\right)+{R}_{1,n}.$

Proof of Lemma 2: Note that $\begin{array}{rl}{\mathrm{\Psi }}_{1}\left({g}_{n}^{\ast }\right)-{\mathrm{\Psi }}_{1}\left({g}_{0}\right)& ={P}_{0}Y\phantom{\rule{thinmathspace}{0ex}}A\left\{{g}_{n}^{\ast }-{g}_{0}\right\}/{g}_{0}^{2}\\ ={P}_{0}{\stackrel{ˉ}{Q}}_{0,n}^{r}A\left\{{g}_{n}^{\ast }-{g}_{0}\right\}/{g}_{0}^{2}\\ ={P}_{0}{\stackrel{ˉ}{Q}}_{0,n}^{r}\left({\stackrel{ˉ}{g}}_{n}^{\ast }-{\stackrel{ˉ}{g}}_{0}\right)/{\stackrel{ˉ}{g}}_{0}\\ ={P}_{0}{\stackrel{ˉ}{Q}}_{0}^{r}\left({\stackrel{ˉ}{g}}_{n}^{\ast }-{\stackrel{ˉ}{g}}_{0}\right)/{\stackrel{ˉ}{g}}_{0}+{P}_{0}\left({\stackrel{ˉ}{Q}}_{0,n}^{r}-{\stackrel{ˉ}{Q}}_{0}^{r}\right)\left({\stackrel{ˉ}{g}}_{n}^{\ast }-{\stackrel{ˉ}{g}}_{0}\right)/{\stackrel{ˉ}{g}}_{0}.\square \end{array}$

Since we assumed ${R}_{1,n}={o}_{P}\left(1/\sqrt{n}\right)$, it remains to prove that ${\mathrm{\Psi }}_{1}^{r}\left({g}_{n}^{\ast }\right)-{\mathrm{\Psi }}_{1}^{r}\left({g}_{0}\right)={P}_{0}{\stackrel{ˉ}{Q}}_{0}^{r}\left({\stackrel{ˉ}{g}}_{n}^{\ast }-{\stackrel{ˉ}{g}}_{0}\right)/{\stackrel{ˉ}{g}}_{0}$ is asymptotically linear. Recall that ${H}_{0}^{r}={\stackrel{ˉ}{Q}}_{0}^{r}/{\stackrel{ˉ}{g}}_{0}$, and ${H}_{n}^{r\ast }={\stackrel{ˉ}{Q}}_{n}^{r\ast }/{\stackrel{ˉ}{g}}_{n}^{\ast }$, where ${\stackrel{ˉ}{Q}}_{n}^{r\ast }$ is obtained by regressing Y on the initial estimator ${\stackrel{ˉ}{g}}_{n}\left(W\right)$ and $A=1$.

The next step of the proof is the following series of equalities $\begin{array}{rl}{P}_{0}{D}_{A}\left({\stackrel{ˉ}{Q}}_{n}^{r\ast },{\stackrel{ˉ}{g}}_{n}^{\ast }\right)=\phantom{\rule{thinmathspace}{0ex}}& {P}_{0}\left({D}_{A}\left({\stackrel{ˉ}{Q}}_{n}^{r\ast },{\stackrel{ˉ}{g}}_{n}^{\ast }\right)-{D}_{A}\left({\stackrel{ˉ}{Q}}_{0}^{r},{\stackrel{ˉ}{g}}_{n}^{\ast }\right)\right)+{P}_{0}{D}_{A}\left({\stackrel{ˉ}{Q}}_{0}^{r},{\stackrel{ˉ}{g}}_{n}^{\ast }\right)\\ =\phantom{\rule{thinmathspace}{0ex}}& \int \left({H}_{n}^{r\ast }-{H}_{0}^{r}\right)\left(W\right)\left(A-{\stackrel{ˉ}{g}}_{n}^{\ast }\left(W\right)\right)d{P}_{0}\left(W,A\right)+{P}_{0}{D}_{A}\left({\stackrel{ˉ}{Q}}_{0}^{r},{\stackrel{ˉ}{g}}_{n}^{\ast }\right)\\ =\phantom{\rule{thinmathspace}{0ex}}& \int \left({H}_{n}^{r\ast }-{H}_{0}^{r}\right)\left(W\right)\left({\stackrel{ˉ}{g}}_{0}-{\stackrel{ˉ}{g}}_{n}^{\ast }\right)\left(W\right)d{P}_{0}\left(W\right)+{P}_{0}{D}_{A}\left({\stackrel{ˉ}{Q}}_{0}^{r},{\stackrel{ˉ}{g}}_{n}^{\ast }\right)\\ =\phantom{\rule{thinmathspace}{0ex}}& {R}_{2,n}+{P}_{0}{D}_{A}\left({\stackrel{ˉ}{Q}}_{0}^{r},{\stackrel{ˉ}{g}}_{n}^{\ast }\right),\end{array}$

where, by assumption, ${R}_{2,n}={o}_{P}\left(1/\sqrt{n}\right)$. We now note that $\begin{array}{rl}{P}_{0}{D}_{A}\left({\stackrel{ˉ}{Q}}_{0}^{r},{\stackrel{ˉ}{g}}_{n}^{\ast }\right)=\phantom{\rule{thinmathspace}{0ex}}& \int {H}_{0}^{r}\left(W\right)\left(A-{\stackrel{ˉ}{g}}_{n}^{\ast }\left(W\right)\right)d{P}_{0}\left(A,W\right)\\ =\phantom{\rule{thinmathspace}{0ex}}& \int {H}_{0}^{r}\left(W\right){\stackrel{ˉ}{g}}_{0}\left(W\right)d{P}_{0}\left(W\right)-\int {H}_{0}^{r}\left(W\right){\stackrel{ˉ}{g}}_{n}^{\ast }\left(W\right)d{P}_{0}\left(W\right)\\ \equiv \phantom{\rule{thinmathspace}{0ex}}& {\mathrm{\Psi }}_{1}^{r}\left({\stackrel{ˉ}{g}}_{0}\right)-{\mathrm{\Psi }}_{1}^{r}\left({\stackrel{ˉ}{g}}_{n}^{\ast }\right).\end{array}$

Thus, we have $-{P}_{0}{D}_{A}\left({\stackrel{ˉ}{Q}}_{n}^{r\ast },{\stackrel{ˉ}{g}}_{n}^{\ast }\right)={\mathrm{\Psi }}_{1}^{r}\left({\stackrel{ˉ}{g}}_{n}^{\ast }\right)-{\mathrm{\Psi }}_{1}^{r}\left({\stackrel{ˉ}{g}}_{0}\right)-{R}_{2,n},$

from which we deduce, by Lemma 2 and ${P}_{n}{D}_{A}\left({\stackrel{ˉ}{Q}}_{n}^{r\ast },{\stackrel{ˉ}{g}}_{n}^{\ast }\right)=0$, that $\begin{array}{rl}{\mathrm{\Psi }}_{1}\left({\stackrel{ˉ}{g}}_{n}^{\ast }\right)-{\mathrm{\Psi }}_{1}\left({\stackrel{ˉ}{g}}_{0}\right)& =-{P}_{0}{D}_{A}\left({\stackrel{ˉ}{Q}}_{n}^{r\ast },{\stackrel{ˉ}{g}}_{n}^{\ast }\right)+{R}_{1,n}+{R}_{2,n}\\ & =\left({P}_{n}-{P}_{0}\right){D}_{A}\left({\stackrel{ˉ}{Q}}_{n}^{r\ast },{\stackrel{ˉ}{g}}_{n}^{\ast }\right)+{R}_{1,n}+{R}_{2,n}\\ & =\left({P}_{n}-{P}_{0}\right){D}_{A}\left({\stackrel{ˉ}{Q}}_{0}^{r},{\stackrel{ˉ}{g}}_{0}\right)+{R}_{1,n}+{R}_{2,n}+{R}_{3,n},\end{array}$

where we defined ${R}_{3,n}=\left({P}_{n}-{P}_{0}\right)\left({D}_{A}\left({\stackrel{ˉ}{Q}}_{n}^{r\ast },{\stackrel{ˉ}{g}}_{n}^{\ast }\right)-{D}_{A}\left({\stackrel{ˉ}{Q}}_{0}^{r},{\stackrel{ˉ}{g}}_{0}\right)\right).$

By our assumptions, ${R}_{3,n}={o}_{P}\left(1/\sqrt{n}\right)$, so that it follows that ${\mathrm{\Psi }}_{1}\left({g}_{n}^{\ast }\right)-{\mathrm{\Psi }}_{1}\left({g}_{0}\right)=\left({P}_{n}-{P}_{0}\right){D}_{A}\left({\stackrel{ˉ}{Q}}_{0}^{r},{\stackrel{ˉ}{g}}_{0}\right)+{o}_{P}\left(1/\sqrt{n}\right)$. □

## Proof of Theorem 2

One easily checks that $\begin{array}{rl}\mathrm{\Psi }\left({Q}_{n}^{\ast }\right)-\mathrm{\Psi }\left({Q}_{0}\right)=& -{P}_{0}{D}^{\ast }\left({Q}_{n}^{\ast },{g}_{0}\right)\\ =\phantom{\rule{thinmathspace}{0ex}}& -{P}_{0}{D}^{\ast }\left({Q}_{n}^{\ast },{g}_{n}^{\ast }\right)+{P}_{0}\left({D}^{\ast }\left({Q}_{n}^{\ast },{g}_{n}^{\ast }\right)-{D}^{\ast }\left({Q}_{n}^{\ast },{g}_{0}\right)\right)\\ =\phantom{\rule{thinmathspace}{0ex}}& \left({P}_{n}-{P}_{0}\right){D}^{\ast }\left({Q}_{n}^{\ast },{g}_{n}^{\ast }\right)+{P}_{0}\left\{{D}^{\ast }\left({Q}_{n}^{\ast },{g}_{n}^{\ast }\right)-{D}^{\ast }\left({Q}_{n}^{\ast },{g}_{0}\right)\right\},\end{array}$

because ${P}_{n}{D}^{\ast }\left({Q}_{n}^{\ast },{g}_{n}^{\ast }\right)=0$ by eq. (2). If ${D}^{\ast }\left({Q}_{n}^{\ast },{g}_{n}^{\ast }\right)$ falls in a ${P}_{0}$-Donsker class and ${P}_{0}\left\{{D}^{\ast }\left({Q}_{n}^{\ast },{g}_{n}^{\ast }\right)-{D}^{\ast }\left(Q,{g}_{0}\right){\right\}}^{2}={o}_{P}\left(1\right)$ for some possibly misspecified limit Q of ${Q}_{n}^{\ast }$, then the first term on the right-hand side equals $\left({P}_{n}-{P}_{0}\right){D}^{\ast }\left(Q,{g}_{0}\right)+{o}_{P}\left(1/\sqrt{n}\right)$, giving us the first component ${D}^{\ast }\left(Q,{g}_{0}\right)$ of the influence curve of $\mathrm{\Psi }\left({Q}_{n}^{\ast }\right)$. The second term can be written as $A+B$ with $\begin{array}{c}A={P}_{0}\left(\left\{{D}^{\ast }\left({Q}_{n}^{\ast },{g}_{n}^{\ast }\right)-{D}^{\ast }\left({Q}_{n}^{\ast },{g}_{0}\right)\right\}-\left\{{D}^{\ast }\left(Q,{g}_{n}^{\ast }\right)-{D}^{\ast }\left(Q,{g}_{0}\right)\right\}\right)\\ B={P}_{0}\left\{{D}^{\ast }\left(Q,{g}_{n}^{\ast }\right)-{D}^{\ast }\left(Q,{g}_{0}\right)\right\}.\end{array}$

The first term A equals $-{P}_{0}\left({H}_{Y}\left({g}_{n}^{\ast }\right)-{H}_{Y}\left({g}_{0}\right)\right)\left({\stackrel{ˉ}{Q}}_{n}^{\ast }-\stackrel{ˉ}{Q}\right),$

where ${H}_{Y}\left(g\right)\left(A,W\right)=A/\stackrel{ˉ}{g}\left(W\right)$. By our assumptions, this term is ${o}_{P}\left(1/\sqrt{n}\right)$. Thus, it suffices to establish asymptotic linearity of ${\mathrm{\Psi }}_{1}\left({g}_{n}^{\ast }\right)={P}_{0}{D}^{\ast }\left(Q,{g}_{n}^{\ast }\right)$ as an estimator of ${\mathrm{\Psi }}_{1}\left({g}_{0}\right)={P}_{0}{D}^{\ast }\left(Q,{g}_{0}\right)$. We have $\begin{array}{c}{\mathrm{\Psi }}_{1}\left({g}_{n}^{\ast }\right)-{\mathrm{\Psi }}_{1}\left({g}_{0}\right)=-{P}_{0}\left(Y-\stackrel{ˉ}{Q}\right)\frac{A}{{\stackrel{ˉ}{g}}_{n}^{\ast }{\stackrel{ˉ}{g}}_{0}}\left({\stackrel{ˉ}{g}}_{n}^{\ast }-{\stackrel{ˉ}{g}}_{0}\right)\\ =-{P}_{0}{\stackrel{ˉ}{Q}}_{0,n}^{r}\frac{A}{{\stackrel{ˉ}{g}}_{n}^{\ast }{\stackrel{ˉ}{g}}_{0}}\left({\stackrel{ˉ}{g}}_{n}^{\ast }-{\stackrel{ˉ}{g}}_{0}\right)\\ =-{P}_{0}{\stackrel{ˉ}{Q}}_{0,n}^{r}\frac{1}{{\stackrel{ˉ}{g}}_{n}^{\ast }}\left({\stackrel{ˉ}{g}}_{n}^{\ast }-{\stackrel{ˉ}{g}}_{0}\right),\end{array}$

where ${\stackrel{ˉ}{Q}}_{0,n}^{r}$ appeared by writing the expectation w.r.t. ${P}_{0}$ as an expectation of the conditional expectation, given $A,{\stackrel{ˉ}{g}}_{n}^{\ast }\left(W\right),{\stackrel{ˉ}{g}}_{0}\left(W\right)$. Let ${H}_{0,n}^{r}={\stackrel{ˉ}{Q}}_{0,n}^{r}/{\stackrel{ˉ}{g}}_{n}^{\ast }$ and recall ${H}_{0}^{r}={\stackrel{ˉ}{Q}}_{0}^{r}/{\stackrel{ˉ}{g}}_{0}$, where ${\stackrel{ˉ}{Q}}_{0}^{r}={E}_{{P}_{0}}\left(Y-\stackrel{ˉ}{Q}\left(W\right)|A=1,{\stackrel{ˉ}{g}}_{0}\left(W\right)\right)$. The last term can be written as $-{P}_{0}{H}_{0}^{r}\left({\stackrel{ˉ}{g}}_{n}^{\ast }-{\stackrel{ˉ}{g}}_{0}\right)-{P}_{0}\left({H}_{0,n}^{r}-{H}_{0}^{r}\right)\left({\stackrel{ˉ}{g}}_{n}^{\ast }-{\stackrel{ˉ}{g}}_{0}\right).$

By our assumptions, the second term above is ${o}_{P}\left(1/\sqrt{n}\right)$. Thus, in order to establish asymptotic linearity of ${\mathrm{\Psi }}_{1}\left({g}_{n}^{\ast }\right)$, it suffices to establish asymptotic linearity of ${\mathrm{\Psi }}_{1}^{r}\left({\stackrel{ˉ}{g}}_{n}^{\ast }\right)={P}_{0}{H}_{0}^{r}{\stackrel{ˉ}{g}}_{n}^{\ast }$ as an estimator of ${\mathrm{\Psi }}_{1}^{r}\left({\stackrel{ˉ}{g}}_{0}\right)={P}_{0}{H}_{0}^{r}{\stackrel{ˉ}{g}}_{0}$, where ${P}_{0}$ and ${H}_{0}^{r}$ are treated as known.

The estimator ${\stackrel{ˉ}{g}}_{n}^{\ast }$ was constructed to target ${P}_{0}{H}_{n}^{r\ast }{\stackrel{ˉ}{g}}_{0}$ instead where we recall that ${H}_{n}^{r\ast }={\stackrel{ˉ}{Q}}_{n}^{r\ast }/{\stackrel{ˉ}{g}}_{n}^{\ast }$. That is, our targeted estimator ${g}_{n}^{\ast }$ solves the efficient influence curve equation ${P}_{n}{D}_{A}\left({\stackrel{ˉ}{Q}}_{n}^{r\ast },{\stackrel{ˉ}{g}}_{n}^{\ast }\right)=0$ for the parameter ${P}_{0}{H}_{n}^{r\ast }{\stackrel{ˉ}{g}}_{0}$ of ${\stackrel{ˉ}{g}}_{0}$. We now note that $\begin{array}{rl}{P}_{0}{D}_{A}\left({\stackrel{ˉ}{Q}}_{0}^{r},{\stackrel{ˉ}{g}}_{n}^{\ast }\right)=\phantom{\rule{thinmathspace}{0ex}}& \int {H}_{0}^{r}\left(W\right)\left(A-{\stackrel{ˉ}{g}}_{n}^{\ast }\left(W\right)\right)d{P}_{0}\left(A,W\right)\\ =\phantom{\rule{thinmathspace}{0ex}}& \int {H}_{0}^{r}\left(W\right){\stackrel{ˉ}{g}}_{0}\left(W\right)d{P}_{0}\left(W\right)-\int {H}_{0}^{r}\left(W\right){\stackrel{ˉ}{g}}_{n}^{\ast }\left(W\right)d{P}_{0}\left(W\right)\\ \equiv \phantom{\rule{thinmathspace}{0ex}}& {\mathrm{\Psi }}_{1}^{r}\left({\stackrel{ˉ}{g}}_{0}\right)-{\mathrm{\Psi }}_{1}^{r}\left({\stackrel{ˉ}{g}}_{n}^{\ast }\right).\end{array}$

We have $\begin{array}{c}{P}_{0}{D}_{A}\left({\stackrel{ˉ}{Q}}_{n}^{r\ast },{\stackrel{ˉ}{g}}_{n}^{\ast }\right)={P}_{0}\left\{{D}_{A}\left({\stackrel{ˉ}{Q}}_{n}^{r\ast },{\stackrel{ˉ}{g}}_{n}^{\ast }\right)-{D}_{A}\left({\stackrel{ˉ}{Q}}_{0}^{r},{\stackrel{ˉ}{g}}_{n}^{\ast }\right)\right\}+{P}_{0}{D}_{A}\left({\stackrel{ˉ}{Q}}_{0}^{r},{\stackrel{ˉ}{g}}_{n}^{\ast }\right)\\ =\int \left({H}_{n}^{r\ast }-{H}_{0}^{r}\right)\left(W\right)\left(A-{\stackrel{ˉ}{g}}_{n}^{\ast }\left(W\right)\right)d{P}_{0}\left(W,A\right)+{P}_{0}{D}_{A}\left({\stackrel{ˉ}{Q}}_{0}^{r},{\stackrel{ˉ}{g}}_{n}^{\ast }\right)\\ =\int \left({H}_{n}^{r\ast }-{H}_{0}^{r}\right)\left(W\right)\left({\stackrel{ˉ}{g}}_{0}-{\stackrel{ˉ}{g}}_{n}^{\ast }\left(W\right)\right)d{P}_{0}\left(W\right)+{P}_{0}{D}_{A}\left({\stackrel{ˉ}{Q}}_{0}^{r},{\stackrel{ˉ}{g}}_{n}^{\ast }\right)\equiv {R}_{2,n}+{P}_{0}{D}_{A}\left({\stackrel{ˉ}{Q}}_{0}^{r},{\stackrel{ˉ}{g}}_{n}^{\ast }\right),\end{array}$

where ${R}_{2,n}={o}_{P}\left(1/\sqrt{n}\right)$, by assumption. Combining the last two equations yields: $\begin{array}{rl}{\mathrm{\Psi }}_{1}^{r}\left({\stackrel{ˉ}{g}}_{n}^{\ast }\right)-{\mathrm{\Psi }}_{1}^{r}\left({\stackrel{ˉ}{g}}_{0}\right)& =-{P}_{0}{D}_{A}\left({\stackrel{ˉ}{Q}}_{n}^{r\ast },{\stackrel{ˉ}{g}}_{n}^{\ast }\right)-{R}_{2,n}\\ & =\left({P}_{n}-{P}_{0}\right){D}_{A}\left({\stackrel{ˉ}{Q}}_{n}^{r\ast },{\stackrel{ˉ}{g}}_{n}^{\ast }\right)-{R}_{2,n}\\ & =\left({P}_{n}-{P}_{0}\right){D}_{A}\left({\stackrel{ˉ}{Q}}_{0}^{r},{\stackrel{ˉ}{g}}_{0}\right)-{R}_{2,n}+{R}_{3,n},\end{array}$

where we defined ${R}_{3,n}=\left({P}_{n}-{P}_{0}\right)\left({D}_{A}\left({\stackrel{ˉ}{Q}}_{n}^{r\ast },{\stackrel{ˉ}{g}}_{n}^{\ast }\right)-{D}_{A}\left({\stackrel{ˉ}{Q}}_{0}^{r},{\stackrel{ˉ}{g}}_{0}\right)\right).$

We have that ${R}_{3,n}={o}_{P}\left(1/\sqrt{n}\right)$ if ${D}_{A}\left({\stackrel{ˉ}{Q}}_{n}^{r\ast },{\stackrel{ˉ}{g}}_{n}^{\ast }\right)-{D}_{A}\left({\stackrel{ˉ}{Q}}_{0}^{r},{\stackrel{ˉ}{g}}_{0}\right)$ falls in a ${P}_{0}$-Donsker class with probability tending to 1, and ${P}_{0}\left\{{D}_{A}\left({\stackrel{ˉ}{Q}}_{n}^{r\ast },{\stackrel{ˉ}{g}}_{n}^{\ast }\right)-{D}_{A}\left({\stackrel{ˉ}{Q}}_{0}^{r},{\stackrel{ˉ}{g}}_{0}\right){\right\}}^{2}\to 0$ in probability as $n\to \mathrm{\infty }$. Thus, we have proven that ${\mathrm{\Psi }}_{1}^{r}\left({g}_{n}^{\ast }\right)-{\mathrm{\Psi }}_{1}^{r}\left({g}_{0}\right)=\left({P}_{n}-{P}_{0}\right){D}_{A}\left({\stackrel{ˉ}{Q}}_{0}^{r},{\stackrel{ˉ}{g}}_{0}\right)+{o}_{P}\left(1/\sqrt{n}\right)$. It follows that ${\mathrm{\Psi }}_{1}\left({g}_{n}^{\ast }\right)-{\mathrm{\Psi }}_{1}\left({g}_{0}\right)=-\left({P}_{n}-{P}_{0}\right){D}_{A}\left({\stackrel{ˉ}{Q}}_{0}^{r},{\stackrel{ˉ}{g}}_{0}\right)+{o}_{P}\left(1/\sqrt{n}\right).\square$

## Proof of Theorem 3

As outlined in Section 1, we have $\begin{array}{c}\mathrm{\Psi }\left({Q}_{n}^{\ast }\right)-\mathrm{\Psi }\left({Q}_{0}\right)=-{P}_{0}{D}^{\ast }\left({Q}_{n}^{\ast },{g}_{n}^{\ast }\right)+{P}_{0}\left({\stackrel{ˉ}{Q}}_{0}-{\stackrel{ˉ}{Q}}_{n}^{\ast }\right)\frac{{\stackrel{ˉ}{g}}_{0}-{\stackrel{ˉ}{g}}_{n}^{\ast }}{{\stackrel{ˉ}{g}}_{n}^{\ast }}\\ =\left({P}_{n}-{P}_{0}\right){D}^{\ast }\left({Q}_{n}^{\ast },{g}_{n}^{\ast }\right)+{P}_{0}\left({\stackrel{ˉ}{Q}}_{0}-{\stackrel{ˉ}{Q}}_{n}^{\ast }\right)\frac{{\stackrel{ˉ}{g}}_{0}-{\stackrel{ˉ}{g}}_{n}^{\ast }}{{\stackrel{ˉ}{g}}_{n}^{\ast }}\\ =\left({P}_{n}-{P}_{0}\right){D}^{\ast }\left(Q,g\right)+{P}_{0}\left({\stackrel{ˉ}{Q}}_{n}^{\ast }-{\stackrel{ˉ}{Q}}_{0}\right)\frac{{\stackrel{ˉ}{g}}_{n}^{\ast }-{\stackrel{ˉ}{g}}_{0}}{{\stackrel{ˉ}{g}}_{n}^{\ast }}+{o}_{P}\left(1/\sqrt{n}\right),\end{array}$

if ${D}^{\ast }\left({Q}_{n}^{\ast },{g}_{n}^{\ast }\right)$ falls in a Donsker class with probability tending to 1, and ${P}_{0}\left\{{D}^{\ast }\left({Q}_{n}^{\ast },{g}_{n}^{\ast }\right)-{D}^{\ast }\left(Q,g\right){\right\}}^{2}\to 0$ in probability as $n\to \mathrm{\infty }$. The first term on the right-hand side gives us the first component ${D}^{\ast }\left(Q,g\right)$ of the influence curve of ${\mathrm{\psi }}_{n}^{\ast }$.

It suffices to analyze the second term. First, we note that ${P}_{0}\left({\stackrel{ˉ}{Q}}_{n}^{\ast }-{\stackrel{ˉ}{Q}}_{0}\right)\frac{{\stackrel{ˉ}{g}}_{n}^{\ast }-{\stackrel{ˉ}{g}}_{0}}{{\stackrel{ˉ}{g}}_{n}^{\ast }}={P}_{0}\left({\stackrel{ˉ}{Q}}_{n}^{\ast }-{\stackrel{ˉ}{Q}}_{0}\right)\frac{{\stackrel{ˉ}{g}}_{n}^{\ast }-{\stackrel{ˉ}{g}}_{0}}{\stackrel{ˉ}{g}}+{R}_{1,n},$

where ${R}_{1,n}=-{P}_{0}\left({\stackrel{ˉ}{Q}}_{n}^{\ast }-{\stackrel{ˉ}{Q}}_{0}\right)\left({\stackrel{ˉ}{g}}_{n}^{\ast }-{\stackrel{ˉ}{g}}_{0}\right)\frac{{\stackrel{ˉ}{g}}_{n}^{\ast }-\stackrel{ˉ}{g}}{\stackrel{ˉ}{g}{\stackrel{ˉ}{g}}_{n}^{\ast }}.$

By assumption, ${R}_{1,n}={o}_{P}\left(1/\sqrt{n}\right)$.

Now, we note $\begin{array}{c}{P}_{0}\left({\stackrel{ˉ}{Q}}_{n}^{\ast }-{\stackrel{ˉ}{Q}}_{0}\right)\frac{{\stackrel{ˉ}{g}}_{n}^{\ast }-{\stackrel{ˉ}{g}}_{0}}{\stackrel{ˉ}{g}}={P}_{0}\left({\stackrel{ˉ}{Q}}_{n}^{\ast }-\stackrel{ˉ}{Q}+\stackrel{ˉ}{Q}-{\stackrel{ˉ}{Q}}_{0}\right)\frac{{\stackrel{ˉ}{g}}_{n}^{\ast }-\stackrel{ˉ}{g}+\stackrel{ˉ}{g}-{\stackrel{ˉ}{g}}_{0}}{\stackrel{ˉ}{g}}\\ ={P}_{0}\left({\stackrel{ˉ}{Q}}_{n}^{\ast }-\stackrel{ˉ}{Q}\right)\frac{{\stackrel{ˉ}{g}}_{n}^{\ast }-\stackrel{ˉ}{g}}{\stackrel{ˉ}{g}}+{P}_{0}\left({\stackrel{ˉ}{Q}}_{n}^{\ast }-\stackrel{ˉ}{Q}\right)\frac{\stackrel{ˉ}{g}-{\stackrel{ˉ}{g}}_{0}}{\stackrel{ˉ}{g}}\\ +{P}_{0}\left(\stackrel{ˉ}{Q}-{\stackrel{ˉ}{Q}}_{0}\right)\frac{{\stackrel{ˉ}{g}}_{n}^{\ast }-\stackrel{ˉ}{g}}{\stackrel{ˉ}{g}}+{P}_{0}\left(\stackrel{ˉ}{Q}-{\stackrel{ˉ}{Q}}_{0}\right)\frac{\stackrel{ˉ}{g}-{\stackrel{ˉ}{g}}_{0}}{\stackrel{ˉ}{g}}.\end{array}$

By our assumptions, the first term ${R}_{2,n}={P}_{0}\left({\stackrel{ˉ}{Q}}_{n}^{\ast }-\stackrel{ˉ}{Q}\right)\frac{{\stackrel{ˉ}{g}}_{n}^{\ast }-\stackrel{ˉ}{g}}{\stackrel{ˉ}{g}}$ satisfies ${R}_{2,n}={o}_{P}\left(1/\sqrt{n}\right)$. In addition, the last term equals zero by assumption: $\stackrel{ˉ}{Q}={\stackrel{ˉ}{Q}}_{0}$ or $\stackrel{ˉ}{g}={\stackrel{ˉ}{g}}_{0}$.

So it suffices to analyze the second and third terms of this last expression. In order to represent the second and third terms we define $\begin{array}{c}{\mathrm{\Psi }}_{2,\stackrel{ˉ}{g},{\stackrel{ˉ}{g}}_{0}}\left({\stackrel{ˉ}{Q}}_{n}^{\ast }\right)={P}_{0}{\stackrel{ˉ}{Q}}_{n}^{\ast }\frac{\stackrel{ˉ}{g}-{\stackrel{ˉ}{g}}_{0}}{\stackrel{ˉ}{g}}\\ {\mathrm{\Psi }}_{1,\stackrel{ˉ}{g},\stackrel{ˉ}{Q},{\stackrel{ˉ}{Q}}_{0}}\left({\stackrel{ˉ}{g}}_{n}^{\ast }\right)={P}_{0}\frac{\stackrel{ˉ}{Q}-{\stackrel{ˉ}{Q}}_{0}}{\stackrel{ˉ}{g}}{\stackrel{ˉ}{g}}_{n}^{\ast }.\end{array}$

The sum of the second and third terms can now be represented as: $\begin{array}{c}I\left(\stackrel{ˉ}{Q}={\stackrel{ˉ}{Q}}_{0}\right)\left\{{\mathrm{\Psi }}_{2,\stackrel{ˉ}{g},{\stackrel{ˉ}{g}}_{0}}\left({\stackrel{ˉ}{Q}}_{n}^{\ast }\right)-{\mathrm{\Psi }}_{2,\stackrel{ˉ}{g},{\stackrel{ˉ}{g}}_{0}}\left(\stackrel{ˉ}{Q}\right)\right\}\\ +I\left(\stackrel{ˉ}{g}={\stackrel{ˉ}{g}}_{0}\right)\left\{{\mathrm{\Psi }}_{1,\stackrel{ˉ}{g},\stackrel{ˉ}{Q},{\stackrel{ˉ}{Q}}_{0}}\left({\stackrel{ˉ}{g}}_{n}^{\ast }\right)-{\mathrm{\Psi }}_{1,\stackrel{ˉ}{g},\stackrel{ˉ}{Q},{\stackrel{ˉ}{Q}}_{0}}\left(\stackrel{ˉ}{g}\right)\right\}.\end{array}$

For notational convenience, we will suppress the dependence of these mappings on the unknown quantities, and thus use ${\mathrm{\Psi }}_{1},{\mathrm{\Psi }}_{2}$.

Analysis of ${\mathrm{\Psi }}_{1}\left({\stackrel{ˉ}{g}}_{n}^{\ast }\right)$ if $\stackrel{ˉ}{g}={\stackrel{ˉ}{g}}_{0}$: Recalling the definition ${\stackrel{ˉ}{Q}}_{0,n}^{r}$, we have $\begin{array}{c}{\mathrm{\Psi }}_{1}\left({\stackrel{ˉ}{g}}_{n}^{\ast }\right)-{\mathrm{\Psi }}_{1}\left(\stackrel{ˉ}{g}\right)={P}_{0}\frac{\stackrel{ˉ}{Q}-{\stackrel{ˉ}{Q}}_{0}}{{\stackrel{ˉ}{g}}_{0}}\left({\stackrel{ˉ}{g}}_{n}^{\ast }-{\stackrel{ˉ}{g}}_{0}\right)\\ =-{P}_{0}\left(Y-\stackrel{ˉ}{Q}\right)\frac{A}{{\stackrel{ˉ}{g}}_{0}^{2}}\left({\stackrel{ˉ}{g}}_{n}^{\ast }-{\stackrel{ˉ}{g}}_{0}\right)\\ =-{P}_{0}\frac{{E}_{{P}_{0}}\left(Y-\stackrel{ˉ}{Q}|A=1,{\stackrel{ˉ}{g}}_{0},{\stackrel{ˉ}{g}}_{n}^{\ast }\right)}{{\stackrel{ˉ}{g}}_{0}}\left({\stackrel{ˉ}{g}}_{n}^{\ast }-{\stackrel{ˉ}{g}}_{0}\right)\\ =-{P}_{0}\frac{{\stackrel{ˉ}{Q}}_{0,n}^{r}}{{\stackrel{ˉ}{g}}_{0}}\left({\stackrel{ˉ}{g}}_{n}^{\ast }-{\stackrel{ˉ}{g}}_{0}\right)\\ =-{P}_{0}\frac{{\stackrel{ˉ}{Q}}_{0,n}^{r}-{\stackrel{ˉ}{Q}}_{0}^{r}}{{\stackrel{ˉ}{g}}_{0}}\left({\stackrel{ˉ}{g}}_{n}^{\ast }-{\stackrel{ˉ}{g}}_{0}\right)-{P}_{0}\frac{{\stackrel{ˉ}{Q}}_{0}^{r}}{{\stackrel{ˉ}{g}}_{0}}\left({\stackrel{ˉ}{g}}_{n}^{\ast }-{\stackrel{ˉ}{g}}_{0}\right).\end{array}$

By our assumptions, ${R}_{3,n}\equiv {P}_{0}\frac{{\stackrel{ˉ}{Q}}_{0,n}^{r}-{\stackrel{ˉ}{Q}}_{0}^{r}}{{\stackrel{ˉ}{g}}_{0}}\left({\stackrel{ˉ}{g}}_{n}^{\ast }-{\stackrel{ˉ}{g}}_{0}\right)={o}_{P}\left(1/\sqrt{n}\right),$

so that it remains to analyze $-{P}_{0}{H}_{0}^{r}\left({\stackrel{ˉ}{g}}_{n}^{\ast }-{\stackrel{ˉ}{g}}_{0}\right)$, where ${H}_{0}^{r}={\stackrel{ˉ}{Q}}_{0}^{r}/{\stackrel{ˉ}{g}}_{0}$. Let ${H}_{n}^{r\ast }={\stackrel{ˉ}{Q}}_{n}^{r\ast }/{\stackrel{ˉ}{g}}_{n}^{\ast }$, and recall that by construction ${P}_{n}{D}_{A}\left({\stackrel{ˉ}{Q}}_{n}^{r\ast },{\stackrel{ˉ}{g}}_{n}^{\ast }\right)={P}_{n}{H}_{n}^{r\ast }\left(A-{\stackrel{ˉ}{g}}_{n}^{\ast }\right)=0$. We then proceed as follows: $\begin{array}{rl}& {P}_{0}{H}_{0}^{r}\left({\stackrel{ˉ}{g}}_{n}^{\ast }-{\stackrel{ˉ}{g}}_{0}\right)\phantom{\rule{thinmathspace}{0ex}}\\ \phantom{\rule{thinmathspace}{0ex}}& ={P}_{0}{H}_{n}^{r\ast }\left({\stackrel{ˉ}{g}}_{n}^{\ast }-{\stackrel{ˉ}{g}}_{0}\right)+{P}_{0}\left({H}_{0}^{r}-{H}_{n}^{r\ast }\right)\left({\stackrel{ˉ}{g}}_{n}^{\ast }-{\stackrel{ˉ}{g}}_{0}\right)\\ & \equiv \phantom{\rule{thinmathspace}{0ex}}{P}_{0}{H}_{n}^{r\ast }\left({\stackrel{ˉ}{g}}_{n}^{\ast }-{\stackrel{ˉ}{g}}_{0}\right)+{R}_{4,n},\end{array}$

where, by our assumptions, $\begin{array}{c}{R}_{4,n}={P}_{0}\left({H}_{0}^{r}-{H}_{n}^{r\ast }\right)\left({\stackrel{ˉ}{g}}_{n}^{\ast }-{\stackrel{ˉ}{g}}_{0}\right)={o}_{P}\left(1/\sqrt{n}\right).\end{array}$

In addition, $\begin{array}{c}{P}_{0}{H}_{n}^{r\ast }\left({\stackrel{ˉ}{g}}_{n}^{\ast }-{\stackrel{ˉ}{g}}_{0}\right)=-{P}_{0}{H}_{n}^{r\ast }\left(A-{\stackrel{ˉ}{g}}_{n}^{\ast }\right)\\ =\left({P}_{n}-{P}_{0}\right){H}_{n}^{r\ast }\left(A-{\stackrel{ˉ}{g}}_{n}^{\ast }\right)\\ =\left({P}_{n}-{P}_{0}\right){H}_{0}^{r}\left(A-{\stackrel{ˉ}{g}}_{0}\right)+{R}_{5,n},\end{array}$

where ${R}_{5,n}={o}_{P}\left(1/\sqrt{n}\right)$ if ${P}_{0}{\left\{{D}_{A}\left({\stackrel{ˉ}{Q}}_{n}^{r\ast },{\stackrel{ˉ}{g}}_{n}^{\ast }\right)-{D}_{A}\left({\stackrel{ˉ}{Q}}_{0}^{r},\stackrel{ˉ}{g}\right)\right\}}^{2}={o}_{P}\left(1\right)$ and ${D}_{A}\left({\stackrel{ˉ}{Q}}_{n}^{r\ast },{\stackrel{ˉ}{g}}_{n}^{\ast }\right)$ falls in a Donsker class with probability tending to 1. This proves that, if $\stackrel{ˉ}{g}={\stackrel{ˉ}{g}}_{0}$, then ${\mathrm{\Psi }}_{1}\left({\stackrel{ˉ}{g}}_{n}^{\ast }\right)-{\mathrm{\Psi }}_{1}\left({\stackrel{ˉ}{g}}_{0}\right)=-\left({P}_{n}-{P}_{0}\right){D}_{A}\left({\stackrel{ˉ}{Q}}_{0}^{r},\stackrel{ˉ}{g}\right)+{o}_{P}\left(1/\sqrt{n}\right)$.
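In a TMLE implementation, the identity ${P}_{n}{H}_{n}^{r\ast }\left(A-{\stackrel{ˉ}{g}}_{n}^{\ast }\right)=0$ used above is arranged by fluctuating an initial propensity-score fit on the logit scale with the clever covariate and solving the corresponding score equation. The following is a minimal numerical sketch; the data-generating mechanism, initial fit, and clever covariate below are arbitrary stand-ins, not the paper's estimators.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000
W = rng.normal(size=n)
g0 = 1 / (1 + np.exp(-0.5 * W))        # true propensity score (stand-in)
A = rng.binomial(1, g0)

g_init = 1 / (1 + np.exp(-0.3 * W))    # hypothetical initial fit of g0
H = np.cos(W) / g_init                 # stand-in clever covariate Qbar^r / gbar

# Fluctuate on the logit scale: logit g_eps = logit g_init + eps * H,
# choosing eps to solve the score equation sum_i H_i (A_i - g_eps_i) = 0.
def g_eps(eps):
    return 1 / (1 + np.exp(-(np.log(g_init / (1 - g_init)) + eps * H)))

eps = 0.0
for _ in range(50):                    # Newton iterations on the score
    ge = g_eps(eps)
    score = np.mean(H * (A - ge))
    deriv = -np.mean(H**2 * ge * (1 - ge))
    eps -= score / deriv

g_star = g_eps(eps)
print(abs(np.mean(H * (A - g_star))))  # empirical score, ~ 0 after targeting
```

After the targeting step, the empirical score $P_n H (A - g^\ast)$ is solved to machine precision, which is the construction the proof relies on.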

Analysis of ${\mathrm{\Psi }}_{2}\left({\stackrel{ˉ}{Q}}_{n}^{\ast }\right)$ if $\stackrel{ˉ}{Q}={\stackrel{ˉ}{Q}}_{0}$: Recall the definitions of ${H}_{Y}\left({\stackrel{ˉ}{g}}^{r},\stackrel{ˉ}{g}\right)$, ${\stackrel{ˉ}{g}}_{0,n}^{r}={E}_{{P}_{0}}\left(A|\stackrel{ˉ}{g},{\stackrel{ˉ}{Q}}_{n}^{\ast },\stackrel{ˉ}{Q}\right)$, ${\stackrel{ˉ}{g}}_{n}^{r\ast }$ (an estimator of ${\stackrel{ˉ}{g}}_{0}^{r}={E}_{{P}_{0}}\left(A|\stackrel{ˉ}{g},\stackrel{ˉ}{Q}\right)$), and that, by construction, ${P}_{n}{H}_{Y}\left({\stackrel{ˉ}{g}}_{n}^{r\ast },{\stackrel{ˉ}{g}}_{n}^{\ast }\right)\left(Y-{\stackrel{ˉ}{Q}}_{n}^{\ast }\right)=0$. We have $\begin{array}{rl}{\mathrm{\Psi }}_{2}\left({\stackrel{ˉ}{Q}}_{n}^{\ast }\right)-{\mathrm{\Psi }}_{2}\left({\stackrel{ˉ}{Q}}_{0}\right)=\phantom{\rule{thinmathspace}{0ex}}& {P}_{0}\frac{\stackrel{ˉ}{g}-{\stackrel{ˉ}{g}}_{0}}{\stackrel{ˉ}{g}}\left({\stackrel{ˉ}{Q}}_{n}^{\ast }-{\stackrel{ˉ}{Q}}_{0}\right)\\ =\phantom{\rule{thinmathspace}{0ex}}& -{P}_{0}\frac{A-\stackrel{ˉ}{g}}{\stackrel{ˉ}{g}}\left({\stackrel{ˉ}{Q}}_{n}^{\ast }-{\stackrel{ˉ}{Q}}_{0}\right)\\ =\phantom{\rule{thinmathspace}{0ex}}& -{P}_{0}\frac{{\stackrel{ˉ}{g}}_{0,n}^{r}-\stackrel{ˉ}{g}}{\stackrel{ˉ}{g}}\left({\stackrel{ˉ}{Q}}_{n}^{\ast }-{\stackrel{ˉ}{Q}}_{0}\right)\\ =\phantom{\rule{thinmathspace}{0ex}}& {P}_{0}\frac{A}{{\stackrel{ˉ}{g}}_{0,n}^{r}}\frac{{\stackrel{ˉ}{g}}_{0,n}^{r}-\stackrel{ˉ}{g}}{\stackrel{ˉ}{g}}\left(Y-{\stackrel{ˉ}{Q}}_{n}^{\ast }\right)\\ =\phantom{\rule{thinmathspace}{0ex}}& {P}_{0}{H}_{Y}\left({\stackrel{ˉ}{g}}_{0,n}^{r},\stackrel{ˉ}{g}\right)\left(Y-{\stackrel{ˉ}{Q}}_{n}^{\ast }\right)\\ =\phantom{\rule{thinmathspace}{0ex}}& {P}_{0}{H}_{Y}\left({\stackrel{ˉ}{g}}_{n}^{r\ast },{\stackrel{ˉ}{g}}_{n}^{\ast }\right)\left(Y-{\stackrel{ˉ}{Q}}_{n}^{\ast }\right)\\ \phantom{\rule{thinmathspace}{0ex}}& +{P}_{0}\left\{{H}_{Y}\left({\stackrel{ˉ}{g}}_{0,n}^{r},\stackrel{ˉ}{g}\right)\left({\stackrel{ˉ}{Q}}_{0}-{\stackrel{ˉ}{Q}}_{n}^{\ast }\right)-{H}_{Y}\left({\stackrel{ˉ}{g}}_{n}^{r\ast },{\stackrel{ˉ}{g}}_{n}^{\ast }\right)\left({\stackrel{ˉ}{Q}}_{0}-{\stackrel{ˉ}{Q}}_{n}^{\ast }\right)\right\}.\end{array}$

Here we used that ${\stackrel{ˉ}{g}}_{0}$ is a conditional expectation of A, which allows us to first replace ${\stackrel{ˉ}{g}}_{0}$ by A and then retake the conditional expectation, now conditioning only on what is needed to fix all other terms within the expectation w.r.t. ${P}_{0}$. As a result of this trick, we were able to replace the hard-to-estimate ${\stackrel{ˉ}{g}}_{0}$, which conditions on all of W, by the easier ${\stackrel{ˉ}{g}}_{0,n}^{r}$. Similarly, we used this to replace ${\stackrel{ˉ}{Q}}_{0}$ by Y. The last term is a second-order term involving the products of differences $\left({\stackrel{ˉ}{Q}}_{n}^{\ast }-{\stackrel{ˉ}{Q}}_{0}\right)\left({\stackrel{ˉ}{g}}_{n}^{\ast }-\stackrel{ˉ}{g}\right)$ and $\left({\stackrel{ˉ}{Q}}_{n}^{\ast }-{\stackrel{ˉ}{Q}}_{0}\right)\left({\stackrel{ˉ}{g}}_{0,n}^{r}-{\stackrel{ˉ}{g}}_{n}^{r\ast }\right)$. By our assumptions, this last term is ${o}_{P}\left(1/\sqrt{n}\right)$. We now proceed as follows: $\begin{array}{c}-{P}_{0}{H}_{Y}\left({\stackrel{ˉ}{g}}_{n}^{r\ast },{\stackrel{ˉ}{g}}_{n}^{\ast }\right)\left(Y-{\stackrel{ˉ}{Q}}_{n}^{\ast }\right)=\left({P}_{n}-{P}_{0}\right){H}_{Y}\left({\stackrel{ˉ}{g}}_{n}^{r\ast },{\stackrel{ˉ}{g}}_{n}^{\ast }\right)\left(Y-{\stackrel{ˉ}{Q}}_{n}^{\ast }\right)\\ =\left({P}_{n}-{P}_{0}\right){H}_{Y}\left({\stackrel{ˉ}{g}}_{0}^{r},\stackrel{ˉ}{g}\right)\left(Y-\stackrel{ˉ}{Q}\right)+{o}_{P}\left(1/\sqrt{n}\right),\end{array}$

where we assumed that ${H}_{Y}\left({\stackrel{ˉ}{g}}_{n}^{r\ast },{\stackrel{ˉ}{g}}_{n}^{\ast }\right)\left(Y-{\stackrel{ˉ}{Q}}_{n}^{\ast }\right)$ falls in a Donsker class with probability tending to 1, and ${P}_{0}{\left\{{H}_{Y}\left({\stackrel{ˉ}{g}}_{n}^{r\ast },{\stackrel{ˉ}{g}}_{n}^{\ast }\right)\left(Y-{\stackrel{ˉ}{Q}}_{n}^{\ast }\right)-{H}_{Y}\left({\stackrel{ˉ}{g}}_{0}^{r},\stackrel{ˉ}{g}\right)\left(Y-\stackrel{ˉ}{Q}\right)\right\}}^{2}\to 0,$

in probability. This proves ${\mathrm{\Psi }}_{2}\left({\stackrel{ˉ}{Q}}_{n}^{\ast }\right)-{\mathrm{\Psi }}_{2}\left({\stackrel{ˉ}{Q}}_{0}\right)=-\left({P}_{n}-{P}_{0}\right){D}_{Y}\left(\stackrel{ˉ}{Q},{\stackrel{ˉ}{g}}_{0}^{r},\stackrel{ˉ}{g}\right)+{o}_{P}\left(1/\sqrt{n}\right)$. □
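The substitution trick used repeatedly above is the tower property of conditional expectation. Written out for the step that replaces A by ${\stackrel{ˉ}{g}}_{0,n}^{r}$ in the analysis of ${\mathrm{\Psi }}_{2}\left({\stackrel{ˉ}{Q}}_{n}^{\ast }\right)$, a sketch reads (recall ${\stackrel{ˉ}{g}}_{0,n}^{r}={E}_{{P}_{0}}\left(A|\stackrel{ˉ}{g},{\stackrel{ˉ}{Q}}_{n}^{\ast },\stackrel{ˉ}{Q}\right)$ and that $\stackrel{ˉ}{Q}={\stackrel{ˉ}{Q}}_{0}$ in this case):

```latex
-P_0\frac{A-\bar{g}}{\bar{g}}\big(\bar{Q}_n^{\ast}-\bar{Q}_0\big)
 = -E_{P_0}\!\left[E_{P_0}\!\left(\left.\frac{A-\bar{g}}{\bar{g}}
   \big(\bar{Q}_n^{\ast}-\bar{Q}_0\big)\,\right|\,
   \bar{g},\bar{Q}_n^{\ast},\bar{Q}\right)\right]
 = -P_0\frac{\bar{g}_{0,n}^{r}-\bar{g}}{\bar{g}}
   \big(\bar{Q}_n^{\ast}-\bar{Q}_0\big),
```

since every factor other than A is fixed by the conditioning, so only A is replaced by its conditional expectation.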

## Proof of Theorem 4

As in the proof of the previous theorem, we start with $\begin{array}{c}\mathrm{\Psi }\left({Q}_{n}^{\ast }\right)-\mathrm{\Psi }\left({Q}_{0}\right)=\left({P}_{n}-{P}_{0}\right){D}^{\ast }\left(Q,g\right)+{P}_{0}\left({\stackrel{ˉ}{Q}}_{n}^{\ast }-{\stackrel{ˉ}{Q}}_{0}\right)\frac{{\stackrel{ˉ}{g}}_{n}^{\ast }-{\stackrel{ˉ}{g}}_{0}}{{\stackrel{ˉ}{g}}_{n}^{\ast }}+{o}_{P}\left(1/\sqrt{n}\right),\end{array}$

where we use that ${D}^{\ast }\left({Q}_{n}^{\ast },{g}_{n}^{\ast }\right)$ falls in a Donsker class with probability tending to 1, and ${P}_{0}{\left\{{D}^{\ast }\left({Q}_{n}^{\ast },{g}_{n}^{\ast }\right)-{D}^{\ast }\left(Q,g\right)\right\}}^{2}\to 0$ in probability as $n\to \mathrm{\infty }$. The first term yields the first component ${D}^{\ast }\left(Q,g\right)$ of the influence curve of ${\mathrm{\psi }}_{n}^{\ast }$.

As in the proof of the previous theorem, we decompose this second term as follows: $\begin{array}{c}{P}_{0}\left({\stackrel{ˉ}{Q}}_{n}^{\ast }-{\stackrel{ˉ}{Q}}_{0}\right)\frac{{\stackrel{ˉ}{g}}_{n}^{\ast }-{\stackrel{ˉ}{g}}_{0}}{{\stackrel{ˉ}{g}}_{n}^{\ast }}={P}_{0}\left({\stackrel{ˉ}{Q}}_{n}^{\ast }-\stackrel{ˉ}{Q}+\stackrel{ˉ}{Q}-{\stackrel{ˉ}{Q}}_{0}\right)\frac{{\stackrel{ˉ}{g}}_{n}^{\ast }-\stackrel{ˉ}{g}+\stackrel{ˉ}{g}-{\stackrel{ˉ}{g}}_{0}}{{\stackrel{ˉ}{g}}_{n}^{\ast }}\\ ={P}_{0}\left({\stackrel{ˉ}{Q}}_{n}^{\ast }-\stackrel{ˉ}{Q}\right)\frac{{\stackrel{ˉ}{g}}_{n}^{\ast }-\stackrel{ˉ}{g}}{{\stackrel{ˉ}{g}}_{n}^{\ast }}+{P}_{0}\left({\stackrel{ˉ}{Q}}_{n}^{\ast }-\stackrel{ˉ}{Q}\right)\frac{\stackrel{ˉ}{g}-{\stackrel{ˉ}{g}}_{0}}{{\stackrel{ˉ}{g}}_{n}^{\ast }}\\ +{P}_{0}\left(\stackrel{ˉ}{Q}-{\stackrel{ˉ}{Q}}_{0}\right)\frac{{\stackrel{ˉ}{g}}_{n}^{\ast }-\stackrel{ˉ}{g}}{{\stackrel{ˉ}{g}}_{n}^{\ast }}+{P}_{0}\left(\stackrel{ˉ}{Q}-{\stackrel{ˉ}{Q}}_{0}\right)\frac{\stackrel{ˉ}{g}-{\stackrel{ˉ}{g}}_{0}}{{\stackrel{ˉ}{g}}_{n}^{\ast }},\end{array}$

resulting in four terms, which we denote by Terms 1–4 and now analyze in turn.
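The exactness of this four-term decomposition is pure algebra and can be verified symbolically. A small check, where each symbol stands for a pointwise function value:

```python
import sympy as sp

# Symbols for the pointwise values of Qbar_n*, Qbar, Qbar_0, gbar_n*, gbar, gbar_0
Qn, Q, Q0, gn, g, g0 = sp.symbols('Qn Q Q0 gn g g0')

lhs = (Qn - Q0) * (gn - g0) / gn
terms = [
    (Qn - Q) * (gn - g) / gn,   # Term 1
    (Qn - Q) * (g - g0) / gn,   # Term 2
    (Q - Q0) * (gn - g) / gn,   # Term 3
    (Q - Q0) * (g - g0) / gn,   # Term 4
]
print(sp.simplify(lhs - sum(terms)))  # 0: the decomposition is exact pointwise
```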

Term 1: The first term ${P}_{0}\left({\stackrel{ˉ}{Q}}_{n}^{\ast }-\stackrel{ˉ}{Q}\right)\frac{{\stackrel{ˉ}{g}}_{n}^{\ast }-\stackrel{ˉ}{g}}{{\stackrel{ˉ}{g}}_{n}^{\ast }}={o}_{P}\left(1/\sqrt{n}\right)$, by assumption.

Term 4: Due to our assumption that ${P}_{0}\left(\stackrel{ˉ}{Q}-{\stackrel{ˉ}{Q}}_{0}\right)\left(\stackrel{ˉ}{g}-{\stackrel{ˉ}{g}}_{0}\right)/\stackrel{ˉ}{g}=0$, this last term equals: $\begin{array}{c}-{P}_{0}\left(\stackrel{ˉ}{Q}-{\stackrel{ˉ}{Q}}_{0}\right)\left(\stackrel{ˉ}{g}-{\stackrel{ˉ}{g}}_{0}\right)\frac{\left({\stackrel{ˉ}{g}}_{n}^{\ast }-\stackrel{ˉ}{g}\right)}{{\stackrel{ˉ}{g}}_{n}^{\ast }\stackrel{ˉ}{g}}\\ =-{P}_{0}\left(\stackrel{ˉ}{Q}-{\stackrel{ˉ}{Q}}_{0}\right)\left(\stackrel{ˉ}{g}-{\stackrel{ˉ}{g}}_{0}\right)\frac{\left({\stackrel{ˉ}{g}}_{n}^{\ast }-\stackrel{ˉ}{g}\right)}{{\stackrel{ˉ}{g}}^{2}}+{R}_{1,n},\end{array}$

where, by assumption, ${R}_{1,n}={P}_{0}\left(\stackrel{ˉ}{Q}-{\stackrel{ˉ}{Q}}_{0}\right)\left(\stackrel{ˉ}{g}-{\stackrel{ˉ}{g}}_{0}\right)\frac{{\left({\stackrel{ˉ}{g}}_{n}^{\ast }-\stackrel{ˉ}{g}\right)}^{2}}{{\stackrel{ˉ}{g}}^{2}{\stackrel{ˉ}{g}}_{n}^{\ast }}={o}_{P}\left(1/\sqrt{n}\right).$
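The rearrangement producing ${R}_{1,n}$ rests on the pointwise identity $1/\left({\stackrel{ˉ}{g}}_{n}^{\ast }\stackrel{ˉ}{g}\right)=1/{\stackrel{ˉ}{g}}^{2}-\left({\stackrel{ˉ}{g}}_{n}^{\ast }-\stackrel{ˉ}{g}\right)/\left({\stackrel{ˉ}{g}}^{2}{\stackrel{ˉ}{g}}_{n}^{\ast }\right)$, so the remainder is second order in ${\stackrel{ˉ}{g}}_{n}^{\ast }-\stackrel{ˉ}{g}$. A symbolic check of this split (a sketch; symbols denote pointwise values):

```python
import sympy as sp

Q, Q0, g, g0, gn = sp.symbols('Q Q0 g g0 gn')
X = (Q - Q0) * (g - g0)                 # the fixed product of differences

lhs = -X * (gn - g) / (gn * g)          # Term 4 after using P0 X / g = 0
main = -X * (gn - g) / g**2             # leading piece with denominator g^2
R1 = X * (gn - g)**2 / (g**2 * gn)      # second-order remainder R_{1,n}
print(sp.simplify(lhs - (main + R1)))   # 0: the split is exact
```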

We proceed as follows: $\begin{array}{rl}\phantom{\rule{thinmathspace}{0ex}}& -{P}_{0}\left(\stackrel{ˉ}{Q}-{\stackrel{ˉ}{Q}}_{0}\right)\left(\stackrel{ˉ}{g}-{\stackrel{ˉ}{g}}_{0}\right)\frac{\left({\stackrel{ˉ}{g}}_{n}^{\ast }-\stackrel{ˉ}{g}\right)}{{\stackrel{ˉ}{g}}^{2}}\\ \phantom{\rule{thinmathspace}{0ex}}& =-{P}_{0}\left(\stackrel{ˉ}{Q}-{\stackrel{ˉ}{Q}}_{0}\right)\left(\stackrel{ˉ}{g}-A\right)\frac{\left({\stackrel{ˉ}{g}}_{n}^{\ast }-\stackrel{ˉ}{g}\right)}{{\stackrel{ˉ}{g}}^{2}}\\ \phantom{\rule{thinmathspace}{0ex}}& =-{P}_{0}\frac{\left(\stackrel{ˉ}{Q}-{\stackrel{ˉ}{Q}}_{0}\right)}{\stackrel{ˉ}{g}}\left({\stackrel{ˉ}{g}}_{n}^{\ast }-\stackrel{ˉ}{g}\right)+{P}_{0}\left(\stackrel{ˉ}{Q}-{\stackrel{ˉ}{Q}}_{0}\right)\frac{A}{{\stackrel{ˉ}{g}}^{2}}\left({\stackrel{ˉ}{g}}_{n}^{\ast }-\stackrel{ˉ}{g}\right).\end{array}$

The first term is asymptotically equivalent to minus Term 3, which shows that Term 3 is canceled by a component of Term 4, up to a second-order term that is ${o}_{P}\left(1/\sqrt{n}\right)$, by assumption. The second term equals $\begin{array}{c}{P}_{0}\left(\stackrel{ˉ}{Q}-{\stackrel{ˉ}{Q}}_{0}\right)\frac{A}{{\stackrel{ˉ}{g}}^{2}}\left({\stackrel{ˉ}{g}}_{n}^{\ast }-\stackrel{ˉ}{g}\right)=-{P}_{0}\left(Y-\stackrel{ˉ}{Q}\right)\frac{A}{{\stackrel{ˉ}{g}}^{2}}\left({\stackrel{ˉ}{g}}_{n}^{\ast }-\stackrel{ˉ}{g}\right)\\ =-{P}_{0}{E}_{{P}_{0}}\left(Y-\stackrel{ˉ}{Q}|A=1,\stackrel{ˉ}{g},{\stackrel{ˉ}{g}}_{n}^{\ast }\right)\frac{A}{{\stackrel{ˉ}{g}}^{2}}\left({\stackrel{ˉ}{g}}_{n}^{\ast }-\stackrel{ˉ}{g}\right)\\ =-{P}_{0}{E}_{{P}_{0}}\left(Y-\stackrel{ˉ}{Q}|A=1,\stackrel{ˉ}{g},{\stackrel{ˉ}{g}}_{n}^{\ast }\right)\frac{{\stackrel{ˉ}{g}}_{0,n}^{r}}{{\stackrel{ˉ}{g}}^{2}}\left({\stackrel{ˉ}{g}}_{n}^{\ast }-\stackrel{ˉ}{g}\right)\\ =-{P}_{0}{E}_{{P}_{0}}\left(Y-\stackrel{ˉ}{Q}|A=1,\stackrel{ˉ}{g}\right)\frac{{\stackrel{ˉ}{g}}_{0}^{r}}{{\stackrel{ˉ}{g}}^{2}}\left({\stackrel{ˉ}{g}}_{n}^{\ast }-\stackrel{ˉ}{g}\right)-{P}_{0}\left({H}_{1}\left({\stackrel{ˉ}{g}}_{n}^{\ast }\right)-{H}_{1}\left(\stackrel{ˉ}{g}\right)\right)\left({\stackrel{ˉ}{g}}_{n}^{\ast }-\stackrel{ˉ}{g}\right),\end{array}$

where ${H}_{1}\left({\stackrel{ˉ}{g}}_{n}^{\ast }\right)\equiv {E}_{{P}_{0}}\left(Y-\stackrel{ˉ}{Q}|A=1,\stackrel{ˉ}{g},{\stackrel{ˉ}{g}}_{n}^{\ast }\right)\frac{{\stackrel{ˉ}{g}}_{0,n}^{r}}{{\stackrel{ˉ}{g}}^{2}}$ approximates ${H}_{1}\left(\stackrel{ˉ}{g}\right)={E}_{{P}_{0}}\left(Y-\stackrel{ˉ}{Q}|A=1,\stackrel{ˉ}{g}\right)\frac{{\stackrel{ˉ}{g}}_{0}^{r}}{{\stackrel{ˉ}{g}}^{2}}$, ${\stackrel{ˉ}{g}}_{0,n}^{r}={E}_{{P}_{0}}\left(A|\stackrel{ˉ}{g},{\stackrel{ˉ}{g}}_{n}^{\ast }\right)$, and ${\stackrel{ˉ}{g}}_{0}^{r}={E}_{{P}_{0}}\left(A|\stackrel{ˉ}{g}\right)$. Let ${\stackrel{ˉ}{Q}}_{0,n}^{r}={E}_{{P}_{0}}\left(Y-\stackrel{ˉ}{Q}|A=1,\stackrel{ˉ}{g},{\stackrel{ˉ}{g}}_{n}^{\ast }\right)$ and ${\stackrel{ˉ}{Q}}_{0}^{r}={E}_{{P}_{0}}\left(Y-\stackrel{ˉ}{Q}|A=1,\stackrel{ˉ}{g}\right)$. We assumed $\parallel {\stackrel{ˉ}{g}}_{0,n}^{r}-{\stackrel{ˉ}{g}}_{0}^{r}{\parallel }_{0}\parallel {\stackrel{ˉ}{g}}_{n}^{\ast }-\stackrel{ˉ}{g}{\parallel }_{0}={o}_{P}\left(1/\sqrt{n}\right)$, and $\parallel {\stackrel{ˉ}{Q}}_{0,n}^{r}-{\stackrel{ˉ}{Q}}_{0}^{r}{\parallel }_{0}\parallel {\stackrel{ˉ}{g}}_{n}^{\ast }-\stackrel{ˉ}{g}{\parallel }_{0}={o}_{P}\left(1/\sqrt{n}\right)$, which implies that ${R}_{2,n}={P}_{0}\left({H}_{1}\left({\stackrel{ˉ}{g}}_{n}^{\ast }\right)-{H}_{1}\left(\stackrel{ˉ}{g}\right)\right)\left({\stackrel{ˉ}{g}}_{n}^{\ast }-\stackrel{ˉ}{g}\right)={o}_{P}\left(1/\sqrt{n}\right).$
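The passage from the assumed norm conditions to ${R}_{2,n}={o}_{P}\left(1/\sqrt{n}\right)$ is an application of the Cauchy–Schwarz inequality. For instance, for the piece of ${H}_{1}\left({\stackrel{ˉ}{g}}_{n}^{\ast }\right)-{H}_{1}\left(\stackrel{ˉ}{g}\right)$ driven by ${\stackrel{ˉ}{Q}}_{0,n}^{r}-{\stackrel{ˉ}{Q}}_{0}^{r}$, a sketch (assuming $\stackrel{ˉ}{g}$ is bounded away from zero, so that the factor ${\stackrel{ˉ}{g}}_{0,n}^{r}/{\stackrel{ˉ}{g}}^{2}$ is bounded by a constant C):

```latex
\Big|P_0\big(\bar{Q}_{0,n}^{r}-\bar{Q}_0^{r}\big)
  \frac{\bar{g}_{0,n}^{r}}{\bar{g}^2}\big(\bar{g}_n^{\ast}-\bar{g}\big)\Big|
\le C\,\|\bar{Q}_{0,n}^{r}-\bar{Q}_0^{r}\|_0\,
  \|\bar{g}_n^{\ast}-\bar{g}\|_0
= o_P(1/\sqrt{n}).
```

The piece driven by ${\stackrel{ˉ}{g}}_{0,n}^{r}-{\stackrel{ˉ}{g}}_{0}^{r}$ is bounded in the same way.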

By assumption, $\stackrel{ˉ}{g}={E}_{0}\left(A|{W}_{1}\right)$ for some ${W}_{1}$ that is a function of W. Therefore, ${\stackrel{ˉ}{g}}_{0}^{r}\left(W\right)={E}_{{P}_{0}}\left(A|\stackrel{ˉ}{g}\left(W\right)\right)={E}_{{P}_{0}}\left({E}_{{P}_{0}}\left(A|{W}_{1}\right)|\stackrel{ˉ}{g}\left(W\right)\right)=\stackrel{ˉ}{g}\left(W\right)$. Thus, it remains to analyze $-{P}_{0}\frac{{E}_{{P}_{0}}\left(Y-\stackrel{ˉ}{Q}|A=1,\stackrel{ˉ}{g}\right)}{\stackrel{ˉ}{g}}\left({\stackrel{ˉ}{g}}_{n}^{\ast }-\stackrel{ˉ}{g}\right).$(4)

This term is analyzed below and it is shown that this term equals $-\left({P}_{n}-{P}_{0}\right){D}_{A}\left({\stackrel{ˉ}{Q}}_{0}^{r},\stackrel{ˉ}{g}\right)+{o}_{P}\left(1/\sqrt{n}\right).$

To conclude, we have shown that the fourth term equals the latter expression minus the third term.

We now analyze (4) which can be represented as $-{P}_{0}\frac{{\stackrel{ˉ}{Q}}_{0}^{r}}{\stackrel{ˉ}{g}}\left({\stackrel{ˉ}{g}}_{n}^{\ast }-\stackrel{ˉ}{g}\right)$, where ${\stackrel{ˉ}{Q}}_{0}^{r}={E}_{{P}_{0}}\left(Y-\stackrel{ˉ}{Q}|A=1,\stackrel{ˉ}{g}\right)$. In this proof, we will use the notation ${H}_{0}^{r}={\stackrel{ˉ}{Q}}_{0}^{r}/\stackrel{ˉ}{g}$, ${H}_{n}^{r\ast }={\stackrel{ˉ}{Q}}_{n}^{r\ast }/{\stackrel{ˉ}{g}}_{n}^{\ast }$. Since $\stackrel{ˉ}{g}\left(W\right)={E}_{0}\left(A|{W}_{1}\right)$ for some ${W}_{1}$, and ${\stackrel{ˉ}{Q}}_{0}^{r}$ is thus also a function of ${W}_{1}$, we have ${P}_{0}\frac{{\stackrel{ˉ}{Q}}_{0}^{r}}{\stackrel{ˉ}{g}}\stackrel{ˉ}{g}={P}_{0}\frac{{\stackrel{ˉ}{Q}}_{0}^{r}}{\stackrel{ˉ}{g}}A.$

We now proceed as follows: $\begin{array}{c}-{P}_{0}{H}_{0}^{r}\left({\stackrel{ˉ}{g}}_{n}^{\ast }-\stackrel{ˉ}{g}\right)=-{P}_{0}{H}_{0}^{r}\left({\stackrel{ˉ}{g}}_{n}^{\ast }-A\right)\\ =-{P}_{0}{H}_{n}^{r\ast }\left({\stackrel{ˉ}{g}}_{n}^{\ast }-A\right)-{P}_{0}\left({H}_{0}^{r}-{H}_{n}^{r\ast }\right)\left({\stackrel{ˉ}{g}}_{n}^{\ast }-A\right)\\ ={P}_{0}{H}_{n}^{r\ast }\left(A-{\stackrel{ˉ}{g}}_{n}^{\ast }\right)-{P}_{0}\left({H}_{0}^{r}-{H}_{n}^{r\ast }\right)\left({\stackrel{ˉ}{g}}_{n}^{\ast }-{E}_{0}\left(A|\stackrel{ˉ}{g},{\stackrel{ˉ}{Q}}_{n}^{r\ast },{\stackrel{ˉ}{g}}_{n}^{\ast }\right)\right).\end{array}$

For the second term ${R}_{4,n}$, we can substitute ${\stackrel{ˉ}{g}}_{n}^{\ast }-{E}_{0}\left(A|\stackrel{ˉ}{g},{\stackrel{ˉ}{Q}}_{n}^{r\ast },{\stackrel{ˉ}{g}}_{n}^{\ast }\right)=\left({\stackrel{ˉ}{g}}_{n}^{\ast }-\stackrel{ˉ}{g}\right)+{E}_{0}\left(A|\stackrel{ˉ}{g},{\stackrel{ˉ}{Q}}_{0}^{r}\right)-{E}_{0}\left(A|\stackrel{ˉ}{g},{\stackrel{ˉ}{Q}}_{n}^{r\ast },{\stackrel{ˉ}{g}}_{n}^{\ast }\right),$

by noting that $\stackrel{ˉ}{g}={E}_{0}\left(A|\stackrel{ˉ}{g},{\stackrel{ˉ}{Q}}_{0}^{r}\right)$. Thus, this second term results in two terms, one bounded by $\parallel {H}_{n}^{r\ast }-{H}_{0}^{r}{\parallel }_{0}\parallel {\stackrel{ˉ}{g}}_{n}^{\ast }-\stackrel{ˉ}{g}{\parallel }_{0}$ and the other by $\parallel {H}_{n}^{r\ast }-{H}_{0}^{r}{\parallel }_{0}\parallel {E}_{0}\left(A|\stackrel{ˉ}{g},{\stackrel{ˉ}{Q}}_{0}^{r}\right)-{E}_{0}\left(A|\stackrel{ˉ}{g},{\stackrel{ˉ}{Q}}_{n}^{r\ast },{\stackrel{ˉ}{g}}_{n}^{\ast }\right){\parallel }_{0}.$

By assumption, both terms are ${o}_{P}\left(1/\sqrt{n}\right)$ and thus ${R}_{4,n}={o}_{P}\left(1/\sqrt{n}\right)$.

Since, by construction of ${g}_{n}^{\ast }$, ${P}_{n}{H}_{n}^{r\ast }\left(A-{\stackrel{ˉ}{g}}_{n}^{\ast }\right)=0$, the first term can be written as follows: $\begin{array}{c}{P}_{0}{H}_{n}^{r\ast }\left(A-{\stackrel{ˉ}{g}}_{n}^{\ast }\right)\\ =-\left({P}_{n}-{P}_{0}\right){H}_{n}^{r\ast }\left(A-{\stackrel{ˉ}{g}}_{n}^{\ast }\right)\\ =-\left({P}_{n}-{P}_{0}\right){H}_{0}^{r}\left(A-\stackrel{ˉ}{g}\right)+{R}_{5,n},\end{array}$

where ${R}_{5,n}={o}_{P}\left(1/\sqrt{n}\right)$ if ${P}_{0}{\left\{{D}_{A}\left({\stackrel{ˉ}{Q}}_{n}^{r\ast },{\stackrel{ˉ}{g}}_{n}^{\ast }\right)-{D}_{A}\left({\stackrel{ˉ}{Q}}_{0}^{r},\stackrel{ˉ}{g}\right)\right\}}^{2}={o}_{P}\left(1\right)$ and ${D}_{A}\left({\stackrel{ˉ}{Q}}_{n}^{r\ast },{\stackrel{ˉ}{g}}_{n}^{\ast }\right)$ falls in a Donsker class with probability tending to 1; recall that ${D}_{A}\left({\stackrel{ˉ}{Q}}_{0}^{r},\stackrel{ˉ}{g}\right)={H}_{0}^{r}\left(A-\stackrel{ˉ}{g}\right)$. This completes the proof for the fourth term.

Term 3: Our analysis of Term 4 showed that Term 3 cancels out and thus that the sum of the third and fourth terms equals $-\left({P}_{n}-{P}_{0}\right){D}_{A}\left({\stackrel{ˉ}{Q}}_{0}^{r},\stackrel{ˉ}{g}\right)+{o}_{P}\left(1/\sqrt{n}\right)$, which yields the second component $-{D}_{A}\left({\stackrel{ˉ}{Q}}_{0}^{r},\stackrel{ˉ}{g}\right)$ of the influence curve of ${\mathrm{\psi }}_{n}^{\ast }$.

Analysis of Term 2: Up to a second-order term that can be bounded by $\parallel {\stackrel{ˉ}{g}}_{n}^{\ast }-\stackrel{ˉ}{g}{\parallel }_{0}\parallel {\stackrel{ˉ}{Q}}_{n}^{\ast }-\stackrel{ˉ}{Q}{\parallel }_{0}={o}_{P}\left(1/\sqrt{n}\right)$, we can represent Term 2 as $\begin{array}{c}\left\{{\mathrm{\Psi }}_{2,\stackrel{ˉ}{g},{\stackrel{ˉ}{g}}_{0}}\left({\stackrel{ˉ}{Q}}_{n}^{\ast }\right)-{\mathrm{\Psi }}_{2,\stackrel{ˉ}{g},{\stackrel{ˉ}{g}}_{0}}\left(\stackrel{ˉ}{Q}\right)\right\},\end{array}$

where $\begin{array}{c}{\mathrm{\Psi }}_{2,\stackrel{ˉ}{g},{\stackrel{ˉ}{g}}_{0}}\left({\stackrel{ˉ}{Q}}_{n}^{\ast }\right)={P}_{0}{\stackrel{ˉ}{Q}}_{n}^{\ast }\frac{\stackrel{ˉ}{g}-{\stackrel{ˉ}{g}}_{0}}{\stackrel{ˉ}{g}}.\end{array}$

We have $\begin{array}{c}{\mathrm{\Psi }}_{2}\left({\stackrel{ˉ}{Q}}_{n}^{\ast }\right)-{\mathrm{\Psi }}_{2}\left(\stackrel{ˉ}{Q}\right)={P}_{0}\frac{\stackrel{ˉ}{g}-{\stackrel{ˉ}{g}}_{0}}{\stackrel{ˉ}{g}}\left({\stackrel{ˉ}{Q}}_{n}^{\ast }-\stackrel{ˉ}{Q}\right)\\ =-{P}_{0}\frac{A-\stackrel{ˉ}{g}}{\stackrel{ˉ}{g}}\left({\stackrel{ˉ}{Q}}_{n}^{\ast }-\stackrel{ˉ}{Q}\right)\\ =-{P}_{0}\frac{{E}_{{P}_{0}}\left(A|\stackrel{ˉ}{g},{\stackrel{ˉ}{Q}}_{n}^{\ast },\stackrel{ˉ}{Q}\right)-\stackrel{ˉ}{g}}{\stackrel{ˉ}{g}}\left({\stackrel{ˉ}{Q}}_{n}^{\ast }-\stackrel{ˉ}{Q}\right).\end{array}$

Recall that, by our assumption, $\stackrel{ˉ}{g}={E}_{{P}_{0}}\left(A|\stackrel{ˉ}{g},\stackrel{ˉ}{Q}\right)$. Let ${\stackrel{ˉ}{g}}_{0,n}^{r}={E}_{{P}_{0}}\left(A|\stackrel{ˉ}{g},{\stackrel{ˉ}{Q}}_{n}^{\ast },\stackrel{ˉ}{Q}\right)$. By our assumptions, ${P}_{0}\frac{{\stackrel{ˉ}{g}}_{0,n}^{r}-\stackrel{ˉ}{g}}{\stackrel{ˉ}{g}}\left({\stackrel{ˉ}{Q}}_{n}^{\ast }-\stackrel{ˉ}{Q}\right)={o}_{P}\left(1/\sqrt{n}\right).$(5)

This proves that ${\mathrm{\Psi }}_{2}\left({\stackrel{ˉ}{Q}}_{n}^{\ast }\right)-{\mathrm{\Psi }}_{2}\left(\stackrel{ˉ}{Q}\right)={o}_{P}\left(1/\sqrt{n}\right)$. □

Remark (proof of an additional result): In this analysis of Term 2, we assumed $\stackrel{ˉ}{g}={E}_{{P}_{0}}\left(A|\stackrel{ˉ}{g},\stackrel{ˉ}{Q}\right)$ and condition (5). Let us now provide a different type of analysis of Term 2, relying on different conditions. We have $\begin{array}{rl}{\mathrm{\Psi }}_{2}\left({\stackrel{ˉ}{Q}}_{n}^{\ast }\right)-{\mathrm{\Psi }}_{2}\left(\stackrel{ˉ}{Q}\right)=\phantom{\rule{thinmathspace}{0ex}}& {P}_{0}\frac{\stackrel{ˉ}{g}-{\stackrel{ˉ}{g}}_{0}}{\stackrel{ˉ}{g}}\left({\stackrel{ˉ}{Q}}_{n}^{\ast }-\stackrel{ˉ}{Q}\right)\\ =\phantom{\rule{thinmathspace}{0ex}}& {P}_{0}\frac{A-\stackrel{ˉ}{g}}{\stackrel{ˉ}{g}}\left(\stackrel{ˉ}{Q}-{\stackrel{ˉ}{Q}}_{n}^{\ast }\right)\\ =\phantom{\rule{thinmathspace}{0ex}}& {P}_{0}\frac{{\stackrel{ˉ}{g}}_{0,n}^{r}-\stackrel{ˉ}{g}}{{\stackrel{ˉ}{g}}_{0,n}^{r}\stackrel{ˉ}{g}}A\left(Y-{\stackrel{ˉ}{Q}}_{n}^{\ast }\right),\end{array}$

where ${\stackrel{ˉ}{g}}_{0,n}^{r}={E}_{0}\left(A|\stackrel{ˉ}{g},\stackrel{ˉ}{Q},{\stackrel{ˉ}{Q}}_{n}^{\ast }\right)$, provided we assume that ${P}_{0}\frac{{\stackrel{ˉ}{g}}_{0,n}^{r}-\stackrel{ˉ}{g}}{{\stackrel{ˉ}{g}}_{0,n}^{r}\stackrel{ˉ}{g}}A\left(Y-\stackrel{ˉ}{Q}\right)=0$. The latter equality holds if, in the TMLE algorithm, we target ${\stackrel{ˉ}{Q}}_{n}^{\ast }$ with the clever covariate ${H}_{Y}\left({\stackrel{ˉ}{g}}_{n}^{r\ast },{\stackrel{ˉ}{g}}_{n}^{\ast }\right)=\left({\stackrel{ˉ}{g}}_{n}^{r\ast }-{\stackrel{ˉ}{g}}_{n}^{\ast }\right)/\left({\stackrel{ˉ}{g}}_{n}^{r\ast }{\stackrel{ˉ}{g}}_{n}^{\ast }\right)A$, where ${\stackrel{ˉ}{g}}_{n}^{r\ast }$ estimates a non-parametric regression of A on ${\stackrel{ˉ}{Q}}_{n}^{\ast },{\stackrel{ˉ}{g}}_{n}^{\ast }$, exactly as in Theorem 3. Under that assumption, one can show that we obtain another influence curve component ${D}_{Y}$, defined by ${D}_{Y}\left(\stackrel{ˉ}{Q},{\stackrel{ˉ}{g}}_{0}^{r},\stackrel{ˉ}{g}\right)=\frac{{\stackrel{ˉ}{g}}_{0}^{r}-\stackrel{ˉ}{g}}{{\stackrel{ˉ}{g}}_{0}^{r}\stackrel{ˉ}{g}}A\left(Y-\stackrel{ˉ}{Q}\right),$

where ${\stackrel{ˉ}{g}}_{0}^{r}={E}_{0}\left(A|\stackrel{ˉ}{g},\stackrel{ˉ}{Q}\right)$. Thus, $\mathrm{\Psi }\left({Q}_{n}^{\ast }\right)$ is now asymptotically linear with influence curve ${D}^{\ast }\left(Q,g\right)-{D}_{A}\left({\stackrel{ˉ}{Q}}_{0}^{r},\stackrel{ˉ}{g}\right)-{D}_{Y}\left(\stackrel{ˉ}{Q},{\stackrel{ˉ}{g}}_{0}^{r},\stackrel{ˉ}{g}\right)$. Note, however, that if ${\stackrel{ˉ}{g}}_{0}^{r}=\stackrel{ˉ}{g}$, i.e. if $\stackrel{ˉ}{g}={E}_{0}\left(A|\stackrel{ˉ}{g},\stackrel{ˉ}{Q}\right)$, then ${D}_{Y}=0$. To conclude, by adding the additional clever covariate ${H}_{Y}\left({\stackrel{ˉ}{g}}_{n}^{r\ast },{\stackrel{ˉ}{g}}_{n}^{\ast }\right)$ to the submodel for ${\stackrel{ˉ}{Q}}_{n}^{\ast }$ in the TMLE algorithm, one can remove the condition that ${\stackrel{ˉ}{g}}_{n}^{\ast }$ non-parametrically adjusts for ${\stackrel{ˉ}{Q}}_{n}^{\ast }$, as arranged by the TMLE algorithm in Theorem 4; the influence curve then has the additional component ${D}_{Y}\left({P}_{0}\right)$, as in Theorem 3. This yields a generalization of Theorem 3 that requires neither ${Q}_{n}^{\ast }$ nor ${g}_{n}^{\ast }$ to be consistent, but only that their limits $\stackrel{ˉ}{Q},\stackrel{ˉ}{g}$ satisfy ${P}_{0}\left(\stackrel{ˉ}{g}-{\stackrel{ˉ}{g}}_{0}\right)/\stackrel{ˉ}{g}\left(\stackrel{ˉ}{Q}-{\stackrel{ˉ}{Q}}_{0}\right)=0$. This latter generalization of Theorem 3 would thus provide an appropriate theorem for a C-TMLE that does not enforce the non-parametric adjustment for ${\stackrel{ˉ}{Q}}_{n}^{\ast }$, but still needs to adjust for ${\stackrel{ˉ}{Q}}_{n}^{\ast }-{\stackrel{ˉ}{Q}}_{0}$.
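The additional targeting step proposed in this remark solves ${P}_{n}{H}_{Y}\left({\stackrel{ˉ}{g}}_{n}^{r\ast },{\stackrel{ˉ}{g}}_{n}^{\ast }\right)\left(Y-{\stackrel{ˉ}{Q}}_{n}^{\ast }\right)=0$ by fluctuating the outcome regression on the logit scale with the extra clever covariate. A minimal numerical sketch with simulated data; the fits for $\stackrel{ˉ}{Q}$, $\stackrel{ˉ}{g}$, and ${\stackrel{ˉ}{g}}^{r}$ below are arbitrary stand-ins, not the paper's estimators.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2000
W = rng.normal(size=n)
g = 1 / (1 + np.exp(-0.5 * W))          # stand-in limit of the propensity fit
A = rng.binomial(1, g)
Y = rng.binomial(1, 1 / (1 + np.exp(-(W + A))))

Q = 1 / (1 + np.exp(-(0.8 * W + A)))    # stand-in (possibly misspecified) Qbar
g_r = 1 / (1 + np.exp(-0.4 * W))        # stand-in fit of E(A | gbar, Qbar)
H_Y = A * (g_r - g) / (g_r * g)         # additional clever covariate

# Logistic fluctuation: logit Q_eps = logit Q + eps * H_Y, with eps chosen
# to solve the score equation sum_i H_Y_i (Y_i - Q_eps_i) = 0.
def Q_eps(eps):
    return 1 / (1 + np.exp(-(np.log(Q / (1 - Q)) + eps * H_Y)))

eps = 0.0
for _ in range(50):                     # Newton iterations on the score
    Qe = Q_eps(eps)
    score = np.mean(H_Y * (Y - Qe))
    deriv = -np.mean(H_Y**2 * Qe * (1 - Qe))
    eps -= score / deriv

Q_star = Q_eps(eps)
print(abs(np.mean(H_Y * (Y - Q_star))))  # ~ 0 after the extra targeting step
```

Solving this empirical score equation is what makes the corresponding ${D}_{Y}$ term drop out of the bias expansion, mirroring the role of the propensity-score targeting step.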


Published Online: 2014-02-11

Published in Print: 2014-05-01

Citation Information: The International Journal of Biostatistics, Volume 10, Issue 1, Pages 29–57, ISSN (Online) 1557-4679, ISSN (Print) 2194-573X,


© 2014 by Walter de Gruyter Berlin / Boston.
