We now describe the TMLE algorithm based on the above choices of (1) the representation of $\Psi(P)$ as $\Psi(\bar{Q}, Q_{L(0)})$, (2) the loss function for $(\bar{Q}, Q_{L(0)})$, and (3) the least favorable submodels $((\bar{Q}(\epsilon, g) : \epsilon), (Q_{L(0)}(\epsilon_0) : \epsilon_0))$ through $(\bar{Q}, Q_{L(0)})$ at $(\epsilon, \epsilon_0) = 0$ for fluctuating these parameters. We use the same sequential regression approach described in Section 3.4, but now incorporate sequential targeted updating of the initial regression fits. We assume an estimator $g_n$ of $g_0$. We first specify where in the algorithm the updating occurs and then describe the updating process.

Recall that we define $\bar{Q}_{t+1}^{d,t} = Y(t)$ for all $d$, and that $\bar{Q}_{t,n}^{d,t}$ is the regression of $Y(t)$ on $\bar{A}(t-1) = \bar{d}_{t-1}(\bar{L}(t-1))$, $\bar{L}(t-1)$. For any given $t = 1, \dots, K+1$, the initial estimator $\bar{Q}_{t,n}^{d,t}$ is first updated to $\bar{Q}_{t,n}^{d,t,*}$ using a logistic regression fit of our least favorable submodels, as described below. For each $d \in \mathcal{D}$, we then regress the updated fit $\bar{Q}_{t,n}^{d,t,*}$ onto $\bar{A}(t-2), \bar{L}(t-2)$ and evaluate it at $\bar{A}(t-2) = \bar{d}_{t-2}(\bar{L}(t-2))$, giving us $\bar{Q}_{t-1,n}^{d,t}$ for each $d \in \mathcal{D}$. The regressions $\bar{Q}_{t-1,n}^{d,t}$ are then updated for each $d \in \mathcal{D}$, as described below, giving us $\bar{Q}_{t-1,n}^{d,t,*}$ for each $d \in \mathcal{D}$. Next, for each $d \in \mathcal{D}$, we regress the updated fit $\bar{Q}_{t-1,n}^{d,t,*}$ on $\bar{A}(t-3), \bar{L}(t-3)$ and evaluate it at $\bar{A}(t-3) = \bar{d}_{t-3}(\bar{L}(t-3))$, giving us $\bar{Q}_{t-2,n}^{d,t}$ for each $d \in \mathcal{D}$, and again update the resulting regressions, giving us $\bar{Q}_{t-2,n}^{d,t,*}$ for each $d \in \mathcal{D}$. This process is iterated until we obtain an updated estimator $\bar{Q}_{1,n}^{d,t,*}(L(0))$ for each $d \in \mathcal{D}$.
Since this process is carried out for each $t = 1, \dots, K+1$, it yields an estimator $\bar{Q}_{1,n}^{d,t,*}$ for each $d \in \mathcal{D}$ and $t = 1, \dots, K+1$. We denote this estimator of $\bar{Q}_{1,0} = (\bar{Q}_{1,0}^{d,t} : d, t)$ by $\bar{Q}_{1,n}^{*} = (\bar{Q}_{1,n}^{d,t,*} : d, t)$.
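The nested loop just described (for each horizon $t$, alternate a targeted update at level $k$ with a regression step back to level $k-1$) can be sketched as follows. This is an illustrative skeleton only: the callables `fit_epsilon_and_update` and `regress_back`, and the dict-of-arrays representation of the fits, are assumptions standing in for the pooled logistic update and the sequential regressions described in the text.

```python
def sequential_update_loop(q_init, fit_epsilon_and_update, regress_back, t):
    """Backward pass for a single horizon t, alternating targeted updates
    and sequential regressions from k = t down to k = 1.

    q_init : dict mapping rule d -> initial fit Qbar_{t,n}^{d,t}
    fit_epsilon_and_update : callable (k, {d: q_k}) -> {d: q_k_star};
        fits the pooled epsilon_k across rules and returns updated fits
    regress_back : callable (k, d, q_star) -> Qbar_{k-1,n}^{d,t};
        regresses the updated fit on earlier history and evaluates at d
    Returns {d: Qbar_{1,n}^{d,t,*}}.
    """
    q = dict(q_init)
    for k in range(t, 0, -1):
        # targeted update at level k (shared epsilon_k across all rules d)
        q = fit_epsilon_and_update(k, q)
        if k > 1:
            # regress each updated fit back one level and evaluate at d
            q = {d: regress_back(k, d, q[d]) for d in q}
    return q
```

Running this for every $t = 1, \dots, K+1$ reproduces the full collection $(\bar{Q}_{1,n}^{d,t,*} : d, t)$.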

The updating steps are implemented as follows: for each $t \in \{1, \dots, K+1\}$, and for $k = t$ down to $k = 1$, we compute

$$\epsilon_{k,n} \equiv \arg\min_{\epsilon_k} P_n \sum_{d \in \mathcal{D}} \mathcal{L}_{d,t,k,\bar{Q}_{k+1,n}^{d,t,*}}\left(\bar{Q}_{k,n}^{d,t}(\epsilon_k, g_n)\right),$$

and compute the corresponding update $\bar{Q}_{k,n}^{d,t,*} = \bar{Q}_{k,n}^{d,t}(\epsilon_{k,n}, g_n)$ for all $d \in \mathcal{D}$. Note that
$$\begin{aligned}
\epsilon_{k,n} &= \arg\min_{\epsilon} P_n \sum_{d \in \mathcal{D}} \mathcal{L}_{d,t,k,\bar{Q}_{k+1,n}^{d,t,*}}\left(\bar{Q}_{k,n}^{d,t}(\epsilon, g_n)\right)\\
&= \arg\min_{\epsilon}\, -\sum_{d \in \mathcal{D}} \sum_{i=1}^{n} I\left(\bar{A}_i(k-1) = \bar{d}_{k-1}(\bar{L}_i(k-1))\right)\\
&\qquad \times \Big\{ \bar{Q}_{k+1,n}^{d,t,*}(\bar{L}_i(k)) \log \bar{Q}_{k,n}^{d,t}(\epsilon, g_n)(\bar{L}_i(k-1))\\
&\qquad\quad + \left(1 - \bar{Q}_{k+1,n}^{d,t,*}(\bar{L}_i(k))\right) \log\left(1 - \bar{Q}_{k,n}^{d,t}(\epsilon, g_n)(\bar{L}_i(k-1))\right) \Big\},\\
&\qquad k = 1, \dots, K+1.
\end{aligned}$$
Thus $\epsilon_{k,n}$ can be obtained by fitting a logistic regression of the outcome $\bar{Q}_{k+1,n}^{d,t,*}(\bar{L}_i(k))$ with offset $\mathrm{logit}\, \bar{Q}_{k,n}^{d,t}$ on the multivariate covariate

$$h_1(d,t,V_i)\, I\left(\bar{A}_i(k-1) = \bar{d}_{k-1}(\bar{L}_i(k-1))\right) / g_{0:k-1}(O_i),$$

using a data set pooled across $i = 1, \dots, n$, $d \in \mathcal{D}$ (consisting of $n \times |\mathcal{D}|$ observations).
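For a single (scalar) fluctuation, the pooled offset-logistic fit above amounts to minimizing the negative Bernoulli quasi-log-likelihood in $\epsilon$ with offset $\mathrm{logit}\,\bar{Q}_{k,n}^{d,t}$ and the "clever covariate" as regressor. The following is a minimal sketch under that reading; the function name, the flat pooled-array interface, and the bracketing interval for the optimizer are assumptions, not part of the paper's specification.

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.special import expit, logit

def fit_epsilon(q_next, q_k, clever_cov):
    """Fit the fluctuation parameter epsilon by minimizing the pooled
    negative Bernoulli quasi-log-likelihood, with offset logit(q_k) and
    covariate clever_cov = h1(d,t,V) * I(rule followed) / g.

    q_next     : outcomes Qbar_{k+1}^{d,t,*} in (0, 1), pooled over (i, d)
    q_k        : initial fits Qbar_{k}^{d,t} in (0, 1), same pooling
    clever_cov : clever covariate; zero for histories not following d
    Returns (epsilon, updated fits Qbar_k^*).
    """
    offset = logit(q_k)

    def neg_loglik(eps):
        p = expit(offset + eps * clever_cov)
        p = np.clip(p, 1e-12, 1 - 1e-12)  # guard the logs numerically
        return -np.sum(q_next * np.log(p) + (1 - q_next) * np.log(1 - p))

    eps = minimize_scalar(neg_loglik, bounds=(-10, 10), method="bounded").x
    # targeted update: Qbar_k^* = expit(logit(Qbar_k) + eps * clever_cov)
    return eps, expit(offset + eps * clever_cov)
```

Note that the quasi-log-likelihood accepts continuous outcomes in $(0,1)$, which is exactly what the iterated regressands $\bar{Q}_{k+1,n}^{d,t,*}$ are.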

This defines the TMLE $\bar{Q}_{n}^{*} = (\bar{Q}_{k,n}^{d,t,*} : d \in \mathcal{D}, t, k = 1, \dots, t)$. In particular, $\bar{Q}_{1,n}^{*} = (\bar{Q}_{1,n}^{d,t,*} : d \in \mathcal{D}, t)$ is the TMLE of $\bar{Q}_{1,0} = (E_0(Y^d(t) \mid L(0)) : d \in \mathcal{D}, t)$. This now defines the TMLE $(Q_{L(0),n}, \bar{Q}_{n}^{*})$ of $(Q_{L(0),0}, \bar{Q}_{0})$, where $Q_{L(0),n}$ is the empirical distribution of $L(0)$.

The TMLE of $\psi_0$ is the plug-in estimator corresponding to $\bar{Q}_{1,n}^{*}$ and $Q_{L(0),n}$:

$$\psi_n^{*} = \Psi(\bar{Q}_{1,n}^{*}, Q_{L(0),n}).$$

This plug-in estimator $\Psi(\bar{Q}_{1,n}^{*}, Q_{L(0),n})$ of $\psi_0 = \Psi(\bar{Q}_{1,0}, Q_{L(0),0})$ is obtained by regressing $\bar{Q}_{1,n}^{d,t,*}$ onto $d, t, V$ according to the marginal structural working model in the pooled sample $(\bar{Q}_{1,n}^{d,t,*}(L_i(0)), V_i, d, t)$, $d \in \mathcal{D}$, $i = 1, \dots, n$, $t = 1, \dots, K+1$, using weights $h(d, t, V_i)$.
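As a concrete sketch of this final plug-in step, when the working MSM is linear in its parameters the pooled weighted regression reduces to weighted least squares. The design-matrix encoding of $(d, t, V)$ and the linearity of the working model are assumptions made for illustration; a logistic working model would instead call for a weighted logistic regression on the same pooled sample.

```python
import numpy as np

def fit_working_msm(design, outcome, weights):
    """Weighted least-squares fit of a linear working MSM.

    design  : (N, p) pooled design matrix, one row per (i, d, t),
              columns encoding (d, t, V) per the chosen working model
    outcome : (N,) pooled targeted fits Qbar_{1,n}^{d,t,*}(L_i(0))
    weights : (N,) weights h(d, t, V_i)
    Returns the working-model coefficient vector beta (length p).
    """
    # rescale rows by sqrt(weights) so ordinary lstsq solves the WLS problem
    sw = np.sqrt(weights)
    beta, *_ = np.linalg.lstsq(design * sw[:, None], outcome * sw, rcond=None)
    return beta
```

The returned coefficient vector is the estimate $\psi_n^{*}$ of the working-model parameter.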

An alternative pooled TMLE that fits only a single $\epsilon$ to compute the update is described in Appendix C.
