The exact expression of the variance is given in the proof.

#### Proof.

Under Assumption 11, the strong law of large numbers shows that $\mathbb{W}_N(\theta)$ converges almost surely to
$$\mathbb{W}(\theta)=\sum_{k=0}^{n-1}\sum_{i=0}^{k} w_0(i,k-i)\,G\bigl(f_0(i,k-i),f(\theta,i,k-i)\bigr)\,.$$

Assumptions 1-(ii) and 11 ensure that $\theta_0$ is the unique minimizer of $\mathbb{W}$. Indeed, $G(p,q)>0$ if $p\neq q$ and $G(p,p)=0$, so $\mathbb{W}$ is minimized exactly at those $\theta$ for which $f(\theta,i,k-i)=f(\theta_0,i,k-i)$ for all $0\le i\le k\le n-1$. By Assumption 1-(ii), this implies $\theta=\theta_0$.

Moreover, the convergence of $\mathbb{W}_N$ to $\mathbb{W}$ is uniform, since $\Theta$ is compact and $f$ is twice continuously differentiable with respect to its first variable $\theta$. This yields the consistency of $\hat{\theta}_N^{\mathbb{W}}$. For the sake of completeness, we give a brief proof. Since $\theta_0$ minimizes $\mathbb{W}$ and $\hat{\theta}_N^{\mathbb{W}}$ minimizes $\mathbb{W}_N$, we have
$$\begin{aligned}
0&\le \mathbb{W}(\hat{\theta}_N^{\mathbb{W}})-\mathbb{W}(\theta_0)\\
&=\mathbb{W}(\hat{\theta}_N^{\mathbb{W}})-\mathbb{W}_N(\hat{\theta}_N^{\mathbb{W}})+\mathbb{W}_N(\hat{\theta}_N^{\mathbb{W}})-\mathbb{W}_N(\theta_0)+\mathbb{W}_N(\theta_0)-\mathbb{W}(\theta_0)\\
&\le \mathbb{W}(\hat{\theta}_N^{\mathbb{W}})-\mathbb{W}_N(\hat{\theta}_N^{\mathbb{W}})+\mathbb{W}_N(\theta_0)-\mathbb{W}(\theta_0)
\le 2\sup_{\theta\in\Theta}\bigl|\mathbb{W}_N(\theta)-\mathbb{W}(\theta)\bigr|\,.
\end{aligned}$$

Since $\theta_0$ is the unique minimizer of $\mathbb{W}$, for every $\epsilon>0$ we can find $\delta>0$ such that if $\theta\in\Theta$ and $\|\theta-\theta_0\|>\epsilon$, then $\mathbb{W}(\theta)-\mathbb{W}(\theta_0)\ge\delta$. Thus
$$\begin{aligned}
\mathbb{P}\bigl(\|\hat{\theta}_N^{\mathbb{W}}-\theta_0\|>\epsilon\bigr)
&\le \mathbb{P}\bigl(\mathbb{W}(\hat{\theta}_N^{\mathbb{W}})-\mathbb{W}(\theta_0)\ge\delta\bigr)\\
&\le \mathbb{P}\Bigl(2\sup_{\theta\in\Theta}\bigl|\mathbb{W}_N(\theta)-\mathbb{W}(\theta)\bigr|\ge\delta\Bigr)\to 0\,.
\end{aligned}$$
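This consistency mechanism can be illustrated numerically. The sketch below is purely illustrative: a hypothetical one-dimensional limiting contrast $\mathbb{W}(\theta)=(\theta-\theta_0)^2$ stands in for the true contrast, and a deterministic perturbation of size $O(1/\sqrt{N})$ stands in for the sampling noise, so that $\sup_\theta|\mathbb{W}_N(\theta)-\mathbb{W}(\theta)|\to 0$ and the argmin over a compact grid tracks $\theta_0$.

```python
import numpy as np

theta0 = 0.3
grid = np.linspace(-1.0, 1.0, 2001)      # compact parameter set Theta, step 0.001

def W(theta):
    # hypothetical limiting contrast, uniquely minimized at theta0
    return (theta - theta0) ** 2

def W_N(theta, N):
    # empirical contrast: W plus a perturbation of size O(1/sqrt(N)),
    # a deterministic stand-in for the sup-norm sampling noise
    e = 1.0 / np.sqrt(N)
    return W(theta) + e * (0.5 + 0.8 * theta - 0.3 * theta ** 2)

errors = []
for N in (10**2, 10**4, 10**6):
    theta_hat = grid[np.argmin(W_N(grid, N))]   # minimizer of the empirical contrast
    errors.append(abs(theta_hat - theta0))

# errors decrease towards 0 as N grows, as the consistency argument predicts
```

With the values above the estimation error drops from roughly $3\times 10^{-2}$ at $N=10^2$ to below the grid resolution at $N=10^6$.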

The central limit theorem is a consequence of the consistency and Lemma 9. A first-order Taylor expansion of $\dot{\mathbb{W}}_N(\theta)$ at $\theta_0$ yields
$$0=\dot{\mathbb{W}}_N(\hat{\theta}_N^{\mathbb{W}})=\dot{\mathbb{W}}_N(\theta_0)+\ddot{\mathbb{W}}_N(\tilde{\theta}_N)\bigl(\hat{\theta}_N^{\mathbb{W}}-\theta_0\bigr)\,,$$
where $\tilde{\theta}_N\in[\theta_0,\hat{\theta}_N^{\mathbb{W}}]$. Setting $\dot{f}_0(i,k-i)=\dot{f}(\theta_0,i,k-i)$, we have
$$\dot{\mathbb{W}}_N(\theta_0)=\sum_{k=0}^{n-1}\sum_{i=0}^{k} w_N(i,k-i)\,\partial_2 G\bigl(p_N(i,k-i),f_0(i,k-i)\bigr)\,\dot{f}_0(i,k-i)\,.$$

Let $\partial^2_{12}G$ denote the mixed second derivative of $G$. Note that
$$\partial_2 G\bigl(f_0(i,k-i),f_0(i,k-i)\bigr)=0\,.$$

Thus, by the delta method (see [24], Theorem 3.3.11) and since $w_N$ converges almost surely to $w_0$, we obtain that $\sqrt{N}\,\dot{\mathbb{W}}_N(\theta_0)$ converges weakly towards
$$\sum_{k=0}^{n-1}\sum_{i=0}^{k} w_0(i,k-i)\,\partial^2_{12}G\bigl(f_0(i,k-i),f_0(i,k-i)\bigr)\,\Lambda_0(i,k-i)\,\dot{f}_0(i,k-i)\,,$$
where the $\Lambda_0(i,k)$ are independent Gaussian random variables with zero mean and variance $\gamma_0(i,k)$ defined in eq. (14). Equivalently, $\sqrt{N}\,\dot{\mathbb{W}}_N(\theta_0)$ converges weakly to a Gaussian vector with zero mean and covariance matrix $H(\theta_0)$ defined by

$$H(\theta_0)=\sum_{k=0}^{n-1}\sum_{i=0}^{k} w_0^2(i,k-i)\,\bigl\{\partial^2_{12}G\bigl(f_0(i,k-i),f_0(i,k-i)\bigr)\bigr\}^2\,\gamma_0(i,k-i)\,\dot{f}_0(i,k-i)\,\dot{f}_0(i,k-i)'\,.$$
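As a concrete illustration (one admissible choice of divergence, not necessarily the one used elsewhere in the paper), the least-squares contrast $G(p,q)=(p-q)^2$ satisfies $G(p,q)>0$ for $p\neq q$ and $G(p,p)=0$, and its derivatives are explicit:
$$\partial_2 G(p,q)=-2(p-q),\qquad \partial_2 G(p,p)=0,\qquad \partial^2_{12}G(p,q)=-2,\qquad \partial_2^2 G(p,q)=2\,,$$
so that in this case $\{\partial^2_{12}G\}^2=4$ and $H(\theta_0)$ reduces to $4\sum w_0^2\,\gamma_0\,\dot{f}_0\,\dot{f}_0'$ over the same index set.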

By the law of large numbers, $\ddot{\mathbb{W}}_N(\theta)$ converges almost surely to $\ddot{\mathbb{W}}(\theta)$, and this convergence is also locally uniform. Thus, $\ddot{\mathbb{W}}_N(\tilde{\theta}_N)$ converges almost surely to $\ddot{\mathbb{W}}(\theta_0)$. Using again the fact that $\partial_2 G(p,p)=0$, we obtain

$$\ddot{\mathbb{W}}(\theta_0)=\sum_{k=0}^{n-1}\sum_{i=0}^{k} w_0(i,k-i)\,\partial_2^2 G\bigl(f_0(i,k-i),f_0(i,k-i)\bigr)\,\dot{f}_0(i,k-i)\,\dot{f}_0(i,k-i)'\,.$$

For brevity, denote $g(i,k-i)=w_0(i,k-i)\,\partial_2^2 G\bigl(f_0(i,k-i),f_0(i,k-i)\bigr)$. Then, for any $u\in\mathbb{R}^d$, we have
$$u\,\ddot{\mathbb{W}}(\theta_0)\,u'=\sum_{k=0}^{n-1}\sum_{i=0}^{k} g(i,k-i)\left(\sum_{s=1}^{d} u_s\,\partial_s f(\theta_0,i,k-i)\right)^{2}\,. \tag{15}$$

By Assumption 12, $g(i,k-i)>0$ for all $0\le i\le k\le n-1$; thus eq. (15) is zero only if $\sum_{s=1}^{d} u_s\,\partial_s f(\theta_0,i,k-i)=0$ for all $k=0,\dots,n-1$ and $i=0,\dots,k$. By Assumption 1-(iii), this is possible only if $u_s=0$ for all $s=1,\dots,d$. Thus $\ddot{\mathbb{W}}(\theta_0)$ is positive definite.
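The positive-definiteness argument can be checked numerically on a toy configuration; everything below (the sizes $n$ and $d$, the positive weights $g$, and the gradients $\dot{f}_0$) is hypothetical and only illustrates the identity (15) and its consequence.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 4, 3                                     # hypothetical sizes

# hypothetical ingredients of eq. (15): g(i, k-i) > 0 and gradients
# df(theta0, i, k-i) in R^d that, taken together, span R^d
cells = [(i, k - i) for k in range(n) for i in range(k + 1)]
g = {c: rng.uniform(0.5, 2.0) for c in cells}   # strictly positive, as in Assumption 12
df = {c: rng.normal(size=d) for c in cells}

# limiting second derivative: sum over cells of g times the outer product of the gradient
Wdd = sum(g[c] * np.outer(df[c], df[c]) for c in cells)

# eq. (15): the quadratic form u Wdd u' equals the weighted sum of squared projections
u = rng.normal(size=d)
quad_form = u @ Wdd @ u
quad_sum = sum(g[c] * (u @ df[c]) ** 2 for c in cells)

eigvals = np.linalg.eigvalsh(Wdd)               # all > 0: Wdd is positive definite
```

Since the random gradients span $\mathbb{R}^d$ almost surely, the smallest eigenvalue comes out strictly positive, matching the conclusion above.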

We can now conclude that, for $N$ large enough, $\ddot{\mathbb{W}}_N(\tilde{\theta}_N)$ is invertible and we can write
$$\sqrt{N}\bigl(\hat{\theta}_N^{\mathbb{W}}-\theta_0\bigr)=-\ddot{\mathbb{W}}_N^{-1}(\tilde{\theta}_N)\,\sqrt{N}\,\dot{\mathbb{W}}_N(\theta_0)\,.$$

The right-hand side converges weakly to the Gaussian distribution with zero mean and covariance matrix $\ddot{\mathbb{W}}^{-1}(\theta_0)\,H(\theta_0)\,\ddot{\mathbb{W}}^{-1}(\theta_0)$.
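The sandwich form $\ddot{\mathbb{W}}^{-1}H\ddot{\mathbb{W}}^{-1}$ can be checked by simulation. The sketch below is a deliberately simplified instance whose ingredients are assumptions, not taken from the paper: the least-squares contrast $G(p,q)=(p-q)^2$, unit weights, two independent cells with empirical frequencies $p_N$, and a hypothetical model $f(\theta)=(\theta,\theta^2)$ with $\theta_0=0.4$.

```python
import numpy as np

rng = np.random.default_rng(1)
theta0, N, reps = 0.4, 4000, 2000

f = lambda t: np.array([t, t ** 2])          # hypothetical model, two cells
fdot = lambda t: np.array([1.0, 2.0 * t])    # its derivative in theta
gamma = f(theta0) * (1.0 - f(theta0))        # variance of each empirical cell frequency

# sandwich ingredients for G(p, q) = (p - q)^2 with unit weights:
# d12 G = -2 (so its square is 4) and d2^2 G = 2 on the diagonal p = q
H = 4.0 * np.sum(gamma * fdot(theta0) ** 2)  # asymptotic variance of sqrt(N) * Wdot_N(theta0)
Wdd = 2.0 * np.sum(fdot(theta0) ** 2)        # limit of the second derivative
sandwich = H / Wdd ** 2                      # predicted asymptotic variance of the estimator

grid = np.linspace(0.01, 0.99, 9801)
est = np.empty(reps)
for r in range(reps):
    p = rng.binomial(N, f(theta0)) / N       # independent empirical frequencies
    contrast = (p[0] - grid) ** 2 + (p[1] - grid ** 2) ** 2
    est[r] = grid[np.argmin(contrast)]       # minimum-contrast estimate

mc_var = N * est.var()                        # Monte-Carlo variance of sqrt(N)(theta_hat - theta0)
# mc_var should be close to the sandwich prediction
```

With these hypothetical values the prediction is $\ddot{\mathbb{W}}^{-2}H\approx 0.12$, and the Monte-Carlo variance matches it up to simulation error.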
