Strong consistency of regression function estimator with martingale difference errors

The estimation of a regression function $g(x) = E(Y \mid x)$ is an important statistical problem. Usually, $g(x)$ has a specified functional form and parameter estimates are obtained according to certain desirable criteria. When the errors are normal random variables, we can test the appropriateness of the hypothesized model. However, one may wish to have an estimation technique applicable to an arbitrary $g(x)$. Priestley and Chao [1] considered the problem of estimating an unknown regression function $g(x)$ given observations at a fixed set of points. Their estimate is nonparametric in the sense that $g(x)$ is restricted only by certain smoothing requirements.


Introduction
The estimation of a regression function $g(x) = E(Y \mid x)$ is an important statistical problem. Usually, $g(x)$ has a specified functional form and parameter estimates are obtained according to certain desirable criteria. When the errors are normal random variables, we can test the appropriateness of the hypothesized model. However, one may wish to have an estimation technique applicable to an arbitrary $g(x)$. Priestley and Chao [1] considered the problem of estimating an unknown regression function $g(x)$ given observations at a fixed set of points. Their estimate is nonparametric in the sense that $g(x)$ is restricted only by certain smoothing requirements.

Priestley-Chao estimate
Let $Y_1, \dots, Y_n$ be $n$ observations at fixed points $x_1, \dots, x_n$ according to the model
$$Y_i = g(x_i) + \varepsilon_i, \quad 1 \le i \le n,$$
where $g(x)$ is an unknown function defined on $[0, 1]$ and the errors $\{\varepsilon_i\}$ are i.i.d. random variables with zero mean and finite variance $\sigma^2$. Without loss of generality we assume that the design points are equally spaced, $x_i = i/n$. The Priestley-Chao estimate is
$$g_n(x) = \frac{1}{n h_n} \sum_{i=1}^{n} K\!\left(\frac{x - x_i}{h_n}\right) Y_i,$$
where $K$ is a weight function satisfying $\int_{-\infty}^{\infty} K(u)\,du = 1$ and $h_n$ is a bandwidth sequence. The estimate $g_n(x)$ can be viewed as a moving average of the sample $Y$'s whose weights are based on a class of kernels suggested by Rosenblatt [2] and Parzen [3].
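As a concrete illustration, the Priestley-Chao estimate can be computed directly. The following is a minimal numerical sketch, not code from the paper; it assumes the standard equally spaced design $x_i = i/n$ on $[0, 1]$, a Gaussian kernel as the weight function $K$, and a simulated regression function $g(x) = \sin(2\pi x)$, all of which are illustrative choices.

```python
import numpy as np

def gaussian_kernel(u):
    """Standard Gaussian density, one common choice of weight function K."""
    return np.exp(-0.5 * u**2) / np.sqrt(2.0 * np.pi)

def priestley_chao(x, x_design, y, h):
    """Priestley-Chao estimate g_n(x) = (1/(n h)) * sum_i K((x - x_i)/h) * Y_i,
    assuming equally spaced design points on [0, 1]."""
    n = len(y)
    u = (x - x_design) / h
    return gaussian_kernel(u) @ y / (n * h)

# Simulated example: g(x) = sin(2*pi*x) with i.i.d. normal errors.
rng = np.random.default_rng(0)
n = 2000
x_design = np.arange(1, n + 1) / n            # design points x_i = i/n
g = lambda x: np.sin(2.0 * np.pi * x)
y = g(x_design) + 0.1 * rng.standard_normal(n)

estimate = priestley_chao(0.25, x_design, y, h=0.05)  # true value g(0.25) = 1
```

With this bandwidth the estimate at an interior point lands close to the true value; the small remaining gap is the familiar smoothing bias of order $h^2$.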
Priestley and Chao [1] established the consistency of the estimate $g_n(x)$. Benedetti [4] studied the strong convergence and asymptotic normality of $g_n(x)$; in particular, the optimal choice of a weight function was considered. Gasser and Müller [5] introduced a new kernel estimate superior to the one of Priestley and Chao [1]; their results were not restricted to positive kernels, but extended to classes of kernels satisfying certain moment conditions. Cheng [6] used linear combinations of sample quantile regression functions to estimate the unknown function $g$. Csörgő and Mielniczuk [7] considered the fixed-design regression model with long-range dependent normal errors and showed that the finite-dimensional distributions of the properly normalized Gasser-Müller [5] and Priestley-Chao [1] estimators of the regression function converge to those of a white noise process. Burman [8] dealt with the convergence of spline regression estimators under mixing conditions. Robinson [9] studied central limit theorems for an estimator of the regression function of a fixed-design model when the residuals come from a linear process of martingale differences. Tran et al. [10] discussed the asymptotic normality of $g_n(x)$ assuming that the errors form a linear time series, more precisely, a weakly stationary linear process based on a martingale difference sequence. Yang and Wang [11] and Liang and Jing [12] established the strong consistency of regression function estimators for negatively associated samples. Niu and Li [13] discussed the asymptotic normality of the weighted kernel estimators of $g(x)$ when the censoring variable is known or unknown. Zhang et al. [14] studied the strong convergence of the estimate $g_n(x)$ when the errors form a mixingale sequence. Yang [15] obtained the strong consistency of the Georgiev estimates of the regression function when the errors are martingale differences.
Motivated by the above works, in the present paper we establish the strong consistency and uniform strong consistency of the Priestley-Chao estimate of the regression function when the errors form a martingale difference sequence, extending the results of Li [16] and Yin et al. [17]. In Section 2 we state the main results; the proofs of these theorems are given in Section 3.

Main results
We consider the following regression model with fixed design:
$$Y_i = g(x_i) + \varepsilon_i, \quad 1 \le i \le n,$$
where $0 \le x_1 \le x_2 \le \cdots \le x_n \le 1$ are the nonrandom design points and $Y_1, \dots, Y_n$ are the observed sample items.
We assume that the following regularity conditions are satisfied:

(a) $K(\cdot)$ is a bounded weight function satisfying $\int_{-\infty}^{\infty} K(u)\,du = 1$.

The assumptions on the weight function $K$ in [4] are stronger than those in the present paper: besides condition (a), Benedetti [4] imposed additional conditions on $K$. Suppose that the exponential moments of the errors $\{\varepsilon_i\}$ exist; then we have the following results.
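Since the theorems concern errors that form a martingale difference sequence rather than an i.i.d. sample, one concrete instance may help. The recursion below is a hypothetical illustration, not a construction from the paper: each $\varepsilon_i = z_i w_i$ with $z_i$ i.i.d. standard normal and $w_i$ measurable with respect to the past, so $E(\varepsilon_i \mid \mathcal{F}_{i-1}) = 0$; the volatility $w_i$ is bounded, so the exponential moments required above exist.

```python
import numpy as np

def mds_errors(n, seed=0):
    """Martingale difference errors eps_i = z_i * (1 + 0.5*cos(eps_{i-1})),
    with z_i i.i.d. N(0, 1). The scale factor depends only on the past, so
    E[eps_i | F_{i-1}] = 0; it is bounded in [0.5, 1.5], so exponential
    moments of eps_i exist."""
    rng = np.random.default_rng(seed)
    z = rng.standard_normal(n)
    eps = np.empty(n)
    prev = 0.0
    for i in range(n):
        eps[i] = z[i] * (1.0 + 0.5 * np.cos(prev))
        prev = eps[i]
    return eps

eps = mds_errors(100_000)
sample_mean = eps.mean()  # near 0 by the martingale difference property
```

Note that the $\varepsilon_i$ are dependent (each scale depends on the previous value), yet they are uncorrelated and conditionally centered, which is exactly the structure the theorems exploit.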

Auxiliary results
In this section, we give some lemmas in order to prove our main results.
Let $\{\xi_n, n \ge 1\}$ be a sequence of nonnegative random variables defined on $(\Omega, \mathcal{F}, P)$, let $\{\mathcal{F}_n, n \ge 0\}$ be an increasing sequence of sub-$\sigma$-algebras of $\mathcal{F}$ to which $\{\xi_n, n \ge 1\}$ is adapted, and set $S_n = \sum_{i=1}^{n} \xi_i$.

Proof. For any $t > 0$, the conclusion follows from Lemma 1 in [19] and Markov's inequality. Thus, the proof is completed. □
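The step contributed by Markov's inequality here is the standard exponential (Chernoff) bound. The following sketch is not taken from the paper; it assumes only that $E\,e^{t S_n}$ is finite and records the generic form of the argument:

```latex
% Chernoff-type step: for any t > 0 and x > 0,
P(S_n \ge x) = P\bigl(e^{t S_n} \ge e^{t x}\bigr)
             \le e^{-t x}\, E\, e^{t S_n},
% by Markov's inequality applied to the nonnegative variable e^{t S_n}.
% Combining this with a bound on E e^{t S_n} (such as Lemma 1 in [19])
% and optimizing over t > 0 yields an exponential tail bound for S_n.
```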

Proof of main results
In this section, we give the proofs of the main results.

Proof of Theorem 2.1. Define the truncated variables $\xi_i$ and $\eta_i$ by splitting each error $\varepsilon_i$ at the level $\log n$, using the indicators $I(|\varepsilon_i| \le \log n)$ and $I(|\varepsilon_i| > \log n)$, so that $\varepsilon_i = \xi_i + \eta_i$. Now letting $m_n \to 0$ in (4.6), we have, for all $n$ large enough, the bound (4.9). By choosing $t > 1$, we get (4.10). Thus, by (4.9) and (4.10), we get (4.8). The proof of the theorem is completed. □
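The truncation at the start of this proof follows a standard martingale pattern. The decomposition below is a hedged reconstruction of that pattern, assuming truncation at the level $\log n$ (the exact level used in the paper may differ); recentering by conditional expectations keeps both pieces martingale differences:

```latex
\xi_i = \varepsilon_i\, I(|\varepsilon_i| \le \log n)
        - E\bigl[\varepsilon_i\, I(|\varepsilon_i| \le \log n) \mid \mathcal{F}_{i-1}\bigr],
\qquad
\eta_i = \varepsilon_i\, I(|\varepsilon_i| > \log n)
        - E\bigl[\varepsilon_i\, I(|\varepsilon_i| > \log n) \mid \mathcal{F}_{i-1}\bigr],
% so that \xi_i + \eta_i = \varepsilon_i - E[\varepsilon_i | F_{i-1}]
%                        = \varepsilon_i,
% because E[\varepsilon_i | F_{i-1}] = 0 for a martingale difference sequence.
```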

Proof of Theorem 2.2. In order to prove (4.1) and (4.2), it is sufficient to prove that, for any $x \in (0, 1)$ and any $\tau \in (0, 1/2)$, the corresponding supremum converges to $0$ a.s. Thus, for all $n$ large enough, we obtain the required bound from (4.13). By a suitable choice of $t$, the stated chain of estimates follows, in which the equality is valid since $r > 1$ and the last inequality follows from Lemma 3.2; a further logarithmic bound completes the proof. □
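The uniform strong consistency asserted here can be illustrated numerically (an illustration, not a proof). The sketch below reuses the hypothetical bounded-volatility martingale difference errors from earlier, an equally spaced design, a Gaussian kernel, and the illustrative bandwidth $h_n = n^{-1/3}$, which satisfies $h_n \to 0$ and $n h_n \to \infty$; the sup-norm error over an interior grid should shrink as $n$ grows.

```python
import numpy as np

def priestley_chao_curve(grid, x_design, y, h):
    """Evaluate the Priestley-Chao estimate on a grid of points
    (equally spaced design assumed; Gaussian kernel as weight function)."""
    u = (grid[:, None] - x_design[None, :]) / h
    K = np.exp(-0.5 * u**2) / np.sqrt(2.0 * np.pi)
    return K @ y / (len(y) * h)

def sup_error(n, seed=0):
    """Sup-norm error over an interior grid, with bounded-volatility
    martingale difference errors eps_i = z_i * (1 + 0.5*cos(eps_{i-1}))."""
    rng = np.random.default_rng(seed)
    x = np.arange(1, n + 1) / n
    g = np.sin(2.0 * np.pi * x)
    eps = np.empty(n)
    prev = 0.0
    for i in range(n):
        eps[i] = rng.standard_normal() * (1.0 + 0.5 * np.cos(prev))
        prev = eps[i]
    y = g + 0.1 * eps
    grid = np.linspace(0.1, 0.9, 81)          # interior points only
    h = n ** (-1.0 / 3.0)                     # h -> 0 and n*h -> infinity
    fit = priestley_chao_curve(grid, x, y, h)
    return float(np.max(np.abs(fit - np.sin(2.0 * np.pi * grid))))

errors = [sup_error(n) for n in (200, 2000, 20000)]
```

The grid is kept away from the endpoints because kernel estimates of this type suffer additional bias near the boundary of $[0, 1]$, which is also why the theorems restrict attention to interior points.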