On shrinkage estimators improving the positive part of James-Stein estimator


 In this work, we study the estimation of the multivariate normal mean by different classes of shrinkage estimators. The risk associated with the quadratic loss function is used to compare two estimators. We start by considering a class of estimators that dominate the positive part of the James-Stein estimator. Then, we treat estimators of polynomial form and prove that, if we increase the degree of the polynomial, we can build a better estimator than the one previously constructed. Furthermore, we discuss the minimaxity property of the considered estimators.


Introduction
In multivariate statistical analysis, the multivariate normal distribution comes into play in many applications and statistical tests. It is therefore important to estimate the mean parameter of this distribution. It is well known that the maximum likelihood estimator (MLE) is admissible when the dimension of the parameter space satisfies p < 3. When p ≥ 3, Stein [1] noticed that the MLE does not have minimal quadratic risk, and that it can be improved by decreasing the length of the MLE, that is, by multiplying it by a suitable scalar. This procedure was called shrinkage, and such estimators were called shrinkage estimators. Dozens of shrinkage estimators have been developed by various authors; we cite, for example, Bhattacharya [2], Efron and Morris [3], Bock [4], Berger [5], Stein [6], Hamdaoui and Benmansour [7] and Tsukuma and Kubokawa [8].
Recent studies in the context of shrinkage estimation include Karamikabir and Afshari [9], Yuzbasi et al. [10] and Hamdaoui et al. [11]. Benkhaled and Hamdaoui [12] considered two forms of shrinkage estimators of the mean θ of a multivariate normal distribution X ∼ N_p(θ, σ²I_p), where σ² is unknown and estimated by a statistic S² with S² ∼ σ²χ²_n: estimators that shrink the components of the usual estimator X to zero, and estimators of Lindley type that shrink the components of the usual estimator to the random variable X̄. Their aim was to improve the minimaxity results obtained in the published papers cited above. Hamdaoui et al. [13] treated the minimaxity and the limits of risk ratios of shrinkage estimators of a multivariate normal mean in the Bayesian case. The authors considered the model X ∼ N_p(θ, σ²I_p), p ≥ 2, where σ² is unknown, took the prior law θ ∼ N_p(υ, τ²I_p), and studied the limits of the risk ratios of these estimators, to the MLE X, when n and p tend to infinity. The majority of these authors considered the quadratic loss function for computing the risk.
Sanjari Farsipour and Asgharzadeh [14] considered the model where X_1, …, X_n is a random sample from N_p(θ, σ²I_p) with σ² known, the aim being to estimate the parameter θ. They studied the admissibility of estimators of the form aX̄ + b under the balanced loss function. Kaciranlar and Dawoud [15] introduced and derived the optimal extended balanced loss function (EBLF) estimators and predictors and discussed their performances.
In this work, we deal with the model X ∼ N_p(θ, σ²I_p), where the parameter σ² is known. Our aim is to estimate the unknown parameter θ by shrinkage estimators deduced from the MLE. The criterion adopted to compare two estimators is the risk associated with the quadratic loss function. The paper is organized as follows. In Section 2, we recall some preliminaries that are useful for our main results. In Section 3, we consider new classes of shrinkage estimators that dominate the positive part of the James-Stein estimator. Our principal idea is to add recursively a term of the form αX/‖X‖^{2m}, where α is a real constant that may depend on p and m is a positive integer. We then note that we can construct a class dominating the class of estimators defined previously. Consequently, we obtain a series of shrinkage estimators of polynomial form in the indeterminate 1/‖X‖², and we show that if we increase the degree of the polynomial we can construct a better estimator than the one previously given. We conclude this work with a simulation study that shows the performance of the considered estimators.

Preliminaries
In this section, we recall some results that are useful for our main results. We deal with the model X ∼ N_p(θ, σ²I_p), where σ² is known, and our aim is to estimate the unknown mean parameter θ by shrinkage estimators under the quadratic loss function. For the sake of simplicity, we treat only the case σ² = 1, since by a change of variable any model of the type X ∼ N_p(θ, σ²I_p) reduces to this one. Namely, we consider the model X ∼ N_p(θ, I_p) and we want to estimate the unknown parameter θ. The quadratic loss function is defined by L(δ, θ) = ‖δ − θ‖², and we associate with it the risk function R(δ, θ) = E_θ[L(δ(X), θ)]. In particular, the MLE δ⁰(X) = X has constant risk R(δ⁰, θ) = p.
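As an illustration, the risk R(δ, θ) = E_θ‖δ(X) − θ‖² can be approximated by Monte Carlo simulation. The sketch below is our own illustrative code (the function names are not from the paper); it checks numerically that the MLE δ⁰(X) = X has constant risk p:

```python
import numpy as np

def quadratic_loss(delta, theta):
    # L(delta, theta) = ||delta - theta||^2
    return np.sum((delta - theta) ** 2)

def mc_risk(estimator, theta, n_rep=20000, seed=0):
    # Approximate R(delta, theta) = E_theta[L(delta(X), theta)]
    # by averaging the loss over n_rep draws X ~ N_p(theta, I_p).
    rng = np.random.default_rng(seed)
    p = theta.size
    losses = [quadratic_loss(estimator(rng.normal(theta, 1.0, p)), theta)
              for _ in range(n_rep)]
    return float(np.mean(losses))

theta = np.zeros(6)                      # p = 6, theta at the origin
risk_mle = mc_risk(lambda x: x, theta)   # MLE: delta^0(X) = X
print(risk_mle)                          # close to p = 6
```

The same `mc_risk` helper can be reused with any shrinkage estimator in place of the identity map.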
It is well known that δ⁰ is minimax and inadmissible for p ≥ 3; thus, any estimator that dominates it is also minimax.
We recall the James-Stein estimator introduced by James and Stein [16], defined by δ_JS = (1 − (p − 2)/‖X‖²) X, and its positive part, given by δ_JS⁺ = max(0, 1 − (p − 2)/‖X‖²) X. From the study by Casella and Hwang [17], an explicit expression of the risk function of the estimator δ_JS⁺ under the quadratic loss L can be obtained.
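With σ² = 1, both estimators are straightforward to implement. The following Python sketch (our own illustrative code, not from the paper) compares, by Monte Carlo, the risks of δ_JS and δ_JS⁺ at θ = 0, where the improvement brought by the positive part is most visible:

```python
import numpy as np

def james_stein(x):
    # delta_JS(X) = (1 - (p - 2) / ||X||^2) X, with sigma^2 = 1
    p = x.size
    return (1.0 - (p - 2) / np.dot(x, x)) * x

def james_stein_plus(x):
    # delta_JS^+(X) = max(0, 1 - (p - 2) / ||X||^2) X
    p = x.size
    return max(0.0, 1.0 - (p - 2) / np.dot(x, x)) * x

def mc_risk(estimator, theta, n_rep=20000, seed=1):
    # Monte Carlo approximation of R(delta, theta) under quadratic loss
    rng = np.random.default_rng(seed)
    p = theta.size
    return float(np.mean([np.sum((estimator(rng.normal(theta, 1.0, p)) - theta) ** 2)
                          for _ in range(n_rep)]))

theta = np.zeros(8)  # p = 8; one expects R(JS+) <= R(JS) < R(MLE) = p
r_js = mc_risk(james_stein, theta)
r_plus = mc_risk(james_stein_plus, theta)
print(r_plus <= r_js < 8.0)
```

Since both risks are estimated on the same draws (same seed), the pathwise improvement of δ_JS⁺ over δ_JS at θ = 0 is reproduced exactly in the comparison.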

Main results
In this section, we construct classes of shrinkage estimators which dominate the positive part of the James-Stein estimator δ_JS⁺. Our main idea is to add recursively a term of the form αX/‖X‖^{2m}, where α is a real constant that may depend on p and m is a positive integer, and we note that in this way we can construct estimators that dominate the estimators of the class defined previously.

A class of shrinkage estimators improving the positive part of James-Stein estimator
Consider the estimator δ_b defined in (5), where the real positive constant b may depend on p. The risk of δ_b in (5) is obtained by using Lemma 2.1 of Shao and Strawderman [18]; from formulas (6) and (7) we then obtain the difference between the risks of δ_b and δ_JS⁺. Under the quadratic loss function L and for p > 4, a sufficient condition for the estimator δ_b to dominate δ_JS⁺ is derived: from the last formula and using Proposition 3.1, this sufficient condition reduces to the inequality (9). The optimal value b* of b, namely the value that minimizes the right-hand side (RHS) of the inequality (9), is given in (10).

Polynomial shrinkage estimators
Now, we consider the estimator δ_c defined in (11), where the constant b* is given in (10) and the real positive parameter c may depend on p. The risk of δ_c in (11) is computed by using Lemma 2.1 of Shao and Strawderman [18]; from formulas (12) and (13) we obtain the desired result. □

Theorem 3.5. Under the quadratic loss function L and for p > 6, a sufficient condition for the estimator δ_c to dominate δ_b* holds: from formulas (14) and (15) and using Proposition 3.4, this sufficient condition is equivalent to the inequality (16).

Remark 3.6. The optimal value c* of c, namely the value that minimizes the RHS of formula (16), is given in (17); if we replace c by c* in formula (16), we get the corresponding risk bound.

Now, we consider the estimator δ_d defined in (18), where the constants b* and c* are given, respectively, in (10) and (17). From formulas (19), (20) and (21) and using Proposition 3.7, a sufficient condition for the estimator δ_d to dominate δ_c* is the inequality (22). The optimal value d* of d, namely the value that minimizes the RHS of the inequality (22), can then be derived; if we replace d by d* in formula (22), we obtain the corresponding risk bound.

Simulation results
We recall the form of the positive part of the James-Stein estimator δ_JS⁺ given in (3). Its risk function associated with the quadratic loss function L is given in formula (4).
We also recall the estimators δ_b*, δ_c* and δ_d* constructed in Section 3. In the first part of this section, we present the graphs of the risk ratios of the considered estimators to the MLE for various values of p. In the second part of this section, we present two tables containing the values of the risk ratios for various values of p and λ = ‖θ‖².

In Table 1, we note that if λ = ‖θ‖² is small, the gain in the risk ratios is important. If the values of λ increase, the gain decreases and approaches zero, so that only a little improvement is then obtained. We also observe that, if the values of p increase, the gain decreases. We conclude that the gain is important when the parameters p and λ are small; as seen above, the gain in the risk ratios is influenced by the values of p and λ. In Table 2, we note that the gains in the risk ratios show that, if we increase the degree of the polynomial, we can construct a better estimator than the one previously given. An extension of this work would be to obtain similar results in the case where the model has a spherically symmetric distribution.
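The qualitative behaviour reported in Table 1 (a large gain for small λ = ‖θ‖², vanishing as λ grows) can be reproduced with a short Monte Carlo experiment. The sketch below uses only δ_JS⁺, since the exact forms of δ_b*, δ_c* and δ_d* are given by the displayed formulas of Section 3; the helper names and parameter values are ours, chosen for illustration:

```python
import numpy as np

def js_plus(x):
    # Positive-part James-Stein estimator, sigma^2 = 1
    p = x.size
    return max(0.0, 1.0 - (p - 2) / np.dot(x, x)) * x

def risk_ratio(theta, n_rep=20000, seed=2):
    # Risk ratio R(delta_JS^+, theta) / R(X, theta), with R(X, theta) = p
    rng = np.random.default_rng(seed)
    p = theta.size
    r = np.mean([np.sum((js_plus(rng.normal(theta, 1.0, p)) - theta) ** 2)
                 for _ in range(n_rep)])
    return float(r / p)

p = 6
for lam in [0.0, 1.0, 4.0, 16.0]:         # lambda = ||theta||^2
    theta = np.full(p, np.sqrt(lam / p))  # any theta with ||theta||^2 = lam
    print(lam, round(risk_ratio(theta), 3))  # ratio rises toward 1 as lambda grows
```

Since only ‖θ‖² matters here, the direction of θ is chosen arbitrarily; the printed ratios stay below 1, which is consistent with the minimaxity of δ_JS⁺.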
Acknowledgements: The author is thankful to the referees for their invaluable comments and suggestions, which put the article in its present shape.

Conflict of interest:
The author states no conflict of interest.