One of the most common challenges in multivariate statistical analysis is estimating the mean parameters. A well-known approach to this problem is the maximum likelihood estimator (MLE). However, the MLE becomes inefficient when the parameter space is high-dimensional. A popular estimator that addresses this issue is the James-Stein estimator. We therefore use the shrinkage method based on the balanced loss function to construct estimators for the mean parameters of the multivariate normal (MVN) distribution that dominate both the MLE and the James-Stein estimator. Two classes of shrinkage estimators that generalize the James-Stein estimator are established. We study their domination and minimaxity properties relative to the MLE, and their performance relative to the James-Stein estimator. The efficiency of the proposed estimators is explored through simulation studies.
Estimating the mean parameters is one of the most frequently encountered problems in multivariate statistical analysis. Various studies have addressed this issue in the context of the MVN distribution. When the dimension of the parameter space is at least three, the MLE is inadmissible; its limitations were demonstrated by Stein  and James and Stein .
A common strategy for improving the MLE is the shrinkage estimation approach, which shrinks the components of the MLE toward zero. Shrinkage estimation has been used to improve various estimators, such as the ordinary least squares estimator , and preliminary test and Stein-type shrinkage ridge estimators in robust regression . In the context of improving the estimation of the mean of the MVN distribution, Khursheed  studied the admissibility of a family of shrinkage estimators and their domination over the MLE. Baranchik  and Shinozaki  also studied the minimaxity of some shrinkage estimators. In addition, several studies have examined the minimaxity and domination properties of various shrinkage estimators under the Bayesian framework, including Efron and Morris [8,9], Berger and Strawderman , Benkhaled and Hamdaoui , Hamdaoui et al. [12,13], and Zinodiny et al. . Most of these studies used the quadratic loss function to compute the risk function.
This paper introduces a new class of shrinkage estimators that dominate both the James-Stein estimator and the MLE. To obtain a competitive estimator, one must balance goodness of fit against precision of estimation. This trade-off can be captured by implementing the balanced loss function in the estimation procedure. The balanced loss function was proposed by Zellner , and its performance and applications have been discussed by Sanjari Farsipour and Asgharzadeh , Jafari Jozani et al. , and Selahattin and Issam .
We consider a random vector that is normally distributed with an unknown mean vector and covariance matrix , where is the dimension of the parameter space and is the identity matrix. As the main objective of this paper is to propose a new estimator of , we estimate the unknown parameter by ( ). We then construct a new class of shrinkage estimators of derived from the MLE. Specifically, the new class is obtained by modifying the James-Stein estimator: we add a term of the form to the James-Stein estimator , where and are real constants that both depend on the integer parameters and . We show that these estimators are minimax and dominate the James-Stein estimator for any values of and . The balanced loss function is used in the computation of the risk function to compare the efficiency of the proposed estimators with that of the James-Stein estimator.
The rest of this paper is organized as follows: In Section 2, we establish the minimaxity of the estimators defined by . Section 3 introduces the new class of shrinkage estimators and its domination criterion over the James-Stein estimator. The efficiency of the new estimator classes is explored through simulation studies in Section 4. Finally, we conclude our work in Section 5.
2 A class of minimax shrinkage estimators
We assume that the random vector follows an MVN distribution with mean vector and covariance matrix , where the parameters and are unknown. Thus, the quantity follows a non-central chi-square distribution with degrees of freedom and non-centrality parameter . As the aim of this paper is to establish an effective estimator of the mean parameter , we consider the statistic ( ) as an estimator of the unknown parameter . For any estimator of , the balanced squared error loss function is defined as follows:
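As a quick numerical illustration of this distributional fact, the following sketch draws many normal vectors and checks that the scaled squared norm matches the mean of the corresponding non-central chi-square distribution. The dimension, mean vector, and variance used below are illustrative values chosen for this example, not quantities from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
p, sigma2 = 6, 2.0                       # illustrative dimension and variance
theta = np.full(p, 1.5)                  # illustrative true mean vector
lam = theta @ theta / sigma2             # non-centrality parameter ||theta||^2 / sigma^2

# Draw X ~ N(theta, sigma^2 I_p) many times and form ||X||^2 / sigma^2.
X = rng.normal(theta, np.sqrt(sigma2), size=(200_000, p))
q = (X ** 2).sum(axis=1) / sigma2

# A non-central chi-square with p degrees of freedom and non-centrality lam
# has mean p + lam; the empirical mean should agree closely.
assert abs(q.mean() - (p + lam)) < 0.1
```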
where is the target estimator of , is the weight given to the closeness of the estimator to the target , and is the relative weight attributed to the accuracy of the estimator . The risk function associated with the loss function is defined as follows:
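The balanced squared error loss can be sketched in code. The function below implements Zellner's standard form, a weighted sum of the squared distance to a target estimator and the squared distance to the true mean; the names `omega` and `delta0` are chosen for this illustration.

```python
import numpy as np

def balanced_loss(delta, theta, delta0, omega):
    """Zellner-type balanced squared error loss:
    omega * ||delta - delta0||^2        (closeness to the target estimator)
    + (1 - omega) * ||delta - theta||^2 (accuracy about the true mean)."""
    delta, theta, delta0 = map(np.asarray, (delta, theta, delta0))
    return (omega * np.sum((delta - delta0) ** 2)
            + (1.0 - omega) * np.sum((delta - theta) ** 2))

theta = np.array([1.0, -2.0, 0.5])   # illustrative true mean
x = np.array([1.2, -1.8, 0.1])       # illustrative observation / MLE

# With omega = 0 the balanced loss reduces to the usual quadratic loss.
assert balanced_loss(x, theta, x, 0.0) == np.sum((x - theta) ** 2)
```

The associated risk is the expectation of this loss over the sampling distribution, which is how the simulation comparisons later in the paper are framed.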
Benkhaled et al.  demonstrated that the MLE of is , with risk function . This shows that is minimax and inadmissible for . Consequently, the minimaxity property also holds for any estimator that dominates .
Now, let us consider the estimator
where is a real constant that may depend on the values of the parameters and .
The associated risk function of the estimator given in equation (2) based on the balanced loss function given in equation (1) is
The last equality follows from the independence of the two random variables and .
where and for all , . Then, based on Lemma 1 given in Stein , we get
From Proposition 2.1, the minimaxity of the estimator and its domination over the MLE are achieved under the following condition:
Thus, the risk function is minimized at the optimal value ( ) as follows:
Then, by considering , we get the James-Stein estimator
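For concreteness, here is a sketch of the James-Stein estimator together with a Monte Carlo check that its quadratic risk falls below that of the MLE. The code uses the classical known-variance form with shrinkage constant (p - 2); this is the textbook version, used here as an assumption, since the exact constants of this paper's unknown-variance setting differ.

```python
import numpy as np

def james_stein(x, sigma2):
    """Classical James-Stein estimator of a normal mean (known variance):
    delta_JS = (1 - (p - 2) * sigma2 / ||x||^2) * x, effective for p >= 3."""
    x = np.asarray(x, dtype=float)
    p = x.size
    return (1.0 - (p - 2) * sigma2 / (x @ x)) * x

# Monte Carlo check of domination at theta = 0 (where the gain is largest):
# the MLE risk under quadratic loss is p * sigma2, and delta_JS should do better.
rng = np.random.default_rng(1)
p, sigma2 = 8, 1.0
theta = np.zeros(p)
X = rng.normal(theta, np.sqrt(sigma2), size=(50_000, p))
shrink = 1.0 - (p - 2) * sigma2 / (X ** 2).sum(axis=1, keepdims=True)
mse_js = ((shrink * X - theta) ** 2).sum(axis=1).mean()
assert mse_js < p * sigma2
```

At the origin the theoretical James-Stein risk in this known-variance setting is 2 (versus 8 for the MLE), which the Monte Carlo estimate reproduces closely.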
From Proposition 2.1, the risk function of is expressed as follows:
Based on equation (5), the positive-part James-Stein estimator can be defined as follows:
where , and its risk function with respect to is given by the following formula:
where denotes the indicator function of the set . Equations (6) and (8) show that both and are less than , which proves the domination and minimaxity of the estimators and over the MLE.
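The positive-part modification simply truncates the shrinkage factor at zero, which prevents the sign reversal that the plain James-Stein estimator exhibits when the observation has small norm. A minimal sketch, again using the textbook known-variance constants as an assumption:

```python
import numpy as np

def james_stein_plus(x, sigma2):
    """Positive-part James-Stein estimator: the shrinkage factor
    max(0, 1 - (p - 2) * sigma2 / ||x||^2) never goes negative,
    so the estimate never points opposite to the observation."""
    x = np.asarray(x, dtype=float)
    p = x.size
    shrink = max(0.0, 1.0 - (p - 2) * sigma2 / (x @ x))
    return shrink * x

# When ||x||^2 < (p - 2) * sigma2, the plain James-Stein factor is negative
# and the estimate flips sign; the positive part shrinks all the way to zero.
x_small = np.array([0.1, -0.1, 0.05, 0.02])
assert np.allclose(james_stein_plus(x_small, 1.0), 0.0)
```

This truncation is exactly why the positive-part estimator improves on the plain James-Stein estimator pointwise in the small-norm region.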
3 The improved shrinkage estimators of the James-Stein estimator
In this section, we construct a class of shrinkage estimators that dominates the James-Stein estimator . This class is a modified version of . Specifically, we extend , given in equation (5), by adding the term , where plays a role analogous to that of in equation (2). We then investigate the superiority of these new estimators over the James-Stein estimator . The modified James-Stein estimator is given by the following formula:
The associated risk function of the estimator given in equation (9) based on the balanced loss function given in equation (1) is
where and for .
where the last equality follows from the independence of the two random variables and . Thus,
Then, by making the transformation , where for , and using Lemma 1 given in Stein , we get
Under the balanced loss function , the estimator with and
dominates the James-Stein estimator .
According to Proposition 3.1, we have
Following Lemma 2 given in the study by Benkhaled et al. , we obtain
The right-hand side of the above inequality is minimized at the optimal value of as follows:
Then, by replacing by in equation (11), we obtain
4 Simulation results
Here we conduct a simulation study to compare the efficiency of the proposed estimators and with the estimators , and the MLE. We set in the estimator . The comparison is based on the risk ratios of these estimators to the MLE, denoted as follows: , , , and . All estimators are considered as functions of .
Figures 1, 2, 3, 4, and 5 show the curves of the risk ratios for simulated values of in the interval , and for relatively low and high values of , , and . The risk ratio of the MLE is represented by the horizontal line at one. The gap between the curves indicates the magnitude of the gain of an estimator. We observe that the curves of all risk ratios, for the different sets of , , and values, lie entirely below 1, which indicates that these estimators dominate the MLE . Consequently, these estimators are minimax.
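The risk-ratio curves of such figures can be reproduced with a short Monte Carlo sketch. The function below estimates the ratio of the James-Stein risk to the MLE risk under quadratic loss, using the classical known-variance form as an assumption (the paper's estimators and its balanced-loss weighting are more general); ratios below 1 indicate domination of the MLE.

```python
import numpy as np

def risk_ratio_js(theta, sigma2, n_rep=40_000, seed=0):
    """Monte Carlo estimate of R(delta_JS) / R(MLE) under quadratic loss.
    The MLE risk is p * sigma2 exactly, so only delta_JS needs simulation."""
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta, dtype=float)
    p = theta.size
    X = rng.normal(theta, np.sqrt(sigma2), size=(n_rep, p))
    shrink = 1.0 - (p - 2) * sigma2 / (X ** 2).sum(axis=1, keepdims=True)
    mse_js = ((shrink * X - theta) ** 2).sum(axis=1).mean()
    return mse_js / (p * sigma2)

p, sigma2 = 10, 1.0
ratios = [risk_ratio_js(np.full(p, t), sigma2) for t in (0.0, 0.5, 2.0)]

# The ratio stays below 1 everywhere and rises toward 1 as ||theta|| grows,
# matching the qualitative shape of the risk-ratio curves described above.
assert all(r < 1.0 for r in ratios)
assert ratios[0] < ratios[1] < ratios[2]
```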
Among these estimators, the positive-part James-Stein estimator ( ) was the most efficient for values of less than approximately 10; that is, dominates all the considered estimators there. We also note that the estimator dominates the James-Stein estimator for the various values of , , and , with a larger gain for low values of . The gains of the estimators , , and were very similar over a certain range of values, which depends on the combination of the values of and . To study this similarity, we conducted simulation studies for all combinations of the selected values of and , and for different sets of values of and .
Tables 1, 2, 3, and 4 show the risk ratios of the estimators , , and ; each cell reports the risk ratios of these estimators in order. We observe a strong relationship between the gain in the risk ratios and the values of and : the gain of all risk ratios was large for small values of and and tended to vanish as and increased. Differences in the gains of the risk ratios were observed for small values of ; such differences indicate the domination of one estimator over another. Thus, the estimator dominated both and for small values of . However, as and increased, the differences in the gains became negligible (i.e., there was no improvement of the proposed estimators over the James-Stein estimator). The other parameters and also influence the gain of the estimators: the gain was large for large values of and under fixed values of . In particular, increasing had a more significant influence on the gain than increasing . This means that large values of , , and , together with a value of close to zero, lead to a larger gain and hence a significant improvement. We therefore conclude that the improvement of the considered estimators is clearly affected by the values of the parameters , , , and .
In this paper, we constructed a new class of shrinkage estimators that dominates the James-Stein estimator for estimating the mean of the MVN distribution , where is unknown. We implemented the balanced squared error loss function in the risk function of the estimators in order to compare their efficiency. We first established the minimaxity of the class of estimators defined by , and found the minimum risk of this class, which yields the James-Stein estimator. We then constructed a new class of shrinkage estimators as a modified version of the James-Stein estimator, obtained by adding a term to it. The efficiency of the constructed estimators was explored through simulation studies under various values of the model parameters, and it was shown that the constructed estimators beat the James-Stein estimator under the balanced loss function.
An extension of this work would be to apply similar procedures in the Bayesian framework and to explore possible shrinkage estimators for the mean parameters of the MVN distribution, such as ridge-type estimators.
The authors are very grateful to the editor and the referees for their valuable suggestions and advice, which enhanced the whole paper.
Funding information: This research received no external funding.
Conflict of interest: The authors state no conflict of interest.
 C. Stein, Inadmissibility of the usual estimator for the mean of a multivariate normal distribution, in: Proceedings of the Third Berkeley Symposium on Mathematical Statistics and Probability, Vol. 1: Contributions to the Theory of Statistics, University of California Press, Berkeley, 1956, pp. 197–206. DOI: 10.1525/9780520313880-018.
 W. James and C. Stein, Estimation with quadratic loss, in: Proceedings of the Fourth Berkeley Symposium on Mathematical Statistics and Probability, Vol. 1: Contributions to the Theory of Statistics, University of California Press, Berkeley, 1961, pp. 361–379. DOI: 10.1007/978-1-4612-0919-5_30.
 O. Nimet and K. Selahattin, Risk performance of some shrinkage estimators, Comm. Statist. Simulation Comput. 50 (2021), no. 2, 323–342. DOI: 10.1080/03610918.2018.1554116.
 M. Norouzirad and M. Arashi, Preliminary test and Stein-type shrinkage ridge estimators in robust regression, Statist. Papers 60 (2019), no. 6, 1849–1882. DOI: 10.1007/s00362-017-0899-3.
 A. Khursheed, A family of admissible minimax estimators of the mean of a multivariate normal distribution, Ann. Statist. 1 (1973), 517–525. DOI: 10.1214/aos/1176342417.
 A. J. Baranchik, Inadmissibility of maximum likelihood estimators in some multiple regression problems with three or more independent variables, Ann. Statist. 1 (1973), 312–321. DOI: 10.1214/aos/1176342368.
 N. Shinozaki, A note on estimating the mean vector of a multivariate normal distribution with general quadratic loss function, Keio Engrg. Rep. 27 (1974), 105–112.
 B. Efron and C. N. Morris, Stein's estimation rule and its competitors: An empirical Bayes approach, J. Amer. Statist. Assoc. 68 (1973), 117–130. DOI: 10.1080/01621459.1973.10481350.
 B. Efron and C. N. Morris, Data analysis using Stein's estimator and its generalizations, J. Amer. Statist. Assoc. 70 (1975), 311–319. DOI: 10.1080/01621459.1975.10479864.
 J. O. Berger and W. E. Strawderman, Choice of hierarchical priors: Admissibility in estimation of normal means, Ann. Statist. 24 (1996), 931–951. DOI: 10.1214/aos/1032526950.
 A. Benkhaled and A. Hamdaoui, General classes of shrinkage estimators for the multivariate normal mean with unknown variance: Minimaxity and limit of risk ratios, Kragujevac J. Math. 46 (2019), no. 2, 193–213. DOI: 10.46793/KgJMat2202.193B.
 A. Hamdaoui, A. Benkhaled, and M. Terbeche, Baranchick-type estimators of a multivariate normal mean under the general quadratic loss function, J. Sib. Fed. Univ. Math. Phys. 13 (2020), no. 5, 608–621. DOI: 10.17516/1997-1397-2020-13-5-608-621.
 A. Hamdaoui, A. Benkhaled, and M. Mezouar, Minimaxity and limits of risk ratios of shrinkage estimators of a multivariate normal mean in the Bayesian case, Stat. Optim. Inf. Comput. 8 (2020), no. 2, 507–520. DOI: 10.19139/soic-2310-5070-735.
 S. Zinodiny, S. Rezaei, and S. Nadarajah, Bayes minimax estimation of the mean matrix of matrix-variate normal distribution under balanced loss function, Statist. Probab. Lett. 125 (2017), 110–120. DOI: 10.1016/j.spl.2017.02.003.
 A. Zellner, Bayesian and non-Bayesian estimation using balanced loss functions, in: J. O. Berger, S. S. Gupta (eds.), Statistical Decision Theory and Related Topics, Vol. 8, Springer, New York, 1994, pp. 337–390. DOI: 10.1007/978-1-4612-2618-5_28.
 N. Sanjari Farsipour and A. Asgharzadeh, Estimation of a normal mean relative to balanced loss functions, Statist. Papers 45 (2004), 279–286. DOI: 10.1007/BF02777228.
 M. Jafari Jozani, A. Leblanc, and E. Marchand, On continuous distribution functions, minimax and best invariant estimators and integrated balanced loss functions, Canad. J. Statist. 42 (2014), 470–486. DOI: 10.1002/cjs.11217.
 K. Selahattin and D. Issam, The optimal extended balanced loss function estimators, J. Comput. Appl. Math. 345 (2019), 86–98. DOI: 10.1016/j.cam.2018.06.021.
 A. Benkhaled, M. Terbeche, and A. Hamdaoui, Polynomials shrinkage estimators of a multivariate normal mean, Stat. Optim. Inf. Comput. (2021). DOI: 10.19139/soic-2310-5070-1095.
 C. Stein, Estimation of the mean of a multivariate normal distribution, Ann. Statist. 9 (1981), 1135–1151. DOI: 10.1214/aos/1176345632.
© 2022 Abdelkader Benkhaled et al., published by De Gruyter
This work is licensed under the Creative Commons Attribution 4.0 International License.