Abstract
One of the most common challenges in multivariate statistical analysis is estimating the mean parameters. A well-known estimator of the mean parameters is the maximum likelihood estimator (MLE). However, the MLE becomes inefficient when the parameter space is large-dimensional. A popular estimator that tackles this issue is the James-Stein estimator. We therefore use the shrinkage method based on the balanced loss function to construct estimators for the mean parameters of the multivariate normal (MVN) distribution that dominate both the MLE and the James-Stein estimator. Two classes of shrinkage estimators are established that generalize the James-Stein estimator. We study their minimaxity and their domination of the MLE, as well as their performance relative to the James-Stein estimator. The efficiency of the proposed estimators is explored through simulation studies.
1 Introduction
Estimating the mean parameters is one of the most frequently encountered problems in multivariate statistical analysis. Various studies have dealt with this issue in the context of the MVN distribution. When the dimension of the parameter space is three or more, the MLE is inadmissible; this limitation was shown by Stein [1] and James and Stein [2].
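For concreteness, in the standard setting $X \sim \mathcal{N}_p(\theta, \sigma^2 I_p)$ with known $\sigma^2$ and $p \ge 3$, the classical James-Stein estimator and its quadratic risk take the familiar form

```latex
\delta^{\mathrm{JS}}(X)
  = \left(1 - \frac{(p-2)\,\sigma^{2}}{\|X\|^{2}}\right) X,
\qquad
R\!\left(\delta^{\mathrm{JS}},\theta\right)
  = p\,\sigma^{2} - (p-2)^{2}\sigma^{4}\,
    \mathbb{E}\!\left[\frac{1}{\|X\|^{2}}\right]
  < p\,\sigma^{2} = R(X,\theta),
```

so $\delta^{\mathrm{JS}}$ dominates the MLE $X$ uniformly in $\theta$ under quadratic loss.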
A common strategy for improving on the MLE is shrinkage estimation, which shrinks the components of the MLE toward zero. The shrinkage approach has been used to improve various estimators, such as the ordinary least squares estimator [3] and preliminary test and Stein-type shrinkage ridge estimators in robust regression [4]. In the context of estimating the mean of the MVN distribution, Khursheed [5] studied the admissibility and minimaxity of a family of shrinkage estimators of the mean. Baranchik [6] and Shinozaki [7] also studied the minimaxity of some shrinkage estimators. In addition, several studies have examined the minimaxity and domination properties of various shrinkage estimators in the Bayesian framework, including Efron and Morris [8,9], Berger and Strawderman [10], Benkhaled and Hamdaoui [11], Hamdaoui et al. [12,13], and Zinodiny et al. [14]. Most of these studies have used the quadratic loss function to compute the risk function.
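The improvement brought by shrinkage can be checked numerically. The sketch below is a minimal illustration, assuming the classical James-Stein form with known unit variance (not the specific classes studied in this paper); it compares the Monte Carlo quadratic risk of the MLE and the James-Stein estimator:

```python
import numpy as np

rng = np.random.default_rng(0)

def james_stein(x, sigma2=1.0):
    # Classical James-Stein estimator of a p-variate normal mean
    # (requires p >= 3): shrink the observation x toward the origin.
    p = x.shape[0]
    return (1.0 - (p - 2) * sigma2 / np.dot(x, x)) * x

# Monte Carlo comparison of the quadratic risk E||delta(X) - theta||^2.
p = 10
theta = np.full(p, 0.5)                      # illustrative true mean
x = rng.normal(theta, 1.0, size=(20000, p))  # 20000 replications
risk_mle = np.mean(np.sum((x - theta) ** 2, axis=1))
risk_js = np.mean(
    np.sum((np.apply_along_axis(james_stein, 1, x) - theta) ** 2, axis=1)
)
print(risk_mle, risk_js)  # the James-Stein risk is strictly smaller
```

With `p = 10` the MLE risk is close to 10, while the James-Stein risk is markedly lower for a true mean near the origin.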
This paper introduces a new class of shrinkage estimators that dominate both the James-Stein estimator and the MLE. To be competitive, an estimator should combine precision of estimation with goodness of fit; this can be achieved by incorporating the balanced loss function into the estimation procedure. The balanced loss function was suggested by Zellner [15], and its performance and applications to estimators have been discussed by Sanjari Farsipour and Asgharzadeh [16], Jafari Jozani et al. [17], and Selahattin and Issam [18].
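As a reference point, a Zellner-type balanced loss can be sketched as a weighted combination of a goodness-of-fit term (distance to a target estimator, typically the MLE) and an estimation-precision term. The weight, target, and numbers below are illustrative assumptions, not the paper's specific choices:

```python
import numpy as np

def balanced_loss(delta, theta, delta0, omega):
    # Zellner-type balanced loss: omega weights goodness of fit
    # (distance to the target estimator delta0, e.g. the MLE), and
    # 1 - omega weights precision (distance to the true mean theta).
    fit = np.sum((delta - delta0) ** 2)
    precision = np.sum((delta - theta) ** 2)
    return omega * fit + (1.0 - omega) * precision

theta = np.array([1.0, 2.0, 3.0])   # illustrative true mean
x = np.array([1.2, 1.8, 3.5])       # observation; also the MLE target
delta = 0.9 * x                     # a shrinkage estimate
# omega = 0 recovers the usual quadratic loss.
print(balanced_loss(delta, theta, delta0=x, omega=0.0))
```

Setting `omega = 0` reduces the balanced loss to the quadratic loss used in most of the cited studies, which is why results under balanced loss generalize those under quadratic loss.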
Therefore, we consider the random vector
The rest of this paper is organized as follows: In Section 2, we establish the minimaxity of the estimators defined by
2 A class of minimax shrinkage estimators
We assume here the random variable
where
Benkhaled et al. [19] demonstrated that the MLE of
Now, let us consider the estimator
where
Proposition 2.1
The associated risk function of the estimator
Proof
The last equality follows from the independence of the two random variables
As,
where
Then,
From Proposition 2.1, the minimaxity and domination criterion of the estimator
Thus, the risk function
Then, by considering
From Proposition 2.1, the risk function of
Based on equation (5), the positive-part James-Stein estimator can be defined as follows:
where
where
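A minimal numerical sketch of the positive-part rule, assuming the classical shrinkage constant $(p-2)\sigma^2$ for illustration rather than the paper's equation (5):

```python
import numpy as np

def js_positive_part(x, sigma2=1.0):
    # Positive-part James-Stein rule: truncate the shrinkage factor
    # at zero so the estimate never reverses the sign of x.
    p = x.shape[0]
    factor = 1.0 - (p - 2) * sigma2 / np.dot(x, x)
    return max(factor, 0.0) * x

small = np.array([0.3, -0.2, 0.1, 0.2])  # ||small||^2 < p - 2: factor < 0
print(js_positive_part(small))           # collapses to the zero vector
big = np.array([5.0, 5.0, 5.0, 5.0])     # ||big||^2 = 100: factor = 0.98
print(js_positive_part(big))             # mild shrinkage toward the origin
```

Truncating the factor is what removes the sign-reversal pathology of the plain James-Stein estimator for observations close to the origin.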
3 The improved shrinkage estimators of the James-Stein estimator
In this section, we construct a class of shrinkage estimators that dominates the James-Stein estimator
Proposition 3.1
The associated risk function of the estimator
where
Proof
where the last equality is obtained as a result of the independence between the two random variables
Then, by making the transformation
Thus,
Theorem 3.1
Under the balanced loss function
dominates the James-Stein estimator
Proof
According to Proposition 3.1, we have
Following Lemma 2 given in the study by Benkhaled et al. [19], we obtain
Then,
The right-hand side of the above inequality is minimized at the optimal value of
Then, by replacing
4 Simulation results
We conduct a simulation study to compare the efficiency of the proposed estimators
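A Monte Carlo computation of risk ratios of the kind reported below can be sketched as follows; this is an illustration using the classical James-Stein estimator under quadratic loss with unit variance, not the paper's exact simulation design:

```python
import numpy as np

rng = np.random.default_rng(1)

def james_stein(x):
    # Classical James-Stein estimator with sigma^2 = 1.
    p = x.shape[0]
    return (1.0 - (p - 2) / np.dot(x, x)) * x

def risk_ratio(estimator, theta, n_rep=50000):
    # Monte Carlo estimate of R(estimator, theta) / R(MLE, theta)
    # under quadratic loss; the MLE risk is p when sigma^2 = 1.
    p = theta.shape[0]
    x = rng.normal(theta, 1.0, size=(n_rep, p))
    est = np.apply_along_axis(estimator, 1, x)
    return np.mean(np.sum((est - theta) ** 2, axis=1)) / p

# Near the origin the James-Stein risk ratio is far below 1
# (its exact value at theta = 0 is 2/p).
print(risk_ratio(james_stein, np.zeros(8)))
```

A risk ratio below 1 means the estimator improves on the MLE at that value of the mean, which is how the figures and tables below should be read.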
Figures 1, 2, 3, 4, and 5 show the curves of the risk ratios for simulated values of
Among these estimators, the positive-part James-Stein estimator (
Tables 1, 2, 3, and 4 show the risk ratios of the estimators




Table 1

          0.0     0.1     0.2     0.7     0.9
 1.2418   0.6362  0.6726  0.7090  0.8909  0.9636
          0.5150  0.5635  0.6120  0.8545  0.9515
          0.4824  0.5371  0.5911  0.8516  0.9512
 5.0019   0.7501  0.7751  0.8001  0.9250  0.9750
          0.6668  0.7001  0.7334  0.9000  0.9667
          0.6502  0.6867  0.7228  0.8985  0.9665
10.4311   0.8326  0.8494  0.8661  0.9498  0.9833
          0.7769  0.7992  0.8215  0.9330  0.9777
          0.7694  0.7932  0.8167  0.9324  0.9776
20.0000   0.8962  0.9066  0.9170  0.9689  0.9896
          0.8616  0.8755  0.8893  0.9585  0.9865
          0.8589  0.8733  0.8876  0.9582  0.9861




Table 2

          0.0     0.1     0.2     0.7     0.9
 1.2418   0.5758  0.6182  0.6606  0.8727  0.9576
          0.4343  0.4909  0.5475  0.8303  0.9434
          0.4049  0.4671  0.5286  0.8276  0.9431
 5.0019   0.6738  0.7064  0.7390  0.9021  0.9674
          0.5651  0.6086  0.6521  0.8695  0.9565
          0.5468  0.5938  0.6404  0.8679  0.9563
10.4311   0.7585  0.7827  0.8068  0.9276  0.9758
          0.6781  0.7102  0.7424  0.9034  0.9678
          0.6678  0.7020  0.7359  0.9025  0.9677
20.0000   0.8363  0.8527  0.8690  0.9509  0.9836
          0.7817  0.8036  0.8254  0.9345  0.9782
          0.7770  0.7998  0.8224  0.9341  0.9781




Table 3

          0.0     0.1     0.2     0.7     0.9
 1.2418   0.5591  0.6032  0.6473  0.8677  0.9559
          0.4121  0.4709  0.5297  0.8236  0.9412
          0.3753  0.4411  0.5062  0.8203  0.9408
 5.0019   0.6971  0.7274  0.7577  0.9091  0.9697
          0.5961  0.6365  0.6769  0.8788  0.9596
          0.5763  0.6204  0.6642  0.8770  0.9594
10.4311   0.7971  0.8174  0.8377  0.9391  0.9797
          0.7295  0.7566  0.7836  0.9188  0.9729
          0.7203  0.7491  0.7777  0.9180  0.9729
20.0000   0.8742  0.8868  0.8994  0.9623  0.9874
          0.8323  0.8490  0.8658  0.9497  0.9832
          0.8288  0.8463  0.8636  0.9494  0.9832




Table 4

          0.0     0.1     0.2     0.7     0.9
 1.2418   0.4858  0.5372  0.5886  0.8457  0.9486
          0.3144  0.3829  0.4515  0.7943  0.9314
          0.2854  0.3595  0.4330  0.7917  0.9311
 5.0019   0.6046  0.6442  0.6837  0.8814  0.9605
          0.4728  0.5255  0.5783  0.8418  0.9473
          0.4540  0.5103  0.5662  0.8402  0.9471
10.4311   0.7073  0.7366  0.7659  0.9122  0.9707
          0.6098  0.6488  0.6878  0.8829  0.9610
          0.5988  0.6399  0.6808  0.8819  0.9609
20.0000   0.8016  0.8214  0.8413  0.9405  0.9801
          0.7354  0.7619  0.7884  0.9206  0.9735
          0.7303  0.7577  0.7851  0.9202  0.9735
5 Conclusion
In this paper, we constructed a new class of shrinkage estimators that dominates the James-Stein estimator for the estimation of the mean
An extension of this work is to apply similar procedures in the Bayesian framework and to explore other possible shrinkage estimators for the mean parameters of the MVN distribution, such as ridge estimators.
Acknowledgements
The authors are very grateful to the editor and the referees for their valuable suggestions and advice, which improved the paper.

Funding information: This research received no external funding.

Conflict of interest: The authors state no conflict of interest.
References
[1] C. Stein, Inadmissibility of the usual estimator for the mean of a multivariate normal distribution, in: Proceedings of the 3rd Berkeley Symposium on Mathematical Statistics and Probability, Volume 1: Contributions to the Theory of Statistics, University of California Press, Berkeley, 1956, pp. 197–206.
[2] W. James and C. Stein, Estimation with quadratic loss, in: Proceedings of the 4th Berkeley Symposium on Mathematical Statistics and Probability, Volume 1: Contributions to the Theory of Statistics, University of California Press, Berkeley, 1961, pp. 361–379.
[3] O. Nimet and K. Selahattin, Risk performance of some shrinkage estimators, Comm. Statist. Simulation Comput. 50 (2021), no. 2, 323–342.
[4] M. Norouzirad and M. Arashi, Preliminary test and Stein-type shrinkage ridge estimators in robust regression, Statist. Papers 60 (2019), no. 6, 1849–1882.
[5] A. Khursheed, A family of admissible minimax estimators of the mean of a multivariate normal distribution, Ann. Statist. 1 (1973), 517–525.
[6] A. J. Baranchik, Inadmissibility of maximum likelihood estimators in some multiple regression problems with three or more independent variables, Ann. Statist. 1 (1973), 312–321.
[7] N. Shinozaki, A note on estimating the mean vector of a multivariate normal distribution with general quadratic loss function, Keio Engrg. Rep. 27 (1974), 105–112.
[8] B. Efron and C. N. Morris, Stein's estimation rule and its competitors: An empirical Bayes approach, J. Amer. Statist. Assoc. 68 (1973), 117–130.
[9] B. Efron and C. N. Morris, Data analysis using Stein's estimator and its generalizations, J. Amer. Statist. Assoc. 70 (1975), 311–319.
[10] J. O. Berger and W. E. Strawderman, Choice of hierarchical priors: Admissibility in estimation of normal means, Ann. Statist. 24 (1996), 931–951.
[11] A. Benkhaled and A. Hamdaoui, General classes of shrinkage estimators for the multivariate normal mean with unknown variance: Minimaxity and limit of risk ratios, Kragujevac J. Math. 46 (2019), no. 2, 193–213.
[12] A. Hamdaoui, A. Benkhaled, and M. Terbeche, Baranchik-type estimators of a multivariate normal mean under the general quadratic loss function, J. Sib. Fed. Univ. Math. Phys. 13 (2020), no. 5, 608–621.
[13] A. Hamdaoui, A. Benkhaled, and M. Mezouar, Minimaxity and limits of risk ratios of shrinkage estimators of a multivariate normal mean in the Bayesian case, Stat. Optim. Inf. Comput. 8 (2020), no. 2, 507–520.
[14] S. Zinodiny, S. Rezaei, and S. Nadarajah, Bayes minimax estimation of the mean matrix of matrix-variate normal distribution under balanced loss function, Statist. Probab. Lett. 125 (2017), 110–120, http://doi.org/10.1016/j.spl.2017.02.003.
[15] A. Zellner, Bayesian and non-Bayesian estimation using balanced loss functions, in: J. O. Berger and S. S. Gupta (eds.), Statistical Decision Theory and Related Topics V, Springer, New York, 1994, pp. 337–390.
[16] N. Sanjari Farsipour and A. Asgharzadeh, Estimation of a normal mean relative to balanced loss functions, Statist. Papers 45 (2004), 279–286.
[17] M. Jafari Jozani, A. Leblanc, and E. Marchand, On continuous distribution functions, minimax and best invariant estimators and integrated balanced loss functions, Canad. J. Statist. 42 (2014), 470–486.
[18] K. Selahattin and D. Issam, The optimal extended balanced loss function estimators, J. Comput. Appl. Math. 345 (2019), 86–98.
[19] A. Benkhaled, M. Terbeche, and A. Hamdaoui, Polynomials shrinkage estimators of a multivariate normal mean, Stat. Optim. Inf. Comput. (2021), https://doi.org/10.19139/soic231050701095.
© 2022 Abdelkader Benkhaled et al., published by De Gruyter
This work is licensed under the Creative Commons Attribution 4.0 International License.