
# Open Physics

### formerly Central European Journal of Physics

Editor-in-Chief: Seidel, Sally

Managing Editor: Lesna-Szreter, Paulina

IMPACT FACTOR 2018: 1.005

CiteScore 2018: 1.01

SCImago Journal Rank (SJR) 2018: 0.237
Source Normalized Impact per Paper (SNIP) 2018: 0.541

ICV 2017: 162.45

Open Access
Online ISSN: 2391-5471
Volume 15, Issue 1

# Statistical inferences with jointly type-II censored samples from two Pareto distributions

• Corresponding author: Department of Statistics, Faculty of Science - AL Faisaliah, King Abdulaziz University, P.O. Box 32691, Jeddah 21438, Saudi Arabia
Published Online: 2017-08-10 | DOI: https://doi.org/10.1515/phys-2017-0064

## Abstract

In several industries the product comes from more than one production line, and comparative life tests are then required. Sampling from the different production lines gives rise to the joint censoring scheme. In this article we consider the Pareto lifetime distribution under a jointly type-II censoring scheme. The maximum likelihood estimators (MLEs) of the model parameters are obtained, together with the corresponding approximate confidence intervals and bootstrap confidence intervals. Bayesian point estimates and credible intervals of the model parameters are also presented. A lifetime data set is analyzed for illustrative purposes, and Monte Carlo simulation results are presented to assess the performance of the proposed methods.

PACS: 02.50.Ga; 02.50.Ng; 02.50.Tt

## 1 Introduction

More precisely, suppose manufactured products come from two separate lines operating under the same conditions, and that two samples of sizes m and n are selected from these lines and placed on a life-testing experiment. To save cost and time, the experimenter terminates the experiment after a prefixed number of failures; until then, the type of each failed unit and its failure time are recorded. Different authors have dealt with this type of censoring and its statistics, such as Rao et al. [1], Basu [2], Johnson and Mehrotra [3], Mehrotra and Johnson [4], Bhattacharyya and Mehrotra [5], Mehrotra and Bhattacharyya [6], Balakrishnan and Rasouli [7], Rasouli and Balakrishnan [8], and recently Shafay et al. [9]. Kundu [10] presented a new two-sample type-II progressive censoring scheme.

Suppose that products come from two lines Ω1 and Ω2. From line Ω1, m i.i.d. lifetime random variables T1, T2, …, Tm follow a population with CDF F1(t) and PDF f1(t); likewise, from line Ω2, n i.i.d. lifetime random variables Y1, Y2, …, Yn follow a population with CDF F2(y) and PDF f2(y). Furthermore, let Z1 ≤ Z2 ≤ … ≤ Zm+n be the order statistics of the (m + n) pooled random variables {T1, T2, …, Tm, Y1, Y2, …, Yn}. Under the jointly type-II censoring scheme described above, the observed data set (Z, δ) consists of Z = (Z1, Z2, …, Zr) with 1 ≤ r < m + n and δ = (δ1, δ2, …, δr), where δi = 1 or 0 according to whether Zi is a T- or Y-failure. Writing mr = ∑i=1r δi for the number of T-failures and nr = r − mr for the number of Y-failures, the joint density of (Z, δ) is $f(z,\delta)=\frac{m!\,n!}{(m-m_r)!\,(n-n_r)!}\prod_{i=1}^{r}f_1(z_i)^{\delta_i}f_2(z_i)^{1-\delta_i}\times[1-F_1(z_r)]^{m-m_r}[1-F_2(z_r)]^{n-n_r}.$(1)
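As an illustration of this scheme, the following Python sketch (not part of the original paper; function names and the seed are mine) draws the two samples from the Pareto model used later in the paper by inverse-CDF sampling, pools them, and keeps the r smallest failures together with their type indicators:

```python
import numpy as np

rng = np.random.default_rng(1)

def pareto_sample(size, alpha, theta, rng):
    # Inverse-CDF sampling from F(x) = 1 - theta^alpha (theta + x)^(-alpha):
    # solving u = F(x) gives x = theta * (u^(-1/alpha) - 1), u ~ Uniform(0, 1).
    u = rng.uniform(size=size)
    return theta * (u ** (-1.0 / alpha) - 1.0)

def joint_type2_censor(t, y, r):
    # Pool the two samples, keep the r smallest failures, and record
    # delta_i = 1 for a T-failure and 0 for a Y-failure.
    pooled = np.concatenate([t, y])
    labels = np.concatenate([np.ones(len(t)), np.zeros(len(y))])
    order = np.argsort(pooled)[:r]
    return pooled[order], labels[order].astype(int)

# Parameter values borrowed from the illustrative example in Section 5.
t = pareto_sample(20, alpha=0.8, theta=2.0, rng=rng)
y = pareto_sample(20, alpha=0.5, theta=1.5, rng=rng)
z, delta = joint_type2_censor(t, y, r=30)
print(len(z), delta.sum(), len(z) - delta.sum())  # r, m_r, n_r
```

With r = 30 out of m + n = 40 units, each line necessarily contributes at least ten of the observed failures.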

The Pareto distribution is used to describe the distribution of wealth among individuals, because it captures rather well the fact that a large portion of the wealth of any society is owned by a small percentage of its people; it is likewise used to describe income distributions. In reliability studies the Pareto distribution is used in life tests to analyse lifetime data. Ref. [11] considered constant-stress partially accelerated life testing under the Pareto distribution with type-I censoring. More recently, researchers have discussed different applications of the Pareto distribution to many types of data sets: for new characterizations of the Pareto distribution see Nofal and El Gebaly [12], and for the odd Pareto families of distributions for modeling loss payment data see Mdziniso and Cooray [13].

The Pareto distribution is a power-law probability distribution used in the description of geophysical, scientific, social, actuarial, and many other types of observable phenomena, and it has been applied in various life-testing settings. The two-parameter Pareto distribution has probability density function (PDF) and cumulative distribution function (CDF) given, respectively, by $f_j(x;\alpha_j,\theta_j)=\alpha_j\theta_j^{\alpha_j}(\theta_j+x)^{-(\alpha_j+1)},\ x>0,\ \alpha_j>0,\ \theta_j>0,\ j=1,2,$(2) $F_j(x;\alpha_j,\theta_j)=1-\theta_j^{\alpha_j}(\theta_j+x)^{-\alpha_j},\ x>0.$(3)

The reliability function (RF) and hazard rate function (HF) are given, respectively, by $R_j(t)=\theta_j^{\alpha_j}(\theta_j+t)^{-\alpha_j},\ t>0,$(4) $H_j(t)=\alpha_j(\theta_j+t)^{-1},\ t>0.$(5)
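Equations (2)-(5) translate directly into Python (a vectorized NumPy sketch, added here for illustration):

```python
import numpy as np

# Direct transcriptions of eqs. (2)-(5) for the two-parameter Pareto model.
def pareto_pdf(x, alpha, theta):
    # eq. (2)
    return alpha * theta**alpha * (theta + x) ** -(alpha + 1.0)

def pareto_cdf(x, alpha, theta):
    # eq. (3)
    return 1.0 - theta**alpha * (theta + x) ** -alpha

def pareto_rf(t, alpha, theta):
    # eq. (4): R(t) = 1 - F(t)
    return theta**alpha * (theta + t) ** -alpha

def pareto_hf(t, alpha, theta):
    # eq. (5): H(t) = f(t) / R(t) simplifies to alpha / (theta + t)
    return alpha / (theta + t)
```

Note that the hazard (5) is decreasing in t, the usual heavy-tailed behaviour of the Pareto model.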

The Pareto distribution is used in different areas such as the analysis of business failure data and economic studies. Many authors have studied the Pareto distribution, such as Balkema and de Haan [14], Arnold [15], Vidondo et al. [16], Childs et al. [17], Al-Awadhi and Ghitany [18], and Howlader and Hossain [19]. Abd-Elfattah et al. [20] discussed Bayesian inference for the Pareto distribution. Hassan and Al-Ghamdi [21] used the Pareto distribution to determine the optimal times for changing the stress level under a simple stress scheme with a cumulative exposure model.

In this paper we discuss the joint type-II censoring scheme and its properties for estimating the Pareto distribution parameters; the results can be extended to other lifetime distributions and other censoring schemes. We obtain the MLEs of the unknown parameters, and from the normal approximation to the MLEs we construct approximate confidence intervals. For comparison purposes we also propose bootstrap confidence intervals. Bayesian point and interval estimators are obtained with the help of the MCMC method, and the results are compared with the MLEs and the bootstrap confidence intervals.

The rest of the article is organized as follows. The maximum likelihood estimators of the unknown parameters are derived, and the corresponding approximate confidence intervals are presented, in Section 2. Two bootstrap confidence intervals are discussed in Section 3. Bayesian estimation under the squared error loss function, via the MCMC method and based on jointly type-II censored Pareto samples, is developed in Section 4. A numerical example illustrating the results is presented in Section 5, and an assessment and comparison of the results by Monte Carlo simulation studies are given in Section 6. Finally, Section 7 comments on the simulation results.

## 2 Maximum likelihood estimation

Let (Z1, Z2, …, Zr) be the first r order statistics of the pooled sample {T1, …, Tm, Y1, …, Yn}, where the Ts and Ys are independent random samples from the two distributions in (2). Also let q = (α1, α2), η = (θ1, θ2), s = (m, n), l = (mr, nr) and ω = (δi, 1 − δi). Then the likelihood function under the joint censoring scheme, obtained from equations (2) and (3), is given by $L(q,\eta|\underline{z})\propto\prod_{j=1}^{2}q_j^{l_j}\eta_j^{s_jq_j}\exp\left\{-(q_j+1)\sum_{i=1}^{r}\omega_j\log(\eta_j+z_i)-q_j(s_j-l_j)\log(\eta_j+z_r)\right\}.$(6)

The natural logarithm of the likelihood function is then $\ell(q,\eta|\underline{z})\propto\sum_{j=1}^{2}\left[l_j\log q_j+s_jq_j\log\eta_j-(q_j+1)\sum_{i=1}^{r}\omega_j\log(\eta_j+z_i)-q_j(s_j-l_j)\log(\eta_j+z_r)\right].$(7)

## 2.1 MLEs

The likelihood equations are obtained by setting the first partial derivatives of (7) with respect to each qj and ηj, j = 1, 2, equal to zero: $\frac{\partial\ell(q,\eta|\underline{z})}{\partial q_j}=\frac{l_j}{q_j}+s_j\log\eta_j-\sum_{i=1}^{r}\omega_j\log(\eta_j+z_i)-(s_j-l_j)\log(\eta_j+z_r)=0$ and $\frac{\partial\ell(q,\eta|\underline{z})}{\partial\eta_j}=\frac{s_jq_j}{\eta_j}-(q_j+1)\sum_{i=1}^{r}\frac{\omega_j}{\eta_j+z_i}-\frac{q_j(s_j-l_j)}{\eta_j+z_r}=0,$ hence $q_j(\eta_j)=\frac{l_j}{T_j},$(8) and $\eta_j=s_j\left[\sum_{i=1}^{r}\frac{\omega_j}{\eta_j+z_i}+\frac{s_j-l_j}{\eta_j+z_r}+\frac{T_j}{l_j}\sum_{i=1}^{r}\frac{\omega_j}{\eta_j+z_i}\right]^{-1},$(9) where $T_j=\sum_{i=1}^{r}\omega_j\log(\eta_j+z_i)+(s_j-l_j)\log(\eta_j+z_r)-s_j\log\eta_j.$(10) The two nonlinear equations in (9) are solved numerically for θ1 and θ2, for example by a quasi-Newton-Raphson or fixed-point method, to obtain the MLEs θ̂1 and θ̂2, and hence α̂1 and α̂2 from (8).
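A numerical sketch in Python of this estimation route (not the authors' code; the simulated data, seed and search bounds are my choices). Instead of iterating (9) directly, it maximizes the profile log-likelihood: substituting (8) into (7) gives, up to constants, ℓp(ηj) = lj log(lj/Tj) − lj − ∑i ωj log(ηj + zi), so only a one-dimensional search in each ηj is needed:

```python
import numpy as np
from scipy.optimize import minimize_scalar

def pareto_sample(size, alpha, theta, rng):
    # Inverse-CDF sampling from the Pareto CDF (3).
    u = rng.uniform(size=size)
    return theta * (u ** (-1.0 / alpha) - 1.0)

def profile_mle(z, w, s, bounds=(1e-6, 200.0)):
    """MLE of (alpha_j, theta_j) for one population.

    z: ordered jointly censored failure times; w: 0/1 indicators
    (delta for the T-population, 1 - delta for the Y-population);
    s: full sample size (m or n). `bounds` is an illustrative choice.
    """
    l = int(w.sum())
    zr = z[-1]

    def T(eta):  # eq. (10); always positive for eta > 0
        return ((w * np.log(eta + z)).sum()
                + (s - l) * np.log(eta + zr) - s * np.log(eta))

    # Minimize the negative profile log-likelihood
    #   -l_p(eta) = l*log(T(eta)) + sum_i w_i*log(eta + z_i)  (+ const).
    def neg_profile(eta):
        return l * np.log(T(eta)) + (w * np.log(eta + z)).sum()

    eta_hat = minimize_scalar(neg_profile, bounds=bounds, method="bounded").x
    return l / T(eta_hat), eta_hat  # (alpha_hat, theta_hat), via eq. (8)

rng = np.random.default_rng(0)
t = pareto_sample(20, 0.8, 2.0, rng)
y = pareto_sample(20, 0.5, 1.5, rng)
z = np.sort(np.concatenate([t, y]))[:30]     # joint type-II censoring, r = 30
delta = np.isin(z, t).astype(float)          # 1 for T-failures, 0 for Y-failures
a1, th1 = profile_mle(z, delta, 20)
a2, th2 = profile_mle(z, 1.0 - delta, 20)
```

Since Tj(ηj) > 0 for all ηj > 0, the profile objective is well defined everywhere on the search interval, which makes this route more robust than a raw fixed-point iteration of (9).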

## 2.2 Observed Fisher information

Taking the second partial derivatives of the log-likelihood function in (7), we have $\frac{\partial^2\ell(q,\eta|\underline{z})}{\partial q_j^2}=-\frac{l_j}{q_j^2},\ j=1,2,$(11) $\frac{\partial^2\ell(q,\eta|\underline{z})}{\partial\eta_j^2}=-\frac{s_jq_j}{\eta_j^2}+(q_j+1)\sum_{i=1}^{r}\frac{\omega_j}{(\eta_j+z_i)^2}+\frac{q_j(s_j-l_j)}{(\eta_j+z_r)^2},$(12) $\frac{\partial^2\ell(q,\eta|\underline{z})}{\partial q_j\partial\eta_j}=\frac{\partial^2\ell(q,\eta|\underline{z})}{\partial\eta_j\partial q_j}=\frac{s_j}{\eta_j}-\sum_{i=1}^{r}\frac{\omega_j}{\eta_j+z_i}-\frac{s_j-l_j}{\eta_j+z_r},$(13) $\frac{\partial^2\ell(q,\eta|\underline{z})}{\partial q_1\partial q_2}=\frac{\partial^2\ell(q,\eta|\underline{z})}{\partial q_2\partial q_1}=\frac{\partial^2\ell(q,\eta|\underline{z})}{\partial\eta_1\partial\eta_2}=\frac{\partial^2\ell(q,\eta|\underline{z})}{\partial\eta_2\partial\eta_1}=0,$(14) and $\frac{\partial^2\ell(q,\eta|\underline{z})}{\partial q_1\partial\eta_2}=\frac{\partial^2\ell(q,\eta|\underline{z})}{\partial\eta_2\partial q_1}=\frac{\partial^2\ell(q,\eta|\underline{z})}{\partial q_2\partial\eta_1}=\frac{\partial^2\ell(q,\eta|\underline{z})}{\partial\eta_1\partial q_2}=0.$(15)

The Fisher information matrix I(α1, θ1, α2, θ2) is obtained by taking the expectation of the negatives of eqs. (11)-(15). Under some mild regularity conditions, (α̂1, θ̂1, α̂2, θ̂2) is approximately multivariate normal with mean (α1, θ1, α2, θ2) and covariance matrix I−1(α1, θ1, α2, θ2). In practice, I−1(α1, θ1, α2, θ2) is usually estimated by $I_0^{-1}(\hat\alpha_1,\hat\theta_1,\hat\alpha_2,\hat\theta_2)$; a simpler and equally valid procedure is to use the approximation $(\hat\alpha_1,\hat\theta_1,\hat\alpha_2,\hat\theta_2)\sim N\big((\alpha_1,\theta_1,\alpha_2,\theta_2),\,I_0^{-1}(\hat\alpha_1,\hat\theta_1,\hat\alpha_2,\hat\theta_2)\big),$ where I0(α1, θ1, α2, θ2) is the observed information matrix $I_0=\begin{pmatrix}-\frac{\partial^2\ell(.|\underline{z})}{\partial\alpha_1^2}&-\frac{\partial^2\ell(.|\underline{z})}{\partial\alpha_1\partial\theta_1}&-\frac{\partial^2\ell(.|\underline{z})}{\partial\alpha_1\partial\alpha_2}&-\frac{\partial^2\ell(.|\underline{z})}{\partial\alpha_1\partial\theta_2}\\-\frac{\partial^2\ell(.|\underline{z})}{\partial\theta_1\partial\alpha_1}&-\frac{\partial^2\ell(.|\underline{z})}{\partial\theta_1^2}&-\frac{\partial^2\ell(.|\underline{z})}{\partial\theta_1\partial\alpha_2}&-\frac{\partial^2\ell(.|\underline{z})}{\partial\theta_1\partial\theta_2}\\-\frac{\partial^2\ell(.|\underline{z})}{\partial\alpha_2\partial\alpha_1}&-\frac{\partial^2\ell(.|\underline{z})}{\partial\alpha_2\partial\theta_1}&-\frac{\partial^2\ell(.|\underline{z})}{\partial\alpha_2^2}&-\frac{\partial^2\ell(.|\underline{z})}{\partial\alpha_2\partial\theta_2}\\-\frac{\partial^2\ell(.|\underline{z})}{\partial\theta_2\partial\alpha_1}&-\frac{\partial^2\ell(.|\underline{z})}{\partial\theta_2\partial\theta_1}&-\frac{\partial^2\ell(.|\underline{z})}{\partial\theta_2\partial\alpha_2}&-\frac{\partial^2\ell(.|\underline{z})}{\partial\theta_2^2}\end{pmatrix},$(16) with ℓ(.|z) = ℓ(α1, θ1, α2, θ2|z). From this approximate normality, approximate confidence intervals for α1, θ1, α2 and θ2 can be obtained: the 100(1 − 2γ)% approximate confidence intervals are $\hat\alpha_1\mp z_\gamma\sqrt{v_{11}},\quad\hat\theta_1\mp z_\gamma\sqrt{v_{22}},\quad\hat\alpha_2\mp z_\gamma\sqrt{v_{33}},\quad\hat\theta_2\mp z_\gamma\sqrt{v_{44}},$(17) where v11, v22, v33 and v44 are the diagonal elements of $I_0^{-1}(\hat\alpha_1,\hat\theta_1,\hat\alpha_2,\hat\theta_2)$ and zγ is the upper γ percentage point of the N(0, 1) distribution.
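The block structure implied by (14)-(15) can be exploited directly: I0 is block diagonal in (α1, θ1) and (α2, θ2). A Python sketch (the evaluation point and stand-in data below are placeholders for illustration, not values from the paper; for the intervals in (17) the matrix should be evaluated at the MLEs, where it is positive definite):

```python
import numpy as np

def info_block(alpha, theta, z, w, s):
    """2x2 observed-information block for one population,
    i.e. the negatives of eqs. (11)-(13)."""
    l = w.sum()
    zr = z[-1]
    i_aa = l / alpha**2                                   # -(11)
    i_tt = (s * alpha / theta**2
            - (alpha + 1) * (w / (theta + z) ** 2).sum()
            - alpha * (s - l) / (theta + zr) ** 2)        # -(12)
    i_at = (-s / theta + (w / (theta + z)).sum()
            + (s - l) / (theta + zr))                     # -(13)
    return np.array([[i_aa, i_at], [i_at, i_tt]])

def observed_info(params, z, delta, m, n):
    # eqs. (14)-(15): all cross-population second derivatives vanish,
    # so I_0 is block diagonal in (alpha_1, theta_1) and (alpha_2, theta_2).
    a1, t1, a2, t2 = params
    I0 = np.zeros((4, 4))
    I0[:2, :2] = info_block(a1, t1, z, delta, m)
    I0[2:, 2:] = info_block(a2, t2, z, 1 - delta, n)
    return I0

# Stand-in censored sample and evaluation point, for illustration only.
rng = np.random.default_rng(3)
u = np.sort(rng.uniform(size=30))
z = 2.0 * (u ** (-1 / 0.8) - 1)                 # ordered "failure times"
delta = (rng.uniform(size=30) < 0.5).astype(float)
I0 = observed_info((0.8, 2.0, 0.5, 1.5), z, delta, 20, 20)
```

At the MLEs, `v = np.diag(np.linalg.inv(I0))` gives the variances v11, …, v44, and the intervals (17) follow as `est -/+ 1.96 * np.sqrt(v)` for γ = 0.025.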

## 3 Bootstrap confidence intervals

The bootstrap is commonly used not only to estimate confidence intervals but also to estimate the bias and variance of an estimator and to calibrate hypothesis tests. Parametric and nonparametric bootstrap methods are discussed in [22] and [23]. Here the parametric bootstrap is used to construct two confidence intervals, the percentile and the bootstrap-t confidence intervals (see [24] and [25]), according to the following algorithm:

1. For the given original data set (z, δ) = {(zi, δi), i = 1, 2, …, r}, 1 ≤ r < N, where δi = 1 or 0 according to whether zi is a T- or Y-failure, obtain the MLEs α̂1, θ̂1, α̂2, and θ̂2 of the parameters.

2. From the Pareto distributions with parameters α̂1, θ̂1, α̂2, and θ̂2 and the same values of m, n and r, generate a bootstrap sample (z*, δ*) = {(zi*, δi*), i = 1, 2, …, r}, 1 ≤ r < N.

3. From z*, δ*, m, n and r, compute the bootstrap estimates α̂1*, θ̂1*, α̂2*, and θ̂2*.

4. Repeat steps 2 and 3 M times to obtain M bootstrap samples.

5. Arrange the bootstrap estimates of each parameter in ascending order as (ψ̂k*[1], ψ̂k*[2], …, ψ̂k*[M]), k = 1, 2, 3, 4, where ψ1* = α̂1*, ψ2* = θ̂1*, ψ3* = α̂2*, ψ4* = θ̂2*.
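Step 5 together with eq. (18) below amounts to reading empirical quantiles off the sorted replicates. A small Python sketch (the replicate array is synthetic, for illustration only; in practice it would come from steps 1-4):

```python
import numpy as np

def percentile_interval(boot, gamma=0.025):
    """Eq. (18): the empirical gamma and (1 - gamma) quantiles of the
    M sorted bootstrap replicates (step 5 of the algorithm)."""
    boot = np.sort(np.asarray(boot, dtype=float))
    return np.quantile(boot, gamma), np.quantile(boot, 1.0 - gamma)

# Synthetic stand-in for M = 500 bootstrap replicates of alpha_1.
rng = np.random.default_rng(5)
boot_alpha1 = 0.8 + 0.1 * rng.standard_normal(500)
lo, hi = percentile_interval(boot_alpha1)
```

The bootstrap-t interval of eqs. (19)-(21) applies the same quantile extraction to the studentized replicates instead of the raw ones.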

## Percentile bootstrap confidence interval

Let W(z) = P(ψ̂k* ≤ z) be the CDF of ψ̂k*, and define ψ̂k,boot-p(z) = W−1(z) for a given z. Then the approximate 100(1 − 2γ)% bootstrap confidence interval of ψk is given by $\left(\hat\psi_{k,\text{boot-p}}(\gamma),\ \hat\psi_{k,\text{boot-p}}(1-\gamma)\right).$(18)

## Bootstrap-t confidence interval

First define the order statistics ζk*[1] < ζk*[2] < … < ζk*[M], where $\zeta_k^{*[j]}=\frac{\hat\psi_k^{*[j]}-\hat\psi_k}{\sqrt{\operatorname{Var}(\hat\psi_k^{*[j]})}},\quad j=1,2,...,M,\ k=1,2,3,4.$(19)

Let W(z) = P(ζk* < z) be the CDF of ζk*. For a given z, define $\hat\psi_{k,\text{boot-t}}(z)=\hat\psi_k+\sqrt{\operatorname{Var}(\hat\psi_k)}\,W^{-1}(z).$(20)

The approximate 100(1 − 2γ)% confidence interval of ψk is then given by $\left(\hat\psi_{k,\text{boot-t}}(\gamma),\ \hat\psi_{k,\text{boot-t}}(1-\gamma)\right).$(21)

## 4 Bayes estimation of the model parameters

Suppose that the parameters α1, θ1, α2, and θ2 are unknown and have independent gamma priors with densities $h_k(\psi_k)=\frac{b_k^{a_k}}{\Gamma(a_k)}\psi_k^{a_k-1}\exp(-b_k\psi_k),\ \psi_k>0,\ (a_k,b_k>0),\ k=1,2,3,4,$(22) where ψ = (α1, θ1, α2, θ2). The joint prior density of the vector ψ can then be written as $h(\underline{\psi})=\prod_{i=1}^{4}h_i(\psi_i).$(23)

Using the joint prior density (23) and the likelihood function, the joint posterior density given the data is $h^*(\underline{\psi}|\underline{z})=\frac{L(\underline{\psi}|\underline{z})\times h(\underline{\psi})}{\int_{\psi}L(\underline{\psi}|\underline{z})\times h(\underline{\psi})\,d\psi}.$(24)

Hence, the Bayes estimate of a given function g(ψ) of ψ under the squared error loss (SEL) function is $\hat g_B(\underline{\psi})=E_{\psi|\underline{z}}(g(\underline{\psi}))=\int_{\psi}g(\underline{\psi})h^*(\underline{\psi}|\underline{z})\,d\psi=\frac{\int_{\psi}g(\underline{\psi})L(\underline{\psi}|\underline{z})\times h(\underline{\psi})\,d\psi}{\int_{\psi}L(\underline{\psi}|\underline{z})\times h(\underline{\psi})\,d\psi}.$(25)

In many cases, the ratio of the two integrals in (25) cannot be obtained in closed form, and approximate methods must be used. The one adopted in this paper is the MCMC method, in which samples generated from the posterior distributions are used to compute the Bayes estimate of any function g(ψ) of the parameters.

## MCMC Approach

Among the different classes of MCMC methods, an important one is the Gibbs sampler, and a more general technique is Metropolis-Hastings within Gibbs. The empirical posterior distribution obtained from MCMC can be used to construct interval estimates of the parameters that are often not available from the maximum likelihood approach. The MCMC output can also be used to obtain the empirical posterior of any function of the parameters (see Soliman et al. [26] and Abd-Elmougod et al. [27]).

The joint posterior density function of the vector ψ can be written as $h^*(\underline{\psi}|\underline{z})\propto\alpha_1^{a_1+m_r-1}\theta_1^{a_2+m\alpha_1-1}\alpha_2^{a_3+n_r-1}\theta_2^{a_4+n\alpha_2-1}\times\exp\left\{-b_1\alpha_1-b_2\theta_1-b_3\alpha_2-b_4\theta_2-(\alpha_1+1)\sum_{i=1}^{r}\delta_i\log(\theta_1+z_i)-\alpha_1(m-m_r)\log(\theta_1+z_r)-(\alpha_2+1)\sum_{i=1}^{r}(1-\delta_i)\log(\theta_2+z_i)-\alpha_2(n-n_r)\log(\theta_2+z_r)\right\}.$(26)

Then the conditional posterior PDFs of α1, θ1, α2, and θ2 are as follows: $\alpha_1|(\theta_1,\alpha_2,\theta_2,\underline{z})\sim\text{Gamma}(a_1+m_r,X_1),$(27) $\theta_1|(\alpha_1,\alpha_2,\theta_2,\underline{z})\propto\exp\left[-b_2\theta_1-(\alpha_1+1)\sum_{i=1}^{r}\delta_i\log(\theta_1+z_i)+(a_2+m\alpha_1-1)\log\theta_1-\alpha_1(m-m_r)\log(\theta_1+z_r)\right],$(28) $\alpha_2|(\alpha_1,\theta_1,\theta_2,\underline{z})\sim\text{Gamma}(a_3+n_r,X_2),$(29) and $\theta_2|(\alpha_1,\theta_1,\alpha_2,\underline{z})\propto\exp\left[-b_4\theta_2-(\alpha_2+1)\sum_{i=1}^{r}(1-\delta_i)\log(\theta_2+z_i)+(a_4+n\alpha_2-1)\log\theta_2-\alpha_2(n-n_r)\log(\theta_2+z_r)\right],$(30) where $X_1=b_1-m\log\theta_1+\sum_{i=1}^{r}\delta_i\log(\theta_1+z_i)+(m-m_r)\log(\theta_1+z_r),$(31) and $X_2=b_3-n\log\theta_2+\sum_{i=1}^{r}(1-\delta_i)\log(\theta_2+z_i)+(n-n_r)\log(\theta_2+z_r).$(32)

Plots of the conditional densities (28) and (30) show that they are similar in shape to the normal distribution, so we generate from them with the Metropolis-Hastings (MH) method of Metropolis et al. [28], using a normal proposal distribution, as in the following algorithm.

## The algorithm of Metropolis–Hastings under Gibbs sampling

1. Set I = 1 and start with the initial parameter vector ψ(0) = (α̂1, θ̂1, α̂2, θ̂2).

2. Generate α1(I) from Gamma(a1 + mr, X1) and α2(I) from Gamma(a3 + nr, X2).

3. Generate θ1(I) and θ2(I) by Metropolis-Hastings with the N(θk(I−1), σk) proposal distribution, k = 1, 2, where σk is obtained from the variance-covariance matrix.

4. Obtain the parameter vector ψ(I) = (α1(I), θ1(I), α2(I), θ2(I)) and set I = I + 1.

5. Repeat steps 2-4 NMC times.

6. Compute the Bayes estimate of ψ = (α1, θ1, α2, θ2) under the MCMC method as $E(\psi_k|\underline{z})=\frac{1}{N_{MC}-M_{MC}}\sum_{i=M_{MC}+1}^{N_{MC}}\psi_k^{(i)},\ k=1,2,3,4,$(33) where MMC is the number of burn-in iterations needed to reach the stationary distribution. The posterior variance of ψk is given by $V(\psi_k|\underline{z})=\frac{1}{N_{MC}-M_{MC}}\sum_{i=M_{MC}+1}^{N_{MC}}\left(\psi_k^{(i)}-E(\psi_k|\underline{z})\right)^2.$(34)

For the credible intervals of ψk, order ψk(MMC+1), ψk(MMC+2), …, ψk(NMC) as ψk(1), ψk(2), …, ψk(NMC − MMC). Then the 100(1 − 2γ)% symmetric credible interval is $\left(\psi_{k(\gamma(N_{MC}-M_{MC}))},\ \psi_{k((1-\gamma)(N_{MC}-M_{MC}))}\right).$(35)
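The sampler above can be sketched in Python as follows (a hedged illustration, not the authors' code: the simulated data, seed, proposal scale and chain lengths are my choices, with hyperparameters borrowed from the example in Section 5):

```python
import numpy as np

rng = np.random.default_rng(11)

def pareto_sample(size, alpha, theta):
    # Inverse-CDF sampling from the Pareto CDF (3).
    u = rng.uniform(size=size)
    return theta * (u ** (-1.0 / alpha) - 1.0)

def log_cond_theta(theta, alpha, z, w, s, a, b):
    # Conditional posterior (28)/(30) on the log scale, up to a constant.
    if theta <= 0:
        return -np.inf
    l = w.sum()
    return (-b * theta
            + (a + s * alpha - 1.0) * np.log(theta)
            - (alpha + 1.0) * (w * np.log(theta + z)).sum()
            - alpha * (s - l) * np.log(theta + z[-1]))

def mh_step(theta, logpost, sig):
    # One Metropolis-Hastings step with a normal random-walk proposal.
    prop = theta + sig * rng.standard_normal()
    return prop if np.log(rng.uniform()) < logpost(prop) - logpost(theta) else theta

def gibbs(z, delta, m, n, hyper, n_iter=3000, burn=1000, sig=0.5):
    a1, a2, a3, a4, b1, b2, b3, b4 = hyper
    w1, w2 = delta, 1.0 - delta
    mr, nr = w1.sum(), w2.sum()
    th1 = th2 = 1.0
    out = []
    for _ in range(n_iter):
        # eqs. (27), (29): alphas are conditionally gamma with rates X1, X2
        # from (31), (32); numpy's gamma takes (shape, scale = 1/rate).
        X1 = (b1 - m * np.log(th1) + (w1 * np.log(th1 + z)).sum()
              + (m - mr) * np.log(th1 + z[-1]))
        X2 = (b3 - n * np.log(th2) + (w2 * np.log(th2 + z)).sum()
              + (n - nr) * np.log(th2 + z[-1]))
        al1 = rng.gamma(a1 + mr, 1.0 / X1)
        al2 = rng.gamma(a3 + nr, 1.0 / X2)
        # MH updates for theta_1, theta_2 from (28), (30).
        th1 = mh_step(th1, lambda t: log_cond_theta(t, al1, z, w1, m, a2, b2), sig)
        th2 = mh_step(th2, lambda t: log_cond_theta(t, al2, z, w2, n, a4, b4), sig)
        out.append((al1, th1, al2, th2))
    draws = np.asarray(out[burn:])
    # eq. (33) and eq. (35) with gamma = 0.025.
    return draws.mean(axis=0), np.quantile(draws, [0.025, 0.975], axis=0)

t = pareto_sample(20, 0.8, 2.0)
y = pareto_sample(20, 0.5, 1.5)
z = np.sort(np.concatenate([t, y]))[:30]
delta = np.isin(z, t).astype(float)
post_mean, cred = gibbs(z, delta, 20, 20, hyper=(1, 2, 1, 3, 1, 1, 2, 2))
```

Note that X1 and X2 are always positive, since each equals bj plus a sum of logarithms of ratios (θ + zi)/θ ≥ 1, so the gamma draws in (27) and (29) are always well defined.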

## 5 Illustrative example

In this section we use a simulated data set from two Pareto distributions to illustrate the application of the proposed methods. The joint type-II censored data were generated from two Pareto distributions with parameter vector ψ = (α1, θ1, α2, θ2) = (0.8, 2.0, 0.5, 1.5), m = n = 20 and r = 30. The two data sets T and Y, the corresponding joint type-II censored data Z and the corresponding indicator δ are presented in Table 1.

Table 1

Joint type-II censored sample from two Pareto distributions

Based on the data in Table 1, point estimates and 95% interval estimates from the ML, bootstrap and Bayes methods are presented in Tables 2 and 3. In the Bayesian MCMC method, an informative prior with hyperparameters {a1 = 1, a2 = 2, a3 = 1, a4 = 3, b1 = 1, b2 = 1, b3 = 2, b4 = 2} is used. We run the chain for NMC = 11,000 iterations and discard the first MMC = 1,000 values as burn-in. Figs. 1-8 show the trace plots of the parameters generated by the MCMC method and the corresponding histograms.

Figure 1

Trace plot of α1 obtained by the MCMC method

Figure 2

Histogram of α1 obtained by MCMC method

Figure 3

Trace plot of θ1 obtained by the MCMC method

Figure 4

Histogram of θ1 obtained by the MCMC method

Figure 5

Trace plot of α2 obtained by the MCMC method

Figure 6

Histogram of α2 obtained by MCMC method

Figure 7

Trace plot of θ2 obtained by the MCMC method

Figure 8

Histogram of θ2 obtained by the MCMC method

Table 2

The MLEs, Bootstrap and Bayes MCMC estimates of parameters

Table 3

95% asymptotic, Bootstrap-p, Bootstrap-t and credible intervals

## 6 Simulation studies

To assess the theoretical results of the estimation problem, we carried out simulation studies under different sample sizes (m, n) for the two Pareto populations and different censoring numbers r. The performance of the point estimates is measured in terms of their average (AVG) and mean squared error (MSE), where $\text{AVG}(\psi_k)=\frac{1}{SN}\sum_{i=1}^{SN}\hat\psi_{ki},\quad\text{MSE}(\psi_k)=\frac{1}{SN}\sum_{i=1}^{SN}\left(\hat\psi_{ki}-\psi_k\right)^2,$(36) and SN is the number of simulation replicates. The interval estimates, namely the intervals obtained from the asymptotic distribution of the MLEs, the two bootstrap confidence intervals, and the credible intervals produced by the MCMC method, are measured in terms of their average interval widths (AW) and coverage probabilities (CP). We took the parameter vector ψ = (α1, θ1, α2, θ2) = (1.0, 3, 1.5, 5). A non-informative prior (prior 0), for which the joint posterior distribution is proportional to the likelihood function, and an informative prior (prior 1: a1 = 1.0, a2 = 3.0, a3 = 3.0, a4 = 5.0, b1 = 1.0, b2 = 1.0, b3 = 2.0, b4 = 1.0) are used. In all cases the squared error loss function is used to compute the Bayes estimates. We computed the MLEs with their 95% CIs, the two bootstrap intervals (BP and BT) with M = 500, and the Bayes estimates via the MCMC method with NMC = 11,000 and MMC = 1,000. This process was repeated SN = 1,000 times; the AVG values of the estimates and the corresponding MSEs are presented in Table 4, and the coverage probabilities (CP) and average widths (AW) of the 95% CIs for all methods are presented in Table 5.
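Eq. (36) in Python (a trivial helper, included for completeness; the names are mine):

```python
import numpy as np

def avg_mse(estimates, true_value):
    # eq. (36): empirical average and mean squared error of an estimator
    # over SN simulation replicates.
    est = np.asarray(estimates, dtype=float)
    return est.mean(), ((est - true_value) ** 2).mean()

# e.g. four replicates of an estimator of a parameter whose true value is 1.0
avg, mse = avg_mse([0.9, 1.1, 1.0, 1.2], 1.0)
```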

Table 4

The AVGs and the MSEs of the point estimates of the parameters

Table 5

The CPs and the AWs of the confidence intervals of the parameters

## 7 Concluding remarks

The objective of this study is to compare the lifetimes of two competing products, for which comparative lifetime experiments are important. In this paper we have discussed different estimation methods, namely maximum likelihood, bootstrap and Bayesian estimation (with the help of the MCMC method), for the unknown parameters of two Pareto distributions based on jointly type-II censored samples. A numerical example and a simulation study are presented to assess and compare the performance of the suggested methods. From the results, we observe the following.

1. The numerical example and the results in Tables 4 and 5 show that the estimation of the parameters of two Pareto distributions under the jointly type-II censoring scheme performs acceptably.

2. The Bayes estimates perform better than the MLEs and the bootstrap estimates.

3. The MLEs are close to the Bayes estimates under the non-informative prior.

4. As the effective sample size r increases, the MSEs of the estimators decrease significantly for fixed m and n.

## Acknowledgement

This project was funded by the Deanship of Scientific Research (DSR) at King Abdulaziz University, Jeddah, under grant no. G-260-363-37. The authors therefore acknowledge with thanks DSR for technical and financial support.

## References

• [1]

Rao U.V.R., Savage I.R., Sobel M., Contributions to the theory of rank order statistics: the two-sample censored case, Ann. Math. Stat., 1960, 31, 415.

• [2]

Basu A.P., On a generalized savage statistic with applications to life testing, Ann. Math. Stat., 1968, 39, 1591.

• [3]

Johnson R.A., Mehrotra K.G., Locally most powerful rank tests for the two-sample problem with censored data, Ann. Math. Stat., 1972, 43, 823.

• [4]

Mehrotra K.G., Johnson R.A., Asymptotic sufficiency and asymptotically most powerful tests for the two sample censored situation, Ann. Stat., 1976, 4, 589.

• [5]

Bhattacharyya G.K., Mehrotra K.G., On testing equality of two exponential distributions under combined type-II censoring, J. Am. Stat. Assoc., 1981, 76, 886.

• [6]

Mehrotra K.G., Bhattacharyya G.K., Confidence intervals with jointly type-II censored samples from two exponential distributions, J. Am. Stat. Assoc., 1982, 77, 441.

• [7]

Balakrishnan N., Rasouli A., Exact likelihood inference for two exponential populations under joint type-II censoring, Comput. Stat. Data. Anal., 2008, 52, 2725.

• [8]

Rasouli A., Balakrishnan N., Exact likelihood inference for two exponential populations under joint progressive type-II censoring, Commun. Stat. Theory. Methods., 2010, 39, 2172.

• [9]

Shafay A.R., Balakrishnan N., Abdel-Aty Y., Bayesian inference based on a jointly type-II censored sample from two exponential populations, J. Stat. Comp. Simul., 2014, 84, 2427.

• [10]

Kundu D., A new two sample type-II progressive censoring scheme, arXiv:1609.05805 [stat.ME], 2016.

• [11]

Ismail A., Abdel-Ghaly A.A., El-Khodary A.H., Optimum constant-stress life test plans for Pareto distribution under type-I censoring, J. Stat. Comp. Simul., 2011, 81, 1835.

• [12]

Nofal Z.M., El Gebaly Y.M., New characterizations of the Pareto distribution, Pak. J. Stat. Oper. Res., 2017, 13, 63.

• [13]

Mdziniso N.C., Cooray K., Odd Pareto families of distributions for modeling loss payment data, Scandinavian Actuarial Journal, http://dx.doi.org/10.1080/03461238.2017.1280527, 2017.

• [14]

Balkema A.A., de Haan L., Residual life time at great age, Ann. Probab., 1974, 2, 792.

• [15]

Arnold B.C., Pareto Distributions, Fairland, MD, International Cooperative Publishing House, 1983.

• [16]

Vidondo B., Prairie Y.T., Blanco J.M., Duarte C.M., Some aspects of the analysis of size spectra in aquatic ecology, Limnol Oceanogr, 1997, 42, 184.

• [17]

Childs A., Balakrishnan N., Moshref M., Order statistics from non-identical right truncated Lomax random variables with applications, Statist Papers, 2001, 42, 187.

• [18]

Al-Awadhi S.A., Ghitany M.E., Statistical properties of Poisson-Lomax distribution and its application to repeated accidents data, J. Appl. Statist. Sci., 2001, 10, 365.

• [19]

Howlader H.A., Hossain A.M., Bayesian survival estimation of Pareto distribution of the second kind based on failure-censored data, Comput. Stat. Data Anal., 2002, 38.

• [20]

Abd-Elfattah A.M., Alaboud F.M., Alharby A.H., On sample size estimation for Lomax distribution, Aust. J. Basic. Appl. Sci., 2007, 1, 373.

• [21]

Hassan A.S., Al-Ghamdi A.S., Optimum step stress accelerated life testing for Lomax distribution, J. Appl. Sci. Res., 2009, 5, 2153.

• [22]

Davison A.C., Hinkley D.V., Bootstrap Methods and their Application, Cambridge University Press, 1997.

• [23]

Efron B., Tibshirani R.J., An Introduction to the Bootstrap, Chapman and Hall, New York, 1993.

• [24]

Hall P., Theoretical comparison of bootstrap confidence intervals, Annals of Statistics, 1988, 16, 927.

• [25]

Efron B., Censored data and bootstrap, Journal of the American Statistical Association, 1981, 76, 312.

• [26]

Soliman A.A., Abd Ellah A.H., Abou-Elheggag N.A., Abd-Elmougod G.A., A simulation-based approach to the study of coefficient of variation of Gompertz distribution under progressive first-failure censoring, Indian Journal of Pure and Applied Mathematics, 2011, 42, 335.

• [27]

Abd-Elmougod G.A., El-Sayed M.A., Abdel-Rahman E.O., Coefficient of variation of Topp-Leone distribution under adaptive Type-II progressive censoring scheme: Bayesian and non-Bayesian approach, Journal of Computational and Theoretical Nanoscience, 2015, 12, 4028.

• [28]

Metropolis N., Rosenbluth A.W., Rosenbluth M.N., Teller A.H., Teller E., Equations of state calculations by fast computing machines, J. Chem. Phys., 1953, 21, 1087.

## About the article

Accepted: 2017-04-18

Published Online: 2017-08-10

Citation Information: Open Physics, Volume 15, Issue 1, Pages 557-565, ISSN (Online) 2391-5471.
