Research Article

Sampling theorem and efficiency comparison of three local minimum variance unbiased estimators of the mean and variance of the exponential distribution

Article: 1492886 | Received 19 Sep 2017, Accepted 17 Jun 2018, Published online: 19 Jul 2018

Abstract

This article continues earlier work to improve and complete the sampling theorem of the exponential distribution. First, the distribution of the sample range of the exponential distribution is derived, and the sample range is shown to be independent of the sample minimum. The article then derives the distribution of the difference between the sample maximum and the sample mean and shows that this difference is also independent of the sample minimum. On this basis, three local minimum variance unbiased estimators of the mean can be constructed. The estimator built from the sample minimum and the difference between the sample mean and minimum is precisely the uniformly minimum-variance unbiased estimator (UMVUE) of the mean. Analogously, three local minimum variance unbiased estimators of the variance are derived. Finally, the efficiencies of the three local minimum variance unbiased estimators of the mean and of the variance are compared.


PUBLIC INTEREST STATEMENT

What is the sampling theorem of the exponential distribution? It covers the distributions of the sample mean, sample maximum, sample minimum and their differences, and states whether those differences are independent of the sample minimum. What is a local minimum variance unbiased estimator? Given two mutually independent unbiased estimators, a family of weighted linear unbiased estimators can be constructed; the member of this family with the smallest variance is the local minimum variance unbiased estimator. Note that the three local minimum variance unbiased estimators of the mean and of the variance are not substitutes for the uniformly minimum variance unbiased estimators of the mean and variance; rather, they enrich the family of natural estimators.

1. Introduction

The sample minimum, sample maximum and sample mean are important statistics of the exponential distribution. The sample minimum is exponentially distributed, and the sample mean follows a gamma distribution; equivalently, $2n\bar X/\alpha$ follows a chi-square distribution with $2n$ degrees of freedom. The difference between the sample mean and the sample minimum, scaled as $2n(\bar X-X_{(1)})/\alpha$, follows a chi-square distribution with $2(n-1)$ degrees of freedom, and this difference is independent of the sample minimum (Arnold, 1968; Gupta & Kundu; Marshall & Olkin, 1967).

This article derives the distribution of the sample range and shows that the sample range is independent of the sample minimum. The distribution of the difference between the sample maximum and the sample mean is then derived, and this difference is also shown to be independent of the sample minimum (Cohen & Helm, 1973; Kundu & Gupta, 2009; Lawrance & Lewis, 1983; Nie, Sinha, & Hedayat, 2017).

Thus, the sampling theorem is improved. As a natural corollary of the sampling theorem of the exponential distribution, a first local minimum variance unbiased estimator of the expectation can be constructed from the sample minimum and the difference between the sample mean and minimum; it is precisely the UMVUE of the expectation. A second can be constructed from the sample minimum and the sample range, and a third from the sample minimum and the difference between the sample maximum and mean. Similarly, three local minimum variance unbiased estimators of the variance are derived. Finally, the efficiencies of these three local minimum variance unbiased estimators of the mean and of the variance are compared (Al-Saleh & Al-Hadhrami, 2003; Baklizi & Dayyeh, 2003; Dixit & Nasiri, 2008; Guoan, Jianfeng, & Lihong, 2017; Li, 2016).

2. Sampling theorem of exponential distribution

The joint distribution of the order statistics $(X_{(1)},\dots,X_{(n)})$ of the exponential distribution is as follows:

Definition 2.1. If $X\sim E(\alpha)$ and $X_1,\dots,X_n$ is a sample of size $n$ from $X\sim E(\alpha)$, then $(X_{(1)},\dots,X_{(n)})$ has the joint density function

$$f(x_{(1)},x_{(2)},\dots,x_{(n)})=\frac{n!}{\alpha^{n}}\exp\left[-\frac{\sum_{i=1}^{n}x_{(i)}}{\alpha}\right],\quad x_{(1)}<x_{(2)}<\dots<x_{(n)},\ \alpha>0. \quad (1)$$

Then we say that $(X_{(1)},\dots,X_{(n)})$ follows a multivariate order-statistics exponential distribution.

Write $n\bar X=\sum_{i=1}^{n}X_i=\sum_{i=1}^{n}X_{(i)}$. The sampling theorem is:

Theorem 2.1. If $X\sim E(\alpha)$, $X_1,\dots,X_n$ is a sample of size $n$ from $X\sim E(\alpha)$, and $(X_{(1)},\dots,X_{(n)})$ are the order statistics, then:

(1) $\frac{2nX_{(1)}}{\alpha}\sim\chi^{2}(2)$, $\frac{2n(\bar X-X_{(1)})}{\alpha}\sim\chi^{2}(2(n-1))$, and $\bar X-X_{(1)}$ is independent of $X_{(1)}$.

(2) $-2(n-1)\ln\left[1-\exp\left(-\frac{X_{(n)}-X_{(1)}}{\alpha}\right)\right]\sim\chi^{2}(2)$, and $X_{(1)}$ is independent of $X_{(n)}-X_{(1)}$.

(3) $X_{(1)}$ is independent of $X_{(n)}-\bar X$, and the density function of $X_{(n)}-\bar X$ is

$$f_{(X_{(n)}-\bar X)}(x)=\sum_{k=0}^{n-2}(-1)^{k}C_{n-1}^{k}\,\frac{(n-1-k)^{n-2}(n-1)}{n^{n-2}\alpha}\exp\left(-\frac{(n-1)(k+1)x}{(n-1-k)\alpha}\right),\quad x>0. \quad (2)$$

Proof. $P(X_{(1)}>x_{(1)})=P(X_{1}>x_{(1)},\dots,X_{n}>x_{(1)})=\exp\left[-\frac{nx_{(1)}}{\alpha}\right]$, so

$$g(x_{(1)})=\frac{n}{\alpha}e^{-\frac{nx_{(1)}}{\alpha}},\quad x_{(1)}>0,\qquad \frac{2nX_{(1)}}{\alpha}\sim\chi^{2}(2).$$

Write $U_{(i-1)}=X_{(i)}-X_{(1)},\ i=2,\dots,n$, and $V=X_{(1)}$, so that

$$u_{(i-1)}=x_{(i)}-x_{(1)},\quad v=x_{(1)};\qquad x_{(i)}=u_{(i-1)}+v,\ i=2,\dots,n,\quad x_{(1)}=v.$$

The joint density of $(U_{(1)},\dots,U_{(n-1)},V)$ is

$$f(u_{(1)},u_{(2)},\dots,u_{(n-1)},v)=\frac{n!}{\alpha^{n}}\exp\left(-\frac{\sum_{i=1}^{n-1}u_{(i)}+nv}{\alpha}\right),\quad u_{(1)}<u_{(2)}<\dots<u_{(n-1)},\ v>0. \quad (2.3)$$

Therefore, $(U_{(1)},U_{(2)},\dots,U_{(n-1)})$ is the vector of order statistics of a sample $(U_{1},U_{2},\dots,U_{n-1})$ of size $n-1$ from $U\sim E(\alpha)$, and $(U_{1},U_{2},\dots,U_{n-1})$ is independent of $X_{(1)}$.

Since

$$\frac{2(n\bar X-nX_{(1)})}{\alpha}=\frac{2\left(\sum_{i=1}^{n}X_{(i)}-nX_{(1)}\right)}{\alpha}=\frac{2\sum_{i=1}^{n-1}U_{i}}{\alpha}\sim\chi^{2}(2(n-1)),$$

$\bar X-X_{(1)}$ is independent of $X_{(1)}$.

Then, to prove part (2):

$$F_{(X_{(1)},X_{(n)})}(x,y)=P(X_{(1)}>x,\,X_{(n)}\le y)=P(x<X_{1}\le y,\dots,x<X_{n}\le y)=\left[\exp\left(-\frac{x}{\alpha}\right)-\exp\left(-\frac{y}{\alpha}\right)\right]^{n},$$

$$f_{(X_{(1)},X_{(n)})}(x,y)=\frac{n(n-1)}{\alpha^{2}}\left[\exp\left(-\frac{x}{\alpha}\right)-\exp\left(-\frac{y}{\alpha}\right)\right]^{n-2}\exp\left(-\frac{x+y}{\alpha}\right). \quad (2.4)$$

Transform as $U_{1}=X_{(1)}$, $U_{2}=X_{(n)}-X_{(1)}$:

$$f_{(U_{1},U_{2})}(u_{1},u_{2})=\frac{n(n-1)}{\alpha^{2}}\left[\exp\left(-\frac{u_{1}}{\alpha}\right)-\exp\left(-\frac{u_{1}+u_{2}}{\alpha}\right)\right]^{n-2}\exp\left(-\frac{2u_{1}+u_{2}}{\alpha}\right), \quad (2.5)$$

$$f_{U_{1}}(u_{1})=\frac{n}{\alpha}\exp\left[-\frac{nu_{1}}{\alpha}\right],\qquad f_{U_{2}}(u_{2})=\frac{n-1}{\alpha}\left[1-\exp\left(-\frac{u_{2}}{\alpha}\right)\right]^{n-2}\exp\left[-\frac{u_{2}}{\alpha}\right].$$

Therefore, $U_{1}$ is independent of $U_{2}$, and $\frac{2nX_{(1)}}{\alpha}\sim\chi^{2}(2)$. Moreover, since $U_{2}=X_{(n)}-X_{(1)}$ has distribution function $F_{U_{2}}(u)=\left[1-\exp\left(-\frac{u}{\alpha}\right)\right]^{n-1}$, the probability integral transform gives $F_{U_{2}}(U_{2})\sim U(0,1)$, and hence

$$-2(n-1)\ln\left[1-\exp\left(-\frac{X_{(n)}-X_{(1)}}{\alpha}\right)\right]\sim\chi^{2}(2).$$
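Part (2) can also be checked by simulation; a minimal plain-Python sketch (the values $n=6$, $\alpha=1.5$ and the seed are arbitrary choices):

```python
import math
import random
import statistics

def simulate_range(n=6, alpha=1.5, reps=200_000, seed=7):
    """Monte Carlo sketch of Theorem 2.1(2): the range U2 = X(n) - X(1) has CDF
    [1 - exp(-u/alpha)]^(n-1), so T = -2(n-1)ln[1 - exp(-U2/alpha)] is -2 ln of a
    Uniform(0,1) variable, i.e. chi^2(2)."""
    rng = random.Random(seed)
    t = []
    for _ in range(reps):
        xs = [rng.expovariate(1.0 / alpha) for _ in range(n)]
        u2 = max(xs) - min(xs)  # sample range
        t.append(-2 * (n - 1) * math.log(1.0 - math.exp(-u2 / alpha)))
    return t

t = simulate_range()
print(statistics.mean(t))      # should be near 2 (mean of chi^2(2))
print(statistics.variance(t))  # should be near 4 (variance of chi^2(2))
```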

Write $U_{(i-1)}=X_{(i)}-X_{(1)},\ i=2,\dots,n$, $W_{(1)}=U_{(1)}$, and

$$W_{(k)}=kU_{(k)}-\sum_{i=1}^{k-1}U_{(i)},\quad k=2,\dots,n-1.$$

Then $W_{(2)}-W_{(1)}=2U_{(2)}-2U_{(1)}>0$ and

$$W_{(k)}-W_{(k-1)}=kU_{(k)}-(k-1)U_{(k-1)}-U_{(k-1)}=k(U_{(k)}-U_{(k-1)})>0,\quad k=2,\dots,n-1.$$

The Jacobian determinant of the transformation is

$$J=\begin{vmatrix}1&0&0&\cdots&0\\-1&2&0&\cdots&0\\-1&-1&3&\cdots&0\\\vdots&\vdots&\vdots&\ddots&\vdots\\-1&-1&-1&\cdots&n-1\end{vmatrix}=(n-1)!. \quad (2.6)$$

In $\sum_{i=1}^{n-1}U_{(i)}$, the coefficient of $W_{(k)}$ is $\frac{n}{k(k+1)}$ for $k=1,\dots,n-2$, and the coefficient of $W_{(n-1)}$ is $\frac{1}{n-1}$. Derive the density function $f_{W_{(n-1)}}(w_{(n-1)})$ of $W_{(n-1)}$:

$$f_{W_{(n-1)}}(w_{(n-1)})=\int_{0}^{w_{(n-1)}}\!dw_{1}\int_{w_{1}}^{w_{(n-1)}}\!dw_{2}\cdots\int_{w_{n-3}}^{w_{(n-1)}}\frac{1}{\alpha^{n-1}}\exp\left[-\frac{w_{(n-1)}}{(n-1)\alpha}\right]\exp\left[-\sum_{k=1}^{n-2}\frac{nw_{k}}{k(k+1)\alpha}\right]dw_{n-2}=\sum_{k=0}^{n-2}(-1)^{k}C_{n-1}^{k}\,\frac{(n-1-k)^{n-2}}{n^{n-2}\alpha}\exp\left(-\frac{(k+1)w_{(n-1)}}{(n-1-k)\alpha}\right),\quad w_{(n-1)}>0. \quad (2.7)$$

From $W_{(n-1)}=(n-1)(X_{(n)}-\bar X)$, we obtain $f_{(X_{(n)}-\bar X)}(x)=(n-1)f_{W_{(n-1)}}((n-1)x)$, and therefore

$$f_{(X_{(n)}-\bar X)}(x)=\sum_{k=0}^{n-2}(-1)^{k}C_{n-1}^{k}\,\frac{(n-1-k)^{n-2}}{n^{n-2}}\exp\left[-\frac{knx}{(n-1-k)\alpha}\right]\cdot\frac{n-1}{\alpha}\exp\left[-\frac{x}{\alpha}\right]$$
$$=\sum_{k=0}^{n-2}(-1)^{k}C_{n-1}^{k}\,\frac{(n-1-k)^{n-2}(n-1)}{n^{n-2}\alpha}\exp\left(-\frac{(n-1)(k+1)x}{(n-1-k)\alpha}\right),\quad x>0. \quad (2.8)$$

Some specific cases:

When $n=3$:

$$f_{(X_{(3)}-\bar X)}(x)=\frac{4}{3\alpha}\exp\left(-\frac{x}{\alpha}\right)-\frac{4}{3\alpha}\exp\left(-\frac{4x}{\alpha}\right),\quad x>0,$$

which is a mixed exponential distribution.

When $n=4$:

$$f_{(X_{(4)}-\bar X)}(x)=\frac{27}{16\alpha}\exp\left(-\frac{x}{\alpha}\right)-\frac{9}{4\alpha}\exp\left(-\frac{3x}{\alpha}\right)+\frac{9}{16\alpha}\exp\left(-\frac{9x}{\alpha}\right),\quad x>0.$$

When $n=5$:

$$f_{(X_{(5)}-\bar X)}(x)=\frac{256}{125\alpha}\exp\left(-\frac{x}{\alpha}\right)-\frac{432}{125\alpha}\exp\left(-\frac{8x}{3\alpha}\right)+\frac{192}{125\alpha}\exp\left(-\frac{6x}{\alpha}\right)-\frac{16}{125\alpha}\exp\left(-\frac{16x}{\alpha}\right),\quad x>0.$$

When $n=6$:

$$f_{(X_{(6)}-\bar X)}(x)=\frac{3125}{1296\alpha}\exp\left(-\frac{x}{\alpha}\right)-\frac{400}{81\alpha}\exp\left(-\frac{5x}{2\alpha}\right)+\frac{25}{8\alpha}\exp\left(-\frac{5x}{\alpha}\right)-\frac{50}{81\alpha}\exp\left(-\frac{10x}{\alpha}\right)+\frac{25}{1296\alpha}\exp\left(-\frac{25x}{\alpha}\right),\quad x>0.$$

Finally, since $X_{(n)}-\bar X$ is a function of $(U_{1},\dots,U_{n-1})$ alone, $X_{(1)}$ is independent of $X_{(n)}-\bar X$.
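The special cases above can be generated mechanically from Equation (2). The sketch below (plain Python; $\alpha=1$ by default is an arbitrary choice) lists the mixture coefficients and confirms that each density integrates to one, using $\int_{0}^{\infty}c\,e^{-rx}dx=c/r$:

```python
from math import comb

def density_terms(n, alpha=1.0):
    """Coefficients (c_k, r_k) so that f(x) = sum_k c_k * exp(-r_k * x), per Equation (2)."""
    terms = []
    for k in range(n - 1):  # k = 0, ..., n-2
        c = (-1) ** k * comb(n - 1, k) * (n - 1 - k) ** (n - 2) * (n - 1) / (n ** (n - 2) * alpha)
        r = (n - 1) * (k + 1) / ((n - 1 - k) * alpha)
        terms.append((c, r))
    return terms

for n in (3, 4, 5, 6):
    total = sum(c / r for c, r in density_terms(n))  # integral of the density over (0, inf)
    print(n, density_terms(n), round(total, 12))     # each total should be 1.0
```

For $n=3$ this recovers the coefficients $4/(3\alpha)$ at rate $1/\alpha$ and $-4/(3\alpha)$ at rate $4/\alpha$ shown above.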

3. Three local minimum variance unbiased estimators of expectation

Theorem 3.1. If $X\sim E(\alpha)$, $X_1,\dots,X_n$ is a sample of size $n$ from $X\sim E(\alpha)$, and $X_{(1)},\dots,X_{(n)}$ are the order statistics, then the local minimum variance unbiased estimator based on $X_{(1)}$ and $\bar X-X_{(1)}$ is the UMVUE of the expectation.

Proof. From Theorem 2.1, $\frac{2nX_{(1)}}{\alpha}\sim\chi^{2}(2)$, $\frac{2n(\bar X-X_{(1)})}{\alpha}\sim\chi^{2}(2(n-1))$, and $\bar X-X_{(1)}$ is independent of $X_{(1)}$. Hence $nX_{(1)}$ and $\frac{n(\bar X-X_{(1)})}{n-1}$ are both unbiased estimators of $\alpha$, and the most efficient weighted unbiased estimator is $\hat\alpha_{0}=c\,(nX_{(1)})+(1-c)\frac{n(\bar X-X_{(1)})}{n-1}$, where

$$c=\frac{D\left(\frac{n(\bar X-X_{(1)})}{n-1}\right)}{D(nX_{(1)})+D\left(\frac{n(\bar X-X_{(1)})}{n-1}\right)}=\frac{\frac{\alpha^{2}}{n-1}}{\alpha^{2}+\frac{\alpha^{2}}{n-1}}=\frac{1}{n}. \quad (3.1)$$

Substituting gives $\hat\alpha_{0}=\bar X$, which is the UMVUE of the expectation.

Theorem 3.2. If $X\sim E(\alpha)$, $X_1,\dots,X_n$ is a sample of size $n$ from $X\sim E(\alpha)$, and $X_{(1)},\dots,X_{(n)}$ are the order statistics, then the local minimum variance unbiased estimator based on $X_{(1)}$ and $X_{(n)}-X_{(1)}$ is $\hat\alpha_{1}=c_{1}\hat\alpha_{11}+(1-c_{1})\hat\alpha_{12}$, where

$$c_{1}=\frac{2\sum_{k=1}^{n-1}(-1)^{k+1}C_{n-1}^{k}\frac{1}{k^{2}}-\left(\sum_{k=1}^{n-1}(-1)^{k+1}C_{n-1}^{k}\frac{1}{k}\right)^{2}}{\left(\sum_{k=1}^{n-1}(-1)^{k+1}C_{n-1}^{k}\frac{1}{k}\right)^{2}+2\sum_{k=1}^{n-1}(-1)^{k+1}C_{n-1}^{k}\frac{1}{k^{2}}-\left(\sum_{k=1}^{n-1}(-1)^{k+1}C_{n-1}^{k}\frac{1}{k}\right)^{2}}, \quad (3.2)$$

$\hat\alpha_{11}=nX_{(1)}$ is the unbiased estimator based on $X_{(1)}$, and $\hat\alpha_{12}$ is the unbiased estimator based on $X_{(n)}-X_{(1)}$:

$$\hat\alpha_{12}=\frac{X_{(n)}-X_{(1)}}{\sum_{k=1}^{n-1}(-1)^{k+1}C_{n-1}^{k}\frac{1}{k}}.$$

Proof. Using the density of the range and integration by parts,

$$E(X_{(n)}-X_{(1)})=\int_{0}^{\infty}x\,\frac{n-1}{\alpha}\left[1-e^{-x/\alpha}\right]^{n-2}e^{-x/\alpha}dx=-x\left[1-\left(1-e^{-x/\alpha}\right)^{n-1}\right]\Big|_{0}^{\infty}+\int_{0}^{\infty}\left[1-\left(1-e^{-x/\alpha}\right)^{n-1}\right]dx=\int_{0}^{\infty}\sum_{k=1}^{n-1}(-1)^{k+1}C_{n-1}^{k}e^{-kx/\alpha}dx=\sum_{k=1}^{n-1}(-1)^{k+1}C_{n-1}^{k}\frac{\alpha}{k},$$

so $E\hat\alpha_{12}=E\left(\frac{X_{(n)}-X_{(1)}}{\sum_{k=1}^{n-1}(-1)^{k+1}C_{n-1}^{k}\frac{1}{k}}\right)=\alpha$. From Theorem 2.1, $\hat\alpha_{11}$ is independent of $\hat\alpha_{12}$, and both are unbiased estimators of $\alpha$; hence for $0\le c_{1}\le1$, $\hat\alpha_{1}=c_{1}\hat\alpha_{11}+(1-c_{1})\hat\alpha_{12}$ is an unbiased estimator of $\alpha$ with

$$D(c_{1}\hat\alpha_{11}+(1-c_{1})\hat\alpha_{12})=c_{1}^{2}D(\hat\alpha_{11})+(1-c_{1})^{2}D(\hat\alpha_{12}).$$

Differentiating with respect to $c_{1}$ and setting the derivative to zero gives $2c_{1}D(\hat\alpha_{11})-2(1-c_{1})D(\hat\alpha_{12})=0$, i.e. $c_{1}=\frac{D(\hat\alpha_{12})}{D(\hat\alpha_{11})+D(\hat\alpha_{12})}$.

Here $D\hat\alpha_{11}=D(nX_{(1)})=\frac{\alpha^{2}}{4}D\left(\frac{2nX_{(1)}}{\alpha}\right)=\frac{\alpha^{2}}{4}\times2\times2=\alpha^{2}$, and, integrating by parts again,

$$E(X_{(n)}-X_{(1)})^{2}=\int_{0}^{\infty}x^{2}\,\frac{n-1}{\alpha}\left[1-e^{-x/\alpha}\right]^{n-2}e^{-x/\alpha}dx=2\int_{0}^{\infty}x\left[1-\left(1-e^{-x/\alpha}\right)^{n-1}\right]dx=2\int_{0}^{\infty}\sum_{k=1}^{n-1}(-1)^{k+1}C_{n-1}^{k}\,x\,e^{-kx/\alpha}dx=2\sum_{k=1}^{n-1}(-1)^{k+1}C_{n-1}^{k}\frac{\alpha^{2}}{k^{2}},$$

$$D(X_{(n)}-X_{(1)})=2\sum_{k=1}^{n-1}(-1)^{k+1}C_{n-1}^{k}\frac{\alpha^{2}}{k^{2}}-\left(\sum_{k=1}^{n-1}(-1)^{k+1}C_{n-1}^{k}\frac{\alpha}{k}\right)^{2},$$

$$D\hat\alpha_{12}=\frac{D(X_{(n)}-X_{(1)})}{\left(\sum_{k=1}^{n-1}(-1)^{k+1}C_{n-1}^{k}\frac{1}{k}\right)^{2}}=\frac{\left[2\sum_{k=1}^{n-1}(-1)^{k+1}C_{n-1}^{k}\frac{1}{k^{2}}-\left(\sum_{k=1}^{n-1}(-1)^{k+1}C_{n-1}^{k}\frac{1}{k}\right)^{2}\right]\alpha^{2}}{\left(\sum_{k=1}^{n-1}(-1)^{k+1}C_{n-1}^{k}\frac{1}{k}\right)^{2}}.$$

With $c_{1}$ as in (3.2), $\hat\alpha_{1}$ is the local minimum variance unbiased estimator of the expectation based on $X_{(1)}$ and $X_{(n)}-X_{(1)}$.
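The weight $c_{1}$ involves only alternating binomial sums and can be computed exactly with rational arithmetic. The sketch below also checks the classical identity $\sum_{k=1}^{n-1}(-1)^{k+1}C_{n-1}^{k}\frac{1}{k}=H_{n-1}$ (the harmonic number), which identifies $E(X_{(n)}-X_{(1)})=\alpha H_{n-1}$:

```python
from math import comb
from fractions import Fraction

def B(n):  # sum_{k=1}^{n-1} (-1)^(k+1) C(n-1,k) / k
    return sum(Fraction((-1) ** (k + 1) * comb(n - 1, k), k) for k in range(1, n))

def A(n):  # sum_{k=1}^{n-1} (-1)^(k+1) C(n-1,k) / k^2
    return sum(Fraction((-1) ** (k + 1) * comb(n - 1, k), k * k) for k in range(1, n))

def c1(n):
    """Weight c1 of Theorem 3.2: D(a12) / (D(a11) + D(a12)) = (2A - B^2) / (2A)."""
    return (2 * A(n) - B(n) ** 2) / (2 * A(n))

for n in (2, 3, 5, 10, 20):
    H = sum(Fraction(1, k) for k in range(1, n))  # harmonic number H_{n-1}
    print(n, B(n) == H, float(c1(n)))
```

For $n=2$ the two component estimators have equal variance and $c_{1}=1/2$; for $n=3$, $c_{1}=5/14$.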

Theorem 3.3. If $X\sim E(\alpha)$, $X_1,\dots,X_n$ is a sample of size $n$ from $X\sim E(\alpha)$, and $X_{(1)},\dots,X_{(n)}$ are the order statistics, then the local minimum variance unbiased estimator of the expectation based on $X_{(1)}$ and $X_{(n)}-\bar X$ is

$$\hat\alpha_{2}=c_{2}\hat\alpha_{21}+(1-c_{2})\hat\alpha_{22}, \quad (3.3)$$

where $c_{2}=1-\frac{(\mu_{1}(n))^{2}}{\mu_{2}(n)}$; $\hat\alpha_{21}=nX_{(1)}$ is the unbiased estimator of the expectation based on $X_{(1)}$; $\hat\alpha_{22}=\frac{X_{(n)}-\bar X}{\mu_{1}(n)}$ is the unbiased estimator of the expectation based on $X_{(n)}-\bar X$; and $\mu_{1}(n)$, $\mu_{2}(n)$ are the coefficients of $\alpha$ and $\alpha^{2}$ in $E(X_{(n)}-\bar X)$ and $E(X_{(n)}-\bar X)^{2}$, respectively:

$$\mu_{1}(n)=\sum_{k=0}^{n-2}(-1)^{k}C_{n-1}^{k}\,\frac{(n-1-k)^{n}}{n^{n-2}(n-1)(k+1)^{2}},\qquad \mu_{2}(n)=2\sum_{k=0}^{n-2}(-1)^{k}C_{n-1}^{k}\,\frac{(n-1-k)^{n+1}}{n^{n-2}(n-1)^{2}(k+1)^{3}}.$$

Proof. $\hat\alpha_{21}=nX_{(1)}$, $E\hat\alpha_{21}=\alpha$, $D\hat\alpha_{21}=\alpha^{2}$. As in Theorem 3.2, we only need the expectation, second moment and variance of $X_{(n)}-\bar X$, whose density is given in Equation (2).

$$E(X_{(n)}-\bar X)=\sum_{k=0}^{n-2}(-1)^{k}C_{n-1}^{k}\,\frac{(n-1-k)^{n}}{n^{n-2}(n-1)(k+1)^{2}}\,\alpha=\mu_{1}(n)\,\alpha,$$

so $E\hat\alpha_{22}=\frac{E(X_{(n)}-\bar X)}{\mu_{1}(n)}=\alpha$. Similarly,

$$E(X_{(n)}-\bar X)^{2}=2\sum_{k=0}^{n-2}(-1)^{k}C_{n-1}^{k}\,\frac{(n-1-k)^{n+1}}{n^{n-2}(n-1)^{2}(k+1)^{3}}\,\alpha^{2}=\mu_{2}(n)\,\alpha^{2},$$

so

$$D\hat\alpha_{22}=\frac{D(X_{(n)}-\bar X)}{(\mu_{1}(n))^{2}}=\frac{\left[\mu_{2}(n)-(\mu_{1}(n))^{2}\right]\alpha^{2}}{(\mu_{1}(n))^{2}},$$

$$c_{2}=\frac{\frac{\left[\mu_{2}(n)-(\mu_{1}(n))^{2}\right]\alpha^{2}}{(\mu_{1}(n))^{2}}}{\alpha^{2}+\frac{\left[\mu_{2}(n)-(\mu_{1}(n))^{2}\right]\alpha^{2}}{(\mu_{1}(n))^{2}}}=1-\frac{(\mu_{1}(n))^{2}}{\mu_{2}(n)}.$$

With this $c_{2}$, $\hat\alpha_{2}$ is the local minimum variance unbiased estimator of the expectation based on $X_{(1)}$ and $X_{(n)}-\bar X$.
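For concreteness, $\mu_{1}(n)$, $\mu_{2}(n)$ and $c_{2}$ can be evaluated directly from the sums in the theorem (a plain-Python sketch; for $n=3$ the sums give $\mu_{1}=5/4$ and $\mu_{2}=21/8$):

```python
from math import comb

def mu1(n):
    """Coefficient of alpha in E(X(n) - Xbar), per Theorem 3.3."""
    return sum((-1) ** k * comb(n - 1, k) * (n - 1 - k) ** n
               / (n ** (n - 2) * (n - 1) * (k + 1) ** 2) for k in range(n - 1))

def mu2(n):
    """Coefficient of alpha^2 in E(X(n) - Xbar)^2, per Theorem 3.3."""
    return 2 * sum((-1) ** k * comb(n - 1, k) * (n - 1 - k) ** (n + 1)
                   / (n ** (n - 2) * (n - 1) ** 2 * (k + 1) ** 3) for k in range(n - 1))

def c2(n):
    return 1 - mu1(n) ** 2 / mu2(n)

for n in (3, 4, 5, 10):
    print(n, mu1(n), mu2(n), c2(n))
```

Since $\mu_{2}(n)\alpha^{2}$ is a second moment and $(\mu_{1}(n)\alpha)^{2}$ the squared mean of the same positive random variable, $0<c_{2}<1$ always holds.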

4. Three local minimum variance unbiased estimators of variance

Let $DX=\alpha^{2}=\lambda$.

Theorem 4.1. If $X\sim E(\alpha)$ with $\lambda=\alpha^{2}$, $X_1,\dots,X_n$ is a sample of size $n$, and $X_{(1)},\dots,X_{(n)}$ are the order statistics, then the local minimum variance unbiased estimator based on $(X_{(1)})^{2}$ and $(\bar X-X_{(1)})^{2}$ is

$$\hat\lambda_{0}=d_{0}\hat\lambda_{01}+(1-d_{0})\hat\lambda_{02}, \quad (4.1)$$

where $d_{0}=\frac{4n+2}{5n^{2}-n+2}$, $\hat\lambda_{01}=\frac{n^{2}(X_{(1)})^{2}}{2}$ is the unbiased estimator based on $(X_{(1)})^{2}$, and $\hat\lambda_{02}=\frac{n(\bar X-X_{(1)})^{2}}{n-1}$ is the unbiased estimator based on $(\bar X-X_{(1)})^{2}$.

Proof. From Theorem 2.1, $\frac{2nX_{(1)}}{\sqrt\lambda}\sim\chi^{2}(2)$, $\frac{2n(\bar X-X_{(1)})}{\sqrt\lambda}\sim\chi^{2}(2(n-1))$, and $\bar X-X_{(1)}$ is independent of $X_{(1)}$. Hence $\frac{n^{2}(X_{(1)})^{2}}{2}$ and $\frac{n(\bar X-X_{(1)})^{2}}{n-1}$ are both unbiased estimators of $\lambda$, and the most efficient weighted unbiased estimator is $\hat\lambda_{0}=d_{0}\left(\frac{n^{2}(X_{(1)})^{2}}{2}\right)+(1-d_{0})\left(\frac{n(\bar X-X_{(1)})^{2}}{n-1}\right)$, where

$$E\left(\frac{2nX_{(1)}}{\sqrt\lambda}\right)^{4}=\int_{0}^{\infty}x^{4}\,\frac{1}{2}e^{-x/2}dx=16\times24=3\times2^{7},$$
$$D\left(\frac{n^{2}(X_{(1)})^{2}}{2}\right)=6\lambda^{2}-\lambda^{2}=5\lambda^{2};$$
$$E\left(\frac{2n(\bar X-X_{(1)})}{\sqrt\lambda}\right)^{4}=\int_{0}^{\infty}x^{4}\,\frac{1}{2^{n-1}\Gamma(n-1)}x^{n-2}e^{-x/2}dx=16(n+2)(n+1)n(n-1),$$
$$D\left(\frac{n(\bar X-X_{(1)})^{2}}{n-1}\right)=\frac{n^{2}E(\bar X-X_{(1)})^{4}}{(n-1)^{2}}-\lambda^{2}=\left[\frac{(n+2)(n+1)}{n(n-1)}-1\right]\lambda^{2}=\frac{(4n+2)\lambda^{2}}{n(n-1)};$$
$$d_{0}=\frac{D\left(\frac{n(\bar X-X_{(1)})^{2}}{n-1}\right)}{D\left(\frac{n^{2}(X_{(1)})^{2}}{2}\right)+D\left(\frac{n(\bar X-X_{(1)})^{2}}{n-1}\right)}=\frac{\frac{(4n+2)\lambda^{2}}{n(n-1)}}{5\lambda^{2}+\frac{(4n+2)\lambda^{2}}{n(n-1)}}=\frac{4n+2}{5n^{2}-n+2}.$$

Substituting gives $\hat\lambda_{0}=\frac{(2n^{3}+n^{2})(X_{(1)})^{2}}{5n^{2}-n+2}+\frac{5n^{2}(\bar X-X_{(1)})^{2}}{5n^{2}-n+2}$.
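The algebra above can be confirmed exactly with rational arithmetic (a sketch, not part of the proof; the component variances are expressed in units of $\lambda^{2}$):

```python
from fractions import Fraction

def d0(n):
    """d0 = D2 / (D1 + D2), with D1 = 5 and D2 = (4n+2)/(n(n-1)), per Theorem 4.1."""
    D1 = Fraction(5)
    D2 = Fraction(4 * n + 2, n * (n - 1))
    return D2 / (D1 + D2)

for n in range(2, 50):
    assert d0(n) == Fraction(4 * n + 2, 5 * n ** 2 - n + 2)  # closed form in the text
    # coefficients of (X_(1))^2 and (Xbar - X_(1))^2 in lambda_0-hat
    assert d0(n) * Fraction(n ** 2, 2) == Fraction(2 * n ** 3 + n ** 2, 5 * n ** 2 - n + 2)
    assert (1 - d0(n)) * Fraction(n, n - 1) == Fraction(5 * n ** 2, 5 * n ** 2 - n + 2)
print("Theorem 4.1 coefficients verified for n = 2, ..., 49")
```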

Theorem 4.2. If $X\sim E(\alpha)$ with $\lambda=\alpha^{2}$, $X_1,\dots,X_n$ is a sample of size $n$, and $X_{(1)},\dots,X_{(n)}$ are the order statistics, then the local minimum variance unbiased estimator based on $(X_{(1)})^{2}$ and $(X_{(n)}-X_{(1)})^{2}$ is $\hat\lambda_{1}=d_{1}\hat\lambda_{11}+(1-d_{1})\hat\lambda_{12}$, where

$$d_{1}=\frac{6\sum_{k=1}^{n-1}(-1)^{k+1}C_{n-1}^{k}\frac{1}{k^{4}}-\left(\sum_{k=1}^{n-1}(-1)^{k+1}C_{n-1}^{k}\frac{1}{k^{2}}\right)^{2}}{5\left(\sum_{k=1}^{n-1}(-1)^{k+1}C_{n-1}^{k}\frac{1}{k^{2}}\right)^{2}+6\sum_{k=1}^{n-1}(-1)^{k+1}C_{n-1}^{k}\frac{1}{k^{4}}-\left(\sum_{k=1}^{n-1}(-1)^{k+1}C_{n-1}^{k}\frac{1}{k^{2}}\right)^{2}}, \quad (4.2)$$

$\hat\lambda_{11}=\frac{n^{2}(X_{(1)})^{2}}{2}$ is the unbiased estimator based on $(X_{(1)})^{2}$, and $\hat\lambda_{12}$ is the unbiased estimator based on $(X_{(n)}-X_{(1)})^{2}$:

$$\hat\lambda_{12}=\frac{(X_{(n)}-X_{(1)})^{2}}{2\sum_{k=1}^{n-1}(-1)^{k+1}C_{n-1}^{k}\frac{1}{k^{2}}}.$$

Proof. With $\alpha=\sqrt\lambda$ and integration by parts,

$$E(X_{(n)}-X_{(1)})^{2}=\int_{0}^{\infty}x^{2}\,\frac{n-1}{\sqrt\lambda}\left[1-e^{-x/\sqrt\lambda}\right]^{n-2}e^{-x/\sqrt\lambda}dx=-x^{2}\left[1-\left(1-e^{-x/\sqrt\lambda}\right)^{n-1}\right]\Big|_{0}^{\infty}+2\int_{0}^{\infty}x\left[1-\left(1-e^{-x/\sqrt\lambda}\right)^{n-1}\right]dx=2\int_{0}^{\infty}\sum_{k=1}^{n-1}(-1)^{k+1}C_{n-1}^{k}\,x\,e^{-kx/\sqrt\lambda}dx=2\sum_{k=1}^{n-1}(-1)^{k+1}C_{n-1}^{k}\frac{\lambda}{k^{2}},$$

so $E\hat\lambda_{12}=E\left(\frac{(X_{(n)}-X_{(1)})^{2}}{2\sum_{k=1}^{n-1}(-1)^{k+1}C_{n-1}^{k}\frac{1}{k^{2}}}\right)=\lambda$. From Theorem 2.1, $\hat\lambda_{11}$ is independent of $\hat\lambda_{12}$, and both are unbiased estimators of $\lambda$; for $0\le d_{1}\le1$, $\hat\lambda_{1}=d_{1}\hat\lambda_{11}+(1-d_{1})\hat\lambda_{12}$ is unbiased and $D(d_{1}\hat\lambda_{11}+(1-d_{1})\hat\lambda_{12})=d_{1}^{2}D(\hat\lambda_{11})+(1-d_{1})^{2}D(\hat\lambda_{12})$. Differentiating with respect to $d_{1}$ and setting the derivative to zero gives $d_{1}=\frac{D(\hat\lambda_{12})}{D(\hat\lambda_{11})+D(\hat\lambda_{12})}$, with $D\hat\lambda_{11}=5\lambda^{2}$.

$$E(X_{(n)}-X_{(1)})^{4}=\int_{0}^{\infty}x^{4}\,\frac{n-1}{\sqrt\lambda}\left[1-e^{-x/\sqrt\lambda}\right]^{n-2}e^{-x/\sqrt\lambda}dx=4\int_{0}^{\infty}x^{3}\left[1-\left(1-e^{-x/\sqrt\lambda}\right)^{n-1}\right]dx=4\int_{0}^{\infty}\sum_{k=1}^{n-1}(-1)^{k+1}C_{n-1}^{k}\,x^{3}e^{-kx/\sqrt\lambda}dx=24\sum_{k=1}^{n-1}(-1)^{k+1}C_{n-1}^{k}\frac{\lambda^{2}}{k^{4}},$$

$$D\hat\lambda_{12}=\frac{E(X_{(n)}-X_{(1)})^{4}}{4\left(\sum_{k=1}^{n-1}(-1)^{k+1}C_{n-1}^{k}\frac{1}{k^{2}}\right)^{2}}-\lambda^{2}=\frac{\left[6\sum_{k=1}^{n-1}(-1)^{k+1}C_{n-1}^{k}\frac{1}{k^{4}}-\left(\sum_{k=1}^{n-1}(-1)^{k+1}C_{n-1}^{k}\frac{1}{k^{2}}\right)^{2}\right]\lambda^{2}}{\left(\sum_{k=1}^{n-1}(-1)^{k+1}C_{n-1}^{k}\frac{1}{k^{2}}\right)^{2}}.$$

With $d_{1}$ as in (4.2), $\hat\lambda_{1}$ is the local minimum variance unbiased estimator of the variance based on $(X_{(1)})^{2}$ and $(X_{(n)}-X_{(1)})^{2}$.
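$d_{1}$ can likewise be evaluated exactly with rational arithmetic; for $n=2$ the two component estimators have equal variance and $d_{1}=1/2$ (a sketch):

```python
from math import comb
from fractions import Fraction

def A2(n):  # sum_{k=1}^{n-1} (-1)^(k+1) C(n-1,k) / k^2
    return sum(Fraction((-1) ** (k + 1) * comb(n - 1, k), k ** 2) for k in range(1, n))

def A4(n):  # sum_{k=1}^{n-1} (-1)^(k+1) C(n-1,k) / k^4
    return sum(Fraction((-1) ** (k + 1) * comb(n - 1, k), k ** 4) for k in range(1, n))

def d1(n):
    """d1 = D(l12) / (D(l11) + D(l12)) = (6*A4 - A2^2) / (5*A2^2 + 6*A4 - A2^2)."""
    num = 6 * A4(n) - A2(n) ** 2
    return num / (5 * A2(n) ** 2 + num)

for n in (2, 3, 5, 10, 20):
    print(n, float(d1(n)))
```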

Theorem 4.3. If $X\sim E(\alpha)$ with $\lambda=\alpha^{2}$, $X_1,\dots,X_n$ is a sample of size $n$, and $X_{(1)},\dots,X_{(n)}$ are the order statistics, then the local minimum variance unbiased estimator of the variance based on $(X_{(1)})^{2}$ and $(X_{(n)}-\bar X)^{2}$ is

$$\hat\lambda_{2}=d_{2}\hat\lambda_{21}+(1-d_{2})\hat\lambda_{22}, \quad (4.3)$$

where $d_{2}=\frac{\mu_{4}(n)-(\mu_{2}(n))^{2}}{\mu_{4}(n)+4(\mu_{2}(n))^{2}}$; $\hat\lambda_{21}=\frac{n^{2}(X_{(1)})^{2}}{2}$ is the unbiased estimator of the variance based on $(X_{(1)})^{2}$; $\hat\lambda_{22}=\frac{(X_{(n)}-\bar X)^{2}}{\mu_{2}(n)}$ is the unbiased estimator of the variance based on $(X_{(n)}-\bar X)^{2}$; and $\mu_{2}(n)$, $\mu_{4}(n)$ are the coefficients of $\lambda$ and $\lambda^{2}$ in $E(X_{(n)}-\bar X)^{2}$ and $E(X_{(n)}-\bar X)^{4}$, respectively:

$$\mu_{2}(n)=2\sum_{k=0}^{n-2}(-1)^{k}C_{n-1}^{k}\,\frac{(n-1-k)^{n+1}}{n^{n-2}(n-1)^{2}(k+1)^{3}},\qquad \mu_{4}(n)=24\sum_{k=0}^{n-2}(-1)^{k}C_{n-1}^{k}\,\frac{(n-1-k)^{n+3}}{n^{n-2}(n-1)^{4}(k+1)^{5}}.$$

Proof. $\hat\lambda_{21}=\frac{n^{2}(X_{(1)})^{2}}{2}$, $E\hat\lambda_{21}=\lambda$, $D\hat\lambda_{21}=5\lambda^{2}$. As in Theorem 4.2, we only need the moments of $X_{(n)}-\bar X$, whose density is given in Equation (2) with $\alpha=\sqrt\lambda$.

$$E(X_{(n)}-\bar X)^{2}=2\sum_{k=0}^{n-2}(-1)^{k}C_{n-1}^{k}\,\frac{(n-1-k)^{n+1}}{n^{n-2}(n-1)^{2}(k+1)^{3}}\,\lambda=\mu_{2}(n)\,\lambda,$$

so $E\hat\lambda_{22}=\frac{E(X_{(n)}-\bar X)^{2}}{\mu_{2}(n)}=\lambda$. Similarly,

$$E(X_{(n)}-\bar X)^{4}=24\sum_{k=0}^{n-2}(-1)^{k}C_{n-1}^{k}\,\frac{(n-1-k)^{n+3}}{n^{n-2}(n-1)^{4}(k+1)^{5}}\,\lambda^{2}=\mu_{4}(n)\,\lambda^{2},$$

so

$$D\hat\lambda_{22}=\frac{D\left((X_{(n)}-\bar X)^{2}\right)}{(\mu_{2}(n))^{2}}=\frac{\left[\mu_{4}(n)-(\mu_{2}(n))^{2}\right]\lambda^{2}}{(\mu_{2}(n))^{2}},$$

$$d_{2}=\frac{\frac{\left[\mu_{4}(n)-(\mu_{2}(n))^{2}\right]\lambda^{2}}{(\mu_{2}(n))^{2}}}{5\lambda^{2}+\frac{\left[\mu_{4}(n)-(\mu_{2}(n))^{2}\right]\lambda^{2}}{(\mu_{2}(n))^{2}}}=\frac{\mu_{4}(n)-(\mu_{2}(n))^{2}}{\mu_{4}(n)+4(\mu_{2}(n))^{2}}.$$

With this $d_{2}$, $\hat\lambda_{2}$ is the local minimum variance unbiased estimator of the variance based on $(X_{(1)})^{2}$ and $(X_{(n)}-\bar X)^{2}$.
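A numeric sketch of $\mu_{2}(n)$, $\mu_{4}(n)$ and $d_{2}$, evaluated from the sums above (for $n=3$ the sums give $\mu_{2}=21/8$ and $\mu_{4}=1023/32$):

```python
from math import comb

def mu2(n):
    """Coefficient of lambda in E(X(n) - Xbar)^2, per Theorem 4.3."""
    return 2 * sum((-1) ** k * comb(n - 1, k) * (n - 1 - k) ** (n + 1)
                   / (n ** (n - 2) * (n - 1) ** 2 * (k + 1) ** 3) for k in range(n - 1))

def mu4(n):
    """Coefficient of lambda^2 in E(X(n) - Xbar)^4, per Theorem 4.3."""
    return 24 * sum((-1) ** k * comb(n - 1, k) * (n - 1 - k) ** (n + 3)
                    / (n ** (n - 2) * (n - 1) ** 4 * (k + 1) ** 5) for k in range(n - 1))

def d2(n):
    return (mu4(n) - mu2(n) ** 2) / (mu4(n) + 4 * mu2(n) ** 2)

for n in (3, 4, 5, 10):
    print(n, mu2(n), mu4(n), d2(n))
```

By the Cauchy-Schwarz inequality $\mu_{4}(n)\lambda^{2}=E(X_{(n)}-\bar X)^{4}\ge\left(E(X_{(n)}-\bar X)^{2}\right)^{2}=(\mu_{2}(n))^{2}\lambda^{2}$, so $0<d_{2}<1$.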

5. Efficiency comparison of three local minimum variance unbiased estimators of expectation and variance

Remark 5.1. The efficiency comparison of the three local minimum variance unbiased estimators is a comparison of their variances.

With $\mu_{1}(n)$, $\mu_{2}(n)$ and $\mu_{4}(n)$ as defined in Theorems 3.3 and 4.3:

$$D\hat\alpha_{0}=\frac{\alpha^{2}}{n},$$
$$D\hat\alpha_{1}=\frac{2\sum_{k=1}^{n-1}(-1)^{k+1}C_{n-1}^{k}\frac{1}{k^{2}}-\left(\sum_{k=1}^{n-1}(-1)^{k+1}C_{n-1}^{k}\frac{1}{k}\right)^{2}}{2\sum_{k=1}^{n-1}(-1)^{k+1}C_{n-1}^{k}\frac{1}{k^{2}}}\,\alpha^{2},$$
$$D\hat\alpha_{2}=\frac{\mu_{2}(n)-(\mu_{1}(n))^{2}}{\mu_{2}(n)}\,\alpha^{2};$$
$$D\hat\lambda_{0}=\frac{5(4n+2)\lambda^{2}}{5n^{2}-n+2},$$
$$D\hat\lambda_{1}=\frac{5\left[6\sum_{k=1}^{n-1}(-1)^{k+1}C_{n-1}^{k}\frac{1}{k^{4}}-\left(\sum_{k=1}^{n-1}(-1)^{k+1}C_{n-1}^{k}\frac{1}{k^{2}}\right)^{2}\right]\lambda^{2}}{4\left(\sum_{k=1}^{n-1}(-1)^{k+1}C_{n-1}^{k}\frac{1}{k^{2}}\right)^{2}+6\sum_{k=1}^{n-1}(-1)^{k+1}C_{n-1}^{k}\frac{1}{k^{4}}},$$
$$D\hat\lambda_{2}=\frac{5\left[\mu_{4}(n)-(\mu_{2}(n))^{2}\right]\lambda^{2}}{4(\mu_{2}(n))^{2}+\mu_{4}(n)}.$$

Let $\alpha=1$. The following is the variance comparison of the three local minimum variance unbiased estimators of the expectation and of the variance for sample sizes from 2 to 56.
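The six variance formulas above can be evaluated exactly (rational arithmetic avoids the catastrophic cancellation that the alternating sums suffer in floating point for larger $n$). The sketch below reproduces the comparison with $\alpha=\lambda=1$ and checks the orderings reported in Comments 5.1 and 5.2 for moderate $n$:

```python
from math import comb
from fractions import Fraction

def B(n):  return sum(Fraction((-1)**(k+1) * comb(n-1, k), k) for k in range(1, n))
def A2(n): return sum(Fraction((-1)**(k+1) * comb(n-1, k), k**2) for k in range(1, n))
def A4(n): return sum(Fraction((-1)**(k+1) * comb(n-1, k), k**4) for k in range(1, n))
def mu1(n):
    return sum(Fraction((-1)**k * comb(n-1, k) * (n-1-k)**n,
                        n**(n-2) * (n-1) * (k+1)**2) for k in range(n - 1))
def mu2(n):
    return 2 * sum(Fraction((-1)**k * comb(n-1, k) * (n-1-k)**(n+1),
                            n**(n-2) * (n-1)**2 * (k+1)**3) for k in range(n - 1))
def mu4(n):
    return 24 * sum(Fraction((-1)**k * comb(n-1, k) * (n-1-k)**(n+3),
                             n**(n-2) * (n-1)**4 * (k+1)**5) for k in range(n - 1))

def var_a0(n): return Fraction(1, n)
def var_a1(n): return (2 * A2(n) - B(n)**2) / (2 * A2(n))
def var_a2(n): return (mu2(n) - mu1(n)**2) / mu2(n)
def var_l0(n): return Fraction(5 * (4 * n + 2), 5 * n**2 - n + 2)
def var_l1(n): return 5 * (6 * A4(n) - A2(n)**2) / (4 * A2(n)**2 + 6 * A4(n))
def var_l2(n): return 5 * (mu4(n) - mu2(n)**2) / (4 * mu2(n)**2 + mu4(n))

for n in range(3, 21):
    assert var_a0(n) < var_a1(n) < var_a2(n)   # Comment 5.1
    assert var_l0(n) < var_l1(n) < var_l2(n)   # Comment 5.2
    print(n, float(var_a0(n)), float(var_a1(n)), float(var_a2(n)))
```

The range $3\le n\le20$ is an arbitrary cut-off for the check; the article tabulates the comparison up to $n=56$.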

Scatter plot with regression line 1: $e_{D\hat\alpha_{0}/D\hat\alpha_{1}}=a_{1}n^{b_{1}}+c_{1}$, $\hat a_{1}=2.939$, $\hat b_{1}=0.3144$, $\hat c_{1}=0.4314$.

Scatter plot with regression line 2: $e_{D\hat\alpha_{0}/D\hat\alpha_{2}}=a_{2}n^{b_{2}}+c_{2}$, $\hat a_{2}=2.733$, $\hat b_{2}=0.7192$, $\hat c_{2}=1.045$.

Scatter plot with regression line 3: $e_{D\hat\alpha_{1}/D\hat\alpha_{2}}=an^{b}+c$, $a=0.7955$, $b=1.338$, $c=0.6916$.

Comment 5.1. If $X\sim E(\alpha)$, $X_1,\dots,X_n$ is a sample of size $n$ from $X\sim E(\alpha)$, and $X_{(1)},\dots,X_{(n)}$ are the order statistics, then the efficiency comparison of the three local minimum variance unbiased estimators of the expectation is: $D\hat\alpha_{0}<D\hat\alpha_{1}<D\hat\alpha_{2}$.

Proof. Because $\hat\alpha_{0}=\bar X$ is the UMVUE of the expectation, $D\hat\alpha_{0}<D\hat\alpha_{1}$ and $D\hat\alpha_{0}<D\hat\alpha_{2}$. Comparing regression lines 1 and 2 with regression line 3 (equivalently, comparing the tabulated variances), we obtain $D\hat\alpha_{1}<D\hat\alpha_{2}$. Hence $D\hat\alpha_{0}<D\hat\alpha_{1}<D\hat\alpha_{2}$.

Scatter plot with regression line 4: $e_{D\hat\lambda_{0}/D\hat\lambda_{1}}=a_{3}n^{b_{3}}+c_{3}$, $\hat a_{3}=2.665$, $\hat b_{3}=0.5197$, $\hat c_{3}=0.8951$.

Scatter plot with regression line 5: $e_{D\hat\lambda_{0}/D\hat\lambda_{2}}=a_{4}n^{b_{4}}+c_{4}$, $\hat a_{4}=3.089$, $\hat b_{4}=0.971$, $\hat c_{4}=1.104$.

Scatter plot with regression line 6: $e_{D\hat\lambda_{1}/D\hat\lambda_{2}}=an^{b}+c$, $a=1.02$, $b=1.614$, $c=0.6719$.

Comment 5.2. If $X\sim E(\alpha)$ with $\lambda=\alpha^{2}$, $X_1,\dots,X_n$ is a sample of size $n$, and $X_{(1)},\dots,X_{(n)}$ are the order statistics, then the efficiency comparison of the three local minimum variance unbiased estimators of the variance is: $D\hat\lambda_{0}<D\hat\lambda_{1}<D\hat\lambda_{2}$.

Proof. Comparing regression lines 4 and 5 with regression line 6 (equivalently, comparing the tabulated variances), we obtain $D\hat\lambda_{0}<D\hat\lambda_{1}<D\hat\lambda_{2}$.

6. Discussion and conclusion

This article continues the work of the cited references to improve and complete the sampling theorem of the exponential distribution. As a natural corollary of the sampling theorem, one obtains three local minimum variance unbiased estimators of the mean and three of the variance of the exponential distribution. The sample mean is the UMVUE of the expectation, and $\frac{n}{n+1}\bar X^{2}$ is the UMVUE of the variance. Therefore, the three local minimum variance unbiased estimators of the mean and of the variance are not substitutes for the uniformly minimum variance unbiased estimators of the mean and variance, respectively; rather, they enrich the family of natural estimators. From Tables 1 and 2 and the scatter plots with regression lines 1-6, we conclude that $D\hat\alpha_{i}$ ($i=0,1,2$) and $D\hat\lambda_{i}$ ($i=0,1,2$) are strictly decreasing as $n$ increases; moreover, they all converge to zero, hence all the estimators are consistent.

Table 1. Variance comparison of three local minimum variance unbiased estimators of expectation under small sample

Table 2. Variance comparison of three local minimum variance unbiased estimators of variance under small sample

Remark 6.1. The advantages of these estimators are as follows. If the sample is incomplete, or the record value of the sample mean is not given, but the record values of the difference between the sample maximum and mean and of the sample minimum are known, then the local minimum variance unbiased estimator of the expectation based on $X_{(1)}$ and $X_{(n)}-\bar X$ is a practical estimator. Similarly, if only the record values of the sample maximum and the sample minimum are known, then the local minimum variance unbiased estimator of the expectation based on $X_{(1)}$ and $X_{(n)}-X_{(1)}$ is recommendable. For the variance: if the record values of $(\bar X-X_{(1)})^{2}$ and $(X_{(1)})^{2}$ are known, the local minimum variance unbiased estimator of the variance based on $(X_{(1)})^{2}$ and $(\bar X-X_{(1)})^{2}$ is a practical estimator; likewise, if the record values of $(X_{(n)}-\bar X)^{2}$ and $(X_{(1)})^{2}$, or of $(X_{(n)}-X_{(1)})^{2}$ and $(X_{(1)})^{2}$, are known, then the local minimum variance unbiased estimator of the variance based on the corresponding pair of statistics is recommendable.

Additional information

Funding

The authors received no direct funding for this research.

Notes on contributors

Muzhen Li

Guoan Li is an associate professor at Ningbo University, Department of Financial Engineering, with more than 70 publications. His research interests include prediction of earthquake hazards with applications to actuarial science, parameter estimation of the mixed generalized uniform distribution and its applications to data science, multivariate statistical analysis and its applications, statistical inference for multivariate survival distributions under dependent samples, and reasonable price assessment of land and real-estate acquisition compensation.

References

  • Al-Saleh, M. F., & Al-Hadhrami, S. A. (2003). Estimation of the mean of the exponential distribution using moving extremes ranked set sampling. Statistical Papers, 44, 367–382. doi:10.1007/s00362-003-0161-z
  • Arnold, B. C. (1968). Parameter estimation for a multivariate exponential distribution. Journal of the American Statistical Association, 63, 848–852.
  • Baklizi, A., & Dayyeh, W. A. (2003). Shrinkage estimation of P(Y<X)in the exponential case. Communications in Statistics - Simulation and Computation, 32(1), 31–42.
  • Cohen, A. C., & Helm, E. R. (1973). Estimation in the exponential distribution. Technometrics, 15(2), 415–418. doi:10.1080/00401706.1973.10489054
  • Dixit, U. J., & Nasiri, P. N. (2008). Estimation of parameters of a right truncated exponential distribution. Statistical Papers, 49, 225–236. doi:10.1007/s00362-006-0008-5
  • Guoan, L., Jianfeng, L., & Lihong, W. (2017). Parameter estimation for the multivariate exponential distribution which has a location parameter under censored samples or complete samples. Journal of Systems Science & Mathematical Sciences, 37(8), 1854–1865. (in Chinese)
  • Gupta, R. D., & Kundu, D. Generalized exponential distributions. Australian & New Zealand Journal of Statistics. doi:10.1111/1467-842X.00072
  • Kundu, D., & Gupta, R. D. (2009). Bivariate generalized exponential distributions. Journal of Multivariate Analysis, 100, 581–593. doi:10.1016/j.jmva.2008.06.012
  • Lawrance, A. J., & Lewis, P. A. W. (1983). Simple dependent pairs of exponential and uniform random variables. Operations Research, 31, 1179–1197. doi:10.1287/opre.31.6.1179
  • Li, G. A. (2016). Sampling fundamental theorem for exponential distribution with application to parameter estimation in the four-parameter bivariate exponential distribution of Marshall and Olkin. Statistical Research, 33(7), 98–102. (in Chinese)
  • Marshall, A. W., & Olkin, I. (1967). A multivariate exponential distribution. Journal of the American Statistical Association, 62(1), 30–44. doi:10.1080/01621459.1967.10482885
  • Nie, K., Sinha, B. K., & Hedayat, A. S. (2017). Unbiased estimation of reliability function from a mixture of two exponential distributions based on a single observation. Statistics and Probability Letters, 127, 7–13. doi:10.1016/j.spl.2017.03.026