Original Articles

A consistent method of estimation for the three-parameter lognormal distribution based on Type-II right censored data

Pages 5693-5708 | Received 28 Feb 2014, Accepted 18 Jul 2014, Published online: 19 Jul 2016
ABSTRACT

In this paper, we propose a parameter estimation method for the three-parameter lognormal distribution based on Type-II right censored data. In the proposed method, under mild conditions, the estimates always exist uniquely in the entire parameter space, and the estimators also have consistency over the entire parameter space. Through Monte Carlo simulations, we further show that the proposed method performs very well compared to a prominent method of estimation in terms of bias and root mean squared error (RMSE) in small-sample situations. Finally, two examples based on real data sets are presented for illustrating the proposed method.
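To fix ideas on the data setting, here is a minimal sketch (with hypothetical parameter values) of drawing a Type-II right censored sample from the three-parameter lognormal distribution, where X = γ + exp (μ)exp (σZ) with Z standard normal:

```python
import math
import random

def lognormal3_sample(n, sigma, mu, gamma, rng):
    # X = gamma + exp(mu) * exp(sigma * Z), Z ~ N(0, 1):
    # three-parameter lognormal with shape sigma, log-scale mu, threshold gamma.
    return [gamma + math.exp(mu) * math.exp(sigma * rng.gauss(0.0, 1.0))
            for _ in range(n)]

def type2_right_censor(sample, r):
    # Type-II right censoring: only the r smallest order statistics
    # X(1) <= ... <= X(r) are observed; the remaining n - r values are censored.
    return sorted(sample)[:r]

rng = random.Random(0)
x = lognormal3_sample(n=20, sigma=0.5, mu=1.0, gamma=2.0, rng=rng)
obs = type2_right_censor(x, r=15)
print(len(obs), all(obs[i] <= obs[i + 1] for i in range(len(obs) - 1)))
# prints: 15 True
```

The threshold γ acts as a lower bound on the support, so every observation exceeds it; estimation methods for this model must cope with the fact that the likelihood behaves badly as γ approaches X(1).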

Acknowledgments

The authors thank the Associate Editor and the referee for their valuable comments and suggestions that greatly improved this work.

Appendix A. Proofs

A.1. Proof of Proposition 1

Denote F( · ; σ, 1, 0) by G( · ; σ) and f( · ; σ, 1, 0) by g( · ; σ), for simplicity. Suppose Z1, …, Zn are n independent random variables from the standard lognormal distribution with shape parameter σ. For i = 1, …, n, let Zi: n denote the ith order statistic among Z1, …, Zn.

For r − 2 real values 0 ⩽ w2 ⩽ ⋅⋅⋅ ⩽ wr − 1 ⩽ 1, let us consider (A1) where C = n!/(n − r)!.

It is easily shown that (A1) is partially differentiable with respect to wi, i = 2, …, r − 1 (the proof is omitted for the sake of brevity). Then, the partial derivative of (A1) with respect to wi, i = 2, …, r − 1, is given by We note that, for w2, …, wr − 1 for which 0 ⩽ w2 ⩽ ⋅⋅⋅ ⩽ wr − 1 ⩽ 1 does not hold, the partial derivative of (A1) with respect to wi, i = 2, …, r − 1, is always equal to 0. Then, after suitable transformations of the variables u and v, the proof of Proposition 1 is complete.

Appendix B. Proof of Theorem 1

First, we shall show that the likelihood equation has at least one solution. For σ > 0, given w2, …, wr − 1 such that 0 ⩽ w2 ⩽ ⋅⋅⋅ ⩽ wr − 1 ⩽ 1, note that (∂/∂σ)L(σ; w2, …, wr − 1) = L′(σ; w2, …, wr − 1) can be rewritten as (B1) where with C = n!/(n − r)!, and Φ( · ) and φ( · ) being the cdf and pdf of the standard normal distribution, respectively.

For simplicity, we denote L′(σ; w2, …, wr − 1) by L′(σ), ξ(σ, u, v; w2, …, wr − 1) by ξ(σ, u, v), and ξ′(σ, u, v;w2, …, wr − 1) by ξ′(σ, u, v) in the remaining part of this Appendix.

Since, for every u > 0 and v > 0, exp {ξ(σ, u, v)} > 0 for all σ > 0, lim_{σ ↓ 0} ξ′(σ, u, v) = ∞, and lim_{σ → ∞} ξ′(σ, u, v) < 0, there exists a positive real value δ1 such that L′(σ) > 0 for all σ in (0, δ1), and a positive real value δ2 such that L′(σ) < 0 for all σ > δ2. Also, L′(σ) is continuous in σ for σ > 0. Thus, L′(σ) = 0 always has at least one solution.
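The existence argument rests on a sign change: L′(σ) > 0 near σ = 0, L′(σ) < 0 for large σ, and continuity in between. That is exactly the bracketing condition a one-dimensional root finder needs. A minimal sketch with a hypothetical stand-in score function (not the paper's L′, whose integral form is not reproduced here):

```python
def score(sigma):
    # Hypothetical stand-in for L'(sigma): positive near 0, negative for
    # large sigma, continuous in between -- the sign pattern the proof
    # establishes for the true L'. Its unique root is sigma = 1.
    return 1.0 / sigma - sigma

def bisect_root(f, lo, hi, tol=1e-12):
    # Requires f(lo) > 0 > f(hi); halves the bracket until it is
    # shorter than tol, then returns the midpoint.
    assert f(lo) > 0 > f(hi)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(mid) > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

root = bisect_root(score, 1e-6, 100.0)
print(round(root, 6))  # prints 1.0
```

Uniqueness, shown next in the proof, is what guarantees that any such bracketing search converges to the estimate rather than to one of several candidate roots.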

Next, we shall show that the number of solutions is exactly one. Let, for Δσ, (B2) Then, L′(σ + Δσ) can be rewritten as (B3)

From now on, let us focus on the case when Δσ ⩾ 0.

Let us denote the set {(u, v): ξ′(σ, u, v) = 0} by χ0(σ) and {(u, v): ξ′(σ, u, v) ≠ 0} by χ1(σ), for σ > 0. We then note that (B4) while (B5) for any σ.

Let σ* be one of the solutions of L′(σ) = 0. Then, (B6)

From (B4) and (B5), it is easy to see that ψ(σ*, Δσ, u, v), for any (u, v) ∈ χ1(σ*), approaches 1 faster than ψ(σ*, Δσ, u, v), for any (u, v) ∈ χ0(σ*), approaches 1, as Δσ decreases. Hence, ∫∫_{χ1(σ*)} ψ(σ*, Δσ, u, v)ξ′(σ*, u, v)exp {ξ(σ*, u, v)} dv du approaches ∫∫_{χ1(σ*)} ξ′(σ*, u, v)exp {ξ(σ*, u, v)} dv du faster than ∫∫_{χ0(σ*)} ψ(σ*, Δσ, u, v)ξ′(σ*, u, v)exp {ξ(σ*, u, v)} dv du approaches ∫∫_{χ0(σ*)} ξ′(σ*, u, v)exp {ξ(σ*, u, v)} dv du, as Δσ decreases. Note further that ∫∫_{χ1(σ*)} ξ′(σ*, u, v)exp {ξ(σ*, u, v)} dv du = ∫∫_{χ0(σ*)} ξ′(σ*, u, v)exp {ξ(σ*, u, v)} dv du = 0. Therefore, by the fundamental theorem of differential calculus, the sign of the RHS of (B6) agrees with the sign of ∫∫_{χ0(σ*)} ψ(σ*, Δσ, u, v)ξ′(σ*, u, v)exp {ξ(σ*, u, v)} dv du for sufficiently small Δσ > 0, which implies (B7) since, for any Δσ > 0, ξ′(σ* + Δσ, u, v) < 0 and exp {ξ(σ* + Δσ, u, v)} > 0 for any (u, v) ∈ χ0(σ*), and thus ∫∫_{χ0(σ*)} ψ(σ*, Δσ, u, v)ξ′(σ*, u, v)exp {ξ(σ*, u, v)} dv du = ∫∫_{χ0(σ*)} ξ′(σ* + Δσ, u, v)exp {ξ(σ* + Δσ, u, v)} dv du < 0.

Analogously, we obtain the following inequality: (B8) The proof is very similar to that of (B7) and is therefore omitted for the sake of brevity.

It follows from (B7) and (B8) that (∂/∂σ)L′(σ*) < 0, which clearly implies that L′(σ) changes sign only once with respect to σ.

From the facts established above, L′(σ) = 0 always has a unique solution with respect to σ, and this completes the proof of Theorem 1.

B.1. Proof of Lemma 1

Let us suppose now that X1, …, Xn are i.i.d. random variables from the three-parameter lognormal distribution with cdf in (2), and X1: n, …, Xn: n are the order statistics obtained by arranging the above Xi’s in increasing order of magnitude. We also assume that Wi, i = 1, …, n, are the random variables whose order statistics are Wi: n = (Xi: n − X1: n)/(Xr: n − X1: n), i = 1, …, n. Then, by using Theorem 2 of Iliopoulos and Balakrishnan (2009), conditional on Z1: n = u and Zr: n = v, where Z1: n = (X1: n − γ0)/exp (μ0) and Zr: n = (Xr: n − γ0)/exp (μ0), and μ0 and γ0 are the true values of μ and γ, respectively, we have (W2: n, …, Wr − 1: n) to be distributed exactly as the order statistics from a sample of size r − 2 from the distribution with density (B9) for 0 ⩽ w ⩽ 1, (Wr + 1: n, …, Wn: n) are distributed exactly as the order statistics from a sample of size n − r from the distribution with density (B10) for w ⩾ 1, and further that (W2: n, …, Wr − 1: n) and (Wr + 1: n, …, Wn: n) are conditionally independent. Therefore, conditional on Z1: n = u and Zr: n = v for v > u ⩾ 0, we have the joint density function of (W2: n, …, Wr − 1: n) and (Wr + 1: n, …, Wn: n) to be for 0 ⩽ w2: n ⩽ ⋅⋅⋅ ⩽ wr − 1: n ⩽ 1 ⩽ wr + 1: n ⩽ ⋅⋅⋅ ⩽ wn: n. The above result implies that Wi, i = 1, …, n, excluding the random variables corresponding to W1: n, Wr: n, and Wr + 1: n, …, Wn: n, are i.i.d. with the conditional density function in (B9), denoted by , where .
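The location-scale invariance underlying this construction (the normalized order statistics Wi: n = (Xi: n − X1: n)/(Xr: n − X1: n) do not depend on μ or γ) can be checked numerically; a minimal sketch with hypothetical parameter values:

```python
import math
import random

def normalized_spacings(x_sorted, r):
    # W_{i:n} = (X_{i:n} - X_{1:n}) / (X_{r:n} - X_{1:n}), i = 1, ..., n
    x1, xr = x_sorted[0], x_sorted[r - 1]
    return [(xi - x1) / (xr - x1) for xi in x_sorted]

n, r, sigma = 20, 15, 0.5
rng = random.Random(7)
z = sorted(rng.gauss(0.0, 1.0) for _ in range(n))

# One underlying standard normal sample, two hypothetical (mu, gamma) pairs:
# X_{i:n} = gamma + exp(mu) * exp(sigma * Z_{i:n})
xa = [1.0 + math.exp(0.0) * math.exp(sigma * zi) for zi in z]
xb = [5.0 + math.exp(2.0) * math.exp(sigma * zi) for zi in z]

wa = normalized_spacings(xa, r)
wb = normalized_spacings(xb, r)
print(all(abs(a - b) < 1e-9 for a, b in zip(wa, wb)))
# prints: True
```

Because the shift γ and the scale exp (μ) cancel in each ratio, the Wi: n carry information about σ alone, which is what lets the lemma condition on Z1: n and Zr: n and treat the remaining normalized order statistics as a sample from a one-parameter family.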

Now, denote by , and suppose that Z1: n = (X′1: n − γ)/exp (μ) and Zr: n = (X′r: n − γ)/exp (μ), where X′1: n and X′r: n are the first- and rth-order statistics from the three-parameter lognormal distribution with parameters σ ≠ σ0, μ ≠ μ0, and γ ≠ γ0. Then, for any fixed , such that 0 ⩽ u < v and 0 ⩽ u′ < v′, where , under the conditions and , it follows that converges in probability to uniformly in σ > 0 as n → ∞. We also observe that (B11) holds by Jensen’s inequality.

Hence, we have or (B12)

Now, let us note that the joint density of , with σ0 > 0, is given by and we see that (B13) since is bounded by 1. Then, by applying the dominated convergence theorem, we see that The proof of Lemma 1 is thus completed.

Funding

Hideki Nagatsuka was partially supported by the Grant-in-Aid for Young Scientists (B) 24710170, The Ministry of Education, Culture, Sports, Science and Technology, Japan, which supported a collaborative research visit to the second author.
