Research Article

Rate of complete second-order moment convergence and theoretical applications

Pages 566-574 | Received 20 Sep 2021, Accepted 20 May 2022, Published online: 06 Jun 2022

ABSTRACT

The purpose of this work is to present a novel mode of convergence, complete second-order moment convergence with a rate, which implies almost complete convergence and gives a smaller rate of convergence. Indeed, this mode is easier to obtain and performs better than almost complete convergence in the case of the nonparametric kernel estimators of the density function, of the distribution function and of the quantile function. A great advantage of the proposed approach is that fewer conditions are imposed on the kernel function, thanks to the use of the mean squared error expression.

1. Introduction

The rate of convergence is an important concept that describes the speed at which a sequence converges to its limit. In this setting, a fundamental question is how fast the convergence is; illuminating theoretical studies on the subject have been carried out, proposing important new results, algorithms and applications to address this issue. The purpose of this work is to present a novel mode of convergence, namely complete second-order moment convergence with a rate. The novelty here is the introduction of the complete second-order moment convergence rate. Hence, we give proofs of different properties of this kind of convergence. Almost complete convergence is induced by second-order convergence in different contexts (see [Citation1–3] and, recently, Yu et al. [Citation29]). The complete convergence concept was introduced by Hsu and Robbins [Citation4]. It has since been used by several authors, such as Gut and Stadtmüller [Citation5], Gut [Citation6,Citation7], Li et al. [Citation8], Sung [Citation9], Sung and Volodin [Citation10]. The interest of such a notion lies in the fact that almost complete convergence (a.c.) implies almost sure convergence (a.s.) due to the Borel–Cantelli lemma. However, in some situations at least, it is much easier to obtain complete second-order moment convergence (c.s.m.) than almost complete convergence.

As a practical framework, rates of complete second-order moment convergence for the kernel estimators of the probability density, the distribution function and the quantile function are established. We also discuss the speed at which these estimators converge. First, in the context of estimating a probability density function, many studies using different methods have been proposed. The kernel method is one of the most convenient of these methods, since it does not require a multiple choice of parameters. Rosenblatt [Citation11] was the earliest pioneer of the class of kernel density estimators, using two parameters, namely the kernel and the bandwidth. For this estimator, convergence in probability was established by Parzen [Citation12]. Habbema et al. [Citation13], Hall and Kang [Citation14], Hall and Wand [Citation15], Ghosh and Chaudhuri [Citation16] and Ghosh and Hall [Citation17] can be consulted for various works on the subject, in particular the estimation of densities by classical kernels. In the case of independent observations, optimal rates of convergence to zero for the mean square error and the bias of kernel estimators have been addressed by several authors under varying conditions on the kernel ($K$) and the density ($f$). As a contribution, a new complete second-order moment convergence with a rate is introduced for the first time to improve the rates of convergence for the bias and the mean square error (MSE) of kernel density estimators. Then the complete convergence of the kernel density estimator is achieved under weaker conditions on the density function than those proposed in the literature. As a consequence of the complete second-order moment convergence, almost complete convergence is obtained with a better rate.

Second, the proposed mode of convergence is applied to the kernel estimators of the distribution function and of the quantiles. Notice that Nadaraya [Citation18] proposed the kernel estimator of the distribution function, while Parzen [Citation19] revisited the framework of Nadaraya [Citation18] and constructed the kernel quantile estimators, for which we establish the rate of almost complete convergence. To the best of our knowledge, this is a new result obtained from the proposed convergence mode. A great advantage of the proposed approach is that fewer conditions are imposed on the kernel function, thanks to the use of the mean squared error expression.

2. Complete second-order moment convergence with a rate

Throughout this paper, real-valued random variables are defined on a fixed probability space $(\Omega,\mathcal{A},P)$.

Let $(U_n)_{n\in\mathbb{N}}$ and $(V_n)_{n\in\mathbb{N}}$ be two sequences of real numbers. We assume that $(V_n)_{n\in\mathbb{N}}$ does not vanish from a certain rank. We say that $(U_n)_{n\in\mathbb{N}}$ is dominated by $(V_n)_{n\in\mathbb{N}}$ if there exist a real number $M$ and an integer $n_0$ such that, for all $n\ge n_0$, we have $|U_n|\le M|V_n|$, and we write $U_n=O(V_n)$.

Definition 2.1

A sequence $(X_n)_{n\in\mathbb{N}}$ of random variables is said to be complete second-order moment convergent (c.s.m.) to the random variable $X$, with convergence rate $\frac{1}{U_n}$, if $$\sum_{n\in\mathbb{N}}U_n\,E(X_n-X)^2<\infty,$$ where $(U_n)$ is a sequence of positive numbers. We denote this mode $X_n-X=O_{c.s.m}\left(\frac{1}{U_n}\right)$.

The following theorem shows that if $(X_n)_{n\in\mathbb{N}}$ converges in complete second-order moment to $X$ with rate $\frac{1}{U_n}$, then it is almost completely convergent with rate $\frac{1}{\sqrt{U_n}}$.

Theorem 2.1

If a sequence $(X_n)_{n\in\mathbb{N}}$ of random variables verifies $X_n-X=O_{c.s.m}\left(\frac{1}{U_n}\right)$, then $X_n-X=O_{a.c}\left(\frac{1}{\sqrt{U_n}}\right)$. Moreover, for $\beta>1$, $\epsilon>0$ and the sequence $(V_n)_{n\in\mathbb{N}}=(n^{\beta}U_n)_{n\in\mathbb{N}}$, we obtain $$\sum_{n\in\mathbb{N}}n^{\beta}P(|X_n-X|>\epsilon)\le\sum_{n\in\mathbb{N}}n^{\beta}U_n\,E(X_n-X)^2<\infty.$$

Proof.

Suppose that $\sum_{n\in\mathbb{N}}U_n\,E(X_n-X)^2<\infty$; then from the Markov inequality we obtain $$P\left(|X_n-X|\ge\frac{1}{\sqrt{U_n}}\right)\le U_n\,E(X_n-X)^2,$$ so $$\sum_{n\in\mathbb{N}}P\left(|X_n-X|\ge\frac{1}{\sqrt{U_n}}\right)\le\sum_{n\in\mathbb{N}}U_n\,E(X_n-X)^2<\infty.$$ Now, $\frac{1}{\sqrt{U_n}}$ is a rate of convergence, so $\frac{1}{\sqrt{U_n}}\to0$, which is equivalent to: for each $\epsilon>0$ there exists $n_0\in\mathbb{N}$ such that for all $n\ge n_0$ we have $\frac{1}{\sqrt{U_n}}<\epsilon$. So $$P(|X_n-X|>\epsilon)\le P\left(|X_n-X|\ge\frac{1}{\sqrt{U_n}}\right),$$ which implies $$\sum_{n\in\mathbb{N}}n^{\beta}P(|X_n-X|>\epsilon)\le\sum_{n\in\mathbb{N}}n^{\beta}U_n\,E(X_n-X)^2<\infty.$$
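To make Definition 2.1 concrete, here is a small numerical illustration (our own toy example, not from the paper): take $X_n=X+Z_n/n^{3/2}$ with $Z_n$ standard normal, so $E(X_n-X)^2=n^{-3}$, and choose $U_n=n$; then $\sum_n U_n\,E(X_n-X)^2=\sum_n n^{-2}<\infty$, i.e. $X_n-X=O_{c.s.m}(1/n)$.

```python
import numpy as np

rng = np.random.default_rng(2)

def second_moment(n, reps=2000):
    # Monte Carlo estimate of E(X_n - X)^2 for X_n = X + Z_n / n^{1.5}
    z = rng.normal(size=reps)
    return float(np.mean((z / n**1.5) ** 2))

# Partial sum of U_n * E(X_n - X)^2 with U_n = n: approximately
# sum_{n=1}^{200} n^{-2}, which stays bounded (pi^2/6 in the limit),
# so the c.s.m. criterion of Definition 2.1 holds with rate 1/n.
partial_sum = sum(n * second_moment(n) for n in range(1, 201))
```

The bounded partial sum is exactly the summability requirement of the definition; the a.c. conclusion of Theorem 2.1 then applies with rate $1/\sqrt{n}$.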

The following proposition gives some elementary calculus rules.

Proposition 2.1

Assume $\lim_{n\to\infty}U_n=+\infty$, $X_n-l_X=O_{c.s.m}\left(\frac{1}{U_n}\right)$ and $Y_n-l_Y=O_{c.s.m}\left(\frac{1}{U_n}\right)$, where $l_X$, $l_Y$ are real numbers. We have

(1)

$(X_n+Y_n)-(l_X+l_Y)=O_{c.s.m}\left(\frac{1}{U_n}\right)$;

(2)

$X_n\cdot Y_n-l_X\cdot l_Y=O_{c.s.m}\left(\frac{1}{U_n}\right)$;

(3)

$\frac{1}{X_n}-\frac{1}{l_X}=O_{c.s.m}\left(\frac{1}{U_n}\right)$, with $l_X\ne0$.

Proof.

  1. Immediately from the following inequality: $$\sum_{n\in\mathbb{N}}U_n\,E\big((X_n+Y_n)-(l_X+l_Y)\big)^2\le2\left(\sum_{n\in\mathbb{N}}U_n\,E(X_n-l_X)^2+\sum_{n\in\mathbb{N}}U_n\,E(Y_n-l_Y)^2\right).$$

  2. We have $$E(X_nY_n-l_Xl_Y)^2=\int_0^{+\infty}P\big((X_nY_n-l_Xl_Y)^2>t\big)\,dt=\int_0^{+\infty}P\big(|(X_n-l_X)(Y_n-l_Y)+l_Y(X_n-l_X)+l_X(Y_n-l_Y)|^2>t\big)\,dt\le E|X_n-l_X|^2+E|Y_n-l_Y|^2+l_Y^2\,E|X_n-l_X|^2+l_X^2\,E|Y_n-l_Y|^2,$$ then $$\sum_{n\in\mathbb{N}}U_n\,E(X_nY_n-l_Xl_Y)^2\le\sum_{n\in\mathbb{N}}U_n\,E|X_n-l_X|^2+\sum_{n\in\mathbb{N}}U_n\,E|Y_n-l_Y|^2+l_Y^2\sum_{n\in\mathbb{N}}U_n\,E|X_n-l_X|^2+l_X^2\sum_{n\in\mathbb{N}}U_n\,E|Y_n-l_Y|^2<\infty.$$

  3. For $l_X\ne0$, $$E\left(\frac{1}{X_n}-\frac{1}{l_X}\right)^2=\int_0^{+\infty}P\left(\left|\frac{1}{X_n}-\frac{1}{l_X}\right|^2>t\right)dt=\int_0^{+\infty}P\big(|X_n-l_X|^2>t\,(X_n\,l_X)^2\big)\,dt\le\int_0^{+\infty}P\big(|X_n-l_X|^2>t\big)\,dt=E|X_n-l_X|^2,$$ so $$\sum_{n\in\mathbb{N}}U_n\,E\left(\frac{1}{X_n}-\frac{1}{l_X}\right)^2\le\sum_{n\in\mathbb{N}}U_n\,E|X_n-l_X|^2<\infty.$$

Now we have two properties which are consequences of the previous calculus rules.

Corollary 2.1

Assume $\lim_{n\to\infty}U_n=+\infty$, $X_n-l_X=O_{c.s.m}\left(\frac{1}{U_n}\right)$ and $Y_n-l_Y=O_{c.s.m}\left(\frac{1}{U_n}\right)$, where $l_X$, $l_Y$ are real numbers. We have

(1)

$X_n\cdot Y_n-l_X\cdot l_Y=O_{c.s.m}\left(\frac{1}{U_n}\right)$;

(2)

$\frac{X_n}{Y_n}-\frac{l_X}{l_Y}=O_{c.s.m}\left(\frac{1}{U_n}\right)$, with $l_Y\ne0$.

Proof.

Follows directly from Proposition 2.1.

3. Theoretical applications

In this section, rates of complete second-order moment convergence for the kernel estimators of the probability density, the distribution function and the quantile function are established. The following remark is very important for establishing these rates.

Remark 3.1

In all that follows, the assumption of the convergence of $\sum_{n\ge1}U_nh_n^4$ or of $\sum_{n\ge1}U_nh_n^2$ relies on the fact that, if $h_n<1$, we have $U_nh_n^4\le U_nh_n^2$, so the convergence of $\sum_{n\ge1}U_nh_n^2$ implies the convergence of $\sum_{n\ge1}U_nh_n^4$. Since $h_n\to0$ as $n\to\infty$, we have $h_n<1$ from a certain rank, and the convergence is obtained.

3.1. Kernel density estimator

Let $X_1,X_2,\ldots,X_n$ be independent and identically distributed copies of a random variable $X$, which has an unknown continuous probability density function $f$. The kernel density estimator $\hat f_n$ of the unknown density $f$, defined by Parzen [Citation12] and Rosenblatt [Citation11], is given by $$\hat f_n(x)=\frac{1}{nh_n}\sum_{i=1}^{n}K\left(\frac{x-X_i}{h_n}\right),$$ where $(h_n)_{n\ge1}$ is a sequence of positive numbers, usually called the bandwidth or smoothing parameter, and $K$ is an integrable Borel measurable function satisfying $K\ge0$ and $\int_{\mathbb{R}}K(x)\,dx=1$, called the kernel. Assume that the kernel $K$ and the density $f$ verify the following conditions:

(H1)

$\forall x\in\mathbb{R}$, $K(-x)=K(x)$;

(H2)

$\sup_{x\in\mathbb{R}}|K(x)|\le M<\infty$;

(H3)

$\int_{-\infty}^{+\infty}x^2K(x)\,dx<\infty$;

(H4)

$f\in\mathcal{C}^2$ and $f^{(2)}$ is bounded.
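The estimator $\hat f_n$ above can be sketched in a few lines (a minimal illustration with a Gaussian kernel, which satisfies (H1)–(H3); the function names are ours, not from the paper):

```python
import numpy as np

def kernel_density_estimate(x, sample, h):
    """f_hat(x) = (1 / (n h)) * sum_i K((x - X_i) / h) with a Gaussian kernel."""
    sample = np.asarray(sample)
    u = (x - sample) / h                          # scaled distances to observations
    k = np.exp(-0.5 * u**2) / np.sqrt(2 * np.pi)  # standard Gaussian kernel values
    return float(k.sum() / (sample.size * h))

rng = np.random.default_rng(0)
data = rng.normal(size=500)
h = 500 ** (-1 / 5)   # bandwidth of optimal order n^{-1/5}, as used later in the text
est = kernel_density_estimate(0.0, data, h)   # estimate of the N(0,1) density at 0
```

For a standard normal sample, `est` should be close to the true density value $1/\sqrt{2\pi}\approx0.399$, up to smoothing bias and sampling noise.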

Theorem 3.1

Under (H1)–(H4), and supposing that $\sum_{n\ge1}U_nh_n^4$ or $\sum_{n\ge1}U_nh_n^2$ converges (see Remark 3.1), we have $$\hat f_n(x)-f(x)=O_{c.s.m}\left(\frac{1}{U_n}\right).\qquad(1)$$

Proof.

To prove (1), notice first that the mean squared error (MSE) of $\hat f_n$, defined by $\mathrm{MSE}(\hat f_n(x))=E(\hat f_n(x)-f(x))^2$, can be written as $$\mathrm{MSE}(\hat f_n(x))=\mathrm{Bias}(\hat f_n(x))^2+\mathrm{Var}(\hat f_n(x)).\qquad(2)$$ Hence to prove (1), it suffices to show that $$\sum_{n\ge1}U_n\,\mathrm{Var}[\hat f_n(x)]<\infty\qquad(3)$$ and $$\sum_{n\ge1}U_n\,\mathrm{Bias}(\hat f_n(x))^2<\infty.\qquad(4)$$ For inequality (3), $$\mathrm{Var}[\hat f_n(x)]=\frac{1}{n^2h_n^2}\,\mathrm{Var}\left[\sum_{i=1}^{n}K\left(\frac{x-X_i}{h_n}\right)\right]\le\frac{1}{nh_n^2}\,E\left[K^2\left(\frac{x-X}{h_n}\right)\right].$$ Using condition (H2) on the kernel $K$, it follows that $\mathrm{Var}[\hat f_n(x)]\le\frac{M^2}{nh_n^2}$. Therefore $$\sum_{n\ge1}U_n\,\mathrm{Var}[\hat f_n(x)]\le M^2\sum_{n\ge1}\frac{U_n}{nh_n^2}.$$ Since $\sum_{n\ge1}\frac{U_n}{nh_n^2}$ converges, $\sum_{n\ge1}U_n\,\mathrm{Var}[\hat f_n(x)]<\infty$.

On the other hand, for inequality (4) we have $$E[\hat f_n(x)]=E\left[\frac{1}{nh_n}\sum_{i=1}^{n}K\left(\frac{x-X_i}{h_n}\right)\right]=\int_{\mathbb{R}}K(z)\,f(x-zh_n)\,dz.$$ Using a Taylor expansion of the function $f$ about the point $x$ up to order 3, together with (H3), (H1) and (H4), one obtains $$E[\hat f_n(x)]=f(x)\int_{\mathbb{R}}K(z)\,dz-h_nf'(x)\int_{\mathbb{R}}zK(z)\,dz+\frac{h_n^2}{2}f^{(2)}(x)\int_{\mathbb{R}}z^2K(z)\,dz-\frac{h_n^3}{6}f^{(3)}(\theta)\int_{\mathbb{R}}z^3K(z)\,dz,$$ where $\theta$ is a real number between $x$ and $x-zh_n$. Hence $$E[\hat f_n(x)]\simeq f(x)+\frac{h_n^2}{2}f^{(2)}(x)\int_{\mathbb{R}}z^2K(z)\,dz.$$ Thus $$\sum_{n\ge1}U_n\,\mathrm{Bias}(\hat f_n(x))^2\le\sum_{n\ge1}\frac{U_nh_n^4}{4}\left[f^{(2)}(x)\int_{\mathbb{R}}z^2K(z)\,dz\right]^2=C\sum_{n\ge1}U_nh_n^4<\infty.$$ The proof is completed.

Remark 3.2

If $K$ is a symmetric, compactly supported kernel, we obtain the complete second-order moment convergence of $\hat f_n$ under (H4).

The choices of Un and hn are not arbitrary because their expressions must be selected so that the convergence of the obtained series is ensured.

Example 3.1

By choosing $h_n=n^{-s}$ and $U_n=n^{\alpha}$, where $s\in\,]0,1[$ and $\alpha>1$, we obtain $$\sum_{n\ge1}U_n\,\mathrm{Var}[\hat f_n(x)]\le\sum_{n\ge1}\frac{M^2}{n^{1+\alpha-2s}}.$$ This Riemann series converges if $1+\alpha-2s>1$, thus if $\alpha>2s$. And $$\sum_{n\ge1}U_n\big(E[\hat f_n(x)]-f(x)\big)^2\le C\sum_{n\ge1}\frac{1}{n^{4s+\alpha}};$$ the right-hand side converges if $\alpha>1-4s$.

Consequently, combining the convergence conditions of the two right-hand-side series, one can obtain (1) for $\alpha>2$. Indeed, it can be checked that

  1. if $(1-4s)\in\,]{-3},0]$, then $2s\in\,]0.5,1]$;

  2. if $(1-4s)\in\,]0,1]$, then $2s\in\,]0,0.5]$;

  3. if $2s\in\,]1,2]$, then $(1-4s)\in\,]{-3},{-1}]$;

  4. if $2s=1-4s$, then $s=\frac{1}{6}$.

So, the condition α>2 is always verified.

Corollary 3.1

Under (H1)–(H4), we have $$\hat f_n-f=O_{a.c}\left(\sqrt{\frac{n^{\alpha}(\log n)^2}{nh_n}}\right),\quad\alpha>1.$$

Proof.

For the optimal bandwidth $h_n=n^{-\frac15}$ and the rate sequence $U_n=\frac{nh_n}{n^{\alpha}(\log n)^2}$, which satisfies $U_n<\frac{nh_n}{\log n}$, where $\sqrt{\frac{\log n}{nh_n}}$ is the rate of almost complete convergence of the kernel density estimator, one obtains $$\sum_{n\ge1}U_n\,\mathrm{Var}[\hat f_n(x)]\le\sum_{n\ge1}\frac{M^2}{n^{\alpha-\frac15}(\log n)^2}$$ and $$\sum_{n\ge1}U_n\big(\mathrm{Bias}[\hat f_n(x)]\big)^2\le C\sum_{n\ge1}\frac{1}{n^{\alpha}(\log n)^2}.$$ Combining the convergence conditions of the two series on the right-hand side, we obtain (1) for $\alpha>1$.

3.2. Kernel distribution function estimator

Let $X_1,X_2,\ldots,X_n$ be independent and identically distributed copies of a random variable $X$, which has unknown continuous probability density $f$ and distribution function $F$. The kernel distribution function estimator $\hat F_n$, proposed by Nadaraya [Citation18], can be obtained by integrating the kernel density estimator $\hat f_n$, as follows: $$\hat F_n(x)=\int_{-\infty}^{x}\hat f_n(t)\,dt=\frac{1}{n}\sum_{i=1}^{n}H\left(\frac{x-X_i}{h_n}\right),$$ where the function $H$ is defined from the kernel $K$ as $H(x)=\int_{-\infty}^{x}K(t)\,dt$. The function $H$ is a cumulative distribution function because $K$ is a probability density function.
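A direct sketch of $\hat F_n$, assuming a Gaussian kernel so that $H$ is the standard normal CDF (an illustrative choice of ours; any kernel meeting the stated hypotheses would do):

```python
import math
import numpy as np

def kernel_cdf_estimate(x, sample, h):
    """Nadaraya's estimator F_hat(x) = (1/n) * sum_i H((x - X_i) / h),
    where H(t) is the integral of the kernel K up to t. With a Gaussian
    kernel K, H is the standard normal CDF."""
    u = (x - np.asarray(sample)) / h
    H = np.array([0.5 * (1.0 + math.erf(v / math.sqrt(2.0))) for v in u])
    return float(H.mean())

rng = np.random.default_rng(3)
data = rng.normal(size=1000)
h = 1000 ** (-1 / 3)          # bandwidth of optimal order n^{-1/3} (see Example 3.2)
f_half = kernel_cdf_estimate(0.0, data, h)   # should be near F(0) = 0.5
```

Unlike the empirical distribution function, $\hat F_n$ is smooth in $x$, which is what the bias and variance expansions below exploit.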

Assume that the following hypotheses are satisfied:

(H5)

$\int_{-\infty}^{+\infty}K(x)H(x)\,dx<\infty$;

(H6)

$\int_{-\infty}^{+\infty}x^2K(x)H(x)\,dx<\infty$;

(H7)

$\int_{-\infty}^{+\infty}xK(x)H(x)\,dx\ge0$;

(H8)

$F\in\mathcal{C}^2$ and $f'$ is bounded.

Then Theorem 3.2 states the complete second-order moment convergence of $\hat F_n$ to $F$.

Theorem 3.2

Under (H1), (H3), (H5)–(H8), and supposing that $\sum_{n\ge1}U_nh_n^4$ or $\sum_{n\ge1}U_nh_n^2$ converges (see Remark 3.1), we have $$\hat F_n(x)-F(x)=O_{c.s.m}\left(\frac{1}{U_n}\right).\qquad(5)$$

Proof.

To prove (5), we use the same argument as for (1). First, for the bias, we have $$E[\hat F_n(x)]=E\left[\frac{1}{n}\sum_{i=1}^{n}H\left(\frac{x-X_i}{h_n}\right)\right]=\int_{\mathbb{R}}H\left(\frac{x-z}{h_n}\right)f(z)\,dz.$$ Using integration by parts, the substitution $\frac{x-z}{h_n}=y$, a Taylor expansion of the function $F$ about the point $x$ up to order 2, (H1), (H3) and (H8), we obtain $$\sum_{n\ge1}U_n\,\mathrm{Bias}(\hat F_n(x))^2\le C\sum_{n\ge1}U_nh_n^4<\infty.$$ Now for the variance, $$\mathrm{Var}[\hat F_n(x)]=\frac{1}{n^2}\,\mathrm{Var}\left[\sum_{i=1}^{n}H\left(\frac{x-X_i}{h_n}\right)\right]\le\frac{1}{n}\,E\left[H^2\left(\frac{x-X}{h_n}\right)\right].$$ Using (H1), (H5)–(H8), integration by parts and the substitution $\frac{x-z}{h_n}=y$, one has $$\mathrm{Var}[\hat F_n(x)]\le2h_n^2F(x)\int_{-\infty}^{+\infty}K(y)H(y)\,dy+h_n^4f(x)\int_{-\infty}^{+\infty}y^2K(y)H(y)\,dy\le C_1h_n^2+C_2h_n^4,$$ where $C_1$ and $C_2$ are two constants. Since $(h_n)$ converges to 0, for every $\epsilon>0$ there exists $n_0$ such that $h_n\le\epsilon$ for all $n\ge n_0$; then $$\sum_{n\ge1}U_n\,\mathrm{Var}[\hat F_n(x)]\le M\sum_{n\ge1}U_nh_n^2,$$ and the desired result is obtained.

Remark 3.3

When $K$ is symmetric with compact support $[-1,1]$, the properties of $H$ are given in Baszczynska [Citation20]; then Hypotheses (H5) and (H7) are satisfied. We can obtain the MSE of the kernel distribution estimator $\hat F_n$ under the assumptions used by Azzalini [Citation21] and a kernel satisfying the above assumptions: the squared bias is $$\mathrm{Bias}^2[\hat F_n(x)]=\frac{1}{4}h_n^4\,f'(x)^2\left(\int_{-1}^{1}y^2K(y)\,dy\right)^2+o(h_n^4),$$ and the variance is $$\mathrm{Var}[\hat F_n(x)]=\frac{1}{n}F(x)(1-F(x))-\frac{2h_n}{n}f(x)\int_{-1}^{1}yK(y)H(y)\,dy+o\left(\frac{h_n}{n}\right).$$ Then the MSE is given by $$\mathrm{MSE}[\hat F_n(x)]=\frac{1}{4}h_n^4\,f'(x)^2\left(\int_{-1}^{1}y^2K(y)\,dy\right)^2+\frac{1}{n}F(x)(1-F(x))-\frac{2h_n}{n}f(x)\int_{-1}^{1}yK(y)H(y)\,dy+o\left(h_n^4+\frac{h_n}{n}\right).$$ So under (H8) we obtain $\sum_{n\ge1}U_n\,\mathrm{MSE}(\hat F_n(x))<\infty$.

The next corollary gives a new rate of the almost complete convergence of $\hat F_n$ to $F$.

Corollary 3.2

Under (H1), (H3), (H5)–(H8), we have $$\hat F_n-F=O_{a.c}\left(\sqrt{\frac{\log n}{n^{2\alpha}h_n^8}}\right),\quad\alpha>1.$$

Proof.

Remarking that $\sqrt{\frac{\log n}{n^{2\alpha}h_n^8}}<\sqrt{\frac{\log n}{n}}$, where $\sqrt{\frac{\log n}{n}}$ is the rate of the almost complete convergence of $\hat F_n$ to $F$, we get $$\sum_{n\ge1}U_n\,E(\hat F_n(x)-F(x))^2\le\sum_{n\ge1}\frac{\log n}{n^{2\alpha}h_n^8}\,h_n^4\le\sum_{n\ge1}\frac{\log n}{n^{\alpha}}.$$ The last series converges for $\alpha>1$.

Example 3.2

For the optimal bandwidth $h_n=n^{-\frac13}$ and $U_n=n^{\alpha}$ where $\alpha>1$, one obtains $$\sum_{n\ge1}U_n\,\mathrm{Bias}(\hat F_n(x))^2\le C\sum_{n\ge1}\frac{n^{\frac43}}{n^{\alpha}}$$ and $$\sum_{n\ge1}U_n\,\mathrm{Var}[\hat F_n(x)]\le M\sum_{n\ge1}\frac{n^{\frac23}}{n^{\alpha}}.$$ The two right-hand-side series converge simultaneously if $\alpha>\frac53$.

3.3. Kernel quantile function estimator

Let $X_1,X_2,\ldots,X_n$ be independent and identically distributed copies of a random variable with absolutely continuous distribution function $F$, and denote by $X_{(1)}\le X_{(2)}\le\cdots\le X_{(n)}$ the corresponding order statistics. The quantile function $Q$ is defined as the left-continuous inverse of $F$, given by $$Q(p)=\inf\{x: F(x)\ge p\},\quad 0<p<1.$$ A kernel quantile estimator $\hat Q_n$, based on the Nadaraya [Citation18] kernel distribution function estimator $\hat F_n$, is defined as $$\hat Q_n(p)=\inf\{x:\hat F_n(x)\ge p\},\quad 0<p<1,$$ and is given by $$\hat Q_n(p)=\sum_{i=1}^{n}X_{(i)}\int_{\frac{i-1}{n}}^{\frac{i}{n}}\frac{1}{h_n}K\left(\frac{x-p}{h_n}\right)dx,$$ where $K$ is a density function and $h_n\to0$ as $n\to\infty$.
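The weighted-order-statistics form of $\hat Q_n$ can be sketched as follows (our illustration: with a Gaussian kernel, the cell integrals reduce to differences of normal CDF values, and we renormalize the small kernel mass falling outside $[0,1]$):

```python
import math
import numpy as np

def normal_cdf(t):
    return 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))

def kernel_quantile_estimate(p, sample, h):
    """Q_hat(p) = sum_i X_(i) * integral_{(i-1)/n}^{i/n} (1/h) K((x - p)/h) dx,
    computed with a Gaussian kernel K."""
    xs = np.sort(np.asarray(sample))
    n = xs.size
    grid = np.arange(n + 1) / n                         # endpoints 0, 1/n, ..., 1
    Phi = np.array([normal_cdf((g - p) / h) for g in grid])
    weights = np.diff(Phi)                              # kernel mass over each cell
    return float((weights * xs).sum() / weights.sum())  # renormalized weighted mean

rng = np.random.default_rng(4)
data = rng.normal(size=2000)
q50 = kernel_quantile_estimate(0.50, data, h=0.05)   # near Q(0.5) = 0
q75 = kernel_quantile_estimate(0.75, data, h=0.05)   # near Q(0.75), about 0.674
```

The estimator is a smoothed weighted average of order statistics around the target quantile, which is what the Sheather–Marron bias and variance expansions describe.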

Our convergence result is based on the expression of the MSE of the kernel quantile estimator given by Sheather and Marron [Citation22] (Theorem 1, p. 5), valid when $p$ lies in the interior of $(0,1)$, under the conditions that the kernel $K$ is symmetric about 0 with compact support and that $Q^{(2)}$ is continuous in a neighbourhood of $p$.

Theorem 3.3

Supposing that $\sum_{n\ge1}U_nh_n^4$ or $\sum_{n\ge1}U_nh_n^2$ converges (see Remark 3.1), we have $$\hat Q_n(p)-Q(p)=O_{c.s.m}\left(\frac{1}{U_n}\right).\qquad(6)$$

Proof.

Building on Falk [Citation23] and David [Citation24], Sheather and Marron [Citation22] give the expressions of the bias and variance of $\hat Q_n$ as $$\mathrm{Bias}[\hat Q_n(p)]=\frac{1}{2}h_n^2\,Q^{(2)}(p)\int_{-\infty}^{+\infty}x^2K(x)\,dx+o(h_n^2)+o(n^{-1})$$ and $$\mathrm{Var}[\hat Q_n(p)]=\frac{1}{n}p(1-p)\big(Q'(p)\big)^2-\frac{h_n}{n}\big(Q'(p)\big)^2\int_{-\infty}^{+\infty}xK(x)H(x)\,dx+o\left(\frac{h_n}{n}\right).$$ So $$\sum_{n\ge1}U_n\,\mathrm{Bias}(\hat Q_n(p))^2\le C\sum_{n\ge1}U_nh_n^4<\infty$$ and $$\sum_{n\ge1}U_n\,\mathrm{Var}[\hat Q_n(p)]\le M\sum_{n\ge1}\frac{U_n}{n}\le M\sum_{n\ge1}U_nh_n^2<\infty.$$ Finally, we have obtained the almost complete convergence and the complete second-order moment convergence of the kernel quantile estimator $\hat Q_n$.

Example 3.3

In the same way as in Example 3.2, and using the same $h_n$ and $U_n$, we obtain $\hat Q_n(p)-Q(p)=O_{c.s.m}\left(\frac{1}{U_n}\right)$.

4. Simulation study

In this section, to assess the performance of the new rate of convergence for a finite sample size, we carry out a simulation study. We give a visual impression of the quality of convergence by computing the corresponding MSE together with the value of the rate of complete second-order moment convergence and the value of the rate of almost complete convergence, based on samples obtained from two theoretical models: Gamma kernel density estimates and the more recent Laplace kernel density estimator developed by Khan and Akbar [Citation25], inspired by Chen's idea [Citation26], and defined by $$K_{\mathrm{Laplace}}(x,h^{1/2})(u)=\frac{1}{2h}\exp\left(-\frac{|u-x|}{h}\right).$$ In the second part, Normal and Epanechnikov kernel distribution estimators are used with the optimal bandwidth and different sizes of normal and exponential samples to assess the performance of the new rate of convergence. Finally, with the Normal model, we compute the quantile (25%, 50%, 75%) estimates for the same sample sizes.
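A sketch of the Laplace kernel density estimate used in the simulations (our reading of the formula above: the estimator averages Laplace kernels centred at the observations):

```python
import numpy as np

def laplace_kernel(u, x, h):
    """K_Laplace(x, h)(u) = exp(-|u - x| / h) / (2 h)."""
    return np.exp(-np.abs(u - x) / h) / (2.0 * h)

def laplace_kde(x, sample, h):
    # density estimate at x: average of the Laplace kernels over the sample
    return float(laplace_kernel(np.asarray(sample), x, h).mean())

rng = np.random.default_rng(5)
data = rng.exponential(size=2000)      # Exp(1) sample, as in the simulation study
est = laplace_kde(1.0, data, h=0.1)    # true Exp(1) density at 1 is e^{-1}, about 0.368
```

The bandwidth value `h=0.1` here is an arbitrary illustrative choice; the study itself uses the optimal order $h=n^{-1/5}$.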

4.1. Kernel density MSE

We propose two schemes of kernel estimation: Laplace and Gamma kernel density estimates with optimal bandwidth $h=n^{-1/5}$. We conduct simulations of sample data from Exponential and Gamma densities with sample sizes n = 100, 150, 200, 250, 300, 500, 800 and 1000. We summarize the numerical calculations in Table 1.

Table 1. Mean simulated values and rate of convergences for density.
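The tabulated MSE values can be re-created along the following lines (a small-scale sketch of our own, using a Gaussian rather than a Laplace or Gamma kernel, with an Exp(1) sample):

```python
import numpy as np

def kde(x, sample, h):
    # Gaussian-kernel density estimate, vectorized over the evaluation points x
    u = (x[:, None] - sample[None, :]) / h
    return (np.exp(-0.5 * u**2) / np.sqrt(2.0 * np.pi)).mean(axis=1) / h

rng = np.random.default_rng(1)
n = 500
sample = rng.exponential(size=n)
h = n ** (-1 / 5)                       # optimal bandwidth, as in the text
grid = np.linspace(0.5, 4.0, 50)        # grid kept away from the boundary at 0
true_density = np.exp(-grid)            # Exp(1) density on the grid
mse = float(np.mean((kde(grid, sample, h) - true_density) ** 2))
```

Repeating this over many replications and sample sizes, and placing the averaged MSE next to the CSM and AC rate values, reproduces the structure of Tables 1–3.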

The CSM convergence rate is more efficient in terms of speed towards zero. Even though the AC rate is nearly efficient, we observe the good behaviour of the new rate for both kernel models.

According to the results reported in Table 1, we remark that the values of the CSMC rate of the kernel density estimator for the Laplace kernel (resp. Gamma kernel) are closer to the Exponential MSE (resp. Gamma MSE) values than those of the ACC rate.

4.2. Kernel distribution MSE

In this case, the Normal and Epanechnikov kernel distribution estimators, with optimal bandwidth $h=n^{-1/3}$, are compared on a Normal sample. The sample size varies between 100 and 1000. We summarize the numerical calculations in Table 2.

Table 2. Mean simulated values and rate of convergences for distribution.

Here too, we notice the fast convergence speed of the CSM rate, as well as its similarity to the values of the MSE for both distributions. We remark that the CSM rate gives good results for the Normal kernel.

From the results given in Table 2, we notice that, for both the Normal and Epanechnikov kernel distribution estimators, the CSMC rate values are closer to the Normal MSE and Exponential MSE values, respectively, than those of the ACC rate.

4.3. Kernel quantile MSE

Now we calculate the MSE of the normal kernel quantile estimator. The sample size varies between 100 and 1000. We summarize the numerical calculations in Table 3.

Table 3. Mean simulated values and rate of convergence for quantiles.

We also remark that the CSMC rate values are closer to the MSE values of the kernel quantile estimators than the ACC rate values.

Finally, we conclude that, in all cases, the CSMC rate of the kernel density, distribution and quantile estimators gives better results than the ACC rate of the same estimators.

5. Real data analysis

Female infertility and BMI: This study aims to investigate the body mass index (BMI) of infertile women of childbearing age. We use data from 200 participants from the Ben Badis University Hospital Centre of Constantine, Algeria, in 2018.

The first step is to test the conformity of the sample to a normal distribution. We obtain a test statistic D = 0.079226 with a p-value of 0.1623, so normality is not rejected; this justifies using the density and the distribution of the normal law. The second step is to estimate the density and the distribution functions and to represent them in a graph (Figure 1). The MSE between the real density and a normal kernel density estimate is equal to 0.003128583, the rate of CSM convergence is 0.0008729775 and the rate of AC convergence is 0.06541179. For the distribution, we obtain MSE = 0.003128583, a CSM convergence rate of 0.0047592 and an AC convergence rate of 0.1627624. For the same rates of convergence, the quantile MSEs are given by (25%, 0.0260281), (50%, 0.04186125), (75%, 0.058767206). Note that the results on the real data support those of the simulation.

Figure 1. Density and distribution real data with density and distribution kernel estimate.


6. Conclusion

The present work proposed a new method to obtain a convergence rate based on the MSE that is much more efficient in the CSMC case. Indeed, the CSM convergence rate gives better results than that of AC convergence. The previous results indicate that the choice of this type of convergence with kernel estimation methods is a good alternative to almost complete convergence. We can apply this type of convergence to any estimation problem that requires the study of the MSE, such as, for example, neural networks, the least squares method, …, and, following the suggestion, it can be applied in neutrosophic statistics as developed in Smarandache [Citation27] and Afzal et al. [Citation28]. Moreover, this type of convergence could perhaps be applied to extend the theorem of the law of large numbers.

Disclosure statement

No potential conflict of interest was reported by the authors.

Correction Statement

This article has been republished with minor changes. These changes do not impact the academic content of the article.

References

  • Chow D. On the rate of moment convergence of sample functions of random variables with bounded support. Bull Inst Math Acad Sin. 1988;16:177–201.
  • Liang H, Li D, Rosalsky A. Complete moment and integral convergence for sums of negatively associated random variables. Acta Math Sin (Engl Ser). 2010;26(3):419–432.
  • Qiu D, Chen P. Complete moment convergence for i.i.d. random variables. Statist Probab Lett. 2014;91:76–82.
  • Hsu P, Robbins H. Complete convergence and the law of large numbers. Proc Natl Acad Sci USA. 1947;33(2):25–31.
  • Gut A, Stadtmüller U. An intermediate Baum–Katz theorem. Statist Probab Lett. 2011;81(10):1486–1492.
  • Gut A. Marcinkiewicz laws and convergence rates in the law of large numbers for random variables with multidimensional indices. Ann Probab. 1978;6:469–482.
  • Gut A. Convergence rates for probabilities of moderate deviations for sums of random variables with multidimensional indices. Ann Probab. 1980;8(2):298–313.
  • Li D, Rao MB, Jiang T, et al. Complete convergence and almost sure convergence of weighted sums of random variables. J Theoret Probab. 1995;8(1):49–76.
  • Sung SH. Complete convergence for weighted sums of random variables. Statist Probab Lett. 2007;77(3):303–311.
  • Sung SH, Volodin A. On the rate of complete convergence for weighted sums of arrays of random elements. J Korean Math Soc. 2006;43(4):815–828.
  • Rosenblatt M. Remarks on some nonparametric estimates of a density function. Ann Math Statist. 1956;27(3):832–837.
  • Parzen E. On estimation of a probability density function and mode. Ann Math Stat. 1962;33(3):1065–1076.
  • Habbema JDF, Hermans J, Van den Broek K. A stepwise discriminant analysis program using density estimation. In: Bruckmann G, editor. Compstat 1974, Proceedings in Computational Statistics. Vienna: Physica Verlag; 1974. p. 101–110.
  • Hall P, Kang K-H. Bandwidth choice for nonparametric classification. Ann Stat. 2005;33:284–306.
  • Hall P, Wand MP. On nonparametric discrimination using density differences. Biometrika. 1988;75(3):541–547.
  • Ghosh AK, Chaudhuri P. Optimal smoothing in kernel discriminant analysis. Stat Sin. 2004;14:457–483.
  • Ghosh AK, Hall P. On error-rate estimation in nonparametric classification. Stat Sin. 2008;18:1081–1100.
  • Nadaraya EA. Some new estimates for distribution functions. Theory Probab Appl. 1964;9:497–500.
  • Parzen E. Nonparametric statistical data modeling. J Amer Stat Assoc. 1979;74(365):105–131.
  • Baszczynska A. Kernel estimation of cumulative distribution function of a random variable with bounded support. Statist Trans. 2016;17:541–556.
  • Azzalini A. A note on the estimation of a distribution function and quantiles by a kernel method. Biometrika. 1981;68(1):326–328.
  • Sheather SJ, Marron JS. Kernel quantile estimators. J Amer Statist Assoc. 1990;85(410):410–416.
  • Falk M. Relative deficiency of Kernel type estimators of quantiles. Ann Statist. 1984;12:261–268.
  • David HA. Order statistics. 2nd ed. New York: John Wiley; 1981.
  • Khan JA, Akbar A. Density estimation by Laplace kernel. Working paper. Department of Statistics, Bahauddin Zakariya, Multan, Pakistan. 2021.
  • Chen SX. Probability density function estimation using gamma kernels. Ann Inst Statist Math. 2000;52(3):471–480.
  • Smarandache F. Neutrosophic statistics vs. classical statistics, section in Nidus Idearum/superluminal physics. Vol. 7, 3rd ed. 2019. p. 117.
  • Afzal U, Alrweili H, Ahmad N, et al. Neutrosophic statistical analysis of resistance depending on the temperature variance of conducting material. Sci Rep. 2021;11(1):Article ID 23939.
  • Yu Y, et al. On the complete convergence for uncertain random variables. Soft Comput. 2022;26(3):1025–1031.