A Class of Improved Parametrically Guided Nonparametric Regression Estimators

Pages 542-573 | Received 15 Nov 2005, Accepted 11 Dec 2006, Published online: 22 May 2008

Abstract

In this article we define a class of estimators for a nonparametric regression model with the aim of reducing bias. The estimators in the class are obtained via a simple two-stage procedure. In the first stage, a potentially misspecified parametric model is estimated, and in the second stage the parametric estimate is used to guide the derivation of a final semiparametric estimator. Mathematically, the proposed estimators can be thought of as minimizers of a suitably defined Cressie–Read discrepancy, which can be shown to produce conventional nonparametric estimators, such as the local polynomial estimator, as well as existing two-stage multiplicative estimators, such as that proposed by Glad (1998). We show that under fairly mild conditions the estimators in the proposed class are asymptotically normal, and we explore their finite-sample behavior in simulations.
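To fix ideas, the following minimal sketch implements the two special cases of the two-stage procedure discussed in the notes below: the additively corrected estimator (α = 0) and the multiplicative estimator of Glad (1998) (α = 1). It is illustrative only, not the authors' code: the helper names nw_smooth and guided_estimate, the Gaussian kernel, the local constant smoother, and the least-squares linear guide are all our assumptions, and the paper's general class requires the Cressie–Read objective.

```python
import numpy as np

def nw_smooth(x_grid, x, y, h):
    """Nadaraya-Watson (local constant) smoother with a Gaussian kernel."""
    # Kernel weights: rows index evaluation points, columns index observations.
    w = np.exp(-0.5 * ((x_grid[:, None] - x[None, :]) / h) ** 2)
    return (w * y).sum(axis=1) / w.sum(axis=1)

def guided_estimate(x_grid, x, y, h, alpha):
    """Two-stage parametrically guided estimate; special cases alpha = 0
    (additive correction) and alpha = 1 (Glad, 1998). Illustrative sketch."""
    # Stage 1: fit a (possibly misspecified) linear guide by least squares.
    beta = np.polyfit(x, y, deg=1)
    guide_x, guide_grid = np.polyval(beta, x), np.polyval(beta, x_grid)
    if alpha == 0:
        # Stage 2a: smooth the residuals and add them back to the guide.
        return guide_grid + nw_smooth(x_grid, x, y - guide_x, h)
    if alpha == 1:
        # Stage 2b: smooth the ratios and rescale by the guide; requires a
        # guide bounded away from zero (as in Glad's setting).
        return guide_grid * nw_smooth(x_grid, x, y / guide_x, h)
    raise NotImplementedError("general alpha needs the Cressie-Read objective")
```

For example, with x uniform on [−1, 1] and y = 2 + sin(3x) + noise, guided_estimate(np.linspace(-1, 1, 50), x, y, h=0.2, alpha=1) returns the multiplicatively corrected fit on the grid.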


ACKNOWLEDGMENTS

We thank participants of the Second Conference on Information and Entropy Econometrics, Amos Golan, Essie Maasoumi, Peter Phillips, Jeff Racine, and an anonymous referee for helpful comments. The authors retain responsibility for any remaining errors.

Notes

1Glad (1998) established the order of the bias and variance of her estimator but provided no result on its asymptotic distribution.

2See Cressie and Read (1984) and Read and Cressie (1988).

3If φ is such that E[φ(Z_i(x), b_0) | x_i] = 0 for a unique b_0, with φ = (Z_i(x) − b_0) K_{h_n}(x_i − x), then setting the sample analogue of this moment condition to zero, the maximization in (9) gives b̂ = Σ_i Z_i(x) K_{h_n}(x_i − x) / Σ_i K_{h_n}(x_i − x), provided that Σ_i K_{h_n}(x_i − x) ≠ 0.

4Mutatis mutandis, local polynomial estimators of order p ≥ 2 can also be obtained from the optimization in (12).

5Since the parametric guide is linear, the LL and additively corrected estimators coincide.
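This coincidence follows because the local linear (LL) smoother reproduces linear functions exactly, so adding an LL-smoothed residual back to a linear OLS guide returns the direct LL fit. A quick numerical check, under the same illustrative assumptions as the sketch above and with a hypothetical local_linear helper:

```python
import numpy as np

def local_linear(x0, x, y, h):
    """Local linear estimate of m(x0) via kernel-weighted least squares."""
    w = np.exp(-0.5 * ((x - x0) / h) ** 2)          # Gaussian kernel weights
    X = np.column_stack([np.ones_like(x), x - x0])  # local design centered at x0
    beta = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * y))
    return beta[0]                                   # intercept = fit at x0

rng = np.random.default_rng(1)
x = rng.uniform(-1, 1, 200)
y = 2 + np.sin(3 * x) + 0.3 * rng.standard_normal(200)

b = np.polyfit(x, y, deg=1)  # linear OLS guide
for x0 in (-0.5, 0.0, 0.7):
    ll = local_linear(x0, x, y, h=0.2)
    additive = np.polyval(b, x0) + local_linear(x0, x, y - np.polyval(b, x), h=0.2)
    assert np.isclose(ll, additive)  # LL and additively corrected fits coincide
```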

6Here θ_0 is calculated by minimizing the Kullback–Leibler discrepancy or, equivalently, by maximizing the likelihood function. For α = 0 and α = 1 the bias term does not involve θ_0.

7The values of α are chosen so as to include both positive and negative values, together with the special cases α = 0 and α = 1, which correspond to the additively corrected estimator and the estimator of Glad (1998), respectively.

8Suitable estimators are given in the comments following Theorem 2.

Note: In the simulation tables, all entries for bias squared, variance, and mean square error are multiplied by 10^4.

9As we do not prove strict convexity of the MISE with respect to α, there may be several values of α that minimize the MISE.
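To illustrate how such a minimizer might be located in practice, here is a hedged Monte Carlo sketch that estimates the MISE over candidate values of α. Only the two special cases implemented in the first sketch are scanned; guided_estimate is the hypothetical helper defined there, and the data-generating process is our own illustrative choice, not the authors' simulation design.

```python
import numpy as np

def mc_mise(alpha, n=100, reps=200, h=0.2, seed=0):
    """Monte Carlo approximation of the MISE of the guided estimator."""
    rng = np.random.default_rng(seed)
    grid = np.linspace(-1, 1, 50)
    truth = 2 + np.sin(3 * grid)          # illustrative regression function
    total = 0.0
    for _ in range(reps):
        x = rng.uniform(-1, 1, n)
        y = 2 + np.sin(3 * x) + 0.3 * rng.standard_normal(n)
        fit = guided_estimate(grid, x, y, h, alpha)  # from the first sketch
        total += np.mean((fit - truth) ** 2)
    return total / reps

# Scan the candidate values; several alphas may tie, as the note cautions.
best_alpha = min((0, 1), key=mc_mise)
```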

10Under the unrealistic assumption of a correctly specified parametric DGP, a suitable parametric estimator (possibly unbiased and efficient in an appropriately defined class) can be chosen, and the bias-variance tradeoff intrinsic to all nonparametric estimators considered herein can be bypassed.

11When the parametric guide is equal to m(x), the left-hand side of Eq. (15) is identically zero for all x.

12In fact, optimal values of α do not vary significantly with n for n ≥ 100 in our simulations.
