Original Articles

I Got More Data, My Model is More Refined, but My Estimator is Getting Worse! Am I Just Dumb?

Pages 218-250 | Published online: 25 Sep 2013

Abstract

Possibly, but more likely you are merely a victim of conventional wisdom. More data or better models by no means guarantee better estimators (e.g., with a smaller mean squared error), when you are not following probabilistically principled methods such as MLE (for large samples) or Bayesian approaches. Estimating equations are particularly vulnerable in this regard, almost a necessary price for their robustness. These points will be demonstrated via common tasks of estimating regression parameters and correlations, under simple models such as bivariate normal and ARCH(1). Some general strategies for detecting and avoiding such pitfalls are suggested, including checking for self-efficiency (Meng, 1994, Statistical Science) and adopting a guiding working model.
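The self-efficiency check can be probed numerically: a procedure is self-efficient if its full-data estimate cannot be improved, in mean squared error, by any linear combination with its estimate from a subset of the data. The sketch below is entirely our illustration rather than code from the paper (the estimator, sample sizes, and λ grid are our own choices); it runs the check for the sample mean under i.i.d. normal data, where self-efficiency is known to hold.

```python
import numpy as np

# Monte Carlo check of self-efficiency (Meng, 1994): a procedure is
# self-efficient if no linear combination
#     lam * est(full data) + (1 - lam) * est(subset)
# beats est(full data) in mean squared error. Here est is the sample
# mean and the data are i.i.d. N(0, 1); sample sizes are illustrative.
rng = np.random.default_rng(0)
n_full, n_sub, n_sim, true_mean = 100, 50, 20_000, 0.0

x = rng.normal(true_mean, 1.0, size=(n_sim, n_full))
est_full = x.mean(axis=1)            # estimator applied to all n_full points
est_sub = x[:, :n_sub].mean(axis=1)  # same estimator applied to a subset

for lam in np.linspace(0.0, 1.5, 7):
    combo = lam * est_full + (1 - lam) * est_sub
    print(f"lambda = {lam:4.2f}   MSE = {np.mean((combo - true_mean)**2):.5f}")
# For a self-efficient procedure, the minimum MSE occurs at lambda = 1:
# the full-data estimate cannot be improved by mixing in a partial one.
```

For a procedure that fails self-efficiency, the minimizing λ would differ from 1, signaling that "more data" can make the estimator worse.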

Using the example of estimating the autocorrelation ρ under a stationary AR(1) model, we also demonstrate the interaction between model assumptions and observation structures in seeking additional information, as the sampling interval s increases. Furthermore, for a given sample size, the optimal s for minimizing the asymptotic variance of the resulting estimator of ρ is s = 1 if and only if ρ² ≤ 1/3; beyond that region the optimal s increases at the rate of log⁻¹(ρ⁻²) as ρ approaches a unit root, as does the gain in efficiency relative to using s = 1. A practical implication of this result is that the so-called "non-informative" Jeffreys prior can be far from non-informative even for stationary time series models, because here it converges rapidly to a point mass at a unit root as s increases. Our overall emphasis is that intuition and conventional wisdom need to be examined via critical thinking and theoretical verification before they can be trusted fully.
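The threshold ρ² = 1/3 can be checked with a small simulation. The sketch below is our illustration under assumptions of our own choosing (the plug-in estimator, sample sizes, and ρ values are not from the paper): observations taken every s steps from a stationary AR(1) again form an AR(1) with parameter ρˢ, so ρ can be recovered from the subsampled lag-1 autocorrelation; holding the number of retained observations fixed, s = 1 should win only when ρ² ≤ 1/3.

```python
import numpy as np

# Illustrative sketch (assumptions and code ours, not the paper's):
# estimate rho of a stationary AR(1) from n observations taken every
# s time steps. The subsampled series is AR(1) with parameter rho**s,
# so rho can be recovered as (lag-1 autocorrelation)**(1/s).
rng = np.random.default_rng(1)

def simulate_ar1(rho, length):
    x = np.empty(length)
    x[0] = rng.normal(0.0, 1.0 / np.sqrt(1.0 - rho**2))  # stationary start
    eps = rng.normal(0.0, 1.0, length)
    for t in range(1, length):
        x[t] = rho * x[t - 1] + eps[t]
    return x

def rho_hat(x, s):
    y = x[::s] - x[::s].mean()                   # keep every s-th observation
    r_s = np.dot(y[:-1], y[1:]) / np.dot(y, y)   # lag-1 autocorrelation
    return np.sign(r_s) * np.abs(r_s) ** (1.0 / s)

n_obs, n_sim = 200, 500              # n_obs retained points for every s
for rho in (0.4, 0.95):              # rho**2 below vs. above 1/3
    for s in (1, 2, 4, 8):
        est = [rho_hat(simulate_ar1(rho, n_obs * s), s) for _ in range(n_sim)]
        print(f"rho = {rho:4.2f}, s = {s}: sd(rho_hat) = {np.std(est):.4f}")
# Expect the smallest sd at s = 1 when rho = 0.4 (rho**2 < 1/3) but at
# some s > 1 when rho = 0.95, consistent with the threshold rho**2 = 1/3.
```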


ACKNOWLEDGMENTS

We thank Editor Ehsan Soofi for the invitation (and for his extraordinary patience) to contribute to this special volume in honor of Professor Arnold Zellner, who was a colleague and friend of one of us (Meng) during his Chicago years (1991–2001). We also thank many colleagues, especially Joseph Blitzstein, Ngai Hang Chan, and Ehsan Soofi for very helpful exchanges and conversations, Alex Blocker, Steven Finch, and Nathan Stein for proofreading and constructive comments, and the National Science Foundation for partial financial support.

Notes

See Prof. Arnold Zellner's CV at http://faculty.chicagobooth.edu/arnold.zellner/more/vita.pdf

The term “data augmentation” (Tanner and Wong, 1987) is also well known in the EM and MCMC literature, where it refers to creating artificial (missing) data for the purpose of constructing useful statistical algorithms. The connection with the discussion here is that the computational efficiencies of these algorithms are (almost) exactly determined by the amount of augmented Fisher information; see van Dyk and Meng (2001, 2010) for an overview and some detailed investigations.
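As a concrete instance of that determination, consider the simplest data-augmentation algorithm: EM for a normal mean with some values missing. The toy sketch below is our own (not from the cited papers); its convergence rate equals the fraction of missing information, i.e., the ratio of missing to augmented Fisher information.

```python
import numpy as np

# Toy illustration (our sketch): EM for the mean of N(mu, 1) when m of
# n values are missing. The E-step imputes each missing value by the
# current mu; the M-step averages all n (observed + imputed) values.
rng = np.random.default_rng(2)
n, m = 10, 4
x_obs = rng.normal(3.0, 1.0, n - m)  # the n - m observed values
mle = x_obs.mean()                   # observed-data MLE (the fixed point)

mu = 0.0
for t in range(6):
    mu_new = (x_obs.sum() + m * mu) / n       # E-step and M-step combined
    print(f"iter {t}: error ratio = {(mu_new - mle) / (mu - mle):.3f}")
    mu = mu_new
# Every ratio equals m/n = 0.4: EM contracts at exactly the fraction of
# missing information, so the algorithm's speed is set by the augmented
# Fisher information, as the note describes.
```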
