Research Article

Improving econometric prediction by machine learning

Pages 1419-1425 | Published online: 14 Sep 2020
 

ABSTRACT

We present a Machine Learning (ML) toolbox to predict targeted econometric outcomes, improving prediction in two directions: (i) cross-validated optimal tuning, and (ii) comparing and combining results from different learners (meta-learning). In predicting a woman's wage class from her characteristics, we show that all our ML methods substantially outperform standard multinomial logit predictions, both in mean accuracy and in its standard deviation. In particular, we find that a regularized multinomial regression attains an average prediction accuracy almost 60% larger than that of an unregularized one. Finally, as different learners may behave differently, we show that combining them into one ensemble learner preserves good predictive accuracy while lowering the variance more than the stand-alone approaches.
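The tuning exercise described above can be illustrated with a minimal sketch (not the authors' toolbox): a plain multinomial logit is compared with an L2-regularized one whose penalty strength is chosen by cross-validation, each scored by cross-validated accuracy. The dataset below is a synthetic placeholder standing in for the woman-wage-class sample, and scikit-learn is assumed to be available.

# Minimal sketch (not the authors' code): unregularized vs. CV-tuned regularized
# multinomial logit, compared on cross-validated accuracy (mean and sd).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV, cross_val_score

# Placeholder data standing in for the woman-wage-class sample.
X, y = make_classification(n_samples=2000, n_features=20, n_informative=8,
                           n_classes=3, n_clusters_per_class=1, random_state=0)

# Unregularized multinomial logit (use penalty="none" on scikit-learn < 1.2).
plain = LogisticRegression(penalty=None, max_iter=5000)

# Regularized multinomial logit with the penalty strength tuned by 5-fold CV.
tuned = GridSearchCV(LogisticRegression(penalty="l2", max_iter=5000),
                     param_grid={"C": np.logspace(-3, 2, 10)}, cv=5)

for name, model in [("plain logit", plain), ("regularized logit", tuned)]:
    acc = cross_val_score(model, X, y, cv=10, scoring="accuracy")
    print(f"{name}: mean accuracy {acc.mean():.3f}, sd {acc.std():.3f}")

Scoring the GridSearchCV object with cross_val_score yields a nested cross-validation, so the tuning and the accuracy assessment use separate folds.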

JEL CLASSIFICATION:

Disclosure statement

No potential conflict of interest was reported by the author.

Notes

1 The use of, say, a tree-based propensity score for estimating the selection equation opens up the problem of correctly estimating the standard error of the average treatment effect in the second-step (outcome) equation. What is the asymptotic distribution of the average treatment effect estimator when the first-step propensity score is estimated via a highly non-parametric procedure? This is still an open question that could foster a new stream of research in causal inference. A possible empirical solution could be the use of the bootstrap, although one should prove that the bootstrap is valid in this context.
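As an illustration of the bootstrap route mentioned in this note, the following hedged sketch (not part of the paper's code) re-estimates a random-forest propensity score and an inverse-probability-weighted average treatment effect on each resample; the arrays X, D, Y and the forest settings are illustrative assumptions.

# Hedged sketch: bootstrap the two-step estimator (tree-based propensity score
# plus IPW average treatment effect) to gauge its sampling variability.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def ipw_ate(X, D, Y):
    """IPW average treatment effect with a random-forest propensity score."""
    ps = RandomForestClassifier(n_estimators=200, min_samples_leaf=20,
                                random_state=0).fit(X, D).predict_proba(X)[:, 1]
    ps = np.clip(ps, 0.01, 0.99)                  # trim extreme scores
    return np.mean(D * Y / ps - (1 - D) * Y / (1 - ps))

def bootstrap_se(X, D, Y, n_boot=200, seed=0):
    """Bootstrap standard error: re-fit both steps on each resample."""
    rng = np.random.default_rng(seed)
    n = len(Y)
    draws = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)               # resample observations
        draws.append(ipw_ate(X[idx], D[idx], Y[idx]))
    return np.std(draws, ddof=1)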

2 The main reference on the statistics of meta-learning can be found in Van der Laan and Rose (2011).

3 A challenging stream of research aims at understanding the relationship between data structure and ML models' prediction ability. We know so far that when data present a strong inner ordering, some methods tend to outperform others. For instance, for image recognition purposes, deep neural networks are surprisingly accurate compared to other classification algorithms. This has to do with the inner ordering of images, such as human faces. In particular, convolutional neural networks are highly suited for this task.

4 All algorithms and graphics were implemented in Python 3.7, using the Stata/Python integrated interface available in Stata 16. All codes are available on request.

5 We assume that the observed learner-specific accuracies $\hat{\theta}_j$, $j=1,\dots,M$, represent a random sample from a population that is normally distributed with mean $\theta$ and variance $\tau^2$. The weights are obtained from a random-effects model, $\hat{\theta}_j = \theta_j + \epsilon_j = \theta + u_j + \epsilon_j$, where $\epsilon_j$ and $u_j$ are assumed to be independent with $\epsilon_j \sim N(0, \hat{\sigma}_j^2)$ and $u_j \sim N(0, \tau^2)$. The weights are thus calculated as $\hat{w}_j = 1/(\hat{\sigma}_j^2 + \hat{\tau}^2)$, with $\hat{\sigma}_j^2$ obtained by cross-validation and $\hat{\tau}^2$ by random-effects maximum likelihood.
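A small illustrative implementation of this weighting scheme (not the authors' Stata/Python code) is sketched below: given cross-validated accuracies and their variances, $\tau^2$ is estimated by profiled maximum likelihood and the weights $1/(\hat{\sigma}_j^2 + \hat{\tau}^2)$ are normalized to sum to one. The numerical inputs are hypothetical.

# Illustrative sketch of the note's random-effects weighting (assumed inputs):
# learner accuracies theta_hat with CV variances sigma2 are pooled, tau^2 is
# estimated by (profiled) maximum likelihood, and weights are 1/(sigma2 + tau2).
import numpy as np
from scipy.optimize import minimize_scalar

def random_effects_weights(theta_hat, sigma2):
    theta_hat, sigma2 = np.asarray(theta_hat), np.asarray(sigma2)

    def neg_loglik(tau2):
        v = sigma2 + tau2                                  # total variance
        theta = np.sum(theta_hat / v) / np.sum(1.0 / v)    # profiled mean
        return 0.5 * np.sum(np.log(v) + (theta_hat - theta) ** 2 / v)

    tau2_hat = minimize_scalar(neg_loglik, bounds=(0.0, 1.0),
                               method="bounded").x
    w = 1.0 / (sigma2 + tau2_hat)
    return w / w.sum(), tau2_hat                           # normalized weights

# Example with hypothetical CV accuracies and variances for three learners.
weights, tau2 = random_effects_weights([0.82, 0.78, 0.80],
                                       [0.001, 0.002, 0.0015])
print(weights, tau2)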
