
The fragility of results and bias in empirical research: an exploratory exposition

Pages 347-360 | Received 05 Jun 2018, Accepted 01 Dec 2018, Published online: 15 Dec 2018
ABSTRACT

The results of empirical work in economics and finance are typically sensitive, inter alia, to model specification, sample period, variable definitions and estimation methods. If the underlying issue attracts bias of some sort, the reporting of results tends to be selective, intended either to confirm prior beliefs, which may be ideologically driven, or to make the paper more publishable. In this paper the fragility (variability and sensitivity) of empirical results is demonstrated, and the process whereby it enables research selection bias is described. For this purpose, several research areas are considered, including the determinants of economic growth, the effect of gender on risk aversion, the determinants of capital structure, the J-curve effect, the effect of corruption on foreign direct investment, and the Kuznets curve.

Acknowledgement

I am grateful to two anonymous referees for two rounds of detailed and useful comments.

Disclosure statement

No potential conflict of interest was reported by the author.

Notes on contributor

Imad A. Moosa is currently a Professor of Finance at RMIT, Melbourne. Before taking up his present position he was a Professor of Finance at Monash University and La Trobe University, and a Lecturer in Economics and Finance at the University of Sheffield. Prior to becoming an academic in 1991, he was a professional economist and financial journalist for over ten years, and he also worked as an economist in the Financial Institutions Division of the Bureau of Statistics at the International Monetary Fund (Washington, DC). He has published 24 books and over 200 papers in scholarly journals. His most recent books are Econometrics as a Con Art: Exposing the Limitations and Abuses of Econometrics and Publish or Perish: Perceived Benefits versus Unintended Consequences.

Notes

1 The Leamer critique has been debated in the literature. Those arguing against it or casting doubt on its validity include Angrist and Pischke (Citation2010) and McAleer, Pagan, and Volker (Citation1985). Supportive arguments and evidence in response to McAleer et al. (Citation1985) are presented by Cooley and LeRoy (Citation1986). For details, see Moosa (Citation2017).

2 The ‘p-curve’ describes the density of reported p-values in a strand of literature. The underlying proposition is that if the null hypothesis were true (no effect), p-values should be distributed uniformly between 0 and 1. For more on p-hacking see Gelman and Loken (Citation2013) and Simonsohn, Simmons, and Nelson (Citation2015).
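The uniformity claim underlying the p-curve can be checked with a minimal simulation (not part of the original paper; the setup below, a two-sided test with a standard normal statistic, is an illustrative assumption): when the null is true, the share of p-values below any threshold t should be roughly t.

```python
import math
import random

random.seed(42)

def p_value_two_sided(z: float) -> float:
    """Two-sided p-value for a standard normal test statistic z."""
    # Phi(x) = 0.5 * (1 + erf(x / sqrt(2))) is the standard normal CDF.
    return 2.0 * (1.0 - 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2.0))))

# Simulate many tests under a true null: the test statistic is then
# standard normal, and the resulting p-values should be uniform on [0, 1].
n_tests = 100_000
p_values = [p_value_two_sided(random.gauss(0.0, 1.0)) for _ in range(n_tests)]

share_below_05 = sum(p < 0.05 for p in p_values) / n_tests
share_below_10 = sum(p < 0.10 for p in p_values) / n_tests
print(share_below_05, share_below_10)  # each close to its threshold
```

A right-skewed departure from this uniform benchmark (a pile-up of p-values just below 0.05) is what the p-curve literature treats as symptomatic of p-hacking.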

3 The term ‘file drawer problem’ was coined by Rosenthal (Citation1979) to describe a situation where results are missing from a body of research evidence. Concern about this problem had been expressed earlier by Sterling (Citation1959), who warned of ‘embarrassing and unanticipated results’ arising from Type I errors if insignificant results are kept in a drawer or put in the bin, as suggested by Gilbert (Citation1986).
