ABSTRACT
The results of empirical work in economics and finance are typically sensitive, inter alia, to model specification, sample period, variable definitions and estimation methods. If the underlying issue attracts bias of some sort, the reporting of results tends to be selective, intended to confirm prior beliefs, which may be ideologically driven, or to make the paper more publishable. In this paper, the fragility (variability and sensitivity) of empirical results is demonstrated, and the process whereby it enables research selection bias is described. For this purpose, several research areas are considered, including the determinants of economic growth, the effect of gender on risk aversion, the determinants of capital structure, the J-curve effect, the effect of corruption on foreign direct investment, and the Kuznets curve.
Acknowledgement
I am grateful to two anonymous referees for two rounds of detailed and useful comments.
Disclosure statement
No potential conflict of interest was reported by the author.
Notes on contributor
Imad A. Moosa is currently a Professor of Finance at RMIT, Melbourne. Before taking on the present position he was a Professor of Finance at Monash University and La Trobe University, and a Lecturer in Economics and Finance at the University of Sheffield. Prior to becoming an academic in 1991, he worked for over ten years as a professional economist and financial journalist, including as an economist in the Financial Institutions Division of the Bureau of Statistics at the International Monetary Fund (Washington, DC). He has published 24 books and over 200 papers in scholarly journals. His most recent books are Econometrics as a Con Art: Exposing the Limitations and Abuses of Econometrics and Publish or Perish: Perceived Benefits versus Unintended Consequences.
Notes
1 The Leamer critique has been debated in the literature. Those arguing against it or casting doubt on its validity include Angrist and Pischke (2010) and McAleer, Pagan, and Volker (1985). Supportive arguments and evidence in response to McAleer et al. (1985) are presented by Cooley and LeRoy (1986). For details, see Moosa (2017).
2 The ‘p-curve’ describes the density of reported p-values in a strand of literature. The underlying proposition is that if the null hypothesis were true (no effect), p-values should be distributed uniformly between 0 and 1. For more on p-hacking, see Gelman and Loken (2013) and Simonsohn, Simmons, and Nelson (2015).
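The uniformity claim underlying the p-curve can be checked with a small simulation, which is not part of the original note; the function names and parameter values below are purely illustrative. A minimal sketch, assuming two-sided z-tests on normal samples with a known unit variance and a true null of zero mean:

```python
import random
import statistics
from math import sqrt, erf

def normal_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1 + erf(x / sqrt(2)))

def simulate_p_values(n_studies=10000, n_obs=30, seed=42):
    """Simulate p-values from two-sided z-tests when the null (mean = 0) is true."""
    rng = random.Random(seed)
    p_values = []
    for _ in range(n_studies):
        sample = [rng.gauss(0, 1) for _ in range(n_obs)]
        # With known sigma = 1, the z-statistic is mean * sqrt(n).
        z = statistics.mean(sample) * sqrt(n_obs)
        p_values.append(2 * (1 - normal_cdf(abs(z))))
    return p_values

p = simulate_p_values()
# Under a true null, each decile of [0, 1] should hold roughly 10% of p-values.
deciles = [sum(1 for v in p if i / 10 <= v < (i + 1) / 10) / len(p)
           for i in range(10)]
```

Each entry of `deciles` comes out close to 0.10, the flat distribution the note describes; selective reporting or p-hacking would instead pile mass just below 0.05.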
3 The term ‘file drawer problem’ was coined by Rosenthal (1979) to describe a situation where results are missing from a body of research evidence. Concern about this problem had been expressed earlier by Sterling (1959), who warned of ‘embarrassing and unanticipated results’ arising from Type I errors if insignificant results are kept in a drawer or put in the bin as suggested by Gilbert (1986).