Original Article

The Impact of P-hacking on “Redefine Statistical Significance”

Pages 219-235 | Received 24 Dec 2017, Accepted 04 May 2018, Published online: 05 Sep 2018
Abstract

In their proposal to “redefine statistical significance,” Benjamin et al. claim that lowering the default cutoff for statistical significance from .05 to .005 would “immediately improve the reproducibility of scientific research in many fields.” Benjamin et al. assert specifically that false positive rates would fall below 10% and replication rates would double under the lower cutoff. I analyze these claims here, showing how the failure to account for P-hacking and other widespread reporting issues leads to exaggerated and misleading conclusions about the potential impact of the .005 proposal.

Notes

1 The term reproducibility is often treated as synonymous with replication (e.g., as in Benjamin et al., Citation2018), even though they have different meanings. Because I focus here on replication, I only use the term reproducibility when quoting from sources that use the two words interchangeably. See Tel Aviv University (n.d.) for the distinction between these two terms.

2 Behaviors motivated by nonscientific considerations such as promotion, tenure, prestige, money, and so on.

3 Note that a p value larger than α is not interpreted as evidence in favor of H0. In the following analysis, I do not consider the so-called accept-support context, in which the researcher’s hypothesis corresponds to H0, instead of H1, and large p values are interpreted as support for retaining H0.

4 I disregard attempts at P-hacking that fail to achieve p <  α. Because these p values are never labeled “significant,” they do not appear in the scientific literature and therefore do not contribute to the replication crisis. As mentioned before (see Footnote 3), I also do not consider the “accept-support” context of significance testing.

5 This is especially noteworthy given that Ioannidis is one of the many coauthors of Benjamin et al. (Citation2018).

6 In the Assessing the Impact section, I estimate, based on data from a replication study in psychology (Open Science Collaboration, Citation2015), that between 3% and 15% of all p values are hacked. These estimates are used here for illustration and should not be interpreted as a definitive range for the actual rate of P-hacking in the published literature.

7 As noted when h was originally introduced in the Scenario 2 section, I disregard new P-hacking attempts at the lower cutoff that fail to produce a significant result, because these failed attempts do not contribute to the replication crisis.

8 The prior odds can be expected to decrease for the same reasons cited in the Scenario 3 section to explain why P-hacking will increase and power will decrease under the lower significance cutoff. Because it will be more challenging to obtain significant findings under the lower cutoff, scientists may respond by testing many more speculative hypotheses in search of significant results. Such a decrease in prior odds would increase the FPR even in the absence of any other types of P-hacking. I do not consider this possibility any further here.
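The effect described in this note can be illustrated numerically. The sketch below is my own illustration, not a computation from the article: it assumes the standard relation FPR = α(1 − π) / (α(1 − π) + power · π), where π is the prior probability that the tested hypothesis is true, and shows that lowering the prior odds raises the FPR even with α and power held fixed and no P-hacking at all.

```python
def fpr(alpha: float, power: float, prior_odds: float) -> float:
    """False positive rate among results declared significant.

    Assumes the standard screening model: a fraction pi of tested
    hypotheses are true (pi derived from the prior odds), true effects
    reach significance at rate `power`, and null effects at rate `alpha`.
    """
    pi = prior_odds / (1 + prior_odds)   # prior probability H1 is true
    true_pos = power * pi                # rate of true positives
    false_pos = alpha * (1 - pi)         # rate of false positives
    return false_pos / (false_pos + true_pos)

# Holding alpha = .005 and power = .5 fixed, shrinking the prior odds
# (i.e., testing more speculative hypotheses) inflates the FPR:
for odds in (1.0, 0.5, 0.1):
    print(f"prior odds {odds}: FPR = {fpr(0.005, 0.5, odds):.3f}")
```

The α, power, and prior-odds values above are arbitrary illustrative choices; the monotone effect (lower prior odds, higher FPR) is what matters, not the particular numbers.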

Additional information

Funding

This work was supported by the National Science Foundation [1554092].
