Abstract
We review the literature on the hot hand fallacy by highlighting the positive and negative aspects of hot hand research over the past 20 years, and suggesting new avenues of research. Many researchers have focused on criticising Gilovich et al.'s claim that the hot hand fallacy exists in basketball and other sports, instead of exploring the general implications of the hot hand fallacy for human cognition and probabilistic reasoning. Noting that researchers have shown that people perceive hot streaks in a gambling domain in which systematic streaks cannot possibly exist, we suggest that researchers have paid too much attention to investigating the independence of outcomes in various sporting domains. Instead, we advocate a domain-general mechanistic approach to understanding the hot hand fallacy, and conclude by suggesting approaches that might refocus the literature on the important general implications of the hot hand fallacy for human probabilistic reasoning.
Notes
1. However, in their original paper, Gilovich et al. (Citation1985) eliminated the relatively mundane possibility that belief in the hot hand could be due to a memory bias. They presented basketball fans with short sequences of hit and miss data and found that participants classified these sequences as “chance shooting”—sequences of hits and misses that mimicked the behaviour of a coin across several tosses—when the probability of alternation between hits and misses was between 0.7 and 0.8, rather than 0.5. Thus, people conceive of chance as excessively rapid alternation between equally likely options, which explains why they tend to construe even short strings of consistency as evidence of streaks or a hot hand.
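To make the alternation-probability manipulation concrete, the following is a minimal simulation sketch (our own illustration, not code from Gilovich et al.): each outcome flips from the previous one with probability `p_alternate`, so a value of 0.5 reproduces independent coin tosses, while values of 0.7–0.8 produce the over-alternating sequences that participants misjudged as chance.

```python
import random

def make_sequence(p_alternate, length=21, seed=None):
    """Generate a hit (H) / miss (M) sequence in which each outcome
    differs from the previous one with probability p_alternate.
    p_alternate = 0.5 is equivalent to independent fair coin tosses."""
    rng = random.Random(seed)
    seq = [rng.choice("HM")]
    for _ in range(length - 1):
        if rng.random() < p_alternate:
            seq.append("M" if seq[-1] == "H" else "H")  # switch outcome
        else:
            seq.append(seq[-1])                          # repeat outcome
    return "".join(seq)

def alternation_rate(seq):
    """Observed proportion of adjacent outcomes that differ."""
    changes = sum(a != b for a, b in zip(seq, seq[1:]))
    return changes / (len(seq) - 1)

# A truly random sequence (p = 0.5) versus an over-alternating one
# (p = 0.75) of the sort participants labelled "chance shooting".
print(make_sequence(0.5, seed=1), make_sequence(0.75, seed=1))
```

Over many trials, sequences generated at `p_alternate = 0.5` contain noticeably longer runs of consecutive hits or misses than those generated at 0.75, which is exactly the consistency that observers misread as a hot hand.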
2. A quick survey of the published research on the hot hand fallacy shows that approximately 23 papers have reported domain-specific, non-mechanistic studies of the hot hand fallacy, whereas only five published manuscripts have explored the mechanisms that underpin it. We also discuss a handful of unpublished manuscripts in our review that adopt mechanistic approaches; although they contribute to the “mechanistic” side of the scale, they have not yet been peer reviewed, which tempers their impact and makes it difficult to vouch for their methodological rigour.
3. Furthermore, proponents of the adaptiveness approach fail to consider alternative models that may be more adaptive. For example, although Burns (Citation2004) showed that streaks gave players useful information about which of their team-mates were most likely to score in the long run, there may be other approaches that more effectively identify competent team-mates. Specifically, Burns did not compare the hot hand model to models that consider other performance data, such as each player's percentage of scoring shots across the entire game. Adaptiveness is therefore difficult to evaluate in the absence of alternative models of behaviour that would provide the necessary comparison standard.