
Bayesian accounts and black swans: Questioning the erotetic theory of delusional thinking

Pages 456-466 | Received 12 Aug 2015, Accepted 14 Aug 2015, Published online: 15 Sep 2015

Abstract

Parrott and Koralus argue that a particular cognitive factor – “impaired endogenous question raising” – offers a parsimonious account of three delusion-related phenomena: (1) the development of the Capgras delusion; (2) evidence that patients with schizophrenia outperform healthy control participants on a conditional reasoning task; and (3) evidence that deluded individuals “jump to conclusions”. In this response, I assess these claims and raise my own questions about the “erotetic” theory of delusional thinking.

Acknowledgements

I thank Rob Ross and Kengo Miyazono for valuable comments on a draft of this manuscript.

Disclosure statement

No potential conflict of interest was reported by the author.

Notes

1. “Cognitive neuropsychiatry … adopts a ‘levels-of-explanation’ approach to the study of psychiatric symptoms such as delusions and hallucinations … model[ing] the clinical phenomenology of specific symptoms in terms of disruption to normal processing of information about self and the world” (Langdon, Citation2011, p. 449, italics in original).

2. The distinction between externally stimulated and self-initiated questions is comparable to the well-known distinction between Type 1 (“intuitive”) processing and Type 2 (“analytic”) processing (Kahneman, Citation2011). A number of authors have suggested that the cognitive style of deluded individuals can be characterised as insufficiently analytic (Aimola Davies & Davies, Citation2009; Freeman & Garety, Citation2014). As a result, they tend not to override (“question”?) their automatic reactions to stimuli.

3. “Distinguish two kinds of parsimony … qualitative and quantitative. A doctrine is qualitatively parsimonious if it keeps down the number of fundamentally different kinds of entity: if it posits sets alone rather than sets and unreduced numbers, or particles alone rather than particles and fields, or bodies alone or spirits alone rather than both bodies and spirits. A doctrine is quantitatively parsimonious if it keeps down the number of instances of the kinds it posits; if it posits 10²⁹ electrons rather than 10³⁷, or spirits only for people rather than spirits for all animals. I subscribe to the general view that qualitative parsimony is good in a philosophical or empirical hypothesis; but I recognize no presumption whatever in favor of quantitative parsimony” (Lewis, Citation1973, p. 87, italics in original).

4. In this respect their account seems inconsistent (they vacillate between endorsing a qualitative difference and endorsing a merely quantitative difference). An alternative reading of P&K is that, their key hypothesis aside, their account is qualitatively parsimonious (at one point they say “A notable virtue of this explanation is that it does not posit any cognitive states or operations that are radically different from that of normal subjects, with the exception of the notion that delusional subjects are less inquisitive” [my italics]). However, while this reading spares P&K the inconsistency, it relinquishes any real claim to qualitative parsimony.

5. “Damage to this normal safety-check mechanism (a deficit) is necessary, we think, to explain the presence of delusional beliefs” (Langdon & Coltheart, Citation2000, pp. 203–204).

6. According to P&K, my account of Capgras delusion (McKay, Citation2012) “claims the subject's prior in [the impostor] hypothesis is low and therefore discounted in her subsequent reasoning” (my italics). This is not quite accurate. Whereas I do suggest that deluded individuals underweight their prior beliefs (and thus adopt beliefs that best explain the evidence available to them), this is not because those prior beliefs are low (or high). My point is that even very low prior beliefs (e.g. in the existence of impostors) can yield high posterior beliefs if the conditionalisation process is distorted in the way I suggest.
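To make the arithmetic of this point concrete, here is a minimal sketch in Python. The distortion is modelled, purely for illustration, as an exponent b < 1 applied to the prior odds; this is a toy parameterisation of prior underweighting, not the exact model of McKay (2012), and all the figures are invented.

```python
# Toy illustration of distorted conditionalisation (an illustrative
# parameterisation, not McKay's 2012 model verbatim): prior underweighting
# is modelled as an exponent b < 1 applied to the prior odds.
def posterior(prior, likelihood_ratio, b=1.0):
    """Posterior probability of H given E.

    b = 1.0 gives standard Bayesian conditionalisation;
    b < 1.0 underweights the prior (hypothetical distortion parameter).
    """
    prior_odds = prior / (1.0 - prior)
    posterior_odds = likelihood_ratio * prior_odds ** b
    return posterior_odds / (1.0 + posterior_odds)

# A very low prior in the impostor hypothesis (invented figure) ...
prior_impostor = 1e-4
# ... and evidence (an absent autonomic response) assumed to be ten times
# more likely under "impostor" than under "this is my wife".
lr = 10.0

print(posterior(prior_impostor, lr))          # ~0.001: impostor stays implausible
print(posterior(prior_impostor, lr, b=0.1))   # ~0.80: the distorted update endorses it
```

With standard conditionalisation (b = 1) the impostor hypothesis remains wildly implausible; underweighting the prior (b = 0.1) lets the very same low prior and the very same evidence yield a posterior of about .80.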

7. Note that on the account of Adams et al., deluded individuals are over-responsive to sensory evidence, but do not necessarily depart from Bayesian reasoning. Instead, deluded individuals encode the precision of sensory evidence in an aberrant fashion (such that, relative to the precision of prior beliefs, the precision of sensory evidence is increased). The sensory evidence may, however, be combined with prior beliefs in a more-or-less Bayesian fashion.
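A toy Gaussian example makes the precision-weighting idea concrete (a single-level simplification offered for illustration; Adams et al. work within a hierarchical predictive-coding framework, not this model). The posterior mean is a precision-weighted average of the prior mean and the sensory datum, so inflating the encoded sensory precision shifts the posterior towards the evidence while the combination rule itself remains Bayesian.

```python
# Toy single-level Gaussian illustration (a simplification; not Adams et
# al.'s hierarchical model). The fusion rule below is ordinary Bayes;
# only the encoded sensory precision differs between the two calls.
def fuse(mu_prior, pi_prior, x, pi_sensory):
    """Precision-weighted Bayesian combination of a Gaussian prior and datum."""
    pi_post = pi_prior + pi_sensory
    mu_post = (pi_prior * mu_prior + pi_sensory * x) / pi_post
    return mu_post, pi_post

mu_prior, pi_prior = 0.0, 4.0   # a confident prior centred on 0
x = 10.0                        # a surprising sensory datum

print(fuse(mu_prior, pi_prior, x, pi_sensory=1.0))   # (2.0, 5.0): prior dominates
print(fuse(mu_prior, pi_prior, x, pi_sensory=16.0))  # (8.0, 20.0): evidence dominates
```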

8. One might object that there are plausible medical hypotheses that explain this evidence at least as well as the impostor hypothesis: for example, the hypothesis “This person is my loved one and I have suffered a stroke”. However, it is not obvious that this “stroke” hypothesis explains the evidence as well as (or better than) the impostor hypothesis. For one thing, the stroke hypothesis is quite general (P&K's example “I am misperceiving due to illness” is even more general). A stroke can cause any number of psychological and physical problems, so, given that one has had a stroke, the likelihood of any specific impairment (an impairment in the autonomic response to familiar faces, say) may be quite low. A more specific hypothesis would be “This person is my wife and I have had a stroke that has disconnected my face recognition system from my autonomic nervous system”. On the one hand, however, the patient may not realise that an impairment in the autonomic response to familiar faces is even a possible consequence of stroke, and so might not generate this more specific hypothesis at all. On the other hand, patients who do generate the hypothesis may never come to the attention of delusion researchers: “We assume that many people with similar brain injuries actually construct less fantastic accounts – they might say … that their vision is funny, etc. But these people do not get such a lot of attention from the medical profession!” (Stone & Young, Citation1997, p. 338).
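The generality point can be put numerically (the figures below are invented purely for illustration): a hypothesis that spreads its predictions over many possible impairments assigns a low likelihood to any particular piece of evidence.

```python
# Invented figures, purely to illustrate the likelihood argument above.
p_e_given_impostor = 0.9      # "impostor" predicts the flat autonomic
                              # response to the familiar face fairly directly
n_possible_sequelae = 100     # suppose a stroke could produce ~100 distinct
                              # impairments, roughly equiprobable
p_e_given_stroke = 1.0 / n_possible_sequelae

# Likelihood ratio: how much better "impostor" explains this evidence.
print(p_e_given_impostor / p_e_given_stroke)  # 90.0
```

On these (invented) numbers the impostor hypothesis enjoys a large likelihood advantage over the general stroke hypothesis, even though its prior is far lower; whether it wins overall then depends on how the priors are weighted (see note 6).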

9. P&K claim that the difference between control and deluded participants in the “classical” (unincentivised) version of the Beads Task “is explained by particularly strong Bayesian rationality” (their italics). The authors suggest that deluded participants who decide when the posterior probability of one of the jars is .97 are performing “roughly as a Bayesian algorithm would”. In my view there is no basis for this claim. The “Bayesian algorithm” simply provides the probabilities of events – in the absence of relevant costs, it cannot imply anything about when to decide. Moreover, the notion that .97 is some kind of all-purpose rational stopping point is easily contested: with a gun to my head, I would happily sample beads all day rather than stop at .97 and risk a 3% chance of losing my life. The stakes are obviously far lower in the standard Beads Task, but the point is that without knowing what costs a participant anticipates, one cannot infer anything about the rationality (or otherwise) of their decision in this task.
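For reference, here is what the “Bayesian algorithm” actually delivers in the classical task (assuming the usual 85:15 bead ratio, which the note does not specify): posterior probabilities after each draw, and nothing more.

```python
# What the "Bayesian algorithm" delivers in the Beads Task, assuming the
# classical 85:15 ratio (the ratio is an assumption here).
def posterior_after(draws, p_majority=0.85):
    """P(mostly-red jar | observed beads), with equal priors on the two jars.

    draws: a string of 'r'/'b' bead colours observed so far.
    """
    odds = 1.0  # equal prior odds on the two jars
    lr = p_majority / (1.0 - p_majority)
    for bead in draws:
        odds *= lr if bead == 'r' else 1.0 / lr
    return odds / (1.0 + odds)

print(posterior_after('r'))    # 0.85 after one bead
print(posterior_after('rr'))   # ~0.97 after two same-coloured beads
```

On the 85:15 version the posterior reaches roughly .97 after just two same-coloured beads. But the computation stops at probabilities: nothing in it fixes .97 (or any other value) as the rational point at which to decide, because that depends on the anticipated costs of error and of further sampling, which the algorithm does not model.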
