Knowledge ascriptions and the psychological consequences of changing stakes

Pages 279-294 | Received 01 Oct 2006, Published online: 20 May 2008
Abstract

Why do our intuitive knowledge ascriptions shift when a subject's practical interests are mentioned? Many efforts to answer this question have focused on empirical linguistic evidence for context sensitivity in knowledge claims, but the empirical psychology of belief formation and attribution also merits attention. The present paper examines a major psychological factor (called ‘need-for-closure’) relevant to ascriptions involving practical interests. Need-for-closure plays an important role in determining whether one has a settled belief; it also influences the accuracy of one's cognition. Given these effects, it is a mistake to assume that high- and low-stakes subjects provided with the same initial evidence are perceived to enjoy belief formation that is the same as far as truth-conducive factors are concerned. This mistaken assumption has underpinned contextualist and interest-relative invariantist treatments of cases in which contrasting knowledge ascriptions are elicited by descriptions of subjects with the same initial information and different stakes. The paper argues that intellectualist invariantism can easily accommodate such cases.

Notes

1. This Hi/Lo case is a version of the bank case originally developed by Keith DeRose in support of contextualism, and adapted to support IRI by Jason Stanley. Contextualism has been defended in Lewis [1996], Cohen [1999], and DeRose [1992]; IRI in Stanley [2005] and, more guardedly, Hawthorne [2003]; and strict invariantism in Williamson [2005] and elsewhere. Other ways of handling comparable epistemic shift intuitions include contrastivism [Morton and Karjalainen 2003; Schaffer 2005] and relativism [Richard 2004; MacFarlane 2005].

2. John Hawthorne discusses some work on biased judgements of risk as one possible explanation of a tendency to ‘overproject’ one's sceptical sentiments in assessing the beliefs of others [Hawthorne 2003: 164]; he suggests our mere contemplation of problematic counter-possibilities may raise our estimation of their likelihood [cf. Williamson 2005]. Stewart Cohen, Keith DeRose, and Jonathan Schaffer argue that there is no clear advantage for IRI over contextualism in the psychological work cited by Hawthorne [Cohen 2005; DeRose 2005; Schaffer 2006]. Cohen points out that the empirical data Hawthorne cites do not quite support the suggestion that mentioning a risk always elevates one's estimation of its likelihood, and there is further evidence—e.g. [Sherman et al. 1985]—that imagining a risk does not have a uniformly positive impact on its perceived likelihood. While the focus of the current paper will be strictly on practical interests, epistemologists also need to do more to examine what happens to us psychologically when possibilities of error are mentioned.

3. On increased searching for information, see e.g. Sanitioso and Kunda [1991] and Huneke et al. [2004]; on more complex cognitive strategies to reach a judgement, both when cued and spontaneously, see e.g. McAllister, Mitchell, and Beach [1979] and Van Hiel and Mervielde [2003].

4. The numerical anchoring bias, for example, is insensitive to financial incentives for accuracy when the anchor is supplied by the experimenter, but attenuated by incentives when the anchor is generated by the subject's own efforts. In anchoring, judgements are biased in the direction of a cue: subjects first asked whether Mount Everest is more or less than 2,000 feet tall went on to give a median estimation of 8,000 feet as its height; subjects who were instead asked whether it is more or less than 45,000 feet tall gave a subsequent median estimation of 42,550 feet [Jacowitz 1995]. The effect persists even when subjects are forewarned about the effect, and when it is made vivid to subjects that the anchor is randomly selected, for example on the basis of the spin of a wheel of fortune [Tversky and Kahneman 1974; Chapman and Johnson 2002]. The anchor does not have to be provided by the experimenter, however: a question like, ‘In what year did the second European explorer land in the West Indies?’ prompts the subject to think of 1492 on her own, and then adjust upwards from there. Epley and Gilovich have shown that financial incentives and forewarnings reduce anchoring effects only when the anchor is generated by the subject [Epley and Gilovich 2005]. They argue that the more belief formation depends on effortful thinking rather than automatic and unconscious processes, the more sensitive it will be to perceived incentives.

5. The quoted phrase is DeRose's; he joins advocates of IRI in finding that these cases elicit ‘stronger and more stable’ intuitions than cases not involving practical interests, or cases in which subjects or other parties to the conversation are unreasonable or mistaken about their interests [DeRose 2005].

6. Advocates of IRI could also object to certain features of Schaffer's version. In both cases Schaffer poses the question of whether the subject knows immediately after a sentence declaring, ‘He is right—the bank will be open’. Because this affirmation follows a brief description of the subject's inferring that the bank will be open from his memory of last week's opening, it's hard not to pick up a hint that the subject's inference is right, and to read this as an affirmation that the thinking in this case is adequate. Schaffer's version also reduces the temptation to shift by making the evidence fresher and the odds of a change in hours lower (6 days have elapsed rather than 13).

7. With such a change, it appears to me that Low-Stakes Sarah is being neurotic in worrying about error, not that Low-Stakes Hannah lacks knowledge. Schaffer also develops a pair of cases mentioning the possibility of error, and contends that the shift will disappear into a uniform denial of knowledge, but my intuitions on these cases are very weak. These cases also differ from Stanley's in inserting the suggestion of error in the voice of the narrator rather than the conversational partner of the subject, which makes a difference to the expected subjective confidence of the subject (a difference to be discussed shortly).

8. Jon Kvanvig is also credited as having presented a similar suggestion on his blog Certain Doubts. See Stanley [2005: 6].

9. As further evidence that Stanley's cases might have to do with the difference between the presence and absence of settled belief, it's worth noting that in Schaffer's stripped-down versions of High and Low, which more readily elicit the reaction that the subject knows in both cases, the subject is explicitly said to draw the conclusion that the bank will be open on the coming Saturday.

10. From here on I'll be using ‘closure’ to mean non-specific closure. Note that motivation for specific closure and motivation for non-specific closure are not exclusive; it's possible to increase both forms of motivation simultaneously, for example, by changing both incentives to reach a certain outcome and incentives for accuracy, or by adjusting time pressure or environmental conditions. Perhaps surprisingly, motivation for specific and non-specific closure can function orthogonally. Unfortunately, the Bank cases involve both kinds of motivation; for a more fine-grained understanding of intuitive knowledge ascription one would also need to develop sets of cases that involve just one sort of closure motivation at a time.

11. Time pressure: [Kruglanski and Webster 1991]; background noise: [Kruglanski, Webster, and Klem 1993]; affective perception of task: [Webster 1993].

12. Of course, IRI does not insist that every difference in stakes makes a difference in knowledge ascription: it's consistent with Stanley's formulation that such shifts could be quite rare, and involve only fairly complex judgements.

13. For example, Schaffer [2006] gives a number of Ignorant High Stakes cases with Low Stakes counterparts which elicit no intuitions of the epistemic shift IRI predicts.

14. The hindsight bias impairs our ability to set aside what we know while evaluating the perspective of another who should be taken to lack this knowledge. (It is asymmetrical—we have no difficulty ‘subtracting ignorance’ and reasoning about the perspective of others taken to be better informed than we are.) The bias cannot be cancelled by forewarnings or financial incentives for accuracy. For a review of the impact of the hindsight bias on knowledge ascriptions, see Nickerson [1999]. It has been argued that hindsight, or ‘the curse of knowledge’, is the central psychological limitation on mental state reasoning [Birch and Bloom 2004].

15. For comments on earlier drafts of this paper I am grateful to Kent Bach, Cheryl Misak, Diana Raffman, and Sergio Tenenbaum. A version of this paper was presented at the Central Division APA in April 2007; I owe thanks to members of the audience for their feedback and special thanks to Jason Stanley, who served as commentator, both for his astute comments on that occasion and for a thought-provoking discussion of these issues a year earlier.
