
Continuing on

Pages 670-691 | Received 18 Nov 2015, Accepted 19 Nov 2015, Published online: 08 Jan 2016
Abstract

What goes wrong, from a rational point of view, when an agent’s beliefs change while her evidence remains constant? I canvass a number of answers to this question suggested by recent literature, then identify some desiderata I would like any potential answer to meet. Finally, I suggest that the rational problem results from the undermining of reasoning processes (and possibly other epistemic processes) that are necessarily extended in time.

Notes

1 The previous two paragraphs are adapted from my discussion of the Baseball case in Titelbaum (Citation2013).

2 For what it’s worth, a similar point could be made about rational constraints on how an agent’s various attitudes should relate at a given time. Arguments that rational credences satisfy Kolmogorov’s probability axioms typically begin by assuming that there’s some rational pressure for an agent’s synchronic credences in different propositions to line up with each other. The assumption is rarely commented upon only because it seems so obviously true.

3 Thanks to Greg Novack for helping me better articulate this position.

4 There is a small logical step here from the synchronic to the diachronic: Just because one is required to have attitude A at time 1 and one is required to have attitude B at time 2, that doesn’t necessarily mean that there’s a requirement to have A at 1 and B at 2. The details will depend on one’s deontic logic of rational requirements.

5 This is not Kelly’s definition of time-slice views, for two reasons: First, Kelly works with facts about justification rather than facts about rational permission. Second, Kelly ultimately defines a current time-slice theory as one on which the normative facts are grounded in non-historical facts. Moving from supervenience to grounding helps Kelly deal with cases in which certain non-historical facts are themselves grounded in historical facts. Yet clearly the grounding definition implies the supervenience condition in the text above. Moss (Citation2015), meanwhile, defines time-slice epistemology in terms of two claims: (1) ‘what is rationally permissible or obligatory for you at some time is entirely determined by what mental states you are in at that time’; and (2) ‘the fundamental facts about rationality are exhausted by these temporally local facts.’ I slightly prefer Kelly’s definition because it leaves open the possibility that non-historical facts other than facts about an agent’s attitudes might affect what is rationally permissible for the agent. (See also Hedden (Citation2015) for another approach to defining time-slice views.)

6 Kelly notes that on some epistemologies the constitution of an agent’s current total evidence will count as a historical fact (or as grounded in historical facts). For instance, an epistemology might combine Williamson’s (Citation2000) position that evidence is knowledge with a theory of knowledge invoking the etiology of beliefs. Yet most time-slicers assume that evidential facts are non-historical, so I will go along with that assumption here.

7 Hedden (Citation2015) does, while Moss (Citation2015) doesn’t.

8 Instead, she might respond that an agent who forms a rationally-incorrect initial belief will take it to be correct, and so will think there is some rational pressure to maintain that belief later on because she will continue to take it to be what’s (synchronically) required. In other words, the agent who initially errs is actually under no rational requirement to remain diachronically consistent, but our intuitions about her can be explained by the fact that in some sense it’s reasonable for her to think she’s under such a requirement. This line of response strikes me as unpromising, because we can always ask about cases in which the agent does not maintain her initial belief, nor does she remember its content. If there is a rational problem with the agent’s adopting a different belief in response to the same evidence in at least some such cases – as it will emerge I think there is, even when the initial belief was an incorrect response to the evidence – the time-slicer’s response will fail to account for that problem.

9 For additional arguments against the Uniqueness Thesis, see Kopec (CitationForthcoming), Kopec (Citation2015), Meacham (Citation2014), Schoenfield (Citation2014), and Kelly (Citation2014).

10 Some formulations of Uniqueness replace ‘exactly’ with ‘at most’ to allow for doxastic rational dilemmas.

11 Sometimes the expression ‘make up my mind’ is used as follows: ‘I wanted to go to graduate school for a long time, but in the fall of 2002 I finally made up my mind to do so.’ I’m not sure whether this use of the expression (to indicate resolution, as it were) is different from the use just indicated in the main text, but if so let me stipulate that this isn’t the sort of mind-making I’ll be discussing in this essay.

12 Moreover, I’ve tried to make Baseball acknowledgedly permissive: not only are there multiple conflicting beliefs rationally available to Ray when he makes his initial judgment; Ray is also aware of this permissiveness in his own situation. For the significance of acknowledgedly permissive cases, see Ballantyne and Coffman (Citation2012) and Titelbaum and Kopec (CitationForthcoming).

13 The ‘selection’ talk may be a bit misleading here; I don’t want to take a stand on whether making up one’s mind need be or can be a volitional activity. Moreover, my anti-Uniqueness stance doesn’t require the outcome of making up one’s mind to be underdetermined by causal factors – there may be a perfectly deterministic story by which I could look at your brain right now and figure out how you’re going to make up your mind about any given issue. I merely want to maintain that the epistemically relevant factors bearing on a particular instance of belief-formation may rationally underdetermine which direction a given agent goes.

14 The type of epistemic identity maintained in part by belief constancy need not be one of the types of identity considered in the personal identity literature, nor in the metaphysics of identity more generally. To my mind, Hedden’s (Citation2015) attacks on the identity approach assimilate it too quickly to these potentially independent discussions.

15 Earlier, I suggested that my answer to the Belief Question has a certain sort of priority over my answer to the Architecture Question. How does this interact with my preference for local vs. global answers to the Belief Question? Answer: Not at all, as far as I can see. The local/global issue is: Given a rationally problematic instance of belief inconstancy, is it rationally problematic only because (and when) it fits into a larger, undesirable pattern? This is different from the Architecture Question about the rational properties of cognitive architectures that causally generate constant or inconstant beliefs. One could give either a local or a global answer to the Belief Question and still consider that answer prior to one’s answer to the Architecture Question.

16 Compare the traditional problem with rule-utilitarian theories that agents may on occasion be able to break a rule without causing the general harm that’s supposed to motivate that rule. We may think that even in such cases breaking the rule is wrong, but the rule-utilitarian struggles to explain why that’s the case.

17 See especially Holton (Citation2008) on the reconsideration of intention, and Paul (Citation2015) for investigation of related questions in the belief case.

18 If Ray is uncertain at the later time what would be a rational response to his initial evidence, yet is fairly confident he was thinking rationally at the earlier time, he might take the fact that he settled on the A’s at the earlier time as evidence that that belief was supported by his evidence at that time. But then Ray’s later body of total relevant evidence isn’t the same as his initial body of evidence, because it includes an evidentially significant fact about his earlier attitudes.

19 Here’s one way to think about the distinction: Imagine a purely receptive entity (agent?) whom I’ll call the ‘passive believer.’ The passive believer takes in information about the world, is concerned to develop the most accurate representations of that world that it can, yet has no ability to act in the world as a result. If finite, the passive believer will still have pragmatic theoretical rationality concerns about how to best manage its representational resources, but presumably there are no demands of practical rationality on a passive believer.

20 Compare Ferrero’s use of ‘local’ terminology in his (Citation2012) and the fourth desideratum he lists in Section 1.3 (which he in turn attributes to Bratman (Citation2012, Section 1.5) and Bratman (Citation2010, 10–11 and 20–21)). Elsewhere in the action theory literature, a ‘local’/‘global’ distinction is sometimes used to frame the question of whether a rational requirement that an agent displays certain general properties over time can generate requirements on particular attitudes considered singly. Notice that even if this question is answered in the affirmative, the resulting account of rational requirements on individual attitudes would not be ‘local’ in my sense.

21 To quote Jeff van Gundy on a May 2nd, 2014 broadcast of the NBA playoffs: ‘Man’s greatest right is to change his mind.’

22 Compare Nomy Arpaly’s distinction between theorizing about rationality that creates a ‘rational agent’s manual’ and theorizing that creates an ‘account of rationality.’ As she puts it, ‘Not everything which is good advice translates into a good descriptive account of the rationality of an action, or vice versa’ (Citation2000, 489).

23 Here, I’m availing myself of the distinction between reasons that apply to an agent (so to speak) and reasons that the agent has. One might be willing to grant that the agent’s belief makes it the case that there is some reason for the agent to believe p at the later time, but because the agent is unaware of any such reason it’s not a reason that she has.

24 While I don’t, some people define the normative in terms of the presence of reasons. If we want to satisfy such a definition, we will have to find a reason somewhere in the vicinity of the soccer example, or else the statement that it would be bad for the cones to move during the game cannot be genuinely normative. Here’s a suggestion: We have a reason to tell the young child not to move the cones while we’re playing soccer. While this reason couldn’t appropriately figure in our half-time deliberations, it nevertheless is a reason somewhere in the normative mix of the situation, and seems to pair nicely with the claim that it’d be bad if the cones were moved. Later on I’ll point out where we could make a similar move (if we felt the need to do so) in my account of diachronic consistency.

25 Sarah Moss suggested this to me in conversation.

26 John Broome writes, ‘When you acquire a new attitude – for instance you learn something or you make a decision – many of your other attitudes may need to adjust correspondingly, to bring you into conformity with various synchronic requirements of rationality.... That some of our attitudes take time to catch up is a limitation of our human psychology.... Ideally rational beings would instantly update their attitudes when things change.’ (Citation2013, 153) Broome therefore sets aside the time lag in reasoning because the rational requirements he is considering are based on ideal agency. Given that human minds are realized in physical brains, human reasoning must be a causal process. (One might argue that as the generation of one attitude from another, reasoning would have to be a causal process even if minds weren’t physically realized. But I don’t know how to argue about causality among the non-physical.) And I believe it’s a metaphysical truth that causal processes take time. My conception of rational ideality does not involve idealizing away from the physical realization of minds. So even if we grant that considerations of rationality should attend to the conditions of ideal agency, I do not agree with Broome that ideally rational beings could update instantly.

27 While I came up with the basics of this account independently, it bears strong affinities to some of what John Broome says in his (Citation2013, especially Ch. 10). Kelly (CitationForthcoming) also discusses the relevance of reasoning processes to time-slicing. Meanwhile, Abelard Podgorski has in a number of works developed the idea of reasoning as a temporally-extended process into a much broader account of diachronic rationality.

28 Though this is a general tendency, not an ironclad rule. And that may be tied to the fact that some reasoning processes are extremely extended over time. Philosophers are certainly familiar with mulling over a particular argument or piece of reasoning over the course of years.

29 One might worry that even these cases are rationally dangerous: Given the requirement to keep one’s beliefs consistent, there’s always the risk that a given belief change will generate unnoticed inconsistencies with some of the agent’s other beliefs. Yet that seems to me just a cognitive fact of life. The key point here is that my account lumps the threat level of mind-changing in with that of the other two reasoning processes listed, and does not put it in the same boat as background flip-flops of opinion.

30 So (going back to the potential concerns of note 24), are there reasons involved at any level in my account of rational diachronic consistency? If one wanted, one could say that my answer to the Architecture Question provides reasons at the level of Bratmanian ‘creature construction’: I have explained some reasons why a designer of cognitive creatures like us might want to provide them with a cognitive architecture that keeps beliefs intact.

31 I’m grateful to Sarah Moss, Michael Bratman, and Sergio Tenenbaum for discussion of this concern.

32 Forget about the target of the rational problem; one might complain that I still haven’t explained what the nature of that problem is. What exactly is this ‘vitiation’ that occurs when a reasoning process loses its premises? I haven’t answered that question because I think a number of possible answers are available, and I haven’t settled on one that I like best. But just because it’s interesting, here’s one option: We might see reasoning as an attempt not just to generate new beliefs that follow from one’s other beliefs, but instead to generate new beliefs grounded or based in the appropriate way on one’s other beliefs. Viewed this way, reasoning is a process of constructing a cognitive state with a particular justificatory structure. If that’s right, then the disappearance of the beliefs from which a process of reasoning began makes it impossible for that reasoning to be successful.

33 Going back to the last point of the Extensional Adequacy discussion, one might worry that if the rational flaw in cases of belief inconstancy lies entirely with the continuation of reasoning, this will open the door once more for a time-slicing account. The thought would be that it’s a purely synchronic matter whether one is engaged at a given time in processes of reasoning whose premises one believes. Yet shifting the locus of negative evaluation in cases of diachronic inconstancy to reasoning processes seems to me like a bad move for the time-slicer. A reasoning process is a temporally extended affair; evaluations of such processes seem intrinsically diachronic to me. Podgorski’s work is once more illuminating on this point.

34 I am grateful to Sarah Paul for copious discussion of this essay and for suggestions concerning many of the references. I am also grateful to audiences at the spring 2014 Informal Formal Epistemology Meeting and the fall 2015 conference on Belief, Rationality, and Action over Time, both held at UW-Madison.
