
Moral Learning, Rationality, and the Unreliability of Affect

Pages 460-473 | Received 11 Feb 2017, Published online: 28 Aug 2017
 

ABSTRACT

James Woodward and John Allman [2007, 2008] and Peter Railton [2014, 2016] argue that our moral intuitions are products of sophisticated rational learning systems. I investigate the implications that this discovery has for intuition-based philosophical methodologies. Instead of vindicating the conservative use of intuitions in philosophy, I argue that what I call the rational learning strategy fails to show that philosophers are justified in appealing to their moral intuitions in philosophical arguments without giving reasons why those intuitions are trustworthy. Although our intuitions are outputs of surprisingly sophisticated learning mechanisms, we do not have reason to unreflectively trust them when offering arguments in moral philosophy.

Notes

1 This is not to say that many philosophers aren't also interested in vindicating the use of moral intuitions in everyday life. But the best explanation, I think, of why philosophers feel that Haidt and others pose a challenge is that our capacity to form everyday moral intuitions plays a central role in philosophical argumentation.

2 Crockett [Citation2013] and Cushman [Citation2013] argue that we use two systems, the model-based and the model-free systems, in making moral judgments. The model-based system assigns values to outcomes, recommending the action that produces the highest-value outcome—usually, in moral dilemmas, the action that a consequentialist would recommend. The model-free system, in contrast, only evaluates state-action pairs, where the value of each pair is determined by a reinforcement history—recommending in moral dilemmas the same actions as a deontological theory. In what follows, I assume that value representations of both model-based and model-free systems can be inputs to the rational learning process.
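The contrast in this note can be put computationally. The following toy sketch (the states, actions, and reward values are hypothetical, chosen only for illustration, not drawn from Crockett's or Cushman's work) shows a model-based learner evaluating outcomes through a world model, while a model-free learner caches values of state-action pairs built up from a reinforcement history:

```python
# Toy illustration (hypothetical states and rewards) of the model-based vs
# model-free contrast. One-step world: in state 's', action 'a1' leads to
# outcome 'o1' (reward 10.0) and action 'a2' leads to outcome 'o2' (reward 2.0).
transitions = {('s', 'a1'): 'o1', ('s', 'a2'): 'o2'}
rewards = {'o1': 10.0, 'o2': 2.0}

# Model-based evaluation: use the world model to assign values to outcomes,
# then recommend the action producing the highest-value outcome.
def model_based_value(state, action):
    return rewards[transitions[(state, action)]]

# Model-free evaluation: cache values of state-action pairs, updated only by
# experienced reinforcement (a simple TD(0)-style running-average update).
q = {('s', 'a1'): 0.0, ('s', 'a2'): 0.0}
alpha = 0.5  # learning rate

def model_free_update(state, action):
    r = rewards[transitions[(state, action)]]  # reward actually experienced
    q[(state, action)] += alpha * (r - q[(state, action)])

# A short reinforcement history in which 'a2' happens to be tried more often.
for action in ['a2', 'a2', 'a2', 'a1']:
    model_free_update('s', action)

# The model-based system values a1 at its true worth immediately; the
# model-free system's cached value for a1 (5.0 after one trial) still
# underestimates its true value (10.0), reflecting its reinforcement history.
print(model_based_value('s', 'a1'))   # 10.0
print(q[('s', 'a1')], q[('s', 'a2')]) # 5.0 1.75
```

The point of the sketch is only the structural difference: the model-based evaluator consults a representation of which outcomes actions produce, while the model-free evaluator consults nothing beyond its cached state-action values.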

3 See Railton [Citation2016: 6–7] for an argument that the empirical literature supports the claim that this empathic concern for others, which develops early in phylogeny, is moral, properly speaking: it shows a proper concern with others’ suffering, rather than a self-interested concern with minimizing one's own distress at the sight of others’ suffering.

4 He makes this point with his hypothetical example of a lawyer having a gut feeling that, appearances to the contrary, the trial is not going in her client's favour. This feeling turns out to be right. It draws, Railton claims, not just on her experience in the actual courtroom but on ‘her years of legal practice, honing skills of interrogation, exposition, and persuasion’ and on her ‘powerful mind, “emotional intelligence”, and fundamental social and personal skills, enlarged and refined over a lifetime’ [Citation2014: 822]. As a result, that feeling attunes her to various cues about her social environment: to the body language and facial expressions of the jurors and of her client, among other things. And it deserves her trust because it is so sophisticatedly sensitive to these cues.

5 His main example of such skilful activity is the lawyer mentioned in the previous footnote.

6 This updating process conforms to what Xu and Tenenbaum [Citation2007] call the ‘size principle’. The idea, roughly, is that if all of the examples of a fep conform to the subcategory with the smallest extension, then subjects should assign a higher probability to that subcategory's being the correct meaning of ‘fep’.

7 For references, see Perfors et al. [Citation2011: 318].

8 It may be objected that Nichols et al.'s study only applies to intuitions that are rule-like in form, but that many of our moral intuitions are particularistic. I treat the Bayesian mechanisms on which Nichols et al. focus as constructing a rational mapping from inputs to outputs of various forms, sentential and rule-like or not. Additional empirical results could extend Nichols et al.'s work to cover particularistic intuitions as well. For the conclusion that the outputs of Bayesian learning mechanisms need not be sentential in form, see Lidz et al. [Citation2003: 295–303], who apply the Bayesian Cognition Thesis to learning deep-structure grammatical rules.

9 To be clear, the supervenience basis would include facts like ‘such-and-such causes so-and-so pain.’ My use of ‘moral facts’ is meant to include facts that don't refer to moral properties.

10 The System 1/System 2 distinction may turn out to be not very helpful in the Bayesian framework, since in that framework representations from both ‘systems’ share similar rational etiologies. But I continue to use the distinction, as it is commonly used in the literature on moral intuitions inspired by the work of Haidt, Greene, and others.

11 This, at least, is how American subjects typically responded. As an anonymous referee noted, it's possible that the responses would be quite different in different cultures.

12 Similarly, Woodward and Allman think that implicit learning about the inefficacy of torture for purposes of extracting information doesn't require direct experience of torture; it needs only experience of the ‘intentional infliction of pain and humiliation’ on others [Citation2007: 200].

13 Railton's ‘Moral Realism’ was first published in 1986; Cushman's and Crockett's pieces were published in 2013.

14 The evolutionary rationale for having System 1 processes is that they quickly draw our attention to imminent threats in the environment or to chances of improving our survival [Kahneman Citation2013: 35]. The same selection pressures that led to both human babies and rats having sophisticated System 1 mechanisms for performing statistical inference also, it is reasonable to assume, made those mechanisms reliably sensitive to features of the environment relevant to the organism's welfare.

15 Thanks to an anonymous referee for pressing me to respond to this objection.

16 Other defenders of intuitions don't obviously pursue what I've called the rational learning strategy. But these accounts are less empirically developed than Railton's, and don't identify the cognitive mechanisms that generate intuitions. According to Horgan and Timmons's [Citation2007] ‘morphological rationalism’ account, intuitive judgments can embody information contained in moral principles, where these principles causally generate those judgments. Horgan and Timmons remain uncommitted as to how we acquire these principles: whether they are learned or innate. My argument applies, should a more developed version of their account claim that such principles are outputs of domain-general learning mechanisms. Sauer [Citation2012] also gives a learning story about the formation of our moral intuitions. Moral intuitions, he argues, can be shaped in an ongoing process by feedback from deliberate, explicit reasoning. My argument doesn't call into question the authoritativeness of intuitions shaped entirely by explicit processes of reasoning. Nevertheless, much of what we call learning takes place through non-conscious processes. To the extent that such processes play a role in moral education that supplements explicit reasoning, my argument applies to Sauer's account as well.

17 My thanks to Jerry Gaus, Connie Rosati, Mark Timmons, two anonymous referees, an associate editor for this journal, and to Stephen Hetherington for their helpful suggestions. Above all, though, I wish to thank Shaun Nichols, who generously read and commented on several drafts, and was a constant source of encouragement.
