
Implicit racial bias and epistemic pessimism

Pages 79-101 | Received 05 Apr 2016, Accepted 27 Oct 2016, Published online: 12 Jan 2017
 

Abstract

Implicit bias results from living in a society structured by race. Tamar Gendler has drawn attention to several epistemic costs of implicit bias and concludes that paying some costs is unavoidable. In this paper, we reconstruct Gendler’s argument and argue that the epistemic costs she highlights can be avoided. Though epistemic agents encode discriminatory information from the environment, not all encoded information is activated. Agents can construct local epistemic environments that do not activate biasing representations, effectively avoiding the consequences of activation. We conclude that changing our local environments provides a way to avoid paying implicit bias’s epistemic costs.

Acknowledgments

For comments and conversations, we are grateful to Joseph Vukov, Shane Wilkins, Carlo DaVia, Peter Seipel, and anonymous referees. The first author would like to thank the Leverhulme Trust for funding the development of the Implicit Bias and Philosophy Research Network; the first author’s discussions in the Network’s workshops contributed to this paper.

Notes

1. Over the last several years, a number of philosophers have sought to understand the philosophical implications of implicit bias research. For an overview of recent discussions, see Brownstein (2015). Issues concerning responsibility for biased judgments are taken up by Holroyd (2012, 2015), Saul (2012a), and Crouch (2012). In an epistemological vein, Puddifoot (2016) and Saul (2012b) argue that implicit bias challenges various theses concerning epistemic justification. A cluster of issues at the interface of epistemology and the psychology of implicit bias is explored by Sullivan-Bissett (2015), Holroyd and Sweetman (2016), and Mandelbaum (2015). Edited volumes by Brownstein and Saul (2016a, 2016b) explore metaphysical, psychological, epistemological, moral, and political issues raised by implicit bias.

2. Another epistemic cost discussed by Gendler is the cross-race facial deficit. This phenomenon is characterized by difficulty in distinguishing the faces of people of other races: it is easier for Whites to distinguish individuals among White faces than among Black faces. Mugg (2013) argues that this doesn’t constitute an epistemic cost. We won’t rehearse the details of Mugg’s argument here, nor will we weigh in on whether the cross-race facial deficit is a genuine epistemic cost. For our purposes, we can set it aside and focus on the other epistemic costs Gendler discusses.

3. A successful replication of this experiment by Gibson, Losee, and Vitiello (2014), which administered a questionnaire immediately after the math test, strongly suggests that subjects who were familiar with stereotypes about the math abilities of Asians and women were unaware that the priming affected them.

4. Shih and colleagues (1999) note that this is especially striking because there is also a stereotype that Asians are good at math. In fact, when they primed Asian-American female students to reflect on the languages spoken at home, their performance on a subsequent math test improved relative to students who weren’t asked to reflect on those languages.

5. Spencer, Steele, and Quinn (1999) suggest that the testing situation itself is enough to activate the stereotype and decrease performance.

6. Egan (2011) and Mugg (2013) both recognize D1 as a crucial step in Gendler’s argument.

7. D2 leaves open the possibility that agents fail to encode information about inequality in the first place. That’s possible but highly unlikely, given the ubiquity of representations of inequality in our society.

8. Koehler (1996) likewise distinguishes between representations of base rates, but as a result of implicit vs. explicit learning. What we are calling “implicit” discounting of base rates maps onto Koehler’s “direct experience” of base rates that leave a “trace” in the representational system. Given enough traces, the information becomes cognitively available. This “contrasts with the explicit learning of a single summary statistic that does not produce multiple traces and is associated with less accurate judgments” (1996, p. 7). Gendler acknowledges these complexities (p. 37, Note 6), but we think that they are more relevant to the discussion than she suggests.

9. Could we say that the cost of (1), base rate neglect, is mapped to the cost of (4), implicit irrationality, and the cost of (2), association regulation, is mapped to (3), explicit irrationality? No. Costs (1) and (3) result from encoding racial categories, while (2) and (4) result from failing to encode them. Equivalent costs cannot result both from encoding and from failing to encode racial categories.

10. Here’s an objection to premise 6: there is evidence that merely encoding the biasing information has epistemic costs. For example, Hahn, Judd, Hirsh, and Blair (2014) show that subjects are surprisingly accurate at predicting the outcomes of tests of their implicit attitudes. Consequently, merely encoding (but not activating) implicit attitudes carries an epistemic cost, at least for some agents: trying to avoid activation of biasing representations. The objection runs together two ways in which biasing information can generate epistemic costs: direct and indirect. Biasing information directly generates epistemic costs when activation of representations of that information causes an agent’s judgments to be less reliable, causes an agent to become cognitively depleted, and so on. Biasing information indirectly generates epistemic costs when unactivated representations of that information cause an agent’s judgments to be less reliable, cause an agent to become cognitively depleted, and so on. Take an analogy. Smith’s car swerved because she turned the steering wheel hard; Smith is the direct cause of the car’s movement. But Smith turned the wheel hard to avoid a pothole; the pothole is the indirect cause of the swerve. The study by Hahn and colleagues (2014) shows that encoded information about race can indirectly generate epistemic costs. An agent’s awareness of her bias can cause her to take preventative measures against its activation. But in this case, the biasing information itself does not generate the epistemic costs; instead, the awareness of the implicit bias directly generates them. Thus, awareness of implicit bias directly generating epistemic costs is distinct from biasing information indirectly generating epistemic costs. In what follows, we are concerned only with how biasing information directly generates epistemic costs. (Thanks to an anonymous reviewer for helpful discussion of the objection.)

11. For an overview of dual-process theories, see Evans (2008) and Frankish (2010). Kahneman (2011) is a book-length overview of his work on dual-process theories. Evans and Stanovich (2013) review major objections to dual-process theories. They argue that there is no generic dual-systems account: different accounts highlight different mechanisms, capacities, and properties. Consequently, many of the objections to dual-systems in general fail. We do not propose to argue that there is some general account. Our aim is only to identify a handful of properties that seem to hold of Systems 1 and 2. The review works mentioned above fit with our characterization of Systems 1 and 2. As far as we know, there is yet to be a case of a conscious System 1 process or an automatic System 2 process. We thank an anonymous reviewer for pressing us on this point.

12. Two common architectures for how the two systems work are parallel-competitive processing and default-interventionist processing. The former suggests that Systems 1 and 2 work in parallel and that the outputs of each jostle for position within the cognitive system (e.g. Epstein, 1994). The latter suggests that judgments are usually the product of System 1 unless System 2 overrides the outcome of System 1 (e.g. Kahneman, 2011).

13. See Peters and Ceci (1982) and Saul (2012a).

14. Saul (2012a, Note 7) notes that this reason for rejection is atypical in psychology journals (see also Lee & Schunn, 2010). This suggests, perhaps, that unprestigious institutional affiliation plays some role in the dismissive attitudes of the reviewers.

15. Andreychik and Gill (2012) report that some agents will justify their own biased attitudes by a kind of empathy: in explaining their own biases, agents will appeal to the oppression of groups who are discriminated against. It’s not clear, though, whether agents think of such “external explanations” as merely explaining their implicit bias or as normatively justifying it. Andreychik and Gill suggest that explaining is more likely, since at least some of the agents appealing to external explanations are motivated by compassion and empathy for targets of discrimination. On the other hand, subjects in Uhlmann and Nosek’s threat condition may be motivated to engage in confabulation as a result of cognitive dissonance. How so? In the threat condition, subjects are asked to bring to mind a time when they failed. A reasonable inference is that subjects experience cognitive dissonance between thinking well of themselves and thinking poorly of themselves. Resolving the dissonance in this case involves attributing their failures to be egalitarian-minded to cultural influences. An anonymous reviewer made the interesting suggestion that these subjects’ confabulated judgments are accurate; however, the accuracy of their judgments is irrelevant to their being confabulations. (Thanks to Keith Payne for the pointers to the literature and to an anonymous reviewer for the objection.)

16. A relevant question for further philosophical and psychological research is: How does the social environment impinge on System 2 processes? Consider this case of biased social representations and cognitive functioning: Blacks and Latinos are disproportionately represented as lawbreakers in television news (Bjornstrom, Kaufman, Peterson, & Slater, 2010). Plausibly, this contributes to forging connections in System 1 between representations of Blacks and Latinos and feelings of fear and danger. Now consider how these social messages affect System 2 processes. One explanation is that social messages about Blacks and Latinos indirectly affect System 2 via System 1: socially shaped System 1 processes affect System 2 processes. One consequence is that System 2 is untouched directly by social messages but only receives its information from System 1. Or living in a racially structured society might affect System 2 directly: a person might believe, as a consequence of seeing disproportionately more Blacks and Latinos as lawbreakers on television news, that a Black or Latino man in her neighborhood is more likely to commit a crime than a White man. How these elements are organized isn’t as important for now as is highlighting that (1) living in a racially structured society has System 1 and System 2 effects, (2) System 1 and System 2 effects are distinct, and (3) System 1 effects impinge on System 2 processes and vice versa.

17. For example, Mandelbaum (2015) says that implicit attitudes are the result of unconscious beliefs – “honest-to-goodness propositionally structured mental representations that we bear the belief relation to” (p. 635). These unconscious beliefs eventuate in biased behaviors (cf. Mandelbaum, 2014). For related views, see Levy (2015), Machery (2016), and Schwitzgebel (2013).

18. Madva and Brownstein (in press; see also Brownstein & Madva, 2012) endorse this kind of view. They describe implicit states as “mutually co-activating semantic-affective-behavioral ‘clusters’ or ‘bundles’.” Their position is similar to Gendler’s, who describes implicit states as having affective, representational, and behavioral components.

19. Thanks to an anonymous reviewer for helpful comments on this matter.

20. Objection: aren’t CV cases really instances of fast bias? Employers are sifting through applications and reaching decisions based on, for example, applicants’ names. In picking “Emily” over “Lakisha,” the employers rely on System 1, not System 2. We expect that employers in such cases offer reasons to rationalize their decisions or at least are disposed to do so. Such rationalizing makes typical résumé cases instances of slow racial bias. But even if we aren’t correct about that, it does not follow that there are no cases of slow racial bias. Consider the growing literature on racial bias in jury deliberations. Given the length of time that jurors take to deliberate, it’s reasonable to suppose that if there is bias in jurors’ deliberations, it is slow bias. Sommers and Ellsworth (2000), for example, identify a range of cases in which Whites are liable to exhibit anti-Black bias in a courtroom setting, including interracial trials (e.g. a Black defendant and a White victim) where race is not a salient factor. In such cases, Whites are convicted at a rate of under 70% while Blacks are convicted at a rate of 85%. By contrast, in cases where race is a salient factor, the conviction rates for Whites and Blacks are both around 75%. (Thanks to an anonymous reviewer for suggesting we discuss this objection.)

21. http://www.techyville.com/2012/11/news/unemployed-black-woman-pretends-to-be-white-job-offers-suddenly-skyrocket/# Spivey doesn’t report whether the emails and phone calls were from different employers. But even if every email were duplicated as a phone call – that is, if Bianca White received only nine requests for interviews – that still means that switching from “Yolanda Spivey” to “Bianca White” resulted in seven more interviews during the week she ran the experiment.

22. In fact, when subjects take their time on IATs, evidence of implicit bias goes down sharply (Fiedler & Bluemke, 2005; Cvencek et al., 2010).

23. Egan (2011) argues that we need not appeal to aliefs to account for the effects of System 1 representations.

24. Our position leaves open that System 2 processes can exhibit bias without activation of bias-promoting System 1 representations. It is possible that System 2 can harbor biased representations without input from System 1 – that’s a fair description of the overt racist. But Gendler’s discussion focuses on implicitly biased agents who are committed to egalitarian ideals. After all, what makes implicitly biased judgments so alarming is that they are made by people who are explicitly committed to egalitarian ideals. Consequently, we can safely assume that System 2 processes are not biased.

25. But isn’t failing to activate relevant information a form of base rate neglect? Not always. In an example from the Central Intelligence Agency’s Psychology of Intelligence Analysis, noted by Gendler (2011), subjects tended to neglect the ratio of Vietnamese to Cambodian jet fighters. Subjects in this case (as well as those described in Tetlock et al., 2000) failed to consciously appreciate information. But there are also cases in which subjects fail to consciously appreciate information without committing base rate neglect. Jones is baking a cake for a party and doesn’t recall that one partygoer is lactose-intolerant. Even if Jones could have brought that information to mind, he is not thereby committing base rate neglect. Failing to consciously appreciate information is thus a necessary but not a sufficient condition for base rate neglect. We are suggesting that when biasing information is not activated, subjects will fail to consciously appreciate it and so remain unbiased; but that alone does not suffice for base rate neglect. (Thanks to an anonymous reviewer for discussion.)

26. For a survey of such data, see Lai et al. (2013).

27. Amodio nicely captures our concern: “implicit racial biases are particularly difficult to change in a cultural milieu that constantly reinforces racial prejudices and stereotypes” (2014, p. 679). Our suggestion is to change one’s local sociocultural milieu. We might add that Madva’s conclusion about mitigating the effects of implicit biases coincides with Amodio’s: focus on individual control-based interventions.

28. Note that our conclusions are similar to those discussed in Haslanger (2015) and Fricker (2010). Haslanger argues that explanations of injustice invoking implicit biases are incomplete without appeal to social structures. We agree. But our position focuses on avoiding epistemic costs raised by implicit biases, not on explanations of injustice due to implicit biases.

29. Mugg (2013) offers a similar response to Gendler’s worries about executive depletion.

30. No pain, no gain, as Jane Fonda once proposed.
