Social Epistemology
A Journal of Knowledge, Culture and Policy

How Partisanship Can Moderate the Influence of Communicated Information on the Beliefs of Agents Aiming to Form True Beliefs

Received 07 Feb 2024, Accepted 21 May 2024, Published online: 01 Jul 2024

ABSTRACT

Partisan epistemology – individuals granting greater credibility to co-partisan sources in evaluating information – is often taken to be evidence of directionally motivated reasoning in which concerns about group membership override concerns about accuracy. Against this dominant view, I outline a novel accuracy-based account of this mode of reasoning. According to this account, partisan epistemology stems from the inference that co-partisans are more likely to be right as they have superior epistemic access to the relevant facts and seek to realize the correct values. I argue that this theory fits better with relevant findings than motivated-reasoning theories of partisan epistemology. Finally, I suggest it has adequate explanatory power vis-à-vis patterns of misinformation belief.

Individuals aligned with different political factions often hold distinct views on various social, economic and cultural issues, such as economic health, immigration impacts, climate change, race’s role in life outcomes and election integrity (Ditto et al. 2019; Finkel et al. 2020; Marietta and Barker 2019). The views held by these partisan groups also tend to be logically independent of each other (Joshi 2020). Understanding a person’s ideological orientation thus allows for a relatively high-confidence prediction of their stance on rationally orthogonal policies.

What explains this? If these beliefs stemmed from individuals’ independent evaluation of underlying causal mechanisms, we would likely see greater diversity in viewpoints within ideological groups and weaker consistent correlations (Levy 2023b). However, the observation of uniform correlations among group members implies that these beliefs are influenced by shared external factors. Therefore, the simplest explanation for the clustering of attitudes is that these positions reflect, either directly or indirectly, the opinions expressed by political elites that individuals align with (Leeper and Slothuus 2014; Lenz 2013). Such cues constitute testimony that people routinely rely on, so this is a plausible and parsimonious explanation for the relative homogeneity of rationally orthogonal attitudes within political groups.

The partisan clustering of logically independent beliefs seems to be at least partly explained[1] by people’s disposition to update comparatively more on testimony from partisan sources – those with whom they share moral and political values – than other sources. We can call this general pattern – individuals granting greater credibility to co-partisan sources in evaluating information – partisan epistemology (cf. Rini 2017). Empirical evidence supports its existence. Kahan et al. (2011), for instance, asked their subjects to evaluate whether they considered certain scientists, all equally qualified, as ‘experts’ whose opinions should be considered by the average person on issues like climate change, nuclear waste disposal and gun control. The participants’ judgments of the scientists’ expertise were influenced by whether the scientists’ views aligned with those of the participants’ cultural groups. If a scientist supported the predominant view within a participant’s cultural group, that scientist was likely to be deemed an ‘expert’ on the matter; if not, they were less likely to be seen as such. Individuals, the researchers conclude, seem prone to selectively assess evidence in patterns that reflect their group identities.

Consider as well the study by Hanel et al. (2018), which aimed to determine if mere source attribution could lead to polarization between groups. Participants evaluated aphorisms such as Bible verses, quotes from ancient philosophers and statements from political leaders. Without source attribution, agreement was high. But when sources were identified (like the Bible or specific politicians), agreement varied based on ideological affiliations. For instance, atheists showed less agreement with aphorisms presented as Bible verses, while Christians showed more agreement. Similarly, political party supporters showed more agreement with aphorisms attributed to a politician from their party and less agreement with those attributed to a rival party politician.

A common view is that asymmetrical (dis)agreement with ideologically (un)aligned sources stems from motivated irrationality. Kahan himself, for instance, has urged that the extent to which a source shares the audience’s normative values can moderate the extent to which the audience updates its beliefs on the information communicated by the source, because people are prone to engage in identity-protective cognition (Kahan 2017; Kahan et al. 2012). That is, daily life offers people all kinds of incentives to hold beliefs similar to those with whom they share normative values (in terms of psychological benefits such as friendship and social support). Because of this, people are prone to evaluate communicated information with the goal of securing these benefits rather than getting at the truth. On this account, we have significantly greater trust in the claims of those belonging to groups with which we identify because we are trying to fit in (rather than be right).[2]

To illustrate further, De Cruz (2020) presents an account which she sees as different from Kahan’s, but is relatively similar for my purposes, as both Kahan (2017) and De Cruz (2020, 445) claim ‘that concerns about group membership are often more salient than epistemic concerns[;] the former often trump the latter’. On this view, people embrace discredited perspectives because aligning with these views enables them to better align and connect with others who share similar beliefs. This mechanism of social belonging, according to De Cruz (2020, 441), ‘can explain why people sometimes accept testimony even if they deem the source inaccurate’.

On this reading, then, belief in misinformation results from agents being careless with respect to the accuracy of their beliefs – a carelessness which, in turn, results from the identity-relevance of these misperceptions, which suppresses accuracy motivations in favor of other psychological motivations such as a drive to conform to a group’s beliefs. For example, if someone accepts the false claim that rural areas do not receive their fair share of public resources, they might do so primarily to reinforce their identity as a loyal member of a social group in which this belief is widely held.

Prima facie, this account seems to have ample empirical support: individuals’ tendency to assign greater credibility to co-partisan sources features prominently in popular explanations of why people believe misinformation. For example, many Republican voters might believe the 2020 presidential election was fraudulent because they trusted Donald Trump’s claims about election fraud. Similarly, a significant number of conservatives might doubt the safety and effectiveness of COVID-19 vaccines as a result of trusting conservative commentators who questioned these vaccines. And in the same vein, a substantial portion of American conservatives presumably rejects the reality of human-caused climate change, influenced by trust in conservative skeptics and a general mistrust towards liberal figures who prominently support climate change awareness in the U.S.

The normative status of partisan epistemology has been the subject of much recent philosophical attention (Chambers 2018; De Cruz 2020; Dorst 2023; Hannon 2023; Lepoutre 2020; Levy 2019; Millar 2023; Rini 2017; Sleat 2023; Woodard forthcoming). The current paper seeks to add to this debate. By integrating disparate lines of evidence, I aim to show how an accuracy-based account of the phenomenon has more explanatory power than often thought. While it’s often assumed that partisan epistemology is evidence of directionally motivated reasoning in which concerns about group membership override concerns about accuracy, I outline an alternative account which seeks to make sense of partisan epistemology as a mode of reasoning motivated by concerns for accuracy instead (§1). In section two, I argue that this theory – of how partisanship can moderate the influence of communicated information on the beliefs of agents aiming to form true beliefs – offers a better explanation of the relevant empirical findings than alternative social-benefits theories of the phenomenon. Since one important benefit of these theories has been taken to be that they’re at a special advantage in accounting for (patterns of) belief in misinformation, I next argue that this advantage is smaller than it seems (§3). Section 4 concludes.

1. Accuracy-Based Partisan Epistemology

There are two features of sources that people routinely use to filter testimony: competence (the extent to which the source is likely to possess accurate information) and benevolence (the extent to which the source is likely to care about our interests) (Mercier and Sperber 2017, 105–108; Sperber et al. 2010). In evaluating the source of a message, people seek to assess whether the source is right (competence) and has our best interests at heart (benevolence). This is an empirical claim about how people evaluate sources. But it also seems the normatively right way to go about it if your goal is to have accurate beliefs.[3] In assessing whom to trust, we’re assessing which agents’ beliefs should influence ours. It does not make sense to let our beliefs be influenced by a source that is not competent with respect to a given topic, since she’s not likely to be right. Similarly, it would be an epistemic mistake to let our beliefs be influenced by a competent source seeking to mislead us, since she is not likely to communicate (useful) accurate information to us even if she is right. This makes it important to establish that a testifier is not only competent but also benevolent towards us.

Crucially, as Mercier (2017, 106) points out, because individuals belonging to the same significant social group will ‘tend to share more interests than random individuals’, such individuals ‘should be more benevolent toward one another’. Thus, the fact that someone is part of a social group to which I belong suggests they are likely to be benevolent towards me. If benevolence is a criterion for evaluating the reliability of information, and if members of social groups we identify with are more likely to exhibit benevolent traits, it follows that people would place more trust in sources from their own groups, even when their primary concern is accuracy. This might reflect a truth-seeking tendency to defer to ingroup members on the grounds that they share your values and are thus less likely to deceive you (Levy 2019).

Moreover, co-partisanship could be an epistemically valid cue of competence as well, in the domain of politically and morally normative claims, for example. As Rini (2017) points out, partisan affiliation reflects a person’s value commitments. When I learn a person’s partisan affiliation, I learn something not just about the descriptive claims she likely endorses, but also about which political and moral values she holds highly. So when I learn that a source of testimony shares my partisan affiliation, I learn that, according to me, she tends to get normative questions right. On normatively relevant claims, then, partisanship can be relevant to assessing the competence of testifiers. This makes a differential receptivity to testimony based on partisan affiliation in such cases potentially consistent with accuracy motivation.

For example, in evaluating healthcare policy proposals like government-funded prescription drug plans, alignment with a social identity or party favoring universal healthcare will lead to greater receptiveness to testimony from co-partisan experts advocating policies that align with this normative preference – the typical pattern of partisan epistemology often ascribed to tribal motives – even if one is motivated by accuracy. This is because the co-partisan source’s arguments are not only about policy specifics. They are also grounded in normative views on healthcare equity and government intervention. Experts who share your partisan affiliation are more likely to propose policies that are better in your view, since ‘better health care policies’ here just means ‘policies that are more likely to realize the values I find important in this context’. Since a co-partisan expert has the correct distal normative preferences (or values), such as solidarity and a caring government, I can infer that her proximal policy proposals about e.g. the funding of prescription drugs are more likely to get things right, even if I can’t evaluate the complex object-level arguments she gives. Since, as a non-expert unable to assess the relative merits of the policy proposals, you have no first-order reasons for evaluating the different testimonies, differential receptivity to co-partisan testimony is thus consistent with accuracy motivation in such matters. Co-partisan testimony is more likely to align with your value-driven perspective on healthcare, making the source’s downstream judgment on specific policies more trustworthy in your eyes.

There’s a second way in which shared values serve as a valid cue for competence. Even in the absence of standard expertise – in terms of credentials and so on – shared values could indicate greater epistemic access (and hence competence). This is because part of what it means to belong to a social group with shared values is that one occupies a distinctive position in the social structure (Young 2002), which implies one is subjected to a distinctive set of social constraints and enablements by the laws, norms and physical infrastructure constituting the social context (Lepoutre 2020). Now, because people with similar identities and values experience group-specific constraints and enablements, they have distinctive knowledge that members of other groups may lack. For instance, legislators who come from working-class or minority communities bring to the table a deep understanding of the challenges and needs inherent to their backgrounds. Their advocacy for certain policies isn’t just a reflection of party allegiance, but is often rooted in an advanced understanding of these specific challenges, informed by their lived experiences. Such perspectives are epistemically valuable in virtue of encoding a rich array of information. In this way, shared values can rationally indicate a source’s increased epistemic access and hence competence. Therefore, because group membership often reflects epistemically important social perspectives, it serves as a practical heuristic for deciding which testimonies to trust regarding politically significant issues (Lepoutre 2020).[4]

Applying similar logic to benevolence yields an interesting result. Under what conditions is co-partisanship a valid indicator of this epistemically relevant property of sources? In other words, when do sources with different normative values have most incentive to mislead you? On claims that are highly politicized, it would seem. Individuals identified with specific political groups then have most reason to advocate for their party’s policies. Consequently, in polarized debates, sources from outside one’s partisan group are more likely to present information that serves their group’s competitive interests rather than aiming for objective transmission of knowledge. Their testimony might then not aim to inform but to strategically influence the narrative. This dynamic makes co-partisanship a crucial indicator of benevolence and epistemic trustworthiness, as it suggests a lower chance of strategic manipulation in testimony. Therefore, the more polarized a topic is, the more epistemic virtue dictates skepticism of sources from different groups. So paradoxically, the more polarized a topic is, the more partisan epistemology can be motivated by accuracy (because it increases the extent to which shared values provide reassurance that the testifier respects our epistemic interests and is not seeking to exploit us).

That said, there likely are some cases where people overextend trust to partisans. For example, Marks et al. (2019) observe that people prefer to defer to co-partisans even on mundane and apolitical tasks such as recognizing new shapes. The researchers term these epistemic spillovers – a phenomenon others describe as ‘remarkable and disturbing’, as on these accounts, ‘it reveals the astounding ease with which we overextend partisan cues into completely irrelevant domains. It’s as if we think someone is a worse mechanic simply because he is on the other side of the political spectrum’ (Woodard forthcoming, 20). However, this comparison overly narrows the scope of potential accuracy-based explanations. Participants in these studies might not necessarily believe co-partisans are more competent; instead, they could infer that those sharing their values are less prone to deception. As anyone who has ever run a psychology experiment knows, some participants will behave less than seriously on tasks that are trivial and meaningless (such as recognizing non-existent shapes). This makes co-partisanship a salient indicator of trustworthiness, as individuals may assume that those who share their values are less likely to engage in common deceptive behaviors towards them.

Still, there will undeniably be other, real-world cases where partisans assign more weight to co-partisanship than they ought to in evaluating testimony (Millar 2023). But aiming at accuracy does not make one infallible. So as long as this lack of discernment is, in typical cases, reasonably constrained, my accuracy-based explanation of partisan epistemology is not undermined by it. I take up this point in section three.

It has been noted that both motivations – social belonging and accuracy – are consistent with the same observable behavior. This is known as the observational equivalence problem of motivated reasoning (Druckman and McGrath 2019; van Doorn 2023). For example, while group-level data might indicate that conservatives reject climate science, this aggregate observation may stem from a variety of individual-level psychological mechanisms. Perhaps identity-protective cognition causes people to embrace biased beliefs and distorts the processes by which they seek out and process information, and partisan epistemology is the outcome of this process. But these patterns could also arise not from motivated irrationality but simply from people employing normatively appropriate cues (competence and benevolence) to judge testimony. So if a Republican’s views on climate change are shaped by the beliefs of fellow partisans, it raises the question: is she influenced by this information because she seeks to affirm her identity as a devoted Republican or because she is aiming for accuracy and believes that the consensus among Republicans is likely to be correct? Many commentators on this debate have argued that the controversy has reached an impasse due to this problem of observational equivalence (Bullock 2009; Druckman and McGrath 2019; Tappin, Pennycook and Rand 2021). If accuracy motivation is our default position – that is, if ‘we ought to assume that individuals typically aim at accuracy when forming beliefs except in instances where we have compelling evidence to the contrary’ (Millar 2023, 12) – we should perhaps conclude that the claim that partisan epistemology is due to motivated reasoning lacks empirical support. In the next section, I review whether the accuracy-based explanation fares any better.
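The observational equivalence problem can be made concrete with a small toy calculation (a purely illustrative sketch; the reliability numbers are arbitrary assumptions, not estimates from any cited study). Two agents assign the same extra weight to co-partisan testimony – one because she takes co-partisans to be more reliable, one because she wants to fit in – and their resulting belief trajectories are indistinguishable:

```python
def source_posterior(prior, reliability):
    """Bayesian update after a source asserts that a claim is true.
    `reliability` is P(asserts true | claim true); for simplicity we
    assume P(asserts true | claim false) = 1 - reliability."""
    numerator = prior * reliability
    return numerator / (numerator + (1 - prior) * (1 - reliability))

prior = 0.5  # agnostic starting credence in the claim

# Hypothetical weights: both agents treat co-partisan testimony as
# weightier, but for different (unobservable) reasons.
agents = {
    "accuracy-motivated agent": {"co-partisan": 0.80, "out-partisan": 0.55},
    "identity-motivated agent": {"co-partisan": 0.80, "out-partisan": 0.55},
}

for agent, weights in agents.items():
    after_co = source_posterior(prior, weights["co-partisan"])
    after_out = source_posterior(prior, weights["out-partisan"])
    # Both agents end up markedly more confident after co-partisan
    # testimony than after out-partisan testimony; behavioral data
    # alone cannot tell us which motivation produced the pattern.
    print(f"{agent}: co-partisan -> {after_co:.2f}, out-partisan -> {after_out:.2f}")
```

With a flat prior of 0.5, the posterior simply equals the assumed reliability, which makes the equivalence easy to see: any differential-trust pattern in the data is compatible with either motivational story about where the trust weights come from.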

2. Evaluating the Accuracy-Based Account

Let’s take the competence and benevolence aspects in turn.

It is of course true that, especially in cases of knowledge asymmetry, you shouldn’t open-mindedly engage with the arguments of a source you suspect to hold non-benevolent attitudes to you (Köymen and Dutilh Novaes forthcoming). However, there seem to be reasons to think benevolence might be less accuracy-related than competence. For instance, some critics have worried it’s not clear that there is, in fact, a correlation between co-partisanship and benevolence. Being ‘like me’ is a fallible indicator of benevolence, and it is frequently exploited for dubious political objectives. As Worsnip (2022) observes, ‘exploitation can come from those who are (perceived to be) on one’s political “side” just as much as those on the other side’. Furthermore, the significance of a speaker’s benevolence towards you diminishes when they are making public statements aimed at both their supporters and opponents (Millar 2023, 16). The risk of being intentionally misled is lower in such scenarios because the speaker’s message is not tailored exclusively to one group. For example, imagine you’re a firm supporter of environmental conservation and view a prominent industrialist, known for their skepticism towards climate change, as being opposed to your views. You might be inclined to believe that this industrialist could intentionally mislead you to further their own agenda. However, your reasons for suspecting their lack of benevolence towards environmentalists also suggest that they would want to maintain credibility with their supporters, and thus they are unlikely to disseminate misinformation in public statements. Therefore, as an environmental supporter, you should assume that public statements from such industrialists about environmental policies are made in good faith, as misleading information would affect their credibility with their supporters as much as with their opponents.
These considerations seem to shed doubt on the idea that partisan epistemology is often motivated by accuracy when the underlying reasoning is that testimony from co-partisan sources is more likely to be non-deceitful. Since I’m chiefly interested in whether partisan epistemology is consistent with a motivation to get things right, I will be focusing mostly on the competence aspect in what follows.

In section 1, I argued that co-partisanship might serve as a cue for competence because (a) members of a social group often have distinct knowledge that others lack and (b) they share your views – i.e. get normative questions right, by your lights – on what we should aim for in e.g. health care and immigration policies and so on. Woodard (forthcoming), however, argues that the latter defense of accuracy-oriented partisan epistemology fails. For it to succeed, she claims, it must be the case that people defer to co-partisans because they share the same values antecedently. This, however, has been called into question: some evidence seems to indicate people share values because they defer to co-partisans, rather than the other way around. This challenges the possibility that partisanship functions as a tool for achieving accurate beliefs about e.g. what healthcare policy to support in order to realize underlying values.

For instance, Levendusky (2009, 113) found that, in cases where issue position and party no longer aligned, 53% involved changing issue position to match the party, while only 28% involved changing party to match issue position. Woodard (forthcoming) claims this is what we would expect if partisanship shapes voters’ values, given the need to harmonize policy stances with those values: when party leaders change their policy stances, party followers are likely to change their preferences accordingly to match the party line, rather than replace a complex and often habitual party attachment with a similarly complex connection to a competing party. Together with related empirical evidence, Woodard thus takes this to show that partisanship more often shapes normative preferences than the other way around.

She gives this influence a decidedly non-epistemic reading. It indicates, she writes, that ideological commitments, including values, are ‘more often an effect of partisanship than its cause’ (Achen and Bartels 2016, 234). This, in turn, tells us that partisan epistemology does not involve using co-partisans’ expert testimony as a source of information for how to best realize shared underlying values (as in my health care example). Rather, she claims, partisan deference reflects ‘merely an emotional group attachment’ (Woodard forthcoming, 14).

In arguing for this non-accuracy interpretation of partisan deference, Woodard (forthcoming, 14) asks: ‘partisanship is just one social identity among many. There are many other identities that plausibly correlate with shared values. Why rely on partisanship rather than one’s gender, race, or religion?’ The answer, she suggests, is that we ‘defer only for reasons that have to do with group loyalty’ (Woodard forthcoming, 16). That argument, however, overlooks how deeply intertwined partisanship is with these other identities and its resulting epistemic value as a system of integrated political commitments. The political views of numerous non-elite voters are shaped less by elite designations like left or right and more by their social group identities, including race, class or regional affiliations. For example, when questioned about their policy views, people readily give explanations in terms of social group concerns such as concerns relating to their class, race and regional identity (Converse 1964, 14). Their views regarding policies or problems that bear on their group interests tend to hang together in a fairly coherent and integrated way (Converse 1964, 38–43). So it seems misguided to pit these identities against partisanship in an argument aiming to show its arbitrariness, as political commitments are often structured around these very identities. And as I’ll argue now, we shouldn’t conceive of this structuring as a mere emotional group attachment with no epistemic grounding. It’s better seen as a mode of political cognition capable of generating political understanding, reliance on which is thus consistent with epistemic goals. We may rely on social group identities to structure political concerns, in other words, because, from the subject’s point of view, this seems like an approach that gets things right.

Being part of a specific social group often means experiencing a unique set of challenges and advantages. Communities, whether they are defined by their rural or urban settings, racial backgrounds like black or white or socio-economic statuses such as middle-class or working-class, encounter distinct laws, societal norms, economic conditions and physical environments. The insights gained from understanding these unique group-specific constraints, along with the normative considerations that arise from them, play a crucial role in evaluating various political issues, policies and public figures. Consequently, it seems consistent with accuracy motivation to structure one’s political commitments based on one’s social group memberships rather than stick to one’s own initial assessment of a policy (Lepoutre 2023). This means that studies finding that people change policy views to match their pre-existing political commitments (rather than change political commitment to match issue position) do not show people are directionally motivated political reasoners. Rather, voters might simply revisit their views in response to positions taken by presumably better-informed elites. This seems plausible, as in order to cope with the complexity of politics, it makes sense for voters to respond to partisan cues that guide them through the complicated political space instead of engaging in the cognitively demanding activity of understanding the policy implications on their own. That makes partisan deference consistent with accuracy motivation.

Testimonial networks are rationally taken to add epistemic value to the information that propagates through them (Levy 2021), so e.g. the ordinary conservative need not be busy protecting her identity when she dismisses climate change. Rather, she’s deferring to networks of co-partisans she has good reason to think are appropriately plugged into experts who know better. When a belief that reverberates around this network is near-universally held with high confidence, her deference is perhaps not a case that requires a special non-accuracy motivation for its explanation. She simply conforms to the beliefs of those around her, whose behavior and assertions are best explained by attributing to them a matching belief, secured by testimony from news sources and politicians, which, she has good reason to believe, receive testimony from genuine experts (Levy 2023a). She remains sensitive to signs of dissent from those who possess expertise and who are trustworthy (by her lights), and her confidence will fall when the proportion of believers in the network shows signs of dropping. This behavior can be parsimoniously explained as the result of bread-and-butter updating in response to higher-order evidence (van Doorn 2023).

Empirical evidence supports this interpretation of partisan deference as rooted in accuracy rather than in social-emotional motivations. Whereas motivated-reasoning theories of how party leader endorsements influence partisans’ attitudes predict that the causal effect of persuasive communication will be nullified by cues from party leaders, experiments show this is not the case. For example, Tappin et al. (2023) exposed Trump-voting Republicans or Biden-voting Democrats to persuasive messaging about a policy issue and found they responded by (1) updating their attitudes toward the message on average and (2) updating by a similar amount even when confronted with the fact that Trump’s or Biden’s position, respectively, was opposed to the message. When exposed to randomized stated positions of leaders from both American parties, partisans have furthermore been found not to automatically adopt the stated position of their party, and even to update in the direction of positions of the leader of the opposition party (Fowler and Howell 2023). The fact that partisans respond positively to the leaders of the opposition party suggests that opinion leadership does not reduce to group identities. Rather, consistent with Bayesian reasoning in light of informational deficiencies (Graham 2023; Hill and Huber 2019), citizens revisit their views in response to positions taken by presumably better-informed politicians from both parties (Fowler and Howell 2023). This suggests partisan epistemology is often rooted in accuracy concerns, as the impact of partisan motivated conformity is more constrained than most theories of directionally motivated reasoning allow.

By contrast, in their acclaimed Democracy for Realists, Achen and Bartels (2016, 215) claim that identity ‘transcends thinking’. But this picture, on which partisans naively adhere to elite cues, is also inconsistent with the empirical evidence. Bolsen, Druckman and Cook (2014), for instance, find that the presence of a partisan endorsement significantly increases information processing time. It leads to more cognitive effort being expended on one’s interpretation of the substance of political information – it is not used as a cue to avoid effortful cognition. This interpretation is bolstered by the finding that, generally, people value accuracy (Pennycook and Rand 2022), as reflected in how they allocate cognitive resources. Engaging in more reasoning has been found to lead people to have greater coherence between judgments and their prior beliefs about climate change but not to exacerbate the impact of partisanship on these beliefs (Bago, Rand and Pennycook 2023). That is, deliberation enhances the degree to which an individual assesses new information based on how well it aligns with relevant beliefs formed from previous encounters with information (Bago, Rand and Pennycook 2023). It does not, conversely, increase the extent to which one evaluates information for social benefits – by e.g. better allowing one to discredit the evidence if it is not congenial to one’s identity and partisan commitments. As such, deliberation magnifies differences based on prior beliefs but not differences based on identity, even on a highly politicized topic such as climate change where identity-protective incentives surely exist. The fact that this is what more reasoning does to people’s evaluation of information indicates that partisans are not seeking to protect their identities when they give diverging information ratings. Rather, it suggests that partisan epistemology (as defined at the beginning of this paper) is often explained by people with different prior beliefs making a good-faith effort to form accurate beliefs.

Given this, if the influence of co-partisans’ beliefs on an individual’s beliefs stems from the perception that co-partisans have the epistemically significant traits of competence and benevolence, then the strength of this influence should rise with increased trust in co-partisans. This, too, is what we observe (Bolsen, Druckman and Cook 2014), indicating that aligning political beliefs with group identity (rather than the other way around) is consistent with accuracy motivation. Further limits on partisan deference also point to this conclusion. Afrouzi et al. (2023) find that the source persuasion effect – the added persuasive effect of a specific source delivering a particular message, over and above the substance of the message – obtains only in the specific and unusual case in which a party leader delivers a message against party lines to members of his own party. Jakesch et al. (2022) show that implausible headlines were unlikely to be believed even if attributed to co-partisan sources, whereas plausible headlines were readily believed even if attributed to mistrusted sources. People, then, are not persuaded (of misinformation) through simple in-group cues. In fact, party cues are unlikely to move a citizen’s position on an issue she considers important (Barber and Pope 2023). People mainly rely on such cues when they have no strong prior opinion on an issue. This demonstrates the effectiveness of trust mechanisms: when you have developed trust in an entity over an extended period, it is logical to rely on its guidance regarding matters where your own expertise is limited (Mercier 2020). Since individuals’ issue positions can be resilient in the face of party cues – which are mainly used in surprising scenarios and when substantive considerations give out – party cues are, I propose, best seen as instruments for accuracy.

All in all, voters often seem to have coherent systems of political commitments, and the frameworks structuring these systems (social group identities) appear to be epistemically justified in principle and compatible with accuracy concerns. Advocates of identity-protective cognition have, by contrast, argued that we believe in order to belong. De Cruz (2020, 441), for instance, claims that ‘social belonging can explain why people sometimes accept testimony even if they deem the source inaccurate’. But first, the literature on cognitive agency suggests we cannot willingly accept testimony we believe to be inaccurate (van Doorn 2024). And second, if that were a dominant mechanism, we should observe different experimental results. Namely, deliberation should move one’s initial assessment of testimony to a position that is more identity-protective, as people override their fast intuitive appraisal to optimize for social belonging rather than accuracy. We should then also observe that the impact of persuasive messaging is diminished by opposing signals from party leaders, as exposure to these cues triggers party identification and loyalty, eliciting an emotional response and a directional drive to align with the party’s stance. But that is not what studies find. The idea that partisan epistemology is driven by social concerns also fails to explain why there appear to be significant evidential constraints on what partisans can convince themselves of, and why they use their cognitive resources to increase accuracy rather than in-group belief coherence.[5] Accordingly, accuracy-based accounts provide a superior explanation of a wide range of relevant findings.

3. Can This Explain Belief in Misinformation?

It is often thought that non-accuracy accounts of partisan epistemology provide a superior explanation of patterns of misinformation belief. For instance, a considerable number of Republican voters may believe that stricter gun laws are ineffective largely because they trust the narratives provided by Republican leaders who argue that such measures do not reduce crime. Similarly, those who reject the benefits of universal healthcare might do so due to their trust in Republican viewpoints and skepticism towards the Democratic politicians and policy experts advocating for it. The theory that sees partisan epistemology as motivated irrationality can readily account for such results: it is easy to explain why a mode of reasoning yields false outputs if you believe its existence serves truth-independent goals such as social belonging. By contrast, accuracy-based explanations of partisan epistemology would seem to have a harder time explaining these outcomes.

Nevertheless, the fact that partisan epistemology sometimes leads to suboptimal credences is consistent with an accuracy-based explanation of why people sometimes engage in it. Accuracy-oriented does not mean infallible. But, of course, the more inaccurate outcomes partisan epistemology outputs, the less plausible accuracy-based accounts of it become. As such, I have two aims in this section. I first argue that the misinformation-explanandum is smaller than it seems and then show how the accuracy-based partisan epistemology account can go a long way towards meeting it.

How many misperceptions are due to partisan epistemology? Although there appear to be numerous incorrect (and occasionally strange) beliefs, they are rarer than often perceived. Substantial evidence suggests that many reports of false partisan beliefs are not genuinely held (Altay, Hacquin and Mercier 2020; Bullock et al. 2015; Hannon 2021; Mercier 2020; Schaffner and Luks 2018). These reports are frequently not genuine expressions of beliefs but rather manifestations of an alternative motive presented as a belief statement. For example, perceptions regarding the state of the economy correlate with the president’s party affiliation: when the president is from the opposition party, partisans assert that the economy is faring worse than when their own party is in power. But their economic conduct does not seem to mirror their reported perceptions (Bullock and Lenz 2019). This suggests that partisans’ stated economic ‘perceptions’ do not resemble actual beliefs (cf. Hannon and De Ridder 2021; Levy 2022). Similarly, Schaffner and Luks (2018) found that Republicans were significantly more inclined than Democrats to claim that an unmarked photograph of the Trump inauguration depicted a larger crowd than a similar photo of the Obama inauguration. Yet it seems implausible that they genuinely believed the first photo showed a larger crowd (as it very obviously does not). Their answers thus likely reflect political support rather than genuine belief. Beyond merely expressing support, people also misreport beliefs for trolling, or because they find them amusing, outrageous or ‘edgy’ (Lopez and Hillygus 2018).

Accordingly, incentives to report opinions accurately diminish the partisan disparity in responses to such factual questions (Prior, Sood and Khanna 2015). Bullock et al. (2015), for example, find that small payments for correct answers reduce the gap between Democrats and Republicans in responses to factual questions by at least 60%. Similarly, while supporters of the incumbent president’s party typically report better economic conditions than opponents, offering financial incentives for accurate responses significantly reduces this tendency (Prior et al. 2015).

If these survey responses truly represented genuine beliefs, offering financial incentives to partisans for correct answers would not alter their responses.[6] However, it does influence them. This suggests that partisans ‘do not hold starkly different beliefs about many important facts’ (Bullock et al. 2015, 522). It also reveals that partisans have the capacity to acknowledge inconvenient truths and can be willing to report them (Anglin 2019). Since such survey responses mask shared, bipartisan beliefs about factual matters, they do not reveal misinformation or political disagreement: because many people report believing bizarre claims without actually believing them, survey evidence indicating that false beliefs are widely held should not be taken at face value. In sum, coalitional motives might play a role in the spread of misinformation, but where misinformation is not actually believed, there are no corresponding misperceptions we can ascribe to those motives.

One might feel there’s a tension here. On the one hand, I’m arguing that partisan reports of misperceptions are not strong evidence that partisan epistemology leads to false beliefs, because such reports are often aimed at expressing support for one side of politics. Yet if that’s the explanation, doesn’t that – on the other hand – indicate that partisan epistemology is not motivated by accuracy after all? The tension is merely apparent. The aim here is to show that the explanandum of partisan misperceptions is smaller than it may seem. That expressions of such misperceptions are often instances of ‘the public misinforming us’ (Hannon 2021, 10), which may be motivated by social goals, does nothing to establish that partisan epistemology (as I’ve defined it in this paper), when it does lead to actual beliefs, is likewise so motivated.

I now turn to the second aim of this section. There are still at least some misperceptions out there, and accuracy-oriented accounts of partisan epistemology might still seem hard to reconcile with them. Some are orthogonal to partisanship – better seen as a consequence of other personal characteristics (Petersen, Osmundsen and Arceneaux 2023). Others, by contrast, could be a consequence of partisan epistemology. That does not immediately mean, however, that the reasoning in question was motivated by, say, social belonging. When people believe misinformation on the basis of co-partisan testimony, the specific application of this method could also involve a rational error rather than a truth-independent motivation. In these cases, co-partisanship likely plays a role in filtering testimony that it shouldn’t play.

A perceptive person might think a co-partisan is likely to be more competent and benevolent towards her; as a result, she might somewhat raise her evaluation of the co-partisan’s testimony and somewhat lower her estimation of an anti-partisan’s testimony. However, this inclination can become excessively generalized (Millar 2023). For instance, some individuals may place such a high value on benevolence that they consider co-partisanship essential for accepting any testimony, thereby dismissing the input of anti-partisans irrespective of their actual expertise. Others might overestimate the significance of co-partisanship as a marker of competence, especially in relation to intricate technological or medical topics, leading to skewed perceptions of reliability. This undeniably sometimes happens. But that partisan epistemology is sometimes used by reasoners who are not discerning enough (Millar 2023) is compatible with my thesis about why they employ it. And since the extent and depth of political disagreement is likely overstated (Hannon 2021), we should not, I believe, infer from this that partisanship is thoroughly misshaping our perceptions of reality.

Even in cases where partisan epistemology actually does lead to accepting falsehoods, accuracy-based accounts have more explanatory power than it might seem. Accepting falsehoods within one’s model of political reality can, for starters, coexist with, and even enhance, one’s understanding of that reality (Lepoutre 2023). Consider, for instance, the belief among some rural residents that they receive fewer tax dollars per capita than urban areas. Although Cramer (2016, 90–93) notes this is factually incorrect – rural areas receive roughly the same amount of tax dollars per capita as urban areas – the falsehood may nevertheless help represent a complex underlying reality. Rural sociologists point out that converting tax dollars into public goods is more challenging in rural areas for several reasons. First, due to economies of scale, it costs more per person to provide services like broadband and education in less populated areas (Johnson et al. 1995, 386). Second, many rural areas experience a ‘brain drain’ in which educated individuals move to urban areas, leaving a shortage of experienced public planners in rural communities. This lack of expertise means rural areas often struggle to implement tax-funded policies as effectively as urban areas, leading to a perception of receiving fewer resources despite similar per capita funding (Dewees, Lobao and Swanson 2003, 195). Thus, the misconception highlights a real distributive inequality that rural regions experience in comparison to urban areas. In such instances, falsehoods can help illuminate significant aspects of the political landscape. So even though partisan epistemology likely played a role in rural residents coming to hold this misperception, that is not evidence that they were not seeking to get things right.

A second consideration is this. Truth-seeking Bayesian updating does not prescribe when to accept a belief as true. Different decision rules or thresholds might be used for deciding when enough evidence has been accumulated. Purely evidential considerations underdetermine what we ought to believe until they receive pragmatic supplementation (Owens 2002). The upshot of this underdetermination is that when expert A testifies that p is true, this tells us not only (hopefully) something about p; it also reveals that, given A’s epistemic standard for assertion, she takes her evidence for p to be strong enough to warrant asserting its truth. But, as philosophers of science are increasingly pointing out, non-epistemic values (such as moral and political concerns) have a role to play in setting scientists’ epistemic standards for assertion. These normative values influence the choice of decision rules or criteria for determining when sufficient evidence has been gathered and belief and assertion are justified. Now, if B’s epistemic threshold for asserting q is shaped in part by B’s normative judgments, then normative considerations become epistemically pertinent to evaluating B’s resulting testimony. Hence, if a listener X disagrees with B’s normative judgments, this gives X a defeasible reason for doubting the epistemic reliability of B’s testimony (Lepoutre 2020). And since partisan affiliation is a reliable indicator of broad categories of normative commitments, it is a proxy for this form of competence.
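The threshold point can be put in simple decision-theoretic terms. The following is a minimal illustrative sketch of my own, not drawn from Owens or Lepoutre: an expert updates on evidence E by Bayes’ rule, but whether she asserts p depends on a threshold τ that encodes the relative costs she attaches to erroneous assertion and erroneous silence.

```latex
% Posterior on p after evidence E (Bayes' rule):
P(p \mid E) = \frac{P(E \mid p)\, P(p)}
                   {P(E \mid p)\, P(p) + P(E \mid \neg p)\, P(\neg p)}

% Decision rule: assert p iff the posterior clears a value-laden threshold.
% With c_fp the cost of wrongly asserting p and c_fn the cost of wrongly
% withholding it, minimizing expected cost yields:
\text{assert } p \quad\Longleftrightarrow\quad
P(p \mid E) \;\ge\; \tau = \frac{c_{\mathrm{fp}}}{c_{\mathrm{fp}} + c_{\mathrm{fn}}}
```

Since the costs c_fp and c_fn are partly normative (how bad would it be to err in each direction?), two experts with identical evidence can differ in what they are willing to assert. A hearer who rejects an expert’s cost assignments therefore has a defeasible reason to discount that expert’s testimony, even while treating her as a competent Bayesian updater.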

For example, consider someone, X, from a rural area with few jobs, who believes job creation in such areas should be a priority. If environmental regulations threaten major industries and jobs in these areas, X might require stronger evidence to accept climate change claims. This heightened epistemic standard makes sense given her normative concerns about the perceived economic risks posed by these regulations. If a scientist, B, is less aware of the challenges faced by remote municipalities – perhaps due to differing life experiences – X has grounds to suspect that, because of this ethical oversight, B may not maintain adequate epistemic standards. Thus, if B holds normative views that X considers flawed, this provides a defeasible reason for X to believe that B is applying inappropriate epistemic criteria (Lepoutre 2020). This is another way in which partisan epistemology could lead to false beliefs even when one is motivated by accuracy.

A third and final mechanism by which accuracy motivation can yield false beliefs through partisan epistemology applies in situations where a co-partisan makes a pressing claim on your scarce cognitive resources (Rini 2017). When an individual shares a politically charged story on social media, this act of distributing political news involves normative decisions on the part of the person conveying the information – often decisions about what is normatively important, about what is worth one’s limited cognitive resources in times when we are all perpetually reasoning in a rush. When someone aligned with your normative values shares a story, their appeal is not just about the content; it is also an implicit claim about the story’s normative importance. The fact that someone who, I believe, gets questions of normative importance right claims a story to be important and worth my scarce cognitive resources can lead me to override initial skepticism about stories that seem implausible, thereby explaining some instances of partisan misbelief. Given that a co-partisan is likely to share my normative views, when she shares a politically relevant story that appears dubious, the credibility we typically extend to those within our political group might override our reservations about distorted testimony. In this way, partisan epistemology can lead to the acceptance of false narratives even if one is motivated by accuracy and tries to rationally manage one’s scarce cognitive resources.

In sum, there are mechanisms by which accuracy-motivated partisan epistemology can go awry. Their potential to explain false beliefs is of course limited (otherwise it would no longer make sense to say that they stem from accuracy motivations). But since, as argued, the misinformation documented by survey researchers is likely not an accurate reflection of what individuals believe, I’ve outlined why the theory of accuracy-based partisan epistemology does not seem to have much difficulty accounting for it.

4. Conclusion

Many worry about the number of people who allow partisan loyalty to shape their beliefs at the cost of a genuine commitment to truthfulness. As Vox journalist David Roberts (n.d.) for instance put it, today ‘information is evaluated based not on conformity to common standards of evidence or correspondence to a common understanding of the world, but on whether it supports the tribe’s values and goals and is vouchsafed by tribal leaders’. In this paper, I’ve argued that the relationship between partisan epistemology and truthfulness is more complex than such perspectives suggest.

While the motivated irrationality explanation attributes misinformation and polarization to partisanship causing severely biased information processing, I’ve shown how partisanship in testimony can stem from accuracy motivation more often than is usually recognized.[7] I’ve further argued that this account provides a better explanation of the empirical data than the idea that partisan epistemology merely serves tribal goals and disregards accuracy. (For example, if partisan epistemology were driven by social benefits, we would not expect the observed limitations on the influence of party loyalty and motivation on information processing.) Instead of viewing partisan epistemology as a form of politically motivated reasoning that inherently hinders truth, it should be acknowledged as a potential means to reinforce accuracy. As Sleat (2023, 19) points out, ‘in this regard, Keller (2007, 6–7) is surely wrong to assume that demonstrating “loyalty-in-belief”, as he puts it, means to hold false beliefs or to employ lax epistemic standards to how one assesses evidence or new information’.

As a first upshot, this suggests that (partisan) identity concerns might be an overestimated explanans of why people believe what they do. It’s commonly claimed, for example, that ‘motivated reasoning occurs when we reason differently about evidence that supports our prior beliefs than when it contradicts those beliefs’ (Caddick and Feist 2022, 428). But, as I argue in van Doorn (2023), this inference is erroneous. We should be similarly wary of inferences of motivated reasoning based on an analogous asymmetry in reasoning about testimony from sources that do versus do not share one’s normative values.

When I say partisan epistemology can be driven by a pursuit of accuracy rather than merely serving tribal goals, that is not to say that it always is – just that the possibility is often not given its due. Future work could test the boundaries of reconciling partisan epistemology with accuracy motivation by exploring the tipping point at which reliance on partisan cues shifts from rational evaluation to uncritical acceptance of misinformation. Such research would explore how partisans balance trust in co-partisan sources against a commitment to accuracy, especially in scenarios where partisan cues might lead to erroneous beliefs. When co-partisanship plays a role in filtering testimony that it shouldn’t play, is that due to a lack of discernment, or is it due to non-accuracy motivations? Since this resembles the observational equivalence problem in motivated reasoning research (Druckman and McGrath 2019), developing a finer-grained normative framework for assessing partisan epistemology than its common dismissal as motivated irrationality seems a fruitful avenue for making progress. Given what’s happening in democracies around the globe, partisan epistemology raises the question of how, why and under what conditions it turns into a pathological mode of reasoning driven by tribalism. But conceptualizing it as such from the get-go will at least sometimes, and perhaps often, obscure the truth and hinder the production of effective misinformation interventions.

I’ve also argued that the extent to which data on misinformation suggest that partisan epistemology is primarily driven by truth-independent goals is smaller than often assumed. This, in turn, indicates that, in the fight against misinformation, imperatives like ‘don’t be biased’ or ‘be accurate’ won’t prevent partisan-epistemology-based misperceptions, since most people harboring such false beliefs got there while trying to be accurate in the first place (assuming their reported misperception is sincere). So then what? Experiments seeking to prevent biased assimilation (Lord, Ross and Lepper 1979) suggest the critical thing is to get people to scrutinize both sides equally well (Liu 2017; Schuette and Fazio 1995). Whether and how this can be applied to misinformation interventions, I leave to further research.

Acknowledgments

I’d like to thank Simon Rippon and members of the CEU Epistemology of Democracy seminar for helpful discussions.

Disclosure Statement

No potential conflict of interest was reported by the author(s).

Additional information

Notes on contributors

Maarten van Doorn

Maarten van Doorn’s work combines empirical studies – focusing on aspects such as biases in our seemingly fallible thinking, misinformation and motivated reasoning – with philosophical arguments about rationality and social epistemology. After the successful publication of his Dutch popular science book (Waarom we beter denken dan we denken), which was nominated for the Socratesbeker 2024, he’s looking to further develop his writing and research, possibly outside academia.

Notes

1. For example, differences in exposure to trusted versus non-trusted sources probably also play a role. Thanks to an anonymous reviewer for pointing this out.

2. For the formulation of these views in terms of ‘being right’ versus ‘fitting in’, see Levy (2021, 81).

3. Indeed, it could be suggested from an evolutionary perspective that humans are equipped with cognitive processes that enable them to embrace information that benefits them and dismiss influences that are detrimental to their knowledge (Mercier 2020).

4. This viewpoint also offers a compelling justification for voters to be represented by individuals from their own social groups and to endorse a specific political party, especially if their group is predominantly associated with that party. And so ‘the fact that members of one’s social group disproportionately belong to or support party X might well constitute a pro tanto epistemic reason to support X’ (Lepoutre 2020, 50).

5. Of course, proponents of motivated reasoning may argue that motivation only sways the outcome in ambiguous cases (e.g. Dunning and Balcetis 2013; Sharot and Garrett 2016), but these are precisely the situations where the evidence can go either way. More generally, ‘it seems unlikely that people possess an extra belief process which permits wishful thinking, but that it only operates in the least convincing cases, where there is genuine ambiguity’ (Sommer, Musolino and Hemmer 2023, 30).

6. Assuming subjects don’t wrongly imagine the experimenters themselves to be misinformed, so that they conclude they need to lie to get paid. Thanks to an anonymous reviewer for pointing this out.

7. Of course, in practice, both social and epistemic motivations likely contribute to each specific instance of accepting testimony. However, it remains an open and pertinent question as to whether social or epistemic factors predominantly moderate the impact of communicated information on audience beliefs and under what circumstances these influences are normatively justified. For instance, understanding which mechanism – social or epistemic – primarily shapes partisan beliefs is crucial for devising effective strategies in science communication.

References

  • Achen, C., and L. Bartels. 2016. “Partisan Hearts and Spleens: Social Identities and Political Change.” In Democracy for Realists, 232–266. Princeton: Princeton University Press.
  • Afrouzi, H., C. Arteaga, and E. K. Weisburst. 2023. “Is it the Message or the Messenger? Examining Movement in Immigration Beliefs.” Working Paper 31385. National Bureau of Economic Research. https://doi.org/10.3386/w31385.
  • Altay, S., A.-S. Hacquin, and H. Mercier. 2020. “Why Do So Few People Share Fake News? It Hurts their Reputation.” New Media & Society 24 (6): 1303–1324. https://doi.org/10.1177/1461444820969893.
  • Anglin, S. M. 2019. “Do Beliefs Yield to Evidence? Examining Belief Perseverance Vs. Change in Response to Congruent Empirical Findings.” Journal of Experimental Social Psychology 82: 176–199. https://doi.org/10.1016/j.jesp.2019.02.004.
  • Bago, B., D. Rand, and G. Pennycook. 2023. “Reasoning About Climate Change.” Proceedings of the National Academy of Sciences Nexus 2 (5): pgad100. https://doi.org/10.31234/osf.io/vcpkb.
  • Barber, M., and J. C. Pope. 2023. “Does Issue Importance Attenuate Partisan Cue-Taking?” Political Science Research and Methods 1–9. https://doi.org/10.1017/psrm.2023.28.
  • Bolsen, T., J. N. Druckman, and F. L. Cook. 2014. “The Influence of Partisan Motivated Reasoning on Public Opinion.” Political Behavior 36 (2): 235–262. https://doi.org/10.1007/s11109-013-9238-0.
  • Bullock, J. G. 2009. “Partisan Bias and the Bayesian Ideal in the Study of Public Opinion.” The Journal of Politics 71 (3): 1109–1124. https://doi.org/10.1017/S0022381609090914.
  • Bullock, J. G., A. S. Gerber, S. J. Hill, and G. A. Huber. 2015. “Partisan Bias in Factual Beliefs About Politics.” Quarterly Journal of Political Science 10 (4): 519–578. https://doi.org/10.1561/100.00014074.
  • Bullock, J. G., and G. Lenz. 2019. “Partisan Bias in Surveys.” Annual Review of Political Science 22 (1): 325–342. https://doi.org/10.1146/annurev-polisci-051117-050904.
  • Caddick, Z. A., and G. J. Feist. 2022. “When Beliefs and Evidence Collide: Psychological and Ideological Predictors of Motivated Reasoning About Climate Change.” Thinking & Reasoning 28 (3): 428–464. https://doi.org/10.1080/13546783.2021.1994009.
  • Chambers, S. 2018. “Human Life is Group Life: Deliberative Democracy for Realists.” Critical Review 30 (1–2): 36–48. https://doi.org/10.1080/08913811.2018.1466852.
  • Converse, P. E. 1964. “The Nature of Belief Systems in Mass Publics (1964).” Critical Review 18 (1–3): 1–74. https://doi.org/10.1080/08913810608443650.
  • Cramer, K. J. 2016. The Politics of Resentment: Rural Consciousness in Wisconsin and the Rise of Scott Walker. Chicago: University of Chicago Press.
  • De Cruz, H. 2020. “Believing to Belong: Addressing the Novice-Expert Problem in Polarized Scientific Communication.” Social Epistemology 34 (5): 440–452. https://doi.org/10.1080/02691728.2020.1739778.
  • Dewees, S., L. Lobao, and L. E. Swanson. 2003. “Local Economic Development in an age of Devolution: The Question of Rural Localities.” Rural Sociology 68 (2): 182–206. https://doi.org/10.1111/j.1549-0831.2003.tb00134.x.
  • Ditto, P. H., B. S. Liu, C. J. Clark, S. P. Wojcik, E. E. Chen, R. H. Grady, J. B. Celniker, and J. F. Zinger. 2019. “At Least Bias Is Bipartisan: A Meta-Analytic Comparison of Partisan Bias in Liberals and Conservatives.” Perspectives on Psychological Science 14 (2): 273–291. https://doi.org/10.1177/1745691617746796.
  • Dorst, K. 2023. “Rational Polarization.” The Philosophical Review 132 (3): 355–458. https://doi.org/10.1215/00318108-10469499.
  • Druckman, J. N., and M. C. McGrath. 2019. “The Evidence for Motivated Reasoning in Climate Change Preference Formation.” Nature Climate Change 9 (2): 111–119. https://doi.org/10.1038/s41558-018-0360-1.
  • Dunning, D., and E. Balcetis. 2013. “Wishful Seeing: How Preferences Shape Visual Perception.” Current Directions in Psychological Science 22 (1): 33–37. https://doi.org/10.1177/0963721412463693.
  • Finkel, E. J., C. A. Bail, M. Cikara, P. H. Ditto, S. Iyengar, S. Klar, L. Mason, et al. 2020. “Political Sectarianism in America.” Science 370 (6516): 533–536. https://doi.org/10.1126/science.abe1715.
  • Fowler, A., and W. G. Howell. 2023. “Updating Amidst Disagreement: New Experimental Evidence on Partisan Cues.” Public Opinion Quarterly 87 (1): 24–43. https://doi.org/10.1093/poq/nfac053.
  • Graham, M. 2023. “Measuring Misperceptions?” American Political Science Review 117 (1): 80–102. https://doi.org/10.1017/S0003055422000387.
  • Hanel, P. H., U. Wolfradt, G. R. Maio, and A. S. Manstead. 2018. “The Source Attribution Effect: Demonstrating Pernicious Disagreement Between Ideological Groups on Non-Divisive Aphorisms.” Journal of Experimental Social Psychology 79: 51–63. https://doi.org/10.1016/j.jesp.2018.07.002.
  • Hannon, M. 2021. “Disagreement or Badmouthing? The Role of Expressive Discourse in Politics.” In Political Epistemology, edited by E. Edenberg and M. Hannon, 297–318. Oxford: Oxford University Press. https://doi.org/10.1093/oso/9780192893338.003.0017.
  • Hannon, M. 2023. “Public Discourse and its Problems.” Politics, Philosophy & Economics 22 (3): 336–356. https://doi.org/10.1177/1470594X221100578.
  • Hannon, M., and J. De Ridder. 2021. “The Point of Political Belief.” In The Routledge Handbook of Political Epistemology, edited by M. Hannon and J. De Ridder, 156–166. New York: Routledge. https://doi.org/10.4324/9780429326769-19.
  • Hill, S. J., and G. A. Huber. 2019. “On the Meaning of Survey Reports of Roll‐Call ‘Votes’.” American Journal of Political Science 63 (3): 611–625. https://doi.org/10.1111/ajps.12430.
  • Jakesch, M., M. Naaman, and M. Michael. 2022. “Belief in Partisan News Depends on Favorable Content More Than on a Trusted Source.” https://psyarxiv.com/tex3n/download?format=pdf.
  • Johnson, K. M., J. P. Pelissero, D. B. Holian, and M. T. Maly. 1995. “Local Government Fiscal Burden in Nonmetropolitan America.” Rural Sociology 60 (3): 381–398. https://doi.org/10.1111/j.1549-0831.1995.tb00579.x.
  • Joshi, H. 2020. “What Are the Chances You’re Right About Everything? An Epistemic Challenge for Modern Partisanship.” Politics, Philosophy & Economics 19 (1): 36–61. https://doi.org/10.1177/1470594X20901346.
  • Kahan, D. M. 2017. “Misconceptions, Misinformation, and the Logic of Identity-Protective Cognition.” SSRN Electronic Journal. https://doi.org/10.2139/ssrn.2973067.
  • Kahan, D. M., H. Jenkins-Smith, and D. Braman. 2011. “Cultural Cognition of Scientific Consensus.” Journal of Risk Research 14 (2): 147–174. https://doi.org/10.1080/13669877.2010.511246.
  • Kahan, D. M., E. Peters, M. Wittlin, P. Slovic, L. L. Ouellette, D. Braman, and G. N. Mandel. 2012. “The Polarizing Impact of Science Literacy and Numeracy on Perceived Climate Change Risks.” Nature Climate Change 2 (10): 732–735. https://doi.org/10.1038/nclimate1547.
  • Keller, S. 2007. The Limits of Loyalty. Cambridge, UK: Cambridge University Press.
  • Köymen, B., and C. Dutilh Novaes. Forthcoming. “Reasoning and Trust: A Developmental Perspective.” In Why and How We Give and Ask for Reasons, edited by P. Stovall and L. Koreň. Oxford: Oxford University Press.
  • Leeper, T. J., and R. Slothuus. 2014. “Political Parties, Motivated Reasoning, and Public Opinion Formation.” Political Psychology 35 (S1): 129–156. https://doi.org/10.1111/pops.12164.
  • Lenz, G. S. 2013. Follow the Leader?: How Voters Respond to Politicians’ Policies and Performance. Chicago: University of Chicago Press.
  • Lepoutre, M. 2020. “Democratic Group Cognition.” Philosophy & Public Affairs 48 (1): 40–78. https://doi.org/10.1111/papa.12157.
  • Lepoutre, M. 2023. “Political Understanding.” British Journal of Political Science 53 (2): 346–365. https://doi.org/10.1017/S0007123422000023.
  • Levendusky, M. 2009. The Partisan Sort: How Liberals Became Democrats and Conservatives Became Republicans. Chicago: University of Chicago Press.
  • Levy, N. 2019. “Due Deference to Denialism: Explaining Ordinary People’s Rejection of Established Scientific Findings.” Synthese 196 (1): 313–327. https://doi.org/10.1007/s11229-017-1477-x.
  • Levy, N. 2021. Bad Beliefs: Why They Happen to Good People. Oxford: Oxford University Press.
  • Levy, N. 2022. “Conspiracy Theories As Serious Play.” Philosophical Topics 50 (2): 1–20. https://doi.org/10.5840/philtopics202250214.
  • Levy, N. 2023a. “Response to Commentators.” Philosophical Psychology 1–14. https://doi.org/10.1080/09515089.2023.2188051.
  • Levy, N. 2023b. “Too Humble for Words.” Philosophical Studies 180 (10–11): 3141–3160. https://doi.org/10.1007/s11098-023-02031-4.
  • Liu, C.-H. 2017. “Evaluating Arguments During Instigations of Defence Motivation and Accuracy Motivation.” British Journal of Psychology 108 (2): 296–317. https://doi.org/10.1111/bjop.12196.
  • Lopez, J., and D. S. Hillygus. 2018. “Why So Serious?: Survey Trolls and Misinformation.” SSRN Electronic Journal. https://doi.org/10.2139/ssrn.3131087.
  • Lord, C. G., L. Ross, and M. R. Lepper. 1979. “Biased Assimilation and Attitude Polarization: The Effects of Prior Theories on Subsequently Considered Evidence.” Journal of Personality and Social Psychology 37 (11): 2098–2109. https://doi.org/10.1037/0022-3514.37.11.2098.
  • Marietta, M., and D. C. Barker. 2019. One Nation, Two Realities: Dueling Facts in American Democracy. Oxford: Oxford University Press.
  • Marks, J., E. Copland, E. Loh, C. R. Sunstein, and T. Sharot. 2019. “Epistemic Spillovers: Learning Others’ Political Views Reduces the Ability to Assess and Use Their Expertise in Nonpolitical Domains.” Cognition 188: 74–84. https://doi.org/10.1016/j.cognition.2018.10.003.
  • Mercier, H. 2017. “How Gullible are We? A Review of the Evidence from Psychology and Social Science.” Review of General Psychology 21 (2): 103–122. https://doi.org/10.1037/gpr0000111.
  • Mercier, H. 2020. Not Born Yesterday: The Science of Who We Trust and What We Believe. Princeton: Princeton University Press.
  • Mercier, H., and D. Sperber. 2017. The Enigma of Reason. Cambridge, MA: Harvard University Press.
  • Millar, B. 2023. “Partisan Epistemology and Misplaced Trust.” Episteme, 1–21. https://doi.org/10.1017/epi.2023.46.
  • Owens, D. 2002. Reason without Freedom. New York: Routledge. https://doi.org/10.4324/9780203464601.
  • Pennycook, G., and D. G. Rand. 2022. “Nudging Social Media Toward Accuracy.” The ANNALS of the American Academy of Political and Social Science 700 (1): 152–164. https://doi.org/10.1177/00027162221092342.
  • Petersen, M. B., M. Osmundsen, and K. Arceneaux. 2023. “The ‘Need for Chaos’ and Motivations to Share Hostile Political Rumors.” American Political Science Review 117 (4): 1486–1505. https://doi.org/10.1017/S0003055422001447.
  • Prior, M., G. Sood, and K. Khanna. 2015. “You Cannot Be Serious: The Impact of Accuracy Incentives on Partisan Bias in Reports of Economic Perceptions.” Quarterly Journal of Political Science 10 (4): 489–518.
  • Rini, R. 2017. “Fake News and Partisan Epistemology.” Kennedy Institute of Ethics Journal 27 (2S): E-43–E-64. https://doi.org/10.1353/ken.2017.0025.
  • Roberts, D. 2017. “Donald Trump and the Rise of Tribal Epistemology.” Vox. Accessed June 4, 2021. https://www.vox.com/policy-and-politics/2017/3/22/14762030/donald-trump-tribal-epistemology.
  • Schaffner, B. F., and S. Luks. 2018. “Misinformation or Expressive Responding? What an Inauguration Crowd Can Tell Us About the Source of Political Misinformation in Surveys.” Public Opinion Quarterly 82 (1): 135–147. https://doi.org/10.1093/poq/nfx042.
  • Schuette, R. A., and R. H. Fazio. 1995. “Attitude Accessibility and Motivation as Determinants of Biased Processing: A Test of the MODE Model.” Personality and Social Psychology Bulletin 21 (7): 704–710. https://doi.org/10.1177/0146167295217005.
  • Sharot, T., and N. Garrett. 2016. “Forming Beliefs: Why Valence Matters.” Trends in Cognitive Sciences 20 (1): 25–33. https://doi.org/10.1016/j.tics.2015.11.002.
  • Sleat, M. 2023. “Truth and Loyalty.” Political Theory. https://doi.org/10.1177/00905917231204892.
  • Sommer, J., J. Musolino, and P. Hemmer. 2023. “Updating, Evidence Evaluation, and Operator Availability: A Theoretical Framework for Understanding Belief.” Psychological Review. https://doi.org/10.1037/rev0000444.
  • Sperber, D., F. Clément, C. Heintz, O. Mascaro, H. Mercier, G. Origgi, and D. Wilson. 2010. “Epistemic Vigilance.” Mind & Language 25 (4): 359–393. https://doi.org/10.1111/j.1468-0017.2010.01394.x.
  • Tappin, B. M., A. Berinsky, and D. Rand. 2023. “Partisans’ Receptivity to Persuasive Messaging is Undiminished by Countervailing Party Leader Cues.” Nature Human Behaviour 7 (4): 568–582. https://doi.org/10.1038/s41562-023-01551-7.
  • Tappin, B. M., G. Pennycook, and D. G. Rand. 2021. “Rethinking the Link Between Cognitive Sophistication and Politically Motivated Reasoning.” Journal of Experimental Psychology: General 150 (6): 1095–1114. https://doi.org/10.1037/xge0000974.
  • van Doorn, M. 2023. “The Skeptical Import of Motivated Reasoning: A Closer Look at the Evidence.” Thinking & Reasoning 1–31. https://doi.org/10.1080/13546783.2023.2276975.
  • van Doorn, M. 2024. “Misinformation, Observational Equivalence, and the Possibility of Rationality.” Philosophical Psychology 1–31. https://doi.org/10.1080/09515089.2024.2358089.
  • Woodard, E. Forthcoming. “What’s Wrong with Partisan Deference?” In Oxford Studies in Epistemology, edited by A. Worsnip, J. Chung, and T. S. Gendler. Vol. 8. Oxford: Oxford University Press.
  • Worsnip, A. 2022. “Review of Bad Beliefs: Why They Happen to Good People.” Notre Dame Philosophical Reviews. https://ndpr.nd.edu/reviews/bad-beliefs-why-they-happen-to-good-people/.
  • Young, I. M. 2002. Inclusion and Democracy. Oxford: Oxford University Press.