Continuum
Journal of Media & Cultural Studies
Volume 36, 2022 - Issue 3
Special Issue: Media and Fakery, Guest Editors: Celia Lam, Wyatt Moss-Wellington, and Filippo Gilardi

Overcoming ‘confirmation bias’ and the persistence of conspiratorial types of thinking

Andrew White

ABSTRACT

Discussions on media and fakery are usually premised on the general public being manipulated by mainstream media bias or fabrications emanating from the Internet. It is less common in the discipline of media and communication to speculate about users’ reasons for accepting what appear to be basic untruths: I will suggest here that discussions about users’ complicity must become more central to our attempts to understand media and fakery. Rather than a simple rebuttal of the ‘facts’ or the promotion of big data methodologies, this paper will suggest that deploying convincing counter-narratives is a more effective means of persuading those we suspect of being susceptible to confirmation bias and conspiratorial types of thinking that there are better ways of understanding contemporary politics.

Introduction

As the rise of QAnon in the past few years has demonstrated, conspiracy theories are flourishing seemingly as never before and are having a profound impact on our politics and everyday lives. As will be demonstrated in the first part of this paper, the most rational approach to counteracting conspiracy theories is to employ media literacy (Boyd 2018). This approach focuses on encouraging individuals to develop critical tools to select and analyse various forms of media, to produce their own content and to communicate in a self-reflective manner (Boyd 2018; Wallis and Buckingham 2019, 190–191). With a focus on the USA in particular, this paper will begin with a discussion of the elements of the media literacy approach which appear to be effective in combating conspiracy theories, alongside the identification of areas where it appears to have less traction. In these latter cases, the paper will explain why they are largely impervious to an approach that relies solely on media literacy, before ending with some suggestions of how to build a multi-perspectival strategy to mitigate the spread and impact of conspiracy theories.

Methodology

This paper adopts a normative critique of the utility of media literacy in combatting online conspiracy theories. The contemporary political situation in the United States was selected as a case study because of the commonly held view that conspiracy theories like that of QAnon have a stranglehold on large sections of the electorate (LaFrance 2020). The case was also useful in that it highlighted the extent to which the normative model of media literacy is inadequate in explaining why these conspiracy theories continue to persist in a pluralistic media environment where adherents are likely to be exposed to information challenging their beliefs. This widened my scope to a critical review of psychological, political and identitarian explanations of this current phenomenon as a means of developing my own normative approach to tackling the persistence of these views.

Media literacy and its shortcomings

In relation to online information, early media literacy strategies focused on the need to use authoritative sources of information, while also exploiting the abundance of digital resources to make sure that those sources and their associated content were as varied as possible (White 2014, 3–24). A focus on the source of online information is common to many formulations of the media literacy approach, which can incorporate not only the most basic evaluation of sources, but also a more in-depth understanding of media bias, which might include considering how commercial imperatives can make some sources less reliable than others (Barzilai and Chinn 2020, 111–112). This last point is consistent with media literacy approaches which focus as much on encouraging critique of the overall media ecosystem as they do on the evaluation of discrete information sources (Bolger and Davison 2018, 18).

There is some evidence that a media literacy approach can have some success in combating online falsehoods (Barzilai and Chinn 2020, 113). However, the scale of the current problem has led many academics and policy-makers to advocate a more aggressive approach. Part of this involves ‘nudging’ strategies to steer the user away from suspect information sources and towards a greater reliance on authoritative content. Nudging is a method associated with the work of behavioural theorists Thaler and Sunstein (2009), who argue that it can be used to help people make optimal decisions (Andi and Akesson 2021, 109). A nudge can come in many forms: a general warning to users to beware of inaccurate information online (Barzilai and Chinn 2020, 113; Hameleers 2020, 14), sometimes referred to as ‘prebunking’ (Kozyreva, Lewandowsky, and Hertwig 2020, 128); exposing users to authoritative news articles before they search for information about a particular subject online (van der Meer and Hameleers 2020); a technical alteration of the binary choices that users have to make (Kozyreva, Lewandowsky, and Hertwig 2020, 128); and self-nudging by, among other things, eschewing certain content or platforms (Kozyreva, Lewandowsky, and Hertwig 2020, 133). In experimental conditions at least, these types of strategies seem to be effective to a certain extent (Andi and Akesson 2021; Hameleers 2020; Mazarr et al. 2019; van der Meer and Hameleers 2020).
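As a concrete illustration of the first of these forms, the following minimal sketch (in Python; the topic list, warning wording and function names are entirely hypothetical and drawn from no cited study) shows how a ‘prebunking’ warning might be interposed before any search results are displayed:

```python
# A minimal sketch of a 'prebunking' nudge: a general warning is shown
# before any results when a query touches a topic where misinformation is
# known to circulate. Topic list, wording and names are illustrative only.
from typing import Optional

FLAGGED_TOPICS = {"vaccines", "election fraud", "5g"}  # hypothetical list

def prebunk_warning(query: str) -> Optional[str]:
    """Return a warning message if the query touches a flagged topic."""
    q = query.lower()
    if any(topic in q for topic in FLAGGED_TOPICS):
        return ("Note: inaccurate information about this topic circulates "
                "widely online; consider consulting authoritative sources.")
    return None

def search(query: str) -> None:
    warning = prebunk_warning(query)
    if warning:
        print(warning)  # the nudge appears before any results are rendered
    print(f"... results for {query!r} ...")

search("are 5g towers dangerous?")
```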

There is an admission, though, even among advocates of these approaches, that what participants do within the structured confines of an academic experiment might not transfer to their untethered searching for online information in their own time (Andi and Akesson 2021, 121). This is particularly the case where users spend a lot of time in informational echo-chambers on social media where their views are not challenged (Mazarr et al. 2019; Zollo 2019). Some sort of digital ethnography might bring us closer to answering this question, but we know enough about how being observed in social science experiments can moderate participants’ behaviour to assume that they are likely to behave differently outside the research lab.

Alongside this practical limitation of a media literacy approach is a more political concern that it places the onus on the user to evaluate information sources, rather than addressing the structural problems inherent in the online information ecosystem (Marwick 2018, 509). For this reason, many scholars recommend concrete measures to rectify these structural defects, rather than a focus on the modification of the information-seeking strategies of online users. Part of this structural problem centres on the quality of information and its plenitude. In relation to the former, the verisimilitude of much false information online makes it incredibly difficult for even the most literate of media users to distinguish between the authentic and inauthentic. This is particularly problematic in relation to so-called deep fake videos, whose fabrication is virtually impossible to detect (Alibasic and Rose 2019, 465; Kozyreva, Lewandowsky, and Hertwig 2020, 142). Exacerbating this problem is the deluge of information to which users, especially those in the most industrialized nations on which this paper primarily focuses, are exposed on a daily basis, which limits how much time can be spent trying to verify the authenticity of sources and their content. The most common and effective means of managing huge amounts of information is filtering. This, though, encourages people to congregate in online silos or echo-chambers, as the information that they filter out tends to be that which does not align with their existing worldviews (van der Meer and Hameleers 2020, 3). Powerful algorithms reinforce these tendencies by directing users to information tailored to their existing preferences.
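As a purely schematic illustration of this last point (a toy in Python, not any platform’s actual recommendation algorithm), ranking content by its overlap with a user’s prior engagement is already enough to push preference-confirming items to the top of a feed:

```python
# A toy illustration of how ranking content by overlap with a user's past
# engagement pushes preference-confirming items to the top of a feed,
# reinforcing the echo-chamber effect. All data here is invented.
from collections import Counter

def rank_feed(items, engagement_history):
    """Order items by how many of their tags the user has engaged with before."""
    prefs = Counter(engagement_history)
    return sorted(items,
                  key=lambda item: sum(prefs[t] for t in item["tags"]),
                  reverse=True)

history = ["border-security", "border-security", "tax-cuts"]
feed = [
    {"title": "Study questions effectiveness of border wall",
     "tags": ["immigration-research"]},
    {"title": "Crackdown on border crossings praised",
     "tags": ["border-security"]},
]
for item in rank_feed(feed, history):
    print(item["title"])  # the preference-aligned story is ranked first
```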

While we can see how difficult it is for a media literacy approach to address these issues, it could be argued that making users aware of these aspects of the online media ecosystem will at least promote vigilance. However, this type of critical approach to online information is undermined by the economic logic of the Internet and social media. In order for users to be able to search vast amounts of information online at no direct cost, the Internet has to be funded by advertising. This is essentially an attention economy (Goldhaber 1997). And in the struggle to gain and retain our attention, advertisers will do whatever it takes to make money. Technical features like ‘likes’ and default settings which force us to make binary choices and which amplify sensational content are the logical manifestation of an attention economy (Kozyreva, Lewandowsky, and Hertwig 2020, 125–127). The realization that there are structural features of the Internet which will always encourage the circulation of content not on the basis of its truthfulness but on its capacity to generate more advertising revenue has led the European Union to propose interventions which go beyond the policing of content:

… the Commission is invited to re-examine the matter [of the self-regulation of online content and a Code of Practices] in Spring 2019 and decide, on the basis of an intermediate and independent evaluation of the effectiveness and efficiency of these measures, whether further actions should be considered for the next European Commission term. This may cover options for additional fact-finding and/or policy initiatives, using any relevant instrument, including competition instruments or other mechanisms to ensure continuous monitoring and evaluation of the implementation of the Code [of Practices] (European Commission 2018, 6).

Despite its explicit injunction not to undermine the editorial independence of content creators, the Commission is clearly sceptical of the effectiveness of self-regulation (European Commission 2018, 30). While the EU has had no compunction in levying huge fines on Google in particular (European Commission 2019), these were essentially for distorting markets through monopolization and restrictive practices related to advertising. The proposed Digital Services Act goes further in its targeting of harmful content and disinformation which could imperil the democratic rights of EU citizens. While it does not define exactly what constitutes harmful content or disinformation, mainly because it wants to protect freedom of expression, it calls for greater transparency on the part of digital platforms and new forms of accountability to tackle transgressions (EU 2020). The UK Government, too, is in the process of introducing legislation that will not only target content deemed harmful to children, but will also tackle disinformation aimed at adults. The government’s response to its own White Paper cites ‘misinformation and disinformation about vaccines’ and is clear that, while it does not want to infringe people’s freedom of expression, the media industry regulator Ofcom will have enforcement powers should platforms themselves fail to regulate content (UK Government 2020).

These measures are an admission that a particular model of media literacy that operated relatively effectively in the mass media age is struggling to cope in the age of social media. The Internet was developed very much as a libertarian project in the US, where freedom of expression was paramount and the consumer/individual archetype could use their own judgment about what and how they accessed online information without interference from the state, much as they did in most aspects of their lives (White 2014, 125–129). This absolutist vision of freedom of expression became the operating logic of the Internet, at least insofar as it pertained to western democracies. This libertarianism was, though, largely a reflection of practices developed in spaces dominated by relatively well-off white males, which might explain why these platforms have failed to adequately address online trolling, especially that which is aimed at women and people from BAME backgrounds, and have generally amplified division within American society (Phillips and Milner 2021). The recent developments in the EU and the UK can be understood as a legislative response to this ongoing structural problem. But we should not necessarily infer that users are simply credulous individuals whose media consumption causes them potential harm. In this sense, alongside these well-meaning legislative interventions we should consider the ways in which the social media user/audience is more autonomous than superficially appears. This will be done by focusing on one of the most important acts of online agency, that of sharing content.

Sharing online content

Content sharing on platforms like Facebook and Twitter is considered to be the most effective vehicle for spreading information, which is why it is the focus of much of the academic literature on media literacy in digital media (Andi and Akesson 2021; Campbell Bailey and Hsieh-Yee 2021; Chadwick, Vaccari, and O’Loughlin 2018; Kozyreva, Lewandowsky, and Hertwig 2020; Marwick 2018; Monsees 2020). And this literature is almost overwhelmingly negative about the impact of sharing information. This impact is particularly pernicious when it is activated by bots, programmes which automate the rapid spread of information, usually for nefarious purposes (Prier 2017); like the aforementioned deep fake videos, this type of technology causes acute anxiety, particularly where, as was the supposition in much of the reporting about Russian hacking in the 2016 US Presidential Election, it emanates from foreign intelligence agencies. There is also some evidence that the structural bias of social media platforms gears them towards sharing the most sensational content, because that is what is most likely to generate the advertising revenue on which they depend. That is the reason why advocates of nudging strategies often focus on the sharing of content, as they believe these strategies are the best methods for countering the defaults that encourage us almost unthinkingly to pass on information (Andi and Akesson 2021; Kozyreva, Lewandowsky, and Hertwig 2020). These nudging strategies can include making users aware of the negative impact that sharing certain types of information can have (Andi and Akesson 2021; Campbell Bailey and Hsieh-Yee 2021, 15). There are also ‘technocognition’ strategies which might involve users going through a series of steps, like providing some proof that they have looked for a certain amount of time at the article that they are sharing. An example of this is Norwegian public broadcaster NRK’s function that only enables users to comment on articles if they have passed a short test on their content (Kozyreva, Lewandowsky, and Hertwig 2020, 129).
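The mechanics of such a ‘technocognition’ gate can be sketched very simply (in Python; the quiz content and all names below are hypothetical, and NRK’s actual implementation is not reproduced here):

```python
# A minimal sketch of a comprehension gate of the kind NRK is reported to
# use: the comment box unlocks only after a short quiz on the article.
# Quiz content and names are hypothetical, for illustration only.
from dataclasses import dataclass

@dataclass
class ArticleQuiz:
    question: str
    answer: str

def may_comment(quiz: ArticleQuiz, user_answer: str) -> bool:
    """Unlock commenting only if the comprehension check is passed."""
    return user_answer.strip().lower() == quiz.answer.strip().lower()

quiz = ArticleQuiz(
    question="According to the article, which regulator gains enforcement powers?",
    answer="Ofcom",
)
print(may_comment(quiz, "ofcom"))    # True: comment box unlocked
print(may_comment(quiz, "the EU"))   # False: the user is nudged to re-read
```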

These approaches are sensible within a media literacy framework, even if the expectation is that skills learned by the participants might not be replicated when they are searching for information in their everyday lives. As adumbrated above, this practical concern is secondary to a more philosophical question about the way in which users conceptualize the media content that they engage with. Whether or not this is the intention, media literacy seems to be based on the tacit understanding that media content will cause harm unless we take measures to counter its influence, a recapitulation of arguments associated with the media transmission model. The negative tenor of much of the literature on the specific act of sharing on social media platforms appears to confirm this. According to this literature, users tend to share and like content with only a cursory evaluation as to its credibility, and, not surprisingly, a significant amount of it is false information (Chadwick, Vaccari, and O’Loughlin 2018; Kozyreva, Lewandowsky, and Hertwig 2020; Zollo 2019).

An intuitive explanation for this is that users are deceived, through not having enough time and/or insufficient critical literacy skills, into passing on false information, whose negative impact is manifest in exacerbating social division. There is, though, little proof in the wider literature of users’ lack of agency; it is merely inferred based on an understanding of what a rational person could reasonably be expected to believe. The flaw in this argument lies in the normative judgment that a rational person would never knowingly share false information. Evidence suggests that users often knowingly share false news stories (Campbell Bailey and Hsieh-Yee 2021; Chadwick, Vaccari, and O’Loughlin 2018; Kahne and Bowyer 2017; Kozyreva, Lewandowsky, and Hertwig 2020; Marwick 2018; Monsees 2020). One of the main reasons for doing so is that many users prioritize the reinforcement of their existing views rather than truth or authenticity when they are selecting and sharing news stories (Hameleers 2020; Kozyreva, Lewandowsky, and Hertwig 2020; Marwick 2018; van der Meer and Hameleers 2020). There are varying degrees of complicity in sharing false information, with the tendency to uncritically accept online friends’ recommendations being one of the most common forms of self-justification (Marwick 2018, 504). Nonetheless, a disproportionately large number of users react to attempts to identify the falsehoods that they are sharing not with acceptance, but with a robust defence of their position, a phenomenon referred to by researchers as the ‘backfiring’ effect (Hameleers 2020, 5).

The reasons why this backfiring effect occurs when people are challenged about their sharing of falsehoods illustrate the essential limitations of a media literacy approach. These reasons are psychological and political. Think about the mainstream media’s almost obsessive zeal in identifying the number of lies or questionable assertions that former US President Donald Trump made while in office (Aratani 2020). This would be an effective approach if those who are most likely to support Trump were primarily interested in independent verification of the truthfulness of his statements; if other considerations are a priority, then it will have minimal effect. Indeed, the present-day United States epitomizes Kahne and Bowyer’s (2017, 5) observation about the behaviour of online users: ‘ … in a polarized environment, judgements of truth claims are often shaped more by whether or not individuals’ prior perspectives on the issue align with the claims than by how well informed the individuals are or their capacities to reason … ’. In this environment, ‘directional motivation’ – where users subject content that does not align with their worldview to more scrutiny than that which does – pertains more than ‘accuracy motivation’ – where establishing the authenticity of the content is the primary goal (Kahne and Bowyer 2017, 6). People often use memes to get their political message across, caring little about whether these are accurate representations of an issue and more about their effectiveness in spreading propaganda (Boyd 2018).

Three key elements of the interaction between humans and digital platforms

Confirmation bias

A general term for describing what users are subject to here is ‘confirmation bias’, the tendency to gravitate towards information which conforms to your existing worldview rather than deviates from it. To a large degree this is a psychological phenomenon. Exposing ourselves to information that potentially shatters our worldview can cause cognitive dissonance, and many will go to great lengths to avoid this discomforting experience (van der Meer and Hameleers 2020, 3). Kahneman (2012) describes two separate processes for the way in which human cognition works, which he names System 1 and System 2. System 1 is essentially intuitive, the kind of thinking that is almost automatic. System 2 is a more deliberate form of thinking, and corresponds to the way in which humans like to think that they process information. System 1 is a ‘fast’ way of thinking; System 2 is ‘slow’. While theoretically System 2 is intended to have the final say when making decisions, invariably it tends to support our intuition. Haidt (2013) has a similar formulation, where the ‘rider’ [System 2] serves the ‘elephant’ [System 1]. Although the former is where conscious reasoning takes place, the latter is, according to Haidt (2013, loc. 117), responsible for 99% of our mental processing. In Shermer’s observation, this means forming beliefs first and then using information to justify them, a process he calls ‘belief-dependent realism’ (Shermer 2012, loc. 156). These beliefs are based on the brain’s identification of patterns in information and its affixing of agency or intention to them:

Once beliefs are formed, the brain begins to look for and find confirmatory evidence in support of those beliefs, which adds an emotional boost of further confidence in the beliefs and thereby accelerates the process of reinforcing them, and round and round the process goes in a positive feedback loop of belief confirmation. As well, occasionally people form beliefs from a single revelatory experience largely unencumbered by their personal background or the culture at large. Rarer still, there are those who, upon carefully weighing the evidence for and against a position they already hold, or one that they have yet to form a belief about, compute the odds and make a steely-eyed emotionless decision and never look back. Such belief reversals are so rare in religion and politics as to generate headlines if it is someone prominent, such as a cleric who changes religions or renounces his or her faith, or a politician who changes parties or goes independent (Shermer 2012, loc. 156).

For Haidt (2013, loc. 1638), this should make us suspicious even of those who exhibit reason in their argumentation, principally because skilled argumentation can be used as effectively to support a conspiracy theory as it can to make a more sensible case.

Identity

But the exhibiting of confirmation bias is not solely a reactive process. Its potency lies in its association with an individual’s sense of identity. Here, we can see a link with Bourdieu’s notion of the habitus, a social theory which posits that an individual’s thoughts cannot be disentangled from his/her habits and practices. This means that changing one’s mind about important political or cultural issues usually involves changing the habits and practices associated with that belief (Misztal 2019, 52–53). This could range from something as anodyne as no longer reading the Bible to the much more disruptive impact of no longer attending events at the local church around which your own and your family and friends’ social lives revolve. Faced with cutting yourself off from community life, you might thus be deterred from renouncing your belief in God. For this reason, the most effective purveyors of disinformation embed durable practices into their theological/ideological programmes. In the case of QAnon, adherents are encouraged to do their own research within a sprawling ecosystem of clues, characters and memes which has been likened to a ‘live-action role-playing game’ (LaFrance 2020, 34). Haidt (2013, loc. 5172) has traced the US political system’s descent into acute partisanship from the mid-1990s onwards to a seemingly mundane change in the personal and social practices of Congressmen and women:

Newt Gingrich, the new speaker of the House of Representatives, encouraged the large group of incoming Republican congressmen [sic] to leave their families in their home districts rather than moving their spouses and children to Washington. Before 1995, congressmen from both parties attended many of the same social events on weekends; their spouses became friends; their children played on the same sports teams. But nowadays most congressmen fly to Washington on Monday night, huddle with their teammates and do battle for three days, and then fly home on Thursday night. Cross-party friendships are disappearing; Manichaeism and scorched Earth politics are increasing.

This increasing partisanship was arguably exacerbated by many of the structural features of digital media outlined throughout this paper. In this regard, reinforcing and displaying one’s sense of identity becomes integral to the development of knowledge from online information. Thus, the act of sharing content can be as much a means of ‘identity signalling’ (Campbell Bailey and Hsieh-Yee 2021, 14) as it is of trying to educate others. A crude example of this is the retelling of a conversation that Alice Marwick had with one of her friends:

At a recent workshop on partisan media, my friend “Carly” related her frustration. Her mother, a strong conservative, repeatedly shared “fake news” on Facebook. Each time, Carly would send her mother news stories and Snopes links refuting the story, but her mother persisted. Eventually, fed up with her daughter’s efforts, her mother yelled, “I don’t care if it’s false, I care that I hate Hillary Clinton, and I want everyone to know that!” (Marwick 2018, 505).

This illustrates the way in which social media is used to develop and reinforce collective identities rather than for truth-seeking (Campbell Bailey and Hsieh-Yee 2021, 14; Marwick 2018, 477; Monsees 2020, 118), which is, according to Haidt (2013), consistent with the way in which humans operate in tribes, especially in politics. This is indicative of a particular social function of conspiracy theories, namely uniting ‘ingroups’ against perceived threats to their way of life from ‘outgroups’ (van Prooijen and Douglas 2018, 902–903).

Epistemology

Up until now, I have discussed terms like ‘accuracy’ and ‘truthfulness’ as if they are uncontested concepts in relation to information-seeking and the building of knowledge. In journalistic studies and in deliberative theories of democracy, there is indeed a sense that one can make objective assessments as to the accuracy of information and truth claims about particular concepts. That rather reductive, social-scientific view is contested by many, especially in the discipline of media and communication. The classic critique of this type of thinking is Lyotard’s The Postmodern Condition. Even though it was written before the Internet became the commercial behemoth that it is today, it is essentially the founding critical text on the way in which electronic content challenges traditional linear, logical and truth-seeking forms of knowledge (Lyotard 1984). This critique extends to a general renunciation of the use of grand narratives as a means of ordering our understanding of the world around us:

But it [information] could also aid groups discussing metaprescriptives by supplying them with the information they usually lack for making knowledgeable decisions. The line to follow … is, in principle, quite simple: give the public free access to the memory and data banks. Language games would then be games of perfect information at any given moment (Lyotard 1984, 67).

He goes on to say that as reserves of information can never be exhausted, there are an infinite number of ways of understanding. Lyotard’s exegesis inspired a generation of writers who, like him, believed that the migration of the printed word to electronic screens ushered in a qualitative change in the way in which we construct knowledge. An example of this is O’Hara’s call to replace the Platonic concept of ‘justified true belief’ with forms of ‘useful knowledge’ that reflect the way in which we use electronic information in practical ways without regard for its veracity or otherwise (O’Hara 2002). Many contemporary scholars highlight the way in which this lack of a shared sense of epistemology heightens partisan conflict in online spaces (Barzilai and Chinn 2020; Boyd 2018; Kozyreva, Lewandowsky, and Hertwig 2020). It also has the practical effect of undermining media literacy techniques like fact-checking, for if we reject the epistemological framework which supports those ‘facts’ then we will be indifferent to attempts to correct our supposed missteps (Boyd 2018).

Beyond media literacy

While challenging some of the most obvious and egregious online speculation should continue to be an important part of combating conspiracy theories, the incapacity of media literacy strategies alone to penetrate the most entrenched positions means that media and communication scholars need to conduct research in other areas too. This could involve returning to the perennial argument between the respective proponents of the transmission model and media participation theories. However, I think there is a need to move beyond this type of general debate towards a more concrete suite of recommendations. In that spirit, I offer four areas of research through which media and communications scholars can advance our understanding of the contemporary problem of media and fakery.

Historical analyses of citizens’ relationship with political institutions

Studies like the one in this paper, which focus on the behaviour of the user, need to analyse how an archetypal person relates to his/her wider surroundings. Here, this pertains particularly to people’s attitudes towards sources of authority, primarily but not exclusively political institutions. The most obvious of these is people’s alignment with a particular political party and their attitude towards the government of the day. The extent to which this motivates their online search and sharing strategies for news-related stories depends on how strongly they feel about a particular issue and to what extent they associate those issues with certain institutions. An example of this is the issue of immigration, which in countries like the USA and the Netherlands is something about which many feel passionately, especially as, for conservatives, lax immigration policies are strongly associated with distrusted ‘elites’ (Hameleers 2020, 3). In the present-day USA, the mainstream media is often viewed by conservatives as part of the liberal elite that is biased against them (van der Meer and Hameleers 2020, 13). This complicates media literacy strategies which use professional journalistic techniques to critique online misinformation, when conservatives in particular trust social media rather than mainstream journalism.

It would be a mistake, though, to focus exclusively on conservative attitudes. The most significant conspiracy theory of the twenty-first century in the USA, and one which has arguably influenced many of its modern-day formulations, was promulgated by Dylan Avery’s 2006 film about 9/11, Loose Change (McDermott 2020). Whether it is a suspicion on the left about the machinations of the military-industrial complex that manifests itself in works like Loose Change, or hostility to the federal government and professional elites, conspiracy theorists can find fertile ground in American culture. Why this is so is likely related to the USA’s particular historical development, and the ways in which citizens responded to the power of institutions. This historical dimension to research could be augmented by comparative studies to evaluate whether or not Americans are indeed more receptive to conspiratorial ways of thinking than citizens in other countries with a different political history.

To what extent do political systems exacerbate partisanship?

Linked to this is the question of how the operation of contemporary institutions can shape or mitigate partisanship within the wider population. The tripartite political system in the USA was established by the constitutional founders as a sensible means of balancing the competing power and demands of the presidency, legislature and judiciary. These days, though, those checks and balances can lead to sclerosis. This is especially the case with the Senate, where each state, regardless of its size, is allocated two seats. This disproportionately benefits rural, conservative and sparsely populated states over more urban, liberal and densely populated ones (Silver 2020). It is therefore much more difficult for Democrats to win the Senate and hence much more likely that the Republican majority in the Supreme Court, the apex of the judiciary, will not be overturned in the medium term (Silver 2020). This state of affairs does not encourage Republicans to compromise, as obstructive behaviour carries little political cost; as the refusal of the Senate leadership in 2016 to discuss Judge Merrick Garland’s candidacy for a Supreme Court vacancy until after the US presidential election demonstrates, partisanship can actually be beneficial to Republicans. By 2008 nearly half of all Americans lived in ‘landslide counties’, counties where the winning Democratic or Republican candidate prevailed by more than 20% (Haidt 2013, loc. 5193). While the constitutional architecture of the USA is not solely responsible for heightening partisanship, it would nonetheless be remiss of researchers on online polarization not to interrogate the way in which the latter is fuelled by the peculiarities of its contemporary political institutions.

The value of interdisciplinary perspectives

Throughout this piece, we have seen how particular functions of the Internet and social media are designed to grab and retain our attention. This is done through promoting sensational content and encouraging political division, both of which generate a lot of advertising revenue. The question is: how can we marry this analysis of the political economy of digital media with the discussion immediately above about political institutions, as well as with psychological explanations of individual use? As Marwick (2018, 491) has advocated, researchers need to adopt a ‘sociotechnical’ approach to the study of media and fakery. This would mean adopting media and communication theories and methods, like literacy approaches and the political economy of the media, but augmenting those with analyses drawing on attitudes towards political institutions, the structural features of those institutions which promote partisanship, and the psychology of personal belief.

The return of narrative

I have left until the end the area of research which is useful not only in explaining the proliferation of false information online, but also as a means of more directly countering the false narratives upon which it is parasitic. The postmodern project to undermine all narratives has struggled to articulate an effective programme for progressive politics. It could even be argued that the elevation of right-wing politicians like Trump is the logical terminus of a politics based on the kind of surface and spectacle beloved by postmodernists. The ‘alternative facts’ promoted by members of his administration were uncannily similar to O’Hara’s (2002) earlier cited advocacy of useful forms of knowledge. The problem for postmodernist approaches to politics and knowledge construction is that we cannot simply wish away narrative ways of thinking. Conceptualizing knowledge not as epistemology but as something of practical use does not, as O’Hara might hope, transcend narrative; rather, narrative re-emerges in a much more virulent form. This often manifests itself in looking for ‘patterns’ or ‘omissions’ in data, which can then be explained with the user’s own narrative; an example is the forensic study of photographs from the location of the shooting of John F. Kennedy in 1963 for evidence to support the theory that the assassination was part of a grand conspiracy (Shermer 2012, 248).

Tackling these revivified narratives can be done neither by postmodernist critiques nor by the big data methodologies that are prevalent in contemporary media and communication scholarship. Instead, it is useful first to identify the main purpose of the narrative. In a polarized political environment, this is usually to support the goals of one of the main parties or candidates for high office. In this sense, aggressive interventions against a false narrative will often provoke a negative reaction born of the need to defend the party or candidate best served by that narrative, the so-called backfiring effect. Counter-narratives which disentangle critique of the obvious falsity of a particular position from a more general attack on its raison d’être are likely to be more persuasive. This might involve some acknowledgement that the person invoking a false narrative has legitimate concerns which should be addressed, even while critiquing the narrative itself. Finally, in partisan political environments in particular, we should all recognize the difficulty of renouncing beliefs that are central to one’s identity. For this reason, whatever one’s revulsion at the views being renounced, it is important that former adherents are received with a spirit of generosity rather than gloating and ridicule.

Disclosure statement

No potential conflict of interest was reported by the author(s).

Additional information

Notes on contributors

Andrew White

Andrew White was formerly a Professor of Creative Industries and Digital Media in the School of International Communications at the University of Nottingham Ningbo China, where he taught from 2007 to 2020. He was also the director of the AHRC Centre for Digital Copyright and IP Research in China from 2015 to 2020. He has published his research in the form of journal articles, book chapters and a single-authored monograph with Palgrave Macmillan entitled Digital Media & Society; a Portuguese translation of this book was published in Brazil by Saraiva Editora in 2016. His journalistic articles on the creative industries and digital media have appeared in the Washington Post, The Guardian, Demos Quarterly and the World Economic Forum’s Agenda webpage.

References