Review Essay

Review essay: fake news, and online misinformation and disinformation

Fake news: understanding media and misinformation in the digital age, edited by Melissa Zimdars and Kembrew McLeod, Cambridge, Mass. & London, The MIT Press, 2020, xl + 395 pp., US$38 (paperback), ISBN 978-0-262-53836-7; Lie machines, by Philip N. Howard, New Haven and Oxford, Yale University Press, 2020, xviii + 221 pp., £20 (hardback), ISBN 978-0-300-25020-6; You are here: a field guide for navigating polarized speech, conspiracy theories, and our polluted media landscape, by Whitney Phillips and Ryan M. Milner, Cambridge, Mass. & London, The MIT Press, 2021, xii + 266 pp., US$22.95, ISBN 978-0-262-53991-3


The attempted overturning of the result of the 2020 US presidential election involved the proliferation of multiple online conspiracy theories and fake stories, and culminated in the assault on the US Congress on 6 January 2021, while it was in the process of validating the electoral college count. This represented the apotheosis of the growth of misinformation and disinformation in the USA since around the middle of the previous decade. Social media is commonly assumed to be culpable for this growth, with ‘the news’ and current affairs deemed the epicentre of the battle for information credibility. This review begins by setting out the key definitions and debates surrounding fake news and online misinformation and disinformation with the aid of each book in turn. It then focuses on themes common to all three books in order to provide a comprehensive analysis of the subject at hand: the use of memes and ironic content; the globalisation of misinformation, disinformation and fake news, and its impact on democratic societies; and the limitations of media literacy approaches.

Understanding fake news, and online misinformation and disinformation

As the title of their edited volume intimates, Zimdars and McLeod provide a very clear working definition of fake news: the deliberate construction of fake stories for the express purpose of disseminating them on social media for financial profit and/or political gain (Zimdars, 2020a, p. 2). This is distinguished from other forms of fake news in which fakery is an ironic means of raising important social questions. Examples of this kind of satire include The Onion and The Colbert Report (Zimdars, 2020a, p. 2), along with political provocateurs the Yes Men, the subject of two chapters by McLeod comprising an oral history of and interview with the duo (McLeod, 2020a, pp. 307–313, 2020b, pp. 299–305). Howard (2020, p. 86) concurs with Zimdars and McLeod, adding that fake news often contains a good deal of factual content but is packaged in such a way that it misleads the viewer. This reinforces the message of all three books that it is the context of fake news’s production and reception which is crucial to its impact, rather than the effectiveness of its attempts at verisimilitude. Zimdars’s (2020a, 2020b) view that this context is ultimately political – specifically, in the United States at least, the hardening of partisanship – is shared by many other authors (Campbell Bailey & Hsieh-Yee, 2021; Marwick, 2018; Monsees, 2020).

Phillips and Milner’s (2021, p. 4) You are Here builds on our understanding of the key concepts with their very precise differentiation of disinformation and misinformation: the former the deliberate spreading of false information, the latter the inadvertent spreading of the same information. This emphasis on the motivation for spreading information is similar to the discussions in Zimdars and McLeod’s (2020) volume. Phillips and Milner’s (2021, p. 4) definition of a third term, malinformation, is similar to Howard’s point above in emphasising how even information that is deliberately spread to cause harm can contain more than a kernel of truth. The book’s originality comes from its identification of the mainstream media’s amplification of fake news stories as a significant factor in enhancing their credibility. This is most obviously the case in the media’s discussions of former US president Donald Trump’s Twitter feed (it should be noted that the book was completed before Trump’s account was removed from the platform). Phillips and Milner’s (2021, p. 130) exemplar is more specific, implicating mainstream journalists in helping to promote the then unknown conspiracy theory QAnon through reports on tweets mentioning it by Roseanne Barr in late 2017 and early 2018. This highlights the quandary of journalists who, while feeling compelled to shed light on even the most uncomfortable of subjects, must also consider the extent to which that exposure lends credibility to ideas that should remain at the margins. This ecological approach towards the media has firm scholarly foundations (Postman, 1991). Such an approach enables us to think beyond media literacy strategies that focus on the individual towards a vision of a media ecosystem which benefits society as a whole (boyd, 2018).

Howard’s book presents the production and dissemination of false information as systematic in their operation rather than as simply the aggregate of myriad individual random actions. That is to say that these activities are part of a political sphere which is now sociotechnical in its constitution, and which can therefore be ‘hacked’ by actors with the requisite computational skills (Howard, 2020, pp. x–xiii). While others have a similar conception of the contemporary political sphere (Marwick, 2018), Howard utilises his own empirical research to provide a forensic account of how what he refers to as ‘lie machines’ actually operate. This includes a detailed analysis of the way in which bots are constructed and disseminated, often transcending national boundaries, with a particular focus on Russia’s Internet Research Agency (IRA). While this provides useful detail about the operations of an organisation brought to the public’s attention by the inquiry led by US Special Counsel Robert Mueller, there is a danger that it promotes the idea that the spreading of fake news is largely a Russian problem with bots as the bête noire. This is particularly so where information that does not accord with one’s worldview is routinely dismissed as being a Russian bot (McClellen, 2020, p. 319) or where the extent of their reach is exaggerated (Schulte, 2020, p. 136). Nonetheless, the sophisticated weaponization of information in one jurisdiction which is then targeted at another has in recent years become a more significant part of the matrix of global misinformation and disinformation. And this pertains not only to bots, as demonstrated by the effectiveness of human creators of viral information campaigns like Imitacja Consulting in Poland and Imitação in Brazil (Howard, 2020, pp. 87–95).

The use of memes and ironic content

Phillips and Milner’s ecological approach emphasises the role of a certain type of Internet culture in legitimising falsehoods. This culture originates from the founding ideology of the Internet: libertarianism (White, 2014, pp. 125–129). While a number of early adopters based in universities and scientific institutions envisaged the Internet primarily as a platform for spreading academic knowledge, its effective privatisation in the mid-1990s subordinated that vision to one in which ‘information wants to be free’ (Phillips & Milner, 2021, p. 50). This cultivated an environment in which regulation was pared back and any attempt to stem the tide of information flowing through cyberspace was deemed to be censorship. The logic of privatisation meant that information became increasingly commodified, an additional reason why hindering its flow was anathema to the system. An egregious manifestation of these structural deficiencies is the prevalence of clickbait: news stories intended to optimise revenue by attracting large numbers of users to their sites. While Facebook has adopted measures to steer users towards journalistically credible news rather than content designed solely to elicit views, the political economy of the Internet ensures that there is little incentive for the tech giants to pursue this with any serious purpose (Gillespie, 2020, pp. 334–335).

Consequently, Internet culture in the early twenty-first century, in the United States at least, was replete with jocularity and the mocking of others. Irony became the most potent tool in the armoury of the self-proclaimed tech elites who shaped the most popular platforms. Given the demographic make-up of these Silicon Valley operators, the culture of message boards like Reddit was imbued with a kind of ‘geek masculinity’ (Massanari, 2020, p. 180). This culture normalised abuse against outsiders, particularly women and ethnic minorities, which was justified not only by its irony but also on the grounds that the platforms were somehow removed from real life (Phillips & Milner, 2021). Memes were developed, often with racist content, whose ironic intent was used as a defence against the charge that many of them were offensive. This provided fertile ground for the alt-right, an early sign of which was the harassment of Zoe Quinn in the #Gamergate episode in 2014 (Massanari, 2020, p. 182). To be sure, memes can be used in a politically progressive way, as evidenced by the legion of anti-Trump productions. Nonetheless, it can be more difficult for these to gain traction in an online environment designed by those pursuing an altogether different philosophy, and whose algorithms privilege sensational content because of its capacity to raise revenue (Brock, 2012, as cited in Phillips & Milner, 2021, pp. 65–66).

The value of an ecological approach is that it views malign activities like trolling, often perceived as being organised on an industrial scale by foreign actors (Howard, 2020, pp. 18–19), as a logical by-product of the type of Internet culture in which we all participate. Indeed, the strength of Phillips and Milner’s (2021, pp. 71–72) book lies in their own sense of culpability for participating enthusiastically in that Internet culture at the time. In that vein, their earlier point about the amplification of some of these memes by mainstream media is well made. The problem is that by having this discussion, albeit in an academic environment, are we not culpable too?

The globalisation of misinformation, disinformation and fake news and the impact on democratic societies

Memes and trolling can be particularly effective where their national origin is disguised. The best-known example of this is the common perception that much of the misinformation, disinformation and fake news during the 2016 US presidential campaign emanated from Russia. Howard’s (2020, pp. 38–51) analysis of the IRA’s development of its trolling activities illustrates how effectively the organisation built its capacity and honed its craft domestically, before unleashing a campaign in a superficially less receptive country (the USA) whose language bears little relation to Russian.

The IRA was established around 2012, partly as a response to the effectiveness of social media campaigners during the Arab Spring of 2010–2012, but shaped too by the nationalist youth blogging camps in Russia whose activities were sponsored by government departments. The IRA is incredibly well resourced, occupying several offices in St Petersburg. These formidable resources were first used to target Russian citizens, with one of its first campaigns primed to obfuscate the circumstances surrounding the killing of Russian opposition figure Boris Nemtsov in February 2015. The relative success of these domestic disinformation campaigns, and of activities in eastern Europe, emboldened the IRA to extend its operations to the USA. Its incursion into US politics began with a Twitter account in 2013, but soon migrated to other platforms. Howard’s own testimony to the US Senate divulged that between 2013 and 2018 tens of millions of American Facebook, Instagram and Twitter users were exposed to posts from the IRA’s campaign (Howard, 2020, pp. 38–51).

The impact of the Internet Research Agency’s campaign was manifest not only in its immediate tactical successes, but also in its introduction of a new strategy of disinformation into other jurisdictions. This strategy effectively undermines all news stories to the point where citizens become unable to make distinctions between credible and fake news:

The goal is not to provide a clear, dominant narrative, but to sow mistrust of the media themselves: “It entertains, confuses and overwhelms the audience. … It is also rapid, continuous, and repetitive, and it lacks commitment to consistency.” (Bertolin, 2015, p. 10, as cited in Andrejevic, 2020, p. 23)

While Phillips and Milner downplay the impact of the Russian disinformation campaign on contemporary US politics in favour of domestic explanations for proliferating falsehoods, they do emphasise the importance of what they term ‘pollution’ seeping from one informational jurisdiction to another. The specific example they use is the 2018 Brazilian general election, where issues associated with US conservatives that had not previously been raised in politics in Brazil became prominent (Phillips & Milner, 2021, p. 4).

The idea of information pollution knowing no borders enables us to think through its impact on contemporary democracies. Traditional theories of the public sphere have assumed that the public and the media to which they are exposed reside mainly within the same national territory (Habermas, 1962/1991). National media systems help to create a national conversation. Even where citizens were exposed to media from another country, the easy identification of its source enabled them to distinguish between domestic and international content. By producing content which is almost identical to that from domestic sources, modern social media campaigns are able to mask their origin. This is not solely an issue with heavily resourced, state-backed campaigners from countries like Russia and China. Implicit in these books’ analyses of tech companies like Facebook and Twitter is that US platforms wield disproportionate power and influence. The role of Canadian tech company AggregateIQ in the successful Brexit campaign in the UK’s EU referendum in 2016, as well as the sprouting of agencies in other countries devoted to targeting people in other jurisdictions, makes this type of interference a significant part of the global political landscape (Howard, 2020, pp. 33, 119). An intuitive response to this problem would be to focus on identifying and exposing these external sources of information. However, the fact that the IRA increased its activities in the USA after the 2016 election suggests that the exposure of its role did not really curb its influence (Howard, 2020, p. 47).

The limitations of media literacy approaches

This last point illustrates the limitations of approaches based on improving users’ media literacy. Approaches of this kind are predicated on the belief that creating less credulous media users will nullify the potency of malignant fake news campaigns. The masking of the origin of online content, the proliferation of videos whose fakery is almost impossible to detect (Alibasic & Rose, 2019, p. 465), and the sense that the overwhelming amount of information to which we are exposed does not give us time for proper critical reflection, all undermine our media literacy. Zimdars’s (2020b, p. 361) admission that, despite her expertise in communication studies, she shared a fake article during the 2016 US presidential campaign illustrates media literacy’s limits. Arguably, though, what is more significant is citizens’ pre-existing sense of the world around them. This is why Phillips and Milner’s account of Internet culture in the first decade of the twenty-first century in the USA is so important. The embeddedness of irony, and of memes intended not to discuss an issue in a rational way but to spread a pre-fixed view as widely as possible, has seeped into the wider political culture. The issue, then, is not that citizens lack media literacy, but that their literacy is manifest in the skilful use of dissemination strategies. While media literacy approaches assume that those seeking and sharing information online are pursuing and promoting ‘accuracy’, in actuality it is just as likely that they have a ‘directional motivation’: their strategy is in the service of reinforcing a pre-existing political position (Phillips & Milner, 2021, p. 174).

People are more likely to think that directional motivation is justified in a political environment conducive to this type of behaviour. When the Internet culture described by Phillips and Milner is combined with a political arena in which bipartisanship has become increasingly attenuated, the wilful spreading of disinformation, misinformation and fake news becomes an optimal tactic. This partisanship is manifest most worryingly in the growing number of people in the USA who believe that leading Democrats are actually demonic (Phillips & Milner, 2021, pp. 124–125). In such a rancorous environment, it is not really surprising that many believe that destroying their perceived enemies is much more important than mere trifles about the veracity or otherwise of the information they spread. A classic example is Marwick’s relating of a conversation with a friend:

At a recent workshop on partisan media, my friend “Carly” related her frustration. Her mother, a strong conservative, repeatedly shared “fake news” on Facebook. Each time, Carly would send her mother news stories and Snopes links refuting the story, but her mother persisted. Eventually, fed up with her daughter’s efforts, her mother yelled, “I don’t care if it’s false, I care that I hate Hillary Clinton and I want everyone to know that!” (Marwick, 2018, p. 505)

Even away from the farthest reaches of the political spectrum, increasing partisanship in the USA has meant that the shared epistemology needed for political opponents to agree on a common set of assumptions has broken down, making the spreading of false information more socially acceptable (boyd, 2018).

What to do?

While this review paints a bleak picture, two of the books devote their final pages to possible solutions to the problems they have raised. Howard (2020, pp. 153–167) identifies the political economy of data gathering and retention as the centrepiece of the problem of disinformation, misinformation and fake news. As such, he proposes that restrictions be placed on the commercial trading of information, and that there should be new requirements for both the reporting of data use and the identification of those who benefit from it. Expanding the scope of non-profit rules on data and carving out space for public service announcements on social media are also recommended. By contrast, Phillips and Milner (2021, pp. 181–202) are more concerned with how we might empower the user to be the agent of change. Unlike media literacy approaches, which focus on the development of the individual’s skills and awareness, their ecological perspective encourages citizens to think about how every single one of their online activities affects the whole network. I would agree, though, with Zimdars’s (2020a, p. 7) criticism of singular approaches to what is essentially a multi-faceted problem. Tackling misinformation, disinformation and fake news will require a multi-disciplinary perspective, with political analyses at the forefront, for this is ultimately a political rather than a technical problem. If we fail to address it adequately, we may have to accept that the malign effects of misinformation, disinformation and fake news will not diminish any time soon.

References

  • Alibasic, H., & Rose, J. (2019). Fake news in context: Truths and untruths. Public Integrity, 21(5), 463–468. https://doi.org/10.1080/10999922.2019.1622359
  • Andrejevic, M. (2020). The political function of fake news: Disorganized propaganda in the era of automated media. In M. Zimdars & K. McLeod (Eds.), Fake news: Understanding media and misinformation in the digital age (pp. 19–28). The MIT Press.
  • Bertolin, G. (2015). Conceptualising Russian information operations: Info-war and infiltration in the context of hybrid warfare. IO Sphere, Summer, 10–11.
  • boyd, d. (2018). You think you want media literacy … Do you? Data & Society: Points. https://points.datasociety.net/you-think-you-want-media-literacy-do-you-7cad6af18ec2
  • Brock, A. (2012). From the blackhand side: Twitter as a cultural conversation. Journal of Broadcasting & Electronic Media, 56(4), 529–549. https://doi.org/10.1080/08838151.2012.732147
  • Campbell Bailey, T., & Hsieh-Yee, I. (2021). Combating the sharing of false information: History, framework, and literacy strategies. Internet Reference Services Quarterly, 24(1–2), 9–30. https://doi.org/10.1080/10875301.2020.1863286
  • Gillespie, T. (2020). Platforms throw content moderation at every problem. In M. Zimdars & K. McLeod (Eds.), Fake news: Understanding media and misinformation in the digital age (pp. 329–339). The MIT Press.
  • Habermas, J. (1962/1991). The structural transformation of the public sphere: An inquiry into a category of bourgeois society. The MIT Press.
  • Howard, P. N. (2020). Lie machines. Yale University Press.
  • Marwick, A. (2018). Why do people share fake news? A sociotechnical model of media effects. Georgetown Law Technology Review, 2(2), 474–512. https://georgetownlawtechreview.org/wp-content/uploads/2018/07/2.2-Marwick-pp-474-512.pdf
  • Massanari, A. (2020). Reddit’s alt-right: Toxic masculinity, free speech, and /r/The_Donald. In M. Zimdars & K. McLeod (Eds.), Fake news: Understanding media and misinformation in the digital age (pp. 179–189). The MIT Press.
  • McClellen, S. A. (2020). All fake news is equal. In M. Zimdars & K. McLeod (Eds.), Fake news: Understanding media and misinformation in the digital age (pp. 315–322). The MIT Press.
  • McLeod, K. (2020a). An interview with the Yes Men. In M. Zimdars & K. McLeod (Eds.), Fake news: Understanding media and misinformation in the digital age (pp. 307–313). The MIT Press.
  • McLeod, K. (2020b). An oral history of the Yes Men. In M. Zimdars & K. McLeod (Eds.), Fake news: Understanding media and misinformation in the digital age (pp. 299–305). The MIT Press.
  • Monsees, L. (2020). ‘A war against truth’ – Understanding the fake news controversy. Critical Studies on Security, 8(2), 116–129. https://doi.org/10.1080/21624887.2020.1763708
  • Phillips, W., & Milner, R. M. (2021). You are here: A field guide for navigating polarized speech, conspiracy theories, and our polluted media landscape. The MIT Press.
  • Postman, N. (1991). Technopoly. Alfred A. Knopf.
  • Schulte, S. R. (2020). Fixing fake news: Self-regulation and technological solutionism. In M. Zimdars & K. McLeod (Eds.), Fake news: Understanding media and misinformation in the digital age (pp. 133–144). The MIT Press.
  • White, A. (2014). Digital media and society: Transforming economics, politics and social practices. Palgrave Macmillan.
  • Zimdars, M. (2020a). Introduction. In M. Zimdars & K. McLeod (Eds.), Fake news: Understanding media and misinformation in the digital age (pp. 1–11). The MIT Press.
  • Zimdars, M. (2020b). Viral “fake news” lists and the limitations of labeling and fact-checking. In M. Zimdars & K. McLeod (Eds.), Fake news: Understanding media and misinformation in the digital age (pp. 361–372). The MIT Press.