Social Epistemology
A Journal of Knowledge, Culture and Policy
Research Article

The Problem of Disinformation: A Critical Approach

Received 16 Aug 2023, Accepted 12 Apr 2024, Published online: 20 May 2024

ABSTRACT

The term disinformation is generally used to refer to information that is false and harmful, by contrast with misinformation (false but harmless) and malinformation (harmful but true); disinformation is also generally understood to involve coordination and to be intentionally false and/or harmful. However, particular studies rarely apply all these criteria when discussing cases. Doing so would involve applying at least three distinct problem framings: an epistemic framing to detect that a proposition in circulation is false, a behavioural framing to detect the coordinated efforts at communicating that proposition, and a security framing to identify threats or risks of harm attendant on widespread belief in the proposition. As for the question of intentionality, different kinds of clues can be picked up within each framing, although none alone is likely to be conclusive. Yet particular studies tend to centre on or prioritise a single framing. Many today aim to make policy recommendations about combatting disinformation, and they prioritise security concerns over the demands of epistemic diligence. This carries a real risk of disinformation research being ‘weaponised’ against inconvenient truths. Against combative approaches, this article argues for a critical approach which recognizes the importance of epistemic diligence and transparency about normative assumptions.

Introduction

The term ‘disinformation’ is used in a variety of ways, and although it is normally understood to present a problem, there are contrasting views of the nature of the problem. Three, in particular, are commonly found in practice and in the academic literature commenting on it: misleading information, intentional deception and communications with harmful consequences. Although they may sometimes be found conjoined, these are three distinct kinds of problems. In view of this, it might be suggested that we should allow a capacious understanding of the term disinformation, so that no matter which criterion is applied, a case of disinformation is just what is flagged as such on some plausible basis like any of the three indicated. The trouble with this suggestion, however, is that the application of one criterion can sometimes conflict with another. The set of cases identified on one basis only partially overlaps with the set identified on any of the other bases. This can lead to confusion and some intractable difficulties in the way of reaching shared understandings of what is at stake in a given situation.

So, as well as the first order problems that the term can be used to characterize, there is the second order problem of coherently conceptualising what ‘the’ problem of disinformation is. This is what the literature as a whole confronts us with: a variety of different framings and assumptions, all ostensibly talking about the same general problem under different descriptions, without systematic reflection on how the descriptions may have quite different referents. A concern about this is that inferences appropriately drawn from within one conceptual framing may be illicitly absorbed into a study with a different framing. Policies with far-reaching implications are today being proposed on the basis of academic papers about ‘disinformation’; if these papers themselves are based on conceptual confusion or illicit inferences, then the advice they offer cannot be assumed to be sound.

This concern is particularly acute in relation to that substantial section of the literature which aims to contribute to combatting disinformation in the real world of public communications. Since measures proposed to counter one of the problems flagged as disinformation can potentially give rise to other problems as discerned from the perspective of different framings, the effect could be substantially deleterious for society. A further consequence is that a sense of arbitrariness then permeates public awareness, encouraging the impression that the term might be used simply to discredit ideas that the user disapproves of (see e.g. Siegel 2023).

Similar objections have been raised against the use of the somewhat related notion of ‘fake news’, and the difficulties in trying to agree definitions of that term have led to several authors recommending avoiding it (Coady 2021; Habgood-Coote 2019; Wardle 2017). However, while the term ‘disinformation’ can certainly be deployed instrumentally to discredit views the speaker disapproves of, the different interpretations encountered are not irredeemably discrepant. In fact, as is to be shown in Section 1, unlike with ‘fake news’, it is possible to specify the key elements that combine to form a core concept with reference to which the differences between particular conceptions of disinformation can be systematically analysed.

Nevertheless, the problem as revealed by this analysis is that those elements can combine in different ways to yield at least 10 differentiable kinds of ‘information disorder’. It is true that each of those kinds can be neatly and more economically classified under one of just three analytically distinct headings. However, two of these headings correspond exactly to what are typically defined respectively as misinformation and malinformation, in contradistinction to disinformation (Baines and Elliott 2020; Wardle and Derakhshan 2017). The essence of the second order problem, then, is that the category of disinformation, as this is distinguished analytically from misinformation and malinformation, does not actually apply to all the cases of ‘information disorder’ that happen to be gathered under it in practice.

Of course, a solution to this problem is in principle simple: the term ‘disinformation’ should be used only to refer to those cases of ‘information disorder’ that fit the clear definition of it and not to cases of misinformation or malinformation. In practice, however, determining how to categorise any given instance of concern can require significant investigative investment. Yet in the field of disinformation studies there is a marked tendency to prioritise a different question, namely, that of how best to combat disinformation.[1] This has thus come to be treated as a topic of investigation in its own right and as relatively independent of questions about how to identify or diagnose particular cases of it.

Prospects of meaningful success in combatting disinformation, nevertheless, presuppose an effective division of investigative labour: for in order to know what has to be combatted, there is the work to be done of identifying, diagnosing and tracking the cases of disinformation to be countered. The risk attendant on the combative approach is that specialists in this mission may – particularly under felt pressures of time or volume – be advising on how to deal with particular putative cases of ‘disinformation’ without first ensuring that anyone has ascertained that they actually fit that description. For then it might turn out that any putative case of ‘disinformation’ being combatted is neither false nor harmful. When acted on, such misguided advice can have troubling practical outcomes (examples of which are discussed in Section 4 below).

A contrasting approach to the combative one, and the approach advocated here, is what may be called a critical approach. Taking critique in the original Kantian sense of understanding the conditions of possibility of appearance of a phenomenon, this approach emphasises the need to carry out all the investigative tasks required to ascertain that there is a case of disinformation before attempting to formulate advice on what to do about it. The main tasks are examined, in turn, in Sections 2–4. Each involves an investigation with a particular framing, referred to respectively as epistemic, behavioural and security framings. The epistemic framing can detect unreliable information – or misinformation – but cannot determine whether it has been communicated intentionally or by innocent mistake, nor whether it is harmful. The use of a behavioural framing can detect the coordinated efforts at communicating a certain proposition but cannot determine whether the proposition is false or whether it is harmful. The security framing provides for an assessment of risks of harm that may be carried by the communication of certain propositions and for the drawing of inferences about threats posed by malign actors who may seek to exploit vulnerabilities, but the harms in question are distinct from concerns about truth and falsehood.

In short, an epistemic investigation determines whether a proposition communicated is false, a behavioural investigation determines whether the communication is coordinated, and a security investigation determines whether communication of the proposition is harmful. As for whether the communication of a given harmful false proposition is intentional, this is a question that the different investigations can all potentially contribute to in their distinct ways by picking up on different kinds of clue: an epistemic inquiry might discern such use of implicature or fallacies as may be indicative of bad faith reasoning that could be symptomatic of an intent to deceive; behavioural investigation might discern who is influencing whom, which can contribute to identifying a principal promoting the proposition; security analysis, by assessing who stands to gain from a successful deception, might provide a basis for inferring the intention of a principal. So, while there may always remain something inscrutable about intentions, reasonable hypotheses about likelihoods may provide a basis for deciding on an appropriate response in a given situation of information disorder.

This article is not intended to recommend particular responses, but Section 5 does recommend caution about precipitate efforts to combat disinformation without first carefully diagnosing it. Of particular concern are those efforts that involve doing something other than correcting falsehoods. A reason why correcting falsehoods might not be sufficient to expunge disinformation is that it can be conveyed by means of deceptively selective presentation of true propositions. A practical dilemma thus arising is whether all the communications involved in a disinformation campaign should be deprecated or whether those which are true should be acknowledged as true. A concern to avoid falsehood may conflict with a concern to avoid harm when truths are used manipulatively to convey a proposition that is harmful to society and democratic institutions. There are in fact justificatory precedents for relativising the value of truthfulness, which include those captured by reasoning about Noble Lies and Toxic Truths. This is how the concept of malinformation has come to be merged, in practice, with that of disinformation. But when we engage in normative assessment, there is a clear case for keeping those concepts distinct: whatever may be the justifications for acting against the spread of false propositions, the suppression of true ones risks doing more harm than good to democracy and the public interest. If there are concerns about public trust in institutions being undermined, I conclude, it should be recognized that the problem may lie with those institutions themselves rather than with those who communicate mistrust in them.

1. Framing Discussion of Disinformation

Discussions in the academic literature generally assume disinformation to be a problem in one, or some combination, of three basic ways. In one section of the literature, the term is used of any information that is misleading – like mistaken facts – and is thus used interchangeably with the term ‘misinformation’. This framing of the problem, with its clear focus on the criterion of propositional falsehood, has the benefit of being well defined, but it misses what, from another – and now widely accepted – perspective, crucially distinguishes dis-information from misinformation, namely, an intention to deceive. From this second perspective, innocent mistakes would not count as disinformation. The distinct problem of disinformation would, rather, be encountered in instances of deceptive strategic communications such as may be manifest, for example, in ‘coordinated inauthentic behaviour’ in social media. These can operate without necessarily spreading false information but instead by selectively presenting and omitting truths. A third view of what the problem of disinformation is focuses primarily not on erroneous information nor on coordinated intent to deceive but on how the circulation of certain kinds of ideas may have damaging effects on the fabric of society or its institutions. This concern provides much of the impetus for the current burgeoning of interest in – and funding for research into – the topic of disinformation. However, a problem associated with it is that if certain ideas can have harmful effects, this is for reasons other than their being false. In fact, acknowledgement of this point has led to the coining of the term malinformation to refer to ideas that may be true and yet whose dissemination can have harmful effects. Those concerned about harmful ideas, however, in practice do not necessarily distinguish clearly cases of disinformation from malinformation, for issues of truth and falsity are not their focal concern.

‘Disinformation’ can thus serve as something of an umbrella term, covering various forms of communication that may be false and/or selectively true and/or harmful, depending on what features are assumed to be defining of it. The nature of the problem it is taken to indicate will accordingly vary: as false information it is an epistemic problem; as a strategy of deception, it is a problem of the unacceptable activities of coordinated agency; as disruptive ideas, it is a threat to the security, legitimacy or some other public good of a social or political order. Any one of these problems might be encountered in a given situation without either or both of the others necessarily being present. This means that different investigations based on the three distinct conceptions would not always agree with one another on whether, where or when a problem of disinformation had arisen.

What, then, might be done to bring greater consistency and inter-operability to the theory and practice of disinformation research? One possibility would be simply to allow a capacious understanding of the concept, whereby any situation in which any one of the phenomena that might be designated by the term occurs, no matter how the situation might be judged by those taking a different approach, is accounted an instance of disinformation. This approach would generate no false negative identifications, from any perspective, but as assessed from each particular perspective it would yield a good many false positives. It is effectively what the current literature as a whole presents us with, and it does not make for ready constructive dialogue between proponents of different approaches. A stringent approach, at the other extreme, would account as disinformation only those cases that meet all three of the criteria mentioned. Its results, as viewed from any particular perspective, would include no false positives but many false negatives. It would therefore appear to be a non-starter in terms of gaining traction in a set of literatures that all, in their different ways, already implicitly reject it.

So, neither a capacious approach, which takes any one of the conditions to be sufficient to confirm an instance of disinformation, nor a stringent approach, which takes them all to be jointly necessary, looks promising as a way of harmonising the disparate uses today. At this point, a theoretical option would be to abandon use of the concept. Although it is currently too deeply embedded in practical commitments of governments, private businesses, NGOs and the academic elements of think tanks for this to be a feasible option beyond scholarly circles, it might nevertheless be argued that academics should refrain from using the term.

Certainly, this is something that has been argued by several scholars concerning the related term ‘fake news’ (e.g. Coady 2021; Habgood-Coote 2019; Wardle and Derakhshan 2017). However, the situation regarding ‘disinformation’ is relevantly different. The basic problem with the concept of ‘fake news’ is that although some philosophical contributions to the academic literature have claimed it is possible to give a clear definition of the term (e.g. Gelfert 2018; Michaelson, Sterken and Pepp 2019; Rini 2017), the particular definitions offered tend to differ, with even some defenders of its use admitting a need to accept a sufficiently capacious understanding ‘to cover all cases of epistemically corrupt news’ (Bernecker, Flowerree and Grundmann 2021, 4). Its critics, however, are more troubled by the discrepancies between different particular definitions. Joshua Habgood-Coote highlights some of the concerns:

does ‘fake news’ apply to completely false stories, to partially true stories, or to stories that are true but spread with malicious intent? … Can true stories that are part of a flood of indistinguishable true and false stories count as fake news? … I suspect that if we were to carry out a proper study of linguistic usage, we would find speakers applying it in various incompatible ways. (Habgood-Coote 2019, 1039)

Another limitation is that communications can be problematic in ways other than involving fakeness and can come in formats other than news. Thus, an influential paper by Lazer et al. (2018) notes that ‘fake news’ overlaps with other ‘information disorders’ like mis- and dis-information. Moreover, although replies have been tendered (by e.g. Brown 2019; Pepp, Michaelson and Sterken 2019), there appears to have been something of a broader movement away from treating ‘fake news’ as the focal concept. Indeed, already in an early report for the Council of Europe, Claire Wardle and Hossein Derakhshan (2017) expressly dispensed with the term ‘fake news’, finding it ‘woefully inadequate to describe the complex phenomena of information pollution’. Instead, they ‘introduce a new conceptual framework for examining information disorder, identifying the three different types: mis-, dis- and mal-information’.

This framework has the benefit of allowing reasonably clear distinctions to be drawn between cognate concepts, and its international uptake beyond academia suggests that the concept is more readily operationalizable than that of ‘fake news’. For instance, the US Department of Homeland Security’s Cybersecurity and Infrastructure Security Agency (CISA), to which Wardle had served as an advisor, adopted her nomenclature in setting up its dedicated Subcommittee on Misinformation, Disinformation and Malinformation (CISA 2022). Wardle also advised the European Commission (European Commission 2018), and her framework was adopted in a handbook she co-authored for UNESCO (Posetti et al. 2018).

However, although Wardle does show how the concept of disinformation is more amenable to cogent analysis than is the notion of ‘fake news’, her framework is affected by a conceptual problem. It frames disinformation, in contrast to mis-information (which is false, but not intended to harm) and mal-information (which is true, but intended to harm), as ‘[i]nformation that is false and deliberately created to harm’ (Wardle and Derakhshan 2017, 20). Yet information that is not false, but instead consists of strategically selected truths, can often and effectively be deployed to produce false beliefs. For this reason, the concern animating any actual investigation into a potential problem of disinformation is not necessarily whether the information conveyed is itself false, but whether, more especially, the beliefs it might be manipulatively used to instil are false. The sense of Wardle’s position may thus be presented more perspicuously by saying that the proposition conveyed by disinformation is false, in awareness that a given proposition can be conveyed by means other than a transmission of semantically transparent information.[2] Indeed, ordinary language can be used to convey meanings that differ from or even oppose a statement’s straightforward semantic content (see Grice 1975). The significance of this point for a discussion of disinformation, which has been highlighted by Sille Obelitz Søe (2018), Samkova and Nefedova (2019) and Hayward (2021), will be expanded on in Section 2. The point here is to emphasise the need for analytically helpful terminology. A variety of terms – not only information, but also others like message, report, story, narrative, communication and so on – can be used to indicate a putative item of disinformation, but each term also has a range of potential denotations and connotations that can introduce confusion into discussions.

More fundamentally, though, the attempt to frame the problem in terms of different types of information allows some conceptual confusion into Wardle’s account. She illustrates her framework with a Venn diagram showing two partially overlapping circles: one with information that is false and another with information that is harmful (Wardle and Derakhshan 2017, 20). The non-intersecting areas are respectively labelled ‘misinformation’ and ‘malinformation’; the area of overlap, which contains information that is both false and harmful, is labelled ‘disinformation’. This neat representation puts the three on a single conceptual plane as different kinds of information. What it does not depict is any difference between information that just happens to be both false and harmful, on the one hand, and information that is known to be false and/or intended to be harmful, on the other. This is a serious omission, given that the element of intent is widely taken to be a distinguishing feature of disinformation. In order to capture all the variables that are relevant to differentiating between misinformation, disinformation and malinformation, I accordingly suggest a somewhat expanded conceptual framework (see Table 1).

Table 1. Disinformation and cognate concepts.

This framework retains Wardle’s idea that disinformation can be conceptually situated between misinformation and malinformation, but draws some further distinctions in virtue of including three defining variables: not just false/true and harmful/harmless but also intended/unintended. It allows recognition that intention can apply to falsehood, or to harm, or to both. In separate columns are the key elements which are all present in an unequivocal instance of disinformation, as represented in row 6, and the other rows indicate typical scenarios in which one or more, but not all, elements are present. Because this framework accommodates the consideration that what is intended to harm or to deceive may not necessarily do so, row 7 has the category of failed disinformation: this refers to a situation where, for instance, a deception has been exposed and, as a result, any potential harm from sharing the false proposition is not actualised due to shared knowledge of its falsehood. It thus registers on the table as a harmless falsehood. Harmless falsehoods can also be generated by everyday mistakes or idle gossip, but the difference between these and the situation of a significant problem being averted is registered by different values in other columns of the table.

Regarding the concept of malinformation, four subcategories are included to show the different assumptions about it that are discoverable in the literature. Sometimes it is referred to as harmful information but other times as information provided with malign intent. Given that intent and actuality may not align, the term can have different practical meanings. Sometimes both meanings together might be clearly intended, but other times a user of the term may simply oscillate between them, leaving the uncertainty indicated by the question mark in row 11. In practice, and also in the literature, it is not unusual to encounter the idea of ‘potentially harmful’ which does service in a wide variety of situations where the analyst has a concern that they are not in a position to fully explicate. It will be noted that no illustrations are offered to distinguish between the malinformation subcategories and that the first two columns are left blank. The reason for this indeterminacy is that acceptance of the idea that true information can be harmful, in any given scenario, will depend on further assumptions that are not captured in this framework. Assumptions about what the criteria of harm are, who would be the parties relevantly affected by the harm and in what relationship the analyst stands to those harmed would require specific clarification in any given instance. They would need to be treated with considerable care because the idea of truth being harmful is itself a potentially harmful idea in any society that values truth no less than security. This is a normative issue that will increasingly come to the fore as this article proceeds.
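To make the structure of this expanded framework easier to inspect, the sketch below renders the three defining variables in code. It is an illustrative simplification only, not a reproduction of Table 1: the labels, the particular branching order and the splitting of intention into intent to deceive and intent to harm are assumptions introduced purely for the purpose of illustration.

```python
# Illustrative sketch only: a simplified rendering of the three defining
# variables discussed above (false/true, harmful/harmless, intended/unintended).
# It is not a reproduction of Table 1; the labels and branching are assumptions.
from dataclasses import dataclass

@dataclass
class Communication:
    proposition_false: bool   # epistemic framing: is the conveyed proposition false?
    harmful: bool             # security framing: does its spread cause harm?
    intends_to_deceive: bool  # clues about intent picked up in the different framings
    intends_to_harm: bool

def classify(c: Communication) -> str:
    """Assign a rough 'information disorder' label to a communication."""
    if c.proposition_false and c.harmful and (c.intends_to_deceive or c.intends_to_harm):
        return "disinformation (all defining elements present)"
    if c.proposition_false and not c.harmful and c.intends_to_deceive:
        return "failed disinformation (deception exposed, harm not actualised)"
    if c.proposition_false and not c.intends_to_deceive:
        return "misinformation (innocent or careless falsehood)"
    if not c.proposition_false and (c.harmful or c.intends_to_harm):
        return "malinformation (true but harmful and/or malignly intended)"
    return "no information disorder on these criteria"

# Example: a truth circulated with intent to harm registers as malinformation,
# not disinformation, even though combative usage often lumps the two together.
print(classify(Communication(proposition_false=False, harmful=True,
                             intends_to_deceive=False, intends_to_harm=True)))
```

Run on the final example, the sketch returns the malinformation label, underlining the point made above that a truth circulated with malign intent is not thereby disinformation.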

It is well understood, then, that operations describable as disinformation can take a variety of forms using a range of persuasive strategies and deceptive tactics. As a reflection of this circumstance, the literature on the subject includes an array of mutually inconsistent conceptualisations of disinformation. The purpose of the proposed framework is to enable interlocutors to be clear with each other about what claims are and are not being made in the identification of any putative instance of disinformation.

Although the intention here is not to stipulate a stringent definition of disinformation – since that would not necessarily be conducive to constructive dialogue with those who assume other or looser definitions – Row 6 of the table nevertheless offers a characterization that ‘ticks all the boxes’ for a paradigm case. The illustration of it comes from World War 2, and a reason why a wartime example can comfortably fit the stringent definition is that, in the decades before the term’s recent popularisation, ‘disinformation’ was generally understood in the sense that originated in the world of counter-intelligence activities (see, for instance, Blackstock 1969; Kirkpatrick 1968). Operation Mincemeat involved complex subterfuge (Macintyre 2010). In 1943, the British fed misleading information to the Germans about allied invasion plans by planting fake documents in a briefcase chained to a corpse in an officer’s uniform to be found washed up on the Spanish coastline. The documents, as hoped, were passed to the Germans who, believing them genuine, diverted their defence troops in response, thus saving large numbers of Allied lives during the invasion. This successful deception unequivocally involved an orchestrated effort to induce in an enemy a belief that was false and harmful to that enemy. It thus fits squarely within all three – epistemic, behavioural and security – framings.

Of course, the context of this illustration is markedly different from that of contemporary concerns, but what it lacks in topicality it gains in perspicuity and serves to point up how current usage can lack this. For one thing, since the case has been declassified and is removed from any controversy about who is deceiving whom, it can be surveyed in its entirety: the operation involved a principal (the British Government and military command), an agency devising the strategy of deception (the personnel seconded to military intelligence) and those on the ground – and water – who executed the deception. There is thus no uncertainty about who was deceived about what or how. A second point about the chosen example is that a situation of declared war is presupposed. Contemporary discussions of disinformation often foreground the idea of ‘information war’, and this can be intended quite literally, which is why we find military and security apparatuses engaged in it (Hayward 2023b). Yet this is not a declared war with a declared enemy, and citizens may not know exactly where their allegiances are supposed to lie or, indeed, why matters of truth and falsehood should be considered matters of allegiance rather than epistemics. Analysis that uses the proposed framework can thus raise awareness of how any talk about disinformation presupposes more background assumptions of both factual and normative kinds than are typically spelt out in contemporary discussions.

The lack of explicitness about assumptions can also be seen to be at the root of the unclarity and tendentiousness specifically surrounding applications of the concept of ‘malinformation’ in a contemporary context. In the war setting, the hostile communication of true information has a clear purpose that is entirely distinct from disinformation: it is not about trying to deceive the enemy’s commanders; its purpose is to demoralise their troops and/or citizens. Our Side’s commanders, for their part, will want to conceal demoralizing knowledge from us about their own weaknesses or defeats. In the contemporary context, by contrast, those aiming to ‘combat’ malinformation claim to be doing it for the benefit of fellow citizens, but if we citizens are not aware of being in a war with a defined enemy this claim can be a rather perplexing one that we have reason to treat with caution.

In contemporary circumstances, then, the concept of disinformation has undergone significant extension and loosening, which is the problem this article is addressing. The article’s core argument is that while a stringent definition may not find wide acceptance just now, it can nevertheless be referred to as a conceptual benchmark when elaborating exactly what any given disinformation specialist is assuming in relation to the set of defining features set out in the framework indicated. On this basis, it is understood that a thorough and conceptually satisfying account of any given case of disinformation, under contemporary circumstances, would necessarily involve a division of investigative labour. Accordingly, the next three sections outline the three key tasks.

2. Disinformation as an Epistemic Problem

From an epistemic perspective, disinformation can be understood as information that is not reliable. Hence, in academic publications, as in public debate, the problem of disinformation is sometimes assimilated to that of misinformation, with both terms being taken to refer to a general problem of unreliable information. Certainly, when this general problem came to be a matter of particular public attention around 2016, the two terms were often used interchangeably. A literature review from 2018 flagged the need for more definitional work at the time, as it found the term ‘disinformation’ being used in a fairly general way to refer to ‘the types of information that one could encounter online that could possibly lead to misperceptions about the actual state of the world’ (Tucker et al. 2018, 3).

Today, however, it is quite widely agreed among specialists in the topic that the difference between simple cases of mistaken information and intentionally misleading communications should be marked by a terminological distinction. A simple innocent mistake – or even a careless one – can count as misinformation but not as disinformation. ‘Disinformation’ is understood to imply an intent to mislead, and so, on this understanding, identifying an instance of misinformation is not a sufficient basis for affirming that disinformation is in play. Furthermore, the identification of misinformation is not always even necessary for affirming that disinformation is operating. For it is possible to mislead and deceive by selectively presenting conveniently true information and omitting inconvenient truths.

So, the task of identifying disinformation cannot be reduced to that of identifying misinformation. Paradigmatically, misinformation is identified and corrected by engaging in some kind of fact check. This is a natural first step towards the identification of disinformation too and can also be helpful at any point during an investigation of it. Yet while the checking of facts can provide a very useful corrective for straightforward misinformation, its usefulness is limited with regard to identifying intent to deceive.

Intent to deceive can be hard to discern for several reasons. One is that we may simply not have our suspicions aroused when plausible communicators seem to be reporting in good faith on matters that we ourselves have no direct or expert knowledge about. It is difficult to discern disinformation when you are not alert to the fact that it could be in play. We may also have a certain reticence about suspecting others, let alone accusing them, of bad faith, and a willingness to give the benefit of the doubt can be exploited by the less scrupulous. A further potential advantage enjoyed by those who aim to deceive is that facts are not always the most influential element in shaping people’s beliefs, let alone their behaviour (see e.g. Fessler et al. 2014).

Yet even where facts are taken to matter, a problem is that disinformation can operate without necessarily involving any direct factual inaccuracy by relying, instead, on the selective emphasis of certain considerations and suppression of others. As professional propagandists have long understood, a particularly effective method of deception is to hew as close as possible to the truth: being strategically ‘economical’ with the truth can create an intended deception without directly affirming any misinformation at all.

Furthermore, even when straightforward misinformation is purveyed as part of a disinformation campaign, it can be made more difficult to discern than when it arises from an innocent error. Communications that are driven by an intent to deceive can involve taking steps to obscure some of the clues that would be open to ready discovery in the circumstances of an innocent mistake. Accordingly, and in contrast to misinformation simpliciter, we may refer here to aggravated misinformation.[3] Misinformation can be described as aggravated when it is not just communicated and promoted but is also strategically protected from discovery. A purveyor of disinformation may take special care to construct ‘defences’ around any falsehoods that have to be smuggled in with the conveniently selected truths. Defences could include disseminating either pre-emptive rationalisation of anomalies that would otherwise trigger criticism or additional information aimed at diminishing the significance of anomalies. Furthermore, when exposure of disinformation cannot be entirely prevented, its extent can be mitigated by a strategy such as offering a limited hangout – admitting to certain inconvenient truths while steering attention away from more seriously compromising revelations. More aggressive strategies can involve not just distracting attention away from misinformation but turning attention onto those exposing it by attacking their competence, credibility or integrity (Attkisson 2017). Disinformation, then, presents distinctive epistemic challenges because it can consist in both selectively true information and aggravated misinformation. Simple fact checking is limited in its power to diagnose these things.

Yet fact checking is not the only possible approach to epistemic diligence. Indeed, the very focus on facts requires a degree of intellectual caution for the reason that, as we know from general philosophical insights into the problem of knowledge, how we construct descriptions of the world involves the application of concepts to the ordering of our experience. Our experience of the world is always cognitively mediated, and what we refer to as a fact is always the report of an observation that has been isolated and pre-packaged, so to speak, from a streaming flux of sensory and cognitive inputs – and the observation is not necessarily our own. Most of our education and exposure to world affairs comes mediated by reports from others. And those, in turn, may be highly reliant on others again. Hence, we can appreciate that a significant part of epistemic diligence is not about checking directly how beliefs correspond to realities in the world, but, rather, it is about testing the coherence of the reports we are ready to accept as reliable accounts of those realities.[4]

The resources of epistemic diligence are not confined to the positivist methods of checking factual propositions. Rather, by applying skills of interpretation – hermeneutics – we can check for the kinds of incoherence that are traceable to leaps of reason or other epistemic tricks that we call fallacies. Fallacies are often thought of as logical faults, which some of them are, but the most relevant fallacies in deceptive communication are generally informal fallacies, and these contravene norms of kinds other than those of formal logic (Oswald 2022). Many are quite well known and readily recognized by the vigilant. Common informal fallacies include ad hominem, begging the question, straw man, appeal to authority, whataboutery (tu quoque) and the fallacy of composition.

Epistemic diligence can also be applied to another particularly insidious way of producing deception. This exploits the difference between semantic meaning and pragmatic meaning (see Samkova and Nefedova 2019; Søe 2018). Semantic meaning is captured in the propositional content of a piece of information whereas pragmatic intent can generate a distinct supervenient meaning quite at odds with that overt content. This supervenient meaning is what H.P. Grice (1975) has called ‘implicature’. The implicature of a statement cannot be understood from the literal meaning of its words but only by discerning and interpreting the pragmatic intent – i.e. ‘what the speaker is giving you to understand’ if you ‘read between the lines’. So, it cannot straightforwardly be fact-checked. Grice distinguishes two kinds of implicature. Conventional implicature conveys an implied premise of a statement whose words are used with their conventional meaning. Grice’s example is the statement ‘he is an Englishman; he is, therefore, brave’: the conventional meaning of ‘therefore’ generates the implicature that all Englishmen are brave. Yet the speaker has not said this in so many words, and therefore has not actually made a claim that could be directly challenged as misinformation. By contrast, conversational implicature plays not on the semantic logic of statements but on the normative expectations of communication as it figures in conversation. Conversation, Grice points out, has its own immanent norms that people spontaneously acknowledge by participating in it as a general form of practice. These norms answer to what he calls the Cooperative Principle: it is this that makes a conversation out of what would otherwise be discrete and unrelated speech acts. Grice summarises these norms in terms of a cluster of maxims that participants in a conversation mutually expect each other to follow: give sufficient but not excessive detail; be truthful; be relevant; be perspicuous. It is reasonable to generalise from this and say that the honest reporting of facts to the public, too, should conform to these maxims of cooperative communication. This should certainly be expected from journalists (Kheirabadi and Aghagolzadeh 2012, 547), but also from any public communicators, especially those engaged in any kind of public relations. So, alertness to potential breaches of those norms is part of epistemic diligence. For deception can be achieved not only without the making of false statements but also without any straightforward selectivity of serviceable truths.

Accordingly, one has reason to be cautious about accepting a report if it provides less detail than is needed to establish a core claim it makes, or if it gives too much distracting detail, or focuses on information that is a side issue rather than the main one, or if it obfuscates – or is simply less clear than it needs to be on – a pivotal point. These are failings that point to the need for further inquiry before a report can confidently be regarded as reliable. Unreliability may be due to lack of competence or effort, rather than intention to deceive, and so this kind of checking is still just part of epistemic diligence rather than part of the different kind of inquiry to be examined in the next section. But my claim is that it is a necessary part of identifying cases of disinformation, especially where blatant falsification of facts may not be in evidence.

Particularly to be noted about epistemic diligence relating to fallacies and deceptive implicature, rather than only to claims of fact or clarifications of context, is that it can be applied in assessing the reports of fact-checkers themselves. This is significant given that in situations where disinformation is in play, it is possible that certain fact-checkers may have been commissioned to debunk challenging claims by whatever means are available.

To sum up: if someone forms a mistaken belief on the basis of communications that include no factually false claims, then some other form of flawed reasoning has likely intervened in the formation of that belief. Epistemic diligence involves more than fact checking, and so although disinformation can sometimes operate without deploying false facts, the identification of disinformation cannot reliably proceed without engaging in epistemic diligence. This may not suffice to show that a deception is intentional, but it is a necessary step in establishing that deception is even at issue.

3. Disinformation from a Behavioural Perspective

The central question for this section is how one may detect a coordinated intent to deceive when individual messages may not be false and any fallacious reasoning is not known to be intentional. In order to meet this challenge as it arises in online communications, which are a primary area of contemporary concern, Kate Starbird (2019) writes that ‘the key is not to determine the truth of a specific post or tweet, but to understand how it fits into a larger disinformation campaign’. Attention, accordingly, would be directed towards such campaigns. But how can a disinformation campaign be identified? In the nature of the case, its operations – and perhaps even some of its immediate effects – may not be straightforwardly observable. So, it is helpful to approach the question with some more general understanding of how campaigns of persuasion that use strategic communications work, quite apart from considering whether their purpose is to inform or disinform.

If disinformation is understood as something distinct from the ad hoc or unintended communication of misleading information, and thus as a matter of coordinated deception, it has the form of what may be called strategic communication. As Maarten Hillebrandt puts it, disinformation can be regarded ‘as a communicative phenomenon consisting of an “assemblage” of people, practices, values, and technologies’ (Hillebrandt 2021, 1). Strategic communications are used by various kinds of organisation – across state, non-governmental and commercial sectors – to develop messages and narratives that promote their interests or agendas. Professional teams orchestrate extensive and complex activities to promote selected beliefs, aspirations and cognitive allegiances, sometimes on a mass scale. These activities need not involve the pursuit of a malign purpose or deploy deception; indeed, strategic communications can in principle be used in good faith to enhance public awareness of valuable information about the world. Nevertheless, the tools and techniques have in practice been honed mainly for the purpose of attaining an advantage for the organisation sponsoring their use, be that advantage commercial, political or military, rather than for the disinterested dissemination of useful knowledge. Strategic communications can include operations of various kinds – from the direct manipulation of mass behaviour by psychological operations (e.g. employing skills of behavioural science to ‘nudge’ people into making choices they might not otherwise make), through the promotion of certain beliefs about events by means of information operations (e.g. promoting a selected side of a public concern so as to influence people’s beliefs), to the influencing of fundamental commitments by public diplomacy (e.g. by associating a certain kind of belief rather than others with the moral values people hold). Multi-faceted operations can be planned, coordinated and executed with military precision – quite literally so when security and intelligence services are involved (see NATO 2015).

Yet there can be communications which are produced with some coordination and result in mistaken beliefs and yet are not intended to deceive. Thus, during the recent pandemic, for instance, there were conspicuously conflicting beliefs about important matters – such as the necessity, safety and efficacy of vaccines – which were disseminated by identifiably distinct groups of citizens with some discernible coordination. At least one of these groups would have been disseminating mistaken information, but this does not mean that any given group was pressing its agenda with an intent to deceive. A behavioural analysis might reveal coordination but not an intent to deceive. To draw that kind of inference would require a different sort of analysis.

Orchestrated disinformation can involve extensive organisation and quite elaborate manipulation of the truth: its planning is generally covert, its protagonists and modalities are likely to be unknown beyond a ‘need to know’ basis, and, in general, an integral functional aim is to pass undiscovered. So the question is: how do analysts of disinformation in practice go about identifying it?

One ground for suspecting that a disinformation campaign could be in operation, suggested by Maarten Hillebrandt, is finding a cluster of communications – e.g. typically on social media – all sending out a similar message which appears to be ‘geared towards questioning established facts (i.e. commonly accepted observations of reality) or truths (intersubjectively accepted understandings of reality, grounded in facts)’ (Hillebrandt 2021, 5). The suspicion would be deepened if they are also ‘presenting contorted counter-truths based on (partially) made-up sources including incorrect facts’ (Hillebrandt 2021, 9). These are things that can be noticed on the basis of readily available information without needing exhaustive epistemic diligence to discover, and when there is a pattern to this contrarianism, it can be treated as a behavioural clue.
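By way of illustration, the minimal sketch below shows how a cluster of near-duplicate messages might be flagged as a behavioural clue of the kind just described. The example posts, the similarity threshold and the use of a simple string-matching routine are assumptions introduced purely for illustration; they are not a description of any particular research tool.

```python
# Minimal sketch: flag clusters of near-duplicate messages as a possible sign of
# coordination. The posts, threshold and use of difflib are illustrative assumptions.
from difflib import SequenceMatcher
from itertools import combinations

posts = [
    ("acct_a", "The official figures are fabricated, do your own research"),
    ("acct_b", "The official figures are fabricated - do your own research!"),
    ("acct_c", "Lovely weather in Edinburgh today"),
    ("acct_d", "Official figures are fabricated. Do your own research"),
]

SIMILARITY_THRESHOLD = 0.8  # hypothetical cut-off

def similar(a: str, b: str) -> bool:
    """Treat two messages as near-duplicates if their text overlap is high."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio() >= SIMILARITY_THRESHOLD

# Group accounts whose messages are near-duplicates of one another.
clusters: list[set[str]] = []
for (acct1, text1), (acct2, text2) in combinations(posts, 2):
    if similar(text1, text2):
        for cluster in clusters:
            if acct1 in cluster or acct2 in cluster:
                cluster.update({acct1, acct2})
                break
        else:
            clusters.append({acct1, acct2})

print(clusters)  # e.g. [{'acct_a', 'acct_b', 'acct_d'}]
```

The choice of threshold is arbitrary, and a cluster detected in this way indicates only similarity of messaging, not the content's falsity or the participants' intent.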

Notice, though, that all this depends on prior epistemic diligence concerning ‘established facts’. A possible behavioural indication of disinformation at work which does not depend on epistemic presuppositions would be finding practices of ‘silencing potential other senders who defend mainstream truths or question the counter-truths’ (Hillebrandt 2021, 9). Acknowledging that a similar point might be made regarding the silencing of those who question ‘mainstream truths’ too, it does seem reasonable to suppose that such practices might be regarded, at least defeasibly, as a presumptive indicator of intent to deceive. For there is at least some ground to suppose that those who do not seek to defend their claims on the basis of reason and argument in open debate, but resort instead to censorship, smearing or ad hominem attacks, take that approach because they could not defend their position in a deliberative forum. Of course, there may be occasions when good faith communicators dismiss what they perceive as vexatious challengers in a way that the latter construe as offensive or oppressive, without this being a sign of any actual deception or intention to deceive. This is why the presumption must generally be regarded as defeasible. Nevertheless, it does remain the case that this criterion does not involve begging any epistemic question and points to potential clues that should not just be ignored.

Other commonly cited behavioural indicators of a suspected disinformation operation include the use of bots, fake identities and ‘inauthentic behaviour’ (Lange and Lechterman 2021). The use of bots – automated social media accounts – can artificially amplify the reach of a message, and, when detected, it provides a good reason to infer coordination. Their use is not necessarily an indication of intent to deceive with the content of messages sent, however. As pointed out by Bellingcat’s Director of Training and Research, Aric Toler (2020), bots can have various legitimate uses. Still, the detection of bots at work is not an entirely unreasonable prima facie warrant for further investigation into the ‘authenticity’ of their messaging.

A broadly similar point can be made about the use of fake identities. These are not necessarily used to promulgate bad information or to deceive, and they can be used by good faith actors because they provide what in many kinds of circumstance – and notably those where truth-telling runs counter to the interests of powerful actors – would be justifiably-sought anonymity. Nevertheless, sometimes fake identities are used by social media accounts referred to as trolls, which, unlike bots, are operated by human beings. Their role in disinformation operations is to promote an agenda or discredit an opposing narrative but in more humanly responsive and flexible ways than might be achieved by automated accounts.

As for the task of recognizing human-driven inauthentic behaviour, this is not easy or straightforward. It is one thing to identify when a process is automated, as, for example, when it is observed to perform a range or number of actions that human actors could not possibly replicate, but it is another to identify when a human agent is using an account ‘inauthentically’. It is not even easy to know what would be reliable distinguishing features of ‘inauthenticity’. Attempts by professional consultants to identify trolls by apparently peculiar observed behaviour, such as abnormal (re)tweeting rates and high use of hashtags and URLs, can lead to false positive identification. According to Mazza et al. (2022), ‘it is extremely challenging to discriminate inauthentic behavior from authentic users and users who unwittingly contribute to the amplification of false information’ (Mazza, Cola and Tesconi 2022, 2). For instance, a definition of inauthentic engagement used by Twitter, Schliebs et al. observe, ‘includes any activity that “attempt[s] to make accounts or content appear more popular or active than they are”’ (Schliebs et al. 2021, 5). Yet although that might be regarded as a suspicious sign, it has no necessary connection with disinformation, since there might be legitimate groups that want to amplify their message. Indeed, it is not only genuine activists who can be mistaken for agents of disinformation, but even – as we saw during the recent pandemic – recognized experts on the subject under discussion (Martin 2021). In short, one can appreciate that the focus on inauthentic behaviour, if useful for some purposes, is a poor proxy for qualitative analysis of disinformation (Pielemeier 2020).
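A toy sketch can make the point about false positives concrete. It applies feature thresholds of the kind mentioned above (posting rate, hashtag and URL density); the thresholds and example accounts are hypothetical, introduced only to show how such proxies can flag energetic good-faith engagement alongside coordinated deception.

```python
# Toy sketch of a feature-based 'inauthenticity' heuristic (posting rate, hashtag
# and URL density). Thresholds and example accounts are hypothetical; the point,
# as in the surrounding text, is that such proxies generate false positives.
from dataclasses import dataclass

@dataclass
class AccountStats:
    name: str
    posts_per_day: float
    hashtags_per_post: float
    urls_per_post: float

def flag_inauthentic(a: AccountStats) -> bool:
    """Crude heuristic: flag if any feature exceeds an arbitrary threshold."""
    return (a.posts_per_day > 50
            or a.hashtags_per_post > 3
            or a.urls_per_post > 1)

accounts = [
    AccountStats("paid_troll", posts_per_day=200, hashtags_per_post=5, urls_per_post=2),
    AccountStats("prolific_activist", posts_per_day=80, hashtags_per_post=4, urls_per_post=1),
    AccountStats("subject_expert", posts_per_day=60, hashtags_per_post=1, urls_per_post=2),
]

for a in accounts:
    print(a.name, "flagged" if flag_inauthentic(a) else "not flagged")
# All three are flagged: the heuristic cannot distinguish coordinated deception
# from high-volume good-faith engagement.
```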

Those issues are compounded when the research aim is to identify coordinated inauthentic behaviour. For ‘coordination’ is quite an open-ended concept and bears no determinate relation to ‘inauthenticity’, as researchers in this field recognize (Nizzoli et al. 2021). The question of coordination nonetheless is a crucial one, since it is central to what makes disinformation a phenomenon distinct from simple misinformation. What needs to be recognized is that it is possible for coordination to arise spontaneously without any discernible originary intent, and if a group of individuals arrive spontaneously at a mistaken view of the world, this may be a problem in its own terms, but it is not an indication that they are trying to deceive others. So, to establish that a disinformation campaign is in operation one needs to be able to discern how its coordination is orchestrated. This can be challenging because, as researchers like Starbird emphasise, the coordination of a disinformation campaign may involve opacity as well as complexity: ‘effective disinformation campaigns involve diverse participants; they might even include a majority of “unwitting agents” who are unaware of their role’ (Starbird 2019). Given the possible role of a majority of ‘unwitting agents’, one has to be aware that a well-designed disinformation campaign might conceivably run with very little discoverable ‘inauthentic’ behaviour at all. ‘Recognizing the role of unwitting crowds is a persistent challenge for researchers and platform designers’ (Starbird 2019).

A question, accordingly, is this: by means of what investigative methods can researchers distinguish communications whose coordination is orchestrated on behalf of some principal or organisation from communications that are spontaneously coordinated by distributed communicators converging on themes and perhaps even collaboratively developing them? For some investigators, a first step would be to identify clusters of ‘inauthentic behaviour’ and work back from there. Increasingly sophisticated methods for identifying such clusters are being developed (see e.g. Sharma et al. 2020), and from observation of these, their likely orchestration by some principal can be posited. A question then is how the principal’s identity is to be discovered. Sharma et al. have stated that ‘the earliest reported cases of disinformation campaigns surfaced when investigations by the US Congress found coordinated accounts operated by Russia’s “Internet Research Agency” (IRA) or Russian “troll farm” on social media to influence the 2016 US Election by promotion of disinformation, and politically divisive narratives’ (Sharma et al. 2020, 1–2). The paper they cite in this connection – Badawy et al. (2019) – is not, however, an account of identifying a principal from evidence of coordination; rather, a suspected principal is posited ab initio and then the investigation examines the activities of accounts that can putatively be traced back to it. The identity of the principal in this case was provided by the US Congress when it released a list of these accounts as part of the official investigation of alleged Russian efforts to interfere in the 2016 US presidential election. This raises a non-trivial question of epistemic diligence: why should it be assumed in this context that a report from the US Congress – given its geopolitical stance in relation to Russia – is a reliable source?

If the identity of a disinformation campaign’s principal has not been supplied by reliable intelligence made public, another possibility is that it might reasonably be inferred by determining, from more readily available evidence, whose interests the campaign is most likely to serve. Thus, a familiar heuristic when generating hypotheses about disinformation coordination is to consider who benefits from the effects of the observable messaging.

A more complicated situation arises, however, when the messaging does not clearly suggest a definite alternative view of the world that suits a particular interest but rather appears to generate confusion. Sometimes, of course, a degree of mixed messaging might serve simply to disguise which perception is ultimately being promoted, but there is also a possibility that on some occasions the principal may not primarily be interested in purveying any particular message at all but, instead, have the aim of sowing confusion – so as, for instance, to undermine confidence in institutions. This may make diagnosis more difficult, although a question can still be asked about what principal might benefit from the sowing of confusion.

Once we are brought to consider this sort of possibility, the situation under discussion shifts from one where a principal is seeking to pursue their own interests to one where the principal is engaged in hostilities. This is no longer just an epistemic matter or even a behavioural one – it is potentially a security concern.

4. Disinformation as a Threat to the Security of a Society

The previous section considered methods for ascertaining when a coordinated and deceptive communication campaign is in progress; a question now is how such a campaign might come to pose the kind of danger warned of in some of the literature. According to Freelon and Wells, ‘the threat disinformation poses to healthy democratic practice’ (Freelon and Wells 2020, 146) has made it ‘the defining political communication topic of our time’ (Freelon and Wells 2020, 145). Pavliuc et al. note that it ‘is increasingly regarded as a threat to public discourse, democratic decision making, society’s cohesion, and ultimately our abilities to identify and agree solutions to the myriads of global challenges we are facing’ (Pavliuc et al. 2023, 1). McKay and Tenove argue that disinformation campaigns can undermine the process of democratic will formation, and that their ‘tactics of corrosive falsehoods, unjustified inclusion, and moral denigration … contribute to broader systemic harms such as epistemic cynicism, pervasive inauthenticity, and techno-affective polarization’ (McKay and Tenove 2021, 707). The concern here about threats to ethical and deliberative politics is sometimes presented with an explicit emphasis on their security dimension. The UK government’s warning of ‘a real danger that hostile actors use online disinformation to undermine our democratic values and principles’ (United Kingdom Parliament 2019, 5) finds amplification in academic literature, with Regina Rini (2019), for instance, seeing social media disinformation as a security threat in that it weakens the legitimacy of democratic government itself.

There are others, however, who suggest the dangers are seriously exaggerated. One general point made is that people are less influenced by disinformation than is supposed by those sounding an alarm about it (Williams 2022). A more specific point relates to the substantial part of that literature which cites Russia as the chief source of danger. For while ‘Russian Disinformation’ is taken as paradigmatic by a good many authors (see note 5), research has shown negligible effects of actual Russian efforts (Eady et al. 2023), and some of the key sources cited to support the claims have since been discredited (see e.g. Gerth 2023).

The aim in this section is not to pronounce directly on the substantive applicability of these concerns but to highlight just how much the current conceptualisation of disinformation as a security threat differs from what would formerly have been taken as paradigmatic, according to the analysis of Section 1. In the original context of espionage, instances of disinformation were necessarily very hard to detect, but the nature of the problem to be on guard against was in principle very clear: a hostile foreign power would try to use deception to achieve a strategic advantage, and epistemic diligence was a crucial requirement of efforts to reveal it. Today, however, proponents of a securitized approach claim to detect a vast proliferation of instances. In fact, the sheer quantity is often cited as a central aspect of the contemporary problem. But as putative instances have become more readily identified, the nature of the threat posed by them has become less clear.

There is certainly some institutional continuity with the original understanding of the problem of disinformation in that the intelligence agencies of Western states have become centrally involved in defining and addressing it by framing it as a security threat. For instance, in Britain, the respective heads of MI5, MI6, GCHQ and the British Army’s 77th Brigade have all explicitly declared themselves participants in an ‘information war’ (as referenced in Hayward 2023b). In the US, the Department of Homeland Security – which was originally created to coordinate the War on Terror – now prominently features a Cybersecurity and Infrastructure Security Agency (CISA), whose director, Jen Easterly, maintains that ‘the most critical infrastructure is our cognitive infrastructure’ (Easterly, quoted in Klippenstein and Fang 2022). She set up a dedicated subcommittee to advise on Misinformation, Disinformation, Malinformation (MDM) (Starbird 2023). The MDM subcommittee was chaired by University of Washington academic Kate Starbird and included Stanford University’s Alex Stamos.

In other respects, however, there is marked discontinuity. The conception of disinformation now in operation differs in three major respects from the paradigm case presented in Section 1. This will be illustrated by reference to two high-profile counter-disinformation projects run by prominent academic disinformation specialists partnered with government agencies, private businesses and NGOs.

In the US, the 2020 Election Integrity Partnership (EIP) was set up by Starbird and Stamos, with the encouragement of CISA, ‘to defend our elections against those who seek to undermine them by exploiting weaknesses in the online information environment’ (EIP 2020). It focused particularly on unconfirmed information circulating about election irregularities, since these were regarded as a threat to the institutions of democracy. Student volunteers from the University of Washington and Stanford University were tasked with flagging social media posts they deemed problematic so that the platforms might take action – such as deamplifying or suppressing them. According to EIP director Stamos, their goal was ‘if we’re able to find disinformation, that we’ll be able to report it quickly and then collaborate with them on taking it down’ (Stamos, quoted in Hines v. Stamos, p. 10). The posts in question conspicuously included conversations amongst US citizens, with Stamos quoted as saying ‘the vast, vast majority we see we believe is domestic’ (Missouri v. Biden, May 2023 Ruling, 83–84). This surveillance and censorship of domestic actors – in apparent breach of US First Amendment protections (House Judiciary 2023) – marked a shift away from the original understanding of disinformation as a security threat presented by hostile foreign actors.

A further shift was particularly noticeable with EIP’s successor, the Virality Project (VP), in which the partners were joined by a New York University contingent (Virality Project 2022). This project’s core activity was to flag for platform action posts that called into question the safety, efficacy or necessity of the COVID-19 vaccines. Like EIP, VP monitored domestic communications and has similarly been impugned in US courts; however, whereas EIP still operated with a tacit assumption that a particular hostile agency (albeit not necessarily a foreign one) could be coordinating at least some of the messaging, since there were identifiable political interests at stake, VP effectively conceptualised disinformation itself as the enemy. This echoes Wardle’s observation that ‘a disordered information environment requires that every person recognize how they, too, can become a vector in the information wars’ (Wardle 2019). The team did not identify any hostile principal directing the putative disinformation operation or even imply one; they professed concern about a ‘highly-networked and coordinated anti-vaccine community’ (Masterson et al. 2021), but they did not posit any particular principal, domestic or foreign, as orchestrating or directing it. Nor did they claim that the ‘anti-vaccine community’ had hostile intent. So, what they were concerned with might perhaps have been more aptly described, by reference to the framework above, as misinformation rather than disinformation.

Except, and this is a decisive third shift from the paradigm case of disinformation, the information being flagged by VP was by no means always false. This epistemic unreliability of the operation was a predictable consequence of the fact that neither the students of Computer Studies and International Relations who were doing the flagging, nor the ‘experts in disinformation’ who seconded them, could claim any expertise in matters of public health or immunology. In fact, it was frankly admitted by the former CISA student intern Isabella Garcia-Camargo, who became project manager for EIP and VP, that ‘it’s really unclear who we’re supposed to be engaging with to actually deliver the counternarratives’ (Garcia-Camargo 2021). Yet her team of undergraduates were confidently making snap judgements that led to censorship and stigmatisation of globally eminent epidemiologists like Jay Bhattacharya, Sunetra Gupta and Martin Kulldorff (Garcia Ruiz 2023) – a matter that has been making its way through the US courts and is, at the time of writing, before the US Supreme Court (Mowry 2024). For there is ample evidence that the Virality Project was engaged in labelling as ‘disinformation’ communications from genuinely expert researchers who dissented from the orthodox views that state security entities like CISA endorsed as authoritative. Political authority, in other words, has here displaced epistemic authority as the basis for identifying disinformation. This diametrically opposes the situation in the original and paradigm case of disinformation, where the political authority of the state would be exercised in enabling the most exacting epistemic diligence possible. One gets an impression, from the documentation now available thanks to FOIA requests and subpoenas, that the aim of contemporary counter-disinformation operations is not so much to prevent people being misled as to prevent them from being influenced by unauthorised narratives. The assumption is that certain unauthorised narratives – such as those of ‘anti-vaxxers’, in the VP case – are harmful, even when they are not untrue.

This is not the place to comment on the substance of such assumptions, but the conceptual point to highlight is that what is at issue in such cases is not a matter of deception but a matter of what is quite distinctly designated as malinformation. This clear distinction, which had already been acknowledged by the EU and UN authorities who commissioned Wardle’s work, as noted in Section 1, was also recognized within the US Department of Homeland Security. For in private communications within CISA’s MDM subcommittee, the issue was raised ahead of the publication of their report, in which, as a result, all references to MDM were amended to read MD, sans malinformation (see the discussion in Hayward 2023c). But, as Starbird nevertheless noted, this did not affect the fact that malinformation remained very much within their purview.

We have seen, then, that some of those whose primary concern is with combatting disinformation, framed as a threat to society in some way, have come to operate with an interpretation of the concept which departs radically from that assumed in the original context of its use. We have traced a transition from seeing disinformation as a practice of deception that was covertly conducted by a hostile foreign agent to a practice carried out in public by domestic citizens without any intent to deceive and even without actually deceiving at all. In the wartime context, the practice being criticized in this latter vein – insofar as it involves malinformation – would have been described as propaganda. Specifically, malinformation was used in the propaganda produced by a distinct branch of military operations. Thus, the US military had a department of Morale Operations (OSS 1943), modelled on the British Political Warfare Executive (see Smith 2018), whose aim was not so much to deceive the enemy command as to demoralise its citizens and footsoldiers.

This section has thus traced the conceptual entropy that has attended the explosion of professed concern about the dangers of disinformation and the contemporary need for a security response. This entropy is not an inherent problem with the concept of disinformation; it arises from efforts to apply the concept to communications that the user objects to primarily on normative grounds other than strictly epistemic ones.

This is not to dismiss the possibility that disinformation and other ‘information disorders’ present serious problems to societies today. But it is to raise the question whether seeking to ‘combat’ them on the kind of basis revealed here as analytically shaky can be expected to do more good than harm. This is the question taken up in the next section.

5. A Contest of Normative Assumptions

What has been shown about the kind of threat said to be posed by contemporary disinformation to democracy or other important public goods is that it does not necessarily arise directly from inaccurate information, which, after all, would be a problem that ought generally to be quite tractable within a framework of democratic institutions. Indeed, the literature that is most insistent on emphasising the dangerousness of disinformation is especially clear that a disinformation campaign can be expected to involve both false and true information, since it is the campaign, not just (if at all) the epistemically defective elements of it, that presents the danger (see e.g. Patrikarakos 2017). The critical question is the following. If one is convinced that a dangerous disinformation campaign is in operation, and some policy response is called for – which might include censorship or some other kind of restriction on communications – then one must decide: should all its communications – including the true propositions – be regarded as disinformation or not?

There are two ways of looking at this question. If the content of a true proposition is independently valuable, then stigmatising it as disinformation could have undesirable social or political consequences. Moreover, since an accusation of generating confusion or of selectively exploiting truths in any case presupposes that some of the messages may be true, it is not obvious what harm there would be in allowing these to be identified. From the standpoint of conducting an analysis whose primary objective is to understand, as far as possible, what is true and what is not, separating out true claims from false ones is exactly the right approach to take. Doing this is not inconsistent with identifying a disinformation campaign, given that such a campaign can anyway be expected to generate some true messages that serve the instrumental purpose of lending its overarching narrative undeserved credibility. However, that is to view the matter with an epistemic framing. Those who prioritise a security framing may cite potential justifications for the normative position that, if a choice has to be made, the preservation of the institutional preconditions for democracy or the pursuit of the public good has greater importance than epistemic scruple.

Accordingly, those with a mission not just, or necessarily at all, to expose untruths, but, rather, to engage in counter-disinformation, may have reasons to attach more importance to discrediting a whole disinformation operation than to crediting it with the elements of truth that it (whether cynically or in good faith) includes. Certainly, at first blush it may seem perverse to suggest that true statements might be designated as disinformation. Yet if they are wielded with an intent to undermine confidence in the institutions whose epistemic authority a society relies on for its democratic functioning, then they could conceivably be argued to compromise a society’s capacity to support truthful communication more generally. A democratic society depends on the trust and good will of citizens towards each other and towards the system that regulates their interactions. If that trust is severely eroded, there is a risk of system collapse and social crisis. Any society which is broadly successful in staving off severe threats of crisis is likely to be imperfect in various ways. It will have the kind of resilience that comes from tempering principle with pragmatism. This may involve maintaining discretion about more severe imperfections than the public is generally aware of. In this situation, it may be better – for the continued cohesion of the society and its capacity to support citizens’ secure enjoyment of their rights and freedoms – that any particularly egregious imperfections not be unduly publicised.

The general idea here has been referred to by Lee Basham in terms of toxic truths. It is possible that even a generally well-ordered society may harbour truths which, if publicly revealed, would ‘be extremely socially and/or politically disrupting’ (Basham 2018, 272). These could be a powerful weapon in the hands of hostile actors who seek to disrupt a social or political order. A defender of the prevailing order will therefore have a motive to allow such revelations to be classed as disinformation: protecting certain flaws in a given order from exposure could be regarded as a small compromise in relation to the benefits that maintaining the order confers. Such a position has been defended, for instance, by Cass Sunstein. In an influential article co-authored with Adrian Vermeule (Sunstein and Vermeule 2009), responding to what they regard as the particular problem of disinformation fuelled by a growing profusion of ‘conspiracy theories’, Sunstein has argued that it is better to take general steps to suppress such theories, at the cost of sometimes suppressing a warranted one, than to allow them all to flourish with the feared consequence of undermining faith in democratic institutions.

As well as such arguments supporting the suppression of truth, there are also arguments to support the dissemination of untruths. What many regard as a founding text of Western political thought set a precedent: Plato’s Republic introduced the idea of the Noble Lie – whereby a ruling intellectual elite promulgate a myth in order to maintain social harmony and the people’s acceptance of their social position, together with its restrictions, which Plato considered integral to maintaining justice in the polity. Although Plato was expressly not recommending a democratic system of government, it might be argued that the general idea of a noble lie harnesses the readiness of people in any society, inculcated from infancy, to accept the value of ‘pro-social’ lying that prevents needless upset to others while maintaining kindness and cooperativeness.

General acceptance of particular pro-social lies does presuppose, though, a shared normative framework; the value of the ‘discretion’ used to temper truth depends, in a democracy, on an assessment of risks and benefits that all concerned would broadly share. For pro-social deception can involve ‘second-guessing’ the hearer’s interests or concerns, and this can sometimes result in a mistake which is then compounded by a failure to recognize their autonomy as agents with a capacity and right to make their own decisions about how to deal with information affecting them. Pro-social deception of whatever form presupposes substantive commitments regarding the particular values associated with sociality. A deception that is pro-social from the perspective of one society – or of one section of a given society – may appear hostile from the perspective of another. A risk in defending a ‘noble lie’ on the basis of social values, in practice, is that this could materially amount to defending the values of some against the values – or, indeed, against the interests or needs – of others.

Perspective plays an important role in assessing whether a communication has harmful effects. It could be argued that whatever justification might be offered for telling a ‘noble lie’ or concealing a ‘toxic truth’, in a democracy the terms of acceptance of that justification, if any, should be in the gift of those affected. Yet doing those things inherently involves concealing from their target audience the very fact that they are being done, and so their supposed justification cannot be assented to by those most directly affected by the strategy. This could be seen as an ‘epistemic injustice’ whereby those subject to the deception are kept in the dark (which is a presumptive epistemic wrong) about a matter that affects them (which is a presumptive ethical wrong), while they cannot assent to the non-disclosure being for their own good (which is an offence against their autonomous agency), and it might not even be for their good by any more objective measure. So, their interest certainly cannot just be assumed to be served by the telling of a noble lie or concealment of a toxic truth, since even if this is purportedly done with benign intent it does not necessarily follow that the target audience would – given the opportunity – regard it as benign. A contrary position would be that any proposition is justifiably assessed on its epistemic merits rather than subjected to finessing by concerns of the ‘greater good’.

So, there is a basic choice to be made between the epistemic framing and the security framing – the one aiming to expose deception, the other sometimes prepared to countenance the use of deception to counter a problem that is seen as more serious than any particular untruth. To adopt the security framing without any epistemic concern is to opt for a conceptualisation that better fits the now widely accepted description of ‘malinformation’ rather than that of ‘disinformation’. For the term ‘malinformation’ was coined precisely to characterize a concern with information that is taken to be harmful or dangerous even though it might be true.

By taking the recommended critical approach, one would avoid branding as cases of disinformation situations in which people have acted in a coordinated way, with innocent intent, in good-faith support of campaign positions that might turn out to have been epistemically sound. Such a safeguard is, on the normative assumptions made here, more beneficial for a democracy and public wellbeing than defending its institutions by means of ‘noble lies’ against the exposure of ‘toxic truths’. For if flawed beliefs become widespread as a result of citizens coordinating communication of their concerns, this could certainly count as a problem, but only of the kind that a democratic society is supposed to be able to deal with, rather than as a threat to democracy. The risk of democracy being undermined from within is arguably greater when the problem of disinformation is taken as a justification for measures to suppress freedom of expression.

Conclusion

The problem of disinformation has been shown to be multi-faceted, and I have argued that the academic literature has still to deal satisfactorily with the second-order problem of consistently and coherently conceptualising the first-order problem. When the distinct elements that are standardly mentioned as attributes of disinformation are coherently related to each other, as illustrated in Section 1, it is possible to formulate a stringent definition of disinformation and a range of cognate ‘information disorders’. In practice, however, the term is used in looser senses, allowing any of the elements to serve as a sufficient criterion for its identification. This leaves scope for arbitrariness in decisions as to what does and does not get to be considered disinformation. This in turn can have seriously deleterious effects when those decisions lead to action, as some of the legal actions currently in progress illustrate.

It has thus become reasonable to pose the question of whether efforts to ‘combat’ disinformation may present a greater threat to the democratic institutions of society than does disinformation itself. It could be argued that the threat posed by disinformation to the legitimacy or security of democratic institutions is overstated in a literature that prioritises ‘combatting’ communications whose content may not have been proven epistemically unreliable but is simply at odds with ‘mainstream truths’ (Hillebrandt 2021, 9). A well-functioning democratic system should not only be able to accommodate dissent but even flourish from it, as was argued at an earlier time even by Cass Sunstein (2003).

Pervasive suspicions of disinformation, according to the argument offered here, could be symptomatic of a society that does not fit the description of a well-functioning democracy. Moral panic about ‘disinformation’ then appears as a response to the symptom that avoids facing up to what the real problem might be. Insofar as complaints focus on the undermining of trust in institutions, they are misdirected if they take the problem to lie with the people whose trust has waned and who have become influenced by dissident perspectives on institutionalised untrustworthiness. Such misdirected complaints may amount to a mystification of the problem. When trust in institutions is lacking, the solution is not to engage in strategic communications aimed at making the population more trusting. The solution is to transform the institutions so as to make them more worthy of trust.

Acknowledgments

An earlier version of this article was presented to the Political Theory Research Group at the University of Edinburgh, and I am grateful for participants’ comments. In developing this final version I have benefited considerably from the anonymous reviewers’ extremely helpful suggestions.

Disclosure statement

No potential conflict of interest was reported by the author(s).

Additional information

Notes on contributors

Tim Hayward

Tim Hayward is a social and political philosopher whose books include Ecological Thought: An Introduction (Polity 1995), Constitutional Environmental Rights (OUP 2005) and Global Justice & Finance (OUP 2019). His current work examines the influence of strategic communications on political knowledge and the development of norms of international justice. Publications in the field of applied epistemology include ‘Three Duties of Epistemic Diligence’ (Journal of Social Philosophy 2019), ‘“Conspiracy Theory”: The Case for Being Critically Receptive’ (Journal of Social Philosophy 2022), ‘The Applied Epistemology of Official Stories’ (Social Epistemology 2023). He is Professor of Environmental Political Theory at the University of Edinburgh.

Notes

1. As an indication of academic priorities, a Google Scholar search (February 2023) for ‘combat disinformation’ yields 1,520 entries; this contrasts with 388 for ‘identify disinformation’, 154 for ‘expose disinformation’ and 38 for ‘analyse disinformation’.

2. Although the term ‘proposition’ has its own nuances, which philosophers discuss in depth (McGrath and Frank 2023), for the purposes of this article it can stand simply for the kind of thing of which truth or falsehood can be predicated, regardless of what material form its communication takes.

3. ‘Aggravated misinformation’ is not to be understood as a subspecies of misinformation. The analytic framework proposed here does not involve a classification of different types of information, as e.g. Wardle’s does. Rather, ‘aggravated’ is what misinformation becomes when it is integrated into a disinformation campaign.

4. For further discussion of this see Hayward (2023a).

5. A Google Scholar search (February 2023) for ‘Russian disinformation’ since 2016 returns 3,460 entries. McKay and Tenove (2021, 707) cite numerous academic studies and government investigations that ‘have concluded that the Russian government pursued a wide-ranging, multi-year disinformation campaign in the United States’.

References