Research Article

The Nature of Visual Disinformation Online: A Qualitative Content Analysis of Alternative and Social Media in the Netherlands


ABSTRACT

Online political disinformation often relies on decontextualized or manipulated images. Visual content can make disinformation more attention-grabbing and credible as it offers a direct index of reality. Yet, most research to date has mapped the salience and nature of disinformation by exclusively focusing on textual content. Responding to urgent calls in the literature, this paper relies on an inductive qualitative analysis of visual disinformation disseminated by alternative media platforms. Based on the analysis, we propose a typology of different applications of visuals in disinformation: (1) signaling legitimacy and adherence to conventional news values through seemingly unrelated images; (2) illustrating authoritative expert consensus through the visualization of disinformation by alternative experts; (3) emphasizing widespread social support for unconventional truth claims through the inclusion of visuals depicting the vox populi; (4) offering decontextualized proof for conspiracy theories and counter-factual claims. This typology intends to inform future empirical research that aims to detect disinformation narratives across different (digital) contexts.

Disinformation – deliberately false information that is created or disseminated with the intention to cause harm or gain profit (e.g., Chadwick & Stanyer, 2022; Wardle & Derakhshan, 2017) – is often visual in nature (e.g., Peng et al., 2023; Yang et al., 2023). As visuals can augment the credibility of false information by offering seemingly authentic proof for disinformation (e.g., Weikmann & Lecheler, 2023), it is crucial to understand how visual information is used in disinformation. Yet, although the importance of visual information has been documented extensively in political communication research (e.g., Coleman, 2010; Iyer et al., 2014; Powell et al., 2015), the integration of a visual or multimodal approach in disinformation research is currently lacking in scope (e.g., Peng et al., 2023; Weikmann & Lecheler, 2023). To offer a more comprehensive overview of the nature of visual disinformation, this paper relies on a qualitative content analysis of visual disinformation spread via online platforms that have been associated with a high degree of false information.

Although visual information has implicitly been incorporated in research mapping the content and effects of disinformation (e.g., Kaiser et al., 2022), we currently lack insight into the functions of visuals in adding legitimacy to online disinformation narratives across different digital settings. Adding to existing content analyses and typologies (e.g., Brennen et al., 2021; Weikmann & Lecheler, 2023; Yang et al., 2023), this paper specifically aims to offer an inductive overview of how visuals may be manipulated, decontextualized, or mislabeled to offer proof and legitimacy for counter-factual claims across different communication settings: Alternative media platforms and social media.

Considering that visuals can offer a direct index of reality through a cue-rich representation of events (e.g., Sundar, 2008), visual disinformation may be perceived as more authentic, credible, and engaging than textual disinformation (Hameleers et al., 2020). Even more so, whereas textual information can be perceived as easier to manipulate and more distant from reality, visuals may be perceived as resilient to doctoring and manipulation. Especially in digital information settings that allow for the cue-rich presentation of disinformation, these advantages may be exploited by malicious actors to make deceptive content seem authentic.

Although existing content analyses have revealed important insights into the prominence of visual disinformation (see e.g., Brennen et al., 2021; Yang et al., 2023), we currently lack inductive research on the use of visuals across issues and platforms. For instance, Brennen et al. (2021) exclusively focused on fact-checked visual disinformation on COVID-19, whereas Yang et al. (2023) limited their quantitative approach to Facebook. To move this field forward, this study relies on an in-depth qualitative analysis of visual disinformation shared via different alternative and social media platforms characterized by high levels of disinformation, as evidenced by how often independent fact-checkers refer to them. Within these platforms, visual information was cross-checked with the verification efforts of fact-checkers, which ensured that the analysis focused on disinformation narratives.

This paper aims to extend existing typologies of visual disinformation (Weikmann & Lecheler, 2023) by revealing the potential delegitimizing and legitimizing strategies underpinning visual disinformation across contexts. In this paper, we understand visual disinformation narratives as the interplay between visual and textual information used to make deceptive inferences about reality or events. Hence, we do not focus on deceptive or fabricated visual information in isolation, but consider its embedding and re-contextualization in more comprehensive disinformation narratives. For example, beyond the fabrication or manipulation of images, we explore the ways in which authentic visuals are deceptively re-contextualized by adding a manipulated and false textual interpretation to existing footage taken from another setting. We hope that this comprehensive approach to visual disinformation inspires research and interventions to further assess the multifaceted role of visuals in disinformation campaigns.

Theory

The Multimodal Nature of Political Communication

Visual information plays a crucial role in today’s political communication landscape. As cases in point, the ongoing war in Gaza and the Russian invasion of Ukraine in 2022 have been covered in highly visual manners across (online) media. In this setting, citizens have been exposed to arousing and shocking visual content on the victims of war, even though much of the visual evidence turned out to be untrue or taken out of context. As documented in previous literature, the attention-grabbing and emotional nature of visuals may affect public responses to salient issues such as wars and armed conflicts (e.g., Greenwood & Jenkins, 2015; Powell et al., 2015), which makes it relevant to study the role of visuals in political communication, including deceptive contexts of disinformation (e.g., Dan et al., 2021).

The power of visuals in (political) communication has often been approached from a multimodal framing perspective (e.g., Geise & Baden, 2015; Powell et al., 2015). The multimodal framing effects theory in particular postulates that visuals can amplify the effects of textually framed information (e.g., Geise & Baden, 2015). This amplifying effect occurs because visuals are easy to understand, attention-grabbing, and able to establish an emotional connection with recipients (e.g., Powell et al., 2015). Furthermore, as visuals offer a seemingly unaltered and direct representation of reality (Messaris & Abraham, 2001), the audience is likely to perceive them as highly credible and authentic – which is especially important to consider in contexts of deception (e.g., Hameleers et al., 2020).

The role of visuals in deceptive information environments has been documented extensively in research on propaganda (e.g., Seo, 2014). For example, Seo’s (2014) content analysis of social media amidst the 2012 Israel-Hamas conflict focused on the roles of visuals in evoking emotions and demonstrating the atrocities of warring sides. Yet, to date, research on the role of visuals in deceptive political communication is mostly applied to specific events (i.e., armed conflicts or pandemics), restricted to specific social media platforms (i.e., Twitter, WhatsApp, or Facebook) or quantitative in approach. Moving beyond existing typologies and analyses (e.g., Brennen et al., 2021; Garimella & Eckles, 2020; Yang et al., 2023), this study aims to offer an inductive exploration of how visuals are used in disinformation narratives across issues and contexts of information that offer a discursive opportunity for disinformation: Alternative media platforms and the references to such platforms on social media.

In this paper, we regard disinformation as all false information that is created, promoted, or de-contextualized with the intention to make profit or cause harm (e.g., Chadwick & Stanyer, 2022; Freelon & Wells, 2020; Hameleers & Yekta, 2023; Wardle & Derakhshan, 2017). We more specifically refer to visual disinformation narratives as false or deceptive storylines and statements that are constructed through both textual and visual elements. In reconstructing how disinformation makes claims about truth and reality, we analyze the interaction between textual and visual elements within disinformation statements (i.e., an article or social media post). Concretely, within a disinformation narrative, this paper explores the role that visuals play in legitimizing or contextualizing deceptive statements. This interaction implies that, in visual disinformation, visuals are not always manipulated or fabricated: Real footage or screenshots of real social media posts and expert analyses may be re-contextualized deceptively by pairing them with an intentionally false textual storyline that alters their original meaning.

Offering Proof for Deceptive Claims Through Visual Disinformation

In this paper, we broadly define visual disinformation as the deliberate decontextualization, fabrication, or manipulation of visual information (e.g., Dan et al., 2021; Weikmann & Lecheler, 2023). Visual disinformation may consist of various interactions between textual and visual cues. Although visual information itself can be fabricated or manipulated, such as in the case of AI-generated visuals or deepfakes (e.g., Dan et al., 2021), disinformation narratives can also be constructed by deceptively re-interpreting authentic footage taken from another setting. Thus, visual disinformation can be constructed by manipulating or fabricating images or videos, deceptively re-contextualizing existing images or videos, or combining both text- and image-based manipulation (in that case, both the visual elements and the textual interpretation are based on fabrication or manipulation).

The deceptive re-contextualization of authentic images and videos plays a central role in visual disinformation narratives, and may be even more prominent than manipulated or AI-generated visual disinformation (e.g., Brennen et al., 2021; Weikmann & Lecheler, 2023). Practices of re-contextualization may, for example, involve the use of decontextualized screenshots of authoritative experts’ social media posts or original footage of warzones taken from another place. In these applications, unmanipulated visuals may be paired with a deceptive textual narrative that alters the meaning of the original image. In line with this, we argue that visual disinformation cannot be understood without taking into account the (deceptive) textual interpretations of visuals, which often come in the form of textual comments surrounding visuals on social media, or the de-legitimization of authentic footage on alternative media platforms.

Towards a Typology of Visual Disinformation

The variety in the production, processing, and effects of visual disinformation has been captured in a recent literature synthesis by Weikmann and Lecheler (2023). Specifically, on the production side, these authors distinguish between two important axes: The level of sophistication (low versus high) and the modal richness of visual information (still images, moving images, or both). In the low sophistication and low modal richness quadrant, for example, we can find the deceptive pairing of a real image taken outside of its context to offer proof for a deceptive claim (Cao et al., 2020). A more sophisticated use of still images would be to manipulate features in the image, for example, by deliberately cropping a photo to make a crowd seem smaller or bigger than it actually was, a tactic observed during the inauguration of Donald Trump in 2017 (Thomson et al., 2020).

In the typology proposed by Weikmann and Lecheler (2023), deepfakes would fit in the most sophisticated and richest modality quadrant. Deepfakes are sophisticated because creating a highly realistic deepfake (at the time of writing) requires substantial computational power, time, and skill (e.g., Westerlund, 2019). They are rich in modality as they combine fabricated speech with moving images of a targeted actor – herewith offering a cue-rich and seemingly authentic representation of reality (Sundar et al., 2021). To date, deepfakes that require complex technological skills and/or resources are not often found in political communication (e.g., Brennen et al., 2021). However, due to the direct index of reality they can offer (i.e., by making it seem as if a real person expresses things they never said), they may have a large impact. In addition, fast-paced developments in AI may make them more prominent in the future.

Next to highly sophisticated modes of (audio)visual deception generated by AI, there are also forms of visual and video-based disinformation that rely on lower-tech affordances, such as cropping, photoshopping, or decontextualizing images and videos. An example of such less sophisticated video-based deception is the cheapfake, which is not generated by AI but relies on the deliberate decontextualization of existing videos to offer proof for misleading claims (e.g., Dan et al., 2021).

Although the visual disinformation space mapped by Weikmann and Lecheler (2023) offers a crucial interpretative frame for the application of visual disinformation, only a few studies have empirically examined the use of visuals in disinformation (e.g., Allcott & Gentzkow, 2017; Brennen et al., 2021; Yang et al., 2023). For this reason, we conducted an in-depth qualitative content analysis of the use of visual information in disinformation published on alternative digital platforms in the Netherlands. Although this endeavor is not meant to be representative of the entire disinformation landscape, we aim to offer a detailed inventory of the different ways in which likely spaces of disinformation may use visuals to offer proof for deceptive claims across different issues.

Visual Disinformation on Alternative Digital Platforms in the Netherlands

Visual disinformation is likely to be present on social media and alternative digital platforms (also see e.g., Brennen et al., 2021; Peng et al., 2023). At the most general level, alternative media can be understood as reactive platforms (e.g., Haller et al., 2019) that offer an alternative perspective on the information presented on established media platforms. Despite the broadness of the category of alternative media, most empirical research has focused on hyper-partisan or radical right-wing alternative media (e.g., Heft et al., 2021). Considering that this sub-category of alternative media can be associated with counter-factual knowledge (Heft et al., 2021), we focus on such platforms in this paper. Hyper-partisan alternative media are associated with information that attacks, delegitimizes, and challenges the established order and conventional knowledge (e.g., Ylä-Anttila, 2018). This oppositional stance is in line with disinformation narratives, which often contain an anti-establishment narrative (e.g., Bennett & Livingston, 2018). Hyper-partisan alternative media have also been associated with delegitimizing populist viewpoints that circumvent factual knowledge and expert interpretations by focusing on common sense and the experiences of ordinary people allegedly neglected in established information coverage (Saurette & Gunster, 2011).

The delegitimizing perspective of hyper-partisan alternative media does not imply that they exclusively disseminate disinformation. However, the tendency to circumvent and attack conventional knowledge whilst selectively quoting sources that align with platforms’ ideological perspectives makes alternative media a likely platform for the dissemination of disinformation. Hence, on these platforms that are governed less by conventional journalistic norms, the confirmation of delegitimizing narratives may be a stronger motivation than the dissemination of accurate and evidence-based information. For these reasons, this study analyzes disinformation narratives disseminated by hyper-partisan alternative media.

Context and Research Questions

It is important to explicate the context under study, given that the nature of disinformation may be contingent upon a country’s setting (e.g., Rojas & Valenzuela, 2019). The study is situated in the Netherlands, a free democracy where relatively high levels of press freedom and lower levels of polarization may make citizens more resilient to disinformation (e.g., Humprecht et al., 2020). Yet, high levels of populist communication and the frequent use of “fake news” labels by successful right-wing populist movements offer a strong opportunity structure for disinformation narratives that attack conventional knowledge. In this setting, we can expect that populist accusations of disinformation and the emphasis on people-centric constructions of reality prevail. Considering that anti-establishment perspectives and delegitimizing narratives based on populism and conspiracies are widespread on alternative digital media in the Netherlands, we consider this a relevant context for understanding how visual disinformation is constructed in alternative and social media spaces.

To guide our analysis, we take existing conceptualizations of visual disinformation into account. More specifically, the distinction between different levels of sophistication and the richness of modality (Weikmann & Lecheler, 2023) will act as sensitizing concepts to guide the inventory of visual disinformation. Although existing conceptualizations guide the inductive analysis, they are not treated as exhaustive or saturated concepts. Rather, the open-ended analysis aims to extend and look beyond these categories to generally answer the question of how visual disinformation is constructed, and what functions visuals serve in deceptive information spread via hyper-partisan alternative media. The following more specific research questions are used to structure this inductive endeavor:

RQ1: In what ways is visual information constructed and re-interpreted in disinformation disseminated via hyper-partisan alternative media in the Netherlands?

RQ2: What roles do visuals play in signaling evidence and credibility for deceptive statements in disinformation narratives?

Methods

Data Collection

To answer these research questions, we rely on a qualitative analysis of four different online alternative websites in the Netherlands that have frequently been flagged by independent fact-checkers for disseminating disinformation and conspiracies. The inclusion criteria for relevant platforms required that messages and posts disseminated by the outlet were regularly refuted by fact-checks (relative to established media sources and other online news platforms). Additionally, because we wanted to analyze visual disinformation that is prevalent in people’s (alternative) media diets, the selected outlets had to be popular in terms of their unique monthly visitors. We additionally aimed for variety in the sample composition: We included platforms with a clear conspiracist and counter-factual perspective on reality (Niburu.co and 9fornews), as well as platforms with a hyper-partisan right-wing agenda (blckbx.tv) or an anti-establishment perspective (Café Weltschmerz). Considering that extant literature indicates that the visual nature of disinformation is most likely to be expressed on social media (Peng et al., 2023; Yang et al., 2023), we analyzed the original websites supplemented by the Facebook and Twitter accounts of the included alternative media sources. The posts and articles were collected in 2021, and spanned the entire year. The reason to focus on this period is that fact-checking initiatives and concerns about disinformation were highly salient in this year (see e.g., Newman et al., 2021). In addition, although conspiracies and disinformation around COVID-19 were prominent at the beginning of 2021, a more diverse set of narratives and issues was covered during the later months of this period. Therefore, the twelve months included in the sample frame allow for the inclusion of a diverse set of disinformation narratives during a period in which concerns about and responses to disinformation were prevalent.

These sampling criteria aimed to incorporate “most likely” cases of online platforms disseminating disinformation on a variety of topics, such as COVID-19, climate change, and immigration. We used fact-checking platforms to verify that the disinformation narratives (i.e., the combination of visuals and textual claims) included in the sample were indeed flagged as “completely false.” Specifically, data collection started with a random sample of 50 articles for each of the four alternative media platforms. These articles were closely read during a first round of familiarization. For articles that contained both visual information and statements that could be falsified or validated, additional fact-checking information was consulted. Only those articles (n = 120) that were rated as disinformation by independent fact-checks were retained for further analysis. In analyzing these articles, we considered theoretical saturation (e.g., Glaser & Strauss, 2017). Concretely, we initially analyzed 50% of the sample frame. Then, additional samples of ten posts per outlet were analyzed in steps, until all 120 articles had been covered. The findings of this additional analysis were contrasted with the developed and emerging themes and categories of the initial data analysis. Because this additional analysis did not yield substantially new insights into the nature of visual disinformation, we conclude that theoretical saturation within the selected outlets was achieved.

For the social media data, the same stepwise procedures were followed. We focused on Facebook and Twitter for two reasons: (1) extant research reveals that these are likely platforms for visual disinformation narratives (e.g., Brennen et al., 2021; Yang et al., 2023), and (2) the alternative media platforms are active on these social media spaces. In addition, Facebook likely represents the disenchanted audience of the anti-establishment platforms, whereas Twitter may afford a more established style of communication in which alternative truth claims are legitimized through the inclusion of experts and seemingly authentic evidence. Data collection started with 25 posts per platform/social medium, which were then cross-checked with fact-checkers. We again analyzed the data in two phases to assess the saturation of themes. The total sample size for the final analyses of visual disinformation consisted of 120 articles from alternative media platforms and 140 social media posts published in the Netherlands in 2021. The unit of analysis was the entire article or post, which meant that we could analyze the functions of the visual content in the context of its textual pairing (also see Weikmann & Lecheler, 2023).

Analysis Strategy

The data were analyzed according to the grounded theory approach’s stepwise procedures of data reduction (e.g., Charmaz, 2006). We applied open coding, focused coding, and axial coding whilst constantly comparing emerging themes to new data. In line with the principles of visual content analysis, we aimed to unpack the meanings signified by visual content (Bock et al., 2011). The unit of analysis was the entire visual and the textual description surrounding it. We decided to include the textual context of selected visuals because extant research indicates that visuals are often used as proof for the textual disinformation narrative (e.g., Brennen et al., 2021). Hence, the analyses regarded disinformation narratives as the interaction between textual and visual elements, and explored the role that visuals played in legitimizing, (re)contextualizing, extending, or enriching the verbal elements of disinformation. In the analyses, we considered that narratives are bound to (socio-political) contexts and refer to a reality that is reconstructed through the integration of communication elements (Labov, 1972).

First, open coding was applied. During this step of analysis, descriptive labels summarizing the essence of the visual’s meaning and function related to the disinformation narrative were attached to the transcripts. These open codes summarized the content of the visual as well as its relation to the narrative. As an example, a visual depicting a graph of rising immigration was coded in the context of a disinformation narrative on depopulation and assigned the label “Image as evidence for depopulation: Decontextualized graph depicts higher rates of non-native population over the years.” During open coding, researcher-derived as well as in-vivo coding was combined in order to stay close to the materials whilst adding analytical depth to the descriptions of the data.

During focused coding, individual open codes were grouped into higher-order categories and merged if they were similar. During this step, contextual information was reduced in order to allow for dimension building and comparison. An example of focused coding is the grouping of all open codes related to the illustration of support for disinformation narratives by incorporating screenshots of Tweets. Here, we aimed to capture the variety in the different ways in which evidence and support were visualized, which resulted in a distinction between using visuals to signal the support of ordinary people versus authoritative expert sources.

Finally, we explored the connections between emerging dimensions and themes during axial coding. For this final coding step, we explored how the different functions of visuals in disinformation were presented alongside each other in single narratives, and how they built on each other to offer an index of authenticity and credibility. Importantly, the interaction between visual and textual elements formed an important part of the analysis. Thus, during the final step of coding, understanding the functions of visuals in relation to the text embedded in the deceptive context revealed in what implicit ways visuals legitimized, recontextualized, challenged, normalized, or otherwise altered the meaning of the textual cues (and vice versa). Concretely, visual information was analyzed by coding how the message conveyed in the disinformation was amplified and contextualized through the use of visuals. We took existing literature into account to guide this analysis (e.g., Brennen et al., 2021), but were also open to functionalities of visuals that were not mapped in previous research.

In the results section, we discuss the main themes resulting from the three steps of analysis. It should be noted that the patterns and meanings presented are not representative of the disinformation landscape on social media or alternative media: We focus on a limited number of cases analyzed in a specific timeframe. Although claims on the relative dominance of certain functions and applications of visual disinformation are based on the saturation and dominance of codes and themes resulting from the analysis, these claims are not intended to be generalized; rather, they offer insights into how certain functions and applications of visual disinformation and decontextualization are more or less dominant in the studied contexts.

Validity and Reliability

Peer debriefing was applied to the data collection procedures and the analyses. For peer debriefing, a second researcher familiar with qualitative research was involved in all three coding steps. Initially, both researchers jointly coded five pieces of visual disinformation that were not included in the main analysis. The research questions and sensitizing concepts were used to guide open coding. After reaching agreement on the labeling procedures (i.e., level of specificity, connection between visual and textual elements), both researchers independently coded another five articles. After this, the researchers discussed (dis)agreements. Only minor discrepancies were present, which mainly related to different coding styles (one coder applied more descriptive labels, whereas the other connected labels more directly to the functions of the visuals). These differences did not impact the main conclusions drawn from the analyzed materials.

Results

Images Used to Mimic Established News Formats

One prominent application of visuals in disinformation across all outlets was showing an image as the background to a headline in the overview of news items (see supplemental materials Figure S1). Most images were taken from a generic setting that did not refer to the context of the textual statements (e.g., showing an allegedly lying politician in parliament, or a stock image of needles in the context of COVID-19 disinformation). In this application, the visual elements enhanced the legitimacy of deceptive text-based storylines by embedding counter-factual claims into seemingly legitimate news formats. Yet, these news formats did not follow the routines of quality journalism and established news, which makes the contextualization of the counter-factual statements as news a deceptive practice.

In this context, images were used to highlight the authenticity and trustworthiness of the false information: The textual disinformation narrative was legitimized by pairing it with visuals that placed deceptive information in a context of legitimate mainstream news. As part of this application, the images did not relate to the disinformation texts by offering (false) visual proof of events, but placed disinformation in a legitimate and seemingly trustworthy and engaging visual news format. Textual and visual elements together constructed the disinformation narrative: The text-based elements forwarded counter-factual claims (i.e., climate change is a hoax made up by elites in power), and the visual information placed these claims into a seemingly authentic and legitimate news format.

Screenshots of Social Media Used to Signal Social Support and (Expert) Consensus

All alternative media platforms included in the sample used screenshots of social media posts. These screenshots often contained Tweets or Facebook posts of prominent public figures or alternative authorities, such as doctors and experts opposing conventional knowledge. The screenshots likely functioned to signal the legitimacy of counter-factual narratives on vaccination, climate change, or immigration. By visually depicting and embedding statements of opinion leaders or experts with many followers, the alternative media platforms emphasized the widespread authoritative and non-marginal support for counter-factual narratives. One example is an elaborate item on Niburu.co in which many different videos and screenshots of alternative doctors and experts denying COVID-19 were included (see supplemental material Figure S2 for an example). The videos were introduced with the following headline: “Awaken doctors speak the truth about the Coronavirus.” The video fragments were used as proof that there is scientific consensus that COVID-19 did not exist, as illustrated with the following quote describing the videos’ content: “If there is something that can wake up the world about the real situation on COVID-19, it is the statements and stories of scientists and doctors.”

Images of the accounts of experts and doctors were also used to legitimize strong anti-establishment narratives and positions, for example, the statement that doctors and nurses should be prosecuted for murder in the context of COVID-19 (see supplemental materials Figure S3). In an image used by Café Weltschmerz, a visual showing an alleged authority source (i.e., an organization representing the rights and health of children) was used to emphasize how attorneys were offering evidence to prosecute doctors and hospitals blamed for murder. In a similar vein, the platform 9fornews used imagery of the ex-president of Pfizer to emphasize how COVID-19 was allegedly used as a weapon for “evil causes.” The following quote from the interview was visually depicted in one of the articles by displaying the deceptive textual statement as a screenshot of a social media post: “They lie about variants so that they can make harmful boosters you do not need. I think they will be used for evil causes.”

Next to screenshots and videos of social media posts depicting more prominent authority figures and opinion leaders, the alternative platforms Niburu.co and blckbx.tv also included images of social media posts of “ordinary citizens,” which were interpreted as popular support for alternative perspectives on reality allegedly kept hidden from the public. Often, more than one screenshot was included. The content of the social media posts captured in these screenshots was often negative, emotional, and uncivil, which contrasts with the more conventional, journalistic, and distant reporting styles used by the alternative media platforms themselves. To illustrate this uncivil and clearly anti-establishment position, the following message was presented as a screenshot on Niburu.co in response to a video of the arrest of someone with an immigration background: “They should kick these scumbags out of a plane above the Atlantic.” This screenshot was placed next to an image allegedly depicting facts about immigration by demonstrating increasing proportions of foreign populations in Amsterdam, Rotterdam, and The Hague. The different images presented alongside each other in the same disinformation article thus had different functions: The social media visual illustrated popular support for an anti-immigration narrative, and the graph was used to demonstrate facts about rising numbers of immigrants throughout the years.

In several cases, the alternative media platforms also included screenshots of other (alternative) media platforms to illustrate the support for the positions they were communicating. Blckbx.tv, for example, included a screenshot of a Twitter post by a Dutch television channel in which the growing support for conspiracy theories among Dutch citizens was celebrated as a growing consciousness about reality in Dutch society (see supplemental materials Figure S4). In other cases, screenshots and videos of other (international) media channels, including Fox News and Breitbart, were used to signal a shared understanding of certain anti-establishment narratives among other media platforms opposing and challenging conventional knowledge.

Just like the stock images that were used to illustrate credibility and newsworthiness, the screenshots or videos were not manipulated, but visualized the disinformation statements communicated by different actors (i.e., ordinary citizens with lived experience, or alternative experts legitimizing counter-factual claims). Although the visuals often contained text as they depicted social media messages, the alternative media platforms did not integrate the words of alternative experts or ordinary citizens into their articles, but rather presented them separately from the text as images. Thus, deceptive textual statements, for example forwarding conspiracy theories, were often presented in a visual format by using screenshots of statements originally voiced on other platforms. By using a visual format to present counter-factual claims, platforms may circumvent the (algorithmic) detection of disinformation that is mostly based on language models and the flagging of hateful or false speech.

Arguably, the format of screenshots of social media posts and videos linking to other people’s views afforded the platforms the ability to communicate more extreme, uncivil, and polarizing positions than conventional news platforms whilst maintaining legitimacy: They did not communicate these positions themselves, but simply showed the interpretations of alternative sources of knowledge and the vox populi. This function of visuals in disinformation extends beyond existing conceptualizations and analyses that have mainly looked at (decontextualized) visuals as evidence for deceptive statements (e.g., Brennen et al., 2021). Hence, our findings indicate that, in the context of alternative media platforms, visuals can be used to embed the (at times extreme and uncivil) statements of citizens and opinion leaders within seemingly objective and neutral information that mimics legitimate news. Although textual meanings were conveyed, the inclusion of screenshots of original posts by expert sources and ordinary citizens arguably allowed the platforms to maintain a legitimate and distant position whilst signaling credibility through expert references, both in the form of people with lived experience and alternative experts legitimizing counter-factual claims.

Decontextualized Images As Proof for Alternative Realities

In some cases, the images themselves offered proof for the false claims expressed by alternative media platforms. In this application, visuals were deliberately taken out of their original context to legitimize counter-factual claims. Thus, authentic images were deceptively used in a different context as evidence for textual disinformation claims. To offer a concrete example, one message published by Café Weltschmerz used an image of an ultrasound showing the fetus of an allegedly “vaccinated pregnant woman” as evidence that the babies of vaccinated women have a lower likelihood of surviving. The ultrasound showed irregularities in the fetus, which were highlighted with a marker. However, no reference was made to the source, date, or origins of the image. The image was falsely associated with the disinformation statement that vaccination kills babies, and offered a fear-mongering illustration of the allegedly dangerous side effects of being vaccinated.

Another prominent example comes from Niburu.co, where a decontextualized graph was used as evidence for population replacement in relation to allegedly rising immigration and hidden schemes of the government (see supplemental materials Figure S5). Although it is not clear how the statistics comparing 1960 to 2020 were constructed, circles around the high numbers of non-native citizens in three cities in 2020 were interpreted as evidence that the Dutch government is secretly replacing the native Dutch population. Although some statistics in the decontextualized graph may be accurate, the interpretation of the selectively visualized data clearly misrepresents and decontextualizes facts in order to legitimize a prominent claim in conspiracy narratives connected to the Great Reset.

The Visual Representation of Disinformation on Social Media

In order to reach theoretical saturation and enhance the richness of the analysis, we additionally collected data from the Twitter and Facebook accounts of all four media platforms. The overall aim of this additional analysis was to explore the exhaustiveness of the developed categories in the different context of social media. The analysis reveals that the three main applications of visual disinformation found on the official pages of the alternative media platforms were also present on social media. To recap, the analysis of alternative media revealed that visuals were used to frame counter-factual narratives as legitimate news, to signal social support or illustrate expert authority, or to offer proof for deceptive claims. We do, however, find two additional applications of visual information in disinformation narratives on social media: the use of memes and visuals explicating political or ideological positions, and the visualization of people-centric truth claims.

First, memes were used to present ideologically biased textual disinformation against the background of a visual. In most cases, this format was used to voice extremist statements, conspiracy theories, or statements attacking scientific consensus. Because the visual embedded text-based statements in a background that fits the affordances and gratifications of the targeted online audience, this application may be driven by a motive similar to the use of visuals to present disinformation in legitimate news formats on alternative media platforms. One example from 9fornews is a stock image of a power plant that serves as the background for the following text: “100 prominent scientists say to parliament: there is no climate change. Read the emotional responses here.” The disinformation posts presented in this form did not offer any sources for the alleged scientific consensus, and did not explicate who these scientists are or what their relevant field-specific knowledge actually is.

Second, whereas alternative media platforms often relied on visuals to mimic established news formats (i.e., by quoting experts through screenshots, by using stock images, or by showing evidence for false statements), disinformation on social media was often grounded in the lived experiences of ordinary people. Thus, the legitimization of truth claims through visualized expert knowledge was constructed differently across contexts. On social media, for example, pictures of ordinary citizens victimized by the COVID-19 policies of the established order were often shared, demonstrating how the allegedly corrupt elites failed to represent the ordinary people during crises. Thus, although alternative media adhered more closely to norms of objectivity and legitimacy by quoting experts and evidence, disinformation narratives on social media were more detached from expert references and empirical evidence: Visuals illustrated closeness to the experiences of ordinary citizens victimized by the established order, a delegitimizing narrative that was in many cases backed up by quoting right-wing populist actors.

Member Checks

To triangulate the overview of the nature of visual disinformation on alternative and social media, we conducted member checks. Specifically, based on the inductive overview of the different applications and potential functionalities of visuals in disinformation, we held semi-structured interviews with eight social media users (see Online Appendix B for details on recruitment, measurement, and interview setting). In the interviews, we specifically asked social media users (1) whether they recognized the different ways in which visual disinformation was constructed and (2) their threat perceptions related to the different forms of visual disinformation and the decontextualization of images.

The findings indicate that six out of eight participants recognized the ways in which visuals were used in a deceptive context on social media and on alternative news websites. Yet, they did not clearly differentiate between the specific applications identified in this study; they mostly referred to the use of visual images as proof for deceptive narratives. As one social media user explained: “I often encounter images taken out of context. I see this a lot in coverage on the Israel war, where old photos are used to make us angry.” Most participants did not recognize the embedding of expert references or social support through screenshots, or the legitimization of counter-factual claims through a visual background of seemingly credible news.

When confronted with the typology, social media users expressed strong concerns about the ability to distinguish true from false information. All participants expressed the sentiment that the subtle forms of manipulation and decontextualization found in this study are dangerous, given that they are difficult to detect and seemingly real. As one social media user explained: “This makes it impossible to know when they are lying. If real information is used to mislead us, how can we know what is real? This is impossible!” There was a general consensus among interviewees that visual decontextualization is harmful, as it further increases uncertainty and distrust. Some participants (3/8) also referred to a third-person effect in susceptibility. As one social media user said: “I am mostly worried about people that are easily deceived. They may not think twice, and accept it is true when they see a photo or video that offers proof for claims that never happened.”

Discussion

Our main findings extend beyond existing literature indicating how decontextualized visuals are used to offer proof for textual disinformation narratives (also see Brennen et al., 2021; Peng et al., 2023; Weikmann & Lecheler, 2023). Beyond the application of visuals as proof for false claims, our findings point to the complexity of decontextualization in visual disinformation. Importantly, although the placement of the visual information and the accompanying textual re-contextualization made authentic visual content deceptive, the visual elements of disinformation narratives were often not fabricated, manipulated, or altered. Our findings specifically indicate that re-contextualized authentic visual content interacts with false textual statements to signal the authenticity, credibility, and objectivity of counter-factual claims. This interaction can take on various forms. First of all, counter-factual statements and disinformation narratives were placed in legitimate information environments through the use of deceptive visuals. This was done by using visuals mimicking established news environments, or by embedding screenshots of social media posts with extreme and radical issue positions in a news-like environment.

Second, visuals were used as evidence for false claims and causal connections that were referred to in textual interpretations of the visuals. As such, visuals depicted decontextualized empirical evidence, signaling credibility through the depiction of hard facts (i.e., numbers on immigration). Although the evidence presented in the visual information was not manipulated and was accurate in its original context, the textual narratives deceptively re-contextualized the visual information to make it fit as evidence for counter-factual claims. This application resonates with historically influential propaganda techniques, such as quoting inappropriate sources of expertise, making generalizations, misrepresenting facts, or suggesting dubious causal claims (Conserva, 2003). Visual information was used to illustrate such objectivity claims in a vivid and credible manner. For example, beyond referring to inaccurate connections between events A and B, visual proof for the occurrence of both events separately was used to suggest that they were connected. In this case, the textual elements of the disinformation narrative served to connect the unrelated events depicted through the visual elements.

Third, visuals were used to illustrate the authenticity of expert references, which included ordinary people as a source of lived experience (also see Brown, 2009). For example, the common sense, opinions, and values of lay persons were often embedded in disinformation narratives by showing screenshots of the social media posts they published elsewhere, thereby signaling the social support for and legitimacy of false claims. In a similar way, more traditional forms of expertise based on field-specific authoritative knowledge were illustrated by including images of alternative experts, or by embedding screenshots of their communication on other platforms (i.e., social media, alternative platforms) into disinformation narratives. In line with the premise that visuals are used to enhance the credibility of disinformation (e.g., Weikmann & Lecheler, 2023), our findings suggest that embedding visual representations of expert claims can strengthen the perceived authenticity and credibility of expert sources.

Confirming the findings of Brennen et al. (2021), Peng et al. (2023), and Weikmann and Lecheler (2023), we did find that images can be used to offer visual proof for specific disinformation narratives. We found this application in the context of both COVID-19 and immigration disinformation, where the visual information itself was not fabricated or doctored, but rather taken from a different context to offer evidence for conspiracy theories related to COVID-19 (it is a biological weapon that will deform babies) and immigration (the Great Replacement theory). In these applications, there was no reference to the source or context of the images. The deceptive out-of-context placement of visuals did, however, play only a marginal role in the disinformation narratives analyzed in this paper.

In line with this, the most prominent application of visuals for alternative media that may be overlooked in existing conceptualizations is the use of images to add legitimacy to disinformation articles presented as conventional news, and to communicate social and authoritative support for anti-establishment narratives. In our sample of alternative hyper-partisan media, this was mainly done by associating deceptive claims with images of alternative doctors and experts, or with the social media feeds of members of the public and opinion leaders. Although uncivil or even hate speech was not used by the alternative media platforms themselves, the inclusion of screenshots of ordinary citizens voicing such sentiments afforded the platforms the ability to communicate such viewpoints more indirectly. Arguably, platforming uncivil speech in a visual manner allowed the platforms to maintain legitimacy, as they were merely disseminating the people’s views on the issues covered in the media.

Another main finding is that disinformation narratives typically relied on a combination of different visuals with different functionalities. On the landing page of the newsfeeds, images were mainly used to grab attention and mimic the news routines of established formats. These images were typically very general (i.e., stock images) or showed the actors that were offered a stage or delegitimized (i.e., political actors or alternative experts). In the main body of the articles, images were mainly used to signal social support or expert consensus through the inclusion of screenshots of (social) media posts. The context of these posts was not referred to, and the statements were selected strategically to offer direct support for the deceptive claims made in the disinformation narrative. A key implication for the literature is that future research should assess how different visuals may contribute to the legitimacy of disinformation narratives, and should not regard visuals in isolation from their (multimodal) context in disinformation narratives.

On a more general level, our inductive findings lend support to a typology of visuals in disinformation that consists of four major dimensions, each representing a distinct function of visuals paired with deceptive narratives: (1) signaling legitimacy and adherence to conventional news values through seemingly unrelated images; (2) illustrating authoritative expert consensus through the visualization of disinformation by alternative experts; (3) emphasizing widespread social support for unconventional truth claims through the inclusion of visuals depicting the vox populi; and (4) offering decontextualized proof for conspiracy theories and counter-factual claims. Although more research is needed to validate this typology across contexts, it aims to serve as a starting point for the (quantitative) assessment of the functions of visuals in disinformation narratives that aim to offer legitimacy to counter-factual claims. To facilitate future research endeavors, Online Appendix A includes a tentative proposal for indicators and considerations that can be assessed in future research.

A major contribution to existing content analyses and typologies (e.g., Peng et al., 2023; Weikmann & Lecheler, 2023) is our finding that the nature of visual disinformation depends on the context of communication. On alternative media platforms, objectivity and truth claims were mostly signaled by visualizing expert knowledge, distancing the platforms from the vox populi through screenshots of opinions voiced elsewhere, and referring to empirical evidence (also see Hameleers & Yekta, 2023). On social media, in contrast, memes were often used to illustrate disinformation statements and to express an emotionalized anti-establishment perspective on issues more directly. In that sense, different from alternative media platforms, social media platforms created a context for epistemic populism (Saurette & Gunster, 2011): The construction of truth claims circumvented expert knowledge and conventional references to empirical evidence, and rather forwarded opinions, feelings, and common sense as the basis of delegitimizing truth claims.

The context-bound nature of the functionalities of visuals signaling evidence and legitimacy for disinformation narratives has important implications for policy and practice. First of all, media literacy programs aiming to induce resilience to visual disinformation should acknowledge that manipulation and fabrication are not the only forms of visual disinformation: The context in which images are presented is crucial, as authentic images are often deceptively re-interpreted to alter their meaning in disinformation narratives. Interventions should therefore raise awareness about the extent to which visual evidence is placed in the right context and whether it resonates with the textual information. Second, it is important to raise awareness about the functionality of visuals in disinformation across sources and (social) media platforms. It is insufficient to warn people in general terms about how visuals are (mis)used in disinformation, as their application depends on the platform.

It is useful to contextualize the findings in the socio-political setting of the Netherlands (also see Rojas and Valenzuela (2019) on the need to embed findings in context). Although relatively high levels of media trust, high press freedom, and low levels of polarization make the Netherlands a relatively resilient setting for disinformation (Humprecht et al., 2020), the strong resonance of populist ideas in the media and politics (e.g., Aalberg et al., 2017) may offer a discursive opportunity structure for epistemic populism that circumvents elite experts. Especially on social media, the application of visuals to establish closeness to the experiences of ordinary people corresponds to the pervasiveness of populist ideas in Dutch society. However, given the delegitimizing nature of disinformation narratives across different national settings (e.g., Bennett & Livingston, 2018), we believe that the typology is transferable to other settings that have witnessed shifts toward factual relativism (e.g., Van Aelst et al., 2017) and an emphasis on the legitimization of alternative facts in alternative media spaces (Hameleers & Yekta, 2023).

This study has some noteworthy limitations that may be addressed in future research. First, the findings on the application and functions of visuals in disinformation mainly stem from the analysis of secondary data, which tells us little about the intentions of communicators or the effects that different applications of decontextualization have on recipients. To offer more insight into the intentional dimension of disinformation, future research could rely on in-depth interviews with disinformation disseminators. As this may be a difficult-to-reach population, it may additionally be worthwhile to analyze disinformation narratives over time and as part of orchestrated campaigns: Do certain applications of decontextualization peak at key events (i.e., elections, wars, terrorist attacks, climate change summits), and do they co-occur with other forms of deception? Is there a wider network involved in the dissemination of the various applications of decontextualization?

As our findings tell us little about the effects of visual disinformation, it is also relevant to explore the extent to which the different applications of visual disinformation are perceived as credible by recipients, and which audience segments are most vulnerable to the different applications of decontextualization mapped in this paper. Experimental studies manipulating the varieties of decontextualization found in this paper may be useful, supplemented by linkage studies that connect content analyses with detailed media exposure measures and beliefs related to the acceptance of disinformation narratives. We further suggest that future research compare the functions and applications of visuals across platforms: YouTube, Instagram, and TikTok may afford visual disinformation that is richer in modality (e.g., cheapfakes) compared to the platforms included in this study. Finally, although the timeframe of 2021 included disinformation narratives surrounding many different issues, such as climate change, immigration, and COVID-19, the beginning of 2021 in particular was characterized by a strong focus on COVID-19. We therefore suggest that future research extend the timeframe to include a wider variety of issues dominating the (dis)information landscape at different timepoints.

Despite these limitations, we hope that this in-depth exploration of the use of visuals in disinformation disseminated by alternative media outlets sparks future empirical research that maps the salience of visual disinformation, thereby arriving at a more precise assessment of the scope of disinformation presented in different formats across different platforms.


Disclosure Statement

No potential conflict of interest was reported by the author(s).

Supplementary material

Supplemental data for this article can be accessed on the publisher’s website at https://doi.org/10.1080/10584609.2024.2354389.

Additional information

Notes on contributors

Michael Hameleers

Dr. Michael Hameleers (Ph.D., University of Amsterdam, 2017) is Associate Professor in Political Communication and Journalism at the Amsterdam School of Communication Research (ASCoR), Amsterdam, The Netherlands. His research interests include populism, disinformation, and corrective information. He has published extensively on the impact of populism, (visual) disinformation, fact-checking, media literacy interventions, and (media) trust in leading peer-reviewed journals. In recent and ongoing projects, he explores the societal impact of populist communication related to different issues, the impact of disinformation in digital information settings, and the longer-term impact of deepfakes and fact-checks. He applies a wide variety of qualitative and quantitative research methods to understand the intersections between media, politics, and society.

Notes

1. The websites are ranked by the platform hoax-wiki, and often referred to by fact-checks when mapping the origins and spread of deceptive information in the Netherlands. Also see https://hoax.fandom.com/nl/wiki/9_For_News.

References

  • Aalberg, T., Esser, F., Reinemann, C., Strömbäck, J., & de Vreese, C. H. (Eds.). (2017). Populist political communication in Europe. Routledge.
  • Allcott, H., & Gentzkow, M. (2017). Social media and fake news in the 2016 election. Journal of Economic Perspectives, 31(2), 211–236. https://doi.org/10.1257/jep.31.2.211
  • Bennett, W. L., & Livingston, S. (2018). The disinformation order: Disruptive communication and the decline of democratic institutions. European Journal of Communication, 33(2), 122–139.
  • Bock, A., Isermann, H., & Knieper, T. (2011). Quantitative content analysis of the visual. In E. Margolis & L. Pauwels (Eds.), The SAGE handbook of visual research methods (pp. 265–282). SAGE. https://doi.org/10.4135/9781446268278.n14
  • Brennen, J. S., Simon, F. M., & Nielsen, R. K. (2021). Beyond (mis)representation: Visuals in COVID-19 misinformation. The International Journal of Press/Politics, 26(1), 277–299. https://doi.org/10.1177/1940161220964780
  • Brown, M. B. (2009). Science in democracy: Expertise, institutions, and representation. MIT Press.
  • Cao, J., Qi, P., Sheng, Q., Yang, T., Guo, J., & Li, J. (2020). Exploring the role of visual content in fake news detection. In K. Shu, S. Wang, & D. Lee (Eds.), Disinformation, misinformation, and fake news in social media (Lecture Notes in Social Networks) (pp. 141–161). Springer.
  • Chadwick, A., & Stanyer, J. (2022). Deception as a bridging concept in the study of disinformation, misinformation, and misperceptions: Toward a holistic framework. Communication Theory, 32(1), 1–24. https://doi.org/10.1093/ct/qtab019
  • Charmaz, K. (2006). Constructing grounded theory: A practical guide through qualitative analysis. Sage.
  • Coleman, R. (2010). Framing the pictures in our heads: Exploring the framing and agenda-setting effects of visual images. In Doing news framing analysis (pp. 249–278). Routledge.
  • Conserva, H. T. (2003). Propaganda techniques. AuthorHouse.
  • Dan, V., Paris, B., Donovan, J., Hameleers, M., Roozenbeek, J., van der Linden, S., & von Sikorski, C. (2021). Visual mis- and disinformation, social media, and democracy. Journalism & Mass Communication Quarterly, 98(3), 641–664. https://doi.org/10.1177/10776990211035395
  • Freelon, D., & Wells, C. (2020). Disinformation as political communication. Political Communication, 37(2), 145–156. https://doi.org/10.1080/10584609.2020.1723755
  • Garimella, K., & Eckles, D. (2020). Images and misinformation in political groups: Evidence from WhatsApp in India. Harvard Kennedy School Misinformation Review, 1(5), 1–12. https://doi.org/10.37016/mr-2020-030
  • Geise, S., & Baden, C. (2015). Putting the image back into the frame: Modeling the linkage between visual communication and frame-processing theory. Communication Theory, 25(1), 46–69. https://doi.org/10.1111/comt.12048
  • Glaser, B. G., & Strauss, A. L. (2017). Discovery of grounded theory: Strategies for qualitative research. Routledge.
  • Greenwood, K., & Jenkins, J. (2015). Visual framing of the Syrian conflict in news and public affairs magazines. Journalism Studies, 16(2), 207–227. https://doi.org/10.1080/1461670X.2013.865969
  • Haller, A., Holt, K., & de La Brosse, R. (2019). The ‘other’ alternatives: Political right-wing alternative media. Journal of Alternative and Community Media, 4(1), 1–6. https://doi.org/10.1386/joacm_00039_2
  • Hameleers, M., Powell, T. E., van der Meer, G. L. A., & Bos, L. (2020). A picture paints a thousand lies? The effects and mechanisms of multimodal disinformation and rebuttals disseminated via social media. Political Communication, 37(2), 281–301. https://doi.org/10.1080/10584609.2019.1674979
  • Hameleers, M., & Yekta, N. (2023). Entering an information era of parallel truths? A qualitative analysis of legitimizing and de-legitimizing truth claims in established versus alternative media outlets. Communication Research. https://doi.org/10.1177/00936502231189685
  • Heft, A., Knüpfer, C., Reinhardt, S., & Mayerhöffer, E. (2021). Toward a transnational information ecology on the right? Hyperlink networking among right-wing digital news sites in Europe and the United States. The International Journal of Press/Politics, 26(2), 484–504. https://doi.org/10.1177/1940161220963670
  • Humprecht, E., Esser, F., & Van Aelst, P. (2020). Resilience to online disinformation: A framework for cross-national comparative research. The International Journal of Press/politics, 25(3), 493–516. https://doi.org/10.1177/1940161219900126
  • Iyer, A., Webster, J., Hornsey, M. J., & Vanman, E. J. (2014). Understanding the power of the picture: The effect of image content on emotional and political responses to terrorism. Journal of Applied Social Psychology, 44(7), 511–521. https://doi.org/10.1111/jasp.12243
  • Kaiser, J., Vaccari, C., & Chadwick, A. (2022). Partisan blocking: Biased responses to shared misinformation contribute to network polarization on social media. Journal of Communication, 72(2), 214–240. https://doi.org/10.1093/joc/jqac002
  • Labov, W. (1972). Language in the inner city: Studies in the Black English vernacular (Vol. 3). University of Pennsylvania Press.
  • Messaris, P., & Abraham, L. (2001). The role of images in framing news stories. In S. Reese, O. Gandy, & A. Grant (Eds.), Framing Public Life (1st ed., pp. 217–226). Routledge.
  • Newman, N., Fletcher, R., Schulz, A., Andi, S., Robertson, C. T., & Nielsen, R. K. (2021). Reuters Institute digital news report 2021. Reuters Institute for the Study of Journalism.
  • Peng, Y., Lu, Y., & Shen, C. (2023). An agenda for studying credibility perceptions of visual misinformation. Political Communication, 40(2), 225–237. https://doi.org/10.1080/10584609.2023.2175398
  • Powell, T. E., Boomgaarden, H. G., De Swert, K., & De Vreese, C. H. (2015). A clearer picture: The contribution of visuals and text to framing effects. Journal of Communication, 65(6), 997–1017. https://doi.org/10.1111/jcom.12184
  • Rojas, H., & Valenzuela, S. (2019). A call to contextualize public opinion-based research in political communication. Political Communication, 36(4), 652–659. https://doi.org/10.1080/10584609.2019.1670897
  • Saurette, P., & Gunster, S. (2011). Ears wide shut: Epistemological populism, argutainment and Canadian conservative talk radio. Canadian Journal of Political Science, 44(1), 195–218. https://doi.org/10.1017/S0008423910001095
  • Seo, H. (2014). Visual propaganda in the age of social media: An empirical analysis of Twitter images during the 2012 Israeli–Hamas conflict. Visual Communication Quarterly, 21(3), 150–161. https://doi.org/10.1080/15551393.2014.955501
  • Sundar, S. S. (2008). The MAIN model: A heuristic approach to understanding technology effects on credibility. MacArthur Foundation Digital Media and Learning Initiative.
  • Sundar, S. S., Molina, M. D., & Cho, E. (2021). Seeing is believing: Is video modality more powerful in spreading fake news via online messaging apps? Journal of Computer-Mediated Communication, 26(6), 301–319.
  • Thomson, T. J., Angus, D., & Dootson, P. (2020). 3.2 billion images and 720,000 hours of video are shared online daily. Can you sort real from fake? The Conversation. https://theconversation.com/3-2-billion-images-and-720-000-hours-of-video-are-shared-online-daily-can-you-sort-real-from-fake-148630
  • Van Aelst, P., Strömbäck, J., Aalberg, T., Esser, F., De Vreese, C., Matthes, J., & Stanyer, J. (2017). Political communication in a high-choice media environment: A challenge for democracy? Annals of the International Communication Association, 41(1), 3–27.
  • Wardle, C., & Derakhshan, H. (2017). Information disorder: Toward an interdisciplinary framework for research and policymaking. Council of Europe report. http://tverezo.info/wp-content/uploads/2017/11/PREMS-162317-GBR-2018-Report-desinformation-A4-BAT.pdf
  • Weikmann, T., & Lecheler, S. (2023). Visual disinformation in a digital age: A literature synthesis and research agenda. New Media & Society. https://doi.org/10.1177/14614448221141648
  • Westerlund, M. (2019). The emergence of deepfake technology: A review. Technology Innovation Management Review, 9(11), 39–52. https://doi.org/10.22215/timreview/1282
  • Yang, Y., Davis, T., & Hindman, M. (2023). Visual misinformation on Facebook. Journal of Communication, 73(4), 316–328. https://doi.org/10.1093/joc/jqac051
  • Ylä-Anttila, T. (2018). Populist knowledge: ‘Post-truth’ repertoires of contesting epistemic authorities. European Journal of Cultural and Political Sociology, 5(4), 356–388.