Original Article

Cutting through the Hype: Understanding the Implications of Deepfakes for the Fact-Checking Actor-Network


Abstract

During recent years, growing concerns about emergent formats of visual disinformation and, in particular, deepfakes have shaped public debates and media scholarship. Empirical studies about deepfakes have most often focused on investigating their potential effects on news audiences. However, very little is known about how they impact producers of mediated communication and what strategies are taken to mitigate this potential threat. Based on in-depth interviews, this study focuses on fact-checkers as a central expert group dealing with evolving forms of visual mis- and disinformation on a day-to-day basis. We employ actor-network theory (ANT) to investigate the tangible impact deepfake technology has on their work. Our findings show that, so far, deepfakes as novel actants are only impacting fact-checking routines to a limited degree and are considered a future challenge. Instead, other forms of false and misleading images are considered to be far more disruptive. In particular, fact-checkers struggle with decontextualized videos, whose corrections require the integration of new technology-driven and manual detection techniques not yet available to many.

Introduction

The emergence of deepfake technology has enabled the creation of hyper-realistic audio-visual manipulations and fuelled recent debates about the impact of deceptive images on society. Concerns about them have been expressed especially in journalistic discourses, where fears are voiced that deepfakes will accelerate the spread of misinformation online, thus further eroding public trust in institutions and news visuals in general (Kalpokas and Kalpokiene 2022; Wahl-Jorgensen and Carlson 2021). Following these debates, recent studies have provided important insights into their potential effects in experimental settings, showing that exposure to deepfakes can impact attitudes towards news media and politicians (e.g., Dobber et al. 2021; Hameleers, van der Meer, and Dobber 2022; Vaccari and Chadwick 2020). However, what remains understudied is how information intermediaries at the juncture of facts and falsehoods are affected by them on a day-to-day basis (Dan et al. 2021; Weikmann and Lecheler 2022). This is surprising, as it signifies a lack of understanding of how deepfakes intervene in the everyday information production cycle. Specifically, when considering disinformation and emerging technologies such as deepfakes, a necessary step to understand their impact is to investigate how fact-checkers are influenced.

Fact-checkers play an essential role in today’s information environment, as they are specialised in securing accuracy in public discourse. They are vital for both journalists and other public figures in the information production process, who depend on them to not accidentally distribute digital disinformation proliferating online (Ciampaglia 2018; Graves and Mantzarlis 2020; Tsfati et al. 2020). So far, journalistic discourse about deepfakes is characterised by fearful conjectures (Wahl-Jorgensen and Carlson 2021), reflecting a certain level of technological determinism regarding their impact (Örnebring 2010). Fact-checkers, on the other hand, may be able to keep up with technological changes in a disinformation environment more quickly, for instance if they have better access and capacity to engage with new forms of verification and visual forensics (Gregory 2021). As such, they are often called upon to mitigate the spread of deepfakes in particular (Sohrawardi et al. 2020; van Huijstee et al. 2021). Accordingly, their response is crucial in characterising the consequences of deepfakes.

The impact of deepfakes on fact-checkers may be described in line with actor-network theory (ANT) (Latour 2005; Law 1992; Somerville 1997). In the past, ANT has been used to describe the influence of new technologies on various social networks including journalism, thereby allowing a more comprehensive view of change in these spaces (Ryfe 2021). ANT allows us to conceptualise deepfakes as a new actant entering an established and growing actor-network specialised in detecting misinformation. This actant potentially causes change in the network, as its members adapt to its entrance and gear towards mitigating potential threats. This conceptualisation assists us in eschewing deterministic or overly alarmist argumentations regarding the impact of a new technology. Rather, ANT traditionally imagines technology as part of society and models its impact while also considering reciprocal relationships. This means that deepfakes may alter how fact-checkers work, but might also lose their threatening potential amidst the development of new detection techniques.

Thus, this study builds an ANT-inspired theoretical framework of the impact of deepfakes on fact-checkers, and explores this network through the use of in-depth interviews. In these interviews, we study how fact-checking experts evaluate the emergence of deepfakes and how they have adapted their routines and countermeasures. The interviewed fact-checkers also identify the acute risks deepfakes pose vis-à-vis other forms of visual mis- and disinformation. Our results suggest that, at this moment, deepfakes are still treated as a “future problem” and only alter practices to a small extent. Instead, other forms of visual disinformation such as decontextualized videos constitute a larger challenge to fact-checkers’ daily routines, demanding the development of new manual detection processes and more specialised training for journalists. Based on these findings, this article critically discusses current journalistic discourse about deepfakes as a “dangerous” technology and provides nuanced suggestions for future research on the phenomenon.

Conceptualising Fact-Checking as a Network

The fact-checking profession has rapidly changed in recent decades regarding who fact-checks and how. Initially an in-house task assigned to journalists, fact-checking is nowadays often practised by independent organisations (Lowrey 2017). Singer (2018) classifies fact-checkers as intrapreneurs and entrepreneurs, meaning they can either function as innovators within an existing media organisation or create their own positions. This suggests that fact-checkers differ by affiliation, as they may be part of a media company, NGO, non-profit, or act independently (Graves and Mantzarlis 2020). An overlooked group are software developers and digital forensics specialists, who support the diversification of detection techniques in times when mis- and disinformation becomes more digital (Ciampaglia 2018). As such, they are helping fact-checkers create technology-based tools to monitor the amount of misinformation spread online (Nakov et al. 2021) and develop plugin tools for the verification of fake images (Marinova et al. 2020). Social media platforms have become increasingly involved in this process, calling on fact-checkers’ expertise to regulate the spread of questionable content (Leibowicz, McGregor, and Ovadya 2021; López-Marcos and Vicente-Fernández 2021). Other important actors are open-source intelligence (OSINT) experts, who use publicly available resources to verify images manually (Gregory and French 2019). Taken together, the work of these different actors has become more and more collaborative in recent years, as they are jointly observing elections, misinformation about COVID-19 (Luengo and García-Marín 2020), or Russia’s war against Ukraine (O’Connor 2022).

In addition, the traditional steps of performing a fact-check have changed in a digital disinformation environment. While Graves (2017) describes fact-checking as a five-step process to evaluate factual claims made by public figures, the concepts of verification and debunking have become increasingly important to consider (Mantzarlis 2017). Verification includes seeking evidence for the veracity of user-generated content (Wardle and Moy 2017); debunking involves the detection and revision of viral misinformation and “fake news” in the larger sense (Mantzarlis 2017). This means that fact-checkers must not only assess a statement or look for evidence that a certain claim is false. In addition, they should be capable of stopping the spread of false rumours online by communicating why something is false, thus making it comprehensible for journalists and audiences (see Note 1). This puts increased responsibility on fact-checkers, who are engaging in a multitude of countermeasures against mis- and disinformation. For example, they offer workshops for journalists and audiences, training them in performing verifications and providing media literacy education (Graves and Mantzarlis 2020). They have become an essential democracy-building entity and resource for political decision-makers, creating issue visibility and raising awareness, especially in countries where democracy is under threat (Amazeen 2019; Singer 2021).

The profession of fact-checking shares overlapping tasks and civic values with journalism (Singer 2018). Here, scholars frequently draw from social theories to make sense of journalistic practices (Ahva 2017). Accordingly, journalism is often referred to as a field in the Bourdieusian sense, aiming to explain how political and economic forces shape the news production process (Maares and Hanusch 2022). Field theory accounts for the exogenous factors impacting journalistic roles, routines and the resources (also called capital) through which journalism meets these influences (Vos 2019). Another way to characterise journalism is as a network. Here, the conceptualisation of network journalism has gained popularity in explaining the interplay of different actors in journalism, ranging from sources and producers to distributors (Heinrich 2011; Primo and Zago 2015). While this perspective puts a stronger focus on the human actors shaping journalistic practices, it also considers technology as a tool enabling digitisation processes (Bardoel and Deuze 2001). However, it does not ascribe agency to technology itself.

In contrast, actor-network theory (ANT) describes society as a seamless network shaped by both human actors and non-human actants alike (Latour 1990). This suggests a non-deterministic nature to these networks, where anything participating becomes an actor, applying equally to humans and technological artefacts such as computers (Law 1992). In recent years, ANT as a sociotechnical theory has gained popularity in journalism and media research to explain the interplay of new technologies (=actants) and journalists (=actors). For instance, Plesner (2009) investigated how information and communication technology (ICT) such as email has shaped newsroom practices, concluding that ICT’s “effects” should not be overblown. Other studies employ ANT to discuss the role of algorithms and bots in the news production process, showing that machines are pushing the boundaries of what it means to do journalism but are not taking over the profession (Primo and Zago 2015). In addition, ANT has been useful in investigating the role of citizen journalists entering the network (Westlund and Lewis 2017). Similarly, ANT also constitutes a suitable framework for investigating the role of deepfakes as a novel actant in fact-checking, thus revealing their tangible impact and the existing affordances of the various actors assigned to mitigating their threat. Importantly, the goal of this study is not to predict or speculate how deepfakes will impact society in the future but to obtain empirical evidence of the challenges deepfakes are currently causing – and facing – in today’s news environment.

Deepfakes – A New Challenge for Fact-Checkers?

When the first deepfakes emerged in 2017, they quickly became associated with the creation of fake pornographic content. However, in current news media discourse, deepfakes are mostly discussed as problematic because they can disinform and erode trust in political institutions (Gosse and Burkell 2020). Deepfakes can be defined as advanced forms of visual disinformation instead of misinformation, as the intentionality behind their creation appears obvious: they are meant to deceive and provide false evidence for something that never happened (Vaccari and Chadwick 2020). However, they can also constitute misinformation, for instance, if they are evaluated as real and are shared unknowingly and without intent to deceive (Egelhofer et al. 2020; Lecheler and Egelhofer 2022). We use the term disinformation in the case of deepfakes, as constructing them requires a conscious action. As such, they are characterised by the technological sophistication required for their creation and have recently awakened scholarly interest in the visual aspects of mis- and disinformation research. At the same time, other forms that are a lot easier to create have been largely neglected, such as photoshopped still images as well as photos or videos taken out of context (Weikmann and Lecheler 2022). Other relevant but rarely discussed forms are cheap fakes or shallow fakes – videos that have been edited with little technological effort, such as a decelerated clip of Nancy Pelosi that makes her appear to be drunk (Paris and Donovan 2019; Qian et al. 2023).

There are few empirical findings on the prevalence of deceptive visuals in the news. Brennen, Simon, and Nielsen (2021) investigated the occurrence of visuals in COVID-19 mis- and disinformation, showing that fact-checking articles have mostly debunked relatively simple manipulations of videos and photographs. Simultaneously, there are examples of deepfakes occurring in a political context, albeit few. For instance, a recent deepfake shows a fake Volodymyr Zelensky “surrendering” to Russia (Wakefield 2022). In 2019, an alleged deepfake of the president of Gabon made it look as if he had recovered from a long illness, which led to an attempted coup (Dan et al. 2021). These examples alone have fuelled serious concerns: Because they constitute visual disinformation on the highest technological level, deepfakes are expected to become weaponized for political warfare (Paterson and Hanley 2020), successfully promote false beliefs (Diakopoulos 2020), and challenge journalists, who often experience difficulties when detecting manipulated news visuals (Himma-Kadakas and Ojamets 2022). Since they might not be able to detect them, journalists are worried that deepfakes will contribute to an erosion of trust in what the audience sees and hears in the news (Wahl-Jorgensen and Carlson 2021). In addition, concerns have been voiced about politicians “crying wolf” by instrumentalising the sheer existence of deepfakes to evade responsibility: they may claim that a video showing them in a defamatory situation is manipulated (Ajder and Glick 2021). An example is the alleged deepfake of a minister from Myanmar admitting to corruption, which was later evaluated as real (Gregory 2021).

Following this concern, a growing body of literature has focused on the effects of deepfake exposure on audiences. Experimental studies have shown that they are deceptive but not necessarily more so than other types of disinformation (Ternovski, Kalla, and Aronow 2022). In addition, they have been found to increase uncertainty and lower trust in news on social media (Vaccari and Chadwick 2020), heighten engagement intentions due to their source-vividness (Lee and Shin 2021), and negatively affect attitudes towards individual politicians (Dobber et al. 2021). While these studies clarify some of the concerns regarding the threat deepfakes pose to society, they do not address that a majority of (visual) disinformation is consumed by audiences only after it has gone through information intermediaries’ filters, i.e., after it has been vetted by journalists and fact-checkers (Tsfati et al. 2020). Fact-checkers may be the most important group here, as they are held in the highest regard when it comes to detecting misinformation. As such, they are likely to adapt more quickly to technological challenges than journalists or other intermediaries. Therefore, the disruption deepfakes cause for them can be particularly insightful (Sohrawardi et al. 2020).

So far, in light of large-scale concerns but little real-life evidence, it remains difficult to assess deepfakes’ impact without drifting into speculation. Given their daily experience with different forms of visual mis- and disinformation, fact-checkers are well placed to give a realistic estimation of the dangers posed by deepfakes. We seek to answer the question:

RQ1. What dangers do fact-checkers identify in connection with deepfakes for (1) journalism and (2) news audiences?

In this study, deepfakes are considered a non-human actant in the fact-checking network, where new actors are developing technology-based and manual detection techniques to uncover false and misleading visuals. In addition, fact-checkers are called to counteract digital mis- and disinformation campaigns through education and training. To understand how deepfakes are impacting these processes, we set out to investigate:

RQ2. To what extent do fact-checkers adapt their work routines to deepfakes in terms of (1) detection techniques, and (2) countermeasures?

Method

To provide an evidence-based assessment of deepfakes’ implications, this study draws on interviews with 15 fact-checking experts specialised in different aspects of visual mis- and disinformation. Based on the available literature, modern fact-checkers were divided into three groups: (1) in-house fact-checkers working in an existing journalistic outlet or media organisation (n = 5), (2) independent fact-checkers employed by a non-profit or other fact-checking company (n = 6) and (3) fact-checking engineers (n = 5), who apply their programming or other digital forensics skills to support fact-checking practices. Accordingly, the sample includes interviewees from all actor groups identified in this article, labelled as experts E01 - E15 (Table 1).

Table 1. Study participants.

Experts were recruited using purposive sampling or a “key-informant approach” (Mays and Pope 1995), following the goal of finding interviewees with profound knowledge of visuals and deepfakes in particular. To do so, the main author attended the 8th GlobalFact conference held by the International Fact-Checking Network in 2021, where several high-profile fact-checking experts agreed to be part of the study. In addition, experts were recruited using the contacts made at GlobalFact and through the author’s personal Twitter and Facebook network. In total, 23 experts were contacted (10 female), resulting in a sample of 15 interview partners (5 female) (see Table 1). In advance of the data collection, the study was ethically approved by the Institutional Review Board of the Department of Communication at the University of Vienna (IRB Approval ID: 20211020_082). The interviews were conducted from November 2021 to March 2022 on the video-conference platform Zoom and lasted 37.5 minutes on average. Experts stem from diverse country contexts but are weighted towards Western fact-checkers (Table 1). Theoretical saturation was sufficiently reached.

A semi-structured interview guide was developed in which experts were asked to reconstruct their approach to verifying deepfakes and other types of (visual) disinformation (see the complete interview guide in Appendix 1). As the term deepfake encompasses a broad range of forms, experts were initially asked what first came to mind when hearing it. After participants shared their initial association, the semi-structured guide was used to cover different forms of deepfakes, such as face-swaps, AI-art, or voice synthesis. Based on the fact-checkers’ rich experience with various forms of visual disinformation, the interviews thus covered a large bandwidth of relevant visual manipulations they come across in their work.

The first block of the interview aimed to gain detailed insights into their practices regarding two main incentives: detection techniques (technology-based, manual) and countermeasures (training, education, issue visibility). The purpose was to understand to what extent their work has changed amid the emergence of deepfakes and how they are adapting their routines. In a second question block, experts were asked to assess deepfakes’ impact in light of large-scale concerns that have been expressed. To get them thinking and talking about this complex question, they were prompted by two contrary vignettes in the form of news headlines. The first is from a New York Times article, stating that “Deepfakes are coming. We can no longer believe what we see. It will soon be as easy to produce convincing fake video as it is to lie. We need to be prepared.” (Rini 2019). The second news headline by The Verge contrasted this viewpoint: “Deepfake propaganda is not a real problem - We’ve spent the last year wringing our hands about a crisis that doesn’t exist” (Brandom 2019). All experts were presented with both vignettes back-to-back. Data was analysed using structuring qualitative content analysis according to Kuckartz (2016). This approach allows for inductive and deductive category building, which is useful to confirm known perspectives on deepfakes’ impact but also to explore new aspects contributed by the experts.

Findings

Perceived Dangers

The interviewed fact-checkers’ perspectives on the impact of deepfakes can roughly be summarised along two complementary lines of argument: Political deepfakes are by no means an urgent problem at the moment, but they do bear certain risks that are already becoming visible. In particular, the following dimensions emerged when consulting fact-checkers about the perceived dangers deepfakes constitute for journalism and its audience (RQ1):

Deepfakes as a Manageable Problem

Even though fact-checkers agree that deepfakes sound dangerous on paper, they have thus far encountered few of them, and if so, predominantly in a satirical context. Consequently, they currently consider them a manageable problem and largely agree with The Verge headline’s claim that deepfake propaganda is not a real problem. The only real-life deepfake frequently mentioned is a fake video of New Zealand’s Prime Minister Jacinda Ardern, making it look like she is smoking crack cocaine. At the time of conducting the interviews (November 2021 – March 2022), it was one of the only political deepfakes that had received considerable attention. Besides that, fact-checkers frequently referred to a popular deepfake video of actor Tom Cruise playing golf, which was described as being “the best on the market” (E01). The experts recognise its extremely high quality while acknowledging that most deepfakes are still of poor quality and, therefore, not very convincing. In this context, it was also noted that it would be fairly easy to verify a deepfake like the one of Tom Cruise by simply asking the depicted person about their whereabouts at the time of the alleged recording. They could easily come forward with proof that it could not have been them in the video. Similarly, a speech given by a politician is likely to be filmed from various angles and with multiple cameras. It would only be a matter of time until the actual video is found.

Fact-checkers estimate it as difficult to deepfake well-known politicians convincingly and suspect that people who are very familiar with their looks and voice would be able to spot the fake easily. In addition, real-life deepfakes are mentioned in connection with either political satire, art projects, or personal entertainment. For example, a deepfake Christmas message by Queen Elizabeth II was mentioned, as well as the multiple ways deepfake technology is used in Hollywood movies to modify or generate actors’ faces. One participant remarked:

“We see lots of them [deepfakes] and they are usually for satire and they’re not believed. So, we’re seeing it for political satire, which is technically fake, but we’re not seeing effective uses of it for propaganda. We’re seeing a couple of attempted pieces, but they’re crap.” (E10)

One of the main reasons why fact-checkers evaluate deepfakes as less harmful than journalistic discourse suggests is that they find them to still be very difficult to create. “I kind of tend to agree with the New York Times headline that, you know, it will become easy, it’s not currently that easy.” (E02). This in turn creates a barrier for suppliers of disinformation, who would at the moment need advanced programming skills or substantial financial resources to make a deepfake. One expert estimated the costs of the Tom Cruise deepfake at around 4,000 US dollars, suggesting that it probably took a visual effects expert several hours of computing (E04). The experts noted that disinformation campaigns usually operate on a much lower level of effort and point toward other, less sophisticated forms of visual disinformation, such as photoshopped still images, videos and pictures taken out of context, and cheap fakes. Some fact-checkers, disagreeing with The New York Times headline, remarked that journalists might overhype the deepfake problem, calling the headline “hyperbole” (E12) or even “deeply irresponsible” (E07) and stating that it shifts the focus away from more important problems of online mis- and disinformation:

“I mean, I can’t think of an example of a Deepfake propaganda video that has caused problems. I can think of hundreds of examples of pictures or texts or Tweets or normal videos taken out of context that have caused problems because they’re re-used as propaganda, but not deepfake videos.” (E12)

When asked about the extent to which journalists are prepared for deepfakes as a new form of disinformation, experts gave mixed answers depending on their national background. While the US-based experts interviewed for this study had a positive assessment of journalists’ fact-checking skills in their country, the European and Asian fact-checkers we spoke to were pessimistic. One interviewee even called the level of media competence in Germany, Austria and Europe in general “an utter disaster” (E01). A fact-checker from Pakistan evaluated journalists in their country as unprepared, “because digital literacy is very, very low. There are also a lot of journalists that still don’t even know the meaning of deepfake” (E11).

Opposed to this stands the notion that some journalists may also overestimate the problem, thus making themselves and the audience overly alert. This, in turn, feeds into the so-called “liar’s dividend” (Ajder and Glick 2021; Chesney and Citron 2019), meaning that politicians might instrumentalise deepfakes’ existence to claim they did not do or say something. Fact-checkers suspect that by just claiming that damaging footage of them is fake, politicians might benefit from deepfake rhetoric to secure their public image. The audience could be misled by this, even though no sophisticated manipulation is required.

In addition, experts suggest that the danger of deepfakes for the public is not deception at mass scale. Instead, they stress that the creation of revenge porn against women still constitutes the number one problem arising in connection with deepfakes, as people outside the public eye who get targeted do not have the means to defend themselves. Some go as far as to state that political deepfakes do not matter at all, but that journalists overlook the real problem the technology constitutes for women and other vulnerable groups.

“You have lots of images of yourself online and you have no idea that they’re being taken and used and your face is being used in pornographic videos. I mean that to me is a horrific situation. It is a real problem and it’s not being discussed. And instead, the ways that the media talk about it, it’s all in a political context which I don’t think is going to be a problem at all.” (E07)

The “Real Problem”: Decontextualization

Throughout the interviews, experts kept referring to real-life examples of disinformation they come across in their day-to-day work. Most prominently, they mentioned decontextualized images, meaning photographs or videos taken out of context and misused as pictorial evidence for false claims. They are labelled as showing a certain person or situation while depicting something completely different. Examples are scenes from a conflict that were filmed elsewhere or sometime in the past, which are sometimes even re-used in multiple contexts.

“There’s a really infamous video that goes around and it’s basically taken from a training exercise and it’s taken at night. And there’s a lot of gunfire and there’s explosions and there’s smoking and everything. But it was a training exercise and it gets posted anytime there’s reports of a big fight or a big battle or a war starting. This video will be posted without fail.” (E09)

Besides being extremely easy to make, such images work particularly well as forms of disinformation, fact-checkers stressed. One reason is that even if such pictures or videos have been debunked, they continue to be shared and impact perceptions. “In this case, people are often not even interested in what the photo actually shows. It is actually only the catalyst to arouse certain feelings” (E14). In addition, due to their sheer quantity, they are considered the most dangerous form of visual disinformation today.

Similarly, cheap fakes or shallow fakes were mentioned, in which a video is minimally edited. Frequently mentioned was a slowed-down video clip of Nancy Pelosi that makes it look as if she is drunkenly slurring her speech. Another interesting phenomenon is AI-generated photographs of people who do not exist, which are often used for fake social media profiles. Lastly, fact-checkers mentioned cases in which lookalikes or voice imitators were used to trick people, which they consider particularly dangerous in an audio-message context. Especially in countries like Brazil or India that rely heavily on WhatsApp as a social medium, fake audio messages are considered dangerous because they do not require a visual.

“I worry therefore that if I was trying to impact an election, I might create a deepfake audio of the president saying something because you can’t see anything, you know, your ears are much less likely to be able to spot it.” (E07)

Changes in Work Routines

Despite a consensus that deepfakes do not constitute a political problem at the moment, fact-checkers identified certain challenges in connection with them. Regarding the changes deepfakes cause to their working routines and employed countermeasures (RQ2), the interviewees revealed the following:

Technological Challenges

First and foremost, fact-checkers expect deepfake creation to become significantly easier with time, although they are at odds about how long this will take. “And of course, you have to keep in mind that the technology is still quite young. And it’s not perfect yet. But when it’s established a little bit, when anyone with a little bit of experience can produce deepfakes, then it can be quite easy that at some point there will be a crisis that has been triggered by a deepfake” (E13). The interviewed fact-checking engineers in particular stress the ever-improving technology behind deepfake creation and the fact that it is becoming more and more accessible: “You could technically do that in a few seconds. You could download an app and put your face in a movie scene” (E04). While recognising that creation software is becoming easier to use, fact-checkers are also noticing multiple problems with detection software, namely accessibility, accuracy and interpretability. Fact-checking engineers reported providing their software to big international media organisations and private enterprises, but also noted that small fact-checking organisations in particular often cannot afford tools that are not publicly available. Even more importantly, existing software appears to be flawed, often providing inaccurate or confusing results. The detectors themselves are based on artificial intelligence, and artificial intelligence makes mistakes. In fact, deepfake creation and detection mostly work with the same technology, generative adversarial networks (GANs). Experts explained the verification process as “playing each against the other one” (E05), which often results in errors. Moreover, detection tools usually provide a percentage to which a video is supposedly manipulated, which many perceive as useless.

“If an algorithm comes to the result – we had that only yesterday in a demo – in which I was shown the image is 67% wrong. What does that mean? For me, there is only 100% right or 100% wrong.” (E01)
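This interpretability complaint can be illustrated with a small, purely hypothetical sketch: detection tools return a probability, but fact-checkers still need a decision rule before they can act on it. The thresholds below are invented for illustration and are not drawn from any tool the experts mentioned; in practice they would need calibration against labelled examples.

```python
# A purely illustrative sketch: translating a detector's "percent fake"
# score into a verdict a fact-checker can act on. The thresholds are
# hypothetical and would need calibration against labelled material.
def interpret_score(fake_probability: float) -> str:
    if not 0.0 <= fake_probability <= 1.0:
        raise ValueError("expected a probability between 0 and 1")
    if fake_probability >= 0.95:
        return "likely manipulated - confirm with manual techniques"
    if fake_probability <= 0.05:
        return "likely authentic - confirm with manual techniques"
    # Scores like the 67% from the interview land in this wide band:
    # the tool alone settles nothing, so manual verification decides.
    return "inconclusive - rely on manual verification"

print(interpret_score(0.67))  # -> inconclusive - rely on manual verification
```

The width of the middle band makes the experts’ point: a score of 67% leaves the fact-checker exactly where they started.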

To conclude, deepfake technology is thus far only impacting human routines to a minimal extent. New tools are being developed and introduced to the fact-checking network; however, they are flawed and not yet integrated into daily routines. Even though the experts identified certain risks, there is a consensus that political deepfakes are not the primary concern fact-checkers or journalists should focus on.

Integration of Manual Detection Techniques

In addition to identifying risk factors of audio-visual disinformation, the interviewed fact-checkers gave detailed insights into the work practices and techniques they use to fact-check, verify and debunk acute forms of disinformation. In contrast to deepfake detection models, the mentioned tools are all open-access. In this context, experts explained that the fact-checking field benefits increasingly from open-source intelligence (OSINT) methods, which are based on the premise that anyone can practise and access them. These techniques include looking at the metadata of visual footage (see the sketch below), checking satellite imagery and scraping data from publicly available sources. Notably, fact-checkers stress the importance of training journalists and the public in these easy-to-employ, manual techniques.
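As a first illustration of what “looking at metadata” can involve, the following is a minimal sketch of reading EXIF tags (capture time, camera model, and similar) from an image with the Pillow library. The file name is hypothetical, and many platforms strip this metadata on upload, so it is only one signal among several.

```python
# A minimal sketch of the metadata step: reading EXIF tags from an
# image file with Pillow. Assumes Pillow is installed; the file name
# is hypothetical.
from PIL import Image
from PIL.ExifTags import TAGS

def read_exif(image_path: str) -> dict:
    exif = Image.open(image_path).getexif()
    # Map numeric EXIF tag IDs to human-readable names.
    return {TAGS.get(tag_id, tag_id): value for tag_id, value in exif.items()}

for tag, value in read_exif("downloaded_photo.jpg").items():
    print(f"{tag}: {value}")
```

In sum, the following steps were described and recommended: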

Reverse-Image Search

One important method fact-checkers use to verify visuals is conducting a reverse-image search, provided by Google or another search engine. This is a fairly simple but often effective technique to check whether an image has appeared elsewhere in the past and is being misused in a new context. For example, a helpful add-on provided by Amnesty International automatically extracts multiple frames from a YouTube video, which can then be plugged into a reverse-image search tool (Amnesty International YouTube DataViewer 2017). Another approach is mirroring the image beforehand, as decontextualised images have often been flipped. Ultimately, the timestamp reveals crucial information to determine when an image first appeared.
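The mirroring trick can be approximated locally before any search engine is involved. Below is a minimal sketch, assuming the third-party Pillow and ImageHash packages, that checks whether a candidate image is a near-duplicate of a known original in either orientation; the file names and the distance threshold are hypothetical.

```python
# A minimal sketch of checking whether a candidate image is a (possibly
# mirrored) re-upload of a known original, using perceptual hashing.
from PIL import Image, ImageOps
import imagehash

def matches_known_original(candidate_path: str, original_path: str,
                           max_distance: int = 8) -> bool:
    """Return True if the candidate is visually close to the original,
    testing both the image as-is and its mirrored version."""
    original_hash = imagehash.phash(Image.open(original_path))
    candidate = Image.open(candidate_path)
    for variant in (candidate, ImageOps.mirror(candidate)):
        # Hamming distance between perceptual hashes; small = near-duplicate.
        if imagehash.phash(variant) - original_hash <= max_distance:
            return True
    return False

print(matches_known_original("viral_post.jpg", "archive_photo.jpg"))
```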

Frame-by-Frame

Video footage is generally more difficult to fact-check. Watching the video frame-by-frame often reveals imperfections, manipulations or hidden objects that give away information about the video’s origin. It is a simple technique that can be helpful even in the case of sophisticated manipulations such as deepfakes, for example to uncover glitches.
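Stepping through footage one frame at a time is something general-purpose libraries handle well. A minimal sketch with OpenCV, using hypothetical file names, that saves every n-th frame for close inspection or for feeding into a reverse-image search:

```python
# A minimal sketch of stepping through a video and saving every n-th
# frame for close inspection (e.g., spotting glitches or hidden
# objects). Assumes opencv-python is installed; file names are
# hypothetical.
import cv2

def extract_frames(video_path: str, every_n: int = 10) -> None:
    capture = cv2.VideoCapture(video_path)
    index = 0
    while True:
        success, frame = capture.read()
        if not success:  # end of video or read error
            break
        if index % every_n == 0:
            # Saved frames can be inspected manually or plugged into
            # a reverse-image search.
            cv2.imwrite(f"frame_{index:05d}.png", frame)
        index += 1
    capture.release()

extract_frames("suspicious_clip.mp4")
```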

Revealing Objects

Watching the video over and over again, sometimes up to 100 times, is often necessary for fact-checkers. This can lead to noticing crucial cues, like objects in the background of the video, that allow conclusions about where and when it was filmed. “You’re looking at if it’s outside, like, is there vegetation or trees, what kind of trees are they, the police officer, does he or she have a uniform? Are there patches on their uniform? Is there a pattern on the uniform? That might have changed over time.” (E12) Similarly, it can help to check weather reports or even use tools that determine shadow length, which is only the same on exactly two days per year and is thus a strong indicator of when a photograph was taken.
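The shadow-length reasoning rests on standard solar geometry rather than on any specific tool the experts named. A rough sketch using common approximation formulas for solar declination and elevation, with all inputs hypothetical:

```python
# A rough sketch of the shadow-length reasoning: for a given latitude,
# day of year, and local solar time, compute the sun's elevation and
# the shadow cast by an object of known height. Declination uses the
# standard Cooper approximation; all inputs are hypothetical.
import math

def shadow_length(latitude_deg: float, day_of_year: int,
                  solar_hour: float, object_height_m: float) -> float:
    # Approximate solar declination in degrees.
    declination = 23.44 * math.sin(math.radians(360 / 365 * (284 + day_of_year)))
    # Hour angle: 15 degrees per hour away from solar noon.
    hour_angle = 15 * (solar_hour - 12)
    lat, dec, ha = map(math.radians, (latitude_deg, declination, hour_angle))
    # Solar elevation above the horizon.
    elevation = math.asin(math.sin(lat) * math.sin(dec)
                          + math.cos(lat) * math.cos(dec) * math.cos(ha))
    if elevation <= 0:
        raise ValueError("sun below the horizon at this time")
    return object_height_m / math.tan(elevation)

# A 1.8 m person at latitude 48.2 at 3 p.m. solar time on day 100:
print(round(shadow_length(48.2, 100, 15.0, 1.8), 2), "m")
```

Because the declination curve is symmetric, the same shadow length recurs on exactly one other day of the year, which is why the cue narrows a photograph’s date down to two candidates.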

Geolocation

Using a range of online map services often proves helpful for fact-checkers, as each service uses different satellites. Consequently, they each cover different regions and can be useful depending on the visual in question. In Germany, for instance, Mapillary has better coverage than Google Street View.

In sum, fact-checkers predominantly use these manual techniques to verify visual disinformation. They assess the role of technology critically, depending on their affiliation. While creators of tools are optimistic regarding automated processes, others report that the above-mentioned “by hand” options work best for them at the moment.

Discussion

This study is one of the first to interview fact-checkers specialised in visual mis- and disinformation about deepfakes, thus exploring the relationship between deepfakes as a technological actant and the human actors assigned with mitigating their threat. Our results show that fact-checkers see the political use of deepfakes as a contained issue, considering that few have caused turmoil so far. Instead, they point to other actants of visual disinformation, such as decontextualizations, that are potentially causing more harm even though they are largely neglected in media coverage (RQ1). Moreover, fact-checkers are so far adapting their detection techniques and countermeasures to deepfakes only to a small extent, aided by novel actors entering the network. Again, they point to decontextualization as the most prominent and challenging form they are facing, requiring the introduction and teaching of manual detection techniques (RQ2). This study contributes to the fields of visual disinformation, fact-checking, and journalism by demonstrating (1) that deepfakes are not the only form of visual disinformation to be concerned about, (2) that deepfakes can be harmful in both political and non-political contexts and (3) that deepfake technology is not inherently more dangerous than other forms of disinformation.

First, our findings are in line with Brennen, Simon, and Nielsen (2021), who investigated the amount of visual misinformation debunked by fact-checking websites during the COVID-19 pandemic: Most of it dealt with minimally edited pictures or decontextualized images. Parallel to this, our results situate deepfakes as one of many forms through which visuals can deceive, thus stressing the diversity of visual mis- and disinformation proliferating online. We suggest that future effects studies should include a broader range of stimuli when investigating the effects of visual disinformation (see Qian et al. 2023). Previous studies have mostly investigated deepfakes’ deceptiveness and effects in comparison to textual disinformation, still images or real videos (e.g., Dobber et al. 2021; Lee and Shin 2021; Vaccari and Chadwick 2020). However, it is important to understand the effects of less sophisticated forms, most importantly decontextualizations, as audiences are more likely to be exposed to them. Moreover, the fact-checkers’ evaluation of deepfakes as a minimal threat stands in contrast to research suggesting that deepfake creation has become increasingly easy (e.g., Dan et al. 2021). Indeed, since the interviews for this study were conducted, new creative AI tools such as DALL-E 2 and Meta’s Make-A-Video have emerged. These make it easy to quickly instruct an AI to draw a photo or create a video of an imagined situation. However, even though making such a fake can be achieved via an online application within seconds, the outcomes may still not be as convincing as when an AI artist puts hours of work into faking a real person’s facial movements and voice (e.g., Hameleers, van der Meer, and Dobber 2022). The interviewees also evaluate the danger of deception through a deepfake as low because audiences are often very familiar with the portrayed political actors. Though there is empirical evidence that people struggle to spot deepfakes (e.g., Murphy and Flynn 2022; Shin and Lee 2022), studies found that real videos are still evaluated as substantially more credible (e.g., Hameleers, van der Meer, and Dobber 2022). Importantly, these findings show that we do not know enough about the threatening potential of this actant in flux, making timely research on the impact of deepfakes all the more important.

Second, our results suggest that there is a relatively narrow conceptualisation and understanding of deepfakes and their impact on society at large in journalistic discourse. Gosse and Burkell (2020) found that while journalistic articles do address pornographic deepfakes targeting women and other vulnerable groups such as the LGBTQ community, this aspect largely serves as a backstory. The interviewed fact-checkers confirmed that there seems to be a lack of attention given to the perpetual harm deepfake pornography causes to the people affected by it. This shows that deepfakes could be used not only for political gain but also to reinforce inequalities in society. As such, mass-scale deception does not seem to be the main aim of deepfake revenge porn, but rather the violation of dignity in an act of cyber-misogyny (Gosse and Burkell 2020). Important to consider is also that deepfakes of famous politicians are easily verifiable, by simply asking the targeted politician whether they are really depicted in the video. Citizens cannot rely on the same defence mechanism, as fact-checkers do not investigate or rectify private content.

Third, this study demonstrates a discrepancy between fears about the detrimental impact of deepfakes and the problems they practically cause. Here, actor-network theory has proven helpful to reflect upon deepfakes as a participating actant whose impact should, however, not be overstated. Previous research has shown that news media reports are largely based on speculation, as journalists envision worst-case scenarios about deepfakes’ detrimental consequences for politics and the journalistic community (Wahl-Jorgensen and Carlson 2021). This kind of technological determinism is common for journalists when faced with changes to their profession, as they tend to ascribe a high degree of agency to technology (Örnebring 2010). In contrast, fact-checkers seem to take a more hands-on approach to deepfakes, treating them no differently than other mis- and disinformation actants. It appears that fact-checkers’ estimation of the threat and their approaches to deepfakes are unknown to journalists – a disconnect that requires further investigation in future research. One reason for journalists’ disconcertion could be that many of them have little experience in verifying and debunking misleading visuals (Himma-Kadakas and Ojamets 2022). Consequently, deepfakes appear to be an unconquerable problem. Learning the manual detection techniques described in this article may be a useful starting point for journalists and audiences to gain more agency (Qian et al. 2023). This could also turn audiences into participating actors in the fact-checking network themselves, capable of managing different disinformation actants.

In addition, deepfakes in journalistic and scholarly discourse are largely understood as a technology-based weapon (Diakopoulos 2020; Gosse and Burkell 2020; Wahl-Jorgensen and Carlson 2021). Following ANT, deepfakes may not only be evaluated based on the underlying technology, but also in relation to human actors’ reactions to it. In this context, it has been largely neglected that the sheer discussion about something potentially being fake may already lead audiences to misjudge real content and normalise the phenomenon’s occurrence (Egelhofer et al. 2020). Even claiming that a real video is a deepfake can be an effective strategy to misuse visuals in support of a false claim. This highlights once again the importance of taking human actions and logic into account more strongly instead of overstating technology’s agency. We suggest that journalistic coverage as well as scientific research should give this aspect a stronger focus in the future, thus generating a fuller picture of the societal consequences deepfakes may have, especially as the term finds its way into everyday language (Ternovski, Kalla, and Aronow 2022).

Our research does not come without limitations. First, even though we interviewed fact-checkers from South America and Asia, most of the interviewees came from Germany, the UK, and the US. While by and large all experts agreed on the main themes regarding deepfakes and decontextualization, the problem with fake audio messages came up in a Venezuelan context, where WhatsApp is a popular medium. The study at hand can thus only give an indication of the sorts of falsehoods fact-checkers are dealing with internationally. More content analyses on the constitution and prevalence of the different forms of visual mis- and disinformation are urgently needed, especially outside of the global north (Ajder and Glick 2021). In addition, this study only consulted fact-checkers, who are an essential part of securing accuracy in public discourse and are specialised in dealing with mediated falsehoods. However, there are more viewpoints to consider. For instance, future research may consult political stakeholders and legislators to get an understanding of the legal issues deepfakes cause, such as the violation of personal data protection. Moreover, we did not interview any in-house fact-checkers working for social media platforms directly. Future research may take into account the role of platforms when it comes to regulating visual disinformation in particular, to generate a fuller picture of the initiatives currently taken. The interviews were conducted prior to Russia’s war against Ukraine; but videos and other images taken out of context are precisely what disinformation experts are now observing in this context. Especially the platform TikTok – which allows for extremely easy visual edits – has turned into a fertile ground for decontextualized imagery posted there by Russian news organisations (O’Connor 2022). We encourage future studies to observe this case closely, as it provides a suitable starting point to observe the prevalence and effects of visual disinformation in the wild – something our study is only able to do to a limited degree.

Conclusion

Despite the above-mentioned limitations, this study has important theoretical and practical implications. It uniquely presents ANT as a suitable framework to study the fact-checking profession, which is diversifying on both the actor and actant-level. This generates a more profound and updated understanding of who practises fact-checking today, and how the network is impacted by technology. Moreover, this study offers a realistic assessment of the disruptions – and limits thereof – deepfakes cause in today’s information production cycle. Accordingly, it sheds light on overlooked issues such as deepfake pornography and decontextualized images, which should be moved to the centre of academic and journalistic discourse. In addition, it gives practical guidelines for journalists and news audiences on how to verify visual mis- and disinformation themselves. Most importantly, it provides a broader conceptualisation of deepfakes as a societal phenomenon whose impact, though serious, should not be a source of panic.

Supplemental material

Appendix_interview_guide.docx


Acknowledgements

Many thanks to Jana Laura Egelhofer for her valuable feedback on this article. We would also like to sincerely thank the interview partners who shared such compelling and important insights into their work.

Disclosure Statement

No potential conflict of interest was reported by the author(s).

Notes

1 In this paper, we use fact-checking as an umbrella term and verification and debunking whenever it fits the definitions we provide in the text.

References

  • Ahva, Laura. 2017. “Practice Theory for Journalism Studies.” Journalism Studies 18 (12): 1523–1541.
  • Ajder, Henry, and Joshua Glick. 2021. JUST JOKING! Deepfakes, Satire and the Politics of Synthetic Media. Report, WITNESS and Co-Creation Studio at MIT Documentary Lab.
  • Amazeen, Michelle A. 2019. “Practitioner Perceptions: Critical Junctures and the Global Emergence and Challenges of Fact-Checking.” International Communication Gazette 81 (6–8): 541–561.
  • Amnesty International YouTube DataViewer. 2017. Amnesty International. https://citizenevidence.amnestyusa.org/
  • Bardoel, Jo, and Mark Deuze. 2001. “‘Network Journalism’: Converging Competencies of Old and New Media Professionals.” Australian Journalism Review 23 (2): 91–103.
  • Brandom, Russell. 2019, March 5. Deepfake propaganda is not a real problem - We’ve spent the last year wringing our hands about a crisis that doesn’t exist. The Verge. https://www.theverge.com/2019/3/5/18251736/deepfake-propaganda-misinformation-troll-video-hoax
  • Brennen, J. Scott, Felix M. Simon, and Rasmus Kleis Nielsen. 2021. “Beyond (Mis)Representation: Visuals in COVID-19 Misinformation.” International Journal of Press/Politics 26 (1): 277–299.
  • Chesney, Robert, and Danielle Keats Citron. 2019. “Deep Fakes: A Looming Challenge for Privacy, Democracy, and National Security.” California Law Review 107 (6): 1753–1820.
  • Ciampaglia, Giovanni Luca. 2018. “Fighting Fake News: A Role for Computational Social Science in the Fight against Digital Misinformation.” Journal of Computational Social Science 1 (1): 147–153.
  • Dan, Viorela, Britt Paris, Joan Donovan, Michael Hameleers, Jon Roozenbeek, Sander van der Linden, and Christian von Sikorski. 2021. “Visual Mis- and Disinformation, Social Media, and Democracy.” Journalism & Mass Communication Quarterly 98 (3): 641–664.
  • Diakopoulos, Nicholas. 2020. “Computational News Discovery: Towards Design Considerations for Editorial Orientation Algorithms in Journalism.” Digital Journalism 8 (7): 945–967.
  • Dobber, Tom, Nadia Metoui, Damian Trilling, Natali Helberger, and Claes de Vreese. 2021. “Do (Microtargeted) Deepfakes Have Real Effects on Political Attitudes?” International Journal of Press/Politics 26 (1): 69–91.
  • Egelhofer, Jana Laura, Loes Aaldering, Jakob Moritz Eberl, Sebastian Galyga, and Sophie Lecheler. 2020. “From Novelty to Normalization? How Journalists Use the Term “Fake News” in Their Reporting.” Journalism Studies 21 (10): 1323–1343.
  • Gosse, Chandell, and Jacquelyn Burkell. 2020. “Politics and Porn: How News Media Characterizes Problems Presented by Deepfakes.” Critical Studies in Media Communication 37 (5): 497–511.
  • Graves, Lucas. 2017. “Anatomy of a Fact Check: Objective Practice and the Contested Epistemology of Fact Checking.” Communication, Culture & Critique 10 (3): 518–537.
  • Graves, Lucas, and Alexios Mantzarlis. 2020. “Amid Political Spin and Online Misinformation, Fact Checking Adapts.” The Political Quarterly 91 (3): 585–591.
  • Gregory, Sam. 2021. “Deepfakes, Misinformation and Disinformation and Authenticity Infrastructure Responses: Impacts on Frontline Witnessing, Distant Witnessing, and Civic Journalism.” Journalism 23 (3): 708–729.
  • Gregory, Sam, and Eric French. 2019. How do we work together to detect AI-manipulated media? https://lab.witness.org/projects/osint-digital-forensics/
  • Hameleers, Michael, Toni G. L. A. van der Meer, and Tom Dobber. 2022. “You Won’t Believe What They Just Said! the Effects of Political Deepfakes Embedded as Vox Populi on Social Media.” Social Media + Society 8 (3): 1–12.
  • Heinrich, Ansgard. 2011. Network Journalism: Journalistic Practice in Interactive Spheres. New York and London: Routledge.
  • Himma-Kadakas, Marju, and Indrek Ojamets. 2022. “Debunking False Information: Investigating Journalists’ Fact-Checking Skills.” Digital Journalism 10 (5): 1–22.
  • Kalpokas, Ignas, and Julija Kalpokiene. 2022. “On Alarmism: Between Infodemic and Epistemic Anarchy.” In I. Kalpokas & J. Kalpokiene (Eds.), Deepfakes: A Realistic Assessment of Potentials, Risks, and Policy Regulation (pp. 41–53). Cham, Switzerland: Springer.
  • Kuckartz, Udo. 2016. “Die Inhaltlich Strukturierende Qualitative Inhaltsanalyse.” In U. Kuckartz (Ed.), Qualitative Inhaltsanalyse. Methoden, Praxis, Computerunterstützung (3rd ed., pp. 97–117). Weinheim and Basel: Beltz Juventa.
  • Latour, Bruno. 1990. “On Actor-Network Theory. A Few Clarifications plus More than a Few Complications.” Philosophia 25 (3): 47–64.
  • Latour, Bruno. 2005. Reassembling the Social: An Introduction to Actor-Network-Theory (B. Latour, Ed.; 1st ed.). Oxford: Oxford University Press Inc.
  • Law, John. 1992. “Notes on the Theory of the Actor-Network: Ordering, Strategy, and Heterogeneity.” Systems Practice 5 (4): 379–393.
  • Lecheler, Sophie, and Jana Laura Egelhofer. 2022. “Disinformation, Misinformation, and Fake News: Understanding the Supply Side.” In J. Strömbäck, Å. Wikforss, K. Glüer, T. Lindholm, & H. Oscarsson (Eds.), Knowledge Resistance in High-Choice Information Environments. London: Routledge.
  • Lee, Jiyoung, and Soo Yun Shin. 2021. “Something That They Never Said: Multimodal Disinformation and Source Vividness in Understanding the Power of AI-Enabled Deepfake News.” Media Psychology 25 (4): 1–16.
  • Leibowicz, Claire R., Sean McGregor, and Aviv Ovadya. 2021. “The Deepfake Detection Dilemma.” Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society, 736–744.
  • López-Marcos, Casandra, and Pilar Vicente-Fernández. 2021. “Fact Checkers Facing Fake News and Disinformation in the Digital Age: A Comparative Analysis between Spain and United Kingdom.” Publications 9 (3): 36.
  • Lowrey, Wilson. 2017. “The Emergence and Development of News Fact-Checking Sites: Institutional Logics and Population Ecology.” Journalism Studies 18 (3): 376–394.
  • Luengo, María, and David García-Marín. 2020. “The Performance of Truth: Politicians, Fact-Checking Journalism, and the Struggle to Tackle COVID-19 Misinformation.” American Journal of Cultural Sociology 8 (3): 405–427.
  • Maares, Phoebe, and Folker Hanusch. 2022. “Interpretations of the Journalistic Field: A Systematic Analysis of How Journalism Scholarship Appropriates Bourdieusian Thought.” Journalism 23 (4): 736–754.
  • Mantzarlis, Alexios [@Mantzarlis]. 2017, March 15. “What is the difference between fact-checking and verification? I made this horrible thing that perhaps clarifies a question I get a lot.” [Image attached] [Tweet]. Twitter. https://twitter.com/mantzarlis/status/842028036325298176
  • Marinova, Zlatina, Jochen Spangenberg, Denis Teyssou, Symeon Papadopoulos, Nikos Sarris, Alexandre Alaphilippe, and Kalina Bontcheva. 2020. “WeVerify: Wider and Enhanced Verification for You Project Overview and Tools.” 2020 IEEE International Conference on Multimedia & Expo Workshops (ICMEW), 1–4.
  • Mays, Nicholas, and Catherine Pope. 1995. “Qualitative Research: Rigour and Qualitative Research.” BMJ 311 (6997): 109–112.
  • Murphy, Gillian, and Emma Flynn. 2022. “Deepfake False Memories.” Memory 30 (4): 480–492.
  • Nakov, Preslav, David Corney, Maram Hasanain, Firoj Alam, Tamer Elsayed, Alberto Barrón-Cedeño, Paolo Papotti, Shaden Shaar, and Giovanni Da San Martino. 2021. Automated fact-checking for assisting human fact-checkers. http://arxiv.org/abs/2103.07769
  • O’Connor, Ciarán. 2022. TikTok is at the forefront of Russia’s propaganda war. https://www.isdglobal.org/isd-in-the-news/ciaran-oconnor-tiktok-is-at-the-forefront-of-russias-propaganda-war/
  • Örnebring, Henrik. 2010. “Technology and Journalism-as-Labour: Historical Perspectives.” Journalism 11 (1): 57–74.
  • Paris, Britt, and Joan Donovan. 2019. Deepfakes and cheap fakes: The manipulation of audio and visual evidence. Data & Society.
  • Paterson, Thomas, and Lauren Hanley. 2020. “Political Warfare in the Digital Age: Cyber Subversion, Information Operations and ‘Deep Fakes’.” Australian Journal of International Affairs 74 (4): 439–454.
  • Plesner, Ursula. 2009. “An Actor-Network Perspective on Changing Work Practices.” Journalism 10 (5): 604–626.
  • Primo, Alex, and Gabriela Zago. 2015. “Who and What Do Journalism?: an Actor-Network Perspective.” Digital Journalism 3 (1): 38–52.
  • Qian, Sijia, Cuihua Shen, and Jingwen Zhang. 2023. “Fighting Cheapfakes: Using a Digital Media Literacy Intervention to Motivate Reverse Search of out-of-Context Visual Misinformation.” Journal of Computer-Mediated Communication 28 (1): 1–12.
  • Rini, Regina. 2019, June 10. Deepfakes are coming. We can no longer believe what we see. It will soon be as easy to produce convincing fake video as it is to lie. We need to be prepared. The New York Times. https://www.nytimes.com/2019/06/10/opinion/deepfake-pelosi-video.html
  • Ryfe, David. 2021. “Actor-Network Theory and Digital Journalism.” Digital Journalism 10 (2): 267–283.
  • Shin, Soo Yun, and Jiyoung Lee. 2022. “The Effect of Deepfake Video on News Credibility and Corrective Influence of Cost-Based Knowledge about Deepfakes.” Digital Journalism 10 (3): 412–432.
  • Singer, Jane B. 2018. “Fact-Checkers as Entrepreneurs: Scalability and Sustainability for a New Form of Watchdog Journalism.” Journalism Practice 12 (8): 1070–1080.
  • Singer, Jane B. 2021. “Border Patrol: The Rise and Role of Fact-Checkers and Their Challenge to Journalists’ Normative Boundaries.” Journalism 22 (8): 1929–1946.
  • Sohrawardi, Saniat Javid, Sovantharith Seng, Akash Chintha, Bao Thai, Andrea Hickerson, Raymond Ptucha, and Matthew Wright. 2020. DeFaking deepfakes: Understanding journalists’ needs for deepfake detection. www.figma.com
  • Somerville, Ian. 1997. “Actor-Network Theory: A Useful Paradigm for the Analysis of the UK Cable/on-Line Sociotechnical Ensemble?” AMCIS 1997 Proceedings 37. https://aisel.aisnet.org/amcis1997/37/.
  • Ternovski, John, Joshua Kalla, and Peter Aronow. 2022. “Negative Consequences of Informing Voters about Deepfakes: Evidence from Two Survey Experiments.” Journal of Online Trust and Safety 1 (2): 1–16.
  • Tsfati, Yariv, Hajo G. Boomgaarden, Jesper Strömbäck, Rens Vliegenthart, Alyt Damstra, and Elina Lindgren. 2020. “Causes and Consequences of Mainstream Media Dissemination of Fake News: Literature Review and Synthesis.” Annals of the International Communication Association 44 (2): 157–173.
  • Vaccari, Cristian, and Andrew Chadwick. 2020. “Deepfakes and Disinformation: Exploring the Impact of Synthetic Political Video on Deception, Uncertainty, and Trust in News.” Social Media + Society 6 (1): 1–13.
  • van Huijstee, Mariëtte, Pieter van Boheemen, Djurre Das, Linda Nierling, Jutta Jahnel, Murat Karaboga, and Martin Fatun. 2021. Tackling deepfakes in European policy.
  • Vos, Tim P. 2019. “Field Theory and Journalistic Capital.” In The International Encyclopedia of Journalism Studies (pp. 1–5). New Jersey: Wiley.
  • Wahl-Jorgensen, Karin, and Matt Carlson. 2021. “Conjecturing Fearful Futures: Journalistic Discourses on Deepfakes.” Journalism Practice 15 (6): 803–820.
  • Wakefield, Jane. 2022, March 18. Deepfake presidents used in Russia-Ukraine war. BBC News. https://www.bbc.com/news/technology-60780142
  • Wardle, Claire, and Will Moy. 2017. Is that actually true? Combining fact-checking and verification for #GE17. https://firstdraftnews.org/articles/fullfact-ge17/
  • Weikmann, Teresa, and Sophie Lecheler. 2022. “Visual Disinformation in a Digital Age: A Literature Synthesis and Research Agenda.” New Media & Society. Advance online publication. doi: 10.1177/14614448221141648.
  • Westlund, Oscar, and Seth C. Lewis. 2017. “Reconsidering News Production: How Understanding the Interplay of Actors, Actants, and Audiences Can Improve Journalism Education.” In R. S. Goodman & E. Steyn (Eds.), Global Journalism Education: Challenges and Innovations (pp. 409–447). Knight Center for Journalism in the Americas, University of Texas at Austin.